Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 528–535, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Coreference Resolution Using Semantic Relatedness Information from Automatically Discovered Patterns Xiaofeng Yang Jian Su Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore, 119613 {xiaofengy,sujian}@i2r.a-star.edu.sg Abstract Semantic relatedness is a very important factor for the coreference resolution task. To obtain this semantic information, corpusbased approaches commonly leverage patterns that can express a specific semantic relation. The patterns, however, are designed manually and thus are not necessarily the most effective ones in terms of accuracy and breadth. To deal with this problem, in this paper we propose an approach that can automatically find the effective patterns for coreference resolution. We explore how to automatically discover and evaluate patterns, and how to exploit the patterns to obtain the semantic relatedness information. The evaluation on ACE data set shows that the pattern based semantic information is helpful for coreference resolution. 1 Introduction Semantic relatedness is a very important factor for coreference resolution, as noun phrases used to refer to the same entity should have a certain semantic relation. To obtain this semantic information, previous work on reference resolution usually leverages a semantic lexicon like WordNet (Vieira and Poesio, 2000; Harabagiu et al., 2001; Soon et al., 2001; Ng and Cardie, 2002). However, the drawback of WordNet is that many expressions (especially for proper names), word senses and semantic relations are not available from the database (Vieira and Poesio, 2000). In recent years, increasing interest has been seen in mining semantic relations from large text corpora. One common solution is to utilize a pattern that can represent a specific semantic relation (e.g., “X such as Y” for is-a relation, and “X and other Y” for other-relation). Instantiated with two given noun phrases, the pattern is searched in a large corpus and the occurrence number is used as a measure of their semantic relatedness (Markert et al., 2003; Modjeska et al., 2003; Poesio et al., 2004). However, in the previous pattern based approaches, the selection of the patterns to represent a specific semantic relation is done in an ad hoc way, usually by linguistic intuition. The manually selected patterns, nevertheless, are not necessarily the most effective ones for coreference resolution from the following two concerns: • Accuracy. Can the patterns (e.g., “X such as Y”) find as many NP pairs of the specific semantic relation (e.g. is-a) as possible, with a high precision? • Breadth. Can the patterns cover a wide variety of semantic relations, not just is-a, by which coreference relationship is realized? For example, in some annotation schemes like ACE, “Beijing:China” are coreferential as the capital and the country could be used to represent the government. The pattern for the common “isa” relation will fail to identify the NP pairs of such a “capital-country” relation. To deal with this problem, in this paper we propose an approach which can automatically discover effective patterns to represent the semantic relations 528 for coreference resolution. We explore two issues in our study: (1) How to automatically acquire and evaluate the patterns? We utilize a set of coreferential NP pairs as seeds. 
For each seed pair, we search a large corpus for the texts where the two noun phrases cooccur, and collect the surrounding words as the surface patterns. We evaluate a pattern based on its commonality or association with the positive seed pairs. (2) How to mine the patterns to obtain the semantic relatedness information for coreference resolution? We present two strategies to exploit the patterns: choosing the top best patterns as a set of pattern features, or computing the reliability of semantic relatedness as a single feature. In either strategy, the obtained features are applied to do coreference resolution in a supervised-learning way. To our knowledge, our work is the first effort that systematically explores these issues in the coreference resolution task. We evaluate our approach on ACE data set. The experimental results show that the pattern based semantic relatedness information is helpful for the coreference resolution. The remainder of the paper is organized as follows. Section 2 gives some related work. Section 3 introduces the framework for coreference resolution. Section 4 presents the model to obtain the patternbased semantic relatedness information. Section 5 discusses the experimental results. Finally, Section 6 summarizes the conclusions. 2 Related Work Earlier work on coreference resolution commonly relies on semantic lexicons for semantic relatedness knowledge. In the system by Vieira and Poesio (2000), for example, WordNet is consulted to obtain the synonymy, hypernymy and meronymy relations for resolving the definite anaphora. In (Harabagiu et al., 2001), the path patterns in WordNet are utilized to compute the semantic consistency between NPs. Recently, Ponzetto and Strube (2006) suggest to mine semantic relatedness from Wikipedia, which can deal with the data sparseness problem suffered by using WordNet. Instead of leveraging existing lexicons, many researchers have investigated corpus-based approaches to mine semantic relations. Garera and Yarowsky (2006) propose an unsupervised model which extracts hypernym relation for resloving definite NPs. Their model assumes that a definite NP and its hypernym words usually co-occur in texts. Thus, for a definite-NP anaphor, a preceding NP that has a high co-occurrence statistics in a large corpus is preferred for the antecedent. Bean and Riloff (2004) present a system called BABAR that uses contextual role knowledge to do coreference resolution. They apply an IE component to unannotated texts to generate a set of extraction caseframes. Each caseframe represents a linguistic expression and a syntactic position, e.g. “murder of <NP>”, “killed <patient>”. From the caseframes, they derive different types of contextual role knowledge for resolution, for example, whether an anaphor and an antecedent candidate can be filled into co-occurring caseframes, or whether they are substitutable for each other in their caseframes. Different from their system, our approach aims to find surface patterns that can directly indicate the coreference relation between two NPs. Hearst (1998) presents a method to automate the discovery of WordNet relations, by searching for the corresponding patterns in large text corpora. She explores several patterns for the hyponymy relation, including “X such as Y” “X and/or other Y”, “X including / especially Y” and so on. The use of Hearst’s style patterns can be seen for the reference resolution task. Modjeska et al. (2003) explore the use of the Web to do the other-anaphora resolution. 
In their approach, a pattern “X and other Y” is used. Given an anaphor and a candidate antecedent, the pattern is instantiated with the two NPs and forms a query. The query is submitted to the Google searching engine, and the returned hit number is utilized to compute the semantic relatedness between the two NPs. In their work, the semantic information is used as a feature for the learner. Markert et al. (2003) and Poesio et al. (2004) adopt a similar strategy for the bridging anaphora resolution. In (Hearst, 1998), the author also proposes to discover new patterns instead of using the manually designed ones. She employs a bootstrapping algorithm to learn new patterns from the word pairs with a known relation. Based on Hearst’s work, Pantel and Pennacchiotti (2006) further give a method 529 which measures the reliability of the patterns based on the strength of association between patterns and instances, employing the pointwise mutual information (PMI). 3 Framework of Coreference Resolution Our coreference resolution system adopts the common learning-based framework as employed by Soon et al. (2001) and Ng and Cardie (2002). In the learning framework, a training or testing instance has the form of i{NPi, NPj}, in which NPj is a possible anaphor and NPi is one of its antecedent candidates. An instance is associated with a vector of features, which is used to describe the properties of the two noun phrases as well as their relationships. In our baseline system, we adopt the common features for coreference resolution such as lexical property, distance, string-matching, namealias, apposition, grammatical role, number/gender agreement and so on. The same feature set is described in (Ng and Cardie, 2002) for reference. During training, for each encountered anaphor NPj, one single positive training instance is created for its closest antecedent. And a group of negative training instances is created for every intervening noun phrases between NPj and the antecedent. Based on the training instances, a binary classifier can be generated using any discriminative learning algorithm, like C5 in our study. For resolution, an input document is processed from the first NP to the last. For each encountered NPj, a test instance is formed for each antecedent candidate, NPi1. This instance is presented to the classifier to determine the coreference relationship. NPj will be resolved to the candidate that is classified as positive (if any) and has the highest confidence value. In our study, we augment the common framework by incorporating non-anaphors into training. We focus on the non-anaphors that the original classifier fails to identify. Specifically, we apply the learned classifier to all the non-anaphors in the training documents. For each non-anaphor that is classified as positive, a negative instance is created by pairing the non-anaphor and its false antecedent. These neg1For resolution of pronouns, only the preceding NPs in current and previous two sentences are considered as antecedent candidates. For resolution of non-pronouns, all the preceding non-pronouns are considered. ative instances are added into the original training instance set for learning, which will generate a classifier with the capability of not only antecedent identification, but also non-anaphorically identification. The new classier is applied to the testing document to do coreference resolution as usual. 
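To make the framework concrete, here is a minimal sketch of the instance-creation and best-first resolution scheme described above. This is our own illustration, not the authors' code: the helper names (featurize, candidates_for) and the classifier interface returning a (label, confidence) pair are assumptions.

```python
def make_training_instances(nps, antecedent_of, featurize):
    """nps: document NPs in textual order; antecedent_of[j] = index of the
    closest annotated antecedent of nps[j] (absent if non-anaphoric)."""
    instances = []
    for j in range(len(nps)):
        i = antecedent_of.get(j)
        if i is None:
            continue
        # one positive instance: the anaphor paired with its closest antecedent
        instances.append((featurize(nps[i], nps[j]), 1))
        # negative instances: the anaphor paired with every intervening NP
        for k in range(i + 1, j):
            instances.append((featurize(nps[k], nps[j]), 0))
    return instances

def resolve(nps, classify, featurize, candidates_for):
    """Resolve each NP to the candidate classified as positive with the
    highest confidence; leave it unresolved if no candidate is positive."""
    links = {}
    for j in range(len(nps)):
        best, best_conf = None, 0.0
        for i in candidates_for(j):   # e.g. preceding NPs, sentence window for pronouns
            label, conf = classify(featurize(nps[i], nps[j]))
            if label == 1 and conf > best_conf:
                best, best_conf = i, conf
        if best is not None:
            links[j] = best
    return links
```

The non-anaphor augmentation step described above would then rerun the trained classifier over non-anaphors in the training documents and add any wrongly linked pairs as extra negative instances before retraining.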
4 Pattern Based Semantic Relatedness

4.1 Acquiring the Patterns

To derive patterns that indicate a specific semantic relation, a set of seed NP pairs that have the relation of interest is needed. As described in the previous section, we have a set of training instances formed by NP pairs with known coreference relationships. We can simply use this set of NP pairs as the seeds. That is, an instance i{NPi, NPj} will become a seed pair (Ei:Ej), in which NPi corresponds to Ei and NPj corresponds to Ej. In creating the seed, for a common noun only the head word is retained, while for a proper name the whole string is kept. For example, instance i{"Bill Clinton", "the former president"} will be converted to the NP pair ("Bill Clinton":"president"). We create a seed pair for every training instance i{NPi, NPj}, except when (1) NPi or NPj is a pronoun, or (2) NPi and NPj have the same head word. We denote by S+ and S- the sets of seed pairs derived from the positive and the negative training instances, respectively. Note that a seed pair may belong to both S+ and S- at the same time.

For each seed NP pair (Ei:Ej), we search a large corpus for the strings that match the regular expression "Ei * * * Ej" or "Ej * * * Ei", where * is a wildcard for any word or symbol. The regular expression is defined so that all the co-occurrences of Ei and Ej with at most three words (or symbols) in between are retrieved. For each retrieved string, we extract a surface pattern by replacing the expression Ei with a mark <#t1#> and Ej with <#t2#>. If the string is followed by a symbol, the symbol is also included in the pattern. This is to create patterns like "X * * * Y [, . ?]", in which Y is, with high probability, the head word rather than a modifier of another noun phrase.

As an example, consider the pair ("Bill Clinton":"president"). Suppose that two sentences in a corpus can be matched by the regular expressions:

(S1) "Bill Clinton is elected President of the United States."
(S2) "The US President, Mr Bill Clinton, today advised India to move towards nuclear nonproliferation and begin a dialogue with Pakistan to ...".

The patterns to be extracted for (S1) and (S2), respectively, are

P1: <#t1#> is elected <#t2#>
P2: <#t2#> , Mr <#t1#> ,

We record the number of strings matched by a pattern p instantiated with (Ei:Ej), denoted |(Ei, p, Ej)|, for later use. For each seed pair, we generate a list of surface patterns in the above way. We collect all the patterns derived from the positive seed pairs as a set of reference patterns, which will be scored and used to evaluate the semantic relatedness of any new NP pair.

4.2 Scoring the Patterns

4.2.1 Frequency

One possible scoring scheme is to evaluate a pattern based on its commonality to the positive seed pairs. The intuition here is that the more often a pattern is seen for the positive seed pairs, the more indicative the pattern is of positive coreferential NP pairs. Based on this idea, we score a pattern by counting the number of positive seed pairs whose pattern list contains the pattern. Formally, supposing the pattern list associated with a seed pair s is PList(s), the frequency score of a pattern p is defined as

$Frequency(p) = |\{\, s \mid s \in S^{+},\ p \in PList(s) \,\}|$   (1)
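As a concrete illustration of the acquisition and frequency-scoring steps above, the following sketch approximates the "Ei * * * Ej" search and Eq. (1). It is our own simplification, not the authors' implementation: corpus_sentences and pattern_lists_pos are assumed inputs, and the gap handling is only an approximation of the wildcard search.

```python
import re
from collections import defaultdict

def extract_patterns(e_i, e_j, corpus_sentences):
    """Collect surface patterns for a seed pair (Ei:Ej): co-occurrences with
    at most three tokens in between, keeping a trailing symbol if present."""
    counts = defaultdict(int)           # pattern string -> |(Ei, p, Ej)|
    gap = r"(?:\S+\s+){0,3}"            # up to three intervening words/symbols
    for first, second, m1, m2 in [(e_i, e_j, "<#t1#>", "<#t2#>"),
                                  (e_j, e_i, "<#t2#>", "<#t1#>")]:
        regex = re.compile(re.escape(first) + r"\s+" + gap +
                           re.escape(second) + r"(?:\s*[,.?!;:])?")
        for sentence in corpus_sentences:
            for match in regex.finditer(sentence):
                pattern = match.group(0).replace(first, m1).replace(second, m2)
                counts[pattern] += 1
    return counts

def frequency_scores(pattern_lists_pos):
    """Eq. (1): number of positive seed pairs whose pattern list contains p."""
    freq = defaultdict(int)
    for plist in pattern_lists_pos.values():   # seed pair -> patterns observed
        for p in set(plist):
            freq[p] += 1
    return freq
```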
4.2.2 Reliability

Another possible way to evaluate a pattern is based on its reliability, i.e., the degree to which the pattern is associated with positive coreferential NP pairs. In our study, we use pointwise mutual information (Cover and Thomas, 1991) to measure association strength, which has proved effective in the task of semantic relation identification (Pantel and Pennacchiotti, 2006). Under pointwise mutual information (PMI), the strength of association between two events x and y is defined as follows:

$pmi(x, y) = \log \frac{P(x, y)}{P(x)P(y)}$   (2)

Thus the association between a pattern p and a positive seed pair s:(Ei:Ej) is:

$pmi(p, (E_i{:}E_j)) = \log \frac{ |(E_i, p, E_j)| \,/\, |(*, *, *)| }{ \big(|(E_i, *, E_j)| \,/\, |(*, *, *)|\big) \cdot \big(|(*, p, *)| \,/\, |(*, *, *)|\big) }$   (3)

where |(Ei, p, Ej)| is the count of strings matched by pattern p instantiated with Ei and Ej. The asterisk * represents a wildcard, that is:

$|(E_i, *, E_j)| = \sum_{p \in PList(E_i:E_j)} |(E_i, p, E_j)|$   (4)

$|(*, p, *)| = \sum_{(E_i:E_j) \in S^{+} \cup S^{-}} |(E_i, p, E_j)|$   (5)

$|(*, *, *)| = \sum_{(E_i:E_j) \in S^{+} \cup S^{-},\; p \in PList(E_i:E_j)} |(E_i, p, E_j)|$   (6)

The reliability of a pattern is the average strength of association across the positive seed pairs:

$r(p) = \frac{ \sum_{s \in S^{+}} \frac{pmi(p, s)}{max\,pmi} }{ |S^{+}| }$   (7)

Here max pmi, the maximum PMI between all patterns and all positive seed pairs, is used for normalization.

4.3 Exploiting the Patterns

4.3.1 Pattern Features

One strategy is to directly use the reference patterns as a set of features for classifier learning and testing. To select the most effective patterns for the learner, we rank the patterns according to their scores and then choose the top patterns (the first 100 in our study) as the features.

As mentioned, the frequency score is based on the commonality of a pattern to the positive seed pairs. However, if a pattern also occurs frequently for the negative seed pairs, it should not be deemed a good feature, as it may lead to many false positive pairs during real resolution. To take this factor into account, we filter the patterns based on their accuracy, which is defined as follows:

$Accuracy(p) = \frac{ |\{\, s \mid s \in S^{+},\ p \in PList(s) \,\}| }{ |\{\, s \mid s \in S^{+} \cup S^{-},\ p \in PList(s) \,\}| }$   (8)

A pattern with an accuracy below a threshold of 0.5 is eliminated from the reference pattern set. The remaining patterns are sorted as before, from which the top 100 patterns are selected as features.

                                          NWire             NPaper            BNews
                                       R     P     F      R     P     F      R     P     F
Normal Features                       54.5  80.3  64.9   56.6  76.0  64.9   52.7  75.3  62.0
+ "X such as Y"        proper names   55.1  79.0  64.9   56.8  76.1  65.0   52.6  75.1  61.9
                       all types      55.1  78.3  64.7   56.8  74.7  64.4   53.0  74.4  61.9
+ "X and other Y"      proper names   54.7  79.9  64.9   56.4  75.9  64.7   52.6  74.9  61.8
                       all types      54.8  79.8  65.0   56.4  75.9  64.7   52.8  73.3  61.4
+ pattern features     proper names   58.7  75.8  66.2   57.5  73.9  64.7   54.0  71.1  61.4
  (frequency)          all types      59.7  67.3  63.3   57.4  62.4  59.8   55.9  57.7  56.8
+ pattern features     proper names   57.8  79.1  66.8   56.9  75.1  64.7   54.1  72.4  61.9
  (filtered frequency) all types      58.1  77.4  66.4   56.8  71.2  63.2   55.0  68.1  60.9
+ pattern features     proper names   58.8  76.9  66.6   58.1  73.8  65.0   54.3  72.0  61.9
  (PMI reliability)    all types      59.6  70.4  64.6   58.7  61.6  60.1   56.0  58.8  57.4
+ single reliability   proper names   57.4  80.8  67.1   56.6  76.2  65.0   54.0  74.7  62.7
  feature              all types      57.7  76.4  65.7   56.7  75.9  64.9   55.1  69.5  61.5

Table 1: The results of different systems for coreference resolution

Each selected pattern p is used as a single feature, PFp. For an instance i{NPi, NPj}, a list of patterns is generated for (Ei:Ej) in the same way as described in Section 4.1. The value of PFp for the instance is simply |(Ei, p, Ej)|. The set of pattern features is used together with the other normal features to do the learning and testing.
Thus, the actual importance of a pattern in coreference resolution is automatically determined in a supervised learning way. 4.3.2 Semantic Relatedness Feature Another strategy is to use only one semantic feature which is able to reflect the reliability that a NP pair is related in semantics. Intuitively, a NP pair with strong semantic relatedness should be highly associated with as many reliable patterns as possible. Based on this idea, we define the semantic relatedness feature (SRel) as follows: SRel(i{NPi, NPj}) = 1000 ∗ P p∈P List(Ei:Ej) pmi(p, (Ei : Ej)) ∗r(p) (9) where pmi(p, (Ei:Ej)) is the pointwise mutual information between pattern p and a NP pair (Ei:Ej), as defined in Eq. 3. r(p) is the reliability score of p (Eq. 7). As a relatedness value is always below 1, we multiple it by 1000 so that the feature value will be of integer type with a range from 0 to 1000. Note that among PList(Ei:Ej), only the reference patterns are involved in the feature computing. 5 Experiments and Discussion 5.1 Experimental setup In our study we did evaluation on the ACE-2 V1.0 corpus (NIST, 2003), which contains two data set, training and devtest, used for training and testing respectively. Each of these sets is further divided by three domains: newswire (NWire), newspaper (NPaper), and broadcast news (BNews). An input raw text was preprocessed automatically by a pipeline of NLP components, including sentence boundary detection, POS-tagging, Text Chunking and Named-Entity Recognition. Two different classifiers were learned respectively for resolving pronouns and non-pronouns. As mentioned, the pattern based semantic information was only applied to the non-pronoun resolution. For evaluation, Vilain et al. (1995)’s scoring algorithm was adopted to compute the recall and precision of the whole coreference resolution. For pattern extraction and feature computing, we used Wikipedia, a web-based free-content encyclopedia, as the text corpus. We collected the English Wikipedia database dump in November 2006 (refer to http://download.wikimedia.org/). After all the hyperlinks and other html tags were removed, the whole pure text contains about 220 Million words. 5.2 Results and Discussion Table 1 lists the performance of different coreference resolution systems. The first line of the table shows the baseline system that uses only the common features proposed in (Ng and Cardie, 2002). From the table, our baseline system can 532 NO Frequency Frequency (Filtered) PMI Reliabilty 1 <#t1> <#t2> <#t2> | | <#t1> | <#t1> : <#t2> 2 <#t2> <#t1> <#t1> ) is a <#t2> <#t2> : <#t1> 3 <#t1> , <#t2> <#t1> ) is an <#t2> <#t1> . the <#t2> 4 <#t2> , <#t1> <#t2> ) is an <#t1> <#t2> ( <#t1> ) 5 <#t1> . <#t2> <#t2> ) is a <#t1> <#t1> ( <#t2> 6 <#t1> and <#t2> <#t1> or the <#t2> <#t1> ( <#t2> ) 7 <#t2> . <#t1> <#t1> ( the <#t2> <#t1> | | <#t2> | 8 <#t1> . the <#t2> <#t1> . during the <#t2> <#t2> | | <#t1> | 9 <#t2> and <#t1> <#t1> | <#t2> <#t2> , the <#t1> 10 <#t1> , the <#t2> <#t1> , an <#t2> <#t1> , the <#t2> 11 <#t2> . the <#t1> <#t1> ) was a <#t2> <#t2> ( <#t1> 12 <#t2> , the <#t1> <#t1> in the <#t2> <#t1> , <#t2> 13 <#t2> <#t1> , <#t1> - <#t2> <#t1> and the <#t2> 14 <#t1> <#t2> , <#t1> ) was an <#t2> <#t1> . <#t2> 15 <#t1> : <#t2> <#t1> , many <#t2> <#t1> ) is a <#t2> 16 <#t1> <#t2> . <#t2> ) was a <#t1> <#t1> during the <#t2> 17 <#t2> <#t1> . <#t1> ( <#t2> . <#t1> <#t2> . 18 <#t1> ( <#t2> ) <#t2> | <#t1> <#t1> ) is an <#t2> 19 <#t1> and the <#t2> <#t1> , not the <#t2> <#t2> in <#t1> . 
20 <#t2> ( <#t1> ) <#t2> , many <#t1> <#t2> , <#t1> . . . . . . . . . . . . Table 2: Top patterns chosen under different scoring schemes achieve a good precision (above 75%-80%) with a recall around 50%-60%. The overall F-measure for NWire, NPaper and BNews is 64.9%, 64.9% and 62.0% respectively. The results are comparable to those reported in (Ng, 2005) which uses similar features and gets an F-measure of about 62% for the same data set. The rest lines of Table 1 are for the systems using the pattern based information. In all the systems, we examine the utility of the semantic information in resolving different types of NP Pairs: (1) NP Pairs containing proper names (i.e., Name:Name or Name:Definites), and (2) NP Pairs of all types. In Table 1 (Line 2-5), we also list the results of incorporating two commonly used patterns, “X(s) such as Y” and “X and other Y(s)”. We can find that neither of the manually designed patterns has significant impact on the resolution performance. For all the domains, the manual patterns just achieve slight improvement in recall (below 0.6%), indicating that coverage of the patterns is not broad enough. 5.2.1 Pattern Features In Section 4.3.1 we propose a strategy that directly uses the patterns as features. Table 2 lists the top patterns that are sorted based on frequency, filtered frequency (by accuracy), and PMI reliability, on the NWire domain for illustration. From the table, evaluated only based on frequency, the top patterns are those that indicate the appositive structure like “X, an/a/the Y”. However, if filtered by accuracy, patterns of such a kind will be removed. Instead, the top patterns with both high frequency and high accuracy are those for the copula structure, like “X is/was/are Y”. Sorted by PMI reliability, patterns for the above two structures can be seen in the top of the list. These results are consistent with the findings in (Cimiano and Staab, 2004) that the appositive and copula structures are indicative to find the is-a relation. Also, the two commonly used patterns “X(s) such as Y” and “X and other Y(s)” were found in the feature lists (not shown in the table). Their importance for coreference resolution will be determined automatically by the learning algorithm. An interesting pattern seen in the lists is “X || Y |”, which represents the cases when Y and X appear in the same of line of a table in Wikipedia. For example, the following text “American || United States | Washington D.C. | . . . ” is found in the table “list of empires”. Thus the pair “American:United States”, which is deemed coreferential in ACE, can be identified by the pattern. The sixth till the eleventh lines of Table 1 list the results of the system with pattern features. From the table, adding the pattern features brings the improvement of the recall against the baseline. Take the system based on filtered frequency as an example. We can observe that the recall increases by up to 3.3% (for NWire). However, we see the precision drops (up to 1.2% for NWire) at the same time. Overall the system achieves an F-measure better than the baseline in NWire (1.9%), while equal (±0.2%) in NPaper and BNews. Among the three ranking schemes, simply using frequency leads to the lowest precision. By contrast, using filtered frequency yields the highest precision with nevertheless the lowest recall. It is reasonable since the low accuracy features prone to false posi533 NameAlias = 1: ... NameAlias = 0: :..Appositive = 1: ... 
Appositive = 0: :..P014 > 0: :...P003 <= 4: 0 (3) : P003 > 4: 1 (25) P014 <= 0: :..P004 > 0:... P004 <= 0: :..P027 > 0: 1 (25/7) P027 <= 0: :..P002 > 0: ... P002 <= 0: :..P005 > 0: 1 (49/22) P005 <= 0: :..String_Match = 1: . String_Match = 0: . // p002: <t1> ) is a <t2> // P003: <t1> ) is an <t2> // P004: <t2> ) is an <t1> // p005: <t2> ) is a <t1> // P014: <t1> ) was an <t2> // p027: <t1> , ( <t2> , Figure 1: The decision tree (NWire domain) for the system using pattern features (filtered frequency) (feature String Match records whether the string of anaphor NP j matches that of a candidate antecedent NP i) tive NP pairs are eliminated, at the price of recall. Using PMI Reliability can achieve the highest recall with a medium level of precision. However, we do not find significant difference in the overall Fmeasure for all these three schemes. This should be due to the fact that the pattern features need to be further chosen by the learning algorithm, and only those patterns deemed effective by the learner will really matter in the real resolution. From the table, the pattern features only work well for NP pairs containing proper names. Applied on all types of NP pairs, the pattern features further boost the recall of the systems, but in the meanwhile degrade the precision significantly. The F-measure of the systems is even worse than that of the baseline. Our error analysis shows that a non-anaphor is often wrongly resolved to a false antecedent once the two NPs happen to satisfy a pattern feature, which affects precision largely (as an evidence, the decrease of precision is less significant when using filtered frequency than using frequency). Still, these results suggest that we just apply the pattern based semantic information in resolving proper names which, in fact, is more compelling as the semantic information of common nouns could be more easily retrieved from WordNet. We also notice that the patterned based semantic information seems more effective in the NWire domain than the other two. Especially for NPaper, the improvement in F-measure is less than 0.1% for all the systems tested. The error analysis indicates it may be because (1) there are less NP pairs in NPaper than in NWire that require the external semantic knowledge for resolution; and (2) For many NP pairs that require the semantic knowledge, no cooccurrence can be found in the Wikipedia corpus. To address this problem, we could resort to the Web which contains a larger volume of texts and thus could lead to more informative patterns. We would like to explore this issue in our future work. In Figure 1, we plot the decision tree learned with the pattern features for non-pronoun resolution (NWire domain, filtered frequency), which visually illustrates which features are useful in the reference determination. We can find the pattern features occur in the top of the decision tree, among the features for name alias, apposition and string-matching that are crucial for coreference resolution as reported in previous work (Soon et al., 2001). Most of the pattern features deemed important by the learner are for the copula structure. 5.2.2 Single Semantic Relatedness Feature Section 4.3.2 presents another strategy to exploit the patterns, which uses a single feature to reflect the semantic relatedness between NP pairs. The last two lines of Table 1 list the results of such a system. Observed from the table, the system with the single semantic relatedness feature beats those with other solutions. 
Compared with the baseline, the system can get improvement in recall (up to 2.9% as in NWire), with a similar or even higher precision. The overall F-measure it produces is 67.1%, 65.0% and 62.7%, better than the baseline in all the domains. Especially in the NWire domain, we can see the significant (t-test, p ≤0.05) improvement of 2.1% in F-measure. When applied on All-Type NP pairs, the degrade of performance is less significant as using pattern features. The resulting performance is better than the baseline or equal. Compared with the systems using the pattern features, it can still achieve a higher precision and F-measure (with a little loss in recall) . There are several reasons why the single semantic relatedness feature (SRel) can perform better than the set of pattern features. Firstly, the feature value of SRel takes into consideration the information of all the patterns, instead of only the selected patterns. Secondly, since the SRel feature is computed based on all the patterns, it reduces the risk of false posi534 NameAlias = 1: ... NameAlias = 0: :..Appositive = 1: ... Appositive = 0: :..SRel > 28: :..SRel > 47: ... : SRel <= 47: ... SRel <= 28: :..String_Match = 1: ... String_Match = 0: ... Figure 2: The decision tree (Nwire) for the system using the single semantic relatedness feature tive when a NP pair happens to satisfy one or several pattern features. Lastly, from the point of view of machine learning, using only one semantic feature, instead of hundreds of pattern features, can avoid overfitting and thus benefit the classifier learning. In Figure 2, we also show the decision tree learned with the semantic relatedness feature. We observe that the decision tree is simpler than that with pattern features as depicted in Figure 1. After feature name-alias and apposite, the classifier checks different ranges of the SRel value and make different resolution decision accordingly. This figure further illustrates the importance of the semantic feature. 6 Conclusions In this paper we present a pattern based approach to coreference resolution. Different from the previous work which utilizes manually designed patterns, our approach can automatically discover the patterns effective for the coreference resolution task. In our study, we explore how to acquire and evaluate patterns, and investigate how to exploit the patterns to mine semantic relatedness information for coreference resolution. The evaluation on ACE data set shows that the patterned based features, when applied on NP pairs containing proper names, can effectively help the performance of coreference resolution in the recall (up to 4.3%) and the overall F-measure (up to 2.1%). The results also indicate that using the single semantic relatedness feature has more advantages than using a set of pattern features. For future work, we intend to investigate our approach in more difficult tasks like the bridging anaphora resolution, in which the semantic relations involved are more complicated. Also, we would like to explore the approach in technical (e.g., biomedical) domains, where jargons are frequently seen and the need for external knowledge is more compelling. Acknowledgements This research is supported by a Specific Targeted Research Project (STREP) of the European Union’s 6th Framework Programme within IST call 4, Bootstrapping Of Ontologies and Terminologies STrategic REsearch Project (BOOTStrep). References D. Bean and E. Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. 
In Proceedings of NAACL, pages 297–304. P. Cimiano and S. Staab. 2004. Learning by googling. SIGKDD Explorations Newsletter, 6(2):24–33. T. Cover and J. Thomas. 1991. Elements of Information Theory. Hohn Wiley & Sons. N. Garera and D. Yarowsky. 2006. Resolving and generating definite anaphora by modeling hypernymy using unlabeled corpora. In Proceedings of CoNLL , pages 37–44. S. Harabagiu, R. Bunescu, and S. Maiorano. 2001. Text knowledge mining for coreference resolution. In Proceedings of NAACL, pages 55–62. M. Hearst. 1998. Automated discovery of wordnet relations. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database and Some of its Applications. MIT Press, Cambridge, MA. K. Markert, M. Nissim, and N. Modjeska. 2003. Using the web for nominal anaphora resolution. In Proceedings of the EACL workshop on Computational Treatment of Anaphora, pages 39–46. N. Modjeska, K. Markert, and M. Nissim. 2003. Using the web in machine learning for other-anaphora resolution. In Proceedings of EMNLP, pages 176–183. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of ACL, pages 104–111, Philadelphia. V. Ng. 2005. Machine learning for coreference resolution: From local classification to global ranking. In Proceedings of ACL, pages 157–164. P. Pantel and M. Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of ACL, pages 113–1200. M. Poesio, R. Mehta, A. Maroudas, and J. Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of ACL, pages 143–150. S. Ponzetto and M. Strube. 2006. Exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. In Proceedings of NAACL, pages 192–199. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. R. Vieira and M. Poesio. 2000. An empirically based system for processing definite descriptions. Computational Linguistics, 27(4):539–592. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), pages 45–52, San Francisco, CA. Morgan Kaufmann Publishers. 535
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 536–543, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Semantic Class Induction and Coreference Resolution Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 [email protected] Abstract This paper examines whether a learningbased coreference resolver can be improved using semantic class knowledge that is automatically acquired from a version of the Penn Treebank in which the noun phrases are labeled with their semantic classes. Experiments on the ACE test data show that a resolver that employs such induced semantic class knowledge yields a statistically significant improvement of 2% in F-measure over one that exploits heuristically computed semantic class knowledge. In addition, the induced knowledge improves the accuracy of common noun resolution by 2-6%. 1 Introduction In the past decade, knowledge-lean approaches have significantly influenced research in noun phrase (NP) coreference resolution — the problem of determining which NPs refer to the same real-world entity in a document. In knowledge-lean approaches, coreference resolvers employ only morpho-syntactic cues as knowledge sources in the resolution process (e.g., Mitkov (1998), Tetreault (2001)). While these approaches have been reasonably successful (see Mitkov (2002)), Kehler et al. (2004) speculate that deeper linguistic knowledge needs to be made available to resolvers in order to reach the next level of performance. In fact, semantics plays a crucially important role in the resolution of common NPs, allowing us to identify the coreference relation between two lexically dissimilar common nouns (e.g., talks and negotiations) and to eliminate George W. Bush from the list of candidate antecedents of the city, for instance. As a result, researchers have re-adopted the once-popular knowledge-rich approach, investigating a variety of semantic knowledge sources for common noun resolution, such as the semantic relations between two NPs (e.g., Ji et al. (2005)), their semantic similarity as computed using WordNet (e.g., Poesio et al. (2004)) or Wikipedia (Ponzetto and Strube, 2006), and the contextual role played by an NP (see Bean and Riloff (2004)). Another type of semantic knowledge that has been employed by coreference resolvers is the semantic class (SC) of an NP, which can be used to disallow coreference between semantically incompatible NPs. However, learning-based resolvers have not been able to benefit from having an SC agreement feature, presumably because the method used to compute the SC of an NP is too simplistic: while the SC of a proper name is computed fairly accurately using a named entity (NE) recognizer, many resolvers simply assign to a common noun the first (i.e., most frequent) WordNet sense as its SC (e.g., Soon et al. (2001), Markert and Nissim (2005)). It is not easy to measure the accuracy of this heuristic, but the fact that the SC agreement feature is not used by Soon et al.’s decision tree coreference classifier seems to suggest that the SC values of the NPs are not computed accurately by this first-sense heuristic. Motivated in part by this observation, we examine whether automatically induced semantic class knowledge can improve the performance of a learning-based coreference resolver, reporting evaluation results on the commonly-used ACE corefer536 ence corpus. Our investigation proceeds as follows. 
Train a classifier for labeling the SC of an NP. In ACE, we are primarily concerned with classifying an NP as belonging to one of the ACE semantic classes. For instance, part of the ACE Phase 2 evaluation involves classifying an NP as PERSON, ORGANIZATION, GPE (a geographical-political region), FACILITY, LOCATION, or OTHERS. We adopt a corpus-based approach to SC determination, recasting the problem as a six-class classification task. Derive two knowledge sources for coreference resolution from the induced SCs. The first knowledge source (KS) is semantic class agreement (SCA). Following Soon et al. (2001), we represent SCA as a binary value that indicates whether the induced SCs of the two NPs involved are the same or not. The second KS is mention, which is represented as a binary value that indicates whether an NP belongs to one of the five ACE SCs mentioned above. Hence, the mention value of an NP can be readily derived from its induced SC: the value is NO if its SC is OTHERS, and YES otherwise. This KS could be useful for ACE coreference, since ACE is concerned with resolving only NPs that are mentions. Incorporate the two knowledge sources in a coreference resolver. Next, we investigate whether these two KSs can improve a learning-based baseline resolver that employs a fairly standard feature set. Since (1) the two KSs can each be represented in the resolver as a constraint (for filtering non-mentions or disallowing coreference between semantically incompatible NPs) or as a feature, and (2) they can be applied to the resolver in isolation or in combination, we have eight ways of incorporating these KSs into the baseline resolver. In our experiments on the ACE Phase 2 coreference corpus, we found that (1) our SC induction method yields a significant improvement of 2% in accuracy over Soon et al.’s first-sense heuristic method as described above; (2) the coreference resolver that incorporates our induced SC knowledge by means of the two KSs mentioned above yields a significant improvement of 2% in F-measure over the resolver that exploits the SC knowledge computed by Soon et al.’s method; (3) the mention KS, when used in the baseline resolver as a constraint, improves the resolver by approximately 5-7% in Fmeasure; and (4) SCA, when employed as a feature by the baseline resolver, improves the accuracy of common noun resolution by about 5-8%. 2 Related Work Mention detection. Many ACE participants have also adopted a corpus-based approach to SC determination that is investigated as part of the mention detection (MD) task (e.g., Florian et al. (2006)). Briefly, the goal of MD is to identify the boundary of a mention, its mention type (e.g., pronoun, name), and its semantic type (e.g., person, location). Unlike them, (1) we do not perform the full MD task, as our goal is to investigate the role of SC knowledge in coreference resolution; and (2) we do not use the ACE training data for acquiring our SC classifier; instead, we use the BBN Entity Type Corpus (Weischedel and Brunstein, 2005), which consists of all the Penn Treebank Wall Street Journal articles with the ACE mentions manually identified and annotated with their SCs. This provides us with a training set that is approximately five times bigger than that of ACE. More importantly, the ACE participants do not evaluate the role of induced SC knowledge in coreference resolution: many of them evaluate coreference performance on perfect mentions (e.g., Luo et al. 
(2004)); and for those that do report performance on automatically extracted mentions, they do not explain whether or how the induced SC information is used in their coreference algorithms. Joint probabilistic models of coreference. Recently, there has been a surge of interest in improving coreference resolution by jointly modeling coreference with a related task such as MD (e.g., Daum´e and Marcu (2005)). However, joint models typically need to be trained on data that is simultaneously annotated with information required by all of the underlying models. For instance, Daum´e and Marcu’s model assumes as input a corpus annotated with both MD and coreference information. On the other hand, we tackle coreference and SC induction separately (rather than jointly), since we train our SC determination model on the BBN Entity Type Corpus, where coreference information is absent. 3 Semantic Class Induction This section describes how we train and evaluate a classifier for determining the SC of an NP. 537 3.1 Training the Classifier Training corpus. As mentioned before, we use the BBN Entity Type Corpus for training the SC classifier. This corpus was originally developed to support the ACE and AQUAINT programs and consists of annotations of 12 named entity types and nine nominal entity types. Nevertheless, we will only make use of the annotations of the five ACE semantic types that are present in our ACE Phase 2 coreference corpus, namely, PERSON, ORGANIZATION, GPE, FACILITY, and LOCATION. Training instance creation. We create one training instance for each proper or common NP (extracted using an NP chunker and an NE recognizer) in each training text. Each instance is represented by a set of lexical, syntactic, and semantic features, as described below. If the NP under consideration is annotated as one of the five ACE SCs in the corpus, then the classification of the associated training instance is simply the ACE SC value of the NP. Otherwise, the instance is labeled as OTHERS. This results in 310063 instances in the training set. Features. We represent the training instance for a noun phrase, NPi, using seven types of features: (1) WORD: For each word w in NPi, we create a WORD feature whose value is equal to w. No features are created from stopwords, however. (2) SUBJ VERB: If NPi is involved in a subjectverb relation, we create a SUBJ VERB feature whose value is the verb participating in the relation. We use Lin’s (1998b) MINIPAR dependency parser to extract grammatical relations. Our motivation here is to coarsely model subcategorization. (3) VERB OBJ: A VERB OBJ feature is created in a similar fashion as SUBJ VERB if NPi participates in a verb-object relation. Again, this represents our attempt to coarsely model subcategorization. (4) NE: We use BBN’s IdentiFinder (Bikel et al., 1999), a MUC-style NE recognizer to determine the NE type of NPi. If NPi is determined to be a PERSON or ORGANIZATION, we create an NE feature whose value is simply its MUC NE type. However, if NPi is determined to be a LOCATION, we create a feature with value GPE (because most of the MUC LOCATION NEs are ACE GPE NEs). Otherwise, no NE feature will be created (because we are not interested in the other MUC NE types). 
ACE SC Keywords PERSON person ORGANIZATION social group FACILITY establishment, construction, building, facility, workplace GPE country, province, government, town, city, administration, society, island, community LOCATION dry land, region, landmass, body of water, geographical area, geological formation Table 1: List of keywords used in WordNet search for generating WN CLASS features. (5) WN CLASS: For each keyword w shown in the right column of Table 1, we determine whether the head noun of NPi is a hyponym of w in WordNet, using only the first WordNet sense of NPi.1 If so, we create a WN CLASS feature with w as its value. These keywords are potentially useful features because some of them are subclasses of the ACE SCs shown in the left column of Table 1, while others appear to be correlated with these ACE SCs.2 (6) INDUCED CLASS: Since the first-sense heuristic used in the previous feature may not be accurate in capturing the SC of an NP, we employ a corpusbased method for inducing SCs that is motivated by research in lexical semantics (e.g., Hearst (1992)). Given a large, unannotated corpus3, we use IdentiFinder to label each NE with its NE type and MINIPAR to extract all the appositive relations. An example extraction would be <Eastern Airlines, the carrier>, where the first entry is a proper noun labeled with either one of the seven MUC-style NE types4 or OTHERS5 and the second entry is a common noun. We then infer the SC of a common noun as follows: (1) we compute the probability that the common noun co-occurs with each of the eight NE types6 based on the extracted appositive relations, and (2) if the most likely NE type has a co-occurrence probability above a certain threshold (we set it to 0.7), we create a INDUCED CLASS fea1This is motivated by Lin’s (1998c) observation that a coreference resolver that employs only the first WordNet sense performs slightly better than one that employs more than one sense. 2The keywords are obtained via our experimentation with WordNet and the ACE SCs of the NPs in the ACE training data. 3We used (1) the BLLIP corpus (30M words), which consists of WSJ articles from 1987 to 1989, and (2) the Reuters Corpus (3.7GB data), which has 806,791 Reuters articles. 4Person, organization, location, date, time, money, percent. 5This indicates the proper noun is not a MUC NE. 6For simplicity, OTHERS is viewed as an NE type here. 538 ture for NPi whose value is the most likely NE type. (7) NEIGHBOR: Research in lexical semantics suggests that the SC of an NP can be inferred from its distributionally similar NPs (see Lin (1998a)). Motivated by this observation, we create for each of NPi’s ten most semantically similar NPs a NEIGHBOR feature whose value is the surface string of the NP. To determine the ten nearest neighbors, we use the semantic similarity values provided by Lin’s dependency-based thesaurus, which is constructed using a distributional approach combined with an information-theoretic definition of similarity. Learning algorithms. We experiment with four learners commonly employed in language learning: Decision List (DL): We use the DL learner as described in Collins and Singer (1999), motivated by its success in the related tasks of word sense disambiguation (Yarowsky, 1995) and NE classification (Collins and Singer, 1999). We apply add-one smoothing to smooth the class posteriors. 
1-Nearest Neighbor (1-NN): We use the 1-NN classifier as implemented in TiMBL (Daelemans et al., 2004), employing dot product as the similarity function (which defines similarity as the number of common feature-value pairs between two instances). All other parameters are set to their default values. Maximum Entropy (ME): We employ Lin’s ME implementation7, using a Gaussian prior for smoothing and running the algorithm until convergence. Naive Bayes (NB): We use an in-house implementation of NB, using add-one smoothing to smooth the class priors and the class-conditional probabilities. In addition, we train an SVM classifier for SC determination by combining the output of five classification methods: DL, 1-NN, ME, NB, and Soon et al.’s method as described in the introduction,8 with the goal of examining whether SC classification accuracy can be improved by combining the output of individual classifiers in a supervised manner. Specifically, we (1) use 80% of the instances generated from the BBN Entity Type Corpus to train the four classifiers; (2) apply the four classifiers and Soon et al.’s method to independently make predic7See http://www.cs.ualberta.ca/∼lindek/downloads.htm 8In our implementation of Soon’s method, we label an instance as OTHERS if no NE or WN CLASS feature is generated; otherwise its label is the value of the NE feature or the ACE SC that has the WN CLASS features as its keywords (see Table 1). PER ORG GPE FAC LOC OTH Training 19.8 9.6 11.4 1.6 1.2 56.3 Test 19.5 9.0 9.6 1.8 1.1 59.0 Table 2: Distribution of SCs in the ACE corpus. tions for the remaining 20% of the instances; and (3) train an SVM classifier (using the LIBSVM package (Chang and Lin, 2001)) on these 20% of the instances, where each instance, i, is represented by a set of 31 binary features. More specifically, let Li = {li1, li2, li3, li4, li5} be the set of predictions that we obtained for i in step (2). To represent i, we generate one feature from each non-empty subset of Li. 3.2 Evaluating the Classifiers For evaluation, we use the ACE Phase 2 coreference corpus, which comprises 422 training texts and 97 test texts. Each text has its mentions annotated with their ACE SCs. We create our test instances from the ACE texts in the same way as the training instances described in Section 3.1. Table 2 shows the percentages of instances corresponding to each SC. Table 3 shows the accuracy of each classifier (see row 1) for the ACE training set (54641 NPs, with 16414 proper NPs and 38227 common NPs) and the ACE test set (13444 NPs, with 3713 proper NPs and 9731 common NPs), as well as their performance on the proper NPs (row 2) and the common NPs (row 3). We employ as our baseline system the Soon et al. method (see Footnote 8), whose accuracy is shown under the Soon column. As we can see, DL, 1-NN, and SVM show a statistically significant improvement over the baseline for both data sets, whereas ME and NB perform significantly worse.9 Additional experiments are needed to determine the reason for ME and NB’s poor performance. In an attempt to gain additional insight into the performance contribution of each type of features, we conduct feature ablation experiments using the DL classifier (DL is chosen simply because it is the best performer on the ACE training set). Results are shown in Table 4, where each row shows the accuracy of the DL trained on all types of features except for the one shown in that row (All), as well as accuracies on the proper NPs (PN) and the common NPs (CN). 
For easy reference, the accuracy of the DL 9We use Noreen’s (1989) Approximate Randomization test for significance testing, with p set to .05 unless otherwise stated. 539 Training Set Test Set Soon DL 1-NN ME NB SVM Soon DL 1-NN ME NB SVM 1 Overall 83.1 85.0 84.0 54.5 71.3 84.2 81.1 82.9 83.1 53.0 70.3 83.3 2 Proper NPs 83.1 84.1 81.0 54.2 65.5 82.2 79.6 82.0 79.8 55.8 64.4 80.4 3 Common NPs 83.1 85.4 85.2 54.6 73.8 85.1 81.6 83.3 84.3 51.9 72.6 84.4 Table 3: SC classification accuracies of different methods for the ACE training set and test set. Training Set Test Set Feature Type PN CN All PN CN All All features 84.1 85.4 85.0 82.0 83.3 82.9 - WORD 84.2 85.4 85.0 82.0 83.1 82.8 - SUBJ VERB 84.1 85.4 85.0 82.0 83.3 82.9 - VERB OBJ 84.1 85.4 85.0 82.0 83.3 82.9 - NE 72.9 85.3 81.6 74.1 83.2 80.7 - WN CLASS 84.1 85.9 85.3 81.9 84.1 83.5 - INDUCED C 84.0 85.6 85.1 82.0 83.6 83.2 - NEIGHBOR 82.8 84.9 84.3 80.2 82.9 82.1 Table 4: Results for feature ablation experiments. Training Set Test Set Feature Type PN CN All PN CN All WORD 64.0 83.9 77.9 66.5 82.4 78.0 SUBJ VERB 24.0 70.2 56.3 28.8 70.5 59.0 VERB OBJ 24.0 70.2 56.3 28.8 70.5 59.0 NE 81.1 72.1 74.8 78.4 71.4 73.3 WN CLASS 25.6 78.8 62.8 30.4 78.9 65.5 INDUCED C 25.8 81.1 64.5 30.0 80.3 66.3 NEIGHBOR 67.7 85.8 80.4 68.0 84.4 79.8 Table 5: Accuracies of single-feature classifiers. classifier trained on all types of features is shown in row 1 of the table. As we can see, accuracy drops significantly with the removal of NE and NEIGHBOR. As expected, removing NE precipitates a large drop in proper NP accuracy; somewhat surprisingly, removing NEIGHBOR also causes proper NP accuracy to drop significantly. To our knowledge, there are no prior results on using distributionally similar neighbors as features for supervised SC induction. Note, however, that these results do not imply that the remaining feature types are not useful for SC classification; they simply suggest, for instance, that WORD is not important in the presence of other feature types. To get a better idea of the utility of each feature type, we conduct another experiment in which we train seven classifiers, each of which employs exactly one type of features. The accuracies of these classifiers are shown in Table 5. As we can see, NEIGHBOR has the largest contribution. This again demonstrates the effectiveness of a distributional approach to semantic similarity. Its superior performance to WORD, the second largest contributor, could be attributed to its ability to combat data sparseness. The NE feature, as expected, is crucial to the classification of proper NPs. 4 Application to Coreference Resolution We can now derive from the induced SC information two KSs — semantic class agreement and mention — and incorporate them into our learning-based coreference resolver in eight different ways, as described in the introduction. This section examines whether our coreference resolver can benefit from any of the eight ways of incorporating these KSs. 4.1 Experimental Setup As in SC induction, we use the ACE Phase 2 coreference corpus for evaluation purposes, acquiring the coreference classifiers on the 422 training texts and evaluating their output on the 97 test texts. We report performance in terms of two metrics: (1) the Fmeasure score as computed by the commonly-used MUC scorer (Vilain et al., 1995), and (2) the accuracy on the anaphoric references, computed as the fraction of anaphoric references correctly resolved. 
Following Ponzetto and Strube (2006), we consider an anaphoric reference, NPi, correctly resolved if NPi and its closest antecedent are in the same coreference chain in the resulting partition. In all of our experiments, we use NPs automatically extracted by an in-house NP chunker and IdentiFinder. 4.2 The Baseline Coreference System Our baseline coreference system uses the C4.5 decision tree learner (Quinlan, 1993) to acquire a classifier on the training texts for determining whether two NPs are coreferent. Following previous work (e.g., Soon et al. (2001) and Ponzetto and Strube (2006)), we generate training instances as follows: a positive instance is created for each anaphoric NP, NPj, and its closest antecedent, NPi; and a negative instance is created for NPj paired with each of the intervening NPs, NPi+1, NPi+2, . . ., NPj−1. Each instance is represented by 33 lexical, grammatical, semantic, and 540 positional features that have been employed by highperforming resolvers such as Ng and Cardie (2002) and Yang et al. (2003), as described below. Lexical features. Nine features allow different types of string matching operations to be performed on the given pair of NPs, NPx and NPy10, including (1) exact string match for pronouns, proper nouns, and non-pronominal NPs (both before and after determiners are removed); (2) substring match for proper nouns and non-pronominal NPs; and (3) head noun match. In addition, one feature tests whether all the words that appear in one NP also appear in the other NP. Finally, a nationality matching feature is used to match, for instance, British with Britain. Grammatical features. 22 features test the grammatical properties of one or both of the NPs. These include ten features that test whether each of the two NPs is a pronoun, a definite NP, an indefinite NP, a nested NP, and a clausal subject. A similar set of five features is used to test whether both NPs are pronouns, definite NPs, nested NPs, proper nouns, and clausal subjects. In addition, five features determine whether the two NPs are compatible with respect to gender, number, animacy, and grammatical role. Furthermore, two features test whether the two NPs are in apposition or participate in a predicate nominal construction (i.e., the IS-A relation). Semantic features. Motivated by Soon et al. (2001), we have a semantic feature that tests whether one NP is a name alias or acronym of the other. Positional feature. We have a feature that computes the distance between the two NPs in sentences. After training, the decision tree classifier is used to select an antecedent for each NP in a test text. Following Soon et al. (2001), we select as the antecedent of each NP, NPj, the closest preceding NP that is classified as coreferent with NPj. If no such NP exists, no antecedent is selected for NPj. Row 1 of Table 6 and Table 7 shows the results of the baseline system in terms of F-measure (F) and accuracy in resolving 4599 anaphoric references (All), respectively. For further analysis, we also report the corresponding recall (R) and precision (P) in Table 6, as well as the accuracies of the system in resolving 1769 pronouns (PRO), 1675 proper NPs (PN), and 1155 common NPs (CN) in Table 7. As 10We assume that NPx precedes NPy in the associated text. we can see, the baseline achieves an F-measure of 57.0 and a resolution accuracy of 48.4. To get a better sense of how strong our baseline is, we re-implement the Soon et al. (2001) coreference resolver. 
This simply amounts to replacing the 33 features in the baseline resolver with the 12 features employed by Soon et al.’s system. Results of our Duplicated Soon et al. system are shown in row 2 of Tables 6 and 7. In comparison to our baseline, the Duplicated Soon et al. system performs worse according to both metrics, and although the drop in F-measure seems moderate, the performance difference is in fact highly significant (p=0.002).11 4.3 Coreference with Induced SC Knowledge Recall from the introduction that our investigation of the role of induced SC knowledge in learning-based coreference resolution proceeds in three steps: Label the SC of each NP in each ACE document. If a noun phrase, NPi, is a proper or common NP, then its SC value is determined using an SC classifier that we acquired in Section 3. On the other hand, if NPi is a pronoun, then we will be conservative and posit its SC value as UNCONSTRAINED (i.e., it is semantically compatible with all other NPs).12 Derive two KSs from the induced SCs. Recall that our first KS, Mention, is defined on an NP; its value is YES if the induced SC of the NP is not OTHERS, and NO otherwise. On the other hand, our second KS, SCA, is defined on a pair of NPs; its value is YES if the two NPs have the same induced SC that is not OTHERS, and NO otherwise. Incorporate the two KSs into the baseline resolver. Recall that there are eight ways of incorporating these two KSs into our resolver: they can each be represented as a constraint or as a feature, and they can be applied to the resolver in isolation and in combination. Constraints are applied during the antecedent selection step. Specifically, when employed as a constraint, the Mention KS disallows coreference between two NPs if at least one of them has a Mention value of NO, whereas the SCA KS disallows coreference if the SCA value of the two NPs involved is NO. When encoded as a feature for the resolver, the Mention feature for an NP pair has the 11Again, we use Approximate Randomization with p=.05. 12The only exception is pronouns whose SC value can be easily determined to be PERSON (e.g., he, him, his, himself). 541 System Variation R P F R P F R P F R P F 1 Baseline system 60.9 53.6 57.0 – – – – – – – – – 2 Duplicated Soon et al. 56.1 54.4 55.3 – – – – – – – – – Add to the Baseline Soon’s SC Method Decision List SVM Perfect Information 3 Mention(C) only 56.9 69.7 62.6 59.5 70.6 64.6 59.5 70.7 64.6 61.2 83.1 70.5 4 Mention(F) only 60.9 54.0 57.2 61.2 52.9 56.7 60.9 53.6 57.0 62.3 33.7 43.8 5 SCA(C) only 56.4 70.0 62.5 57.7 71.2 63.7 58.9 70.7 64.3 61.3 86.1 71.6 6 SCA(F) only 62.0 52.8 57.0 62.5 53.5 57.6 63.0 53.3 57.7 71.1 33.0 45.1 7 Mention(C) + SCA(C) 56.4 70.0 62.5 57.7 71.2 63.7 58.9 70.8 64.3 61.3 86.1 71.6 8 Mention(C) + SCA(F) 58.2 66.4 62.0 60.9 66.8 63.7 61.4 66.5 63.8 71.1 76.7 73.8 9 Mention(F) + SCA(C) 56.4 69.8 62.4 57.7 71.3 63.8 58.9 70.6 64.3 62.7 85.3 72.3 10 Mention(F) + SCA(F) 62.0 52.7 57.0 62.6 52.8 57.3 63.2 52.6 57.4 71.8 30.3 42.6 Table 6: Coreference results obtained via the MUC scoring program for the ACE test set. System Variation PRO PN CN All PRO PN CN All PRO PN CN All 1 Baseline system 59.2 54.8 22.5 48.4 – – – – – – – – 2 Duplicated Soon et al. 
53.4 45.7 16.9 41.4 – – – – – – – – Add to the Baseline Soon’s SC Method Decision List SVM 3 Mention(C) only 58.5 51.3 16.5 45.3 59.1 54.1 20.2 47.5 59.1 53.9 20.6 47.5 4 Mention(F) only 59.2 55.0 22.5 48.5 59.2 56.1 22.4 48.8 59.4 55.2 22.6 48.6 5 SCA(C) only 58.1 50.1 16.4 44.7 58.1 51.8 17.1 45.5 58.5 52.0 19.6 46.3 6 SCA(F) only 59.2 54.9 27.8 49.7 60.4 56.7 30.1 51.5 60.8 56.4 29.4 51.3 7 Mention(C) + SCA(C) 58.1 50.1 16.4 44.7 58.1 51.8 17.1 45.5 58.5 51.9 19.5 46.3 8 Mention(C) + SCA(F) 58.9 52.0 22.3 47.2 60.2 55.9 28.1 50.6 60.7 55.3 27.4 50.4 9 Mention(F) + SCA(C) 58.1 50.3 16.3 44.8 58.1 52.4 16.7 45.6 58.6 52.4 19.7 46.6 10 Mention(F) + SCA(F) 59.2 55.0 27.6 49.7 60.4 56.8 30.1 51.5 60.8 56.5 29.5 51.4 Table 7: Resolution accuracies for the ACE test set. value YES if and only if the Mention value for both NPs is YES, whereas the SCA feature for an NP pair has its value taken from the SCA KS. Now, we can evaluate the impact of the two KSs on the performance of our baseline resolver. Specifically, rows 3-6 of Tables 6 and 7 show the F-measure and the resolution accuracy, respectively, when exactly one of the two KSs is employed by the baseline as either a constraint (C) or a feature (F), and rows 7-10 of the two tables show the results when both KSs are applied to the baseline. Furthermore, each row of Table 6 contains four sets of results, each of which corresponds to a different method for determining the SC value of an NP. For instance, the first set is obtained by using Soon et al.’s method as described in Footnote 8 to compute SC values, serving as sort of a baseline for our results using induced SC values. The second and third sets are obtained based on the SC values computed by the DL and the SVM classifier, respectively.13 The last set corresponds to an oracle experiment in which the resolver has access to perfect SC information. Rows 3-10 of Table 13Results using other learners are not shown due to space limitations. DL and SVM are chosen simply because they achieve the highest SC classification accuracies on the ACE training set. 7 can be interpreted in a similar manner. From Table 6, we can see that (1) in comparison to the baseline, F-measure increases significantly in the five cases where at least one of the KSs is employed as a constraint by the resolver, and such improvements stem mainly from significant gains in precision; (2) in these five cases, the resolvers that use SCs induced by DL and SVM achieve significantly higher F-measure scores than their counterparts that rely on Soon’s method for SC determination; and (3) none of the resolvers appears to benefit from SCA information whenever mention is used as a constraint. Moreover, note that even with perfectly computed SC information, the performance of the baseline system does not improve when neither MD nor SCA is employed as a constraint. These results provide further evidence that the decision tree learner is not exploiting these two semantic KSs in an optimal manner, whether they are computed automatically or perfectly. Hence, in machine learning for coreference resolution, it is important to determine not only what linguistic KSs to use, but also how to use them. While the coreference results in Table 6 seem to suggest that SCA and mention should be employed as constraints, the resolution results in Table 7 sug542 gest that SCA is better encoded as a feature. 
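To make the two modes of incorporation concrete, the following sketch shows how the Mention and SCA knowledge sources could be applied either as hard constraints on candidate antecedents or as two extra features on an NP pair; the helper names and the treatment of UNCONSTRAINED pronouns are assumptions of the sketch rather than details taken from the paper.

```python
OTHERS = "OTHERS"
UNCONSTRAINED = "UNCONSTRAINED"   # posited for most pronouns

def mention_ok(sc):
    """Mention KS: YES iff the induced semantic class is not OTHERS."""
    return sc != OTHERS

def sca_ok(sc_i, sc_j):
    """SCA KS: YES iff both NPs share the same induced SC that is not OTHERS.
    Treating UNCONSTRAINED as compatible with anything is an assumption of
    this sketch."""
    if UNCONSTRAINED in (sc_i, sc_j):
        return True
    return sc_i == sc_j and sc_i != OTHERS

def allow_link(sc_i, sc_j, mention_as_constraint, sca_as_constraint):
    """Hard-constraint mode: filter a candidate antecedent before the
    closest-first selection step."""
    if mention_as_constraint and not (mention_ok(sc_i) and mention_ok(sc_j)):
        return False
    if sca_as_constraint and not sca_ok(sc_i, sc_j):
        return False
    return True

def extra_features(sc_i, sc_j):
    """Feature mode: two extra values appended to the pair's feature vector.
    The pairwise Mention feature is YES only if both NPs are mentions."""
    return [int(mention_ok(sc_i) and mention_ok(sc_j)),
            int(sca_ok(sc_i, sc_j))]
```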
Specifically, (1) in comparison to the baseline, the accuracy of common NP resolution improves by about 5-8% when SCA is encoded as a feature; and (2) whenever SCA is employed as a feature, the overall resolution accuracy is significantly higher for resolvers that use SCs induced by DL and SVM than those that rely on Soon’s method for SC determination, with improvements in resolution observed on all three NP types. Overall, these results provide suggestive evidence that both KSs are useful for learning-based coreference resolution. In particular, mention should be employed as a constraint, whereas SCA should be used as a feature. Interestingly, this is consistent with the results that we obtained when the resolver has access to perfect SC information (see Table 6), where the highest F-measure is achieved by employing mention as a constraint and SCA as a feature. 5 Conclusions We have shown that (1) both mention and SCA can be usefully employed to improve the performance of a learning-based coreference system, and (2) employing SC knowledge induced in a supervised manner enables a resolver to achieve better performance than employing SC knowledge computed by Soon et al.’s simple method. In addition, we found that the MUC scoring program is unable to reveal the usefulness of the SCA KS, which, when encoded as a feature, substantially improves the accuracy of common NP resolution. This underscores the importance of reporting both resolution accuracy and clustering-level accuracy when analyzing the performance of a coreference resolver. References D. Bean and E. Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. In Proc. of HLT/NAACL, pages 297–304. D. M. Bikel, R. Schwartz, and R. M. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning 34(1–3):211–231. C.-C. Chang and C.-J. Lin, 2001. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/∼cjlin/libsvm. M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In Proc. of EMNLP/VLC. W. Daelemans, J. Zavrel, K. van der Sloot, and A. van den Bosch. 2004. TiMBL: Tilburg Memory Based Learner, version 5.1, Reference Guide. ILK Technical Report. H. Daum´e III and D. Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In Proc. of HLT/EMNLP, pages 97–104. R. Florian, H. Jing, N. Kambhatla, and I. Zitouni. 2006. Factorizing complex models: A case study in mention detection. In Proc. of COLING/ACL, pages 473–480. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of COLING. H. Ji, D. Westbrook, and R. Grishman. 2005. Using semantic relations to refine coreference decisions. In Proc. of HLT/EMNLP, pages 17–24. A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proc. of NAACL, pages 289–296. D. Lin. 1998a. Automatic retrieval and clustering of similar words. In Proc. of COLING/ACL, pages 768–774. D. Lin. 1998b. Dependency-based evaluation of MINIPAR. In Proc. of the LREC Workshop on the Evaluation of Parsing Systems, pages 48–56. D. Lin. 1998c. Using collocation statistics in information extraction. In Proc. of MUC-7. X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the Bell tree. In Proc. of the ACL. K. Markert and M. Nissim. 2005. 
Comparing knowledge sources for nominal anaphora resolution. Computational Linguistics, 31(3):367–402. R. Mitkov. 2002. Anaphora Resolution. Longman. R. Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proc. of COLING/ACL, pages 869–875. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proc. of the ACL. E. W. Noreen. 1989. Computer Intensive Methods for Testing Hypothesis: An Introduction. John Wiley & Sons. M. Poesio, R. Mehta, A. Maroudas, and J. Hitzeman. 2004. Learning to resolve bridging references. In Proc. of the ACL. S. P. Ponzetto and M. Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proc. of HLT/NAACL, pages 192–199. J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. W. M. Soon, H. T. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. J. Tetreault. 2001. A corpus-based evaluation of centering and pronoun resolution. Computational Linguistics, 27(4). M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proc. of MUC-6, pages 45–52. R. Weischedel and A. Brunstein. 2005. BBN pronoun coreference and entity type corpus. Linguistica Data Consortium. X. Yang, G. Zhou, J. Su, and C. L. Tan. 2003. Coreference resolution using competitive learning approach. In Proc. of the ACL, pages 176–183. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of the ACL. 543
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 544–551, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Generating a Table-of-Contents S.R.K. Branavan, Pawan Deshpande and Regina Barzilay Massachusetts Institute of Technology {branavan, pawand, regina}@csail.mit.edu Abstract This paper presents a method for the automatic generation of a table-of-contents. This type of summary could serve as an effective navigation tool for accessing information in long texts, such as books. To generate a coherent table-of-contents, we need to capture both global dependencies across different titles in the table and local constraints within sections. Our algorithm effectively handles these complex dependencies by factoring the model into local and global components, and incrementally constructing the model’s output. The results of automatic evaluation and manual assessment confirm the benefits of this design: our system is consistently ranked higher than nonhierarchical baselines. 1 Introduction Current research in summarization focuses on processing short articles, primarily in the news domain. While in practice the existing summarization methods are not limited to this material, they are not universal: texts in many domains and genres cannot be summarized using these techniques. A particularly significant challenge is the summarization of longer texts, such as books. The requirement for high compression rates and the increased need for the preservation of contextual dependencies between summary sentences places summarization of such texts beyond the scope of current methods. In this paper, we investigate the automatic generation of tables-of-contents, a type of indicative summary particularly suited for accessing information in long texts. A typical table-of-contents lists topics described in the source text and provides information about their location in the text. The hierarchical organization of information in the table further refines information access by specifying the relations between different topics and providing rich contextual information during browsing. Commonly found in books, tables-of-contents can also facilitate access to other types of texts. For instance, this type of summary could serve as an effective navigation tool for understanding a long, unstructured transcript for an academic lecture or a meeting. Given a text, our goal is to generate a tree wherein a node represents a segment of text and a title that summarizes its content. This process involves two tasks: the hierarchical segmentation of the text, and the generation of informative titles for each segment. The first task can be addressed by using the hierarchical structure readily available in the text (e.g., chapters, sections and subsections) or by employing existing topic segmentation algorithms (Hearst, 1994). In this paper, we take the former approach. As for the second task, a naive approach would be to employ existing methods of title generation to each segment, and combine the results into a tree structure. However, the latter approach cannot guarantee that the generated table-of-contents forms a coherent representation of the entire text. Since titles of different segments are generated in isolation, some of the generated titles may be repetitive. 
Even nonrepetitive titles may not provide sufficient information to discriminate between the content of one seg544 Scientific computing Remarkable recursive algorithm for multiplying matrices Divide and conquer algorithm design Making a recursive algorithm Solving systems of linear equations Computing an LUP decomposition Forward and back substitution Symmetric positive definite matrices and least squares approximation Figure 1: A fragment of a table-of-contents generated by our method. ment and another. Therefore, it is essential to generate an entire table-of-contents tree in a concerted fashion. This paper presents a hierarchical discriminative approach for table-of-contents generation. Figure 1 shows a fragment of a table-of-contents automatically generated by this algorithm. Our method has two important points of departure from existing techniques. First, we introduce a structured discriminative model for table-of-contents generation that accounts for a wide range of phrase-based and collocational features. The flexibility of this model results in improved summary quality. Second, our model captures both global dependencies across different titles in the tree and local dependencies within sections. We decompose the model into local and global components that handle different classes of dependencies. We further reduce the search space through incremental construction of the model’s output by considering only the promising parts of the decision space. We apply our method to process a 1,180 page algorithms textbook. To assess the contribution of our hierarchical model, we compare our method with state-of-the-art methods that generate each segment title independently.1 The results of automatic evaluation and manual assessment of title quality show that the output of our system is consistently ranked higher than that of non-hierarchical baselines. 2 Related Work Although most current research in summarization focuses on newspaper articles, a number of approaches have been developed for processing longer texts. Most of these approaches are tailored to a par1The code and feature vector data for our model and the baselines are available at http://people.csail.mit.edu/branavan/code/toc. ticular domain, such as medical literature or scientific articles. By making strong assumptions about the input structure and the desired format of the output, these methods achieve a high compression rate while preserving summary coherence. For instance, Teufel and Moens (2002) summarize scientific articles by selecting rhetorical elements that are commonly present in scientific abstracts. Elhadad and McKeown (2001) generate summaries of medical articles by following a certain structural template in content selection and realization. Our work, however, is closer to domainindependent methods for summarizing long texts. Typically, these approaches employ topic segmentation to identify a list of topics described in a document, and then produce a summary for each part (Boguraev and Neff, 2000; Angheluta et al., 2002). In contrast to our method, these approaches perform either sentence or phrase extraction, rather than summary generation. Moreover, extraction for each segment is performed in isolation, and global constraints on the summary are not enforced. Finally, our work is also related to research on title generation (Banko et al., 2000; Jin and Hauptmann, 2001; Dorr et al., 2003). 
Since work in this area focuses on generating titles for one article at a time (e.g., newspaper reports), the issue of hierarchical generation, which is unique to our task, does not arise. However, this is not the only novel aspect of the proposed approach. Our model learns title generation in a fully discriminative framework, in contrast to the commonly used noisy-channel model. Thus, instead of independently modeling the selection and grammaticality constraints, we learn both types of features in a single framework. This joint training regime supports greater flexibility in modeling feature interaction. 545 3 Problem Formulation We formalize the problem of table-of-contents generation as a supervised learning task where the goal is to map a tree of text segments S to a tree of titles T. A segment may correspond to a chapter, section or subsection. Since the focus of our work is on the generation aspect of table-of-contents construction, we assume that the hierarchical segmentation of a text is provided in the input. This division can either be automatically computed using one of the many available text segmentation algorithms (Hearst, 1994), or it can be based on demarcations already present in the input (e.g., paragraph markers). During training, the algorithm is provided with a set of pairs (Si, T i) for i = 1, . . . , p, where Si is the ith tree of text segments, and T i is the table-ofcontents for that tree. During testing, the algorithm generates tables-of-contents for unseen trees of text segments. We also assume that during testing the desired title length is provided as a parameter to the algorithm. 4 Algorithm To generate a coherent table-of-contents, we need to take into account multiple constraints: the titles should be grammatical, they should adequately represent the content of their segments, and the tableof-contents as a whole should clearly convey the relations between the segments. Taking a discriminative approach for modeling this task would allow us to achieve this goal: we can easily integrate a range of constraints in a flexible manner. Since the number of possible labels (i.e., tables-of-contents) is prohibitively large and the labels themselves exhibit a rich internal structure, we employ a structured discriminative model that can easily handle complex dependencies. Our solution relies on two orthogonal strategies to balance the tractability and the richness of the model. First, we factor the model into local and global components. Second, we incrementally construct the output of each component using a search-based discriminative algorithm. Both of these strategies have the effect of intelligently pruning the decision space. Our model factorization is driven by the different types of dependencies which are captured by the two components. The first model is local: for each segment, it generates a list of candidate titles ranked by their individual likelihoods. This model focuses on grammaticality and word selection constraints, but it does not consider relations among different titles in the table-of-contents. These latter dependencies are captured in the global model that constructs a tableof-contents by selecting titles for each segment from the available candidates. Even after this factorization, the decision space for each model is large: for the local model, it is exponential in the length of the segment title, and for the global model it is exponential in the size of the tree. 
Therefore, we construct the output for each of these models incrementally using beam search. The algorithm maintains the most promising partial output structures, which are extended at every iteration. The model incorporates this decoding procedure into the training process, thereby learning model parameters best suited for the specific decoding algorithm. Similar models have been successfully applied in the past to other tasks including parsing (Collins and Roark, 2004), chunking (Daum´e and Marcu, 2005), and machine translation (Cowan et al., 2006). 4.1 Model Structure The model takes as input a tree of text segments S. Each segment s ∈S and its title z are represented as a local feature vector Φloc(s, z). Each component of this vector stores a numerical value. This feature vector can track any feature of the segment s together with its title z. For instance, the ith component of this vector may indicate whether the bigram (z[j]z[j + 1]) occurs in s, where z[j] is the jth word in z: (Φloc(s, z))i =  1 if (z[j]z[j + 1]) ∈s 0 otherwise In addition, our model captures dependencies among multiple titles that appear in the same tableof-contents. We represent a tree of segments S paired with titles T with the global feature vector Φglob(S, T). The components here are also numerical features. For example, the ith component of the vector may indicate whether a title is repeated in the table-of-contents T: 546 (Φglob(S, T))i =  1 repeated title 0 otherwise Our model constructs a table-of-contents in two basic steps: Step One The goal of this step is to generate a list of k candidate titles for each segment s ∈S. To do so, for each possible title z, the model maps the feature vector Φloc(s, z) to a real number. This mapping can take the form of a linear model, Φloc(s, z) · αloc where αloc is the local parameter vector. Since the number of possible titles is exponential, we cannot consider all of them. Instead, we prune the decision space by incrementally constructing promising titles. At each iteration j, the algorithm maintains a beam Q of the top k partially generated titles of length j. During iteration j + 1, a new set of candidates is grown by appending a word from s to the right of each member of the beam Q. We then sort the entries in Q: z1, z2, . . . such that Φloc(s, zi)·αloc ≥Φloc(s, zi+1)·αloc, ∀i. Only the top k candidates are retained, forming the beam for the next iteration. This process continues until a title of the desired length is generated. Finally, the list of k candidates is returned. Step Two Given a set of candidate titles z1, z2, . . . , zk for each segment s ∈S, our goal is to construct a table-of-contents T by selecting the most appropriate title from each segment’s candidate list. To do so, our model computes a score for the pair (S, T) based on the global feature vector Φglob(S, T): Φglob(S, T) · αglob where αglob is the global parameter vector. As with the local model (step one), the number of possible tables-of-contents is too large to be considered exhaustively. Therefore, we incrementally construct a table-of-contents by traversing the tree of segments in a pre-order walk (i.e., the order in which segments appear in the text). In this case, the beam contains partially generated tablesof-contents, which are expanded by one segment title at a time. To further reduce the search space, during decoding only the top five candidate titles for a segment are given to the global model. 
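The candidate-title generation of Step One can be pictured with the following sketch, in which partial titles are grown word by word under a beam of size k and scored by the linear model Φloc(s, z) · αloc; the local feature map is a placeholder for the features of Section 5, and the sketch illustrates the search procedure rather than the released implementation.

```python
def candidate_titles(segment_words, phi_loc, alpha_loc, title_len, k):
    """Generate k candidate titles for one segment by beam search.

    segment_words: the words of the segment (titles are built from these).
    phi_loc(segment_words, partial_title) -> list of feature values; a
        placeholder for the local feature map.
    alpha_loc: local parameter vector of the same dimension.
    """
    def score(title):
        feats = phi_loc(segment_words, title)
        return sum(f * a for f, a in zip(feats, alpha_loc))

    beam = [[]]                               # partial titles of length j
    for _ in range(title_len):
        # Grow each partial title by appending a word from the segment.
        grown = [title + [w] for title in beam for w in segment_words]
        grown.sort(key=score, reverse=True)   # best-scoring candidates first
        beam = grown[:k]                      # prune to the top k
    return beam
```

The same incremental strategy carries over to Step Two, with partial tables-of-contents in place of partial titles and the top five local candidates per segment as the choices at each step.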
4.2 Training the Model Training for Step One We now describe how the local parameter vector αloc is estimated from training data. We are given a set of training examples (si, yi) for i = 1, . . . , l, where si is the ith text segment, and yi is the title of this segment. This linear model is learned using a variant of the incremental perceptron algorithm (Collins and Roark, 2004; Daum´e and Marcu, 2005). This online algorithm traverses the training set multiple times, updating the parameter vector αloc after each training example in case of mis-predictions. The algorithm encourages a setting of the parameter vector αloc that assigns the highest score to the feature vector associated with the correct title. The pseudo-code of the algorithm is shown in Figure 2. Given a text segment s and the corresponding title y, the training algorithm maintains a beam Q containing the top k partial titles of length j. The beam is updated on each iteration using the functions GROW and PRUNE. For every word in segment s and for every partial title in Q, GROW creates a new title by appending this word to the title. PRUNE retains only the top ranked candidates based on the scoring function Φloc(s, z) · αloc. If y[1 . . . j] (i.e., the prefix of y of length j) is not in the modified beam Q, then αloc is updated2 as shown in line 4 of the pseudo-code in Figure 2. In addition, Q is replaced with a beam containing only y[1 . . . j] (line 5). This process is performed |y| times. We repeat this process for all training examples over 50 training iterations. 3 Training for Step Two To train the global parameter vector αglob, we are given training examples (Si, T i) for i = 1, . . . , p, where Si is the ith tree of text segments, and T i is the table-of-contents for that tree. However, we cannot directly use these tablesof-contents for training our global model: since this model selects one of the candidate titles zi 1, . . . , zi k returned by the local model, the true title of the segment may not be among these candidates. Therefore, to determine a new target title for the segment, we need to identify the title in the set of candidates 2If the word in the jth position of y does not occur in s, then the parameter update is not performed. 3For decoding, αloc is averaged over the training iterations as in Collins and Roark (2004). 547 s – segment text. y – segment title. y[1 . . . j] – prefix of y of length j. Q – beam containing partial titles. 1. for j = 1 . . . |y| 2. Q = PRUNE(GROW(s, Q)) 3. if y[1 . . . j] /∈Q 4. αloc = αloc + Φloc(s, y[1 . . . j]) −P z∈Q Φloc(s,z) |Q| 5. Q = {y[1 . . . j]} Figure 2: The training algorithm for the local model. that is closest to the true title. We employ the L1 distance measure to compare the content word overlap between two titles.4 For each input (S, T), and each segment s ∈S, we identify the segment title closest in the L1 measure to the true title y5: z∗= arg min i L1(zi, y) Once all the training targets in the corpus have been identified through this procedure, the global linear model Φglob(S, T)·αglob is learned using the same perceptron algorithm as in step one. Rather than maintaining the beam of partially generated titles, the beam Q holds partially generated tables-ofcontents. Also, the loop in line 1 of Figure 2 iterates over segment titles rather than words. The global model is trained over 200 iterations. 5 Features Local Features Our local model aims to generate titles which adequately represent the meaning of the segment and are grammatical. 
Selection and contextual preferences are encoded in the local features. The features that capture selection constraints are specified at the word level, and contextual features are expressed at the word sequence level. The selection features capture the position of the word, its TF*IDF, and part-of-speech information. In addition, they also record whether the word occurs in the body of neighboring segments. We also 4This measure is close to ROUGE-1 which in addition considers the overlap in auxiliary words. 5In the case of ties, one of the titles is picked arbitrarily. Segment has the same title as its sibling Segment has the same title as its parent Two adjacent sibling titles have the same head Two adjacent sibling titles start with the same word Rank given to the title by the local model Table 1: Examples of global features. generate conjunctive features by combining features of different types. The contextual features record the bigram and trigram language model scores, both for words and for part-of-speech tags. The trigram scores are averaged over the title. The language models are trained using the SRILM toolkit. Another type of contextual feature models the collocational properties of noun phrases in the title. This feature aims to eliminate generic phrases, such as “the following section” from the generated titles.6 To achieve this effect, for each noun phrase in the title, we measure the ratio of their frequency in the segment to their frequency in the corpus. Global Features Our global model describes the interaction between different titles in the tree (See Table 1). These interactions are encoded in three types of global features. The first type of global feature indicates whether titles in the tree are redundant at various levels of the tree structure. The second type of feature encourages parallel constructions within the same tree. For instance, titles of adjoining segments may be verbalized as noun phrases with the same head (e.g., “Bubble sort algorithm”, “Merge sort algorithm”). We capture this property by comparing words that appear in certain positions in adjacent sibling titles. Finally, our global model also uses the rank of the title provided by the local model. This feature enables the global model to account for the preferences of the local model in the title selection process. 6 Evaluation Set-Up Data We apply our method to an undergraduate algorithms textbook. For detailed statistics on the data see Table 2. We split its table-of-contents into a set 6Unfortunately, we could not use more sophisticated syntactic features due to the low accuracy of statistical parsers on our corpus. 548 Number of Titles 540 Number of Trees 39 Tree Depth 4 Number of Words 269,650 Avg. Title Length 3.64 Avg. Branching 3.29 Avg. Title Duplicates 21 Table 2: Statistics on the corpus used in the experiments. of independent subtrees. Given a table-of-contents of depth n with a root branching factor of r, we generate r subtrees, with a depth of at most n −1. We randomly select 80% of these trees for training, and the rest are used for testing. In our experiments, we use ten different randomizations to compensate for the small number of available trees. Admittedly, this method of generating training and testing data omits some dependencies at the level of the table-of-contents as a whole. However, the subtrees used in our experiments still exhibit a sufficiently deep hierarchical structure, rich with contextual dependencies. 
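The subtree construction and the 80/20 division just described might be set up as in the following sketch, which assumes each tree node exposes a list of its children and mirrors the ten random trials used in the experiments.

```python
import random

def make_splits(full_toc_root, n_trials=10, train_fraction=0.8):
    """Carve the table-of-contents into independent subtrees and produce
    several random train/test splits.

    full_toc_root: root of the table-of-contents; each node is assumed to
        expose a .children list (a simplifying assumption of this sketch).
    """
    # A depth-n tree with root branching factor r yields r subtrees of
    # depth at most n - 1: one per child of the root.
    subtrees = list(full_toc_root.children)

    splits = []
    for seed in range(n_trials):
        rng = random.Random(seed)
        shuffled = subtrees[:]
        rng.shuffle(shuffled)
        cut = int(round(train_fraction * len(shuffled)))
        splits.append((shuffled[:cut], shuffled[cut:]))   # (train, test)
    return splits
```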
Baselines As an alternative to our hierarchical discriminative method, we consider three baselines that build a table-of-contents by generating a title for each segment individually, without taking into account the tree structure, and one hierarchical generative baseline. The first method generates a title for a segment by selecting the noun phrase from that segment with the highest TF*IDF. This simple method is commonly used to generate keywords for browsing applications in information retrieval, and has been shown to be effective for summarizing technical content (Wacholder et al., 2001). The second baseline is based on the noisy-channel generative (flat generative, FG) model proposed by Banko et al., (2000). Similar to our local model, this method captures both selection and grammatical constraints. However, these constraints are modeled separately, and then combined in a generative framework. We use our local model (Flat Discriminative model, FD) as the third baseline. Like the second baseline, this model omits global dependencies, and only focuses on features that capture relations within individual segments. In the hierarchical generative (HG) baseline we run our global model on the ranked list of titles produced for each section by the noisy-channel generative model. The last three baselines and our algorithm are provided with the title length as a parameter. In our experiments, the algorithms use the reference title length. Experimental Design: Comparison with reference tables-of-contents Reference based evaluation is commonly used to assess the quality of machine-generated headlines (Wang et al., 2005). We compare our system’s output with the table-ofcontents from the textbook using ROUGE metrics. We employ a publicly available software package,7 with all the parameters set to default values. Experimental Design: Human assessment The judges were each given 30 segments randomly selected from a set of 359 test segments. For each test segment, the judges were presented with its text, and 3 alternative titles consisting of the reference and the titles produced by the hierarchical discriminative model, and the best performing baseline. In addition, the judges had access to all of the segments in the book. A total of 498 titles for 166 unique segments were ranked. The system identities were hidden from the judges, and the titles were presented in random order. The judges ranked the titles based on how well they represent the content of the segment. Titles were ranked equal if they were judged to be equally representative of the segment. Six people participated in this experiment. All the participants were graduate students in computer science who had taken the algorithms class in the past and were reasonably familiar with the material. 7 Results Figure 3 shows fragments of the tables-of-contents generated by our method and the four baselines along with the reference counterpart. These extracts illustrate three general phenomena that we observed in the test corpus. First, the titles produced by keyword extraction exhibit a high degree of redundancy. In fact, 40% of the titles produced by this method are repeated more than once in the table-of-contents. 
In 7http://www.isi.edu/licensed-sw/see/rouge/ 549 Reference: hash tables direct address tables hash tables collision resolution by chaining analysis of hashing with chaining open addressing linear probing quadratic probing double hashing Flat Generative: linked list worst case time wasted space worst case running time to show that there are dynamic set occupied slot quadratic function double hashing Flat Discriminative: dictionary operations universe of keys computer memory element in the list hash table with load factor hash table hash function hash function double hashing Keyword Extraction: hash table dynamic set hash function worst case expected number hash table hash function hash table double hashing Hierarchical Generative: dictionary operations worst case time wasted space worst case running time to show that there are collision resolution linear time quadratic function double hashing Hierarchical Discriminative: dictionary operations direct address table computer memory worst case running time hash table with load factor address table hash function quadratic probing double hashing Figure 3: Fragments of tables-of-contents generated by our method and the four baselines along with the corresponding reference. Rouge-1 Rouge-L Rouge-W Full Match HD 0.256 0.249 0.216 13.5 FD 0.241 0.234 0.203 13.1 HG 0.139 0.133 0.117 5.8 FG 0.094 0.090 0.079 4.1 Keyword 0.168 0.168 0.157 6.3 Table 3: Title quality as compared to the reference for the hierarchical discriminative (HD), flat discriminative (FD), hierarchical generative (HG), flat generative (FG) and Keyword models. The improvement given by HD over FD in all three Rouge measures is significant at p ≤0.03 based on the Sign test. better worse equal HD vs. FD 68 32 49 Reference vs. HD 115 13 22 Reference vs. FD 123 7 20 Table 4: Overall pairwise comparisons of the rankings given by the judges. The improvement in title quality given by HD over FD is significant at p ≤0.0002 based on the Sign test. contrast, our method yields 5.5% of the titles as duplicates, as compared to 9% in the reference tableof-contents.8 Second, the fragments show that the two discriminative models — Flat and Hierarchical — have a number of common titles. However, adding global dependencies to rerank titles generated by the local model changes 30% of the titles in the test set. Comparison with reference tables-of-contents Table 3 shows the average ROUGE scores over the ten randomizations for the five automatic methods. The hierarchical discriminative method consistently outperforms the four baselines according to all ROUGE metrics. At the same time, these results also show that only a small ratio of the automatically generated titles are identical to the reference ones. In some cases, the machine-generated titles are very close in meaning to the reference, but are verbalized differently. Examples include pairs such as (“Minimum Spanning Trees”, “Spanning Tree Problem”) and (“Wallace Tree”, “Multiplication Circuit”).9 While measures like ROUGE can capture the similarity in the first pair, they cannot identify semantic proximity 8Titles such as “Analysis” and “Chapter Outline” are repeated multiple times in the text. 9A Wallace Tree is a circuit that multiplies two integers. 550 between the titles in the second pair. Therefore, we supplement the results of this experiment with a manual assessment of title quality as described below. Human assessment We analyze the human ratings by considering pairwise comparisons between the models. 
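The Sign test used for these pairwise comparisons can be sketched as follows; the encoding of the judgments is an assumption of the sketch, and ties are discarded as is usual for the sign test.

```python
from math import comb

def sign_test(judgments):
    """Two-sided sign test on pairwise rankings of two systems A and B.

    judgments: iterable of 'A', 'B', or 'equal', one entry per ranked pair
        (an assumed encoding).
    Returns the win counts and a p-value under the null hypothesis that
    A and B are equally likely to be preferred.
    """
    wins_a = sum(1 for j in judgments if j == 'A')
    wins_b = sum(1 for j in judgments if j == 'B')
    n = wins_a + wins_b                      # ties are ignored
    k = min(wins_a, wins_b)
    # Probability of an outcome at least this lopsided under a fair coin.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / (2 ** n)
    return wins_a, wins_b, min(p, 1.0)

# For instance, 68 wins against 32 with 49 ties (as in the HD vs. FD row)
# would be tested on the 100 non-tied judgments.
```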
Given two models, A and B, three outcomes are possible: A is better than B, B is better than A, or they are of equal quality. The results of the comparison are summarized in Table 4. These results indicate that using hierarchical information yields statistically significant improvement (at p ≤0.0002 based on the Sign test) over a flat counterpart. 8 Conclusion and Future Work This paper presents a method for the automatic generation of a table-of-contents. The key strength of our method lies in its ability to track dependencies between generation decisions across different levels of the tree structure. The results of automatic evaluation and manual assessment confirm the benefits of joint tree learning: our system is consistently ranked higher than non-hierarchical baselines. We also plan to expand our method for the task of slide generation. Like tables-of-contents, slide bullets are organized in a hierarchical fashion and are written in relatively short phrases. From the language viewpoint, however, slides exhibit more variability and complexity than a typical table-ofcontents. To address this challenge, we will explore more powerful generation methods that take into account syntactic information. Acknowledgments The authors acknowledge the support of the National Science Foundation (CAREER grant IIS0448168 and grant IIS-0415865). We would also like to acknowledge the many people who took part in human evaluations. Thanks to Michael Collins, Benjamin Snyder, Igor Malioutov, Jacob Eisenstein, Luke Zettlemoyer, Terry Koo, Erdong Chen, Zoran Dzunic and the anonymous reviewers for helpful comments and suggestions. Any opinions, findings, conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the NSF. References Roxana Angheluta, Rik De Busser, and Marie-Francine Moens. 2002. The use of topic segmentation for automatic summarization. In Proceedings of the ACL-2002 Workshop on Automatic Summarization. Michele Banko, Vibhu O. Mittal, and Michael J. Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the ACL, pages 318– 325. Branimir Boguraev and Mary S. Neff. 2000. Discourse segmentation in aid of document summarization. In Proceedings of the 33rd Hawaii International Conference on System Sciences, pages 3004–3014. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the ACL, pages 111–118. Brooke Cowan, Ivona Kucerova, and Michael Collins. 2006. A discriminative model for tree-to-tree translation. In Proceedings of the EMNLP, pages 232–241. Hal Daum´e and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the ICML, pages 169–176. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: a parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL 03 on Text summarization workshop, pages 1–8. Noemie Elhadad and Kathleen R. McKeown. 2001. Towards generating patient specific summaries of medical articles. In Proceedings of NAACL Workshop on Automatic Summarization, pages 31–39. Marti Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of the ACL, pages 9– 16. Rong Jin and Alexander G. Hauptmann. 2001. Automatic title generation for spoken broadcast news. In Proceedings of the HLT, pages 1–3. Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. 
Computational Linguistics, 28(4):409–445. Nina Wacholder, David K. Evans, and Judith Klavans. 2001. Automatic identification and organization of index terms for interactive browsing. In JCDL, pages 126–134. R. Wang, J. Dunnion, and J. Carthy. 2005. Machine learning approach to augmenting news headline generation. In Proceedings of the IJCNLP. 551
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 49–56, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Domain Adaptation with Active Learning for Word Sense Disambiguation Yee Seng Chan and Hwee Tou Ng Department of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 {chanys, nght}@comp.nus.edu.sg Abstract When a word sense disambiguation (WSD) system is trained on one domain but applied to a different domain, a drop in accuracy is frequently observed. This highlights the importance of domain adaptation for word sense disambiguation. In this paper, we first show that an active learning approach can be successfully used to perform domain adaptation of WSD systems. Then, by using the predominant sense predicted by expectation-maximization (EM) and adopting a count-merging technique, we improve the effectiveness of the original adaptation process achieved by the basic active learning approach. 1 Introduction In natural language, a word often assumes different meanings, and the task of determining the correct meaning, or sense, of a word in different contexts is known as word sense disambiguation (WSD). To date, the best performing systems in WSD use a corpus-based, supervised learning approach. With this approach, one would need to collect a text corpus, in which each ambiguous word occurrence is first tagged with its correct sense to serve as training data. The reliance of supervised WSD systems on annotated corpus raises the important issue of domain dependence. To investigate this, Escudero et al. (2000) and Martinez and Agirre (2000) conducted experiments using the DSO corpus, which contains sentences from two different corpora, namely Brown Corpus (BC) and Wall Street Journal (WSJ). They found that training a WSD system on one part (BC or WSJ) of the DSO corpus, and applying it to the other, can result in an accuracy drop of more than 10%, highlighting the need to perform domain adaptation of WSD systems to new domains. Escudero et al. (2000) pointed out that one of the reasons for the drop in accuracy is the difference in sense priors (i.e., the proportions of the different senses of a word) between BC and WSJ. When the authors assumed they knew the sense priors of each word in BC and WSJ, and adjusted these two datasets such that the proportions of the different senses of each word were the same between BC and WSJ, accuracy improved by 9%. In this paper, we explore domain adaptation of WSD systems, by adding training examples from the new domain as additional training data to a WSD system. To reduce the effort required to adapt a WSD system to a new domain, we employ an active learning strategy (Lewis and Gale, 1994) to select examples to annotate from the new domain of interest. To our knowledge, our work is the first to use active learning for domain adaptation for WSD. A similar work is the recent research by Chen et al. (2006), where active learning was used successfully to reduce the annotation effort for WSD of 5 English verbs using coarse-grained evaluation. In that work, the authors only used active learning to reduce the annotation effort and did not deal with the porting of a WSD system to a new domain. Domain adaptation is necessary when the training and target domains are different. In this paper, 49 we perform domain adaptation for WSD of a set of nouns using fine-grained evaluation. 
The contribution of our work is not only in showing that active learning can be successfully employed to reduce the annotation effort required for domain adaptation in a fine-grained WSD setting. More importantly, our main focus and contribution is in showing how we can improve the effectiveness of a basic active learning approach when it is used for domain adaptation. In particular, we explore the issue of different sense priors across different domains. Using the sense priors estimated by expectation-maximization (EM), the predominant sense in the new domain is predicted. Using this predicted predominant sense and adopting a count-merging technique, we improve the effectiveness of the adaptation process. In the next section, we discuss the choice of corpus and nouns used in our experiments. We then introduce active learning for domain adaptation, followed by count-merging. Next, we describe an EMbased algorithm to estimate the sense priors in the new domain. Performance of domain adaptation using active learning and count-merging is then presented. Next, we show that by using the predominant sense of the target domain as predicted by the EM-based algorithm, we improve the effectiveness of the adaptation process. Our empirical results show that for the set of nouns which have different predominant senses between the training and target domains, we are able to reduce the annotation effort by 71%. 2 Experimental Setting In this section, we discuss the motivations for choosing the particular corpus and the set of nouns to conduct our domain adaptation experiments. 2.1 Choice of Corpus The DSO corpus (Ng and Lee, 1996) contains 192,800 annotated examples for 121 nouns and 70 verbs, drawn from BC and WSJ. While the BC is built as a balanced corpus, containing texts in various categories such as religion, politics, humanities, fiction, etc, the WSJ corpus consists primarily of business and financial news. Exploiting the difference in coverage between these two corpora, Escudero et al. (2000) separated the DSO corpus into its BC and WSJ parts to investigate the domain dependence of several WSD algorithms. Following the setup of (Escudero et al., 2000), we similarly made use of the DSO corpus to perform our experiments on domain adaptation. Among the few currently available manually sense-annotated corpora for WSD, the SEMCOR (SC) corpus (Miller et al., 1994) is the most widely used. SEMCOR is a subset of BC which is senseannotated. Since BC is a balanced corpus, and since performing adaptation from a general corpus to a more specific corpus is a natural scenario, we focus on adapting a WSD system trained on BC to WSJ in this paper. Henceforth, out-of-domain data will refer to BC examples, and in-domain data will refer to WSJ examples. 2.2 Choice of Nouns The WordNet Domains resource (Magnini and Cavaglia, 2000) assigns domain labels to synsets in WordNet. Since the focus of the WSJ corpus is on business and financial news, we can make use of WordNet Domains to select the set of nouns having at least one synset labeled with a business or finance related domain label. This is similar to the approach taken in (Koeling et al., 2005) where they focus on determining the predominant sense of words in corpora drawn from finance versus sports domains.1 Hence, we select the subset of DSO nouns that have at least one synset labeled with any of these domain labels: commerce, enterprise, money, finance, banking, and economy. 
This gives a set of 21 nouns: book, business, center, community, condition, field, figure, house, interest, land, line, money, need, number, order, part, power, society, term, use, value.2 For each noun, all the BC examples are used as out-of-domain training data. One-third of the WSJ examples for each noun are set aside as evaluation 1Note however that the coverage of the WordNet Domains resource is not comprehensive, as about 31% of the synsets are simply labeled with “factotum”, indicating that the synset does not belong to a specific domain. 225 nouns have at least one synset labeled with the listed domain labels. In our experiments, 4 out of these 25 nouns have an accuracy of more than 90% before adaptation (i.e., training on just the BC examples) and accuracy improvement is less than 1% after all the available WSJ adaptation examples are added as additional training data. To obtain a clearer picture of the adaptation process, we discard these 4 nouns, leaving a set of 21 nouns. 50 Dataset No. of MFS No. of No. of senses acc. training adaptation BC WSJ (%) examples examples 21 nouns 6.7 6.8 61.1 310 406 9 nouns 7.9 8.6 65.8 276 416 Table 1: The average number of senses in BC and WSJ, average MFS accuracy, average number of BC training, and WSJ adaptation examples per noun. data, and the rest of the WSJ examples are designated as in-domain adaptation data. The row 21 nouns in Table 1 shows some information about these 21 nouns. For instance, these nouns have an average of 6.7 senses in BC and 6.8 senses in WSJ. This is slightly higher than the 5.8 senses per verb in (Chen et al., 2006), where the experiments were conducted using coarse-grained evaluation. Assuming we have access to an “oracle” which determines the predominant sense, or most frequent sense (MFS), of each noun in our WSJ test data perfectly, and we assign this most frequent sense to each noun in the test data, we will have achieved an accuracy of 61.1% as shown in the column MFS accuracy of Table 1. Finally, we note that we have an average of 310 BC training examples and 406 WSJ adaptation examples per noun. 3 Active Learning For our experiments, we use naive Bayes as the learning algorithm. The knowledge sources we use include parts-of-speech, local collocations, and surrounding words. These knowledge sources were effectively used to build a state-of-the-art WSD program in one of our prior work (Lee and Ng, 2002). In performing WSD with a naive Bayes classifier, the sense s assigned to an example with features f1, . . . , fn is chosen so as to maximize: p(s) n Y j=1 p(fj|s) In our domain adaptation study, we start with a WSD system built using training examples drawn from BC. We then investigate the utility of adding additional in-domain training data from WSJ. In the baseline approach, the additional WSJ examples are randomly selected. With active learning (Lewis and Gale, 1994), we use uncertainty sampling as shown DT ←the set of BC training examples DA ←the set of untagged WSJ adaptation examples Γ ←WSD system trained on DT repeat pmin ←∞ for each d ∈DA do bs ←word sense prediction for d using Γ p ←confidence of prediction bs if p < pmin then pmin ←p, dmin ←d end end DA ←DA −dmin provide correct sense s for dmin and add dmin to DT Γ ←WSD system trained on new DT end Figure 1: Active learning in Figure 1. In each iteration, we train a WSD system on the available training data and apply it on the WSJ adaptation examples. 
Among these WSJ examples, the example predicted with the lowest confidence is selected and removed from the adaptation data. The correct label is then supplied for this example and it is added to the training data. Note that in the experiments reported in this paper, all the adaptation examples are already preannotated before the experiments start, since all the WSJ adaptation examples come from the DSO corpus which have already been sense-annotated. Hence, the annotation of an example needed during each adaptation iteration is simulated by performing a lookup without any manual annotation. 4 Count-merging We also employ a technique known as countmerging in our domain adaptation study. Countmerging assigns different weights to different examples to better reflect their relative importance. Roark and Bacchiani (2003) showed that weighted count-merging is a special case of maximum a posteriori (MAP) estimation, and successfully used it for probabilistic context-free grammar domain adaptation (Roark and Bacchiani, 2003) and language model adaptation (Bacchiani and Roark, 2003). Count-merging can be regarded as scaling of counts obtained from different data sets. We let ec denote the counts from out-of-domain training data, ¯c denote the counts from in-domain adaptation data, and bp denote the probability estimate by 51 count-merging. We can scale the out-of-domain and in-domain counts with different factors, or just use a single weight parameter β: bp(fj|si) = ec(fj, si) + β¯c(fj, si) ec(si) + β¯c(si) (1) Similarly, bp(si) = ec(si) + β¯c(si) ec + β¯c (2) Obtaining an optimum value for β is not the focus of this work. Instead, we are interested to see if assigning a higher weight to the in-domain WSJ adaptation examples, as compared to the out-of-domain BC examples, will improve the adaptation process. Hence, we just use a β value of 3 in our experiments involving count-merging. 5 Estimating Sense Priors In this section, we describe an EM-based algorithm that was introduced by Saerens et al. (2002), which can be used to estimate the sense priors, or a priori probabilities of the different senses in a new dataset. We have recently shown that this algorithm is effective in estimating the sense priors of a set of nouns (Chan and Ng, 2005). Most of this section is based on (Saerens et al., 2002). Assume we have a set of labeled data DL with n classes and a set of N independent instances (x1, . . . , xN) from a new data set. The likelihood of these N instances can be defined as: L(x1, . . . , xN) = N Y k=1 p(xk) = N Y k=1 " n X i=1 p(xk, ωi) # = N Y k=1 " n X i=1 p(xk|ωi)p(ωi) # (3) Assuming the within-class densities p(xk|ωi), i.e., the probabilities of observing xk given the class ωi, do not change from the training set DL to the new data set, we can define: p(xk|ωi) = pL(xk|ωi). To determine the a priori probability estimates bp(ωi) of the new data set that will maximize the likelihood of (3) with respect to p(ωi), we can apply the iterative procedure of the EM algorithm. In effect, through maximizing the likelihood of (3), we obtain the a priori probability estimates as a by-product. Let us now define some notations. When we apply a classifier trained on DL on an instance xk drawn from the new data set DU, we get bpL(ωi|xk), which we define as the probability of instance xk being classified as class ωi by the classifier trained on DL. Further, let us define bpL(ωi) as the a priori probability of class ωi in DL. This can be estimated by the class frequency of ωi in DL. 
We also define bp(s)(ωi) and bp(s)(ωi|xk) as estimates of the new a priori and a posteriori probabilities at step s of the iterative EM procedure. Assuming we initialize bp(0)(ωi) = bpL(ωi), then for each instance xk in DU and each class ωi, the EM algorithm provides the following iterative steps: bp(s)(ωi|xk) = bpL(ωi|xk) bp(s)(ωi) bpL(ωi) Pn j=1 bpL(ωj|xk) bp(s)(ωj) bpL(ωj) (4) bp(s+1)(ωi) = 1 N N X k=1 bp(s)(ωi|xk) (5) where Equation (4) represents the expectation Estep, Equation (5) represents the maximization Mstep, and N represents the number of instances in DU. Note that the probabilities bpL(ωi|xk) and bpL(ωi) in Equation (4) will stay the same throughout the iterations for each particular instance xk and class ωi. The new a posteriori probabilities bp(s)(ωi|xk) at step s in Equation (4) are simply the a posteriori probabilities in the conditions of the labeled data, bpL(ωi|xk), weighted by the ratio of the new priors bp(s)(ωi) to the old priors bpL(ωi). The denominator in Equation (4) is simply a normalizing factor. The a posteriori bp(s)(ωi|xk) and a priori probabilities bp(s)(ωi) are re-estimated sequentially during each iteration s for each new instance xk and each class ωi, until the convergence of the estimated probabilities bp(s)(ωi), which will be our estimated sense priors. This iterative procedure will increase the likelihood of (3) at each step. 6 Experimental Results For each adaptation experiment, we start off with a classifier built from an initial training set consisting 52 52 54 56 58 60 62 64 66 68 70 72 74 76 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 WSD Accuracy (%) Percentage of adaptation examples added (%) a-c a r a-truePrior Figure 2: Adaptation process for all 21 nouns. of the BC training examples. At each adaptation iteration, WSJ adaptation examples are selected one at a time and added to the training set. The adaptation process continues until all the adaptation examples are added. Classification accuracies averaged over 3 random trials on the WSJ test examples at each iteration are calculated. Since the number of WSJ adaptation examples differs for each of the 21 nouns, the learning curves we will show in the various figures are plotted in terms of different percentage of adaptation examples added, varying from 0 to 100 percent in steps of 1 percent. To obtain these curves, we first calculate for each noun, the WSD accuracy when different percentages of adaptation examples are added. Then, for each percentage, we calculate the macro-average WSD accuracy over all the nouns to obtain a single learning curve representing all the nouns. 6.1 Utility of Active Learning and Count-merging In Figure 2, the curve r represents the adaptation process of the baseline approach, where additional WSJ examples are randomly selected during each adaptation iteration. The adaptation process using active learning is represented by the curve a, while applying count-merging with active learning is represented by the curve a-c. Note that random selection r achieves its highest WSD accuracy after all the adaptation examples are added. To reach the same accuracy, the a approach requires the addition of only 57% of adaptation examples. The a-c approach is even more effective and requires only 42% of adaptation examples. This demonstrates the effectiveness of count-merging in further reducing the annotation effort, when compared to using only active learning. 
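As a concrete illustration of Equations (4) and (5), the following is a minimal NumPy sketch, assuming the posteriors p_L(w_i|x_k) produced by the classifier on the new data and the training-set priors p_L(w_i) are available as arrays; the names are illustrative.

import numpy as np

def estimate_priors(post_L, prior_L, max_iter=1000, tol=1e-6):
    # post_L:  (N, n) array, p_L(w_i | x_k) for each instance x_k of the new data.
    # prior_L: (n,) array, p_L(w_i) estimated from the labeled data D_L.
    prior = prior_L.copy()                              # initialize p^(0)(w_i) = p_L(w_i)
    for _ in range(max_iter):
        # E-step (Eq. 4): re-weight the posteriors by the ratio of new to old priors
        weighted = post_L * (prior / prior_L)
        post = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step (Eq. 5): the new priors are the mean of the adjusted posteriors
        new_prior = post.mean(axis=0)
        if np.abs(new_prior - prior).max() < tol:
            return new_prior
        prior = new_prior
    return prior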
To reach the MFS accuracy of 61.1% as shown earlier in Table 1, a-c requires just 4% of the adaptation examples. To determine the utility of the out-of-domain BC examples, we have also conducted three active learning runs using only WSJ adaptation examples. Using 10%, 20%, and 30% of WSJ adaptation examples to build a classifier, the accuracy of these runs is lower than the active learning a curve and paired t-tests show that the difference is statistically significant at the level of significance 0.01. 6.2 Using Sense Priors Information As mentioned in section 1, research in (Escudero et al., 2000) noted an improvement in accuracy when they adjusted the BC and WSJ datasets such that the proportions of the different senses of each word were the same between BC and WSJ. We can similarly choose BC examples such that the sense priors in the BC training data adhere to the sense priors in the WSJ evaluation data. To gauge the effectiveness of this approach, we first assume that we know the true sense priors of each noun in the WSJ evaluation data. We then gather BC training examples for a noun to adhere as much as possible to the sense priors in WSJ. Assume sense si is the predominant sense in the WSJ evaluation data, si has a sense prior of pi in the WSJ data and has ni BC training examples. Taking ni examples to represent a sense prior of pi, we proportionally determine the number of BC examples to gather for other senses s according to their respective sense priors in WSJ. If there are insufficient training examples in BC for some sense s, whatever available examples of s are used. This approach gives an average of 195 BC training examples for the 21 nouns. With this new set of training examples, we perform adaptation using active learning and obtain the a-truePrior curve in Figure 2. The a-truePrior curve shows that by ensuring that the sense priors in the BC training data adhere as much as possible to the sense priors in the WSJ data, we start off with a higher WSD accuracy. However, the performance is no different from the a 53 curve after 35% of adaptation examples are added. A possible reason might be that by strictly adhering to the sense priors in the WSJ data, we have removed too many BC training examples, from an average of 310 examples per noun as shown in Table 1, to an average of 195 examples. 6.3 Using Predominant Sense Information Research by McCarthy et al. (2004) and Koeling et al. (2005) pointed out that a change of predominant sense is often indicative of a change in domain. For example, the predominant sense of the noun interest in the BC part of the DSO corpus has the meaning “a sense of concern with and curiosity about someone or something”. In the WSJ part of the DSO corpus, the noun interest has a different predominant sense with the meaning “a fixed charge for borrowing money”, which is reflective of the business and finance focus of the WSJ corpus. Instead of restricting the BC training data to adhere strictly to the sense priors in WSJ, another alternative is just to ensure that the predominant sense in BC is the same as that of WSJ. Out of the 21 nouns, 12 nouns have the same predominant sense in both BC and WSJ. The remaining 9 nouns that have different predominant senses in the BC and WSJ data are: center, field, figure, interest, line, need, order, term, value. The row 9 nouns in Table 1 gives some information for this set of 9 nouns. 
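For reference, the prior-matching selection described in Section 6.2 above can be sketched as follows, assuming the true WSJ sense priors and the per-sense BC example counts are known; the helper name is illustrative.

def match_sense_priors(wsj_priors, bc_counts):
    # wsj_priors: sense -> prior in the WSJ evaluation data.
    # bc_counts:  sense -> number of available BC training examples.
    s_pred = max(wsj_priors, key=wsj_priors.get)        # predominant WSJ sense s_i
    n_pred = bc_counts.get(s_pred, 0)                   # its n_i BC examples
    target = {}
    for s, p in wsj_priors.items():
        # n_pred examples stand for the prior of s_pred; scale the other senses proportionally
        wanted = round(n_pred * p / wsj_priors[s_pred])
        target[s] = min(wanted, bc_counts.get(s, 0))    # use whatever is available
    return target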
To gauge the utility of this approach, we conduct experiments on these nouns by first assuming that we know the true predominant sense in the WSJ data. Assume that the WSJ predominant sense of a noun is si and si has ni examples in the BC data. We then gather BC examples for a noun to adhere to this WSJ predominant sense, by gathering only up to ni BC examples for each sense of this noun. This approach gives an average of 190 BC examples for the 9 nouns. This is higher than an average of 83 BC examples for these 9 nouns if BC examples are selected to follow the sense priors of WSJ evaluation data as described in the last subsection 6.2. For these 9 nouns, the average KL-divergence between the sense priors of the original BC data and WSJ evaluation data is 0.81. This drops to 0.51 after ensuring that the predominant sense in BC is the same as that of WSJ, confirming that the sense priors in the newly gathered BC data more closely follow 44 46 48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 WSD Accuracy (%) Percentage of adaptation examples added (%) a-truePrior a-truePred a Figure 3: Using true predominant sense for the 9 nouns. the sense priors in WSJ. Using this new set of training examples, we perform domain adaptation using active learning to obtain the curve a-truePred in Figure 3. For comparison, we also plot the curves a and a-truePrior for this set of 9 nouns in Figure 3. Results in Figure 3 show that a-truePred starts off at a higher accuracy and performs consistently better than the a curve. In contrast, though a-truePrior starts at a high accuracy, its performance is lower than a-truePred and a after 50% of adaptation examples are added. The approach represented by atruePred is a compromise between ensuring that the sense priors in the training data follow as closely as possible the sense priors in the evaluation data, while retaining enough training examples. These results highlight the importance of striking a balance between these two goals. In (McCarthy et al., 2004), a method was presented to determine the predominant sense of a word in a corpus. However, in (Chan and Ng, 2005), we showed that in a supervised setting where one has access to some annotated training data, the EMbased method in section 5 estimates the sense priors more effectively than the method described in (McCarthy et al., 2004). Hence, we use the EM-based algorithm to estimate the sense priors in the WSJ evaluation data for each of the 21 nouns. The sense with the highest estimated sense prior is taken as the predominant sense of the noun. For the set of 12 nouns where the predominant 54 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 WSD Accuracy (%) Percentage of adaptation examples added (%) a-c-estPred a-truePred a-estPred a r Figure 4: Using estimated predominant sense for the 9 nouns. Accuracy % adaptation examples needed r a a-estPred a-c-estPred 50%: 61.1 8 7 (0.88) 5 (0.63) 4 (0.50) 60%: 64.5 10 9 (0.90) 7 (0.70) 5 (0.50) 70%: 68.0 15 12 (0.80) 9 (0.60) 6 (0.40) 80%: 71.5 23 16 (0.70) 12 (0.52) 9 (0.39) 90%: 74.9 46 24 (0.52) 21 (0.46) 15 (0.33) 100%: 78.4 100 51 (0.51) 38 (0.38) 29 (0.29) Table 2: Annotation savings and percentage of adaptation examples needed to reach various accuracies. 
sense remains unchanged between BC and WSJ, the EM-based algorithm is able to predict that the predominant sense remains unchanged for all 12 nouns. Hence, we will focus on the 9 nouns which have different predominant senses between BC and WSJ for our remaining adaptation experiments. For these 9 nouns, the EM-based algorithm correctly predicts the WSJ predominant sense for 6 nouns. Hence, the algorithm is able to predict the correct predominant sense for 18 out of 21 nouns overall, representing an accuracy of 86%. Figure 4 plots the curve a-estPred, which is similar to a-truePred, except that the predominant sense is now estimated by the EM-based algorithm. Employing count-merging with a-estPred produces the curve a-c-estPred. For comparison, the curves r, a, and a-truePred are also plotted. The results show that a-estPred performs consistently better than a, and a-c-estPred in turn performs better than aestPred. Hence, employing the predicted predominant sense and count-merging, we further improve the effectiveness of the active learning-based adaptation process. With reference to Figure 4, the WSD accuracies of the r and a curves before and after adaptation are 43.7% and 78.4% respectively. Starting from the mid-point 61.1% accuracy, which represents a 50% accuracy increase from 43.7%, we show in Table 2 the percentage of adaptation examples required by the various approaches to reach certain levels of WSD accuracies. For instance, to reach the final accuracy of 78.4%, r, a, a-estPred, and ac-estPred require the addition of 100%, 51%, 38%, and 29% adaptation examples respectively. The numbers in brackets give the ratio of adaptation examples needed by a, a-estPred, and a-c-estPred versus random selection r. For instance, to reach a WSD accuracy of 78.4%, a-c-estPred needs only 29% adaptation examples, representing a ratio of 0.29 and an annotation saving of 71%. Note that this represents a more effective adaptation process than the basic active learning a approach, which requires 51% adaptation examples. Hence, besides showing that active learning can be used to reduce the annotation effort required for domain adaptation, we have further improved the effectiveness of the adaptation process by using the predicted predominant sense of the new domain and adopting the count-merging technique. 7 Related Work In applying active learning for domain adaptation, Zhang et al. (2003) presented work on sentence boundary detection using generalized Winnow, while Tur et al. (2004) performed language model adaptation of automatic speech recognition systems. In both papers, out-of-domain and indomain data were simply mixed together without MAP estimation such as count-merging. For WSD, Fujii et al. (1998) used selective sampling for a Japanese language WSD system, Chen et al. (2006) used active learning for 5 verbs using coarse-grained evaluation, and H. T. Dang (2004) employed active learning for another set of 5 verbs. However, their work only investigated the use of active learning to reduce the annotation effort necessary for WSD, but 55 did not deal with the porting of a WSD system to a different domain. Escudero et al. (2000) used the DSO corpus to highlight the importance of the issue of domain dependence of WSD systems, but did not propose methods such as active learning or countmerging to address the specific problem of how to perform domain adaptation for WSD. 8 Conclusion Domain adaptation is important to ensure the general applicability of WSD systems across different domains. 
In this paper, we have shown that active learning is effective in reducing the annotation effort required in porting a WSD system to a new domain. Also, we have successfully used an EM-based algorithm to detect a change in predominant sense between the training and new domain. With this information on the predominant sense of the new domain and incorporating count-merging, we have shown that we are able to improve the effectiveness of the original adaptation process achieved by the basic active learning approach. Acknowledgement Yee Seng Chan is supported by a Singapore Millennium Foundation Scholarship (ref no. SMF-20041076). References M. Bacchiani and B. Roark. 2003. Unsupervised language model adaptation. In Proc. of IEEE ICASSP03. Y. S. Chan and H. T. Ng. 2005. Word sense disambiguation with distribution estimation. In Proc. of IJCAI05. J. Chen, A. Schein, L. Ungar, and M. Palmer. 2006. An empirical study of the behavior of active learning for word sense disambiguation. In Proc. of HLT/NAACL06. H. T. Dang. 2004. Investigations into the Role of Lexical Semantics in Word Sense Disambiguation. PhD dissertation, University of Pennsylvania. G. Escudero, L. Marquez, and G. Rigau. 2000. An empirical study of the domain dependence of supervised word sense disambiguation systems. In Proc. of EMNLP/VLC00. A. Fujii, K. Inui, T. Tokunaga, and H. Tanaka. 1998. Selective sampling for example-based word sense disambiguation. Computational Linguistics, 24(4). R. Koeling, D. McCarthy, and J. Carroll. 2005. Domainspecific sense distributions and predominant sense acquisition. In Proc. of Joint HLT-EMNLP05. Y. K. Lee and H. T. Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proc. of EMNLP02. D. D. Lewis and W. A. Gale. 1994. A sequential algorithm for training text classifiers. In Proc. of SIGIR94. B. Magnini and G. Cavaglia. 2000. Integrating subject field codes into WordNet. In Proc. of LREC-2000. D. Martinez and E. Agirre. 2000. One sense per collocation and genre/topic variations. In Proc. of EMNLP/VLC00. D. McCarthy, R. Koeling, J. Weeds, and J. Carroll. 2004. Finding predominant word senses in untagged text. In Proc. of ACL04. G. A. Miller, M. Chodorow, S. Landes, C. Leacock, and R. G. Thomas. 1994. Using a semantic concordance for sense identification. In Proc. of HLT94 Workshop on Human Language Technology. H. T. Ng and H. B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proc. of ACL96. B. Roark and M. Bacchiani. 2003. Supervised and unsupervised PCFG adaptation to novel domains. In Proc. of HLT-NAACL03. M. Saerens, P. Latinne, and C. Decaestecker. 2002. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1). D. H. Tur, G. Tur, M. Rahim, and G. Riccardi. 2004. Unsupervised and active learning in automatic speech recognition for call classification. In Proc. of IEEE ICASSP04. T. Zhang, F. Damerau, and D. Johnson. 2003. Updating an NLP system to fit new domains: an empirical study on the sentence segmentation problem. In Proc. of CONLL03. 56
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 552–559, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Towards an Iterative Reinforcement Approach for Simultaneous Document Summarization and Keyword Extraction Xiaojun Wan Jianwu Yang Jianguo Xiao Institute of Computer Science and Technology Peking University, Beijing 100871, China {wanxiaojun,yangjianwu,xiaojianguo}@icst.pku.edu.cn Abstract Though both document summarization and keyword extraction aim to extract concise representations from documents, these two tasks have usually been investigated independently. This paper proposes a novel iterative reinforcement approach to simultaneously extracting summary and keywords from single document under the assumption that the summary and keywords of a document can be mutually boosted. The approach can naturally make full use of the reinforcement between sentences and keywords by fusing three kinds of relationships between sentences and words, either homogeneous or heterogeneous. Experimental results show the effectiveness of the proposed approach for both tasks. The corpus-based approach is validated to work almost as well as the knowledge-based approach for computing word semantics. 1 Introduction Text summarization is the process of creating a compressed version of a given document that delivers the main topic of the document. Keyword extraction is the process of extracting a few salient words (or phrases) from a given text and using the words to represent the text. The two tasks are similar in essence because they both aim to extract concise representations for documents. Automatic text summarization and keyword extraction have drawn much attention for a long time because they both are very important for many text applications, including document retrieval, document clustering, etc. For example, keywords of a document can be used for document indexing and thus benefit to improve the performance of document retrieval, and document summary can help to facilitate users to browse the search results and improve users’ search experience. Text summaries and keywords can be either query-relevant or generic. Generic summary and keyword should reflect the main topics of the document without any additional clues and prior knowledge. In this paper, we focus on generic document summarization and keyword extraction for single documents. Document summarization and keyword extraction have been widely explored in the natural language processing and information retrieval communities. A series of workshops and conferences on automatic text summarization (e.g. SUMMAC, DUC and NTCIR) have advanced the technology and produced a couple of experimental online systems. In recent years, graph-based ranking algorithms have been successfully used for document summarization (Mihalcea and Tarau, 2004, 2005; ErKan and Radev, 2004) and keyword extraction (Mihalcea and Tarau, 2004). Such algorithms make use of “voting” or “recommendations” between sentences (or words) to extract sentences (or keywords). Though the two tasks essentially share much in common, most algorithms have been developed particularly for either document summarization or keyword extraction. Zha (2002) proposes a method for simultaneous keyphrase extraction and text summarization by using only the heterogeneous sentence-to-word relationships. Inspired by this, we aim to take into account all the three kinds of relationships among sentences and words (i.e. 
the homogeneous relationships between words, the homogeneous relationships between sentences, and the heterogeneous relationships between words and sentences) in 552 a unified framework for both document summarization and keyword extraction. The importance of a sentence (word) is determined by both the importance of related sentences (words) and the importance of related words (sentences). The proposed approach can be considered as a generalized form of previous graph-based ranking algorithms and Zha’s work (Zha, 2002). In this study, we propose an iterative reinforcement approach to realize the above idea. The proposed approach is evaluated on the DUC2002 dataset and the results demonstrate its effectiveness for both document summarization and keyword extraction. Both knowledge-based approach and corpus-based approach have been investigated to compute word semantics and they both perform very well. The rest of this paper is organized as follows: Section 2 introduces related works. The details of the proposed approach are described in Section 3. Section 4 presents and discusses the evaluation results. Lastly we conclude our paper in Section 5. 2 Related Works 2.1 Document Summarization Generally speaking, single document summarization methods can be either extraction-based or abstraction-based and we focus on extraction-based methods in this study. Extraction-based methods usually assign a saliency score to each sentence and then rank the sentences in the document. The scores are usually computed based on a combination of statistical and linguistic features, including term frequency, sentence position, cue words, stigma words, topic signature (Hovy and Lin, 1997; Lin and Hovy, 2000), etc. Machine learning methods have also been employed to extract sentences, including unsupervised methods (Nomoto and Matsumoto, 2001) and supervised methods (Kupiec et al., 1995; Conroy and O’Leary, 2001; Amini and Gallinari, 2002; Shen et al., 2007). Other methods include maximal marginal relevance (MMR) (Carbonell and Goldstein, 1998), latent semantic analysis (LSA) (Gong and Liu, 2001). In Zha (2002), the mutual reinforcement principle is employed to iteratively extract key phrases and sentences from a document. Most recently, graph-based ranking methods, including TextRank ((Mihalcea and Tarau, 2004, 2005) and LexPageRank (ErKan and Radev, 2004) have been proposed for document summarization. Similar to Kleinberg’s HITS algorithm (Kleinberg, 1999) or Google’s PageRank (Brin and Page, 1998), these methods first build a graph based on the similarity between sentences in a document and then the importance of a sentence is determined by taking into account global information on the graph recursively, rather than relying only on local sentence-specific information. 2.2 Keyword Extraction Keyword (or keyphrase) extraction usually involves assigning a saliency score to each candidate keyword by considering various features. Krulwich and Burkey (1996) use heuristics to extract keyphrases from a document. The heuristics are based on syntactic clues, such as the use of italics, the presence of phrases in section headers, and the use of acronyms. Muñoz (1996) uses an unsupervised learning algorithm to discover two-word keyphrases. The algorithm is based on Adaptive Resonance Theory (ART) neural networks. Steier and Belew (1993) use the mutual information statistics to discover two-word keyphrases. Supervised machine learning algorithms have been proposed to classify a candidate phrase into either keyphrase or not. 
GenEx (Turney, 2000) and Kea (Frank et al., 1999; Witten et al., 1999) are two typical systems, and the most important features for classifying a candidate phrase are the frequency and location of the phrase in the document. More linguistic knowledge (such as syntactic features) has been explored by Hulth (2003). More recently, Mihalcea and Tarau (2004) propose the TextRank model to rank keywords based on the co-occurrence links between words. 3 Iterative Reinforcement Approach 3.1 Overview The proposed approach is intuitively based on the following assumptions: Assumption 1: A sentence should be salient if it is heavily linked with other salient sentences, and a word should be salient if it is heavily linked with other salient words. Assumption 2: A sentence should be salient if it contains many salient words, and a word should be salient if it appears in many salient sentences. The first assumption is similar to PageRank which makes use of mutual “recommendations” 553 between homogeneous objects to rank objects. The second assumption is similar to HITS if words and sentences are considered as authorities and hubs respectively. In other words, the proposed approach aims to fuse the ideas of PageRank and HITS in a unified framework. In more detail, given the heterogeneous data points of sentences and words, the following three kinds of relationships are fused in the proposed approach: SS-Relationship: It reflects the homogeneous relationships between sentences, usually computed by their content similarity. WW-Relationship: It reflects the homogeneous relationships between words, usually computed by knowledge-based approach or corpus-based approach. SW-Relationship: It reflects the heterogeneous relationships between sentences and words, usually computed as the relative importance of a word in a sentence. Figure 1 gives an illustration of the relationships. Figure 1. Illustration of the Relationships The proposed approach first builds three graphs to reflect the above relationships respectively, and then iteratively computes the saliency scores of the sentences and words based on the graphs. Finally, the algorithm converges and each sentence or word gets its saliency score. The sentences with high saliency scores are chosen into the summary, and the words with high saliency scores are combined to produce the keywords. 3.2 Graph Building 3.2.1 Sentence-to-Sentence Graph ( SS-Graph) Given the sentence collection S={si | 1≤i≤m} of a document, if each sentence is considered as a node, the sentence collection can be modeled as an undirected graph by generating an edge between two sentences if their content similarity exceeds 0, i.e. an undirected link between si and sj (i≠j) is constructed and the associated weight is their content similarity. Thus, we construct an undirected graph GSS to reflect the homogeneous relationship between sentences. The content similarity between two sentences is computed with the cosine measure. We use an adjacency matrix U to describe GSS with each entry corresponding to the weight of a link in the graph. U= [Uij]m×m is defined as follows:      ≠ ´ ⋅ = otherwise , j , if i s s s s U j i j i ij 0 r r r r (1) where is and jsr are the corresponding term vectors of sentences si and sj respectively. The weight associated with term t is calculated with tft.isft, where tft is the frequency of term t in the sentence and isft is the inverse sentence frequency of term t, i.e. 
1+log(N/nt), where N is the total number of sentences and nt is the number of sentences containing term t in a background corpus. Note that other measures (e.g. Jaccard, Dice, Overlap, etc.) can also be explored to compute the content similarity between sentences, and we simply choose the cosine measure in this study. Then U is normalized to U~ as follows to make the sum of each row equal to 1:    ≠ = ∑ ∑ = = erwise , oth U , if U U U m j ij m j ij ij ij 0 0 ~ 1 1 (2) 3.2.2 Word-to-Word Graph ( WW-Graph) Given the word collection T={tj|1≤j≤n } of a document 1, the semantic similarity between any two words ti and tj can be computed using approaches that are either knowledge-based or corpus-based (Mihalcea et al., 2006). Knowledge-based measures of word semantic similarity try to quantify the degree to which two words are semantically related using information drawn from semantic networks. WordNet (Fellbaum, 1998) is a lexical database where each 1 The stopwords defined in the Smart system have been removed from the collection. sentence word SS WW SW 554 unique meaning of a word is represented by a synonym set or synset. Each synset has a gloss that defines the concept that it represents. Synsets are connected to each other through explicit semantic relations that are defined in WordNet. Many approaches have been proposed to measure semantic relatedness based on WordNet. The measures vary from simple edge-counting to attempt to factor in peculiarities of the network structure by considering link direction, relative path, and density, such as vector, lesk, hso, lch, wup, path, res, lin and jcn (Pedersen et al., 2004). For example, “cat” and “dog” has higher semantic similarity than “cat” and “computer”. In this study, we implement the vector measure to efficiently evaluate the similarities of a large number of word pairs. The vector measure (Patwardhan, 2003) creates a co– occurrence matrix from a corpus made up of the WordNet glosses. Each content word used in a WordNet gloss has an associated context vector. Each gloss is represented by a gloss vector that is the average of all the context vectors of the words found in the gloss. Relatedness between concepts is measured by finding the cosine between a pair of gloss vectors. Corpus-based measures of word semantic similarity try to identify the degree of similarity between words using information exclusively derived from large corpora. Such measures as mutual information (Turney 2001), latent semantic analysis (Landauer et al., 1998), log-likelihood ratio (Dunning, 1993) have been proposed to evaluate word semantic similarity based on the co-occurrence information on a large corpus. In this study, we simply choose the mutual information to compute the semantic similarity between word ti and tj as follows: ) ( ) ( ) ( log ) ( j i j i j i t p t p ,t t p N ,t t sim ´ ´ = (3) which indicates the degree of statistical dependence between ti and tj. Here, N is the total number of words in the corpus and p(ti) and p(tj) are respectively the probabilities of the occurrences of ti and tj, i.e. count(ti)/N and count(tj)/N, where count(ti) and count(tj) are the frequencies of ti and tj. p(ti, tj) is the probability of the co-occurrence of ti and tj within a window with a predefined size k, i.e. count(ti, tj)/N, where count(ti, tj) is the number of the times ti and tj co-occur within the window. 
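A short sketch of the mutual information measure in Equation (3), computed over a tokenized background corpus with a co-occurrence window of size k; the window handling below is one straightforward interpretation and the helper name is illustrative.

import math
from collections import Counter

def build_mi_similarity(tokens, k=5):
    # tokens: the background corpus as a list of words (stopwords removed).
    n = len(tokens)
    count = Counter(tokens)                              # count(t)
    co = Counter()                                       # count(t_i, t_j) within the window
    for i, w in enumerate(tokens):
        for v in tokens[i + 1:i + k]:                    # pairs inside a window of size k
            co[frozenset((w, v))] += 1

    def sim(ti, tj):
        c_ij = co[frozenset((ti, tj))]
        if c_ij == 0:
            return 0.0
        # log( N * p(ti, tj) / (p(ti) * p(tj)) ) with p(.) = count(.)/N
        return math.log(n * c_ij / (count[ti] * count[tj]))
    return sim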
Similar to the SS-Graph, we can build an undirected graph GWW to reflect the homogeneous relationship between words, in which each node corresponds to a word and the weight associated with the edge between any different word ti and tj is computed by either the WordNet-based vector measure or the corpus-based mutual information measure. We use an adjacency matrix V to describe GWW with each entry corresponding to the weight of a link in the graph. V= [Vij]n×n, where Vij =sim(ti, tj) if i≠j and Vij=0 if i=j. Then V is similarly normalized to V~ to make the sum of each row equal to 1. 3.2.3 Sentence-to-Word Graph ( SW-Graph) Given the sentence collection S={si | 1≤i≤m} and the word collection T={tj|1≤j≤n } of a document, we can build a weighted bipartite graph GSW from S and T in the following way: if word tj appears in sentence si, we then create an edge between si and tj. A nonnegative weight aff(si,tj) is specified on the edge, which is proportional to the importance of word tj in sentence si, computed as follows: ∑ Î ´ ´ = i j j s t t t t t j i isf tf isf tf ,t s aff ) ( (4) where t represents a unique term in si and tft, isft are respectively the term frequency in the sentence and the inverse sentence frequency. We use an adjacency (affinity) matrix W=[Wij]m×n to describe GSW with each entry Wij corresponding to aff(si,tj). Similarly, W is normalized to W~ to make the sum of each row equal to 1. In addition, we normalize the transpose of W, i.e. WT, to Wˆ to make the sum of each row in WT equal to 1. 3.3 Reinforcement Algorithm We use two column vectors u=[u(si)]m×1 and v =[v(tj)]n×1 to denote the saliency scores of the sentences and words in the specified document. The assumptions introduced in Section 3.1 can be rendered as follows: ∑ µ j j ji i s u U s u ) ( ~ ) ( (5) ∑ µ i i ij j t v V t v ) ( ~ ) ( (6) ∑ µ j j ji i t v W s u ) ( ˆ ) ( (7) 555 ∑ µ i i ij j s u W t v ) ( ~ ) ( (8) After fusing the above equations, we can obtain the following iterative forms: ∑ ∑ = = + = n j j ji m j j ji i t v W β s u U α s u 1 1 ) ( ˆ ) ( ~ ) ( (9) ∑ ∑ = = + = m i i ij n i i ij j s u W β t v V α t v 1 1 ) ( ~ ) ( ~ ) ( (10) And the matrix form is: v W u U u T T β α ˆ ~ + = (11) u W v V v T T β α ~ ~ + = (12) where α and β specify the relative contributions to the final saliency scores from the homogeneous nodes and the heterogeneous nodes and we have α+β=1. In order to guarantee the convergence of the iterative form, u and v are normalized after each iteration. For numerical computation of the saliency scores, the initial scores of all sentences and words are set to 1 and the following two steps are alternated until convergence, 1. Compute and normalize the scores of sentences: ) (nT ) (nT (n) β α 1 1 ˆ ~ v W u U u + = , 1 (n) (n) (n) / u u u = 2. Compute and normalize the scores of words: ) (nT ) (nT (n) β α 1 1 ~ ~ u W v V v + = , 1 (n) (n) (n) / v v v = where u(n) and v(n) denote the vectors computed at the n-th iteration. Usually the convergence of the iteration algorithm is achieved when the difference between the scores computed at two successive iterations for any sentences and words falls below a given threshold (0.0001 in this study). 4 Empirical Evaluation 4.1 Summarization Evaluation 4.1.1 Evaluation Setup We used task 1 of DUC2002 (DUC, 2002) for evaluation. The task aimed to evaluate generic summaries with a length of approximately 100 words or less. DUC2002 provided 567 English news articles collected from TREC-9 for singledocument summarization task. 
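Before turning to the evaluation details, the alternating updates of Section 3.3 can be sketched as follows, assuming the normalized matrices U~ (m-by-m), V~ (n-by-n), W~ (m-by-n) and W^ (n-by-m) have already been built as NumPy arrays; the parameter and function names are illustrative.

import numpy as np

def reinforcement_scores(U, V, W_sw, W_ws, alpha=0.5, beta=0.5, tol=1e-4, max_iter=1000):
    # U: normalized sentence-to-sentence matrix U~;  V: normalized word-to-word matrix V~.
    # W_sw: normalized sentence-to-word matrix W~;   W_ws: normalized word-to-sentence matrix W^.
    m, n = W_sw.shape
    u, v = np.ones(m), np.ones(n)                    # initial saliency scores
    for _ in range(max_iter):
        u_new = alpha * U.T @ u + beta * W_ws.T @ v  # sentence scores
        u_new /= np.abs(u_new).sum()                 # L1 normalization
        v_new = alpha * V.T @ v + beta * W_sw.T @ u  # word scores
        v_new /= np.abs(v_new).sum()
        if max(np.abs(u_new - u).max(), np.abs(v_new - v).max()) < tol:
            u, v = u_new, v_new
            break
        u, v = u_new, v_new
    return u, v                                      # saliency of sentences and words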
The sentences in each article have been separated and the sentence information was stored into files. In the experiments, the background corpus for using the mutual information measure to compute word semantics simply consisted of all the documents from DUC2001 to DUC2005, which could be easily expanded by adding more documents. The stopwords were removed and the remaining words were converted to the basic forms based on WordNet. Then the semantic similarity values between the words were computed. We used the ROUGE (Lin and Hovy, 2003) toolkit (i.e.ROUGEeval-1.4.2 in this study) for evaluation, which has been widely adopted by DUC for automatic summarization evaluation. It measured summary quality by counting overlapping units such as the n-gram, word sequences and word pairs between the candidate summary and the reference summary. ROUGE toolkit reported separate scores for 1, 2, 3 and 4-gram, and also for longest common subsequence co-occurrences. Among these different scores, unigram-based ROUGE score (ROUGE-1) has been shown to agree with human judgment most (Lin and Hovy, 2003). We showed three of the ROUGE metrics in the experimental results: ROUGE-1 (unigrambased), ROUGE-2 (bigram-based), and ROUGEW (based on weighted longest common subsequence, weight=1.2). In order to truncate summaries longer than the length limit, we used the “-l” option 2 in the ROUGE toolkit. 4.1.2 Evaluation Results For simplicity, the parameters in the proposed approach are simply set to α=β=0.5, which means that the contributions from sentences and words are equally important. We adopt the WordNetbased vector measure (WN) and the corpus-based mutual information measure (MI) for computing the semantic similarity between words. When using the mutual information measure, we heuristically set the window size k to 2, 5 and 10, respectively. The proposed approaches with different word similarity measures (WN and MI) are compared 2 The “-l” option is very important for fair comparison. Some previous works not adopting this option are likely to overestimate the ROUGE scores. 556 with two solid baselines: SentenceRank and MutualRank. SentenceRank is proposed in Mihalcea and Tarau (2004) to make use of only the sentence-tosentence relationships to rank sentences, which outperforms most popular summarization methods. MutualRank is proposed in Zha (2002) to make use of only the sentence-to-word relationships to rank sentences and words. For all the summarization methods, after the sentences are ranked by their saliency scores, we can apply a variant form of the MMR algorithm to remove redundancy and choose both the salient and novel sentences to the summary. Table 1 gives the comparison results of the methods before removing redundancy and Table 2 gives the comparison results of the methods after removing redundancy. System ROUGE-1 ROUGE-2 ROUGE-W Our Approach (WN) 0.47100*# 0.20424*# 0.16336# Our Approach (MI:k=2) 0.46711# 0.20195# 0.16257# Our Approach (MI:k=5) 0.46803# 0.20259# 0.16310# Our Approach (MI:k=10) 0.46823# 0.20301# 0.16294# SentenceRank 0.45591 0.19201 0.15789 MutualRank 0.43743 0.17986 0.15333 Table 1. Summarization Performance before Removing Redundancy (w/o MMR) System ROUGE-1 ROUGE-2 ROUGE-W Our Approach (WN) 0.47329*# 0.20249# 0.16352# Our Approach (MI:k=2) 0.47281# 0.20281# 0.16373# Our Approach (MI:k=5) 0.47282# 0.20249# 0.16343# Our Approach (MI:k=10) 0.47223# 0.20225# 0.16308# SentenceRank 0.46261 0.19457 0.16018 MutualRank 0.43805 0.17253 0.15221 Table 2. 
Summarization Performance after Removing Redundancy (w/ MMR) (* indicates that the improvement over SentenceRank is significant and # indicates that the improvement over MutualRank is significant, both by comparing the 95% confidence intervals provided by the ROUGE package.) Seen from Tables 1 and 2, the proposed approaches always outperform the two baselines over all three metrics with different word semantic measures. Moreover, no matter whether the MMR algorithm is applied or not, almost all performance improvements over MutualRank are significant and the ROUGE-1 performance improvements over SentenceRank are significant when using WordNet-based measure (WN). Word semantics can be naturally incorporated into the computation process, which addresses the problem that SentenceRank cannot take into account word semantics, and thus improves the summarization performance. We also observe that the corpus-based measure (MI) works almost as well as the knowledge-based measure (WN) for computing word semantic similarity. In order to better understand the relative contributions from the sentence nodes and the word nodes, the parameter α is varied from 0 to 1. The larger α is, the more contribution is given from the sentences through the SS-Graph, while the less contribution is given from the words through the SW-Graph. Figures 2-4 show the curves over three ROUGE scores with respect to α. Without loss of generality, we use the case of k=5 for the MI measure as an illustration. The curves are similar to Figures 2-4 when k=2 and k=10. 0.435 0.44 0.445 0.45 0.455 0.46 0.465 0.47 0.475 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 α ROUGE-1 MI(w/o MMR) MI(w/ MMR) WN(w/o MMR) WN(w/ MMR) Figure 2. ROUGE-1 vs. α 0.17 0.175 0.18 0.185 0.19 0.195 0.2 0.205 0.21 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 α ROUGE-2 MI(w/o MMR) MI(w/ MMR) WN(w/o MMR) WN(w/ MMR) Figure 3. ROUGE-2 vs. α 557 0.151 0.153 0.155 0.157 0.159 0.161 0.163 0.165 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 α ROUGE-W MI(w/o MMR) MI(w/ MMR) WN(w/o MMR) WN(w/ MMR) Figure 4. ROUGE-W vs. α Seen from Figures 2-4, no matter whether the MMR algorithm is applied or not (i.e. w/o MMR or w/ MMR), the ROUGE scores based on either word semantic measure (MI or WN) achieves the peak when α is set between 0.4 and 0.6. The performance values decrease sharply when α is very large (near to 1) or very small (near to 0). The curves demonstrate that both the contribution from the sentences and the contribution from the words are important for ranking sentences; moreover, the contributions are almost equally important. Loss of either contribution will much deteriorate the final performance. Similar results and observations have been obtained on task 1 of DUC2001 in our study and the details are omitted due to page limit. 4.2 Keyword Evaluation 4.1.1 Evaluation Setup In this study we performed a preliminary evaluation of keyword extraction. The evaluation was conducted on the single word level instead of the multi-word phrase (n-gram) level, in other words, we compared the automatically extracted unigrams (words) and the manually labeled unigrams (words). The reasons were that: 1) there existed partial matching between phrases and it was not trivial to define an accurate measure to evaluate phrase quality; 2) each phrase was in fact composed of a few words, so the keyphrases could be obtained by combining the consecutive keywords. We used 34 documents in the first five document clusters in DUC2002 dataset (i.e. d061-d065). 
At most 10 salient words were manually labeled for each document to represent the document and the average number of manually assigned keywords was 6.8. Each approach returned 10 words with highest saliency scores as the keywords. The extracted 10 words were compared with the manually labeled keywords. The words were converted to their corresponding basic forms based on WordNet before comparison. The precision p, recall r, F-measure (F=2pr/(p+r)) were obtained for each document and then the values were averaged over all documents for evaluation purpose. 4.1.2 Evaluation Results Table 3 gives the comparison results. The proposed approaches are compared with two baselines: WordRank and MutualRank. WordRank is proposed in Mihalcea and Tarau (2004) to make use of only the co-occurrence relationships between words to rank words, which outperforms traditional keyword extraction methods. The window size k for WordRank is also set to 2, 5 and 10, respectively. System Precision Recall F-measure Our Approach (WN) 0.413 0.504 0.454 Our Approach (MI:k=2) 0.428 0.485 0.455 Our Approach (MI:k=5) 0.425 0.491 0.456 Our Approach (MI:k=10) 0.393 0.455 0.422 WordRank (k=2) 0.373 0.412 0.392 WordRank (k=5) 0.368 0.422 0.393 WordRank (k=10) 0.379 0.407 0.393 MutualRank 0.355 0.397 0.375 Table 3. The Performance of Keyword Extraction Seen from the table, the proposed approaches significantly outperform the baseline approaches. Both the corpus-based measure (MI) and the knowledge-based measure (WN) perform well on the task of keyword extraction. A running example is given below to demonstrate the results: Document ID: D062/AP891018-0301 Labeled keywords: insurance earthquake insurer damage california Francisco pay Extracted keywords: WN: insurance earthquake insurer quake california spokesman cost million wednesday damage MI(k=5): insurance insurer earthquake percent benefit california property damage estimate rate 558 5 Conclusion and Future Work In this paper we propose a novel approach to simultaneously document summarization and keyword extraction for single documents by fusing the sentence-to-sentence, word-to-word, sentence-toword relationships in a unified framework. The semantics between words computed by either corpus-based approach or knowledge-based approach can be incorporated into the framework in a natural way. Evaluation results demonstrate the performance improvement of the proposed approach over the baselines for both tasks. In this study, only the mutual information measure and the vector measure are employed to compute word semantics, and in future work many other measures mentioned earlier will be investigated in the framework in order to show the robustness of the framework. The evaluation of keyword extraction is preliminary in this study, and we will conduct more thorough experiments to make the results more convincing. Furthermore, the proposed approach will be applied to multidocument summarization and keyword extraction, which are considered more difficult than single document summarization and keyword extraction. Acknowledgements This work was supported by the National Science Foundation of China (60642001). References M. R. Amini and P. Gallinari. 2002. The use of unlabeled data to improve supervised learning for text summarization. In Proceedings of SIGIR2002, 105-112. S. Brin and L. Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7). J. Carbonell and J. Goldstein. 1998. 
The use of MMR, diversitybased reranking for reordering documents and producing summaries. In Proceedings of SIGIR-1998, 335-336. J. M. Conroy and D. P. O’Leary. 2001. Text summarization via Hidden Markov Models. In Proceedings of SIGIR2001, 406407. DUC. 2002. The Document Understanding Workshop 2002. http://www-nlpir.nist.gov/projects/duc/guidelines/2002.html T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics 19, 61–74. G. ErKan and D. R. Radev. 2004. LexPageRank: Prestige in multi-document text summarization. In Proceedings of EMNLP2004. C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. The MIT Press. E. Frank, G. W. Paynter, I. H. Witten, C. Gutwin, and C. G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. Proceedings of IJCAI-99, pp. 668-673. Y. H. Gong and X. Liu. 2001. Generic text summarization using Relevance Measure and Latent Semantic Analysis. In Proceedings of SIGIR2001, 19-25. E. Hovy and C. Y. Lin. 1997. Automated text summarization in SUMMARIST. In Proceeding of ACL’1997/EACL’1997 Worshop on Intelligent Scalable Text Summarization. A. Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of EMNLP2003, Japan, August. J. M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632. B. Krulwich and C. Burkey. 1996. Learning user information interests through the extraction of semantically significant phrases. In AAAI 1996 Spring Symposium on Machine Learning in Information Access. J. Kupiec, J. Pedersen, and F. Chen. 1995. A.trainable document summarizer. In Proceedings of SIGIR1995, 68-73. T. K. Landauer, P. Foltz, and D. Laham. 1998. Introduction to latent semantic analysis. Discourse Processes 25. C. Y. Lin and E. Hovy. 2000. The automated acquisition of topic signatures for text Summarization. In Proceedings of ACL2000, 495-501. C.Y. Lin and E.H. Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of HLT-NAACL2003, Edmonton, Canada, May. R. Mihalcea, C. Corley, and C. Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of AAAI-06. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of EMNLP2004. R. Mihalcea and P.Tarau. 2005. A language independent algorithm for single and multiple document summarization. In Proceedings of IJCNLP2005. A. Muñoz. 1996. Compound key word generation from document databases using a hierarchical clustering ART model. Intelligent Data Analysis, 1(1). T. Nomoto and Y. Matsumoto. 2001. A new approach to unsupervised text summarization. In Proceedings of SIGIR2001, 26-34. S. Patwardhan. 2003. Incorporating dictionary and corpus information into a context vector measure of semantic relatedness. Master’s thesis, Univ. of Minnesota, Duluth. T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. WordNet::Similarity – Measuring the relatedness of concepts. In Proceedings of AAAI-04. D. Shen, J.-T. Sun, H. Li, Q. Yang, and Z. Chen. 2007. Document Summarization using Conditional Random Fields. In Proceedings of IJCAI 07. A. M. Steier and R. K. Belew. 1993. Exporting phrases: A statistical analysis of topical language. In Proceedings of Second Symposium on Document Analysis and Information Retrieval, pp. 179-190. P. D. Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2:303-336. P. Turney. 2001. 
Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of ECML-2001. I. H. Witten, G. W. Paynter, E. Frank, C. Gutwin, and C. G. Nevill-Manning. 1999. KEA: Practical automatic keyphrase extraction. Proceedings of Digital Libraries 99 (DL'99), pp. 254-256. H. Y. Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforcement principle and sentence clustering. In Proceedings of SIGIR2002, pp. 113-120. 559
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 560–567, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Fast Semantic Extraction Using a Novel Neural Network Architecture Ronan Collobert NEC Laboratories America, Inc. 4 Independence Way Suite 200, Princeton, NJ 08540 [email protected] Jason Weston NEC Laboratories America, Inc. 4 Independence Way Suite 200, Princeton, NJ 08540 [email protected] Abstract We describe a novel neural network architecture for the problem of semantic role labeling. Many current solutions are complicated, consist of several stages and handbuilt features, and are too slow to be applied as part of real applications that require such semantic labels, partly because of their use of a syntactic parser (Pradhan et al., 2004; Gildea and Jurafsky, 2002). Our method instead learns a direct mapping from source sentence to semantic tags for a given predicate without the aid of a parser or a chunker. Our resulting system obtains accuracies comparable to the current state-of-the-art at a fraction of the computational cost. 1 Introduction Semantic understanding plays an important role in many end-user applications involving text: for information extraction, web-crawling systems, question and answer based systems, as well as machine translation, summarization and search. Such applications typically have to be computationally cheap to deal with an enormous quantity of data, e.g. web-based systems process large numbers of documents, whilst interactive human-machine applications require almost instant response. Another issue is the cost of producing labeled training data required for statistical models, which is exacerbated when those models also depend on syntactic features which must themselves be learnt. To achieve the goal of semantic understanding, the current consensus is to divide and conquer the [The company]ARG0 [bought]REL [sugar]ARG1 [on the world market]ARGM-LOC [to meet export commitments]ARGM-PNC Figure 1: Example of Semantic Role Labeling from the PropBank dataset (Palmer et al., 2005). ARG0 is typically an actor, REL an action, ARG1 an object, and ARGM describe various modifiers such as location (LOC) and purpose (PNC). problem. Researchers tackle several layers of processing tasks ranging from the syntactic, such as part-of-speech labeling and parsing, to the semantic: word-sense disambiguation, semantic role-labeling, named entity extraction, co-reference resolution and entailment. None of these tasks are end goals in themselves but can be seen as layers of feature extraction that can help in a language-based end application, such as the ones described above. Unfortunately, the state-of-the-art solutions of many of these tasks are simply too slow to be used in the applications previously described. For example, stateof-the-art syntactic parsers theoretically have cubic complexity in the sentence length (Younger, 1967)1 and several semantic extraction algorithms use the parse tree as an initial feature. In this work, we describe a novel type of neural network architecture that could help to solve some of these issues. We focus our experimental study on the semantic role labeling problem (Palmer et al., 2005): being able to give a semantic role to a syn1Even though some parsers effectively exhibit linear behavior in sentence length (Ratnaparkhi, 1997), fast statistical parsers such as (Henderson, 2004) still take around 1.5 seconds for sentences of length 35 in tests that we made. 
560 tactic constituent of a sentence, i.e. annotating the predicate argument structure in text (see for example Figure 1). Because of its nature, role labeling seems to require the syntactic analysis of a sentence before attributing semantic labels. Using this intuition, state-of-the-art systems first build a parse tree, and syntactic constituents are then labeled by feeding hand-built features extracted from the parse tree to a machine learning system, e.g. the ASSERT system (Pradhan et al., 2004). This is rather slow, taking a few seconds per sentence at test time, partly because of the parse tree component, and partly because of the use of Support Vector Machines (Boser et al., 1992), which have linear complexity in testing time with respect to the number of training examples. This makes it hard to apply this method to interesting end user applications. Here, we propose a radically different approach that avoids the more complex task of building a full parse tree. From a machine learning point of view, a human does not need to be taught about parse trees to talk. It is possible, however, that our brains may implicitly learn features highly correlated with those extracted from a parse tree. We propose to develop an architecture that implements this kind of implicit learning, rather than using explicitly engineered features. In practice, our system also provides semantic tags at a fraction of the computational cost of other methods, taking on average 0.02 seconds to label a sentence from the Penn Treebank, with almost no loss in accuracy. The rest of the article is as follows. First, we describe the problem of shallow semantic parsing in more detail, as well as existing solutions to this problem. We then detail our algorithmic approach – the neural network architecture we employ – followed by experiments that evaluate our method. Finally, we conclude with a summary and discussion of future work. 2 Shallow Semantic Parsing FrameNet (Baker et al., 1998) and the Proposition Bank (Palmer et al., 2005), or PropBank for short, are the two main systems currently developed for semantic role-labeling annotation. We focus here on PropBank. PropBank encodes role labels by semantically tagging the syntactic structures of hand annotated parses of sentences. The current version of the dataset gives semantic tags for the same sentences as in the Penn Treebank (Marcus et al., 1993), which are excerpts from the Wall Street Journal. The central idea is that each verb in a sentence is labeled with its propositional arguments, where the abstract numbered arguments are intended to fill typical roles. For example, ARG0 is typically the actor, and ARG1 is typically the thing acted upon. The precise usage of the numbering system is labeled for each particular verb as so-called frames. Additionally, semantic roles can also be labeled with one of 13 ARGM adjunct labels, such as ARGM-LOC or ARGM-TMP for additional locational or temporal information relative to some verb. Shallow semantic parsing has immediate applications in tasks such as meta-data extraction (e.g. from web documents) and question and answer based systems (e.g. call center systems), amongst others. 3 Previous Work Several authors have already attempted to build machine learning approaches for the semantic rolelabeling problem. In (Gildea and Jurafsky, 2002) the authors presented a statistical approach to learning (for FrameNet), with some success. 
They proposed to take advantage of the syntactic tree structure that can be predicted by a parser, such as Charniak’s parser (Charniak, 2000). Their aim is, given a node in the parse tree, to assign a semantic role label to the words that are the children of that node. They extract several key types of features from the parse tree to be used in a statistical model for prediction. These same features also proved crucial to subsequent approaches, e.g. (Pradhan et al., 2004). These features include: • The parts of speech and syntactic labels of words and nodes in the tree. • The node’s position (left or right) in relation to the verb. • The syntactic path to the verb in the parse tree. • Whether a node in the parse tree is part of a noun or verb phrase (by looking at the parent nodes of that node). 561 • The voice of the sentence: active or passive (part of the PropBank gold annotation); as well as several other features (predicate, head word, verb sub-categorization, .. .). The authors of (Pradhan et al., 2004) used a similar structure, but added more features, notably head word part-of-speech, the predicted named entity class of the argument, word sense disambiguation of the verb and verb clustering, and others (they add 25 variants of 12 new feature types overall.) Their system also uses a parser, as before, and then a polynomial Support Vector Machine (SVM) (Boser et al., 1992) is used in two further stages: to classify each node in the tree as being a semantic argument or not for a given verb; and then to classify each semantic argument into one of the classes (ARG1, ARG2, etc.). The first SVM solves a twoclass problem, the second solves a multi-class problem using a one-vs-the-rest approach. The final system, called ASSERT, gives state-of-the-art performance and is also freely available at: http:// oak.colorado.edu/assert/. We compare to this system in our experimental results in Section 5. Several other competing methods exist, e.g. the ones that participated in the CONLL 2004 and 2005 challenges (http://www.lsi.upc.edu/ ˜srlconll/st05/st05.html). In this paper we focus on a comparison with ASSERT because software to re-run it is available online. This also gives us a timing result for comparison purposes. The three-step procedure used in ASSERT (calculating a parse tree and then applying SVMs twice) leads to good classification performance, but has several drawbacks. First in speed: predicting a parse tree is extremely demanding in computing resources. Secondly, choosing the features necessary for SVM classification requires extensive research. Finally, the SVM classification algorithm used in existing approaches is rather slow: SVM training is at least quadratic in time with respect to the number of training examples. The number of support vectors involved in the SVM decision function also increases linearly with the number of training examples. This makes SVMs slow on large-scale problems, both during training and testing phases. To alleviate the burden of parse tree computation, several attempts have been made to remove the full parse tree information from the semantic role labeling system, in fact the shared task of CONLL 2004 was devoted to this goal, but the results were not completely satisfactory. Previously, in (Gildea and Palmer, 2001), the authors tried to show that the parse tree is necessary for good generalization by showing that segments derived from a shallow syntactic parser or chunker do not perform as well for this goal. 
A further analysis of using chunkers, with improved results was also given in (Punyakanok et al., 2005), but still concluded the full parse tree is most useful. 4 Neural Network Architecture Ideally, we want an end-to-end fast learning system to output semantic roles for syntactic constituents without using a time consuming parse tree. Also, as explained before, we are interesting in exploring whether machine learning approaches can learn structure implicitly. Hence, even if there is a deep relationship between syntax and semantics, we prefer to avoid hand-engineered features that exploit this, and see if we can develop a model that can learn these features instead. We are thus not interested in chunker-based techniques, even though they are faster than parser-based techniques. We propose here a neural network based architecture which achieves these two goals. 4.1 Basic Architecture The type of neural network that we employ is a Multi Layer Perceptron (MLP). MLPs have been used for many years in the machine learning field and slowly abandoned for several reasons: partly because of the difficulty of solving the non-convex optimization problems associated with learning (LeCun et al., 1998), and partly because of the difficulty of their theoretical analysis compared to alternative convex approaches. An MLP works by successively projecting the data to be classified into different spaces. These projections are done in what is called hidden layers. Given an input vector z, a hidden layer applies a linear transformation (matrix M) followed by a squashing function h: z 7→Mz 7→h(Mz) . (1) 562 A typical squashing function is the hyperbolic tangent h(·) = tanh(·). The last layer (the output layer) linearly separates the classes. The composition of the projections in the hidden layers could be viewed as the work done by the kernel in SVMs. However there is a very important difference: the kernel in SVM is fixed and arbitrarily chosen, while the hidden layers in an MLP are trained and adapted to the classification task. This allows us to create much more flexible classification architectures. Our method for semantic role labeling classifies each word of a sentence separately. We do not use any semantic constituent information: if the model is powerful enough, words in the same semantic constituent should have the same class label. This means we also do not separate the problem into an identification and classification phase, but rather solve in a single step. 4.1.1 Notation We represent words as indices. We consider a finite dictionary of words D ⊂N. Let us represent a sentence of nw words to be analyzed as a function s(·). The ith word in the sentence is given by the index s(i): 1 ≤i ≤nw s(i) ∈D . We are interested in predicting the semantic role label of the word at position posw, given a verb at position posv (1 ≤posw, posv ≤nw). A mathematical description of our network architecture schematically shown in Figure 2 follows. 4.1.2 Transforming words into feature vectors Our first concern in semantic role labeling is that we have to deal with words, and that a simple index i ∈D does not carry any information specific to a word: for each word we need a set of features relevant for the task. As described earlier, previous methods construct a parse tree, and then compute hand-built features which are then fed to a classification algorithm. In order to bypass the use of a parse tree, we convert each word i ∈D into a particular vector wi ∈Rd which is going to be learnt for the task we are interested in. 
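As a minimal illustration of Eq. (1), a hidden layer is a learned linear projection followed by an elementwise squashing function. The NumPy sketch below uses random, untrained stand-in parameters and the tanh squashing introduced just below; it is not the authors' implementation.

```python
import numpy as np

def hidden_layer(z, M):
    """One MLP hidden layer (Eq. 1): a learned linear projection of the input
    followed by an elementwise squashing function (here tanh)."""
    return np.tanh(M @ z)

# Toy usage with random stand-in parameters; real weights are learned by training.
rng = np.random.default_rng(0)
z = rng.normal(size=50)           # input vector
M = rng.normal(size=(20, 50))     # projection matrix of the hidden layer
print(hidden_layer(z, M).shape)   # (20,)
```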
This approach has already been used with great success in the domain of language models (Bengio and Ducharme, 2001; Schwenk and Gauvain, 2002).       Lookup Table d ... d Linear Layer with sentence−adapted columns d C(position w.r.t. cat, position w.r.t. sat) Softmax Squashing Layer ... ARG1 ARG2 ARGM LOC ARGM TMP Classical Linear Layer Tanh Squashing Layer nhu Ci ws(6) ws(2) s(1) w ... C1 C2 C6 Classical Linear Layer ws(6) ... ws(2) s(1) w s(1) s(2) ... s(6) sat the Input Sentence on the mat cat Figure 2: MLP architecture for shallow semantic parsing. The input sequence is at the top. The output class probabilities for the word of interest (“cat”) given the verb of interest (“sat”) are given at the bottom. The first layer of our MLP is thus a lookup table which replaces the word indices into a concatenation of vectors: {s(1), . . . , s(nw)} 7→(ws(1) . . . ws(nw)) ∈Rnw d . (2) The weights {wi | i ∈D} for this layer are considered during the backpropagation phase of the MLP, and thus adapted automatically for the task we are interested in. 4.1.3 Integrating the verb position Feeding word vectors alone to a linear classification layer as in (Bengio and Ducharme, 2001) leads 563 to very poor accuracy because the semantic classification of a given word also depends on the verb in question. We need to provide the MLP with information about the verb position within the sentence. For that purpose we use a kind of linear layer which is adapted to the sentence considered. It takes the form: (ws(1) . . . ws(nw)) 7→M    wT s(1) ... wT s(nw)   , where M ∈Rnhu×nw d, and nhu is the number of hidden units. The specific nature of this layer is that the matrix M has a special block-column form which depends on the sentence: M = (C1| . . . |Cnw) , where each column Ci ∈Rnhu×d depends on the position of the ith word in s(·), with respect to the position posw of the word of interest, and with respect to the position posv of the verb of interest: Ci = C(i −posw, i −posv) , where C(·, ·) is a function to be chosen. In our experiments C(·, ·) was a linear layer with discretized inputs (i −posw, i −posv) which were transformed into two binary vectors of size wsz, where a bit is set to 1 if it corresponds to the position to encode, and 0 otherwise. These two binary vectors are then concatenated and fed to the linear layer. We chose the “window size” wsz = 11. If a position lies outside the window, then we still set the leftmost or rightmost bit to 1. The parameters involved in this function are also considered during the backpropagation. With such an architecture we allow our MLP to automatically adapt the importance of a word in the sentence given its distance to the word we want to classify, and to the verb we are interested in. This idea is the major novelty in this work, and is crucial for the success of the entire architecture, as we will see in the experiments. 4.1.4 Learning class probabilities The last layer in our MLP is a classical linear layer as described in (1), with a softmax squashing function (Bridle, 1990). Considering (1) and given ˜z = Mz, we have hi(˜z) = exp ˜zi P j exp ˜zj . This allows us to interpret outputs as probabilities for each semantic role label. The training of the whole system is achieved using a normal stochastic gradient descent. 4.2 Word representation As we have seen, in our model we are learning one d dimensional vector to represent each word. If the dataset were large enough, this would be an elegant solution. 
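Before turning to the practical issue of rare words, the lookup table and the sentence-adapted linear layer described above can be sketched as follows. This is an illustrative NumPy reconstruction under simplifying assumptions (random, untrained parameters; the subsequent squashing and output layers are omitted), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, nhu, wsz, vocab = 50, 100, 11, 10_000           # dimensions are illustrative only
lookup = 0.01 * rng.normal(size=(vocab, d))         # word vectors, learned in practice
W_pos = 0.01 * rng.normal(size=(nhu, d, 2 * wsz))   # parameters of C(., .), also learned

def position_code(rel):
    """wsz-dimensional binary vector for a relative position; positions that
    fall outside the window are clipped to the leftmost or rightmost bit."""
    v = np.zeros(wsz)
    v[int(np.clip(rel + wsz // 2, 0, wsz - 1))] = 1.0
    return v

def C(rel_word, rel_verb):
    """Sentence-adapted column C(i - posw, i - posv): a linear function of the
    two concatenated binary position vectors, giving an nhu x d matrix."""
    code = np.concatenate([position_code(rel_word), position_code(rel_verb)])
    return W_pos @ code                              # shape (nhu, d)

def first_layers(sentence_ids, posw, posv):
    """Lookup table followed by the position-dependent linear layer: the output
    is the sum over word positions of C_i applied to the i-th word vector.
    A squashing layer and the softmax output layer (Section 4.1.4) would follow."""
    h = np.zeros(nhu)
    for i, idx in enumerate(sentence_ids):
        h += C(i - posw, i - posv) @ lookup[idx]
    return h

# Toy usage: six word indices, word of interest at position 5, verb at position 1.
print(first_layers([3, 17, 42, 8, 99, 7], posw=5, posv=1).shape)   # (100,)
```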
In practice many words occur infrequently within PropBank, so (independent of the size of d) we can still only learn a very poor representation for words that only appear a few times. Hence, to control the capacity of our model we take the original word and replace it with its part-of-speech if it is a verb, noun, adjective, adverb or number as determined by a part-of-speech classifier, and keep the words for all other parts of speech. This classifier is itself a neural network. This way we keep linking words which are important for this task. We do not do this replacement for the predicate itself. 5 Experiments We used Sections 02-21 of the PropBank dataset version 1 for training and validation and Section 23 for testing as standard in all our experiments. We first describe the part-of-speech tagger we employ, and then describe our semantic role labeling experiments. Software for our method, SENNA (Semantic Extraction using a Neural Network Architecture), more details on its implementation, an online applet and test set predictions of our system in comparison to ASSERT can be found at http: //ml.nec-labs.com/software/senna. Part-Of-Speech Tagger The part-of-speech classifier we employ is a neural network architecture of the same type as in Section 4, where the function Ci = C(i −posw) depends now only on the word position, and not on a verb. More precisely: Ci =  0 if 2 |i −posw| > wsz −1 Wi−posw otherwise , 564 where Wk ∈Rnhu×d and wsz is a window size. We chose wsz = 5 in our experiments. The d-dimensional vectors learnt take into account the capitalization of a word, and the prefix and suffix calculated using Porter-Stemmer. See http: //ml.nec-labs.com/software/senna for more details. We trained on the training set of PropBank supplemented with the Brown corpus, resulting in a test accuracy on the test set of PropBank of 96.85% which compares to 96.66% using the Brill tagger (Brill, 1992). Semantic Role Labeling In our experiments we considered a 23-class problem of NULL (no label), the core arguments ARG0-5, REL, ARGA, and ARGM- along with the 13 secondary modifier labels such as ARGM-LOC and ARGM-TMP. We simplified R-ARGn and C-ARGn to be written as ARGn, and post-processed ASSERT to do this as well. We compared our system to the freely available ASSERT system (Pradhan et al., 2004). Both systems are fed only the input sentence during testing, with traces removed, so they cannot make use of many PropBank features such as frameset identitifier, person, tense, aspect, voice, and form of the verb. As our algorithm outputs a semantic tag for each word of a sentence, we directly compare this per-word accuracy with ASSERT. Because ASSERT uses a parser, and because PropBank was built by labeling the nodes of a hand-annotated parse tree, pernode accuracy is usually reported in papers such as (Pradhan et al., 2004). Unfortunately our approach is based on a completely different premise: we tag words, not syntactic constituents coming from the parser. We discuss this further in Section 5.2. The per-word accuracy comparison results can be seen in Table 5. Before labeling the semantic roles of each predicate, one must first identify the predicates themselves. If a predicate is not identified, NULL tags are assigned to each word for that predicate. The first line of results in the table takes into account this identification process. For the neural network, we used our part-of-speech tagger to perform this as a verb-detection task. We noticed ASSERT failed to identify relatively many predicates. 
In particular, it seems predicates such as “is” are sometimes labeled as AUX by the part-of-speech tagger, and subsequently ignored. We informed the authors of this, but we did not receive a response. To deal with this, we considered the additional accuracy (second row in the table) measured over only those sentences where the predicate was identified by ASSERT. Timing results The per-sentence compute time is also given in Table 5, averaged over all sentences in the test set. Our method is around 250 times faster than ASSERT. It is not really feasible to run ASSERT for most applications. Measurement NN ASSERT Per-word accuracy (all verbs) 83.64% 83.46% Per-word accuracy (ASSERT verbs) 84.09% 86.06% Per-sentence compute time (secs) 0.02 secs 5.08 secs Table 1: Experimental comparison with ASSERT 5.1 Analysis of our MLP While we gave an intuitive justification of the architecture choices of our model in Section 4, we now give a systematic empirical study of those choices. First of all, providing the position of the word and the predicate in function C(·, ·) is essential: the best model we obtained with a window around the word only gave 51.3%, assuming correct identification of all predicates. Our best model achieves 83.95% in this setting. If we do not cluster the words according to their part-of-speech, we also lose some performance, obtaining 78.6% at best. On the other hand, clustering all words (such as CC, DT, IN part-of-speech tags) also gives weaker results (81.1% accuracy at best). We believe that including all words would give very good performance if the dataset was large enough, but training only on PropBank leads to overfitting, many words being infrequent. Clustering is a way to fight against overfitting, by grouping infrequent words: for example, words with the label NNP, JJ, RB (which we cluster) appear on average 23, 22 and 72 times respectively in the training set, while CC, DT, IN (which we do not cluster) appear 2420, 5659 and 1712 times respectively. 565 Even though some verbs are infrequent, one cannot cluster all verbs into a single group, as each verb dictates the types of semantic roles in the sentence, depending on its frame. Clustering all words into their part-of-speech, including the predicate, gives a poor 73.8% compared with 81.1%, where everything is clustered apart from the predicate. Figure 3 gives some anecdotal examples of test set predictions of our final model compared to ASSERT. 5.2 Argument Classification Accuracy So far we have not used the same accuracy measures as in previous work (Gildea and Jurafsky, 2002; Pradhan et al., 2004). Currently our architecture is designed to label on a per-word basis, while existing systems perform a segmentation process, and then label segments. While we do not optimize our model for the same criteria, it is still possible to measure the accuracy using the same metrics. We measured the argument classification accuracy of our network, assuming the correct segmentation is given to our system, as in (Pradhan et al., 2004), by post-processing our per-word tags to form a majority vote over each segment. This gives 83.18% accuracy for our network when we suppose the predicate must also be identified, and 80.53% for the ASSERT software. Measuring only on predicates identified by ASSERT we instead obtain 84.32% accuracy for our network, and 87.02% for ASSERT. 6 Discussion We have introduced a neural network architecture that can provide computationally efficient semantic role tagging. 
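As an aside on the segment-level evaluation of Section 5.2, the post-processing used there is a plain majority vote over the per-word tags inside each given segment. A minimal sketch (segment boundaries are assumed to be supplied, as in that evaluation):

```python
from collections import Counter

def vote_over_segments(word_tags, segments):
    """Collapse per-word tags to one tag per segment by majority vote.
    `segments` is a list of (start, end) index pairs, end exclusive."""
    return [Counter(word_tags[s:e]).most_common(1)[0][0] for s, e in segments]

# Toy usage with hypothetical tags and segments.
tags = ["ARG0", "ARG0", "NULL", "REL", "ARG1", "ARG1", "ARGM-LOC"]
print(vote_over_segments(tags, [(0, 3), (3, 4), (4, 7)]))
# ['ARG0', 'REL', 'ARG1']
```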
It is also a general architecture that could be applied to other problems as well. Because our network currently outputs labels on a per-word basis it is difficult to assess existing accuracy measures. However, it should be possible to combine our approach with a shallow parser to enhance performance, and make comparisons more direct. We consider this work as a starting point for different research directions, including the following areas: • Incorporating hand-built features Currently, the only prior knowledge our system encodes comes from part-of-speech tags, in stark contrast to other methods. Of course, performance TRUTH: He camped out at a high-tech nerve center on the floor of [the Big Board, where]ARGM-LOC [he]ARG0 [could]ARGM-MOD [watch]REL [updates on prices and pending stock orders]ARG1. ASSERT (68.7%): He camped out at a high-tech nerve center on the floor of the Big Board, [ where]ARGM-LOC [he]ARG0 [could]ARGM-MOD [watch]REL [updates]ARG1 on prices and pending stock orders. NN (100%): He camped out at a high-tech nerve center on the floor of [the Big Board, where]ARGM-LOC [he]ARG0 [could]ARGM-MOD [watch]REL [updates on prices and pending stock orders]ARG1. TRUTH: [United Auto Workers Local 1069, which]ARG0 [represents]REL [3,000 workers at Boeing’s helicopter unit in Delaware County, Pa.]ARG1 , said it agreed to extend its contract on a day-by-day basis, with a 10-day notification to cancel, while it continues bargaining. ASSERT (100%): [United Auto Workers Local 1069, which]ARG0 [represents]REL [3,000 workers at Boeing’s helicopter unit in Delaware County, Pa.]ARG1 , said it agreed to extend its contract on a day-by-day basis, with a 10-day notification to cancel, while it continues bargaining. NN (89.1%): [United Auto Workers Local 1069, which]ARG0 [represents]REL [3,000 workers at Boeing’s helicopter unit]ARG1 [ in Delaware County]ARGM-LOC , Pa. , said it agreed to extend its contract on a day-by-day basis, with a 10-day notification to cancel, while it continues bargaining. Figure 3: Two examples from the PropBank test set, showing Neural Net and ASSERT and gold standard labelings, with per-word accuracy in brackets. Note that even though our labeling does not match the hand-annotated one in the second sentence it still seems to make some sense as “in Delaware County” is labeled as a location modifier. The complete set of predictions on the test set can be found at http: //ml.nec-labs.com/software/senna. would improve with more hand-built features. For example, simply adding whether each word is part of a noun or verb phrase using the handannotated parse tree (the so-called “GOV” feature from (Gildea and Jurafsky, 2002)) improves the performance of our system from 83.95% to 85.8%. One must trade the generality of the model with its specificity, and also take into account how long the features take to compute. • Incorporating segment information Our system has no prior knowledge about segmentation in text. This could be encoded in many ways: most obviously by using a chunker, but also by 566 designing a different network architecture, e.g. by encoding contiguity constraints. To show the latter is useful, using hand-annotated segments to force contiguity by majority vote leads to an improvement from 83.95% to 85.6%. • Incorporating known invariances via virtual training data. In image recognition problems it is common to create artificial training data by taking into account invariances in the images, e.g. via rotation and scale. 
Such data improves generalization substantially. It may be possible to achieve similar results for text, by “warping” training data to create new sentences, or by constructing sentences from scratch using a hand-built grammar. • Unlabeled data. Our representation of words is as d dimensional vectors. We could try to improve this representation by learning a language model from unlabeled data (Bengio and Ducharme, 2001). As many words in PropBank only appear a few times, the representation might improve, even though the learning is unsupervised. This may also make the system generalize better to types of data other than the Wall Street Journal. • Transductive Inference. Finally, one can also use unlabeled data as part of the supervised training process, which is called transduction or semi-supervised learning. In particular, we find the possibility of using unlabeled data, invariances and the use of transduction exciting. These possibilities naturally fit into our framework, whereas scalability issues will limit their application in competing methods. References C.F. Baker, C.J. Fillmore, and J.B. Lowe. 1998. The Berkeley FrameNet project. Proceedings of COLINGACL, 98. Y. Bengio and R. Ducharme. 2001. A neural probabilistic language model. In Advances in Neural Information Processing Systems, NIPS 13. B.E. Boser, I.M. Guyon, and V.N. Vapnik. 1992. A training algorithm for optimal margin classifiers. Proceedings of the fifth annual workshop on Computational learning theory, pages 144–152. J.S. Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman Souli´e and J. H´erault, editors, Neurocomputing: Algorithms, Architectures and Applications, pages 227– 236. NATO ASI Series. E. Brill. 1992. A simple rule-based part of speech tagger. Proceedings of the Third Conference on Applied Natural Language Processing, pages 152–155. E. Charniak. 2000. A maximum-entropy-inspired parser. Proceedings of the first conference on North American chapter of the Association for Computational Linguistics, pages 132–139. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. D. Gildea and M. Palmer. 2001. The necessity of parsing for predicate argument recognition. Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 239–246. J. Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Meeting of Association for Computational Linguistics. Y. LeCun, L. Bottou, G. B. Orr, and K.-R. M¨uller. 1998. Efficient backprop. In G.B. Orr and K.-R. M¨uller, editors, Neural Networks: Tricks of the Trade, pages 9– 50. Springer. M.P. Marcus, M.A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: the penn treebank. Computational Linguistics, 19(2):313–330. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Comput. Linguist., 31(1):71–106. S. Pradhan, W. Ward, K. Hacioglu, J. Martin, and D. Jurafsky. 2004. Shallow semantic parsing using support vector machines. Proceedings of HLT/NAACL-2004. V. Punyakanok, D. Roth, and W. Yih. 2005. The necessity of syntactic parsing for semantic role labeling. Proceedings of IJCAI’05, pages 1117–1123. A. Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. Proceedings of EMNLP. H. Schwenk and J.L. Gauvain. 2002. 
Connectionist language modeling for large vocabulary continuous speech recognition. Proceedings of ICASSP'02. D. H. Younger. 1967. Recognition and parsing of context-free languages in time n^3. Information and Control, 10.
2007
71
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 568–575, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Improving the Interpretation of Noun Phrases with Cross-linguistic Information Roxana Girju University of Illinois at Urbana-Champaign [email protected] Abstract This paper addresses the automatic classification of semantic relations in noun phrases based on cross-linguistic evidence from a set of five Romance languages. A set of novel semantic and contextual English– Romance NP features is derived based on empirical observations on the distribution of the syntax and meaning of noun phrases on two corpora of different genre (Europarl and CLUVI). The features were employed in a Support Vector Machines algorithm which achieved an accuracy of 77.9% (Europarl) and 74.31% (CLUVI), an improvement compared with two state-of-the-art models reported in the literature. 1 Introduction Semantic knowledge is very important for any application that requires a deep understanding of natural language. The automatic acquisition of semantic information in text has become increasingly important in ontology development, information extraction, question answering, and other advanced natural language processing applications. In this paper we present a model for the automatic semantic interpretation of noun phrases (NPs), which is the task of determining the semantic relation among the noun constituents. For example, family estate encodes a POSSESSION relation, while dress of silk refers to PART-WHOLE. The problem, while simple to state is hard to solve. The reason is that the meaning of these constructions is most of the time ambiguous or implicit. Interpreting NPs correctly requires various types of information from world knowledge to complex context features. Moreover, the extension of this task to other natural languages brings forward new issues and problems. For instance, beer glass translates into tarro de cerveza in Spanish, bicchiere da birra in Italian, verre `a bi`ere in French, and pahar de bere in Romanian. Thus, an important research question is how do the syntactic constructions in the target language contribute to the preservation of meaning in context. In this paper we investigate noun phrases based on cross-linguistic evidence and present a domain independent model for their semantic interpretation. We aim at uncovering the general aspects that govern the semantics of NPs in English based on a set of five Romance languages: Spanish, Italian, French, Portuguese, and Romanian. The focus on Romance languages is well motivated. It is mostly true that English noun phrases translate into constructions of the form N P N in Romance languages where, as we will show below, the P (preposition) varies in ways that correlate with the semantics. Thus Romance languages will give us another source of evidence for disambiguating the semantic relations in English NPs. We also present empirical observations on the distribution of the syntax and meaning of noun phrases on two different corpora based on two state-of-the-art classification tag sets: Lauer’s set of 8 prepositions (Lauer, 1995) and our list of 22 semantic relations. We show that various crosslingual cues can help in the NP interpretation task when employed in an SVM model. 
The results are compared against two state of the art approaches: a su568 pervised machine learning model, Semantic Scattering (Moldovan and Badulescu, 2005), and a webbased probabilistic model (Lapata and Keller, 2004). The paper is organized as follows. In Section 2 we present a summary of the previous work. Section 3 lists the syntactic and semantic interpretation categories used along with observations regarding their distribution on the two different cross-lingual corpora. Sections 4 and 5 present a learning model and results for the interpretation of English noun phrases. Finally, in Section 6 we offer some discussion and conclusions. 2 Related Work Currently, the best-performing NP interpretation methods in computational linguistics focus mostly on two consecutive noun instances (noun compounds) and rely either on rather ad-hoc, domainspecific semantic taxonomies, or on statistical models on large collections of unlabeled data. Recent results have shown that symbolic noun compound interpretation systems using machine learning techniques coupled with a large lexical hierarchy perform with very good accuracy, but they are most of the time tailored to a specific domain (Rosario and Hearst, 2001). On the other hand, the majority of corpus statistics approaches to noun compound interpretation collect statistics on the occurrence frequency of the noun constituents and use them in a probabilistic model (Lauer, 1995). More recently, (Lapata and Keller, 2004) showed that simple unsupervised models perform significantly better when the frequencies are obtained from the web, rather than from a large standard corpus. Other researchers (Pantel and Pennacchiotti, 2006), (Snow et al., 2006) use clustering techniques coupled with syntactic dependency features to identify IS-A relations in large text collections. (Kim and Baldwin, 2006) and (Turney, 2006) focus on the lexical similarity of unseen noun compounds with those found in training. However, although the web-based solution might overcome the data sparseness problem, the current probabilistic models are limited by the lack of deep linguistic information. In this paper we investigate the role of cross-linguistic information in the task of English NP semantic interpretation and show the importance of a set of novel linguistic features. 3 Corpus Analysis For a better understanding of the meaning of the N N and N P N instances, we analyzed the semantic behavior of these constructions on a large crosslinguistic corpora of examples. We are interested in what syntactic constructions are used to translate the English instances to the target Romance languages and vice-versa, what semantic relations do these constructions encode, and what is the corpus distribution of the semantic relations. 3.1 Lists of semantic classification relations Although the NP interpretation problem has been studied for a long time, researchers haven’t agreed on the number and the level of abstraction of these semantic categories. They can vary from a few prepositions (Lauer, 1995) to hundreds or thousands specific semantic relations (Finin, 1980). The more abstract the categories, the more noun phrases are covered, but also the more room for variation as to which category a phrase should be assigned. In this paper we experiment with two state of the art classification sets used in NP interpretation. The first is a core set of 22 semantic relations (22 SRs) identified by us from the computational linguistics literature. 
This list, presented in Table 1 along with examples is general enough to cover a large majority of text semantics while keeping the semantic relations to a manageable number. The second set is Lauer’s list of 8 prepositions (8 PP) and can be applied only to noun compounds (of, for, with, in, on, at, about, and from – e.g., according to this classification, love story can be classified as story about love). We selected these sets as they are of different size and contain semantic classification categories at different levels of abstraction. Lauer’s list is more abstract and, thus capable of encoding a large number of noun compound instances, while the 22-SR list contains finer grained semantic categories. We show below the coverage of these semantic lists on two different corpora and how well they solve the interpretation problem of noun phrases. 3.2 The data The data was collected from two text collections with different distributions and of different genre, 569 POSSESSION (family estate); KINSHIP (sister of the boy); PROPERTY (lubricant viscosity); AGENT (return of the natives); THEME (acquisition of stock); TEMPORAL (morning news); DEPICTION-DEPICTED (a picture of my niece); PART-WHOLE (brush hut); HYPERNYMY (IS-A) (daisy flower); CAUSE (scream of pain); MAKE/PRODUCE (chocolate factory); INSTRUMENT (laser treatment); LOCATION (castle in the desert); PURPOSE (cough syrup); SOURCE (grapefruit oil); TOPIC (weather report); MANNER (performance with passion); beneficiary (rights of citizens); MEANS (bus service); EXPERIENCER (fear of the girl); MEASURE (cup of sugar); TYPE (framework law); Table 1: The list of 22 semantic relations (22-SRs). Europarl1 and CLUVI2. The Europarl data was assembled by combining the Spanish-English, ItalianEnglish, French-English and Portuguese-English corpora which were automatically aligned based on exact matches of English translations. Then, we considered only the English sentences which appeared verbatim in all four language pairs. The resulting English corpus contained 10,000 sentences which were syntactically parsed (Charniak, 2000). From these we extracted the first 3,000 NP instances (N N: 48.82% and N P N: 51.18%). CLUVI is an open text repository of parallel corpora of contemporary oral and written texts in some of the Romance languages. Here, we focused only on the English-Portuguese and English-Spanish parallel texts from the works of John Steinbeck, H. G. Wells, J. Salinger, and others. Using the CLUVI search interface we created a sentence-aligned parallel corpus of 2,800 English-Spanish and EnglishPortuguese sentences. The English versions were automatically parsed after which each N N and N P N instance thus identified was manually mapped to the corresponding translations. The resulting corpus contains 2,200 English instances with a distribution of 26.77% N N and 73.23% N P N. 3.3 Corpus Annotation For each corpus, each NP instance was presented separately to two experienced annotators in a web interface in context along with the English sentence and its translations. Since the corpora do not cover some of the languages (Romanian in Europarl and CLUVI, and Italian and French in CLUVI), three other native speakers of these languages and fluent in English provided the translations which were 1http://www.isi.edu/koehn/europarl/. This corpus contains over 20 million words in eleven official languages of the European Union covering the proceedings of the European Parliament from 1996 to 2001. 
2CLUVI - Linguistic Corpus of the University of Vigo - Parallel Corpus 2.1 - http://sli.uvigo.es/CLUVI/ added to the list. The two computational semantics annotators had to tag each English constituent noun with its corresponding WordNet sense and each instance with the corresponding semantic category. If the word was not found in WordNet the instance was not considered. Whenever the annotators found an example encoding a semantic category other than those provided or they didn’t know what interpretation to give, they had to tag it as “OTHER-SR”, and respectively “OTHER-PP”3. The details of the annotation task and the observations drawn from there are presented in a companion paper (Girju, 2007). The corpus instances used in the corpus analysis phase have the following format: <NPEn ;NPEs; NPIt; NPFr; NPPort; NPRo; target>. The word target is one of the 23 (22 + OTHER-SR) semantic relations and one of the eight prepositions considered or OTHER-PP (with the exception of those N P N instances that already contain a preposition). For example, <development cooperation; cooperaci´on para el desarrollo; cooperazione allo sviluppo; coop´eration au d´eveloppement; cooperare pentru dezvoltare; PURPOSE / FOR>. The annotators’ agreement was measured using Kappa statistics: K = Pr(A)−Pr(E) 1−Pr(E) , where Pr(A) is the proportion of times the annotators agree and Pr(E) is the probability of agreement by chance. The Kappa values were obtained on Europarl (N N: 0.80 for 8-PP and 0.61 for 22-SR; N P N: 0.67 for 22-SR) and CLUVI (N N: 0.77 for 8-PP and 0.56 for 22-SR; N P N: 0.68 for 22-SR). We also computed the number of pairs that were tagged with OTHER by both annotators for each semantic relation and preposition paraphrase, over the number of examples classified in that category by at least one of the judges (in Europarl: 91% for 8-PP and 78% for 22SR; in CLUVI: 86% for 8-PP and 69% for 22-SR). The agreement obtained on the Europarl corpus is 3The annotated corpora resulted in this research is available at http://apfel.ai.uiuc.edu. 570 higher than the one on CLUVI on both classification sets. This is partially explained by the distribution of semantic relations in both corpora, as will be shown in the next subsection. 3.4 Cross-linguistic distribution of Syntactic Constructions From the sets of 2,954 (Europarl) and 2,168 (CLUVI) instances resulted after annotation, the data show that over 83% of the translation patterns for both text corpora on all languages were of the type N N and N P N. However, while their distribution is balanced in the Europarl corpus (about 45%, with a 64% N P N – 26% N N ratio for Romanian), in CLUVI the N P N constructions occur in more than 85% of the cases (again, with the exception of Romanian – 50%). It is interesting to note here that some of the English NPs are translated into both noun–noun and noun–adjective compounds in the target languages. For example, love affair translates in Italian as storia d’amore or the noun–adjective compound relazione amorosa. There are also instances that have just one word correspondent in the target language (e.g., ankle boot is bottine in French). The rest of the data is encoded by other syntactic paraphrases (e.g., bomb site is luogo dove `e esplosa la bomba (It.)). 4. From the initial corpus we considered those English instances that had all the translations encoded only by N N and N P N. 
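Stepping back to the inter-annotator agreement reported above, the kappa figures follow the standard two-annotator computation of K = (Pr(A) - Pr(E)) / (1 - Pr(E)). The short sketch below uses hypothetical labels, not the paper's annotation data.

```python
def cohen_kappa(labels_a, labels_b):
    """Two-annotator kappa: Pr(A) is the observed agreement and Pr(E) the
    agreement expected by chance from the two annotators' label distributions."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    pr_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    pr_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (pr_a - pr_e) / (1 - pr_e)

# Hypothetical toy annotation (not the paper's data).
a = ["PURPOSE", "THEME", "THEME", "PART-WHOLE", "TYPE", "THEME"]
b = ["PURPOSE", "THEME", "AGENT", "PART-WHOLE", "TYPE", "TYPE"]
print(round(cohen_kappa(a, b), 2))   # 0.59
```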
Out of these, we selected only 1,023 Europarl and 1,008 CLUVI instances encoded by N N and N P N in all languages considered and resulted after agreement. 4 Model 4.1 Feature space We have identified and experimented with 13 NP features presented below. With the exceptions of features F1-F5 (Girju et al., 2005), all the other features are novel. A. English Features F1 and F2. Noun semantic class specifies the WordNet sense of the head (F1) and modifier noun (F2) and implicitly points to all its hypernyms. For example, the hypernyms of car#1 are: {motor vehi4“the place where the bomb is exploded” (It.) cle}, .. {entity}. This feature helps generalize over the semantic classes of the two nouns in the corpus. F3 and F4. WordNet derivationally related form specifies if the head (F3) and the modifier (F4) nouns are related to a corresponding WordNet verb (e.g. statement derived from to state; cry from to cry). F5. Prepositional cues that link the two nouns in an NP. These can be either simple or complex prepositions such as “of” or “according to”. In case of N N instances, this feature is “–” (e.g., framework law). F6 and F7. Type of nominalized noun indicates the specific class of nouns the head (F6) or modifier (F7) belongs to depending on the verb it derives from. First, we check if the noun is a nominalization. For English we used NomLex-Plus (Meyers et al., 2004) to map nouns to corresponding verbs.5 For example, “destruction of the city”, where destruction is a nominalization. F6 and F7 may overlap with features F3 and F4 which are used in case the noun to be checked does not have an entry in the NomLex-Plus dictionary. These features are of particular importance since they impose some constraints on the possible set of relations the instance can encode. They take the following values (identified based on list of verbs extracted from VerbNet (Kipper et al., 2000)): a. Active form nouns which have an intrinsic active voice predicate-argument structure. (Giorgi and Longobardi, 1991) argue that in English this is a necessary restriction. Most of the time, they represent states of emotion, such as fear, desire, etc. These nouns mark their internal argument through of and require most of the time prepositions like por and not de when translated in Romance. Our observations on the Romanian translations (captured by features F12 and F13 below) show that the possible cases of ambiguity are solved by the type of syntactic construction used. For example, N N genitivemarked constructions are used for EXPERIENCER– encoding instances, while N de N or N pentru N (N for N) are used for other relations. Such examples are the love of children – THEME (and not the love by the children). (Giorgi and Longobardi, 1991) mention that with such nouns that resist passivisation, 5NomLex-Plus is a hand-coded database of 5,000 verb nominalizations, de-adjectival, and de-adverbial nouns including the corresponding subcategorization frames (verb-argument structure information). 571 the preposition introducing the internal argument, even if it is of, has always a semantic content, and is not a bare case-marker realizing the genitive case. b. Unaccusative (ergative) nouns which are derived from ergative verbs that take only internal arguments (e.g., not agentive ones). For example, the transitive verb to disband allows the subject to be deleted as in the following sentences (1) “The lead singer disbanded the group in 1991.” and (2) “The group disbanded.”. 
Thus, the corresponding ergative nominalization the disbandment of the group encodes a THEME relation and not AGENT. c. Unergative (intransitive) nouns are derived from intransitive verbs and take only AGENT semantic relations. For example, the departure of the girl. d. Inherently passive nouns such as the capture of the soldier. These nouns, like the verbs they are derived from, assume a default AGENT (subject) and being transitive, associate to their internal argument (introduced by “of” in the example above) the THEME relation. B. Romance Features F8, F9, F10, F11 and F12. Prepositional cues that link the two nouns are extracted from each translation of the English instance: F8 (Es.), F9 (Fr.), F10 (It.), F11 (Port.), and F12 (Ro.). These can be either simple or complex prepositions (e.g., de, in materia de (Es.)) in all five Romance languages, or the Romanian genitival article a/ai/ale. In Romanian the genitive case is assigned by the definite article of the first noun to the second noun, case realized as a suffix if the second noun is preceded by the definite article or as one of the genitival articles a/ai/ale. For example, the noun phrase the beauty of the girl is translated as frumuset¸ea fetei (beauty-the girl-gen), and the beauty of a girl as frumuset¸ea unei fete (beautythe gen girl). For N N instances, this feature is “–”. F13. Noun inflection is defined only for Romanian and shows if the modifier noun is inflected (indicates the genitive case). This feature is used to help differentiate between instances encoding IS-A and other semantic relations in N N compounds in Romanian. It also helps in features F6 and F7, case a) when the choice of syntactic construction reflects different semantic content. For example, iubirea pentru copii (N P N) (the love for children) and not iubirea copiilor (N N) (love expressed by the children). 4.2 Learning Models We have experimented with the support vector machines (SVM) model6 and compared the results against two state-of-the-art models: a supervised model, Semantic Scattering (SS), (Moldovan and Badulescu, 2005), and a web-based unsupervised model (Lapata and Keller, 2004). The SVM and SS models were trained and tested on the Europarl and CLUVI corpora using a 8:2 ratio. The test dataset was randomly selected from each corpus and the test nouns (only for English) were tagged with the corresponding sense in context using a state of the art WSD tool (Mihalcea and Faruque, 2004). After the initial NP instances in the training and test corpora were expanded with the corresponding features, we had to prepare them for SVM and SS. The method consists of a set of automatic iterative procedures of specialization of the English nouns on the WordNet IS-A hierarchy. Thus, after a set of necessary specialization iterations, the method produces specialized examples which through supervised machine learning are transformed into sets of semantic rules. This specialization procedure improves the system’s performance since it efficiently separates the positive and negative noun-noun pairs in the WordNet hierarchy. Initially, the training corpus consists of examples in the format exemplified by the feature space. Note that for the English NP instances, each noun constituent was expanded with the corresponding WordNet top semantic class. At this point, the generalized training corpus contains two types of examples: unambiguous and ambiguous. 
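The WordNet information behind features F1-F4 and the class generalization just described can be approximated with NLTK's WordNet interface. The sketch below is illustrative only (the paper does not specify which WordNet API was used); the specialization of the ambiguous generalized examples is described next.

```python
from nltk.corpus import wordnet as wn   # requires the WordNet data: nltk.download('wordnet')

def hypernym_chain(synset):
    """F1/F2-style information: the synset and its hypernyms up to the top of
    the noun hierarchy (e.g. car#1 -> motor vehicle ... -> entity)."""
    chain = [synset]
    while chain[-1].hypernyms():
        chain.append(chain[-1].hypernyms()[0])   # follow the first hypernym path
    return chain

def derivationally_related_verbs(noun_lemma):
    """F3/F4-style check: verbs the noun is derivationally related to
    (e.g. 'statement' derived from 'to state')."""
    verbs = set()
    for lem in wn.lemmas(noun_lemma, pos=wn.NOUN):
        for rel in lem.derivationally_related_forms():
            if rel.synset().pos() == 'v':
                verbs.add(rel.name())
    return verbs

print(hypernym_chain(wn.synset('car.n.01'))[-1].name())   # entity.n.01
print(derivationally_related_verbs('statement'))          # e.g. includes 'state'
```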
The second situation occurs when the training corpus classifies the same noun – noun pair into more than one semantic category. For example, both relationships “chocolate cake”-PART-WHOLE and “chocolate article”-TOPIC are mapped into the more general type <entity#1, entity#1, PART-WHOLE/TOPIC>7. We recursively specialize these examples to eliminate the ambiguity. By specialization, the semantic class is replaced with the corresponding hyponym for that particular sense, i.e. the concept immediately below in the hierarchy. These steps are repeated until there are no 6We used the package LIBSVM with a radial-based kernel http://www.csie.ntu.edu.tw/∼cjlin/libsvm/ 7The specialization procedure applies only to features 1, 2. 572 more ambiguous examples. For the example above, the specialization stops at the first hyponym of entity: physical entity (for cake) and abstract entity (for article). For the unambiguous examples in the generalized training corpus (those that are classified with a single semantic relation), constraints are determined using cross validation on SVM. A. Semantic Scattering uses a training data set to establish a boundary G∗on WordNet noun hierarchies such that each feature pair of noun – noun senses fij on this boundary maps uniquely into one of a predefined list of semantic relations, and any feature pair above the boundary maps into more than one semantic relation. For any new pair of noun– noun senses, the model finds the closest WordNet boundary pair. The authors define with SCm = {fm i } and SCh = {fh j } the sets of semantic class features for modifier noun and, respectively head noun. A pair of <modifier – head> nouns maps uniquely into a semantic class feature pair < fm i , fh j >, denoted as fij. The probability of a semantic relation r given feature pair fij, P(r|fij) = n(r,fij) n(fij) , is defined as the ratio between the number of occurrences of a relation r in the presence of feature pair fij over the number of occurrences of feature pair fij in the corpus. The most probable semantic relation ˆr is arg maxr∈R P(r|fij) = arg maxr∈R P(fij|r)P(r). B. (Lapata and Keller, 2004)’s web-based unsupervised model classifies noun - noun instances based on Lauer’s list of 8 prepositions and uses the web as training corpus. They show that the best performance is obtained with the trigram model f(n1, p, n2). The count used for a given trigram is the number of pages returned by Altavista on the trigram corresponding queries. For example, for the test instance war stories, the best number of hits was obtained with the query stories about war. For the Europarl and CLUVI test sets, we replicated Lapata & Keller’s experiments using Google8. We formed inflected queries with the patterns they proposed and searched the web. 8As Google limits the number of queries to 1,000 per day, we repeated the experiment for a number of days. Although (Lapata and Keller, 2004) used Altavista in their experiments, they showed there is almost no difference between the correlations achieved using Google and Altavista counts. 5 Experimental results Table 2 shows the results obtained against SS and Lapata & Keller’s model on both corpora and the contribution the features exemplified in one baseline and six versions of the SVM model. The baseline is defined only for the English part of the NP feature set and measures the the contribution of the WordNet IS-A lexical hierarchy specialization. 
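Before describing the baseline further, note that the Semantic Scattering decision rule above reduces to a count-based argmax over training statistics. A toy sketch with made-up counts:

```python
from collections import Counter

# Hypothetical counts n(r, f_ij) of relation r observed with a feature pair
# f_ij = (modifier class, head class); the numbers are made up for illustration.
counts = {
    ("substance", "food"):    Counter({"PART-WHOLE": 12, "TOPIC": 1}),
    ("abstraction", "group"): Counter({"THEME": 7, "AGENT": 3}),
}

def most_probable_relation(f_ij):
    """Semantic Scattering decision rule: argmax_r P(r | f_ij), with
    P(r | f_ij) = n(r, f_ij) / n(f_ij) estimated from training counts."""
    c = counts[f_ij]
    total = sum(c.values())
    return max(c, key=lambda r: c[r] / total)

print(most_probable_relation(("substance", "food")))   # PART-WHOLE
```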
The baseline does not differentiate between unambiguous and ambiguous training examples (after just one level specialization) and thus, does not specialize the ambiguous ones. Moreover, here we wanted to see what is the difference between SS and SVM, and what is the contribution of the other English features, such as preposition and nominalization (F1–F7). The table shows that, overall the performance is better for the Europarl corpus than for CLUVI. For the Baseline and SV M1, SS [F1 + F2] gives better results than SVM. The inclusion of other English features (SVM [F1–F7]) adds more than 15% (with a higher increase in Europarl) for SV M1. The contribution of Romance linguistic features. Since our intuition is that the more translations are provided for an English noun phrase instance, the better the results, we wanted to see what is the impact of each Romance language on the overall performance. Thus, SV M2 shows the results obtained for English and the Romance language that contributed the least to the performance (F1–F12). Here we computed the performance on all five English – Romance language combinations and chose the Romance language that provided the best result. Thus, SVM #2, #3, #4, #5, and #6 add Spanish, French, Italian, Portuguese, and Romanian in this order and show the contribution of each Romance preposition and all features for English. The language ranking in Table 2 shows that Romance languages considered here have a different contribution to the overall performance. While the addition of Italian in Europarl decreases the performance, Portuguese doesn’t add anything. However, a closer analysis of the data shows that this is mostly due to the distribution of the corpus instances. For example, French, Italian, Spanish, and Portuguese are most of the time consistent in the choice of preposition (e.g. most of the time, if the preposition ’de’ (’of’) is used in French, then the 573 Learning models Results [%] CLUVI Europarl 8-PP 22-SR 8-PP 22-SR Baseline (En.) (no specializ.) SS (F1+F2) 44.11 48.03 38.7 38 SVM (F1+F2) 36.37 40.67 31.18 34.81 SVM (F1-F7) – 52.15 – 47.37 SVM1 (En.) SS (F1+F2) 56.22 61.33 53.1 56.81 SVM (F1+F2) 45.08 46.1 40.23 42.2 SVM (F1-F7) – 62.54 – 74.19 SVM2 (En. + Es.) SVM (F1-F8) – 64.18 – 75.74 SVM3 (En.+Es.+Fr.) SVM (F1-F9) – 67.8 – 76.52 SVM4 (En.+Es.+Fr.+It.) SVM (F1-F10) – 66.31 – 75.74 SVM5 (En.+Es.+Fr.+It+Port.) SVM (F1-F11) – 67.12 – 75.74 SVM6 (En.+Romance: F1–F13) – 74.31 – 77.9 Lapata & Keller’s unsupervised model (En.) 44.15 – 45.31 – Table 2: The performance of the cross-linguistic SVM models compared against one baseline, SS model and Lapata & Keller’s unsupervised model. Accuracy (number of correctly labeled instances over the number of instances in the test set). corresponding preposition is used in the other four language translations). A notable exception here is Romanian which provides two possible constructions: the N P N and the genitive-marked N N. The table shows (in the increase in performance between SV M5 and SV M6) that this choice is not random, but influenced by the meaning of the instances (features F12, F13). This observation is also supported by the contribution of each feature to the overall performance. For example, in Europarl, the WordNet verb and nominalization features of the head noun (F3, F6) have a contribution of 4.08%, while for the modifier nouns it decreases by about 2%. The preposition (F5) contributes 4.41% (Europarl) and 5.24% (CLUVI) to the overall performance. 
A closer analysis of the data shows that in Europarl most of the N N instances were naming noun compounds such as framework law (TYPE) and, most of the time, are encoded by N N patterns in the target languages (e.g., legge quadro (It.)). In the CLUVI corpus, on the other hand, the N N Romance translations represented only 1% of the data. A notable exception here is Romanian where most NPs are represented as genitive–marked noun compounds. However, there are instances that are encoded mostly or only as N P N constructions and this choice correlates with the meaning of the instance. For example, the milk glass (PURPOSE) translates as paharul de lapte (glass-the of milk) and not as paharul laptelui (glass-the milk-gen), the olive oil (SOURCE) translates as uleiul de mˇasline (oil-the of olive) and not as uleiul mˇaslinei (oil-the olive-gen). Other examples include CAUSE and TOPIC. Lauer’s set of 8 prepositions represents 94.5% (Europarl) and 97% (CLUVI) of the N P N instances. From these, the most frequent preposition is “of” with a coverage of 70.31% (Europarl) and 85.08% (CLUVI). Moreover, in the Europarl corpus, 26.39% of the instances are synthetic phrases (where one of the nouns is a nominalization) encoding AGENT, EXPERIENCER, THEME, BENEFICIARY. Out of these instances, 74.81% use the preposition of. In CLUVI, 11.71% of the examples were verbal, from which the preposition of has a coverage of 82.20%. The many-to-many mappings of the prepositions (especially of/de) to the semantic classes adds to the complexity of the interpretation task. Thus, for the interpretation of these constructions a system must rely on the semantic information of the preposition and two constituent nouns in particular, and on context in general. In Europarl, the most frequently occurring relations are PURPOSE, TYPE, and THEME that together represent about 57% of the data followed by PART-WHOLE, PROPERTY, TOPIC, AGENT, and LOCATION with an average coverage of about 6.23%. Moreover, other relations such as KINSHIP, DEPICTION, MANNER, MEANS did not occur in this corpus and 5.08% represented OTHER-SR relations. This semantic distribution contrasts with the one in CLUVI, which uses a more descriptive language. Here, the most frequent relation by far 574 is PART-WHOLE (32.14%), followed by LOCATION (12.40%), THEME (9.23%) and OTHER-SR (7.74%). It is interesting to note here that only 5.70% of the TYPE relation instances in Europarl were unique. This is in contrast with the other relations in both corpora, where instances were mostly unique. We also report here our observations on Lapata & Keller’s unsupervised model. An analysis of these results showed that the order of the constituent nouns in the N P N paraphrase plays an important role. For example, a search for blood vessels generated similar frequency counts for vessels of blood and blood in vessels. About 30% noun noun paraphrasable pairs preserved the order in the corresponding N P N paraphrases. We also manually checked the first five entries generated by Google for each most frequent prepositional paraphrase for 50 instances and noticed that about 35% of them were wrong due to syntactic and/or semantic ambiguities. 
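The query-generation step of the replicated web-based model can be sketched as follows. The hit-count function is a placeholder for search-engine page counts, and the inflected query variants actually used are omitted.

```python
PREPOSITIONS = ["of", "for", "with", "in", "on", "at", "about", "from"]

def best_paraphrase(n1, n2, hit_count):
    """Lauer-style interpretation of the compound 'n1 n2': score each trigram
    paraphrase '<n2> <p> <n1>' by its web count and return the preposition
    with the highest count, e.g. 'war stories' -> 'stories about war'."""
    queries = {p: f'"{n2} {p} {n1}"' for p in PREPOSITIONS}
    return max(PREPOSITIONS, key=lambda p: hit_count(queries[p]))

# Toy usage with made-up counts standing in for search-engine hits.
fake_counts = {'"stories about war"': 120_000, '"stories of war"': 95_000}
print(best_paraphrase("war", "stories", lambda q: fake_counts.get(q, 0)))   # about
```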
Thus, since we wanted to measure the impact of these ambiguities of noun compounds on the interpretation performance, we further tested the probabilistic web-based model on four distinct test sets selected from Europarl, each containing 30 noun noun pairs encoding different types of ambiguity: in set#1 the noun constituents had only one part of speech and one WordNet sense; in set#2 the nouns had at least two possible parts of speech and were semantically unambiguous, in set#3 the nouns were ambiguous only semantically, and in set#4 they were ambiguous both syntactically and semantically. For unambiguous noun-noun pairs (set#1), the model obtained an accuracy of 35.01%, while for more semantically ambiguous compounds it obtained an accuracy of about 48.8%. This shows that for more semantically ambiguous noun - noun pairs, the webbased probabilistic model introduces a significant number of false positives. Thus, the more abstract the categories, the more noun compounds are covered, but also the more room for variation as to which category a compound should be assigned. 6 Discussion and Conclusions In this paper we presented a supervised, knowledgeintensive interpretation model which takes advantage of new linguistic information from English and a list of five Romance languages. Our approach to NP interpretation is novel in several ways. We defined the problem in a cross-linguistic framework and provided empirical observations on the distribution of the syntax and meaning of noun phrases on two different corpora based on two state-of-the-art classification tag sets. As future work we consider the inclusion of other features such as the semantic classes of Romance nouns from aligned EuroWordNets, and other sentence features. Since the results obtained can be seen as an upper bound on NP interpretation due to perfect English - Romance NP alignment, we will experiment with automatic translations generated for the test data. Moreover, we like to extend the analysis to other set of languages whose structures are very different from English and Romance. References T. W. Finin. 1980. The Semantic Interpretation of Compound Nominals. Ph.D. thesis, University of Illinois at UrbanaChampaign. A. Giorgi and G. Longobardi. 1991. The syntax of noun phrases. Cambridge University Press. R. Girju, D. Moldovan, M. Tatu, and D. Antohe. 2005. On the semantics of noun compounds. Computer Speech and Language, 19(4):479–496. R. Girju. 2007. Experiments with an annotation scheme for a knowledge-rich noun phrase interpretation system. The Linguistic Annotation Workshop at ACL, Prague. Su Nam Kim and T. Baldwin. 2006. Interpreting semantic relations in noun compounds via verb semantics. COLING-ACL. K. Kipper, H. Dong, and M. Palmer. 2000. Class-based construction of a verb lexicon. AAAI Conference, Austin. M. Lapata and F. Keller. 2004. The Web as a baseline: Evaluating the performance of unsupervised Web-based models for a range of NLP tasks. HLT-NAACL. M. Lauer. 1995. Corpus statistics meet the noun compound: Some empirical results. ACL, Cambridge, Mass. A. Meyers, R. Reeves, C. Macleod, R. Szekeley V. Zielinska, and B. Young. 2004. The cross-breeding of dictionaries. LREC-2004, Lisbon, Portugal. R. Mihalcea and E. Faruque. 2004. Senselearner: Minimally supervised word sense disambiguation for all words in open text. ACL/SIGLEX Senseval-3, Barcelona, Spain. D. Moldovan and A. Badulescu. 2005. A semantic scattering model for the automatic interpretation of genitives. HLT/EMNLP Conference, Vancouver, Canada. P. 
Pantel and M. Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. COLING/ACL, Sydney, Australia. B. Rosario and M. Hearst. 2001. Classifying the semantic relations in noun compounds. EMNLP Conference. R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. COLING-ACL. P. Turney. 2006. Expressing implicit semantic relations without supervision. COLING/ACL, Sydney, Australia. 575
2007
72
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 576–583, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Learning to Extract Relations from the Web using Minimal Supervision Razvan C. Bunescu Department of Computer Sciences University of Texas at Austin 1 University Station C0500 Austin, TX 78712 [email protected] Raymond J. Mooney Department of Computer Sciences University of Texas at Austin 1 University Station C0500 Austin, TX 78712 [email protected] Abstract We present a new approach to relation extraction that requires only a handful of training examples. Given a few pairs of named entities known to exhibit or not exhibit a particular relation, bags of sentences containing the pairs are extracted from the web. We extend an existing relation extraction method to handle this weaker form of supervision, and present experimental results demonstrating that our approach can reliably extract relations from web documents. 1 Introduction A growing body of recent work in information extraction has addressed the problem of relation extraction (RE), identifying relationships between entities stated in text, such as LivesIn(Person, Location) or EmployedBy(Person, Company). Supervised learning has been shown to be effective for RE (Zelenko et al., 2003; Culotta and Sorensen, 2004; Bunescu and Mooney, 2006); however, annotating large corpora with examples of the relations to be extracted is expensive and tedious. In this paper, we introduce a supervised learning approach to RE that requires only a handful of training examples and uses the web as a corpus. Given a few pairs of well-known entities that clearly exhibit or do not exhibit a particular relation, such as CorpAcquired(Google, YouTube) and not(CorpAcquired(Yahoo, Microsoft)), a search engine is used to find sentences on the web that mention both of the entities in each of the pairs. Although not all of the sentences for positive pairs will state the desired relationship, many of them will. Presumably, none of the sentences for negative pairs state the targeted relation. Multiple instance learning (MIL) is a machine learning framework that exploits this sort of weak supervision, in which a positive bag is a set of instances which is guaranteed to contain at least one positive example, and a negative bag is a set of instances all of which are negative. MIL was originally introduced to solve a problem in biochemistry (Dietterich et al., 1997); however, it has since been applied to problems in other areas such as classifying image regions in computer vision (Zhang et al., 2002), and text categorization (Andrews et al., 2003; Ray and Craven, 2005). We have extended an existing approach to relation extraction using support vector machines and string kernels (Bunescu and Mooney, 2006) to handle this weaker form of MIL supervision. This approach can sometimes be misled by textual features correlated with the specific entities in the few training pairs provided. Therefore, we also describe a method for weighting features in order to focus on those correlated with the target relation rather than with the individual entities. We present experimental results demonstrating that our approach is able to accurately extract relations from the web by learning from such weak supervision. 2 Problem Description We address the task of learning a relation extraction system targeted to a fixed binary relationship R. 
The only supervision given to the learning algo576 rithm is a small set of pairs of named entities that are known to belong (positive) or not belong (negative) to the given relationship. Table 1 shows four positive and two negative example pairs for the corporate acquisition relationship. For each pair, a bag of sentences containing the two arguments can be extracted from a corpus of text documents. The corpus is assumed to be sufficiently large and diverse such that, if the pair is positive, it is highly likely that the corresponding bag contains at least one sentence that explicitly asserts the relationship R between the two arguments. In Section 6 we describe a method for extracting bags of relevant sentences from the web. +/− Arg a1 Arg a2 + Google YouTube + Adobe Systems Macromedia + Viacom DreamWorks + Novartis Eon Labs − Yahoo Microsoft − Pfizer Teva Table 1: Corporate Acquisition Pairs. Using a limited set of entity pairs (e.g. those in Table 1) and their associated bags as training data, the aim is to induce a relation extraction system that can reliably decide whether two entities mentioned in the same sentence exhibit the target relationship or not. In particular, when tested on the example sentences from Figure 1, the system should classify S1, S3,and S4 as positive, and S2 and S5 as negative. +/S1: Search engine giant Google has bought videosharing website YouTube in a controversial $1.6 billion deal. −/S2: The companies will merge Google’s search expertise with YouTube’s video expertise, pushing what executives believe is a hot emerging market of video offered over the Internet. +/S3: Google has acquired social media company, YouTube for $1.65 billion in a stock-for-stock transaction as announced by Google Inc. on October 9, 2006. +/S4: Drug giant Pfizer Inc. has reached an agreement to buy the private biotechnology firm Rinat Neuroscience Corp., the companies announced Thursday. −/S5: He has also received consulting fees from Alpharma, Eli Lilly and Company, Pfizer, Wyeth Pharmaceuticals, Rinat Neuroscience, Elan Pharmaceuticals, and Forest Laboratories. Figure 1: Sentence examples. As formulated above, the learning task can be seen as an instance of multiple instance learning. However, there are important properties that set it apart from problems previously considered in MIL. The most distinguishing characteristic is that the number of bags is very small, while the average size of the bags is very large. 3 Multiple Instance Learning Since its introduction by Dietterich (1997), an extensive and quite diverse set of methods have been proposed for solving the MIL problem. For the task of relation extraction, we consider only MIL methods where the decision function can be expressed in terms of kernels computed between bag instances. This choice was motivated by the comparatively high accuracy obtained by kernel-based SVMs when applied to various natural language tasks, and in particular to relation extraction. Through the use of kernels, SVMs (Vapnik, 1998; Sch¨olkopf and Smola, 2002) can work efficiently with instances that implicitly belong to a high dimensional feature space. When used for classification, the decision function computed by the learning algorithm is equivalent to a hyperplane in this feature space. Overfitting is avoided in the SVM formulation by requiring that positive and negative training instances be maximally separated by the decision hyperplane. Gartner et al. (2002) adapted SVMs to the MIL setting using various multi-instance kernels. 
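To make the weak supervision described above concrete, here is a minimal sketch of how the bags and the bag-to-instance flattening might be represented; the class and function names, and the toy sentence in the negative bag, are mine rather than the paper's.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Bag:
    """All sentences retrieved for one entity pair (cf. Table 1)."""
    arg1: str
    arg2: str
    positive: bool                      # pair known to exhibit the relation?
    sentences: List[str] = field(default_factory=list)

# Toy bags echoing Table 1 / Figure 1; real bags are built from web search
# results as described in Section 6.
bags = [
    Bag("Google", "YouTube", True, [
        "Search engine giant Google has bought video-sharing website YouTube ...",
        "The companies will merge Google's search expertise with YouTube's video expertise ...",
    ]),
    Bag("Yahoo", "Microsoft", False, [
        "Yahoo and Microsoft were both mentioned in the same analyst report ...",
    ]),
]

def flatten(bags: List[Bag]) -> List[Tuple[str, int]]:
    """The simple MIL-to-supervised transformation discussed in Section 3:
    every sentence inherits the label of its bag, so positive bags yield
    noisy positive instances."""
    return [(s, 1 if b.positive else -1) for b in bags for s in b.sentences]
```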
Two of these – the normalized set kernel, and the statistic kernel – have been experimentally compared to other methods by Ray and Craven (2005), with competitive results. Alternatively, a simple approach to MIL is to transform it into a standard supervised learning problem by labeling all instances from positive bags as positive. An interesting outcome of the study conducted by Ray and Craven (2005) was that, despite the class noise in the resulting positive examples, such a simple approach often obtains competitive results when compared against other more sophisticated MIL methods. We believe that an MIL method based on multiinstance kernels is not appropriate for training datasets that contain just a few, very large bags. In a multi-instance kernel approach, only bags (and not instances) are considered as training examples, 577 which means that the number of support vectors is going to be upper bounded by the number of training bags. Taking the bags from Table 1 as a sample training set, the decision function is going to be specified by at most seven parameters: the coefficients for at most six support vectors, plus an optional bias parameter. A hypothesis space characterized by such a small number of parameters is likely to have insufficient capacity. Based on these observations, we decided to transform the MIL problem into a standard supervised problem as described above. The use of this approach is further motivated by its simplicity and its observed competitive performance on very diverse datasets (Ray and Craven, 2005). Let X be the set of bags used for training, Xp ⊆X the set of positive bags, and Xn ⊆X the set of negative bags. For any instance x ∈X from a bag X ∈X, let φ(x) be the (implicit) feature vector representation of x. Then the corresponding SVM optimization problem can be formulated as in Figure 2: minimize: J(w, b, ξ) = 1 2∥w∥2 + C L  cp Ln L Ξp + cn Lp L Ξn  Ξp = X X∈Xp X x∈X ξx Ξn = X X∈Xn X x∈X ξx subject to: w φ(x) + b ≥+1 −ξx, ∀x ∈X ∈Xp w φ(x) + b ≤−1 + ξx, ∀x ∈X ∈Xn ξx ≥0 Figure 2: SVM Optimization Problem. The capacity control parameter C is normalized by the total number of instances L = Lp + Ln = P X∈Xp |X| + P X∈Xn |X|, so that it remains independent of the size of the dataset. The additional non-negative parameter cp (cn = 1−cp) controls the relative influence that false negative vs. false positive errors have on the value of the objective function. Because not all instances from positive bags are real positive instances, it makes sense to have false negative errors be penalized less than false positive errors (i.e. cp < 0.5). In the dual formulation of the optimization problem from Figure 2, bag instances appear only inside dot products of the form K(x1, x2) = φ(x1)φ(x2). The kernel K is instantiated to a subsequence kernel, as described in the next section. 4 Relation Extraction Kernel The training bags consist of sentences extracted from online documents, using the methodology described in Section 6. Parsing web documents in order to obtain a syntactic analysis often gives unreliable results – the type of narrative can vary greatly from one web document to another, and sentences with grammatical errors are frequent. Therefore, for the initial experiments, we used a modified version of the subsequence kernel of Bunescu and Mooney (2006), which does not require syntactic information. This kernel computes the number of common subsequences of tokens between two sentences. 
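One rough way to reproduce the optimization problem of Figure 2 with an off-the-shelf learner is to train a standard SVM on the flattened instances with a precomputed subsequence-kernel matrix, approximating the cp/cn and 1/L terms through per-sample weights. The sketch below uses scikit-learn rather than the authors' modified LibSVM, so it approximates the stated objective rather than reproducing it exactly.

```python
import numpy as np
from sklearn.svm import SVC

def train_flattened_svm(K, y, C=1.0, c_p=0.1):
    """Approximate the Figure 2 objective.

    K   : (L, L) precomputed kernel matrix over all flattened instances
    y   : labels in {+1, -1}; instances from positive bags are all +1
    c_p : penalty weight for false negatives (c_n = 1 - c_p)
    """
    y = np.asarray(y)
    L = len(y)
    L_p = int(np.sum(y > 0))
    L_n = L - L_p
    c_n = 1.0 - c_p
    # Per-instance error weights standing in for c_p*(L_n/L) and c_n*(L_p/L);
    # dividing C by L keeps the capacity term independent of dataset size.
    w = np.where(y > 0, c_p * L_n / L, c_n * L_p / L)
    clf = SVC(kernel="precomputed", C=C / L)
    clf.fit(K, y, sample_weight=w)
    return clf
```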
The subsequences are constrained to be “anchored” at the two entity names, and there is a maximum number of tokens that can appear in a sequence. For example, a subsequence feature for the sentence S1 in Figure 1 is ˜s = “⟨e1⟩. . . bought . . . ⟨e2⟩. . . in . . . billion . . . deal”, where ⟨e1⟩and ⟨e2⟩are generic placeholders for the two entity names. The subsequence kernel induces a feature space where each dimension corresponds to a sequence of words. Any such sequence that matches a subsequence of words in a sentence example is down-weighted as a function of the total length of the gaps between every two consecutive words. More exactly, let s = w1w2 . . . wk be a sequence of k words, and ˜s = w1 g1 w2 g2 . . . wk−1 gk−1 wk a matching subsequence in a relation example, where gi stands for any sequence of words between wi and wi+1. Then the sequence s will be represented in the relation example as a feature with weight computed as τ(s) = λg(˜s). The parameter λ controls the magnitude of the gap penalty, where g(˜s) = P i |gi| is the total gap. Many relations, like the ones that we explore in the experimental evaluation, cannot be expressed without using at least one content word. We therefore modified the kernel computation to optionally ignore subsequence patterns formed exclusively of 578 stop words and punctuation signs. In Section 5.1, we introduce a new weighting scheme, wherein a weight is assigned to every token. Correspondingly, every sequence feature will have an additional multiplicative weight, computed as the product of the weights of all the tokens in the sequence. The aim of this new weighting scheme, as detailed in the next section, is to eliminate the bias caused by the special structure of the relation extraction MIL problem. 5 Two Types of Bias As already hinted at the end of Section 2, there is one important property that distinguishes the current MIL setting for relation extraction from other MIL problems: the training dataset contains very few bags, and each bag can be very large. Consequently, an application of the learning model described in Sections 3 & 4 is bound to be affected by the following two types of bias: ■[Type I Bias] By definition, all sentences inside a bag are constrained to contain the same two arguments. Words that are semantically correlated with either of the two arguments are likely to occur in many sentences. For example, consider the sentences S1 and S2 from the bag associated with “Google” and “YouTube” (as shown in Figure 1). They both contain the words “search” – highly correlated with “Google”, and “video” – highly correlated with “YouTube”, and it is likely that a significant percentage of sentences in this bag contain one of the two words (or both). The two entities can be mentioned in the same sentence for reasons other than the target relation R, and these noisy training sentences are likely to contain words that are correlated with the two entities, without any relationship to R. A learning model where the features are based on words, or word sequences, is going to give too much weight to words or combinations of words that are correlated with either of individual arguments. This overweighting will adversely affect extraction performance through an increased number of errors. A method for eliminating this type of bias is introduced in Section 5.1. 
■[Type II Bias] While Type I bias is due to words that are correlated with the arguments of a relation instance, the Type II bias is caused by words that are specific to the relation instance itself. Using FrameNet terminology (Baker et al., 1998), these correspond to instantiated frame elements. For example, the corporate acquisition frame can be seen as a subtype of the “Getting” frame in FrameNet. The core elements in this frame are the Recipient (e.g. Google) and the Theme (e.g. YouTube), which for the acquisition relationship coincide with the two arguments. They do not contribute any bias, since they are replaced with the generic tags ⟨e1⟩and ⟨e2⟩in all sentences from the bag. There are however other frame elements – peripheral, or extra-thematic – that can be instantiated with the same value in many sentences. In Figure 1, for instance, sentence S3 contains two non-core frame elements: the Means element (e.g “in a stock-for-stock transaction”) and the Time element (e.g. “on October 9, 2006”). Words from these elements, like “stock”, or “October”, are likely to occur very often in the Google-YouTube bag, and because the training dataset contains only a few other bags, subsequence patterns containing these words will be given too much weight in the learned model. This is problematic, since these words can appear in many other frames, and thus the learned model is likely to make errors. Instead, we would like the model to focus on words that trigger the target relationship (in FrameNet, these are the lexical units associated with the target frame). 5.1 A Solution for Type I Bias In order to account for how strongly the words in a sequence are correlated with either of the individual arguments of the relation, we modify the formula for the sequence weight τ(s) by factoring in a weight τ(w) for each word in the sequence, as illustrated in Equation 1. τ(s) = λg(˜s) · Y w∈s τ(w) (1) Given a predefined set of weights τ(w), it is straightforward to update the recursive computation of the subsequence kernel so that it reflects the new weighting scheme. If all the word weights are set to 1, then the new kernel is equivalent to the old one. What we want, however, is a set of weights where words that are correlated with either of the two arguments are given lower weights. For any word, the decrease in weight 579 should reflect the degree of correlation between that word and the two arguments. Before showing the formula used for computing the word weights, we first introduce some notation: • Let X ∈X be an arbitrary bag, and let X.a1 and X.a2 be the two arguments associated with the bag. • Let C(X) be the size of the bag (i.e. the number of sentences in the bag), and C(X, w) the number of sentences in the bag X that contain the word w. Let P(w|X) = C(X, w)/C(X). • Let P(w|X.a1 ∨X.a2) be the probability that the word w appears in a sentence due only to the presence of X.a1 or X.a2, assuming X.a1 and X.a2 are independent causes for w. The word weights are computed as follows: τ(w) = C(X, w) −P(w|X.a1 ∨X.a2) · C(X) C(X, w) = 1 −P(w|X.a1 ∨X.a2) P(w|X) (2) The quantity P(w|X.a1 ∨X.a2) · C(X) represents the expected number of sentences in which w would occur, if the only causes were X.a1 or X.a2, independent of each other. We want to discard this quantity from the total number of occurrences C(X, w), so that the effect of correlations with X.a1 or X.a2 is eliminated. We still need to compute P(w|X.a1 ∨X.a2). 
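Equation (1) above reads τ(s) = λ^g(s̃) · ∏_{w∈s} τ(w), and Equation (2) reduces to τ(w) = 1 − P(w|X.a1 ∨ X.a2) / P(w|X); both come through garbled in this extraction, so that is my reading of them. A small dynamic-programming sketch of the Equation (1) sequence weight follows, with the per-token weights supplied as a dictionary. It is an illustration rather than the authors' kernel implementation, and summing over all embeddings of the pattern in the sentence is my own convention.

```python
import math
from typing import Dict, List

def sequence_weight(pattern: List[str], sentence: List[str],
                    lam: float = 0.75,
                    tau: Dict[str, float] = None) -> float:
    """Gap-penalised weight of `pattern` as a sparse subsequence of
    `sentence`: lam ** (total gap) for each embedding, summed over all
    embeddings, times the product of per-token weights tau(w)
    (tokens missing from `tau` get weight 1.0)."""
    tau = tau or {}
    n = len(sentence)
    dp = None  # dp[j]: summed gap penalty of embeddings whose last match is sentence[j]
    for w in pattern:
        new_dp = [0.0] * n
        for j, tok in enumerate(sentence):
            if tok != w:
                continue
            if dp is None:                 # first pattern token
                new_dp[j] = 1.0
            else:                          # extend earlier partial matches
                new_dp[j] = sum(dp[k] * lam ** (j - k - 1) for k in range(j))
        dp = new_dp
    gap_term = sum(dp) if dp is not None else 0.0
    return gap_term * math.prod(tau.get(w, 1.0) for w in pattern)

# e.g. sequence_weight(["<e1>", "bought", "<e2>"],
#                      ["<e1>", "has", "bought", "<e2>"]) == 0.75  (one gap token)
```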
Because in the definition of P(w|X.a1 ∨X.a2), the arguments X.a1 and X.a2 were considered independent causes, P(w|X.a1 ∨X.a2) can be computed with the noisy-or operator (Pearl, 1986): P(·) = 1−(1−P(w|a1)) · (1−P(w|a2)) (3) = P(w|a1)+P(w|a2)−P(w|a1) · P(w|a2) The quantity P(w|a) represents the probability that the word w appears in a sentence due only to the presence of a, and it could be estimated using counts on a sufficiently large corpus. For our experimental evaluation, we used the following approximation: given an argument a, a set of sentences containing a are extracted from web documents (details in Section 6). Then P(w|a) is simply approximated with the ratio of the number of sentences containing w over the total number of sentences, i.e. P(w|a) = C(w, a)/C(a). Because this may be an overestimate (w may appear in a sentence containing a due to causes other than a), and also because of data sparsity, the quantity τ(w) may sometimes result in a negative value – in these cases it is set to 0, which is equivalent to ignoring the word w in all subsequence patterns. 6 MIL Relation Extraction Datasets For the purpose of evaluation, we created two datasets: one for corporate acquisitions, as shown in Table 2, and one for the person-birthplace relation, with the example pairs from Table 3. In both tables, the top part shows the training pairs, while the bottom part shows the test pairs. +/− Arg a1 Arg a2 Size + Google YouTube 1375 + Adobe Systems Macromedia 622 + Viacom DreamWorks 323 + Novartis Eon Labs 311 − Yahoo Microsoft 163 − Pfizer Teva 247 + Pfizer Rinat Neuroscience 50 (41) + Yahoo Inktomi 433 (115) − Google Apple 281 − Viacom NBC 231 Table 2: Corporate Acquisition Pairs. +/− Arg a1 Arg a2 Size + Franz Kafka Prague 552 + Andre Agassi Las Vegas 386 + Charlie Chaplin London 292 + George Gershwin New York 260 − Luc Besson New York 74 − Wolfgang A. Mozart Vienna 288 + Luc Besson Paris 126 (6) + Marie Antoinette Vienna 105 (39) − Charlie Chaplin Hollywood 266 − George Gershwin London 104 Table 3: Person-Birthplace Pairs. Given a pair of arguments (a1, a2), the corresponding bag of sentences is created as follows: ■A query string “a1 ∗∗∗∗∗∗∗a2” containing seven wildcard symbols between the two arguments is submitted to Google. The preferences are set to search only for pages written in English, with Safesearch turned on. This type of query will match documents where an occurrence of a1 is separated from an occurrence of a2 by at most seven content words. This is an approximation of our actual information 580 need: “return all documents containing a1 and a2 in the same sentence”. ■The returned documents (limited by Google to the first 1000) are downloaded, and then the text is extracted using the HTML parser from the Java Swing package. Whenever possible, the appropriate HTML tags (e.g. BR, DD, P, etc.) are used as hard end-of-sentence indicators. The text is further segmented into sentences with the OpenNLP1 package. ■Sentences that do not contain both arguments a1 and a2 are discarded. For every remaining sentence, we find the occurrences of a1 and a2 that are closest to each other, and create a relation example by replacing a1 with ⟨e1⟩and a2 with ⟨e2⟩. All other occurrences of a1 and a2 are replaced with a null token ignored by the subsequence kernel. The number of sentences in every bag is shown in the last column of Tables 2 & 3. Because Google also counts pages that are deemed too similar in the first 1000, some of the bags can be relatively small. 
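Combining Equation (2) with the noisy-or estimate of Equation (3) and the sentence-ratio approximation of P(w|a) described above gives a word-weight computation along the following lines; the function is a sketch with my own names, not code from the paper.

```python
def word_weight(w, bag_sentences, arg1_sentences, arg2_sentences):
    """tau(w): down-weight words correlated with either relation argument.
    Each *_sentences argument is a list of tokenised sentences (lists of
    strings); arg1_sentences / arg2_sentences come from the single-argument
    bags retrieved for each argument on its own."""
    def p(word, sents):                 # P(word | .) as a sentence ratio
        return sum(word in s for s in sents) / len(sents) if sents else 0.0

    p_w_bag = p(w, bag_sentences)       # P(w | X)
    p_w_a1  = p(w, arg1_sentences)      # P(w | a1)
    p_w_a2  = p(w, arg2_sentences)      # P(w | a2)
    # Noisy-or combination, Equation (3): P(w | a1 v a2)
    p_or = p_w_a1 + p_w_a2 - p_w_a1 * p_w_a2
    if p_w_bag == 0.0:
        return 0.0
    # Equation (2); negative values are clipped to 0, as in the text.
    return max(0.0, 1.0 - p_or / p_w_bag)
```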
As described in Section 5.1, the word-argument correlations are modeled through the quantity P(w|a) = C(w, a)/C(a), estimated as the ratio between the number of sentences containing w and a, and the number of sentences containing a. These counts are computed over a bag of sentences containing a, which is created by querying Google for the argument a, and then by processing the results as described above. 7 Experimental Evaluation Each dataset is split into two sets of bags: one for training and one for testing. The test dataset was purposefully made difficult by including negative bags with arguments that during training were used in positive bags, and vice-versa. In order to evaluate the relation extraction performance at the sentence level, we manually annotated all instances from the positive test bags. The last column in Tables 2 & 3 shows, between parentheses, how many instances from the positive test bags are real positive instances. The corporate acquisition test set has a total of 995 instances, out of which 156 are positive. The person-birthplace test set has a total of 601 instances, and only 45 of them are positive. Extrapolating from the test set distribution, the pos1http://opennlp.sourceforge.net itive bags in the person-birthplace dataset are significantly sparser in real positive instances than the positive bags in the corporate acquisition dataset. The subsequence kernel described in Section 4 was used as a custom kernel for the LibSVM2 Java package. When run with the default parameters, the results were extremely poor – too much weight was given to the slack term in the objective function. Minimizing the regularization term is essential in order to capture subsequence patterns shared among positive bags. Therefore LibSVM was modified to solve the optimization problem from Figure 2, where the capacity parameter C is normalized by the size of the transformed dataset. In this new formulation, C is set to its default value of 1.0 – changing it to other values did not result in significant improvement. The trade-off between false positive and false negative errors is controlled by the parameter cp. When set to its default value of 0.5, false-negative errors and false positive errors have the same impact on the objective function. As expected, setting cp to a smaller value (0.1) resulted in better performance. Tests with even lower values did not improve the results. We compare the following four systems: ■SSK–MIL: This corresponds to the MIL formulation from Section 3, with the original subsequence kernel described in Section 4. ■SSK–T1: This is the SSK–MIL system augmented with word weights, so that the Type I bias is reduced, as described in Section 5.1. ■BW-MIL: This is a bag-of-words kernel, in which the relation examples are classified based on the unordered words contained in the sentence. This baseline shows the performance of a standard textclassification approach to the problem using a stateof-the art algorithm (SVM). ■SSK–SIL: This corresponds to the original subsequence kernel trained with traditional, single instance learning (SIL) supervision. For evaluation, we train on the manually labeled instances from the test bags. We use a combination of one positive bag and one negative bag for training, while the other two bags are used for testing. The results are averaged over all four possible combinations. 
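The precision-recall graphs and the areas reported in the following paragraphs are obtained by sweeping a threshold over the SVM decision values on the labelled test instances. A sketch of that computation with scikit-learn (not necessarily the evaluation code the authors used):

```python
from sklearn.metrics import auc, precision_recall_curve

def pr_area(y_true, decision_values):
    """Precision-recall curve and its area from gold labels in {+1, -1}
    and real-valued SVM decision scores on the test instances."""
    precision, recall, _ = precision_recall_curve(y_true, decision_values)
    return auc(recall, precision), precision, recall

# e.g.: scores = clf.decision_function(K_test)   # K_test: (n_test, n_train)
#       area, p, r = pr_area(gold_labels, scores)
```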
Note that the supervision provided to SSK–SIL requires sig2http://www.csie.ntu.edu.tw/˜cjlin/libsvm 581 0 10 20 30 40 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Precision (%) Recall (%) SSK-T1 SSK-MIL BW-MIL 0 10 20 30 40 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Precision (%) Recall (%) SSK-T1 SSK-MIL BW-MIL (a) Corporate Acquisitions (b) Person-Birthplace Figure 3: Precision-Recall graphs on the two datasets. nificantly more annotation effort, therefore, given a sufficient amount of training examples, we expect this system to perform at least as well as its MIL counterpart. In Figure 3, precision is plotted against recall by varying a threshold on the value of the SVM decision function. To avoid clutter, we show only the graphs for the first three systems. In Table 4 we show the area under the precision recall curves of all four systems. Overall, the learned relation extractors are able to identify the relationship in novel sentences quite accurately and significantly out-perform a bag-of-words baseline. The new version of the subsequence kernel SSK–T1 is significantly more accurate in the MIL setting than the original subsequence kernel SSK–MIL, and is also competitive with SSK–SIL, which was trained using a reasonable amount of manually labeled sentence examples. Dataset SSK–MIL SSK–T1 BW–MIL SSK–SIL (a) CA 76.9% 81.1% 45.9% 80.4% (b) PB 72.5% 78.2% 69.2% 73.4% Table 4: Area Under Precision-Recall Curve. 8 Future Work An interesting potential application of our approach is a web relation-extraction system similar to Google Sets, in which the user provides only a handful of pairs of entities known to exhibit or not to exhibit a particular relation, and the system is used to find other pairs of entities exhibiting the same relation. Ideally, the user would only need to provide positive pairs. Sentences containing one of the relation arguments could be extracted from the web, and likely negative sentence examples automatically created by pairing this entity with other named entities mentioned in the sentence. In this scenario, the training set can contain both false positive and false negative noise. One useful side effect is that Type I bias is partially removed – some bias still remains due to combinations of at least two words, each correlated with a different argument of the relation. We are also investigating methods for reducing Type II bias, either by modifying the word weights, or by integrating an appropriate measure of word distribution across positive bags directly in the objective function for the MIL problem. Alternatively, implicit negative evidence can be extracted from sentences in positive bags by exploiting the fact that, besides the two relation arguments, a sentence from a positive bag may contain other entity mentions. Any pair of entities different from the relation pair is very likely to be a negative example for that relation. This is similar to the concept of negative neighborhoods introduced by Smith and Eisner (2005), and has the potential of eliminating both Type I and Type II bias. 9 Related Work One of the earliest IE methods designed to work with a reduced amount of supervision is that of Hearst (1992), where a small set of seed patterns is used in a bootstrapping fashion to mine pairs of 582 hypernym-hyponym nouns. Bootstrapping is actually orthogonal to our method, which could be used as the pattern learner in every bootstrapping iteration. 
A more recent IE system that works by bootstrapping relation extraction patterns from the web is KNOWITALL (Etzioni et al., 2005). For a given target relation, supervision in KNOWITALL is provided as a rule template containing words that describe the class of the arguments (e.g. “company”), and a small set of seed extraction patterns (e.g. “has acquired”). In our approach, the type of supervision is different – we ask only for pairs of entities known to exhibit the target relation or not. Also, KNOWITALL requires large numbers of search engine queries in order to collect and validate extraction patterns, therefore experiments can take weeks to complete. Comparatively, the approach presented in this paper requires only a small number of queries: one query per relation pair, and one query for each relation argument. Craven and Kumlien (1999) create a noisy training set for the subcellular-localization relation by mining Medline for sentences that contain tuples extracted from relevant medical databases. To our knowledge, this is the first approach that is using a “weakly” labeled dataset for relation extraction. The resulting bags however are very dense in positive examples, and they are also many and small – consequently, the two types of bias are not likely to have significant impact on their system’s performance. 10 Conclusion We have presented a new approach to relation extraction that leverages the vast amount of information available on the web. The new RE system is trained using only a handful of entity pairs known to exhibit and not exhibit the target relationship. We have extended an existing relation extraction kernel to learn in this setting and to resolve problems caused by the minimal supervision provided. Experimental results demonstrate that the new approach can reliably extract relations from web documents. Acknowledgments We would like to thank the anonymous reviewers for their helpful suggestions. This work was supported by grant IIS-0325116 from the NSF, and a gift from Google Inc. References Stuart Andrews, Ioannis Tsochantaridis, and Thomas Hofmann. 2003. Support vector machines for multiple-instance learning. In NIPS 15, pages 561–568, Vancouver, BC. MIT Press. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proc. of COLING–ACL ’98, pages 86–90, San Francisco, CA. Morgan Kaufmann Publishers. Razvan C. Bunescu and Raymond J. Mooney. 2006. Subsequence kernels for relation extraction. In Y. Weiss, B. Sch¨olkopf, and J. Platt, editors, NIPS 18. M. Craven and J. Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proc. of ISMB’99, pages 77–86, Heidelberg, Germany. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. of ACL’04, pages 423–429, Barcelona, Spain, July. Thomas G. Dietterich, Richard H. Lathrop, and Tomas LozanoPerez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31– 71. Oren Etzioni, Michael Cafarella, Doug Downey, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91–134. T. Gartner, P.A. Flach, A. Kowalczyk, and A.J. Smola. 2002. Multi-instance kernels. In In Proc. of ICML’02, pages 179– 186, Sydney, Australia, July. Morgan Kaufmann. M. A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. 
In Proc. of ACL’92, Nantes, France. Judea Pearl. 1986. Fusion, propagation, and structuring in belief networks. Artificial Intelligence, 29(3):241–288. Soumya Ray and Mark Craven. 2005. Supervised versus multiple instance learning: An empirical comparison. In Proc. of ICML’05, pages 697–704, Bonn, Germany. Bernhard Sch¨olkopf and Alexander J. Smola. 2002. Learning with kernels - support vector machines, regularization, optimization and beyond. MIT Press, Cambridge, MA. N. A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL’05, pages 354–362, Ann Arbor, Michigan. Vladimir N. Vapnik. 1998. Statistical Learning Theory. John Wiley & Sons. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083–1106. Q. Zhang, S. A. Goldman, W. Yu, and J. Fritts. 2002. Contentbased image retrieval using multiple-instance learning. In Proc. of ICML’02, pages 682–689. 583
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 584–591, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Seed-driven Bottom-up Machine Learning Framework for Extracting Relations of Various Complexity Feiyu Xu, Hans Uszkoreit and Hong Li Language Technology Lab, DFKI GmbH Stuhlsatzenhausweg 3, D-66123 Saarbruecken {feiyu,uszkoreit,hongli}@dfki.de Abstract A minimally supervised machine learning framework is described for extracting relations of various complexity. Bootstrapping starts from a small set of n-ary relation instances as “seeds”, in order to automatically learn pattern rules from parsed data, which then can extract new instances of the relation and its projections. We propose a novel rule representation enabling the composition of n-ary relation rules on top of the rules for projections of the relation. The compositional approach to rule construction is supported by a bottom-up pattern extraction method. In comparison to other automatic approaches, our rules cannot only localize relation arguments but also assign their exact target argument roles. The method is evaluated in two tasks: the extraction of Nobel Prize awards and management succession events. Performance for the new Nobel Prize task is strong. For the management succession task the results compare favorably with those of existing pattern acquisition approaches. 1 Introduction Information extraction (IE) has the task to discover n-tuples of relevant items (entities) belonging to an n-ary relation in natural language documents. One of the central goals of the ACE program1 is to develop a more systematically grounded approach to IE starting from elementary entities, binary rela 1 http://projects.ldc.upenn.edu/ace/ tions to n-ary relations such as events. Current semi- or unsupervised approaches to automatic pattern acquisition are either limited to a certain linguistic representation (e.g., subject-verb-object), or only deal with binary relations, or cannot assign slot filler roles to the extracted arguments, or do not have good selection and filtering methods to handle the large number of tree patterns (Riloff, 1996; Agichtein and Gravano, 2000; Yangarber, 2003; Sudo et al., 2003; Greenwood and Stevenson, 2006; Stevenson and Greenwood, 2006). Most of these approaches do not consider the linguistic interaction between relations and their projections on k dimensional subspaces where 1≤k<n, which is important for scalability and reusability of rules. Stevenson and Greenwood (2006) present a systematic investigation of the pattern representation models and point out that substructures of the linguistic representation and the access to the embedded structures are important for obtaining a good coverage of the pattern acquisition. However, all considered representation models (subject-verbobject, chain model, linked chain model and subtree model) are verb-centered. Relations embedded in non-verb constructions such as a compound noun cannot be discovered: (1) the 2005 Nobel Peace Prize (1) describes a ternary relation referring to three properties of a prize: year, area and prize name. We also observe that the automatically acquired patterns in Riloff (1996), Yangarber (2003), Sudo et al. (2003), Greenwood and Stevenson (2006) cannot be directly used as relation extraction rules because the relation-specific argument role information is missing. 
E.g., in the management succession domain that concerns the identification of job changing events, a person can either move into a 584 job (called Person_In) or leave a job (called Person_Out). (2) is a simplified example of patterns extracted by these systems: (2) <subject: person> verb <object:organisation> In (2), there is no further specification of whether the person entity in the subject position is Person_In or Person_Out. The ambitious goal of our approach is to provide a general framework for the extraction of relations and events with various complexity. Within this framework, the IE system learns extraction patterns automatically and induces rules of various complexity systematically, starting from sample relation instances as seeds. The arity of the seed determines the complexity of extracted relations. The seed helps us to identify the explicit linguistic expressions containing mentionings of relation instances or instances of their k-ary projections where 1≤k<n. Because our seed samples are not linguistic patterns, the learning system is not restricted to a particular linguistic representation and is therefore suitable for various linguistic analysis methods and representation formats. The pattern discovery is bottom-up and compositional, i.e., complex patterns can build on top of simple patterns for projections. We propose a rule representation that supports this strategy. Therefore, our learning approach is seed-driven and bottom-up. Here we use dependency trees as input for pattern extraction. We consider only trees or their subtrees containing seed arguments. Therefore, our method is much more efficient than the subtree model of Sudo et al., (2003), where all subtrees containing verbs are taken into account. Our pattern rule ranking and filtering method considers two aspects of a pattern: its domain relevance and the trustworthiness of its origin. We tested our framework in two domains: Nobel Prize awards and management succession. Evaluations have been conducted to investigate the performance with respect to the seed parameters: the number of seeds and the influence of data size and its redundancy property. The whole system has been evaluated for the two domains considering precision and recall. We utilize the evaluation strategy “Ideal Matrix” of Agichtein and Gravano (2000) to deal with unannotated test data. The remainder of the paper is organised as follows: Section 2 provides an overview of the system architecture. Section 3 discusses the rule representation. In Section 4, a detailed description of the seed-driven bottom-up pattern acquisition is presented. Section 5 describes our experiments with pattern ranking, filtering and rule induction. Section 6 presents the experiments and evaluations for the two application domains. Section 7 provides a conclusion and an outline of future work. 2 System Architecture Given the framework, our system architecture can be depicted as follows: Figure 1. Architecture This architecture has been inspired by several existing seed-oriented minimally supervised machine learning systems, in particular by Snowball (Agichtein and Gravano, 2000) and ExDisco (Yangarber et al., 2000). We call our system DARE, standing for “Domain Adaptive Relation Extraction based on Seeds”. DARE contains four major components: linguistic annotation, classifier, rule learning and relation extraction. The first component only applies once, while the last three components are integrated in a bootstrapping loop. 
At each iteration, rules will be learned based on the seed and then new relation instances will be extracted by applying the learned rules. The new relation instances are then used as seeds for the next iteration of the learning cycle. The cycle terminates when no new relations can be acquired. The linguistic annotation is responsible for enriching the natural language texts with linguistic information such as named entities and dependency structures. In our framework, the depth of the linguistic annotation can be varied depending on the domain and the available resources. The classifier has the task to deliver relevant paragraphs and sentences that contain seed elements. It has three subcomponents: document re585 trieval, paragraph retrieval and sentence retrieval. The document retrieval component utilizes the open source IR-system Lucene2. A translation step is built in to convert the seed into the proper IR query format. As explained in Xu et al. (2006), we generate all possible lexical variants of the seed arguments to boost the retrieval coverage and formulate a boolean query where the arguments are connected via conjunction and the lexical variants are associated via disjunction. However, the translation could be modified. The task of paragraph retrieval is to find text snippets from the relevant documents where the seed relation arguments cooccur. Given the paragraphs, a sentence containing at least two arguments of a seed relation will be regarded as relevant. As mentioned above, the rule learning component constitutes the core of our system. It identifies patterns from the annotated documents inducing extraction rules from the patterns, and validates them. In section 4, we will give a detailed explanation of this component. The relation extraction component applies the newly learned rules to the relevant documents and extracts relation instances. The validated relation instances will then be used as new seeds for the next iteration. 3 DARE Rule Representation Our rule representation is designed to specify the location and the role of the arguments w.r.t. the target relation in a linguistic construction. In our framework, the rules should not be restricted to a particular linguistic representation and should be adaptable to various NLP tools on demand. A DARE rule is allowed to call further DARE rules that extract a subset of the arguments. Let us step through some example rules for the prize award domain. One of the target relations in the domain is about a person who obtains a special prize in a certain area in a certain year, namely, a quaternary tuple, see (3). (4) is a domain relevant sentence. (3) <recipient, prize, area, year> (4) Mohamed ElBaradei won the 2005 Nobel Peace Prize on Friday for his efforts to limit the spread of atomic weapons. (5) is a rule that extracts a ternary projection instance <prize, area, year> from a noun phrase 2 http://www.lucene.de compound, while (6) is a rule which triggers (5) in its object argument and extracts all four arguments. (5) and (6) are useful rules for extracting arguments from (4). (5) (6) Next we provide a definition of a DARE rule: A DARE rule has three components 1. rule name: ri; 2. output: a set A containing the n arguments of the n-ary relation, labelled with their argument roles; 3. 
rule body in AVM format containing: - specific linguistic labels or attributes (e.g., subject, object, head, mod), derived from the linguistic analysis, e.g., dependency structures and the named entity information - rule: its value is a DARE rule which extracts a subset of arguments of A The rule in (6) is a typical DARE rule. Its subject and object descriptions call appropriate DARE rules that extract a subset of the output relation arguments. The advantages of this rule representation strategy are that (1) it supports the bottom-up rule composition; (2) it is expressive enough for the representation of rules of various complexity; (3) it reflects the precise linguistic relationship among the relation arguments and reduces the template merging task in the later phase; (4) the rules for the subset of arguments may be reused for other relation extraction tasks. The rule representation models for automatic or unsupervised pattern rule extraction discussed by 586 Stevenson and Greenwood (2006) do not account for these considerations. 4 Seed-driven Bottom-up Rule Learning Two main approaches to seed construction have been discussed in the literature: pattern-oriented (e.g., ExDisco) and semantics-oriented (e.g., Snowball) strategies. The pattern-oriented method suffers from poor coverage because it makes the IE task too dependent on one linguistic representation construction (e.g., subject-verb-object) and has moreover ignored the fact that semantic relations and events could be dispersed over different substructures of the linguistic representation. In practice, several tuples extracted by different patterns can contribute to one complex relation instance. The semantics-oriented method uses relation instances as seeds. It can easily be adapted to all relation/event instances. The complexity of the target relation is not restricted by the expressiveness of the seed pattern representation. In Brin (1998) and Agichtein and Gravano (2000), the semanticsoriented methods have proved to be effective in learning patterns for some general binary relations such as booktitle-author and company-headquarter relations. In Xu et al. (2006), the authors show that at least for the investigated task it is more effective to start with the most complex relation instance, namely, with an n-ary sample for the target n-ary relation as seed, because the seed arguments are often centred in a relevant textual snippet where the relation is mentioned. Given the bottom-up extracted patterns, the task of the rule induction is to cluster and generalize the patterns. In comparison to the bottom-up rule induction strategy (Califf and Mooney, 2004), our method works also in a compositional way. For reasons of space this part of the work will be reported in Xu and Uszkoreit (forthcoming). 4.1 Pattern Extraction Pattern extraction in DARE aims to find linguistic patterns which do not only trigger the relations but also locate the relation arguments. In DARE, the patterns can be extracted from a phrase, a clause or a sentence, depending on the location and the distribution of the seed relation arguments. Figure 2. Pattern extraction step 1 Figure 3. Pattern extraction step 2 Figures 2 and 3 depict the general steps of bottom-up pattern extraction from a dependency tree t where three seed arguments arg1, arg2 and arg3 are located. All arguments are assigned their relation roles r1, r2 and r3. The pattern-relevant subtrees are trees in which seed arguments are embedded: t1, t2 and t3. Their root nodes are n1, n2 and n3. 
Figure 2 shows the extraction of a unary pattern n2_r3_i, while Figure 3 illustrates the further extraction and construction of a binary pattern n1_r1_r2_j and a ternary pattern n3_r1_r2_r3_k. In practice, not all branches in the subtrees will be kept. In the following, we give a general definition of our seed-driven bottom-up pattern extraction algorithm: input: (i) relation = <r1, r2, ..., rn>: the target relation tuple with n argument roles. T: a set of linguistic analysis trees annotated with i seed relation arguments (1≤i≤n) output: P: a set of pattern instances which can extract i or a subset of i arguments. Pattern extraction: for each tree t ∈T 587 Step 1: (depicted in Figure 2) 1. replace all terminal nodes that are instantiated with the seed arguments by new nodes. Label these new nodes with the seed argument roles and possibly the corresponding entity classes; 2. identify the set of the lowest nonterminal nodes N1 in t that dominate only one argument (possibly among other nodes). 3. substitute N1 by nodes labelled with the seed argument roles and their entity classes 4. prune the subtrees dominated by N1 from t and add these subtrees into P. These subtrees are assigned the argument role information and a unique id. Step2: For i=2 to n: (depicted in Figure 3) 1. find the set of the lowest nodes N1 in t that dominate in addition to other children only i seed arguments; 2. substitute N1 by nodes labelled with the i seed argument role combination information (e.g., ri_rj) and with a unique id. 3. prune the subtrees Ti dominated by Ni in t; 4. add Ti to P together with the argument role combination information and the unique id With this approach, we can learn rules like (6) in a straightforward way. 4.2 Rule Validation: Ranking and Filtering Our ranking strategy has incorporated the ideas proposed by Riloff (1996), Agichtein and Gravano (2000), Yangarber (2003) and Sudo et al. (2003). We take two properties of a pattern into account: • domain relevance: its distribution in the relevant documents and irrelevant documents (documents in other domains); • trustworthiness of its origin: the relevance score of the seeds from which it is extracted. In Riloff (1996) and Sudo et al. (2003), the relevance of a pattern is mainly dependent on its occurrences in the relevant documents vs. the whole corpus. Relevant patterns with lower frequencies cannot float to the top. It is known that some complex patterns are relevant even if they have low occurrence rates. We propose a new method for calculating the domain relevance of a pattern. We assume that the domain relevance of a pattern is dependent on the relevance of the lexical terms (words or collocations) constructing the pattern, e.g., the domain relevance of (5) and (6) are dependent on the terms “prize” and “win” respectively. Given n different domains, the domain relevance score (DR) of a term t in a domain di is: DR(t, di)= 0, if df(t, di) =0; df(t,di) N×D ×LOG(n× df(t,di) df(t,dj) j=1 n ∑ ), otherwise where • df(t, di): is the document frequency of a term t in the domain di • D: the number of the documents in di • N: the total number of the terms in di Here the domain relevance of a term is dependent both on its document frequency and its document frequency distribution in other domains. Terms mentioned by more documents within the domain than outside are more relevant (Xu et al., 2002). In the case of n=3 such different domains might be, e.g., management succession, book review or biomedical texts. 
Every domain corpus should ideally have the same number of documents and similar average document size. In the calculation of the trustworthiness of the origin, we follow Agichtein and Gravano (2000) and Yangarber (2003). Thus, the relevance of a pattern is dependent on the relevance of its terms and the score value of the most trustworthy seed from which it origins. Finally, the score of a pattern p is calculated as follows: score(p)= } :) ( max{ ) ( 0 Seeds s s score t DR T i i ∈ × ∑ = where |T|> 0 and ti ∈ T • T: is the set of the terms occur in p; • Seeds: a set of seeds from which the pattern is extracted; • score(s): is the score of the seed s; This relevance score is not dependent on the distribution frequency of a pattern in the domain corpus. Therefore, patterns with lower frequency, in particular, some complex patterns, can be ranked higher when they contain relevant domain terms or come from reliable seeds. 588 5 Top down Rule Application After the acquisition of pattern rules, the DARE system applies these rules to the linguistically annotated corpus. The rule selection strategy moves from complex to simple. It first matches the most complex pattern to the analyzed sentence in order to extract the maximal number of relation arguments. According to the duality principle (Yangarber 2001), the score of the new extracted relation instance S is dependent on the patterns from which it origins. Our score method is a simplified version of that defined by Agichtein and Gravano (2000): score(S)=1− (1−score(Pi) i=0 P ∏ ) where P={Pi} is the set of patterns that extract S. The extracted instances can be used as potential seeds for the further pattern extraction iteration, when their scores are validated. The initial seeds obtain 1 as their score. 6 Experiments and Evaluation We apply our framework to two application domains: Nobel Prize awards and management succession events. Table 1 gives an overview of our test data sets. Data Set Name Doc Number Data Amount Nobel Prize A (1999-2005) 2296 12,6 MB Nobel Prize B (1981-1998) 1032 5,8 MB MUC-6 199 1 MB Table1. Overview of Test Data Sets. For the Nobel Prize award scenario, we use two test data sets with different sizes: Nobel Prize A and Nobel Prize B. They are Nobel Prize related articles from New York Times, online BBC and CNN news reports. The target relation for the experiment is a quaternary relation as mentioned in (3), repeated here again: <recipient, prize, area, year> Our test data is not annotated with target relation instances. However, the entire list of Nobel Prize award events is available for the evaluation from the Nobel Prize official website3. We use it as our reference relation database for building our Ideal table (Agichtein and Gravano, 2000). For the management succession scenario, we use the test data from MUC-6 (MUC-6, 1995) and de 3 http://nobelprize.org/ fine a simpler relation structure than the MUC-6 scenario template with four arguments: <Person_In, Person_Out, Position, Organisation> In the following tables, we use PI for Person_In, PO for Person_Out, POS for Position and ORG for Organisation. In our experiments, we attempt to investigate the influence of the size of the seed and the size of the test data on the performance. All these documents are processed by named entity recognition (Drozdzynski et al., 2004) and dependency parser MINIPAR (Lin, 1998). 
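Before turning to the evaluation, note that the two scoring formulas from Sections 4.2 and 5 are also mangled above; as I read them, score(p) = Σ_i DR(t_i) × max{score(s) : s ∈ Seeds} and score(S) = 1 − ∏_i (1 − score(P_i)). A hedged sketch:

```python
def pattern_score(term_relevances, seed_scores):
    """score(p): sum of the domain relevance of the pattern's terms,
    scaled by the score of the most trustworthy seed it was extracted from."""
    return sum(term_relevances) * max(seed_scores)

def instance_score(pattern_scores):
    """score(S) = 1 - prod_i (1 - score(P_i)) over the patterns extracting S."""
    prod = 1.0
    for s in pattern_scores:
        prod *= (1.0 - s)
    return 1.0 - prod
```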
6.1 Nobel Prize Domain Evaluation For this domain, three test runs have been evaluated, initialized by one randomly selected relation instance as seed each time. In the first run, we use the largest test data set Nobel Prize A. In the second and third runs, we have compared two random selected seed samples with 50% of the data each, namely Nobel Prize B. For data sets in this domain, we are faced with an evaluation challenge pointed out by DIPRE (Brin, 1998) and Snowball (Agichtein and Gravano, 2000), because there is no gold-standard evaluation corpus available. We have adapted the evaluation method suggested by Agichtein and Gravano, i.e., our system is successful if we capture one mentioning of a Nobel Prize winner event through one instance of the relation tuple or its projections. We constructed two tables (named Ideal) reflecting an approximation of the maximal detectable relation instances: one for Nobel Prize A and another for Nobel Prize B. The Ideal tables contain the Nobel Prize winners that co-occur with the word “Nobel” in the test corpus. Then precision is the correctness of the extracted relation instances, while recall is the coverage of the extracted tuples that match with the Ideal table. In Table 2 we show the precision and recall of the three runs and their random seed sample: Recall Data Set Seed Precision total time interval Nobel Prize A [Zewail, Ahmed H], nobel, chemistry,1999 71,6% 50,7% 70,9% (1999-2005) Nobel Prize B [Sen, Amartya], nobel, economics, 1998 87,3% 31% 43% (1981-1998) Nobel Prize B [Arias, Oscar], nobel, peace, 1987 83,8% 32% 45% (1981-1998) Table 2. Precision, Recall against the Ideal Table The first experiment with the full test data has achieved much higher recall than the two experiments with the set Nobel Prize B. The two experiments with the Nobel Prize B corpus show similar 589 performance. All three experiments have better recalls when taking only the relation instances during the report years into account, because there are more mentionings during these years in the corpus. Figure (6) depicts the pattern learning and new seed extracting behavior during the iterations for the first experiment. Similar behaviours are observed in the other two experiments. Figure 6. Experiment with Nobel Prize A 6.2 Management Succession Domain The MUC-6 corpus is much smaller than the Nobel Prize corpus. Since the gold standard of the target relations is available, we use the standard IE precision and recall method. The total gold standard table contains 256 event instances, from which we randomly select seeds for our experiments. Table 3 gives an overview of performance of the experiments. Our tests vary between one seed, 20 seeds and 55 seeds. Initial Seed Nr. Precision Recall A 12.6% 7.0% 1 B 15.1% 21.8% 20 48.4% 34.2% 55 62.0% 48.0% Table 3. Results for various initial seed sets The first two one-seed tests achieved poor performance. With 55 seeds, we can extract additional 67 instances to obtain the half size of the instances occurring in the corpus. Table 4 show evaluations of the single arguments. B works a little better because the randomly selected single seed appears a better sample for finding the pattern for extracting PI argument. Arg precision (A) precision (B) Recall (A) Recall (B) PI 10.9% 15.1% 8.6% 34.4% PO 28.6% - 2.3% 2.3% ORG 25.6% 100% 2.6% 2.6% POS 11.2% 11.2% 5.5% 5.5% Table 4. Evaluation of one-seed tests (A and B) Table 5 shows the performance with 20 and 55 seeds respectively. 
Both of them are better than the one-seed tests, while 55 seeds deliver the best performance in average, in particular, the recall value. arg precision (20) precision (55) recall (20) recall (55) PI 84% 62.8% 27.9% 56.1% PO 41.2% 59% 34.2% 31.2% ORG 82.4% 58.2% 7.4% 20.2% POS 42% 64.8% 25.6% 30.6% Table 5. Evaluation of 20 and 55 seeds tests Our result with 20 seeds (precision of 48.4% and recall of 34.2%) is comparable with the best result reported by Greenwood and Stevenson (2006) with the linked chain model (precision of 0.434 and recall of 0.265). Since the latter model uses patterns as seeds, applying a similarity measure for pattern ranking, a fair comparison is not possible. Our result is not restricted to binary relations and our model also assigns the exact argument role to the Person role, i.e. Person_In or Person_Out. We have also evaluated the top 100 eventindependent binary relations such as PersonOrganisation and Position-Organisation. The precision of these by-product relations of our IE system is above 98%. 7 Conclusion and Future Work Several parameters are relevant for the success of a seed-based bootstrapping approach to relation extraction. One of these is the arity of the relation. Another one is the locality of the relation instance in an average mentioning. A third one is the types of the relation arguments: Are they named entities in the classical sense? Are they lexically marked? Are there several arguments of the same type? Both tasks we explored involved extracting quaternary relations. The Nobel Prize domain shows better lexical marking because of the prize name. The management succession domain has two slots of the same NE type, i.e., persons. These differences are relevant for any relation extraction approach. The success of the bootstrapping approach crucially depends on the nature of the training data base. One of the most relevant properties of this data base is the ratio of documents to relation instances. Several independent reports of an instance usually yield a higher number of patterns. The two tasks we used to investigate our method drastically differ in this respect. The Nobel Prize 590 domain we selected as a learning domain for general award events since it exhibits a high degree of redundancy in reporting. A Nobel Prize triggers more news reports than most other prizes. The achieved results met our expectations. With one randomly selected seed, we could finally extract most relevant events in some covered time interval. However, it turns out that it is not just the average number of reports per events that matters but also the distribution of reportings to events. Since the Nobel prizes data exhibit a certain type of skewed distribution, the graph exhibits properties of scale-free graphs. The distances between events are shortened to a few steps. Therefore, we can reach most events in a few iterations. The situation is different for the management succession task where the reports came from a single newspaper. The ratio of events to reports is close to one. This lack of informational redundancy requires a higher number of seeds. When we started the bootstrapping with a single event, the results were rather poor. Going up to twenty seeds, we still did not get the performance we obtain in the Nobel Prize task but our results compare favorably to the performance of existing bootstrapping methods. The conclusion, we draw from the observed difference between the two tasks is simple: We shall always try to find a highly redundant training data set. 
If at all possible, the training data should exhibit a skewed distribution of reports to events. Actually, such training data may be the only realistic chance for reaching a large number of rare patterns. In future work we will try to exploit the web as training resource for acquiring patterns while using the parsed domain data as the source for obtaining new seeds in bootstrapping the rules before applying these to any other nonredundant document base. This is possible because our seed tuples can be translated into simple IR queries and further linguistic processing is limited to the retrieved candidate documents. Acknowledgement The presented research was partially supported by a grant from the German Federal Ministry of Education and Research to the project Hylap (FKZ: 01IWF02) and EU–funding for the project RASCALLI. Our special thanks go to Doug Appelt and an anonymous reviewer for their thorough and highly valuable comments. References E. Agichtein and L. Gravano. 2000. Snowball: extracting relations from large plain-text collections. In ACM 2000, pages 85–94, Texas, USA. S. Brin. Extracting patterns and relations from the World-Wide Web. In Proc. 1998 Int'l Workshop on the Web and Databases (WebDB '98), March 1998. M. E. Califf and R. J. Mooney. 2004. Bottom-Up Relational Learning of Pattern Matching Rules for Information Extraction. Journal of Machine Learning Research, MIT Press. W. Drozdzynski, H.-U.Krieger, J. Piskorski; U. Schäfer, and F. Xu. 2004. Shallow Processing with Unification and Typed Feature Structures — Foundations and Applications. Künstliche Intelligenz 1:17—23. M. A. Greenwood and M. Stevenson. 2006. Improving Semi-supervised Acquisition of Relation Extraction Patterns. In Proc. of the Workshop on Information Extraction Beyond the Document, Australia. D. Lin. 1998. Dependency-based evaluation of MINIPAR. In Workshop on the Evaluation of Parsing Systems, Granada, Spain. MUC. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6), Morgan Kaufmann. E. Riloff. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proc. of the Thirteenth National Conference on Articial Intelligence, pages 1044–1049, Portland, OR, August. M. Stevenson and Mark A. Greenwood. 2006. Comparing Information Extraction Pattern Models. In Proc. of the Workshop on Information Extraction Beyond the Document, Sydney, Australia. K. Sudo, S. Sekine, and R. Grishman. 2003. An Improved Extraction Pattern Representation Model for Automatic IE Pattern Acquisition. In Proc. of ACL03, pages 224–231, Sapporo, Japan. R. Yangarber, R. Grishman, P. Tapanainen, and S. Huttunen. 2000. Automatic Acquisition of Domain Knowledge for Information Extraction. In Proc. of COLING 2000, Saarbrücken, Germany. R. Yangarber. 2003. Counter-training in the Discovery of Semantic Patterns. In Proceedings of ACL-03, pages 343–350, Sapporo, Japan. F. Xu, D. Kurz, J. Piskorski and S. Schmeier. 2002. A Domain Adaptive Approach to Automatic Acquisition of Domain Relevant Terms and their Relations with Bootstrapping. In Proc. of LREC 2002, May 2002. F. Xu, H. Uszkoreit and H. Li. 2006. Automatic Event and Relation Detection with Seeds of Varying Complexity. In Proceedings of AAAI 2006 Workshop Event Extraction and Synthesis, Boston, July, 2006. 591
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 592–599, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Multi-resolution Framework for Information Extraction from Free Text Mstislav Maslennikov and Tat-Seng Chua Department of Computer Science National University of Singapore {maslenni,chuats}@comp.nus.edu.sg Abstract Extraction of relations between entities is an important part of Information Extraction on free text. Previous methods are mostly based on statistical correlation and dependency relations between entities. This paper re-examines the problem at the multiresolution layers of phrase, clause and sentence using dependency and discourse relations. Our multi-resolution framework ARE (Anchor and Relation) uses clausal relations in 2 ways: 1) to filter noisy dependency paths; and 2) to increase reliability of dependency path extraction. The resulting system outperforms the previous approaches by 3%, 7%, 4% on MUC4, MUC6 and ACE RDC domains respectively. 1 Introduction Information Extraction (IE) is the task of identifying information in texts and converting it into a predefined format. The possible types of information include entities, relations or events. In this paper, we follow the IE tasks as defined by the conferences MUC4, MUC6 and ACE RDC: slotbased extraction, template filling and relation extraction, respectively. Previous approaches to IE relied on cooccurrence (Xiao et al., 2004) and dependency (Zhang et al., 2006) relations between entities. These relations enable us to make reliable extraction of correct entities/relations at the level of a single clause. However, Maslennikov et al. (2006) reported that the increase of relation path length will lead to considerable decrease in performance. In most cases, this decrease in performance occurs because entities may belong to different clauses. Since clauses in a sentence are connected by clausal relations (Halliday and Hasan, 1976), it is thus important to perform discourse analysis of a sentence. Discourse analysis may contribute to IE in several ways. First, Taboada and Mann (2005) reported that discourse analysis helps to decompose long sentences into clauses. Therefore, it helps to distinguish relevant clauses from non-relevant ones. Second, Miltsakaki (2003) stated that entities in subordinate clauses are less salient. Third, the knowledge of textual structure helps to interpret the meaning of entities in a text (Grosz and Sidner 1986). As an example, consider the sentences “ABC Co. appointed a new chairman. Additionally, the current CEO was retired”. The word ‘additionally’ connects the event in the second sentence to the entity ‘ABC Co.’ in the first sentence. Fourth, Moens and De Busser (2002) reported that discourse segments tend to be in a fixed order for structured texts such as court decisions or news. Hence, analysis of discourse order may reduce the variability of possible relations between entities. To model these factors, we propose a multiresolution framework ARE that integrates both discourse and dependency relations at 2 levels. ARE aims to filter noisy dependency relations from training and support their evaluation with discourse relations between entities. Additionally, we encode semantic roles of entities in order to utilize semantic relations. Evaluations on MUC4, MUC6 and ACE RDC 2003 corpora demonstrates that our approach outperforms the state-of-art systems mainly due to modeling of discourse relations. 
The contribution of this paper is in applying discourse relations to supplement dependency relations in a multi-resolution framework for IE. The 592 framework enables us to connect entities in different clauses and thus improve the performance on long-distance dependency paths. Section 2 describes related work, while Section 3 presents our proposed framework, including the extraction of anchor cues and various types of relations, integration of extracted relations, and complexity classification. Section 4 describes our experimental results, with the analysis of results in Section 5. Section 6 concludes the paper. 2 Related work Recent work in IE focuses on relation-based, semantic parsing-based and discourse-based approaches. Several recent research efforts were based on modeling relations between entities. Culotta and Sorensen (2004) extracted relationships using dependency-based kernel trees in Support Vector Machines (SVM). They achieved an F1measure of 63% in relation detection. The authors reported that the primary source of mistakes comes from the heterogeneous nature of non-relation instances. One possible direction to tackle this problem is to carry out further relationship classification. Maslennikov et al. (2006) classified relation path between candidate entities into simple, average and hard cases. This classification is based on the length of connecting path in dependency parse tree. They reported that dependency relations are not reliable for the hard cases, which, in our opinion, need the extraction of discourse relations to supplement dependency relation paths. Surdeanu et al. (2003) applied semantic parsing to capture the predicate-argument sentence structure. They suggested that semantic parsing is useful to capture verb arguments, which may be connected by long-distance dependency paths. However, current semantic parsers such as the ASSERT are not able to recognize support verb constructions such as “X conducted an attack on Y” under the verb frame “attack” (Pradhan et al. 2004). Hence, many useful predicate-argument structures will be missed. Moreover, semantic parsing belongs to the intra-clausal level of sentence analysis, which, as in the dependency case, will need the support of discourse analysis to bridge inter-clausal relations. Webber et al. (2002) reported that discourse structure helps to extract anaphoric relations. However, their set of grammatical rules is heuristic. Our task needs construction of an automated approach to be portable across several domains. Cimiano et al. (2005) employed a discourse-based analysis for IE. However, their approach requires a predefined domain-dependent ontology in the format of extended logical description grammar as described by Cimiano and Reely (2003). Moreover, they used discourse relations between events, whereas in our approach, discourse relations connect entities. 3 Motivation for using discourse relations Our method is based on Rhetorical Structure Theory (RST) by Taboada and Mann (2005). RST splits the texts into 2 parts: a) nuclei, the most important parts of texts; and b) satellites, the secondary parts. We can often remove satellites without losing the meaning of text. Both nuclei and satellites are connected with discourse relations in a hierarchical structure. In our work, we use 16 classes of discourse relations between clauses: Attribution, Background, Cause, Comparison, Condition, Contrast, Elaboration, Enablement, Evaluation, Explanation, Joint, Manner-Means, TopicComment, Summary, Temporal, Topic-Change. 
The additional 3 relations impose a tree structure: textual-organization, span and same-unit. All the discourse relation classes are potentially useful, since they encode some knowledge about textual structure. Therefore, we decide to include all of them in the learning process to learn patterns with best possible performance. We consider two main rationales for utilizing discourse relations to IE. First, discourse relations help to narrow down the search space to the level of a single clause. For example, the sentence “[<Soc-A1>Trudeau</>'s <Soc-A2>son</> told everyone], [their prime minister was his father], [who took him to a secret base in the arctic] [and let him peek through a window].” contains 4 clauses and 7 anchor cues (key phrases) for the type Social, which leads to 21 possible variants. Splitting this sentence into clauses reduces the combinations to 4 possible variants. Additionally, this reduction eliminates the long and noisy dependency paths. Second, discourse analysis enables us to connect entities in different clauses with clausal relations. As an example, we consider a sentence “It’s a dark comedy about a boy named <AT-A1>Marshal</> played by Amourie Kats who discovers all kinds of 593 on and scary things going on in <AT-A2>a seemingly quiet little town</>”. In this example, we need to extract the relation “At” between the entities “Marshal” and “a seemingly quiet little town”. The discourse structure of this sentence is given in . Figure 1 Figure 1. Example of discourse parsing The discourse path “Marshal <-elaboration- _ <-span- _ -elaboration-> _ -elaboration-> town” is relatively short and captures the necessary relations. At the same time, prediction based on dependency path “Marshal <–obj- _ <-i- _ <-fc- _ <-pnmod- _ <-pred- _ <-i- _ <-null- _ -null-> _ rel-> _ -i-> _ -mod-> _ -pcomp-n-> town” is unreliable, since the relation path is long. Thus, it is important to rely on discourse analysis in this example. In addition, we need to evaluate both the score and reliability of prediction by relation path of each type. 4 Anchors and Relations In this section, we define the key components that we use in ARE: anchors, relation types and general architecture of our system. Some of these components are also presented in detail in our previous work (Maslennikov et al., 2006). 4.1 Anchors The first task in IE is to identify candidate phrases (which we call anchor or anchor cue) of a predefined type (anchor type) to fill a desired slot in an IE template. The example anchor for the phrase “Marshal” is shown in Figure 2. Given a training set of sentences, we extract the anchor cues ACj = [A1, …, ANanch] of type Cj using the procedures described in Maslennikov et al. (2006). The linguistic features of these anchors for the anchor types of Per- petrator, Action, Victim and Target for the MUC4 domain are given in Table 1. Anchor types Feature Perpetrator_Cue (A) Action_Cue (D) Victim_Cue (A) Target_Cue (A) Lexical (Head noun) terrorists, individuals, Soldiers attacked, murder, Massacre Mayor, general, priests bridge, house, Ministry Part-of-Speech Noun Verb Noun Noun Named Entities Soldiers (PERSON) - Jesuit priests (PERSON) WTC (OBJECT) Synonyms Synset 130, 166 Synset 22 Synset 68 Synset 71 Concept Class ID 2, 3 ID 9 ID 22, 43 ID 61, 48 Co-referenced entity He -> terrorist, soldier - They -> peasants - Clausal type Nucleus Satellite Nucleus, Satellite Nucleus, Satellite Nucleus, Satellite Argument type Arg0, Arg1 Root Target, -, ArgM-MNR Arg0, Arg1 Arg1, ArgMMNR Table 1. 
Linguistic features for anchor extraction Given an input phrase P from a test sentence, we need to classify if the phrase belongs to anchor cue type Cj. We calculate the entity score as: Entity_Score(P) =∑ δi * Feature_Scorei(P,Cj) (1) where Feature_Score(P,Cj) is a score function for a particular linguistic feature representation of type Cj, and δi is the corresponding weight for that representation in the overall entity score. The weights are learned automatically using Expectation Maximization (Dempster et al., 1977). The Feature_Scorei(P,Cj) is estimated from the training set as the number of slots containing the correct feature representation type versus all the slots: Feature_Scorei(P,Cj) = #(positive slots) / #(all slots) (2) We classify the phrase P as belonging to an anchor type Cj when its Entity_score(P) is above an empirically determined threshold ω. We refer to this anchor as Aj. We allow a phrase to belong to multiple anchor types and hence the anchors alone are not enough for filling templates. 4.2 Relations To resolve the correct filling of phrase P of type Ci in a desired slot in the template, we need to consider the relations between multiple candidate phrases of related slots. To do so, we consider several types of relations between anchors: discourse, dependency and semantic relations. These relations capture the interactions between anchors and are therefore useful for tackling the paraphrasing and alignment problems (Maslennikov et al., 2006). Given 2 anchors Ai and Aj of anchor types Ci and Cj, we consider a relation Pathl = [Ai, Rel1,…, Reln, Aj] between them, such that there are no anchors between Ai and Aj. Additionally, we assume that the relations between anchors are represented in the form of a tree Tl, where l = {s, c, d} refers to Satellite who discovers all kinds of on and scary things going on in a seemingly quiet little town. Nucleus It's a dark comedy about a boy Satellite named Marshal Nucleus played by Amourie Kats Nucleus Satellite span elaboration span elaboration elaboration span Figure 2. Example of anchor Anchor Ai Marshal pos_NNP list_personWord Cand_AtArg1 Minipar_obj Arg2 Spade_Satellite 594 discourse, dependency and semantic relation types respectively. We describe the nodes and edges of Tl separately for each type, because their representations are different: 1) The nodes of discourse tree Tc consist of clauses [Clause1, …, ClauseNcl]; and their relation edges are obtained from the Spade system described in Soricut and Marcu (2003). This system performs RST-based parsing at the sentence level. The reported accuracy of Spade is 49% on the RST-DT corpus. To obtain a clausal path, we map each anchor Ai to its clause in Spade. If anchors Ai and Aj belong to the same clause, we assign them the relation same-clause. es. 2) The nodes of dependency tree Td consist of words in sentences; and their relation edges are obtained from Minipar by Lin (1997). Lin (1997) reported a parsing performance of Precision = 88.5% and Recall = 78.6% on the SUSANNE corpus. 3) The nodes of semantic tree Ts consist of arguments [Arg0, …, ArgNarg] and targets [Target1, …, TargetNtarg]. Both arguments and targets are obtained from the ASSERT parser developed by Pradhan (2004). The reported performance of ASSERT is F1=83.8% on the identification and classification task for all arguments, evaluated using PropBank and AQUAINT as the training and testing corpora, respectively. 
Since the relation edges have a form Targetk -> Argl, the relation path in semantic frame contains only a single relation. Therefore, we encode semantic relations as part of the anchor features. In later parts of this paper, we consider only discourse and dependency relation paths Pathl, where l={c, d}. Figure 3. Architecture of the system 4.3 Architecture of ARE system In order to perform IE, it is important to extract candidate entities (anchors) of appropriate anchor types, evaluate the relationships between them, further evaluate all possible candidate templates, and output the final template. For the case of relation extraction task, the final templates are the same as an extracted binary relation. The overall architecture of ARE is given in Figure 3. The focus of this paper is in applying discourse relations for binary relationship evaluation. 5 Overall approach In this section, we describe our relation-based approach to IE. We start with the evaluation of relation paths (single relation ranking, relation path ranking) to assess the suitability of their anchors as entities to template slots. Here we want to evaluate given a single relation or relation path, whether the two anchors are correct in filling the appropriate slots in a template. This is followed by the integration of relation paths and evaluation of templates. 5.1 Evaluation of relation path In the first stage, we evaluate from training data the relevance of relation path Pathl = [Ai, Rel1,…, Reln, Aj] between candidate anchors Ai and Aj of types Ci and Cj. We divide this task into 2 steps. The first step ranks each single relation Relk ∈ Pathl; while the second step combines the evaluations of Relk to rank the whole relation path Pathl. Single relation ranking Let Seti and Setj be the set of linguistic features of anchors Ai and Aj respectively. To evaluate Relk, we consider 2 characteristics: (1) the direction of relation Relk as encoded in the tree structure; and (2) the linguistic features, Seti and Setj, of anchors Ai and Aj. We need to construct multiple single relation classifiers, one for each anchor pair of types Ci and Cj, to evaluate the relevance of Relk with respect to these 2 anchor typ Preprocessing Corpus (a) Construction of classifiers. The training data to each classifier consists of anchor pairs of types Ci and Cj extracted from the training corpus. We use these anchor pairs to construct each classifier in four stages. First, we compose the set of possible patterns in the form P+ = { Pm = <Si –Rel-> Sj> | Si ∈ Seti , Sj ∈ Setj }. The construction of Pm Anchor evaluation Templates Anchor NEs Template evaluation Sentences Binary relationship evaluation Candidate templates 595 conforms to the 2 characteristics given above. Figure 4 illustrates several discourse and dependency patterns of P+ constructed from a sample sentence. Figure 4. Examples of discourse and dependency patterns Second, we identify the candidate anchor A, whose type matches slot C in a template. Third, we find the correct patterns for the following 2 cases: 1) Ai, Aj are of correct anchor types; and 2) Ai is an action anchor, while Aj is a correct anchor. Any other patterns are considered as incorrect. We note that the discourse and dependency paths between anchors Ai and Aj are either correct or wrong simultaneously. Fourth, we evaluate the relevance of each pattern Pm ∈ P+. Given the training set, let PairSetm be the set of anchor pairs extracted by Pm; and PairSet+(Ci, Cj) be the set of correct anchor pairs of types Ci, Cj. 
We evaluate both precision and recall of Pm as || || | ) , ( || ) ( m j i m m PairSet C C PairsSet PairSet P recision P | = + Ι (3) || ) , ( || | ) , ( || ) ( j i j i m m C C PairsSet C C PairsSet PairSet P ecall R + + | = Ι (4) These values are stored and used in the training model for use during testing. (b) Evaluation of relation. Here we want to evaluate whether relation InputRel belongs to a path between anchors InputAi and InputAj. We employ the constructed classifier for the anchor types InputCi and InputCj in 2 stages. First, we find a subset P(0) = { Pm = <Si –InputRel-> Sj> ∈ P+ | Si ∈ InputSeti, Sj ∈ InputSetj } of applicable patterns. Second, we utilize P(0) to find the pattern Pm (0) with maximal precision: Precision(Pm (0)) = argmaxPm∈P(0) Precision (Pm) (5) A problem arises if Pm (0) is evaluated only on a small amount of training instances. For example, we noticed that patterns that cover 1 or 2 instances may lead to Precision=1, whereas on the testing corpus their accuracy becomes less than 50%. Therefore, it is important to additionally consider the recall parameter of Pm (0). Relation path ranking In this section, we want to evaluate relation path connecting template slots Ci and Cj. We do this independently for each relation of type discourse and dependency. Let Recallk and Precisionk be the recall and precision values of Relk in Path = [Ai, Rel1,…, Reln, Aj], both obtained from the previous step. First, we calculate the average recall of the involved relations: W = (1/LengthPath) * ∑Relk∈Path Recallk (6) W gives the average recall of the involved relations and can be used as a measure of reliability of the relation Path. Next, we compute a combined score of average Precisionk weighted by Recallk: Score = 1/(W*LengthPath)*∑Relk∈Path Recallk*Precisionk (7) We use all Precisionk values in the path here, because omitting a single relation may turn a correct path into the wrong one, or vice versa. The combined score value is used as a ranking of the relation path. Experiments show that we need to give priority to scores with higher reliability W. Hence we use (W, Score) to evaluate each Path. 5.2 Integration of different relation path types The purpose of this stage is to integrate the evaluations for different types of relation paths. The input to this stage consists of evaluated relation paths PathC and PathD for discourse and dependency relations respectively. Let (Wl, Scorel) be an evaluation for Pathl, l ∈ [c, d]. We first define an integral path PathI between Ai and Aj as: 1) PathI is enabled if at least one of Pathl, l ∈ [c, d], is enabled; and 2) PathI is correct if at least one of Pathl is correct. To evaluate PathI, we consider the average recall Wl of each Pathl, because Wl estielaboration obj Anchor Aj town pos_NN Cand_AtArg2 Minipar_pcompn ArgM-Loc Spade_Satellite Anchor Ai Marshal pos_NNP list_personWord Cand_AtArg1 Minipar_obj Arg2 Spade Satellite pcomp-n fc span Discourse path Dependency path i elaboration Input sentence Marshal … named <At-A1> </> played by Amourie Kats who discovers all kinds of on and scary things going on in <At-A2> Dependency patterns Minipar_obj <–i- ArgM-Loc Minipar_obj <–obj- ArgM-Loc Minipar_obj –pcompn-> Minipar_pcompn Minipar_obj –mod-> Minipar_pcompn … a seemingly quiet little town</> ... elaboration pnmod pred i null null rel i mod Discourse patterns list_personWord <–elaboration- pos_NN list_personWord –elaboration-> town list_personWord <–span- town list_personWord <–elaboration- town … 596 mates the reliability of Scorel. 
We define a weighted average for Pathl as: WI = WC + WD (8) ScoreI = 1/WI * ∑l Wl*Scorel (9) Next, we want to determine the threshold score ScoreI O above which ScoreI is acceptable. This score may be found by analyzing the integral paths on the training corpus. Let SI = { PathI } be the set of integral paths between anchors Ai and Aj on the training set. Among the paths in SI, we need to define a set function SI(X) = { PathI | ScoreI(PathI) ≥ X } and find the optimal threshold for X. We find the optimal threshold based on F1-measure, because precision and recall are equally important in IE. Let SI(X)+ ⊂ SI(X) and S(X)+ ⊂ S(X) be sets of correct path extractions. Let FI(X) be F1-measure of SI(X): || ) ( || || ) ( || ) ( X S X S X P I I I + = (10) || ) ( || || ) ( || ) ( + + = X S X S X R I I (11) ) ( ) ( ) ( * ) ( * 2 ) ( X R X P X R X P X F I I I I I + = (12) Based on the computed values FI(X) for each X on the training data, we determine the optimal threshold as Score = argmax F (X) I O X I , which corresponds to the maximal expected F1-measure of anchor pair Ai and Aj. 5.3 Evaluation of templates At this stage, we have a set of accepted integral relation paths between any anchor pair Ai and Aj. The next task is to merge appropriate set of anchors into candidate templates. Here we follow the methodology of Maslennikov et al. (2006). For each sentence, we compose a set of candidate templates T using the extracted relation paths between each Ai and Aj. To evaluate each template Ti∈T, we combine the integral scores from relation paths between its anchors Ai and Aj into the overall Relation_ScoreT: M A A Score T Score elation R K j i j i I i T ∑ ≤ ≤ = , 1 ) , ( ) ( _ (13) where K is the number of extracted slots, M is the number of extracted relation paths between anchors Ai and Aj, and ScoreI(Ai, Aj) is obtained from Equation (9). Next, we calculate the extracted entity score based on the scores of all the anchors in Ti: ∑ ≤ ≤ = K k k i T K A Score Entity T Score Entity 1 /) ( _ ) ( _ (14) where Entity_Score(Ai) is taken from Equation (1). Finally, we obtain the combined evaluation for a template: ScoreT(Ti) = (1- λ) * Entity_ScoreT (Ti) + λ * Relation_ScoreT (Ti) (15) where λ is a predefined constant. In order to decide whether the template Ti should be accepted or rejected, we need to determine a threshold ScoreT O from the training data. If anchors of a candidate template match slots in a correct template, we consider the candidate template as correct. Let TrainT = { Ti } be the set of candidate templates extracted from the training data, TrainT+ ⊂ TrainT be the subset of correct candidate templates, and TotalT+ be the total set of correct templates in the training data. Also, let TrainT(X) = { Ti | ScoreT(Ti) ≥ X, Ti ∈ TrainT } be the set of candidate templates with score above X and TrainT+(X) ⊂ TrainT(X) be the subset of correct candidate templates. We define the measures of precision, recall and F1 as follows: || ) ( || || ) ( || ) ( X TrainT X TrainT X PT + = (16) || || || ) ( || ) ( + + = TotalT X TrainT X RT (17) ) ( ) ( ) ( ) ( * 2 ) ( X R X P X R X P X F T T T T T + = (18) Since the performance in IE is measured in F1measure, an appropriate threshold to be used for the most prominent candidate templates is: ScoreT O = argmaxX FT (X) (19) The value ScoreT O is used as a training model. During testing, we accept a candidate template InputTi if ScoreT(InputTi) > Sco O re . 
T As an additional remark, we note that domains MUC4, MUC6 and ACE RDC 2003 are significantly different in the evaluation methodology for the candidate templates. While the performance of the MUC4 domain is measured for each slot individually; the MUC6 task measures the performance on the extracted templates; and the ACE RDC 2003 task evaluates performance on the matching relations. To overcome these differences, we construct candidate templates for all the domains and measure the required type of performance for each domain. Our candidate templates for the ACE RDC 2003 task consist of only 2 slots, which correspond to entities of the correct relations. 597 6 Experimental results We carry out our experiments on 3 domains: MUC4 (Terrorism), MUC6 (Management Succession), and ACE-Relation Detection and Characterization (2003). The MUC4 corpus contains 1,300 documents as training set and 200 documents (TST3 and TST4) as official testing set. We used a modified version of the MUC6 corpus described by Soderland (1999). This version includes 599 documents as training set and 100 documents as testing set. Following the methodology of Zhang et al. (2006), we use only the English portion of ACE RDC 2003 training data. We used 97 documents for testing and the remaining 155 documents for training. Our task is to extract 5 major relation types and 24 subtypes. Case (%) P R F1 GRID 52% 62% 57% Riloff’05 46% 51% 48% ARE (2006) 58% 61% 60% ARE 65% 61% 63% Table 2. Results on MUC4 To compare the results on the terrorism domain in MUC4, we choose the recent state-of-art systems GRID by Xiao et al. (2004), Riloff et al. (2005) and ARE (2006) by Maslennikov et al. (2006) which does not utilize discourse and semantic relations. The comparative results are given in Table 2. It shows that our enhanced ARE results in 3% improvement in F1 measure over ARE (2006) that does not use clausal relations. The improvement was due to the use of discourse relations on long paths, such as “X distributed leaflets claiming responsibility for murder of Y”. At the same time, for many instances, it would be useful to store the extracted anchors for another round of learning. For example, the extracted features of discourse pattern “murder –same_clause-> HUM_PERSON” may boost the score for patterns that correspond to relation path “X <-span- _ -Elaboration-> murder”. In this way, high-precision patterns will support the refinement of patterns with average recall and low precision. This observation is similar to that described in Ciravegna’s work on (LP)2 (Ciravegna 2001). Case (%) P R F1 Chieu et al.’02 75% 49% 59% ARE (2006) 73% 58% 65% ARE 73% 70% 72% Table 3. Results on MUC6 Next, we present the performance of our system on MUC6 corpus (Management Succession) as shown in Table 3. The improvement of 7% in F1 is mainly due to the filtering of irrelevant dependency relations. Additionally, we noticed that 22% of testing sentences contain 2 answer templates, and entities in many of such templates are intertwined. One example is the sentence “Mr. Bronczek who is 39 years old succeeds Kenneth Newell 55 who was named to the new post of senior vice president”, which refers to 2 positions. We therefore we need to extract 2 templates “PersonIn: Bronczek, PersonOut: Newell” and “PersonIn: Newell, Post: senior vice president”. The discourse analysis is useful to extract the second template, while rejecting another long-distance template “PersonIn: Bronczek, PersonOut: Newell, Post: seniour vice president”. 
Another remark is that it is important to assign 2 anchors of ‘Cand_PersonIn’ and ‘Cand_PersonOut’ for the phrase “Kenneth Newell”. The characteristic of the ACE corpus is that it contains a large amount of variations, while only 2% of possible dependency paths are correct. Since many of the relations occur only at the level of single clause (for example, most instances of relation At), the discourse analysis is used to eliminate long-distance dependency paths. It allows us to significantly decrease the dimensionality of the problem. We noticed that 38% of relation paths in ACE contain a single relation, 28% contain 2 relations and 34% contain ≥ 3 relations. For the case of ≥ 3 relations, the analysis of dependency paths alone is not sufficient to eliminate the unreliable paths. Our results for general types and specific subtypes are presented in Tables 6 and 7, respectively. Case (%) P R F1 Zhang et al.’06 77% 65% 70% ARE 79% 66% 73% Table 4. Results on ACE RDC’03, general types Based on our results in Table 4, discourse and dependency relations support each other in different situations. We also notice that multiple instances require modeling of entities in the path. Thus, in our future work we need to enrich the search space for relation patterns. This observation corresponds to that reported in Zhang et al. (2006). Discourse parsing is very important to reduce the amount of variations for specific types on ACE 598 RDC’03, as there are 48 possible anchor types. Case (%) P R F1 Zhang et al.’06 64% 51% 57% ARE 67% 54% 61% Table 5. Results on ACE RDC’03, specific types The relatively small improvement of results in Table 5 may be attributed to the following reasons: 1) it is important to model the commonality relations, as was done by Zhou et al. (2006); and 2) our relation paths do not encode entities. This is different from Zhang et al. (2006), who were using entities in their subtrees. Overall, the results indicate that the use of discourse relations leads to improvement over the state-of-art systems. 7 Conclusion We presented a framework that permits the integration of discourse relations with dependency relations. Different from previous works, we tried to use the information about sentence structure based on discourse analysis. Consequently, our system improves the performance in comparison with the state-of-art IE systems. Another advantage of our approach is in using domain-independent parsers and features. Therefore, ARE may be easily portable into new domains. Currently, we explored only 2 types of relation paths: dependency and discourse. For future research, we plan to integrate more relations in our multi-resolution framework. References P. Cimiano and U. Reyle. 2003. Ontology-based semantic construction, underspecification and disambiguation. In Proc of the Prospects and Advances in the SyntaxSemantic Interface Workshop. P. Cimiano, U. Reyle and J. Saric. 2005. Ontology-driven discourse analysis for information extraction. Data & Knowledge Engineering, 55(1):59-83. H.L. Chieu and H.T. Ng. 2002. A Maximum Entropy Approach to Information Extraction from Semi-Structured and Free Text. In Proc of AAAI-2002. F. Ciravegna. 2001. Adaptive Information Extraction from Text by Rule Induction and Generalization. In Proc of IJCAI-2001. A. Culotta and J. Sorensen J. 2004. Dependency tree kernels for relation extraction. In Proc of ACL-2004. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. 
Journal of the Royal Statistical Society B, 39(1):1–38. B. Grosz and C. Sidner. 1986. Attention, Intentions and the Structure of Discourse. Computational Linguistics, 12(3):175-204. M. Halliday and R. Hasan. 1976. Cohesion in English. Longman, London. D. Lin. 1997. Dependency-based Evaluation of Minipar. In Workshop on the Evaluation of Parsing systems. M. Maslennikov, H.K. Goh and T.S. Chua. 2006. ARE: Instance Splitting Strategies for Dependency Relationbased Information Extraction. In Proc of ACL-2006. E. Miltsakaki. 2003. The Syntax-Discourse Interface: Effects of the Main-Subordinate Distinction on Attention Structure. PhD thesis. M.F. Moens and R. De Busser. 2002. First steps in building a model for the retrieval of court decisions. International Journal of Human-Computer Studies, 57(5):429-446. S. Pradhan, W. Ward, K. Hacioglu, J. Martin and D. Jurafsky. 2004. Shallow Semantic Parsing using Support Vector Machines. In Proc of HLT/NAACL-2004. E. Riloff, J. Wiebe, and W. Phillips. 2005. Exploiting Subjectivity Classification to Improve Information Extraction. In Proc of AAAI-2005. S. Soderland. 1999. Learning Information Extraction Rules for Semi-Structured and Free Text. Machine Learning, 34:233-272. R. Soricut and D. Marcu. 2003. Sentence Level Discourse Parsing using Syntactic and Lexical Information. In Proc of HLT/NAACL. M. Surdeanu, S. Harabagiu, J. Williams, P. Aarseth. 2003. Using Predicate Arguments Structures for Information Extraction. In Proc of ACL-2003. M. Taboada and W. Mann. 2005. Applications of Rhetorical Structure Theory. Discourse studies, 8(4). B. Webber, M. Stone, A. Joshi and A. Knott. 2002. Anaphora and Discourse Structure. Computational Linguistics, 29(4). J. Xiao, T.S. Chua and H. Cui. 2004. Cascading Use of Soft and Hard Matching Pattern Rules for Weakly Supervised Information Extraction. In Proc of COLING2004. M. Zhang, J. Zhang, J. Su and G. Zhou. 2006. A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features. In Proc of ACL-2006. G. Zhou, J. Su and M. Zhang. 2006. Modeling Commonality among Related Classes in Relation Extraction. In Proc of ACL-2006. 599
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 600–607, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Using Corpus Statistics on Entities to Improve Semi-supervised Relation Extraction from the Web Benjamin Rosenfeld Information Systems HU School of Business, Hebrew University, Jerusalem, Israel [email protected] Ronen Feldman Information Systems HU School of Business, Hebrew University, Jerusalem, Israel [email protected] Abstract Many errors produced by unsupervised and semi-supervised relation extraction (RE) systems occur because of wrong recognition of entities that participate in the relations. This is especially true for systems that do not use separate named-entity recognition components, instead relying on general-purpose shallow parsing. Such systems have greater applicability, because they are able to extract relations that contain attributes of unknown types. However, this generality comes with the cost in accuracy. In this paper we show how to use corpus statistics to validate and correct the arguments of extracted relation instances, improving the overall RE performance. We test the methods on SRES – a self-supervised Web relation extraction system. We also compare the performance of corpus-based methods to the performance of validation and correction methods based on supervised NER components. 1 Introduction Information Extraction (IE) is the task of extracting factual assertions from text. Most IE systems rely on knowledge engineering or on machine learning to generate the “task model” that is subsequently used for extracting instances of entities and relations from new text. In the knowledge engineering approach the model (usually in the form of extraction rules) is created manually, and in the machine learning approach the model is learned automatically from a manually labeled training set of documents. Both approaches require substantial human effort, particularly when applied to the broad range of documents, entities, and relations on the Web. In order to minimize the manual effort necessary to build Web IE systems, semisupervised and completely unsupervised systems are being developed by many researchers. The task of extracting facts from the Web has significantly different aims than the regular information extraction. The goal of regular IE is to identify and label all mentions of all instances of the given relation type inside a document or inside a collection of documents. Whereas, in the Web Extraction (WE) tasks we are only interested in extracting relation instances and not interested in particular mentions. This difference in goals leads to a difference in the methods of performance evaluation. The usual measures of performance of regular IE systems are precision, recall, and their combinations – the breakeven point and F-measure. Unfortunately, the true recall usually cannot be known for WE tasks. Consequently, for evaluating the performance of WE systems, the recall is substituted by the number of extracted instances. WE systems usually order the extracted instances by the system’s confidence in their correctness. The precision of top-confidence extractions is usually very high, but it gets progressively lower when lower-confidence candidates are considered. The curve that plots the number of extractions against precision level is the best indicator of system’s quality. 
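As an illustrative aside (not drawn from the paper's own code), the following minimal Python sketch shows how such a number-of-extractions versus precision curve can be computed once extractions are ranked by the system's confidence and a sample of them has been labeled for correctness by hand; the toy scores and labels are invented.

```python
# Sketch: number-of-extractions vs. precision curve for a Web RE system.
# Each extraction carries a confidence score and a manually assigned
# correctness label; true recall is unknown, so recall is not computed.
def precision_curve(extractions):
    """extractions: list of (confidence, is_correct) pairs."""
    ranked = sorted(extractions, key=lambda x: x[0], reverse=True)
    curve, correct = [], 0
    for k, (_, is_correct) in enumerate(ranked, start=1):
        correct += int(is_correct)
        curve.append((k, correct / k))   # (num extractions, precision@k)
    return curve

sample = [(0.95, True), (0.91, True), (0.80, False), (0.72, True), (0.40, False)]
for k, p in precision_curve(sample):
    print(f"{k} extractions -> precision {p:.2f}")
```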
Naturally, for a comparision be600 tween different systems to be meaningful, the evaluations must be performed on the same corpus. In this paper we are concerned with Web RE systems that extract binary relations between named entities. Most of such systems utilize separate named entity recognition (NER) components, which are usualy trained in a supervised way on a separate set of manually labeled documents. The NER components recognize and extract the values of relation attributes (also called arguments, or slots), while the RE systems are concerned with patterns of contexts in which the slots appear. However, good NER components only exist for common and very general entity types, such as Person, Organization, and Location. For some relations, the types of attributes are less common, and no ready NER components (or ready labeled training sets) exist for them. Also, some Web RE systems (e.g., KnowItAll (Etzioni, Cafarella et al. 2005)) do not use separate NER components even for known entity types, because such components are usually domain-specific and may perform poorly on cross-domain text collections extracted from the Web. In such cases, the values for relation attributes must be extracted by generic methods – shallow parsing (extracting noun phrases), or even simple substring extraction. Such methods are naturally much less precise and produce many entityrecognition errors (Feldman and Rosenfeld 2006). In this paper we propose several methods of using corpus statistics to improve Web RE precision by validating and correcting the entities extracted by generic methods. The task of Web Extraction is particularly suited for the corpus statistics-based methods because of very large size of the corpora involved, and because the system is not required to identify individual mentions of the relations. Our methods of entity validation and correction are based on the following two observations: First, the entities that appear in target relations will often also appear in many other contexts, some of which may strongly discriminate in favor of entities of specific type. For example, assume the system encounters a sentence “Oracle bought PeopleSoft.” If the system works without a NER component, it only knows that “Oracle” and “PeopleSoft” are proper noun phrases, and its confidence in correctness of a candidate relation instance Acquisition(Oracle, PeopleSoft) cannot be very high. However, both entities occur many times elsewhere in the corpus, sometimes in strongly discriminating contexts, such as “Oracle is a company that…” or “PeopleSoft Inc.” If the system somehow learned that such contexts indicate entities of the correct type for the Acquisition relation (i.e., companies), then the system would be able to boost its confidence in both entities (“Oracle” and “PeopleSoft”) being of correct types and, consequently, in (Oracle, PeopleSoft) being a correct instance of the Acquisition relation. Another observation that we can use is the fact that the entities, in which we are interested, usually have sufficient frequency in the corpus for statistical term extraction methods to perform reasonably well. These methods may often correct a wrongly placed entity boundary, which is a common mistake of general-purpose shallow parsers. In this paper we show how to use these observations to supplement a Web RE system with an entity validation and correction component, which is able to significantly improve the system’s accuracy. 
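The first observation can be illustrated with a rough sketch that counts how often a candidate entity occurs inside type-discriminating contexts in the corpus. The two context templates below are lifted from the examples above and are hand-written only for illustration; in the approach described later such contexts are learned automatically, so this is a sketch of the idea rather than of the actual component.

```python
# Sketch: evidence that an entity is of a given type, measured as the
# number of corpus sentences in which it appears inside a
# type-discriminating context (context templates are illustrative only).
import re

COMPANY_CONTEXTS = [
    "{e} is a company",
    "{e} Inc",
]

def type_evidence(entity, sentences, contexts=COMPANY_CONTEXTS):
    hits = 0
    for sent in sentences:
        for ctx in contexts:
            if re.search(re.escape(ctx.format(e=entity)), sent, flags=re.IGNORECASE):
                hits += 1
                break          # count each sentence at most once
    return hits

corpus = [
    "Oracle is a company that sells database software.",
    "PeopleSoft Inc. reported strong earnings.",
    "Oracle bought PeopleSoft.",
]
print(type_evidence("Oracle", corpus))      # 1
print(type_evidence("PeopleSoft", corpus))  # 1
```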
We evaluate the methods using SRES (Feldman and Rosenfeld 2006) – a Web RE system, designed to extend and improve KnowItAll (Etzioni, Cafarella et al. 2005). The contributions of this paper are as follows: • We show how to automatically generate the validating patterns for the target relation arguments, and how to integrate the results produced by the validating patterns into the whole relation extraction system. • We show how to use corpus statistics and term extraction methods to correct the boundaries of relation arguments. • We experimentally compare the improvement produced by the corpus-based entity validation and correction methods with the improvements produced by two alternative validators – a CRF-based NER system trained on a separate labeled corpus, and a small manually-built rule-based NER component. The rest of the paper is organized as follows: Section 2 describes previous work. Section 3 outlines the general design principles of SRES and briefly describes its components. Section 4 describes in detail the different entity validation and correction methods, and Section 5 presents their 601 experimental evaluation. Section 6 contains conclusions and directions for future work. 2 Related Work We are not aware of any work that deals specifically with validation and/or correction of entity recognition for the purposes of improving relation extraction accuracy. However, the background techniques of our methods are relatively simple and known. The validation is based on the same ideas that underlie semi-supervised entity extraction (Etzioni, Cafarella et al. 2005), and uses a simplified SRES code. The boundary correction process utilizes well-known term extraction methods, e.g., (Su, Wu et al. 1994). We also recently became aware of the work by Downey, Broadhead and Etzioni (2007) that deals with locating entities of arbitrary types in large corpora using corpus statistics. The IE systems most similar to SRES are based on bootstrap learning: Mutual Bootstrapping (Riloff and Jones 1999), the DIPRE system (Brin 1998), and the Snowball system (Agichtein and Gravano 2000). Ravichandran and Hovy (Ravichandran and Hovy 2002) also use bootstrapping, and learn simple surface patterns for extracting binary relations from the Web. Unlike these systems, SRES surface patterns allow gaps that can be matched by any sequences of tokens. This makes SRES patterns more general, and allows to recognize instances in sentences inaccessible to the simple surface patterns of systems such as (Brin 1998; Riloff and Jones 1999; Ravichandran and Hovy 2002). Another direction for unsupervised relation learning was taken in (Hasegawa, Sekine et al. 2004; Chen, Ji et al. 2005). These systems use a NER system to identify frequent pairs of entities and then cluster the pairs based on the types of the entities and the words appearing between the entities. The main benefit of this approach is that all relations between two entity types can be discovered simultaneously and there is no need for the user to supply the relations definitions. 3 Description of SRES The goal of SRES is extracting instances of specified relations from the Web without human supervision. Accordingly, the supervised input to the system is limited to the specifications of the target relations. A specification for a given relation consists of the relation schema and a small set of seeds – known true instances of the relation. 
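Purely as an illustration, a relation specification of this kind might be encoded as below (invented seed pairs and toy sentences, not SRES's actual code); the helper collects the token strings occurring between seed arguments in corpus sentences, the kind of raw material from which the Pattern Learner described in Section 3.1 generalizes its patterns, whose real language is considerably richer.

```python
# Sketch: a relation specification (schema + seeds) and a crude
# collection of infix contexts around seed occurrences.
import re
from collections import Counter

SPEC = {
    "name": "Acquisition",
    "slots": [("Buyer", "ProperNP"), ("Acquired", "ProperNP")],
    "ordered": True,
    "seeds": [("Oracle", "PeopleSoft"), ("Google", "YouTube")],  # invented seeds
}

def seed_contexts(spec, sentences):
    contexts = Counter()
    for arg1, arg2 in spec["seeds"]:
        orders = [(arg1, arg2)] if spec["ordered"] else [(arg1, arg2), (arg2, arg1)]
        for a, b in orders:
            pattern = re.escape(a) + r"\s+(.+?)\s+" + re.escape(b)
            for sent in sentences:
                m = re.search(pattern, sent)
                if m:
                    contexts[m.group(1)] += 1
    return contexts

corpus = ["Oracle bought PeopleSoft in 2005.",
          "It was Google that acquired YouTube.",
          "PeopleSoft resisted Oracle's bid."]
print(seed_contexts(SPEC, corpus).most_common())
# e.g. [('bought', 1), ('that acquired', 1)]
```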
In the fullscale SRES, the seeds are also generated automatically, by using a set of generic patterns instantiated with the relation schema. However, the seed generation is not relevant to this paper. A relation schema specifies the name of the relation, the names and types of its arguments, and the arguments ordering. For example, the schema of the Acquisition relation Acquisition(Buyer=ProperNP, Acquired=ProperNP) ordered specifies that Acquisition has two slots, named Buyer and Acquired, which must be filled with entities of type ProperNP. The order of the slots is important (as signified by the word “ordered”, and as opposed to relations like Merger, which are “unordered” or, in binary case, “symmetric”). The baseline SRES does not utilize a named entity recognizer, instead using a shallow parser for exracting the relation slots. Thus, the only allowed entity types are ProperNP, CommonNP, and AnyNP, which mean the heads of, respectively, proper, common, and arbitrary noun phrases. In the experimental section we compare the baseline SRES to its extensions containing additional NER components. When using those components we allow further subtypes of ProperNP, and the relation schema above becomes … (Buyer=Company, Acquired=Company) … The main components of SRES are the Pattern Learner, the Instance Extractor, and the Classifier. The Pattern Learner uses the seeds to learn likely patterns of relation occurrences. Then, the Instance Extractor uses the patterns to extract the candidate instances from the sentences. Finally, the Classifier assigns the confidence score to each extraction. We shall now briefly describe these components. 3.1 Pattern Learner The Pattern Learner receives a relation schema and a set of seeds. Then it finds the occurences of seeds inside a large (unlabeled) text corpus, analyzes their contexts, and extracts common patterns among these contexts. The details of the patterns language and the process of pattern learning are not significant for this paper, and are described fully in (Feldman and Rosenfeld 2006). 602 3.2 Instance Extractor The Instance Extractor applies the patterns generated by the Pattern Learner to the text corpus. In order to be able to match the slots of the patterns, the Instance Extractor utilizes an external shallow parser from the OpenNLP package (http://opennlp.sourceforge.net/), which is able to find all proper and common noun phrases in a sentence. These phrases are matched to the slots of the patterns. In other respects, the pattern matching and extraction process is straightforward. 3.3 Classifier The goal of the final classification stage is to filter the list of all extracted instances, keeping the correct extractions, and removing mistakes that would always occur regardless of the quality of the patterns. It is of course impossible to know which extractions are correct, but there exist properties of patterns and pattern matches that increase or decrease the confidence in the extractions that they produce. These properties are turned into a set of binary features, which are processed by a linear featurerich classifier. The classifier receives a feature vector for a candidate, and produces a confidence score between 0 and 1. The set of features is small and is not specific to any particular relation. This allows to train a model using a small amount of labeled data for one relation, and then use the model for scoring the candidates of all other relations. 
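A minimal sketch of such a confidence classifier is given below. The binary feature names and weights are invented, and the logistic squashing is only one convenient way to map a linear score into (0, 1); the paper does not commit to this exact form.

```python
# Sketch: a linear, feature-rich confidence classifier. A candidate is
# represented by the set of binary features that fire for it, and the
# classifier maps the weighted sum to a confidence score in (0, 1).
import math

WEIGHTS = {                      # hypothetical trained weights
    "pattern_precision_high": 1.8,
    "matched_by_many_patterns": 1.1,
    "slot_is_capitalized": 0.4,
    "pattern_has_long_gap": -1.3,
}
BIAS = -0.7

def confidence(features):
    """features: set of binary feature names that fire for a candidate."""
    z = BIAS + sum(WEIGHTS.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-z))        # squash into (0, 1)

print(round(confidence({"pattern_precision_high",
                        "matched_by_many_patterns"}), 3))   # high confidence
print(round(confidence({"pattern_has_long_gap"}), 3))        # low confidence
```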
Since the supervised training stage needs to be run only once, it is a part of the system development, and the complete system remains unsupervised, as demonstrated in (Feldman and Rosenfeld 2006). 4 Entity Validation and Correction In this paper we describe three different methods of validation and correction of relation arguments in the extracted instances. Two of them are “classical” and are based, respectively, on the knowledgeengineering, and on the statistical supervised approaches to the named entity recognition problems. The third is our novel approach, based on redundancy and corpus statistics. The methods are implemented as components for SRES, called Entity Validators, inserted between the Instance Extractor and the Classifier. The result of applying Entity Validator to a candidate instance is an (optionally) fixed instance, with validity values attached to all slots. There are three validity values: valid, invalid, and uncertain. The Classifier uses the validity values by converting them into two additional binary features, which are then able to influence the confidence of extractions. We shall now describe the three different validators in details. 4.1 Small Rule-based NER validator This validator is a small Perl script that checks whether a character string conforms to a set of simple regular expression patterns, and whether it appears inside lists of known named entities. There are two sets of regular expression patterns – for Person and for Company entity types, and three large lists – for known personal names, known companies, and “other known named entities”, currently including locations, universities, and government agencies. The manually written regular expression represent simple regularities in the internal structure of the entity types. For example, the patterns for Person include: Person = KnownFirstName [Initial] LastName Person = Honorific [FirstName] [Initial] LastName Honorific = (“Mr” | “Ms” | “Dr” |…) [“.”] Initial = CapitalLetter [“.”] KnownFirstName = member of KnownPersonalNamesList FirstName = CapitalizedWord LastName = CapitalizedWord LastName = CapitalizedWord [“–”CapitalizedWord] LastName = (“o” | “de” | …) “`”CapitalizedWord … while the patterns for Company include: Company = KnownCompanyName Company = CompanyName CompanyDesignator Company = CompanyName FrequentCompanySfx KnownCompanyName = member of KnownCompaniesList CompanyName = CapitalizedWord + CompanyDesignator = “inc” | “corp” | “co” | … FrequentCompanySfx = “systems” | “software” | … … The validator works in the following way: it receives a sentence with a labeled candidate entity of a specified entity type (which can be either Person or Company). It then applies all of the regular expression patterns to the labeled text and to its en603 closing context. It also checks for membership in the lists of known entities. If a boundary is incorrectly placed according to the patterns or to the lists, it is fixed. Then, the following result is returned: Valid, if some pattern/list of the right entity type matched the candidate entity, while there were no matches for patterns/lists of other entity types. Invalid, if no pattern/list of the right entity type matched the candidate entity, while there were matches for patterns/lists of other entity types. Uncertain, otherwise, that is either if there were no matches at all, or if both correct and incorrect entity types matched. 
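The decision logic just described can be sketched in a few lines of Python (the original component is a Perl script); only a small illustrative subset of the regular expressions is shown and the gazetteer lists are omitted, so the behavior is not identical to the real validator.

```python
# Sketch: rule-based validation of a candidate entity against the
# expected type, returning valid / invalid / uncertain.
import re

HONORIFIC = r"(?:Mr|Ms|Mrs|Dr)\.?"
CAP = r"[A-Z][a-z]+"
CAPWORD = r"[A-Z][A-Za-z&-]*"          # allows internal capitals, e.g. PeopleSoft

PATTERNS = {
    "Person": [
        re.compile(rf"^{HONORIFIC}\s+(?:{CAP}\s+)?{CAP}$"),   # Mr. John Smith
        re.compile(rf"^{CAP}\s+(?:[A-Z]\.\s+)?{CAP}$"),        # John F. Kennedy
    ],
    "Company": [
        re.compile(rf"^(?:{CAPWORD}\s+)+(?:Inc|Corp|Co|Ltd)\.?$"),   # Oracle Corp.
        re.compile(rf"^(?:{CAPWORD}\s+)+(?:Systems|Software)$"),     # Cisco Systems
    ],
}

def validate(candidate, expected_type):
    matches = {t: any(p.match(candidate) for p in pats)
               for t, pats in PATTERNS.items()}
    right = matches.pop(expected_type)
    wrong = any(matches.values())
    if right and not wrong:
        return "valid"
    if wrong and not right:
        return "invalid"
    return "uncertain"

print(validate("Mr. John Smith", "Person"))   # valid
print(validate("PeopleSoft Inc.", "Person"))  # invalid
print(validate("Oracle", "Company"))          # uncertain (gazetteer lists omitted)
```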
The number of patterns is relatively small, and the whole component consists of about 300 lines in Perl and costs several person-days of knowledge engineering work. Despite its simplicity, we will show in the experimental section that it is quite effective, and even often outperforms the CRFbased NER component, described below. 4.2 CRF-based NER validator This validator is built using a feature-rich CRFbased sequence classifier, trained upon an English dataset of the CoNLL 2003 shared task (Rosenfeld, Fresko et al. 2005). For the gazetteer lists it uses the same large lists as the rule-based component described above. The validator receives a sentence with a labeled candidate entity of a specified entity type (which can be either Person or Company). It then sends the sentence to the CRF-based classifier, which labels all named entities it knows – Dates, Times, Percents, Persons, Organizations, and Locations. If the CRF classifier places the entity boundaries differently, they are fixed. Then, the following result is returned: Valid, if CRF classification of the entity accords with the expected argument type. Invalid, if CRF classification of the entity is different from the expected argument type. Uncertain, otherwise, that is if the CRF classifier didn’t recognize the entity at all. 4.3 Corpus-based NER validator The goal of building the corpus-based NER validator is to provide the same level of performance as the supervised NER components, while requiring neither additional human supervision nor additional labeled corpora or other resources. There are several important facts that help achieve this goal. First, the relation instances that are used as seeds for the pattern learning are known to contain correct instances of the right entity type. These instances can be used as seeds in their own right, for learning the patterns of occurrence of the corresponding entity types. Second, the entities in which we are interested usually appear in the corpus with a sufficient frequency. The validation is based on the first observation, while the boundary fixing on the second. Corpus-based entity validation There is a preparation stage, during which the information required for validation is extracted from the corpus. This information is the lists of all entities of every type that appears in the target relations. In order to extract these lists we use a simplified SRES. The entities are considered to be unary relations, and the seeds for them are taken from the slots of the target binary relations seeds. We don’t use the Classifier on the extracted entity instances. Instead, for every extracted instance we record the number of different sentences the entity was extracted from. During the validation process, the validator’s task is to evaluate a given candidate entity instance. The validator compares the number of times the instance was extracted (during the preparation stage) by the patterns for the correct entity type, and by the patterns for all other entity types. The validator then returns Valid, if the number of times the entity was extracted for the specified entity type is at least 5, and at least two times bigger than the number of times it was extracted for all other entity types. Invalid, if the number of times the instance was extracted for the specified entity type is less than 5, and at least 2 times smaller than the number of times it was extracted for all other entity types. 
604 Uncertain, otherwise, that is if it was never extracted at all, or extracted with similar frequency for both correct and wrong entity types. Corpus-based correction of entity boundaries Our entity boundaries correction mechanism is similar to the known statistical term extraction techniques (Su, Wu et al. 1994). It is based on the assumption that the component words of a term (an entity in our case) are more tightly bound to each other than to the context. In the statistical sense, this fact is expressed by a high mutual information between the adjacent words belonging to the same term. There are two possible boundary fixes: removing words from the candidate entity, or adding words from the context to the entity. There is a significant practical difference between the two cases. Assume that an entity boundary was placed too broadly, and included extra words. If this was a chance occurrence (and only such cases can be found by statistical methods), then the resulting sequence of tokens will be very infrequent, while its parts will have relatively high frequency. For example, consider a sequence “Formerly Microsoft Corp.”, which is produced by mistakenly labeling “Formerly” as a proper noun by the PoS tagger. While it is easy to know from the frequencies that a boundary mistake was made, it is unclear (to the system) which part is the correct entity. But since the entity (one of the parts of the candidate) has a high frequency, there is a chance that the relation instance, in which the entity appears, will be repeated elsewhere in the corpus and will be extracted correctly there. Therefore, in such case, the simplest recourse is to simply label the entity as Invalid, and not to try fixing the boundaries. On the other hand, if a word was missed from an entity (e.g., “Beverly O”, instead of “Beverly O ' Neill”), the resulting sequence will be frequent. Moreover, it is quite probable that the same boundary mistake is made in many places, because the same sequence of tokens is being analyzed in all those places. Therefore, it makes sense to try to fix the bounary in this case, especially since it can be done simply and reliably: a word (or several words) is attached to the entity string if both their frequencies and their mutual information are above a threshold. 5 Experimental Evaluation The experiments described in this paper aim to confirm the effectiveness of the proposed corpusbased relation argument validation and correction method, and to compare its performance with the classical knowledge-engineering-based and supervised-training-based methods. The experiments were performed with five relations: Acquisition(BuyerCompany, AcquiredCompany), Merger(Company1, Company2), CEO_Of(Company, Person), MayorOf(City, Person), InventorOf(Person, Invention). The data for the experiments were collected by the KnowItAll crawler. The data for the Acquisition and Merger consist of about 900,000 sentences for each of the two relations. The data for the bound relations consist of sentences, such that each contains one of a hundred values of the first (bound) attribute. Half of the hundred are frequent entities (>100,000 search engine hits), and another half are rare (<10,000 hits). For evaluating the validators we randomly selected a set of 10000 sentences from the corpora for each of the relations, and manually evaluated the SRES results generated from these sentences. 
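Before turning to the results, the corpus-based decision rule and the mutual-information boundary extension of Section 4.3 can be summarized in a compact sketch. The count threshold of 5 and the factor of 2 follow the text; the frequency and PMI thresholds in the boundary-extension function are placeholders, since the paper does not report their values, and the per-type extraction counts are assumed to come from the preparation stage described above.

```python
# Sketch: corpus-based entity validation and boundary extension.
import math

def validate(counts, expected_type):
    """counts: {entity_type: number of sentences from which the candidate
    was extracted by patterns of that type} (from the preparation stage)."""
    right = counts.get(expected_type, 0)
    wrong = sum(c for t, c in counts.items() if t != expected_type)
    if right >= 5 and right >= 2 * wrong:
        return "valid"
    if right < 5 and 2 * right <= wrong:
        return "invalid"
    return "uncertain"

def extend_boundary(entity_tokens, next_token, unigram, bigram, total,
                    min_freq=5, min_pmi=3.0):
    """Attach the adjacent context word if both its frequency and its
    pointwise mutual information with the entity's last word are high."""
    last = entity_tokens[-1]
    pair = (last, next_token)
    if unigram.get(next_token, 0) < min_freq or bigram.get(pair, 0) < min_freq:
        return entity_tokens
    pmi = math.log2(bigram[pair] * total / (unigram[last] * unigram[next_token]))
    return entity_tokens + [next_token] if pmi >= min_pmi else entity_tokens

print(validate({"Company": 12, "Person": 3}, "Company"))   # valid
print(validate({"Company": 2, "Person": 9}, "Company"))    # invalid

unigram = {"Beverly": 40, "O": 38, "'": 50, "Neill": 36}
bigram = {("O", "'"): 35}
print(extend_boundary(["Beverly", "O"], "'", unigram, bigram, total=100000))
```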
Four sets of results were evaluated: the baseline results produced without any NER validator, and three sets of results produced using three different NER validators. For the InventorOf relation, only the corpus-based validator results can be produced, since the other two NER components cannot be adapted to validate/correct entities of type Invention. The results for the five relations are shown in Figure 1.
[Figure 1. Comparison between Baseline-SRES and its extensions with three different NER validators: a simple Rule-Based one, a CRF-based statistical one, and a Corpus-based one. Each panel (Acquisition, Merger, CeoOf, InventorOf) plots precision against the number of correct extractions for the Baseline, RB-NER, CRF and Corpus runs; the InventorOf panel contains only the Baseline and Corpus curves.]
Several conclusions can be drawn from the graphs. First, all of the NER validators improve over the baseline SRES, sometimes as much as doubling the recall at the same level of precision. In most cases the three validators show roughly similar levels of performance. A notable difference is the CEO_Of relation, where the simple rule-based component performs much better than CRF, which performs yet better than the corpus-based component. The CEO_Of relation is tested as bound, which means that only the second relation argument, of type Person, is validated. The Person entities have much more rigid internal structure than the other entities – Companies and Inventions. Consequently, the best performing of the three validators is the rule-based, which directly tests this internal structure. The CRF-based validator is also able to take advantage of the structure, although in a weaker manner. The Corpus-based validator, however, works purely on the basis of context, entirely disregarding the internal structure of entities, and thus performs worst of all in this case. On the other hand, the Corpus-based validator is able to improve the results for the Inventor relation, which the other two validators are completely unable to do. It is also of interest to compare the performance of the CRF-based and the rule-based NER components in other cases. As can be seen, in most cases the rule-based component, despite its simplicity, outperforms the CRF-based one. The possible reason for this is that the relation extraction setting is significantly different from the classical named entity recognition setting. A classical NER system is set to maximize the F1 measure of all mentions of all entities in the corpus. A relation argument extractor, on the other hand, should maximize its performance on relation arguments, and apparently their statistical properties are often significantly different. 6 Conclusions We have presented a novel method for validation and correction of relation arguments for the state-of-the-art unsupervised Web relation extraction system SRES. The method is based on corpus statistics and requires no human supervision and no additional corpus resources beyond the corpus that is used for relation extraction.
We showed experimentally the effectiveness of our method, which performed comparably to both simple rule-based NER and a statistical CRF-based NER in the task of validating Companies, and somewhat worse in the task of validating Persons, 606 due to its complete disregard of internal structure of entities. The ways to learn and use this structure in an unsupervised way are left for future research. Our method also successfully validated the Invention entities, which are inaccessible to the other methods due to the lack of training data. In our experiments we made use of a unique feature of SRES system – a feature-rich classifier that assigns confidence score to the candidate instances, basing its decisions on various features of the patterns and of the contexts from which the candidates were extracted. This architecture allows easy integration of the entity validation components as additional feature generators. We believe, however, that our results have greater applicability, and that the corpus statistics-based components can be added to RE systems with other architectures as well. References Agichtein, E. and L. Gravano (2000). Snowball: Extracting Relations from Large Plain-Text Collections. Proceedings of the 5th ACM International Conference on Digital Libraries (DL). Brin, S. (1998). Extracting Patterns and Relations from the World Wide Web. WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT’98, Valencia, Spain. Chen, J., D. Ji, C. L. Tan and Z. Niu (2005). Unsupervised Feature Selection for Relation Extraction. IJCNLP-05, Jeju Island, Korea. Downey, D., M. Broadhead and O. Etzioni (2007). Locating Complex Named Entities in Web Text. IJCAI07. Etzioni, O., M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld and A. Yates (2005). Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence 165(1): 91-134. Feldman, R. and B. Rosenfeld (2006). Boosting Unsupervised Relation Extraction by Using NER. EMNLP-06, Sydney, Australia. Feldman, R. and B. Rosenfeld (2006). Self-Supervised Relation Extraction from the Web. ISMIS-2006, Bari, Italy. Hasegawa, T., S. Sekine and R. Grishman (2004). Discovering Relations among Named Entities from Large Corpora. ACL 2004. Ravichandran, D. and E. Hovy (2002). Learning Surface Text Patterns for a Question Answering System. 40th ACL Conference. Riloff, E. and R. Jones (1999). Learning Dictionaries for Information Extraction by Multi-level Bootstrapping. AAAI-99. Rosenfeld, B., M. Fresko and R. Feldman (2005). A Systematic Comparison of Feature-Rich Probabilistic Classifiers for NER Tasks. PKDD. Su, K.-Y., M.-W. Wu and J.-S. Chang (1994). A Corpus-based Approach to Automatic Compound Extraction. Meeting of the Association for Computational Linguistics: 242-247. 607
2007
76
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 608–615, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Beyond Projectivity: Multilingual Evaluation of Constraints and Measures on Non-Projective Structures Jiˇr´ı Havelka Institute of Formal and Applied Linguistics Charles University in Prague Czech Republic [email protected] Abstract Dependency analysis of natural language has gained importance for its applicability to NLP tasks. Non-projective structures are common in dependency analysis, therefore we need fine-grained means of describing them, especially for the purposes of machine-learning oriented approaches like parsing. We present an evaluation on twelve languages which explores several constraints and measures on non-projective structures. We pursue an edge-based approach concentrating on properties of individual edges as opposed to properties of whole trees. In our evaluation, we include previously unreported measures taking into account levels of nodes in dependency trees. Our empirical results corroborate theoretical results and show that an edge-based approach using levels of nodes provides an accurate and at the same time expressive means for capturing non-projective structures in natural language. 1 Introduction Dependency analysis of natural language has been gaining an ever increasing interest thanks to its applicability in many tasks of NLP—a recent example is the dependency parsing work of McDonald et al. (2005), which introduces an approach based on the search for maximum spanning trees, capable of handling non-projective structures naturally. The study of dependency structures occurring in natural language can be approached from two sides: by trying to delimit permissible dependency structures through formal constraints (for a recent review paper, see Kuhlmann and Nivre (2006)), or by providing their linguistic description (see e.g. Vesel´a et al. (2004) and Hajiˇcov´a et al. (2004) for a linguistic analysis of non-projective constructions in Czech.1) We think that it is worth bearing in mind that neither syntactic structures in dependency treebanks, nor structures arising in machine-learning approaches, such as MST dependency parsing, need a priori fall into any formal subclass of dependency trees. We should therefore aim at formal means capable of describing all non-projective structures that are both expressive and fine-grained enough to be useful in statistical approaches, and at the same time suitable for an adequate linguistic description.2 Holan et al. (1998) first defined an infinite hierarchy of classes of dependency trees, going from projective to unrestricted dependency trees, based on the notion of gap degree for subtrees (cf. Section 3). Holan et al. (2000) present linguistic considerations concerning Czech and English with respect to this hierarchy (cf. also Section 6). In this paper, we consider all constraints and measures evaluated by Kuhlmann and Nivre (2006)— with some minor variations, cf. Section 4.2. Ad1These two papers contain an error concerning an alternative condition of projectivity, which is rectified in Havelka (2005). 2The importance of such means becomes more evident from the asymptotically negligible proportion of projective trees to all dependency trees; there are super-exponentially many unrestricted trees compared to exponentially many projective trees on n nodes. Unrestricted dependency trees (i.e. 
labelled rooted trees) and projective dependency trees are counted by sequences A000169 and A006013 (offset 1), respectively, in the On-Line Encyclopedia of Sequences (Sloane, 2007). 608 ditionally, we introduce several measures not considered in their work. We also extend the empirical basis from Czech and Danish to twelve languages, which were made available in the CoNLL-X shared task on dependency parsing. In our evaluation, we do not address the issue of what possible effects the annotations and/or conversions used when creating the data might have on non-projective structures in the different languages. The newly considered measures have the first or both of the following desiderata: they are based on properties of individual non-projective edges (cf. Definition 3); and they take into account levels of nodes in dependency trees explicitly. None of the constraints and measures in Kuhlmann and Nivre (2006) take into account levels of nodes explicitly. Level types of non-projective edges, introduced by Havelka (2005), have both desiderata. They provide an edge-based means of characterizing all nonprojective structures; they also have some further interesting formal properties. We propose a novel, more detailed measure, level signatures of non-projective edges, combining levels of nodes with the partitioning of gaps of nonprojective edges into components. We derive a formal property of these signatures that links them to the constraint of well-nestedness, which is an extension of the result for level types (see also Havelka (2007b)). The paper is organized as follows: Section 2 contains formal preliminaries; in Section 3 we review the constraint of projectivity and define related notions necessary in Section 4, where we define and discuss all evaluated constraints and measures; Section 5 describes our data and experimental setup; empirical results are presented in Section 6. 2 Formal preliminaries Here we provide basic definitions and notation used in subsequent sections. Definition 1 A dependency tree is a triple (V,→,⪯), where V is a finite set of nodes, →a dependency relation on V, and ⪯a total order on V.3 3We adopt the following convention: nodes are drawn topdown according to their increasing level, with nodes on the same level being the same distance from the root; nodes are drawn from left to right according to the total order on nodes; edges are drawn as solid lines, paths as dotted curves. Relation →models linguistic dependency, and so represents a directed, rooted tree on V. There are many ways of characterizing rooted trees, we give here a characterization via the properties of →: there is a root r ∈V such that r →∗v for all v ∈V and there is a unique edge p →v for all v ∈V, v ̸= r, and no edge into r. Relation →∗is the reflexive transitive closure of →and is usually called subordination. For each node i we define its level as the length of the path r →∗i; we denote it leveli. The symmetrization ↔= →∪→−1 makes it possible to talk about edges (pairs of nodes i, j such that i →j) without explicitly specifying the parent (head; i here) and the child (dependent; j here); so →represents directed edges and ↔undirected edges. To retain the ability to talk about the direction of edges, we define Parenti↔j =  i if i →j j if j →i and Childi↔j =  j if i →j i if j →i. 
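As a computational reading of these preliminaries (our own sketch, not code from the paper), a dependency tree over nodes 1..n, numbered in the surface order ⪯, can be stored as a head function; levels and subordination then follow directly.

class DependencyTree:
    # heads maps each node (1..n, in surface order) to its parent;
    # the node whose head is 0 is the root r.
    def __init__(self, heads):
        self.heads = heads

    def level(self, i):
        # length of the path r ->* i
        depth = 0
        while self.heads[i] != 0:
            i = self.heads[i]
            depth += 1
        return depth

    def dominates(self, i, j):
        # i ->* j (reflexive transitive closure, i.e. subordination)
        while True:
            if j == i:
                return True
            if self.heads[j] == 0:
                return False
            j = self.heads[j]

    def subtree(self, i):
        return {j for j in self.heads if self.dominates(i, j)}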
To make the exposition clearer by avoiding overuse of the symbol →, we introduce notation for rooted subtrees not only for nodes, but also for edges: Subtreei = {v ∈V | i →∗v}, Subtreei↔j = {v ∈V | Parenti↔j →∗v} (note that the subtree of an edge is defined relative to its parent node). To be able to talk concisely about the total order on nodes ⪯, we define open intervals whose endpoints need not be in a prescribed order (i, j) = {v ∈V | min⪯{i, j} ≺v ≺ max⪯{i, j}}. 3 Condition of projectivity Projectivity of a dependency tree can be characterized both through the properties of its subtrees and through the properties of its edges.4 Definition 2 A dependency tree T = (V,→,⪯) is projective if it satisfies the following equivalent conditions: i →j & v ∈(i, j) =⇒v ∈Subtreei , (Harper & Hays) j ∈Subtreei & v ∈(i, j) =⇒v ∈Subtreei , (Lecerf & Ihm) j1, j2 ∈Subtreei & v ∈(j1, j2) =⇒v ∈Subtreei . (Fitialov) Otherwise T is non-projective. 4There are many other equivalent characterizations of projectivity, we give only three historically prominent ones. 609 It was Marcus (1965) who proved the equivalence of the conditions in Definition 2, proposed in the early 1960’s (we denote them by the names of those to whom Marcus attributes their authorship). We see that the antecedents of the projectivity conditions move from edge-focused to subtreefocused (i.e. from talking about dependency to talking about subordination). It is the condition of Fitialov that has been mostly explored when studying so-called relaxations of projectivity. (The condition is usually worded as follows: A dependency tree is projective if the nodes of all its subtrees constitute contiguous intervals in the total order on nodes.) However, we find the condition of Harper & Hays to be the most appealing from the linguistic point of view because it gives prominence to the primary notion of dependency edges over the derived notion of subordination. We therefore use an edge-based approach whenever we find it suitable. To that end, we need the notion of a nonprojective edge and its gap. Definition 3 For any edge i ↔j in a dependency tree T we define its gap as follows Gapi↔j = {v ∈V | v ∈(i, j) & v /∈Subtreei↔j} . An edge with an empty gap is projective, an edge whose gap is non-empty is non-projective.5 We see that non-projective are those edges i ↔j for which there is a node v such that together they violate the condition of Harper & Hays; we group all such nodes v into Gapi↔j, the gap of the nonprojective edge i ↔j. The notion of gap is defined differently for subtrees of a dependency tree (Holan et al., 1998; Bodirsky et al., 2005). There it is defined through the nodes of the whole dependency tree not in the considered subtree that intervene between its nodes in the total order on nodes ⪯. 4 Relaxations of projectivity: evaluated constraints and measures In this section we present all constraints and measures on dependency trees that we evaluate empir5In figures with sample configurations we adopt this convention: for a non-projective edge, we draw all nodes in its gap explicitly and assume that no node on any path crossing the span of the edge lies in the interval delimited by its endpoints. ically in Section 6. First we give definitions of global constraints on dependency trees, then we present measures of non-projectivity based on properties of individual non-projective edges (some of the edge-based measures have corresponding treebased counterparts, however we do not discuss them in detail). 
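Continuing the sketch above, the condition of Harper & Hays yields a direct edge-based projectivity test (again our own illustration, with integer node indices standing in for the total order on nodes).

def edge_is_projective(tree, parent, child):
    # Harper & Hays: every node strictly between the endpoints of the
    # edge parent -> child must be subordinated to the parent.
    lo, hi = min(parent, child), max(parent, child)
    return all(tree.dominates(parent, v) for v in range(lo + 1, hi))

def tree_is_projective(tree):
    return all(edge_is_projective(tree, tree.heads[c], c)
               for c in tree.heads if tree.heads[c] != 0)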
4.1 Tree constraints We consider the following three global constraints on dependency trees: projectivity, planarity, and well-nestedness. All three constraints can be applied to more general structures, e.g. dependency forests or even general directed graphs. Here we adhere to their primary application to dependency trees. Definition 4 A dependency tree T is non-planar if there are two edges i1 ↔j1, i2 ↔j2 in T such that i1 ∈(i2, j2) & i2 ∈(i1, j1) . Otherwise T is planar. Planarity is a relaxation of projectivity that corresponds to the “no crossing edges” constraint. Although it might get confused with projectivity, it is in fact a strictly weaker constraint. Planarity is equivalent to projectivity for dependency trees with their root node at either the left or right fringe of the tree. Planarity is a recent name for a constraint studied under different names already in the 1960’s— we are aware of independent work in the USSR (weakly non-projective trees; see the survey paper by Dikovsky and Modina (2000) for references) and in Czechoslovakia (smooth trees; Nebesk´y (1979) presents a survey of his results). Definition 5 A dependency tree T is ill-nested if there are two non-projective edges i1 ↔j1, i2 ↔j2 in T such that i1 ∈Gapi2↔j2 & i2 ∈Gapi1↔j1 . Otherwise T is well-nested. Well-nestedness was proposed by Bodirsky et al. (2005). The original formulation forbids interleaving of disjoint subtrees in the total order on nodes; we present an equivalent formulation in terms of non-projective edges, derived in (Havelka, 2007b). Figure 1 illustrates the subset hierarchy between classes of dependency trees satisfying the particular constraints: projective ⊊planar ⊊well-nested ⊊unrestricted 610 projective planar well-nested unrestricted Figure 1: Sample dependency trees (trees satisfy corresponding constraints and violate all preceding ones) 4.2 Edge measures The first two measures are based on two ways of partitioning the gap of a non-projective edge—into intervals and into components. The third measure, level type, is based on levels of nodes. We also propose a novel measure combining levels of nodes and the partitioning of gaps into components. Definition 6 For any edge i ↔j in a dependency tree T we define its interval degree as follows idegi↔j = number of intervals in Gapi↔j . By an interval we mean a contiguous interval in ⪯, i.e. a maximal set of nodes comprising all nodes between its endpoints in the total order on nodes ⪯. This measure corresponds to the tree-based gap degree measure in (Kuhlmann and Nivre, 2006), which was first introduced in (Holan et al., 1998)— there it is defined as the maximum over gap degrees of all subtrees of a dependency tree (the gap degree of a subtree is the number of contiguous intervals in the gap of the subtree). The interval degree of an edge is bounded from above by the gap degree of the subtree rooted in its parent node. Definition 7 For any edge i ↔j in a dependency tree T we define its component degree as follows cdegi↔j = number of components in Gapi↔j . By a component we mean a connected component in the relation ↔, in other words a weak component in the relation →(we consider relations induced on the set Gapi↔j by relations on T). This measure was introduced by Nivre (2006); Kuhlmann and Nivre (2006) call it edge degree. Again, they define it as the maximum over all edges. Each component of a gap can be represented by a single node, its root in the dependency relation induced on the nodes of the gap (i.e. 
a node of the component closest to the root of the whole tree). Note that a component need not constitute a full subtree positive type type 0 negative type Figure 2: Sample configurations with non-projective edges of different level types of the dependency tree (there may be nodes in the subtree of the component root that lie outside the span of the particular non-projective edge). Definition 8 The level type (or just type) of a nonprojective edge i ↔j in a dependency tree T is defined as follows Typei↔j = levelChildi↔j −minn∈Gapi↔j leveln . The level type of an edge is the relative distance in levels of its child node and a node in its gap closest to the root; there may be more than one node witnessing an edge’s type. For sample configurations see Figure 2. Properties of level types are presented in Havelka (2005; 2007b).6 We propose a new measure combining level types and component degrees. (We do not use interval degrees, i.e. the partitioning of gaps into intervals, because we cannot specify a unique representative of an interval with respect to the tree structure.) Definition 9 The level signature (or just signature) of an edge i ↔j in a dependency tree T is a mapping Signaturei↔j : P(V) →ZN0 defined as follows Signaturei↔j = {levelChildi↔j −levelr | r is component root in Gapi↔j} . (The right-hand side is considered as a multiset, i.e. elements may repeat.) We call the elements of a signature component levels. The signature of an edge is a multiset consisting of the relative distances in levels of all component roots in its gap from its child node. Further, we disregard any possible orderings on signatures and concentrate only on the relative distances in levels. We present signatures as non6For example, presence of non-projective edges of nonnegative level type in equivalent to non-projectivity of a dependency tree; moreover, all such edges can be found in linear time. 611 decreasing sequences and write them in angle brackets ⟨⟩, component levels separated by commas (by doing so, we avoid combinatorial explosion). Notice that level signatures subsume level types: the level type of a non-projective edge is the component level of any of possibly several component roots closest to the root of the whole tree. In other words, the level type of an edge is equal to the largest component level occurring in its level signature. Level signatures share interesting formal properties with level types of non-projective edges. The following result is a direct extension of the results presented in Havelka (2005; 2007b). Theorem 10 Let i ↔j be a non-projective edge in a dependency tree T. For any component c in Gapi↔j represented by root rc with component level lc ≤0 (< 0) there is a non-projective edge v →rc in T with Typev↔rc ≥0 (> 0) such that either i ∈Gapv↔rc, or j ∈Gapv↔rc. PROOF. From the assumptions lc ≤0 and rc ∈ Gapi↔j the parent v of node rc lies outside the span of the edge i ↔j, hence v /∈Gapi↔j. Thus either i ∈(v,rc), or j ∈(v,rc). Since levelv ≥ levelParenti↔j, we have that Parenti↔j /∈Subtreev, and so either i ∈Gapv↔rc, or j ∈Gapv↔rc. Finally from lc = levelChildi↔j −levelrc ≤0 (< 0) we get levelrc − levelChildi↔j ≥0 (> 0), hence Typev↔rc ≥0 (> 0). This result links level signatures to wellnestedness: it tells us that whenever an edge’s signature contains a nonpositive component level, the whole dependency tree is ill-nested (because then there are two edges satisfying Definition 5). 
All discussed edge measures take integer values: interval and component degrees take only nonnegative values, level types and level signatures take integer values (in all cases, their absolute values are bounded by the size of the whole dependency tree). Both interval and component degrees are defined also for projective edges (for which they take value 0), level type is undefined for projective edges, however the level signature of projective edges is defined—it is the empty multiset/sequence. 5 Data and experimental setup We evaluate all constraints and measures described in the previous section on 12 languages, whose treebanks were made available in the CoNLL-X shared Figure 3: Sample non-projective tree considered planar in empirical evaluation task on dependency parsing (Buchholz and Marsi, 2006). In alphabetical order they are: Arabic, Bulgarian, Czech, Danish, Dutch, German, Japanese, Portuguese, Slovene, Spanish, Swedish, and Turkish (Hajiˇc et al., 2004; Simov et al., 2005; B¨ohmov´a et al., 2003; Kromann, 2003; van der Beek et al., 2002; Brants et al., 2002; Kawata and Bartels, 2000; Afonso et al., 2002; Dˇzeroski et al., 2006; Civit Torruella and Mart´ı Anton´ın, 2002; Nilsson et al., 2005; Oflazer et al., 2003).7 We do not include Chinese, which is also available in this data format, because all trees in this data set are projective. We take the data “as is”, although we are aware that structures occurring in different languages depend on the annotations and/or conversions used (some languages were not originally annotated with dependency syntax, but only converted to a unified dependency format from other representations). The CoNLL data format is a simple tabular format for capturing dependency analyses of natural language sentences. For each sentence, it uses a technical root node to which dependency analyses of parts of the sentence (possibly several) are attached. Equivalently, the representation of a sentence can be viewed as a forest consisting of dependency trees. By conjoining partial dependency analyses under one technical root node, we let all their edges interact. Since the technical root comes before the sentence itself, no new non-projective edges are introduced. However, edges from technical roots may introduce non-planarity. Therefore, in our empirical evaluation we disregard all such edges when counting trees conforming to the planarity constraint; we also exclude them from the total numbers of edges. Figure 3 exemplifies how this may affect counts of non-planar trees;8 cf. also the remark after Definition 4. Counts of well-nested trees are not affected. 7All data sets are the train parts of the CoNLL-X shared task. 8The sample tree is non-planar according to Definition 4, however we do not consider it as such, because all pairs of “crossing edges” involve an edge from the technical root (edges from the technical root are depicted as dotted lines). 612 6 Empirical results Our complete results for global constraints on dependency trees are given in Table 1. They confirm the findings of Kuhlmann and Nivre (2006): planarity seems to be almost as restrictive as projectivity; well-nestedness, on the other hand, covers large proportions of trees in all languages. In contrast to global constraints, properties of individual non-projective edges allow us to pinpoint the causes of non-projectivity. Therefore they provide a means for a much more fine-grained classification of non-projective structures occurring in natural language. 
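Before turning to Table 2, the edge-based notions used below can be made concrete in a few lines building on the sketch above (our own illustration; as in the evaluation, edges from the technical root are assumed to be filtered out by the caller).

def gap(tree, parent, child):
    # Definition 3: nodes strictly between the endpoints that are not
    # subordinated to the parent of the edge.
    lo, hi = min(parent, child), max(parent, child)
    return {v for v in range(lo + 1, hi) if not tree.dominates(parent, v)}

def interval_degree(tree, parent, child):
    # Definition 6: number of maximal contiguous intervals in the gap.
    g = sorted(gap(tree, parent, child))
    return sum(1 for k, v in enumerate(g) if k == 0 or v != g[k - 1] + 1)

def component_roots(tree, parent, child):
    # Roots of the weak components of the gap: gap nodes whose own
    # parent lies outside the gap.
    g = gap(tree, parent, child)
    return [v for v in g if tree.heads[v] not in g]

def component_degree(tree, parent, child):
    # Definition 7.
    return len(component_roots(tree, parent, child))

def level_type(tree, parent, child):
    # Definition 8; None for projective edges, where the type is undefined.
    g = gap(tree, parent, child)
    if not g:
        return None
    return tree.level(child) - min(tree.level(v) for v in g)

def level_signature(tree, parent, child):
    # Definition 9, reported as a non-decreasing sequence of component levels;
    # for a non-projective edge its largest element equals the level type.
    return sorted(tree.level(child) - tree.level(r)
                  for r in component_roots(tree, parent, child))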
Table 2 presents highlights of our analysis of edge measures. Both interval and component degrees take generally low values. On the other hand, Holan et al. (1998; 2000) show that at least for Czech neither of these two measures can in principle be bounded. Taking levels of nodes into account seems to bring both better accuracy and expressivity. Since level signatures subsume level types as their last components, we only provide counts of edges of positive, nonpositive, and negative level types. For lack of space, we do not present full distributions of level types nor of level signatures. Positive level types give an even better fit with real linguistic data than the global constraint of wellnestedness (an ill-nested tree need not contain a nonprojective edge of nonpositive level type; cf. Theorem 10). For example, in German less than one tenth of ill-nested trees contain an edge of nonpositive level type. Minimum negative level types for Czech, Slovene, Swedish, and Turkish are respectively −1, −5, −2, and −4. Level signatures combine level types and component degrees, and so give an even more detailed picture of the gaps of non-projective edges. In some languages the actually occurring signatures are quite limited, in others there is a large variation. Because we consider it linguistically relevant, we also count how many non-projective edges contain in their gaps a component rooted in an ancestor of the edge (an ancestor of an edge is any node on the path from the root of the whole tree to the parent node of the edge). The proportions of such nonprojective edges vary widely among languages and for some this property seems highly important. Empirical evidence shows that edge measures of non-projectivity taking into account levels of nodes fit very well with linguistic data. This supports our theoretical results and confirms that properties of non-projective edges provide a more accurate as well as expressive means for describing nonprojective structures in natural language than the constraints and measures considered by Kuhlmann and Nivre (2006). 7 Conclusion In this paper, we evaluate several constraints and measures on non-projective dependency structures. We pursue an edge-based approach giving prominence to properties of individual edges. At the same time, we consider levels of nodes in dependency trees. We find an edge-based approach also more appealing linguistically than traditional approaches based on properties of whole dependency trees or their subtrees. Furthermore, edge-based properties allow machine-learning techniques to model global phenomena locally, resulting in less sparse models. We propose a new edge measure of nonprojectivity, level signatures of non-projective edges. We prove that, analogously to level types, they relate to the constraint of well-nestedness. Our empirical results on twelve languages can be summarized as follows: Among the global constraints, well-nestedness fits best with linguistic data. Among edge measures, the previously unreported measures taking into account levels of nodes stand out. They provide both the best fit with linguistic data of all constraints and measures we have considered, as well as a substantially more detailed capability of describing non-projective structures. The interested reader can find a more in-depth and broader-coverage discussion of properties of dependency trees and their application to natural language syntax in (Havelka, 2007a). 
As future work, we plan to investigate more languages and carry out linguistic analyses of nonprojective structures in some of them. We will also apply our results to statistical approaches to NLP tasks, such as dependency parsing. Acknowledgement The research reported in this paper was supported by Project No. 1ET201120505 of the Ministry of Education of the Czech Republic. 613 Language Arabic Bulgarian Czech Danish Dutch German Japanese Portuguese Slovene Spanish Swedish Turkish ill-nested 1 79 6 15 416 7 3 71 14 non-planar 150 677 13783 787 4115 10865 1 1713 283 56 1076 556 non-projective 163 690 16831 811 4865 10883 902 1718 340 57 1079 580 proportion of all (%) 11.16% 5.38% 23.15% 15.63% 36.44% 27.75% 5.29% 18.94% 22.16% 1.72% 9.77% 11.6% all 1460 12823 72703 5190 13349 39216 17044 9071 1534 3306 11042 4997 Table 1: Counts of dependency trees violating global constraints of well-nestedness, planarity, and projectivity; the last line gives the total numbers of dependency trees. (An empty cell means count zero.) Language Arabic Bulgarian Czech Danish Dutch German Japanese Portuguese Slovene Spanish Swedish Turkish ideg = 1 211 724 23376 940 10209 14605 1570 2398 548 58 1829 813 ideg = 2 1 189 5 349 1198 81 272 2 1 46 27 ideg = 3 3 8 37 12 24 9 1 cdeg = 1 200 723 23190 842 10264 13107 1484 2466 531 59 1546 623 cdeg = 2 10 1 292 78 238 2206 143 151 11 204 146 cdeg = 3 1 1 66 22 47 434 26 64 2 76 55 Type > 0 211 725 23495 942 10564 15803 1667 2699 547 59 1847 833 Type ≤0 75 3 2 41 3 3 50 8 Type < 0 4 2 15 2 Signature / count ⟨1⟩/ 92 ⟨2⟩/ 674 ⟨2⟩/ 18507 ⟨2⟩/ 555 ⟨2⟩/ 8061 ⟨2⟩/ 8407 ⟨1⟩/ 466 ⟨2⟩/ 1670 ⟨2⟩/ 384 ⟨2⟩/ 46 ⟨2⟩/ 823 ⟨2⟩/ 341 ⟨2⟩/ 56 ⟨3⟩/ 32 ⟨1⟩/ 2886 ⟨1⟩/ 115 ⟨3⟩/ 1461 ⟨1⟩/ 3112 ⟨2⟩/ 209 ⟨1⟩/ 571 ⟨1⟩/ 67 ⟨3⟩/ 7 ⟨1⟩/ 530 ⟨1⟩/ 189 ⟨3⟩/ 18 ⟨1⟩/ 10 ⟨3⟩/ 1515 ⟨3⟩/ 100 ⟨1⟩/ 512 ⟨1,1⟩/ 1503 ⟨4⟩/ 186 ⟨3⟩/ 208 ⟨3⟩/ 45 ⟨4⟩/ 4 ⟨3⟩/ 114 ⟨1,1⟩/ 91 ⟨4⟩/ 10 ⟨4⟩/ 5 ⟨4⟩/ 154 ⟨1,1⟩/ 63 ⟨4⟩/ 201 ⟨3⟩/ 1397 ⟨3⟩/ 183 ⟨1,1⟩/ 113 ⟨4⟩/ 13 ⟨1⟩/ 2 ⟨1,1⟩/ 94 ⟨3⟩/ 53 ⟨1,1⟩/ 8 ⟨5⟩/ 2 ⟨1,1⟩/ 115 ⟨4⟩/ 41 ⟨1,1⟩/ 118 ⟨2,2⟩/ 476 ⟨5⟩/ 126 ⟨1,1,1⟩/ 44 ⟨5⟩/ 12 ⟨0⟩/ 31 ⟨2,2⟩/ 31 ⟨5⟩/ 7 ⟨1,1,1⟩/ 1 ⟨0⟩/ 70 ⟨5⟩/ 16 ⟨2,2⟩/ 52 ⟨1,1,1⟩/ 312 ⟨6⟩/ 113 ⟨2,2⟩/ 29 ⟨1,1⟩/ 6 ⟨1,3⟩/ 27 ⟨1,1,1⟩/ 29 ⟨6⟩/ 6 ⟨1,1⟩/ 1 ⟨2,2⟩/ 58 ⟨1,1,1⟩/ 16 ⟨1,1,1⟩/ 25 ⟨4⟩/ 136 ⟨7⟩/ 78 ⟨2,2,2⟩/ 13 ⟨6⟩/ 4 ⟨1,1,1⟩/ 25 ⟨4⟩/ 19 ⟨7⟩/ 4 ⟨1,1,1⟩/ 48 ⟨2,2⟩/ 7 ⟨5⟩/ 23 ⟨3,3⟩/ 98 ⟨1,1⟩/ 63 ⟨4⟩/ 12 ⟨1,1,1,1⟩/ 4 ⟨4⟩/ 21 ⟨2,2,2⟩/ 10 ⟨2,2⟩/ 2 ⟨2,4⟩/ 44 ⟨6⟩/ 6 ⟨1,3⟩/ 16 ⟨2,2,2⟩/ 69 ⟨8⟩/ 49 ⟨1,1,1,1⟩/ 7 ⟨7⟩/ 2 ⟨1,2⟩/ 19 ⟨3,3⟩/ 6 ⟨9⟩/ 1 ⟨1,3⟩/ 32 ⟨2,2,2⟩/ 6 ⟨3,3⟩/ 15 ⟨1,1,1,1⟩/ 59 ⟨9⟩/ 35 ⟨1,1,1,1,1⟩/ 6 ⟨1,1,3⟩/ 2 ⟨2,2⟩/ 16 ⟨2,2,2,2⟩/ 6 ... ... ... ... ... ... ... ... ... ... ancestor comp. root 39 711 20035 703 9781 10128 0 1832 392 57 950 345 only ancestor comp. r. 39 711 19913 685 9697 9526 0 1820 386 57 857 340 non-projective 211 725 23570 945 10566 15844 1667 2702 550 59 1897 841 proportion of all (%) 0.42% 0.41% 2.13% 1.06% 5.9% 2.4% 1.32% 1.37% 2.13% 0.07% 1.05% 1.61% all 50097 177394 1105437 89171 179063 660394 126511 197607 25777 86028 180425 52273 Table 2: Counts for edge measures interval degree, component degree (for values from 1 to 3; larger values are not included), level type (for positive, nonpositive, and negative values), level signature (up to 10 most frequent values), and numbers of edges with ancestor component roots in their gaps and solely with ancestor component roots in their gaps; the second to last line gives the total numbers of non-projective edges, the last line gives the total numbers of all edges—we exclude edges from technical roots. 
(The listings need not be exhaustive; an empty cell means count zero.) 614 References A. Abeill´e, editor. 2003. Treebanks: Building and Using Parsed Corpora, volume 20 of Text, Speech and Language Technology. Kluwer Academic Publishers, Dordrecht. S. Afonso, E. Bick, R. Haber, and D. Santos. 2002. “Floresta sint´a(c)tica”: a treebank for Portuguese. In Proceedings of the 3rd Intern. Conf. on Language Resources and Evaluation (LREC), pages 1698–1703. Manuel Bodirsky, Marco Kuhlmann, and Matthias M¨ohl. 2005. Well-nested drawings as models of syntactic structure. In Proceedings of Tenth Conference on Formal Grammar and Ninth Meering on Mathematics of Language. A. B¨ohmov´a, J. Hajiˇc, E. Hajiˇcov´a, and B. Hladk´a. 2003. The PDT: a 3-level annotation scenario. In Abeill´e (2003), chapter 7. S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER treebank. In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories (TLT). S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLLX. SIGNLL. M. Civit Torruella and Ma A. Mart´ı Anton´ın. 2002. Design principles for a Spanish treebank. In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories (TLT). Alexander Dikovsky and Larissa Modina. 2000. Dependencies on the other side of the Curtain. Traitement Automatique des Langues (TAL), 41(1):67–96. S. Dˇzeroski, T. Erjavec, N. Ledinek, P. Pajas, Z. ˇZabokrtsky, and A. ˇZele. 2006. Towards a Slovene dependency treebank. In Proceedings of the 5th Intern. Conf. on Language Resources and Evaluation (LREC). J. Hajiˇc, O. Smrˇz, P. Zem´anek, J. ˇSnaidauf, and E. Beˇska. 2004. Prague Arabic dependency treebank: Development in data and tools. In Proceedings of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools, pages 110–117. Eva Hajiˇcov´a, Jiˇr´ı Havelka, Petr Sgall, Kateˇrina Vesel´a, and Daniel Zeman. 2004. Issues of Projectivity in the Prague Dependency Treebank. Prague Bulletin of Mathematical Linguistics, 81:5–22. Jiˇr´ı Havelka. 2005. Projectivity in Totally Ordered Rooted Trees: An Alternative Definition of Projectivity and Optimal Algorithms for Detecting Non-Projective Edges and Projectivizing Totally Ordered Rooted Trees. Prague Bulletin of Mathematical Linguistics, 84:13–30. Jiˇr´ı Havelka. 2007a. Mathematical Properties of Dependency Trees and their Application to Natural Language Syntax. Ph.D. thesis, Institute of Formal and Applied Linguistics, Charles University in Prague, Czech Republic. Jiˇr´ı Havelka. 2007b. Relationship between Non-Projective Edges, Their Level Types, and Well-Nestedness. In Proceedings of HLT/NAACL; Companion Volume, Short Papers, pages 61–64. Tom´aˇs Holan, Vladislav Kuboˇn, Karel Oliva, and Martin Pl´atek. 1998. Two Useful Measures of Word Order Complexity. In Alain Polgu`ere and Sylvain Kahane, editors, Proceedings of Dependency-Based Grammars Workshop, COLING/ACL, pages 21–28. Tom´aˇs Holan, Vladislav Kuboˇn, Karel Oliva, and Martin Pl´atek. 2000. On Complexity of Word Order. Traitement Automatique des Langues (TAL), 41(1):273–300. Y. Kawata and J. Bartels. 2000. Stylebook for the Japanese treebank in VERBMOBIL. Verbmobil-Report 240, Seminar f¨ur Sprachwissenschaft, Universit¨at T¨ubingen. M. T. Kromann. 2003. The Danish dependency treebank and the underlying linguistic theory. In Proceedings of the 2nd Workshop on Treebanks and Linguistic Theories (TLT). Marco Kuhlmann and Joakim Nivre. 2006. Mildly NonProjective Dependency Structures. 
In Proceedings of COLING/ACL, pages 507–514. Solomon Marcus. 1965. Sur la notion de projectivit´e [On the notion of projectivity]. Zeitschrift f¨ur Mathematische Logik und Grundlagen der Mathematik, 11:181–192. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005. Non-Projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of HLT/EMNLP, pages 523–530. Ladislav Nebesk´y. 1979. Graph theory and linguistics (chapter 12). In R. J. Wilson and L. W. Beineke, editors, Applications of Graph Theory, pages 357–380. Academic Press. J. Nilsson, J. Hall, and J. Nivre. 2005. MAMBA meets TIGER: Reconstructing a Swedish treebank from antiquity. In Proceedings of the NODALIDA Special Session on Treebanks. Joakim Nivre. 2006. Constraints on Non-Projective Dependency Parsing. In Proceedings of EACL, pages 73–80. K. Oflazer, B. Say, D. Zeynep Hakkani-T¨ur, and G. T¨ur. 2003. Building a Turkish treebank. In Abeill´e (2003), chapter 15. K. Simov, P. Osenova, A. Simov, and M. Kouylekov. 2005. Design and implementation of the Bulgarian HPSG-based treebank. In Journal of Research on Language and Computation – Special Issue, pages 495–522. Kluwer Academic Publishers. Neil J. A. Sloane. 2007. On-Line Encyclopedia of Integer Sequences. Published electronically at www.research.att.com/˜njas/sequences/. L. van der Beek, G. Bouma, R. Malouf, and G. van Noord. 2002. The Alpino dependency treebank. In Computational Linguistics in the Netherlands (CLIN). Kateˇrina Vesel´a, Jiˇr´ı Havelka, and Eva Hajiˇcov´a. 2004. Condition of Projectivity in the Underlying Dependency Structures. In Proceedings of COLING, pages 289–295. 615
2007
77
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 616–623, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Self-Training for Enhancement and Domain Adaptation of Statistical Parsers Trained on Small Datasets Roi Reichart ICNC Hebrew University of Jerusalem [email protected] Ari Rappoport Institute of Computer Science Hebrew University of Jerusalem [email protected] Abstract Creating large amounts of annotated data to train statistical PCFG parsers is expensive, and the performance of such parsers declines when training and test data are taken from different domains. In this paper we use selftraining in order to improve the quality of a parser and to adapt it to a different domain, using only small amounts of manually annotated seed data. We report significant improvement both when the seed and test data are in the same domain and in the outof-domain adaptation scenario. In particular, we achieve 50% reduction in annotation cost for the in-domain case, yielding an improvement of 66% over previous work, and a 20-33% reduction for the domain adaptation case. This is the first time that self-training with small labeled datasets is applied successfully to these tasks. We were also able to formulate a characterization of when selftraining is valuable. 1 Introduction State of the art statistical parsers (Collins, 1999; Charniak, 2000; Koo and Collins, 2005; Charniak and Johnson, 2005) are trained on manually annotated treebanks that are highly expensive to create. Furthermore, the performance of these parsers decreases as the distance between the genres of their training and test data increases. Therefore, enhancing the performance of parsers when trained on small manually annotated datasets is of great importance, both when the seed and test data are taken from the same domain (the in-domain scenario) and when they are taken from different domains (the outof-domain or parser adaptation scenario). Since the problem is the expense in manual annotation, we define ‘small’ to be 100-2,000 sentences, which are the sizes of sentence sets that can be manually annotated by constituent structure in a few hours1. Self-training is a method for using unannotated data when training supervised models. The model is first trained using manually annotated (‘seed’) data, then the model is used to automatically annotate a pool of unannotated (‘self-training’) data, and then the manually and automatically annotated datasets are combined to create the training data for the final model. Self-training of parsers trained on small datasets is of enormous potential practical importance, due to the huge amounts of unannotated data that are becoming available today and to the high cost of manual annotation. In this paper we use self-training to enhance the performance of a generative statistical PCFG parser (Collins, 1999) for both the in-domain and the parser adaptation scenarios, using only small amounts of manually annotated data. We perform four experiments, examining all combinations of in-domain and out-of-domain seed and self-training data. Our results show that self-training is of substantial benefit for the problem. In particular, we present: • 50% reduction in annotation cost when the seed and test data are taken from the same domain, which is 66% higher than any previous result with small manually annotated datasets. 
1We note in passing that quantitative research on the cost of annotation using various annotation schemes is clearly lacking. 616 • The first time that self-training improves a generative parser when the seed and test data are from the same domain. • 20-33% reduction in annotation cost when the seed and test data are from different domains. • The first time that self-training succeeds in adapting a generative parser between domains using a small manually annotated dataset. • The first formulation (related to the number of unknown words in a sentence) of when selftraining is valuable. Section 2 discusses previous work, and Section 3 compares in-depth our protocol to a previous one. Sections 4 and 5 present the experimental setup and our results, and Section 6 analyzes the results in an attempt to shed light on the phenomenon of selftraining. 2 Related Work Self-training might seem a strange idea: why should a parser trained on its own output learn anything new? Indeed, (Clark et al., 2003) applied selftraining to POS-tagging with poor results, and (Charniak, 1997) applied it to a generative statistical PCFG parser trained on a large seed set (40K sentences), without any gain in performance. Recently, (McClosky et al., 2006a; McClosky et al., 2006b) have successfully applied self-training to various parser adaptation scenarios using the reranking parser of (Charniak and Johnson, 2005). A reranking parser (see also (Koo and Collins, 2005)) is a layered model: the base layer is a generative statistical PCFG parser that creates a ranked list of k parses (say, 50), and the second layer is a reranker that reorders these parses using more detailed features. McClosky et al (2006a) use sections 2-21 of the WSJ PennTreebank as seed data and between 50K to 2,500K unlabeled NANC corpus sentences as self-training data. They train the PCFG parser and the reranker with the manually annotated WSJ data, and parse the NANC data with the 50-best PCFG parser. Then they proceed in two directions. In the first, they reorder the 50-best parse list with the reranker to create a new 1-best list. In the second, they leave the 1-best list produced by the generative PCFG parser untouched. Then they combine the 1-best list (each direction has its own list) with the WSJ training set, to retrain the PCFG parser. The final PCFG model and the reranker (trained only on annotated WSJ material) are then used to parse the test section (23) of WSJ. There are two major differences between these papers and the current one, stemming from their usage of a reranker and of large seed data. First, when their 1-best list of the base PCFG parser was used as self training data for the PCFG parser (the second direction), the performance of the base parser did not improve. It had improved only when the 1best list of the reranker was used. In this paper we show how the 1-best list of a base (generative) PCFG parser can be used as a self-training material for the base parser itself and enhance its performance, without using any reranker. This reveals a noteworthy characteristic of generative PCFG models and offers a potential direction for parser improvement, since the quality of a parser-reranker combination critically depends on that of the base parser. Second, these papers did not explore self-training when the seed is small, a scenario whose importance has been discussed above. In general, PCFG models trained on small datasets are less likely to parse the self-training data correctly. 
For example, the fscore of WSJ data parsed by the base PCFG parser of (Charniak and Johnson, 2005) when trained on the training sections of WSJ is between 89% to 90%, while the f-score of WSJ data parsed with the Collins’ model that we use, and a small seed, is between 40% and 80%. As a result, the good results of (McClosky et al, 2006a; 2006b) with large seed sets do not immediately imply success with small seed sets. Demonstration of such success is a contribution of the present paper. Bacchiani et al (2006) explored the scenario of out-of-domain seed data (the Brown training set containing about 20K sentences) and in-domain self-training data (between 4K to 200K sentences from the WSJ) and showed an improvement over the baseline of training the parser with the seed data only. However, they did not explore the case of small seed datasets (the effort in manually annotating 20K is substantial) and their work addresses only one of our scenarios (OI, see below). 617 A work closely related to ours is (Steedman et al., 2003a), which applied co-training (Blum and Mitchell, 1998) and self-training to Collins’ parsing model using a small seed dataset (500 sentences for both methods and 1,000 sentences for co-training only). The seed, self-training and test datasets they used are similar to those we use in our II experiment (see below), but the self-training protocols are different. They first train the parser with the seed sentences sampled from WSJ sections 2-21. Then, iteratively, 30 sentences are sampled from these sections, parsed by the parser, and the 20 best sentences (in terms of parser confidence defined as probability of top parse) are selected and combined with the previously annotated data to retrain the parser. The cotraining protocol is similar except that each parser is trained with the 20 best sentences of the other parser. Self-training did not improve parser performance on the WSJ test section (23). Steedman et al (2003b) followed a similar co-training protocol except that the selection function (three functions were explored) considered the differences between the confidence scores of the two parsers. In this paper we show a self-training protocol that achieves better results than all of these methods (Table 2). The next section discusses possible explanations for the difference in results. Steedman et al (2003b) and Hwa et al, (2003) also used several versions of corrected co-training which are not comparable to ours and other suggested methods because their evaluation requires different measures (e.g. reviewed and corrected constituents are separately counted). As far as we know, (Becker and Osborne, 2005) is the only additional work that tries to improve a generative PCFG parsers using small seed data. The techniques used are based on active learning (Cohn et al., 1994). The authors test two novel methods, along with the tree entropy (TE) method of (Hwa, 2004). The seed, the unannotated and the test sets, as well as the parser used in that work, are similar to those we use in our II experiment. Our results are superior, as shown in Table 3. 3 Self-Training Protocols There are many possible ways to do self-training. A main goal of this paper is to identify a selftraining protocol most suitable for enhancement and domain adaptation of statistical parsers trained on small datasets. No previous work has succeeded in identifying such a protocol for this task. In this section we try to understand why. 
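The iterative protocol just described can be rendered schematically as follows; the parser interface, the confidence attribute and the data objects are placeholders, not the actual implementation of Steedman et al. (2003a).

def iterative_self_train(train_parser, seed, pool, rounds, sample_size=30, keep=20):
    # Each round: parse a small sample and keep only the most confidently
    # parsed sentences (confidence = probability of the top parse).
    labeled = list(seed)
    model = train_parser(labeled)
    for _ in range(rounds):
        sample = [pool.pop() for _ in range(min(sample_size, len(pool)))]
        parses = [model.parse(s) for s in sample]
        parses.sort(key=lambda p: p.confidence, reverse=True)
        labeled.extend(parses[:keep])
        model = train_parser(labeled)
    return model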
In the protocol we apply, the self-training set contains several thousand sentences A parser trained with a small seed set parses the self-training set, and then the whole automatically annotated self-training set is combined with the manually annotated seed set to retrain the parser. This protocol and that of Steedman et al (2003a) were applied to the problem, with the same seed, self-training and test sets. As we show below (see Section 4 and Section 5), while Steedman’s protocol does not improve over the baseline of using only the seed data, our protocol does. There are four differences between the protocols. First, Steedman et al’s seed set consists of consecutive WSJ sentences, while we select them randomly. In the next section we show that this difference is immaterial. Second, Steedman et al’s protocol looks for sentences of high quality parse, while our protocol prefers to use many sentences without checking their parse quality. Third, their protocol is iterative while ours uses a single step. Fourth, our selftraining set is orders of magnitude larger than theirs. To examine the parse quality issue, we performed their experiment using their setting but selecting the high quality parse sentences using their f-score relative to the gold standard annotation from secs 221 rather than a quality estimate. No improvement over the baseline was achieved even with this oracle. Thus the problem with their protocol does not lie with the parse quality assessment function; no other function would produce results better than the oracle. To examine the iteration issue, we performed their experiment in a single step, selecting at once the oracle-best 2,000 among 3,000 sentences2, which produced only a mediocre improvement. We thus conclude that the size of the self-training set is a major factor responsible for the difference between the protocols. 4 Experimental Setup We used a reimplementation of Collins’ parsing model 2 (Bikel, 2004). We performed four experiments, II, IO, OI, and OO, two with in-domain seed 2Corresponding to a 100 iterations of 30 sentences each. 618 (II, IO) and two with out-of-domain seed (OI, OO), examining in-domain self-training (II, OI) and outof-domain self-training (IO, OO). Note that being ‘in’ or ‘out’ of domain is determined by the test data. Each experiment contained 19 runs. In each run a different seed size was used, from 100 sentences onwards, in steps of 100. For statistical significance, we repeated each experiment five times, in each repetition randomly sampling different manually annotated sentences to form the seed dataset3. The seed data were taken from WSJ sections 221. For II and IO, the test data is WSJ section 23 (2416 sentences) and the self-training data are either WSJ sections 2-21 (in II, excluding the seed sentences) or the Brown training section (in IO). For OI and OO, the test data is the Brown test section (2424 sentences), and the self-training data is either the Brown training section (in OI) or WSJ sections 2-21 (in OO). We removed the manual annotations from the self-training sections before using them. For the Brown corpus, we based our division on (Bacchiani et al., 2006; McClosky et al., 2006b). The test and training sections consist of sentences from all of the genres that form the corpus. The training division consists of 90% (9 of each 10 consecutive sentences) of the data, and the test section are the remaining 10% (We did not use any held out data). 
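By contrast, the single-step protocol of Section 3 reduces to the following sketch (again with a placeholder parser interface and data representations).

def single_step_self_train(train_parser, seed, self_training_pool):
    # Train on the seed, parse the whole (large) unannotated set once,
    # and retrain on the union of manual and automatic annotations.
    base = train_parser(seed)
    auto = [base.parse(sentence) for sentence in self_training_pool]
    return train_parser(list(seed) + auto)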
Parsing performance is measured by f-score, f = 2×P×R P+R , where P, R are labeled precision and recall. To further demonstrate our results for parser adaptation, we also performed the OI experiment where seed data is taken from WSJ sections 2-21 and both self-training and test data are taken from the Switchboard corpus. The distance between the domains of these corpora is much greater than the distance between the domains of WSJ and Brown. The Brown and Switchboard corpora were divided to sections in the same way. We have also performed all four experiments with the seed data taken from the Brown training section. 3 (Steedman et al., 2003a) used the first 500 sentences of WSJ training section as seed data. For direct comparison, we performed our protocol in the II scenario using the first 500 or 1000 sentences of WSJ training section as seed data and got similar results to those reported below for our protocol with random selection. We also applied the protocol of Steedman et al to scenario II with 500 randomly selected sentences, getting no improvement over the random baseline. The results were very similar and will not be detailed here due to space constraints. 5 Results 5.1 In-domain seed data In these two experiments we show that when the seed and test data are taken from the same domain, a very significant enhancement of parser performance can be achieved, whether the self-training material is in-domain (II) or out-of-domain (IO). Figure 1 shows the improvement in parser f-score when selftraining data is used, compared to when it is not used. Table 1 shows the reduction in manually annotated seed data needed to achieve certain f-score levels. The enhancement in performance is very impressive in the in-domain self-training data scenario – a reduction of 50% in the number of manually annotated sentences needed for achieving 75 and 80 fscore values. A significant improvement is achieved in the out-of-domain self-training scenario as well. Table 2 compares our results with self-training and co-training results reported by (Steedman et al, 20003a; 2003b). As stated earlier, the experimental setup of these works is similar to ours, but the selftraining protocols are different. For self-training, our II improves an absolute 3.74% over their 74.3% result, which constitutes a 14.5% reduction in error (from 25.7%). The table shows that for both seed sizes our self training protocol outperforms both the selftraining and co-training protocols of (Steedman et al, 20003a; 2003b). Results are not included in the table only if they are not reported in the relevant paper. The self-training protocol of (Steedman et al., 2003a) does not actually improve over the baseline of using only the seed data. Section 3 discussed a possible explanation to the difference in results. In Table 3 we compare our results to the results of the methods tested in (Becker and Osborne, 2005) (including TE)4. To do that, we compare the reduction in manually annotated data needed to achieve an f-score value of 80 on WSJ section 23 achieved by each method. We chose this measure since it is 4The measure is constituents and not sentences because this is how results are reported in (Becker and Osborne, 2005). However, the same reduction is obtained when sentences are counted, because the number of constituents is averaged when taking many sentences. 619 f-score 75 80 Seed data only 600(0%) 1400(0%) II 300(50%) 700(50%) IO 500(17%) 1200(14.5%) Table 1: Number of in-domain seed sentences needed for achieving certain f-scores. 
Reductions compared to no self-training (line 1) are given in parentheses. Seed size our II our IO Steedman ST Steedman CT Steedman CT 2003a 2003b 500 sent. 78.04 75.81 74.3 76.9 —1,000 sent. 81.43 79.49 —79 81.2 Table 2: F-scores of our in-domain-seed selftraining vs. self-training (ST) and co-training (CT) of (Steedman et al, 20003a; 2003b). the only explicitly reported number in that work. As the table shows, our method is superior: our reduction of 50% constitutes an improvement of 66% over their best reduction of 30.6%. When applying self-training to a parser trained with a small dataset we expect the coverage of the parser to increase, since the combined training set should contain items that the seed dataset does not. On the other hand, since the accuracy of annotation of such a parser is poor (see the no self-training curve in Figure 1) the combined training set surely includes inaccurate labels that might harm parser performance. Figure 2 (left) shows the increase in coverage achieved for in-domain and out-of-domain self-training data. The improvements induced by both methods are similar. This is quite surprising given that the Brown sections we used as selftraining data contain science, fiction, humor, romance, mystery and adventure texts while the test section in these experiments, WSJ section 23, contains only news articles. Figure 2 also compares recall (middle) and precision (right) for the different methods. For II there is a significant improvement in both precision and recall even though many more sentences are parsed. For IO, there is a large gain in recall and a much smaller loss in precision, yielding a substantial improvement in f-score (Figure 1). F score This work - II Becker unparsed Becker entropy/unparsed Hwa TE 80 50% 29.4% 30.6% -5.7% Table 3: Reduction of the number of manually annotated constituents needed for achieving f score value of 80 on section 23 of the WSJ. In all cases the seed and additional sentences selected to train the parser are taken from sections 02-21 of WSJ. 5.2 Out-of-domain seed data In these two experiments we show that self-training is valuable for adapting parsers from one domain to another. Figure 3 compares out-of-domain seed data used with in-domain (OI) or out-of-domain (OO) self-training data against the baseline of training only with the out-of-domain seed data. The left graph shows a significant improvement in f-score. In the middle and right graphs we examine the quality of the parses produced by the model by plotting recall and precision vs. seed size. Regarding precision, the difference between the three conditions is small relative to the f-score difference shown in the left graph. The improvement in the recall measure is much greater than the precision differences, and this is reflected in the f-score result. The gain in coverage achieved by both methods, which is not shown in the figure, is similar to that reported for the in-domain seed experiments. The left graph along with the increase in coverage show the power of self-training in parser adaptation when small seed datasets are used: not only do OO and OI parse many more sentences than the baseline, but their f-score values are consistently better. To see how much manually annotated data can be saved by using out-of-domain seed, we train the parsing model with manually annotated data from the Brown training section, as described in Section 4. 
We assume that given a fixed number of training sentences the best performance of the parser without self-training will occur when these sentences are selected from the domain of the test section, the Brown corpus. We compare the amounts of manually annotated data needed to achieve certain f-score levels in this condition with the corresponding amounts of data needed by OI and OO. The results are summarized in Table 4. We compare to two baselines using in- and out-of-domain seed data without any self-training. The second line (ID) serves as a reference to compute how much manual annotation of the test domain was saved, and the first line (OD) serves as a reference to show by how much self-training improves the out-of-domain baseline. The table stops at an f-score of 74 because that is the best that the baselines can do. A significant reduction in annotation cost over the ID baseline is achieved where the seed size is between 100 and 1200. Improvement over the OD baseline is achieved for the whole range of seed sizes. Both OO and OI achieve 20-33% reduction in manual annotation compared to the ID baseline and enhance the performance of the parser by as much as 42.9%.

Figure 1: Number of seed sentences vs. f-score, for the two in-domain seed experiments: II (triangles) and IO (squares), and for the no self-training baseline. Self-training provides a substantial improvement.

Figure 2: Number of seed sentences vs. coverage (left), recall (middle) and precision (right) for the two in-domain seed experiments: II (triangles) and IO (squares), and for the no self-training baseline.

The only previous work that adapts a parser trained on a small dataset between domains is that of (Steedman et al., 2003a), which used co-training (no self-training results were reported there or elsewhere). In order to compare with that work, we performed OI with seed taken from the Brown corpus and self-training and test taken from WSJ, which is the setup they use, obtaining a similar improvement to that reported there. However, co-training is a more complex method that requires an additional parser (LTAG in their case).

To further substantiate our results for the parser adaptation scenario, we used an additional corpus, Switchboard. Figure 4 shows the results of an OI experiment with WSJ seed and Switchboard self-training and test data. Although the domains of these two corpora are very different (more so than WSJ and Brown), self-training provides a substantial improvement. We have also performed all four experiments with Brown and WSJ trading places. The results obtained were very similar to those reported here, and will not be detailed due to lack of space.

Figure 3: Number of seed sentences vs. f-score (left), recall (middle) and precision (right), for the two out-of-domain seed data experiments: OO (triangles) and OI (squares), and for the no self-training baseline.

f-score   66             68                70             72               74
OD        600            800               1,000          1,400            –
ID        600            700               800            1,000            1,200
OO        400 (33, 33)   500 (28.6, 37.5)  600 (33, 40)   800 (20, 42.9)   1,100 (8, –)
OI        400 (33, 33)   500 (28.6, 37.5)  600 (33, 40)   800 (20, 42.9)   1,300 (−8, –)

Table 4: Number of manually annotated seed sentences needed for achieving certain f-score values. The first two lines show the out-of-domain and in-domain seed baselines. The reductions compared to the baselines are given in parentheses as (ID, OD).

Figure 4: Number of seed sentences vs. f-score, for the OI experiment using WSJ seed data and Switchboard self-training and test data. In spite of the strong dissimilarity between the domains, self-training provides a substantial improvement.

6 Analysis

In this section we try to better understand the benefit in using self-training with small seed datasets. We formulate the following criterion: the number of words in a test sentence that do not appear in the seed data (‘unknown words’) is a strong indicator of whether it is worthwhile to use small seed self-training. Figure 5 shows the number of unknown words in a sentence vs. the probability that the self-training model will parse a sentence no worse (upper curve) or better (lower curve) than the baseline model. The upper curve shows that regardless of the number of unknown words in the sentence, there is more than a 50% chance that the self-training model will not harm the result. This probability decreases from almost 1 for a very small number of unknown words to about 0.55 for 50 unknown words. The lower curve shows that when the number of unknown words increases, the probability that the self-training model will do better than the baseline model increases from almost 0 (for a very small number of unknown words) to about 0.55. Hence, the number of unknown words is an indication of the potential benefit (value on the lower curve) and risk (1 minus the value on the upper curve) in using the self-training model compared to using the baseline model. Unknown words were not identified in (McClosky et al., 2006a) as a useful predictor for the benefit of self-training.

Figure 5: For sentences having the same number of unknown words, we show the probability that the self-training model parses a sentence from the set no worse (upper curve) or better (lower curve) than the baseline model.
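For readers who want to reproduce this analysis, the criterion is cheap to compute. The sketch below is our own illustration (function and variable names are invented, not code from the paper): it counts seed-unseen tokens per test sentence and then bins sentences by that count to estimate the two curves of Figure 5.

```python
from collections import defaultdict

def unknown_word_count(sentence_tokens, seed_vocabulary):
    """Number of tokens in the sentence that never occur in the seed data."""
    return sum(1 for tok in sentence_tokens if tok not in seed_vocabulary)

def risk_benefit_curves(test_sentences, seed_vocabulary, f_selftrain, f_baseline):
    """test_sentences: {sentence_id: list of tokens};
    f_selftrain / f_baseline: {sentence_id: per-sentence f-score} for the two models.
    Returns, per unknown-word count, P(ST >= baseline) and P(ST > baseline)."""
    no_worse, better, total = defaultdict(int), defaultdict(int), defaultdict(int)
    for sid, tokens in test_sentences.items():
        k = unknown_word_count(tokens, seed_vocabulary)
        total[k] += 1
        if f_selftrain[sid] >= f_baseline[sid]:
            no_worse[k] += 1
        if f_selftrain[sid] > f_baseline[sid]:
            better[k] += 1
    upper = {k: no_worse[k] / total[k] for k in total}   # upper curve of Figure 5
    lower = {k: better[k] / total[k] for k in total}     # lower curve of Figure 5
    return upper, lower
```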
Indeed, in the II scenario, (Steedman et al., 2003a; McClosky et al., 2006a; Charniak, 1997) reported no improvement of the base parser for small (500 sentences, in the first paper) and large (40K sentences, in the last two papers) seed datasets respectively. In the II, OO, and OI scenarios, (McClosky et al, 2006a; 2006b) succeeded in improving the parser performance only when a reranker was used to reorder the 50-best list of the generative parser, with a seed size of 40K sentences. Bacchiani et al (2006) improved the parser performance in the OI scenario but their seed size was large (about 20K sentences). In this paper we have shown that self-training can enhance the performance of generative parsers, without a reranker, in four in- and out-of-domain scenarios using a small seed dataset. For the II, IO and OO scenarios, we are the first to show improvement by self-training for generative parsers. We achieved a 50% (20-33%) reduction in annotation cost for the in-domain (out-of-domain) seed data scenarios. Previous work with small seed datasets considered only the II and OI scenarios. Our results for the former are better than any previous method, and our results for the latter (which are the first reported self-training results) are similar to previous results for co-training, a more complex method. We demonstrated our results using three corpora of varying degrees of domain difference. A direction for future research is combining self-training data from various domains to enhance parser adaptation. Acknowledgement. We would like to thank Dan Roth for his constructive comments on this paper. References Michiel Bacchiani, Michael Riley, Brian Roark, and Richard Sproat, 2006. MAP adaptation of stochastic grammars. Computer Speech and Language, 20(1):41–68. Markus Becker and Miles Osborne, 2005. A two-stage method for active learning of statistical grammars. IJCAI ’05. Daniel Bikel, 2004. Code developed at University of Pennsylvania. http://www.cis.upenn.edu.bikel. Avrim Blum and Tom M. Mitchell, 1998. Combining labeled and unlabeled data with co-training. COLT ’98. Eugene Charniak, 1997. Statistical parsing with a context-free grammar and word statistics. AAAI ’97. Eugene Charniak, 2000. A maximum-entropy-inspired parser. ANLP ’00. Eugene Charniak and Mark Johnson, 2005. Coarse-tofine n-best parsing and maxent discriminative reranking. ACL ’05. Stephen Clark, James Curran, and Miles Osborne, 2003. Bootstrapping pos taggers using unlabelled data. CoNLL ’03. David A. Cohn, Les Atlas, and Richard E. Ladner, 1994. Improving generalization with active learning. Machine Learning, 15(2):201–221. Michael Collins, 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania. Rebecca Hwa, Miles Osborne, Anoop Sarkar and Mark Steedman, 2003. Corrected co-training for statistical parsers. In ICML ’03, Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining. Rebecca Hwa, 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3):253–276. Terry Koo and Michael Collins, 2005. Hidden-variable models for discriminative reranking. EMNLP ’05. David McClosky, Eugene Charniak, and Mark Johnson, 2006a. Effective self-training for parsing. HLTNAACL ’06. David McClosky, Eugene Charniak, and Mark Johnson, 2006b. Reranking and self-training for parser adaptation. ACL-COLING ’06. 
Mark Steedman, Anoop Sarkar, Miles Osborne, Rebecca Hwa, Stephen Clark, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim, 2003a. Bootstrapping statistical parsers from small datasets. EACL ’03. Mark Steedman, Rebecca Hwa, Stephen Clark, Miles Osborne, Anoop Sarkar, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim, 2003b. Example selection for bootstrapping statistical parsers. NAACL ’03.
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 624–631, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics HPSG Parsing with Shallow Dependency Constraints Kenji Sagae1 and Yusuke Miyao1 and Jun’ichi Tsujii1,2,3 1Department of Computer Science University of Tokyo Hongo 7-3-1, Bunkyo-ku, Tokyo, Japan 2School of Computer Science, University of Manchester 3National Center for Text Mining {sagae,yusuke,tsujii}@is.s.u-tokyo.ac.jp Abstract We present a novel framework that combines strengths from surface syntactic parsing and deep syntactic parsing to increase deep parsing accuracy, specifically by combining dependency and HPSG parsing. We show that by using surface dependencies to constrain the application of wide-coverage HPSG rules, we can benefit from a number of parsing techniques designed for highaccuracy dependency parsing, while actually performing deep syntactic analysis. Our framework results in a 1.4% absolute improvement over a state-of-the-art approach for wide coverage HPSG parsing. 1 Introduction Several efficient, accurate and robust approaches to data-driven dependency parsing have been proposed recently (Nivre and Scholz, 2004; McDonald et al., 2005; Buchholz and Marsi, 2006) for syntactic analysis of natural language using bilexical dependency relations (Eisner, 1996). Much of the appeal of these approaches is tied to the use of a simple formalism, which allows for the use of efficient parsing algorithms, as well as straightforward ways to train discriminative models to perform disambiguation. At the same time, there is growing interest in parsing with more sophisticated lexicalized grammar formalisms, such as Lexical Functional Grammar (LFG) (Bresnan, 1982), Lexicalized Tree Adjoining Grammar (LTAG) (Schabes et al., 1988), Headdriven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994) and Combinatory Categorial Grammar (CCG) (Steedman, 2000), which represent deep syntactic structures that cannot be expressed in a shallower formalism designed to represent only aspects of surface syntax, such as the dependency formalism used in current mainstream dependency parsing. We present a novel framework that combines strengths from surface syntactic parsing and deep syntactic parsing, specifically by combining dependency and HPSG parsing. We show that, by using surface dependencies to constrain the application of wide-coverage HPSG rules, we can benefit from a number of parsing techniques designed for high-accuracy dependency parsing, while actually performing deep syntactic analysis. From the point of view of HPSG parsing, accuracy can be improved significantly through the use of highly accurate discriminative dependency models, without the difficulties involved in adapting these models to a more complex and linguistically sophisticated formalism. In addition, improvements in dependency parsing accuracy are converted directly into improvements in HPSG parsing accuracy. From the point of view of dependency parsing, the application of HPSG rules to structures generated by a surface dependency model provides a principled and linguistically motivated way to identify deep syntactic phenomena, such as long-distance dependencies, raising and control. We begin by describing our dependency and HPSG parsing approaches in section 2. In section 3, we present our framework for HPSG parsing with shallow dependency constraints, and in section 4 we 624 Figure 1: HPSG parsing evaluate this framework empirically. 
Sections 5 and 6 discuss related work and conclusions. 2 Fast dependency parsing and wide-coverage HPSG parsing 2.1 Data-driven dependency parsing Because we use dependency parsing as a step in deep parsing, it is important that we choose a parsing approach that is not only accurate, but also efficient. The deterministic shift/reduce classifier-based dependency parsing approach (Nivre and Scholz, 2004) has been shown to offer state-of-the-art accuracy (Nivre et al., 2006) with high efficiency due to a greedy search strategy. Our approach is based on Nivre and Scholz’s approach, using support vector machines for classification of shift/reduce actions. 2.2 Wide-coverage HPSG parsing HPSG (Pollard and Sag, 1994) is a syntactic theory based on lexicalized grammar formalism. In HPSG, a small number of schemas explain general construction rules, and a large number of lexical entries express word-specific syntactic/semantic constraints. Figure 1 shows an example of the process of HPSG parsing. First, lexical entries are assigned to each word in a sentence. In Figure 1, lexical entries express subcategorization frames and predicate argument structures. Parsing proceeds by applying schemas to lexical entries. In this example, the Head-Complement Schema is applied to the lexical entries of “tried” and “running”. We then obtain a phrasal structure for “tried running”. By repeatedly applying schemas to lexical/phrasal structures, Figure 2: Extracting HPSG lexical entries from the Penn Treebank we finally obtain an HPSG parse tree that covers the entire sentence. In this paper, we use an HPSG parser developed by Miyao and Tsujii (2005). This parser has a widecoverage HPSG lexicon which is extracted from the Penn Treebank. Figure 2 illustrates their method for extraction of HPSG lexical entries. First, given a parse tree from the Penn Treebank (top), HPSGstyle constraints are added and an HPSG-style parse tree is obtained (middle). Lexical entries are then extracted from the terminal nodes of the HPSG parse tree (bottom). This way, in addition to a widecoverage lexicon, we also obtain an HPSG treebank, which can be used as training data for disambiguation models. The disambiguation model of this parser is based on a maximum entropy model (Berger et al., 1996). The probability p(T|W) of an HPSG parse tree T for the sentence W = ⟨w1, . . . , wn⟩is given as: p(T|W) = p(T|L, W)p(L|W) = 1 Z exp X i λifi(T) ! Y j p(lj|W), where L = ⟨l1, . . . , ln⟩are lexical entries and 625 p(li|W) is the supertagging probability, i.e., the probability of assignining the lexical entry li to wi (Ninomiya et al., 2006). The probability p(T|L, W) is a maximum entropy model on HPSG parse trees, where Z is a normalization factor, and feature functions fi(T) represent syntactic characteristics, such as head words, lengths of phrases, and applied schemas. Given the HPSG treebank as training data, the model parameters λi are estimated so as to maximize the log-likelihood of the training data (Malouf, 2002). 3 HPSG parsing with dependency constraints While a number of fairly straightforward models can be applied successfully to dependency parsing, designing and training HPSG parsing models has been regarded as a significantly more complex task. 
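As a reference point for what follows, the disambiguation score of Section 2.2 factorizes as p(T|W) = p(T|L,W) · Π_j p(l_j|W), with p(T|L,W) = (1/Z) exp(Σ_i λ_i f_i(T)). The sketch below is only an illustration of this factorization; the function and argument names are ours and are not the parser's actual API.

```python
import math

def tree_log_score(feature_values, weights, supertag_probs):
    """feature_values: {feature_name: f_i(T)}; weights: {feature_name: lambda_i};
    supertag_probs: the probabilities p(l_j|W) of the lexical entries used in T.
    Returns an unnormalized log score; the constant log Z is omitted because it is
    shared by all candidate trees for the same sentence."""
    maxent_part = sum(weights.get(f, 0.0) * v for f, v in feature_values.items())
    supertag_part = sum(math.log(p) for p in supertag_probs)
    return maxent_part + supertag_part
```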
Although it seems intuitive that a more sophisticated linguistic formalism should be more difficult to parameterize properly, we argue that the difference in complexity between HPSG and dependency structures can be seen as incremental, and that the use of accurate and efficient techniques to determine the surface dependency structure of a sentence provides valuable information that aids HPSG disambiguation. This is largely because HPSG is based on a lexicalized grammar formalism, and as such its syntactic structures have an underlying dependency backbone. However, HPSG syntactic structures includes long-distance dependencies, and the underlying dependency structure described by and HPSG structure is a directed acyclic graph, not a dependency tree (as used by mainstream approaches to data-driven dependency parsing). This difference manifests itself in words that have multiple heads. For example, in the sentence I tried to run, the pronoun I is a dependent of tried and of run. This makes it possible to represent that I is the subject of both verbs, precisely the kind of information that cannot be represented in dependency parsing. If we ignore long-distance dependencies, however, HPSG structures can be seen as lexicalized trees that can be easily converted into dependency trees. Given that for an HPSG representation of the syntactic structure of a sentence we can determine a dependency tree by removing long-distance dependencies, we can use dependency parsing techniques (such as the deterministic dependency parsing approach mentioned in section 2.1) to determine the underlying dependency trees in HPSG structures. This is the basis for the parsing framework presented here. In this approach, deep dependency analysis is done in two stages. First, a dependency parser determines the shallow dependency tree for the input sentence. This shallow dependency tree corresponds to the underlying dependency graph of the HPSG structure for the input sentence, without dependencies that roughly correspond to deep syntax. The second step is to perform HPSG parsing, as described in section 2.2, but using the shallow dependency tree to constrain the application of HPSG rules. We now discuss these two steps in more detail. 3.1 Determining shallow dependencies in HPSG structures using dependency parsing In order to apply a data-driven dependency approach to the task of identifying the shallow dependency tree in HPSG structures, we first need a corpus of such dependency trees to serve as training data. We created a dependency training corpus based on the Penn Treebank (Marcus et al., 1993), or more specifically on the HPSG Treebank generated from the Penn Treebank (see section 2.2). For each HPSG structure in the HPSG Treebank, a dependency tree is extracted in two steps. First, the HPSG tree is converted into a CFG-style tree, simply by removing long-distance dependency links between nodes. A dependency tree is then extracted from the resulting lexicalized CFG-style tree, as is commonly done for converting constituent trees into dependency trees after the application of a headpercolation table (Collins, 1999). Once a dependency training corpus is available, it is used to train a dependency parser as described in section 2.1. This is done by training a classifier to determine parser actions based on local features that represent the current state of the parser (Nivre and Scholz, 2004; Sagae and Lavie, 2005). 
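Returning for a moment to the corpus-creation step described above, the second conversion step is standard tree-to-dependency extraction: once every node of the CFG-style tree carries a lexical head, each non-head child contributes one dependency from its own head word to its parent's head word. The sketch below is our own illustration with an invented Node structure, not the actual extraction code.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    head_word: int                       # token index of this constituent's lexical head
    head_child: Optional[int] = None     # position of the head child in `children`
    children: List["Node"] = field(default_factory=list)

def extract_dependencies(node, deps=None):
    """Collect (dependent, head) token-index pairs from a lexicalized tree."""
    if deps is None:
        deps = []
    for i, child in enumerate(node.children):
        if i != node.head_child:
            deps.append((child.head_word, node.head_word))
        extract_dependencies(child, deps)
    return deps
```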
Training data for the classifier is obtained by applying the parsing algorithm over the training sentences (for which the correct dependency structures are known) and recording the appropriate parser actions that result in the formation of the correct dependency trees, coupled with the features that represent the state of 626 the parser mentioned in section 2.1. An evaluation of the resulting dependency parser and its efficacy in aiding HPSG parsing is presented in section 4. 3.2 Parsing with dependency constraints Given a set of dependencies, the bottom-up process of HPSG parsing can be constrained so that it does not violate the given dependencies. This can be achieved by a simple extension of the parsing algorithm, as follows. During parsing, we store the lexical head of each partial parse tree. In each schema application, we can determine which child is the head; for example, the left child is the head when we apply the Head-Complement Schema. Given this information and lexical heads, the parser can identify the dependency produced by this schema application, and can therefore judge whether the schema application violates the dependency constraints. This method forces the HPSG parser to produce parse trees that strictly conform to the output of the dependency parser. However, this means that the HPSG parser outputs no successful parse results when it cannot find the parse tree that is completely consistent with the given dependencies. This situation may occur when the dependency parser produces structures that are not covered in the HPSG grammar. This is especially likely with a fully datadriven dependency parser that uses local classification, since its output may not be globally consistent grammatically. In addition, the HPSG grammar is extracted from the HPSG Treebank using a corpusbased procedure, and it does not necessarily cover all possible grammatical phenomena in unseen text (Miyao and Tsujii, 2005). We therefore propose an extension of this approach that uses predetermined dependencies as soft constraints. Violations of schema applications are detected in the same way as before, but instead of strictly prohibiting schema applications, we penalize the log-likelihood of partial parse trees created by schema applications that violate the dependencies constraints. Given a negative value α, we add α to the log-probability of a partial parse tree when the schema application violates the dependency constraints. That is, when a parse tree violates n dependencies, the log-probability of the parse tree is lowered by nα. The meta parameter α is determined so as to maximize the accuracy on the development set. Soft dependency constraints can be implemented as explained above as a straightforward extension of the parsing algorithm. In addition, it is easily integrated with beam thresholding methods of parsing. Because beam thresholding discards partial parse trees that have low log-probabilities, we can expect that the parser would discard partial parse trees based on violation of the dependency constraints. 4 Experiments We evaluate the accuracy of HPSG parsing with dependency constraints on the HPSG Treebank (Miyao et al., 2003), which is extracted from the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993)1. Sections 02-21 were used for training (for HPSG and dependency parsers), section 22 was used as development data, and final testing was performed on section 23. 
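Before turning to the evaluation, the soft-constraint mechanism of Section 3.2 can be made concrete with a small sketch. The Edge structure and the dictionary format of the constraints are our own inventions for illustration; only the penalty logic follows the description above.

```python
from dataclasses import dataclass

@dataclass
class Edge:
    lexical_head: int   # token index of the lexical head of this partial parse
    logprob: float      # log-probability accumulated so far

def apply_schema_logprob(head_edge, dep_edge, schema_logprob, constraints, alpha):
    """constraints maps each dependent token index to the head predicted for it by
    the dependency parser; alpha >= 0 is the per-violation penalty of Section 3.2."""
    new_logprob = head_edge.logprob + dep_edge.logprob + schema_logprob
    predicted_head = constraints.get(dep_edge.lexical_head)
    if predicted_head is not None and predicted_head != head_edge.lexical_head:
        # Soft constraint: penalize rather than prohibit the schema application.
        # A very large alpha effectively recovers the hard-constraint behaviour.
        new_logprob -= alpha
    return new_logprob
```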
Following previous work on wide-coverage parsing with lexicalized grammars using the Penn Treebank, we evaluate the parser by measuring the accuracy of predicate-argument relations in the parser’s output. A predicate-argument relation is defined as a tuple ⟨σ, wh, a, wa⟩, where σ is the predicate type (e.g. adjective, intransitive verb), wh is the head word of the predicate, a is the argument label (MODARG, ARG1, ... , ARG4), and wa is the head word of the argument. Labeled precision (LP)/labeled recall (LR) is the ratio of tuples correctly identified by the parser. These predicateargument relations cover the full range of syntactic dependencies produced by the HPSG parser (including, long-distance dependencies, raising and control, in addition to surface dependencies). In the experiments presented in this section, input sentences were automatically tagged with partsof-speech with about 97% accuracy, using a maximum entropy POS tagger. We also report results on parsing text with gold standard POS tags, where explicitly noted. This provides an upper-bound on what can be expected if a more sophisticated multitagging scheme (James R. Curran and Vadas, 2006) is used, instead of hard assignment of single tags in a preprocessing step as done here. 1The extraction software can be obtained from http://wwwtsujii.is.s.u-tokyo.ac.jp/enju. 627 4.1 Baseline HPSG parsing results using the same HPSG grammar and treebank have recently been reported by Miyao and Tsujii (2005) and Ninomia et al. (2006). By running the HPSG parser described in section 2.2 on the development data without dependency constraints, we obtain similar values of LP (86.8%) and LR (85.6%) as those reported by Miyao and Tsujii (Miyao and Tsujii, 2005). Using the extremely lexicalized framework of (Ninomiya et al., 2006) by performing supertagging before parsing, we obtain similar accuracy as Ninomiya et al. (87.1% LP and 85.9% LR). 4.2 Dependency constraints and the penalty parameter Parsing the development data with hard dependency constraints confirmed the intuition that these constraints often describe dependency structures that do not conform to HPSG schema used in parsing, resulting in parse failures. To determine the upperbound on HPSG parsing with hard dependency constraints, we set the HPSG parser to disallow the application of any rules that result in the creation of dependencies that violate gold standard dependencies. This results in high precision (96.7%), but recall is low (82.3%) due to parse failures caused by lack of grammatical coverage 2. Using dependencies produced by the shift-reduce SVM parser, we obtain 91.5% LP and 65.7% LR. This represents a large gain in precision over the baseline, but an even greater loss in recall, which limits the usefulness of the parser, and severely hurts the appeal of hard constraints. We focus the rest of our experiments on parsing with soft dependency constraints. As explained in section 3, this involves setting the penalty parameter α. During parsing, we subtract α from the logprobability of applying any schema that violates the dependency constraints given to the HPSG parser. Figure 3 illustrates the effect of α when gold standard dependencies (and gold standard POS tags) are used. We note that setting α = 0 causes the parser 2Although the HPSG grammar does not have perfect coverage of unseen text, it supports complete and mostly correct analyses for all sentences in the development set. 
However, when we require completely correct analyses by using hard constraints, lack of coverage may cause parse failures. 89 90 91 92 93 94 95 96 0 5 10 15 20 25 30 35 Penalty Accuracy Precision Recall F-score Figure 3: The effect of α on HPSG parsing constrained by gold standard dependencies. to ignore dependency constraints, providing baseline performance. Conversely, setting a high enough value (α = 30 is sufficient, in practice) causes any substructures that violate the dependency constraints to be used only when they are absolutely necessary to produce a valid parse for the input sentence. In figure 3, this corresponds to an upper-bound on the accuracy of parsing with soft dependency constraints (94.7% f-score), since gold standard dependencies are used. We set α empirically with simple hill climbing on the development set. Because it is expected that the optimal value of α depends on the accuracy of the surface dependency parser, we set separate values for parsing with a POS tagger or with gold standard POS tags. Figure 4 shows the accuracy of HPSG predicate-argument relations obtained with dependency constraints determined by dependency parsing with gold standard POS tags. With both automatically assigned and gold standard POS tags, we observe an improvement of about 0.6% in precision, recall and f-score, when the optimal α value is used in each case. While this corresponds to a relative error reduction of over 6% (or 12%, if we consider the upper-bound dictated by imperfect grammatical coverage), a more interesting aspect of this framework is that it allows techniques designed for improving dependency accuracy to improve HPSG parsing accuracy directly, as we illustrate next. 628 89.4 89.6 89.8 90 90.2 90.4 90.6 90.8 91 0 0.5 1 1.5 2 2.5 3 3.5 Penalty Accuracy Precision Recall F-score Figure 4: The effect of α on HPSG parsing constrained by the output of a dependency parser using gold standard POS tags. 4.3 Determining constraints with dependency parser combination Parser combination has been shown to be a powerful way to obtain very high accuracy in dependency parsing (Sagae and Lavie, 2006). Using dependency constraints allows us to improve HPSG parsing accuracy simply by using an existing parser combination approach. As a first step, we train two additional parsers with the dependencies extracted from the HPSG Treebank. The first uses the same shiftreduce framework described in section 2.1, but it process the input from right to left (RL). This has been found to work well in previous work on dependency parser combination (Zeman and ˇZabokrtsk´y, 2005; Sagae and Lavie, 2006). The second parser is MSTParser, the large-margin maximum spanning tree parser described in (McDonald et al., 2005)3. We examine the use of two combination schemes: one using two parsers, and one using three parsers. The first combination approach is to keep only dependencies for which there is agreement between the two parsers. In other words, dependencies that are proposed by one parser but not the other are simply discarded. Using the left-to-right shift-reduce parser and MSTParser, we find that this results in very high precision of surface dependencies on the development data. In the second approach, combination of 3Downloaded from http://sourceforge.net/projects/mstparser the three dependency parsers is done according to the maximum spanning tree combination scheme of Sagae and Lavie (2006), which results in high accuracy of surface dependencies. 
For each of the combination approaches, we use the resulting dependencies as constraints for HPSG parsing, determining the optimal value of α on the development set in the same way as done for a single parser. Table 1 summarizes our experiments on development data using parser combinations to produce dependency constraints 4. The two combination approaches are denoted as C1 and C2. Parser Dep α HPSG Diff none (baseline) – – 86.5 – LR shift-reduce 91.2 1.5 87.1 0.6 RL shift-reduce 90.1 – – MSTParser 91.0 – – C1 (agreement) 96.8* 2.5 87.4 0.9 C2 (MST) 92.4 2.5 87.4 0.9 Table 1: Summary of results on development data. * The shallow accuracy of combination C1 corresponds to the dependency precision (no dependencies were reported for 8% of all words in the development set). 4.4 Results Having determined α values on development data for the shift-reduce dependency parser, the twoparser agreement combination, and the three-parser maximum spanning tree combination, we parse the test data (section 23) using these three different sources of dependency constraints for HPSG parsing. Our final results are shown in table 2, where we also include the results published in (Ninomiya et al., 2006) for comparison purposes, and the result of using dependency constraints obtained with gold standard POS tags. By using two unlabeled dependency parsers to provide soft dependency constraints, we obtain a 1% absolute improvement in precision and recall of predicate-argument identification in HPSG parsing over a strong baseline. Our baseline approach outperformed previously published results on this test 4The accuracy figures for the dependency parsers is expressed as unlabeled accuracy of the surface dependencies only, and are not comparable to the HPSG parsing accuracy figures 629 Parser LP LR F-score HPSG Baseline 87.4 87.0 87.2 Shift-Reduce + HPSG 88.2 87.7 87.9 C1 + HPSG 88.5 88.0 88.2 C2 + HPSG 88.4 87.9 88.1 Baseline(gold) 89.8 89.4 89.6 Shift-Reduce(gold) 90.62 90.23 90.42 C1+HPSG(gold) 90.9 90.4 90.6 C2+HPSG(gold) 90.8 90.4 90.6 Miyao and Tsujii, 2005 85.0 84.3 84.6 Ninomiya et al., 2006 87.4 86.3 86.8 Table 2: Final results on test set. The first set of results show our HPSG baseline and HPSG with soft dependency constraints using three different sources of dependency constraints. The second set of results show the accuracy of the same parsers when gold part-of-speech tags are used. The third set of results is from existing published models on the same data. set, and our best performing combination scheme obtains an absolute improvement of 1.4% over the best previously published results using the HPSG Treebank. It is interesting to note that the results obtained with dependency parser combinations C1 and C2 were very similar, even though in C1 only two parsers were used, and constraints were provided for about 92% of shallow dependencies (with accuracy higher than 96%). Clearly, precision is crucial in dependency constraints. Finally, although it is necessary to perform dependency parsing to pre-compute dependency constraints, the total time required to perform the entire process of HPSG parsing with dependency constraints is close to that of the baseline HPSG approach. This is due to two reasons: (1) the dependency parsing approaches used to pre-compute constraints are several times faster than the baseline HPSG approach, and (2) the HPSG portion of the process is significantly faster when dependency constraints are used, since the constraints help sharpen the search space, making search more efficient. 
Using the baseline HPSG approach, it takes approximately 25 minutes to parse the test set. The total time required to parse the test set using HPSG with dependency constraints generated by the shiftreduce parser is 27 minutes. With combination C1, parsing time increases to 30 minutes, since two dependency parsers are used sequentially. 5 Related work There are other approaches that combine shallow processing with deep parsing (Crysmann et al., 2002; Frank et al., 2003; Daum et al., 2003) to improve parsing efficiency. Typically, shallow parsing is used to create robust minimal recursion semantics, which are used as constraints to limit ambiguity during parsing. Our approach, in contrast, uses syntactic dependencies to achieve a significant improvement in the accuracy of wide-coverage HPSG parsing. Additionally, our approach is in many ways similar to supertagging (Bangalore and Joshi, 1999), which uses sequence labeling techniques as an efficient way to pre-compute parsing constraints (specifically, the assignment of lexical entries to input words). 6 Conclusion We have presented a novel framework for taking advantage of the strengths of a shallow parsing approach and a deep parsing approach. We have shown that by constraining the application of rules in HPSG parsing according to results from a dependency parser, we can significantly improve the accuracy of deep parsing by using shallow syntactic analyses. To illustrate how this framework allows for improvements in the accuracy of dependency parsing to be used directly to improve the accuracy of HPSG parsing, we showed that by combining the results of different dependency parsers using the search-based parsing ensemble approach of (Sagae and Lavie, 2006), we obtain improved HPSG parsing accuracy as a result of the improved dependency accuracy. Although we have focused on the use of HPSG and dependency parsing, the general framework presented here can be applied to other lexicalized grammar formalisms, such as LTAG, CCG and LFG. Acknowledgements This research was partially supported by Grant-inAid for Specially Promoted Research 18002007. 630 References Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Computational Linguistics, 25(2):237–265. A. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Joan Bresnan. 1982. The mental representation of grammatical relations. MIT Press. Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of the Tenth Conference on Natural Language Learning. New York, NY. M. Collins. 1999. Head-Driven Models for Natural Language Parsing. Phd thesis, University of Pennsylvania. Berthold Crysmann, Anette Frank, Bernd Kiefer, Stefan Mueller, Guenter Neumann, Jakub Piskorski, Ulrich Schaefer, Melanie Siegel, Hans Uszkoreit, Feiyu Xu, Markus Becker, and Hans-Ulrich Krieger. 2002. An integrated architecture for shallow and deep processing. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002). Michael Daum, Kilian A. Foth, and Wolfgang Menzel. 2003. Constraint-based integration of deep and shallow parsing techniques. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2003). Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. 
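For concreteness, the first combination scheme (C1) described above is simply an agreement filter over the two parsers' outputs; the sketch below is our own illustration of that filter, not the authors' implementation, and it deliberately omits the maximum-spanning-tree combination used for C2.

```python
def agreement_constraints(deps_lr, deps_mst):
    """deps_lr, deps_mst: {dependent_token: head_token} from the two parsers.
    Keep only dependencies on which both parsers agree; the rest are discarded,
    i.e. no constraint is imposed for those words."""
    return {dep: head for dep, head in deps_lr.items()
            if deps_mst.get(dep) == head}
```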
In Proceedings of the International Conference on Computational Linguistics (COLING’96). Copenhagen, Denmark. Anette Frank, Markus Becker, Berthold Crysmann, Bernd Kiefer, and Ulrich Schaefer. 2003. Integrated shallow and deep parsing: TopP meets HPSG. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL 2003), pages 104–111. Stephen Clark James R. Curran and David Vadas. 2006. Multi-tagging for lexicalized-grammar parsing. In Proceedings of COLING/ACL 2006. Sydney, Australia. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of the 2002 Conference on Natural Language Learning. M. P. Marcus, B. Santorini, and M. A. Marcinkiewics. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19. Ryan McDonald, Fernando Pereira, K. Ribarov, and J. Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the Conference on Human Language Technologies/Empirical Methods in Natural Language Processing (HLT-EMNLP). Vancouver, Canada. Yusuke Miyao and Jun’ichi Tsujii. 2005. Probabilistic disambiguation models for wide-coverage hpsg parsing. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics. Ann Arbor, MI. Yusuke Miyao, Takashi Ninomiya, and Jun’ichi Tsujii. 2003. Corpus oriented grammar development for aquiring a head-driven phrase structure grammar from the penn treebank. In Proceedings of the Tenth Conference on Natural Language Learning. T. Ninomiya, T. Matsuzaki, Y. Tsuruoka, Y. Miyao, and J. Tsujii. 2006. Extremely lexicalized models for accurate and fast hpsg parsing. In Proceedings of the 2006 Conference on Empirical Methods for Natural Language Processing (EMNLP 2006). Joakim Nivre and Mario Scholz. 2004. Deterministic dependency parsing of english text. In Proceedings of the 20th International Conference on Computational Linguistics, pages 64–70. Geneva, Switzerland. J. Nivre, J. Hall, J. Nilsson, G. Eryigit, and S. Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of the Tenth Conference on Natural Language Learning. New York, NY. C. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press. Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technologies. Vancouver, BC. Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the 2006 Meeting of the North American ACL. New York, NY. Yves Schabes, Anne Abeille, and Aravind Joshi. 1988. Parsing strategies with lexicalized grammars: Application to tree adjoining grammars. In Proceedings of 12th COLING. Mark Steedman. 2000. The Syntactic Process. MIT Press. Daniel Zeman and Zdenek ˇZabokrtsk´y. 2005. Improving parsing accuracy by combining diverse dependency parsers. In Proceedings of the International Workshop on Parsing Technologies. Vancouver, Canada. 631
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 57–64, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Making Lexical Ontologies Functional and Context-Sensitive Tony Veale Computer Science and Informatics University College Dublin Ireland [email protected] Yanfen Hao Computer Science and Informatics University College Dublin Ireland [email protected] Abstract Human categorization is neither a binary nor a context-free process. Rather, some concepts are better examples of a category than others, while the criteria for category membership may be satisfied to different degrees by different concepts in different contexts. In light of these empirical facts, WordNet’s static category structure appears both excessively rigid and unduly fragile for processing real texts. In this paper we describe a syntagmatic, corpus-based approach to redefining WordNet’s categories in a functional, gradable and context-sensitive fashion. We describe how the diagnostic properties for these definitions are automatically acquired from the web, and how the increased flexibility in categorization that arises from these redefinitions offers a robust account of metaphor comprehension in the mold of Glucksberg’s (2001) theory of category-inclusion. Furthermore, we demonstrate how this competence with figurative categorization can effectively be governed by automatically-generated ontological constraints, also acquired from the web. 1 Introduction Linguistic variation across contexts is often symptomatic of ontological differences between contexts. These observable variations can serve as valuable clues not just to the specific senses of words in context (e.g., see Pustejovsky, Hanks and Rumshisky, 2004) but to the underlying ontological structure itself (see Cimiano, Hotho and Staab, 2005). The most revealing variations are syntagmatic in nature, which is to say, they look beyond individual word forms to larger patterns of contiguous usage (Hanks, 2004). In most contexts, the similarity between chocolate, say, and a narcotic like heroin will meagerly reflect the simple ontological fact that both are kinds of substances; certainly, taxonomic measures of similarity as discussed in Budanitsky and Hirst (2006) will capture little more than this commonality. However, in a context in which the addictive properties of chocolate are very salient (e.g., an online dieting forum), chocolate is more likely to be categorized as a drug and thus be considered more similar to heroin. Look, for instance, at the similar ways in which these words can be used: one can be ”chocolate-crazed” or ”chocolate-addicted” and suffer ”chocolate-induced” symptoms (e.g., each of these uses can be found in the pages of Wikipedia). In a context that gives rise to these expressions, it is unsurprising that chocolate should appear altogether more similar to a harmful narcotic. In this paper we computationally model this idea that language use reflects category structure. As noted by De Leenheer and de Moor (2005), ontologies are lexical representations of concepts, so we can expect the effects of context on language use to closely reflect the effects of context on ontological structure. An understanding of the linguistic effects of context, as expressed through syntagmatic patterns of word usage, should lead therefore to the design of more flexible lexical ontologies that naturally adapt to their contexts of use. 
WordNet (Fell57 baum, 1998) is just one such lexical ontology that can benefit greatly from the added flexibility that context-sensitivity can bring. Though comprehensive in scale and widely used, WordNet suffers from an obvious structural rigidity in which concepts are either entirely within a category or entirely outside a category: no gradation of category membership is allowed, and no contextual factors are brought to bear on criteria for membership. Thus, a gun is always a weapon in WordNet while an axe is never so, despite the uses (sporting or murderous) to which each can be put. In section two we describe a computational framework for giving WordNet senses a functional, context-sensitive form. These functional forms simultaneously represent i) an intensional definition for each word sense; ii) a structured query capable of retrieving instances of the corresponding category from a context-specific corpus; and iii) a membership function that assigns gradated scores to these instances based on available syntagmatic evidence. In section three we describe how the knowledge required to automate this functional re-definition is acquired from the web and linked to WordNet. In section four we describe how these re-definitions can produce a robust model of metaphor, before we evaluate the descriptive sufficiency of this approach in section five, comparing it to the knowledge already available within WordNet. We conclude with some final remarks in section six. 2 Functional Context-Sensitive Categories We take a wholly textual view of context and assume that a given context can be implicitly characterized by a representative text corpus. This corpus can be as large as a text archive or an encyclopedia (e.g., the complete text of Wikipedia), or as small as a single document, a sentence or even a single noun-phrase. For instance, the micro-context ”alcoholic apple-juice” is enough to implicate the category Liquor, rather than Juice, as a semantic head, while ”lovable snake” can be enough of a context to locally categorize Snake as a kind of Pet. There is a range of syntagmatic patterns that one can exploit to glean category insights from a text. For instance, the ”X kills” pattern is enough to categorize X as a kind of Killer, ”hunts X” is enough to categorize X as a kind of Prey, while ”X-covered”, ”X-dipped” and ”X-frosted” all indicate that X is a kind of Covering. Likewise, ”army of X” suggests that a context views X as a kind of Soldier, while ”barrage of X” suggests that X should be seen as a kind of Projectile. We operationalize the collocation-type of adjective and noun via the function (attr ADJ NOUN), which returns a number in the range 0...1; this represents the extent to which ADJ is used to modify NOUN in the context-defining corpus. Dice’s coefficient (e.g., see Cimiano et al., 2005) is used to implement this measure. A context-sensitive category membership function can be defined, as in that for Fundamentalist in Figure 1: (define Fundamentalist.0 (arg0) (* (max (%isa arg0 Person.0) (%isa arg0 Group.0)) (min (max (attr political arg0) (attr religious arg0)) (max (attr extreme arg0) (attr violent arg0) (attr radical arg0))))) Figure 1. A functional re-definition of the category Fundamentalist. The function of Figure 1 takes, as a single argument arg0, a putative member of the category Fundamentalist.0 (note how the sense tag, 0, is used to identify a specific WordNet sense of ”fundamentalist”), and returns a membership score in the range 0...1 for this term. 
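Before turning to the automatically derived definitions, the primitives used in Figure 1 can be made concrete. The sketch below is our own illustration, not the authors' code: attr is Dice's coefficient over adjective:noun modification counts in the context corpus, and fundamentalist_0 transcribes the membership function of Figure 1, with the helpers isa and attr_fn assumed to be supplied by the caller.

```python
def attr(adj, noun, mod_counts, adj_totals, noun_totals):
    """mod_counts[(adj, noun)]: times adj modifies noun in the context corpus;
    adj_totals[adj], noun_totals[noun]: total modification counts per word."""
    joint = mod_counts.get((adj, noun), 0)
    denom = adj_totals.get(adj, 0) + noun_totals.get(noun, 0)
    return (2.0 * joint / denom) if denom else 0.0

def fundamentalist_0(arg0, isa, attr_fn):
    """isa(term, category) -> 1.0 or 0.0; attr_fn(adj, term) -> value in [0, 1]."""
    return (max(isa(arg0, "Person.0"), isa(arg0, "Group.0"))
            * min(max(attr_fn("political", arg0), attr_fn("religious", arg0)),
                  max(attr_fn("extreme", arg0), attr_fn("violent", arg0),
                      attr_fn("radical", arg0))))
```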
This score reflects the syntagmatic evidence for considering arg0 to be political or religious, as well as extreme or violent or radical. The function (%isa arg0 CAT) returns a value of 1.0 if some sense of arg0 is a descendent of CAT (here Person.0 or Group.0), otherwise 0. This safeguards ontological coherence and ensures that only kinds of people or groups can ever be considered as fundamentalists. The example of Figure 1 is hand-crafted, but a functional form can be assigned automatically to many of the synsets in WordNet by heuristic means. 58 For instance, those of Figure 2 are automatically derived from WordNet’s morpho-semantic links: (define Fraternity.0 (arg0) (* (%sim arg0 Fraternity.0) (max (attr fraternal arg0) (attr brotherly arg0)))) (define Orgasm.0 (arg0) (* (%sim arg0 Orgasm.0) (max (attr climactic arg0) (attr orgasmic arg0)))) Figure 2. Exploiting the WordNet links between nouns and their adjectival forms. The function (%sim arg0 CAT) reflects the perceived similarity between the putative member arg0 and a synset CAT in WordNet, using one of the standard formulations described in Budanitsky and Hirst (2006). Thus, any kind of group (e.g., a glee club, a Masonic lodge, or a barbershop quartet) described in a text as ”fraternal” or ”brotherly” (both occupy the same WordNet synset) can be considered a Fraternity to the corresponding degree, tempered by its a priori similarity to a Fraternity; likewise, any climactic event can be categorized as an Orgasm to a more or less degree. Alternately, the function of Figure 3 is automatically obtained for the lexical concept Espresso by shallow parsing its WordNet gloss: ”strong black coffee brewed by forcing steam under pressure through powdered coffee beans”. (define Espresso.0 (arg0) (* (%sim arg0 Espresso.0) (min (attr strong arg0) (attr black arg0)))) Figure 3. A functional re-definition of the category Espresso based on its WordNet gloss. It follows that any substance (e.g., oil or tea) described locally as ”black” and ”strong” with a non-zero taxonomic similarity to coffee can be considered a kind of Espresso. Combining the contents of WordNet 1.6 and WordNet 2.1, 27,732 different glosses (shared by 51,035 unique word senses) can be shallow parsed to yield a definition of the kind shown in Figure 3. Of these, 4525 glosses yield two or more properties that can be given functional form via attr. However, one can question whether these features are sufficient, and more importantly, whether they are truly diagnostic of the categories they are used to define. In the next section we consider another source of diagnostic properties, explicit similes on the web, before, in section 5, comparing the quality of these properties to those available from WordNet. 3 Diagnostic Properties on the Web We employ the Google search engine as a retrieval mechanism for acquiring the diagnostic properties of categories from the web, since the Google API and its support for the wildcard term * allows this process to be fully automated. The guiding intuition here is that looking for explicit similes of the form ”X is as P as Y” is the surest way of finding the most salient properties of a term Y; with other syntagmatic patterns, such as adjective:noun collocations, one cannot be sure that the adjective is central to the noun. Since we expect that explicit similes will tend to exploit properties that occupy an exemplary point on a scale, we first extract a list of antonymous adjectives, such as ”hot” or ”cold”, from WordNet. 
For every adjective ADJ on this list, we send the query ”as ADJ as *” to Google and scan the first 200 snippets returned to extract different noun values for the wildcard *. From each set of snippets we can also ascertain the relative frequencies of different noun values for ADJ. The complete set of nouns extracted in this way is then used to drive a second phase of the search, in which the query template ”as * as a NOUN” is used to acquire similes that may have lain beyond the 200-snippet horizon of the original search, or that may hinge on adjectives not included on the original list. Together, both phases collect a wide-ranging series of core samples (of 200 hits each) from across the web, yielding a set of 74,704 simile instances (of 42,618 unique types) relating 59 3769 different adjectives to 9286 different nouns 3.1 Property Filtering Unfortunately, many of these similes are not sufficiently well-formed to identify salient properties. In many cases, the noun value forms part of a larger noun phrase: it may be the modifier of a compound noun (as in ”bread lover”), or the head of complex noun phrase (such as ”gang of thieves” or ”wound that refuses to heal”). In the former case, the compound is used if it corresponds to a compound term in WordNet and thus constitutes a single lexical unit; if not, or if the latter case, the simile is rejected. Other similes are simply too contextual or underspecified to function well in a null context, so if one must read the original document to make sense of the simile, it is rejected. More surprisingly, perhaps, a substantial number of the retrieved similes are ironic, in which the literal meaning of the simile is contrary to the meaning dictated by common sense. For instance, ”as hairy as a bowling ball” (found once) is an ironic way of saying ”as hairless as a bowling ball” (also found just once). Many ironies can only be recognized using world knowledge, such as ”as sober as a Kennedy” and ”as tanned as an Irishman”. Given the creativity involved in these constructions, one cannot imagine a reliable automatic filter to safely identify bona-fide similes. For this reason, the filtering task is performed by a human judge, who annotated 30,991 of these simile instances (for 12,259 unique adjective/noun pairings) as non-ironic and meaningful in a null context; these similes relate a set of 2635 adjectives to a set of 4061 different nouns. In addition, the judge also annotated 4685 simile instances (of 2798 types) as ironic; these similes relate a set of 936 adjectives to a set of 1417 nouns. Perhaps surprisingly, ironic pairings account for over 13% of all annotated simile instances and over 20% of all annotated types. 3.2 Linking to WordNet Senses To create functional WordNet definitions from these adjective:noun pairings, we first need to identify the WordNet sense of each noun. For instance, ”as stiff as a zombie” might refer either to a re-animated corpse or to an alcoholic cocktail (both are senses of ”zombie” in WordNet, and drinks can be ”stiff” too). Disambiguation is trivial for nouns with just a single sense in WordNet. For nouns with two or more fine-grained senses that are all taxonomically close, such as ”gladiator” (two senses: a boxer and a combatant), we consider each sense to be a suitable target. In some cases, the WordNet gloss for as particular sense will literally mention the adjective of the simile, and so this sense is chosen. 
In all other cases, we employ a strategy of mutual disambiguation to relate the noun vehicle in each simile to a specific sense in WordNet. Two similes ”as A as N1” and ”as A as N2” are mutually disambiguating if N1 and N2 are synonyms in WordNet, or if some sense of N1 is a hypernym or hyponym of some sense of N2 in WordNet. For instance, the adjective ”scary” is used to describe both the noun ”rattler” and the noun ”rattlesnake” in bona-fide (non-ironic) similes; since these nouns share a sense, we can assume that the intended sense of ”rattler” is that of a dangerous snake rather than a child’s toy. Similarly, the adjective ”brittle” is used to describe both saltines and crackers, suggesting that it is the bread sense of ”cracker” rather than the hacker, firework or hillbilly senses (all in WordNet) that is intended. These heuristics allow us to automatically disambiguate 10,378 bona-fide simile types (85%), yielding a mapping of 2124 adjectives to 3778 different WordNet senses. Likewise, 77% (or 2164) of the simile types annotated as ironic are disambiguated automatically. A remarkable stability is observed in the alignment of noun vehicles to WordNet senses: 100% of the ironic vehicles always denote the same sense, no matter the adjective involved, while 96% of bona-fide vehicles always denote the same sense. This stability suggests two conclusions: the disambiguation process is consistent and accurate; but more intriguingly, only one coarse-grained sense of any word is likely to be sufficiently exemplary of some property to be useful in a simile. 4 From Similes to Category Functions As noted in section 3, the filtered web data yields 12,259 bona-fide similes describing 4061 target nouns in terms of 2635 different adjectival properties. Word-sense disambiguation allows 3778 synsets in WordNet to be given a functional redefinition in terms of 2124 diagnostic properties, as 60 in the definition of Gladiator in Figure 4: (define Gladiator.0 (arg0) (* (%isa arg0 Person.0) (* (%sim arg0 Gladiator.0) (combine (attr strong arg0) (attr violent arg0) (attr manly arg0))))) Figure 4. A web-based definition of Gladiator. Since we cannot ascertain from the web data which properties are necessary and which are collectively sufficient, we use the function combine to aggregate the available evidence. This function implements a na¨ıve probabilistic or, in which each piece of syntagmatic evidence is naively assumed to be independent, as follows: (combine e0 e1) = e0 + e1(1 −e0) (combine e0 e1...en) = (combine e0 (combine e1...en)) Thus, any combatant or competitor (such as a sportsman) that is described as strong, violent or manly in a corpus can be categorized as a Gladiator in that context; the more properties that hold, and the greater the degree to which they hold, the greater the membership score that is assigned. The source of the hard taxonomic constraint (%isa arg0 Person.0) is explained in the next section. For now, note how the use of %sim in the functions of Figures 2, 3 and 4 means that these membership functions readily admit both literal and metaphoric members. Since the line between literal and metaphoric uses of a category is often impossible to draw, the best one can do is to accept metaphor as a gradable phenomenon (see Hanks, 2006). The incorporation of taxonomic similarity via %sim ensures that literal members will tend to receive higher membership scores, and that the most tenuous metaphors will receive the lowest membership scores (close to 0.0). 
4.1 Constrained Category Inclusion Simile and metaphor involve quite different conceptual mechanisms. For instance, anything that is particularly strong or black might meaningfully be called ”as black as espresso” or ”as strong as espresso”, yet few such things can meaningfully be called just ”espresso”. While simile is a mechanism for highlighting inter-concept similarity, metaphor is at heart a mechanism of category inclusion (see Glucksberg, 2001). As the espresso example demonstrates, category inclusion is more than a matter of shared properties: humans have strong intuitions about the structure of categories and the extent to which they can be stretched to include new members. So while it is sensible to apply the category Espresso to other substances, preferably liquids, it seems nonsensical to apply the category to animals, artifacts, places and so on. Much as the salient properties of categories can be acquired form the web (see section 3), so too can the intuitions governing inclusion amongst categories. For instance, an attested web-usage of the phrase ”Espresso-like CAT” tells us that sub-types of CAT are allowable targets of categorization by the category Espresso. Thus, since the query ”espressolike substance” returns 3 hits via Google, types of substance (oil, etc.) can be described as Espresso if they are contextually strong and black. In contrast, the query ”espresso-like person” returns 0 hits, so no instance of person can be described as Espresso, no matter how black or how strong. While this is clearly a heuristic approach to a complex cognitive problem, it does allow us to tap into the tacit knowledge that humans employ in categorization. More generally, a concept X can be included in a category C if X exhibits salient properties of C and, for some hypernym H of X in WordNet, we can find an attested use of ”C-like H” on the web. If we can pre-fetch all possible ”C-like H” from the web, this will allow comprehension to proceed without having to resort to web analysis in mid-categorization. While there are too many possible values of H to make full pre-fetching a practical reality, we can generalize the problem somewhat, by selecting a range of values for H from the middle-layer of WordNet, such as Person, Substance, Animal, Tool, Plant, Structure, Event, Vehicle, Idea and Place, and by pre-fetching the query ”C-like H” for all 4061 nouns collected in section 3, combined with this limited set of H values. For every noun in our database then, we precompile a vector of possible category inclusions. 61 For instance, ”lattice” yields the following vector: {structure(1620), substance(8), container(1), vehicle(1)} where numbers in parentheses indicate the webfrequency of the corresponding ”Lattice-like H” query. Thus, the category Lattice can be used to describe (and metaphorically include) other kinds of structure (like crystals), types of substance (e.g., crystalline substances), containers (like honeycombs) and even vehicles (e.g., those with many compartments). Likewise, the noun ”snake” yields the following vector of possibilities: {structure(125), animal(122), person(56), vehicle(17), tool(9)} (note, the frequency for ”person” includes the frequency for ”man” and ”woman”). The category Snake can also be used to describe and include structures (like tunnels), other animals (like eels), people (e.g., the dishonest variety), vehicles (e.g., articulated trucks, trains) and tools (e.g., hoses). 
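The pre-fetching of these category-inclusion vectors can be sketched as follows. The web_hits function is a stand-in for whatever exact-phrase web-count service is used (the paper queries Google); it is assumed here, not provided.

    # Mid-level WordNet categories used as values of H in "C-like H" queries.
    MID_LEVEL = ["person", "substance", "animal", "tool", "plant",
                 "structure", "event", "vehicle", "idea", "place"]

    def inclusion_vector(category_noun, web_hits):
        """Pre-compile the category-inclusion vector for one noun, e.g.
        inclusion_vector("lattice", ...) -> {"structure": 1620, "substance": 8, ...}.
        `web_hits(phrase)` must return the web frequency of the exact phrase;
        it is a placeholder for the search-engine call used in the paper."""
        vector = {}
        for h in MID_LEVEL:
            hits = web_hits('"%s-like %s"' % (category_noun, h))
            if hits > 0:
                vector[h] = hits
        return vector

The categories with non-zero counts in such a vector become the hard taxonomic constraint in front of the property evidence, as the Snake definition discussed next illustrates.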
The noun ”gladiator” yields a vector of just one element, {person(1)}, from which the simple constraint (%isa arg0 Person.0) in Figure 4 is derived. In contrast, ”snake” is now given the definition of Figure 5: (define Snake.0 (arg0) (* (max (%isa arg0 Structure.0) (%isa arg0 Animal.0) (%isa arg0 Person.0) (%isa arg0 Vehicle.0)) (* (%sim arg0 Snake.0) (combine (attr cunning arg0) (attr slippery arg0) (attr flexible arg0) (attr slim arg0) (attr sinuous arg0) (attr crooked arg0) (attr deadly arg0) (attr poised arg0))))) Figure 5. A membership function for Snake using web-derived category-inclusion constraints. Glucksberg (2001) notes that the same category, used figuratively, can exhibit different qualities in different metaphors. For instance, Snake might describe a kind of crooked person in one metaphor, a poised killer in another metaphor, and a kind of flexible tool in yet another. The use of combine in Figure 5 means that a single category definition can give rise to each of these perspectives in the appropriate contexts. We therefore do not need a different category definition for each metaphoric use of Snake. To illustrate the high-level workings of categoryinclusion, Table 1 generalizes over the set of 3778 disambiguated nouns from section 3 to estimate the propensity for one semantic category, like Person, to include members of another category, like Animal, in X-like Y constructs. X-like Y P A Sub T Str (P)erson .66 .05 .03 .04 .09 (A)nimal .36 .27 .04 .05 .15 (Sub)stance .14 .03 .37 .05 .32 (T)ool .08 .03 .07 .22 .34 (Str)ucture .04 .03 .03 .03 .43 Table 1. The Likelihood of a category X accommodating a category Y. Table 1 reveals that 36% of ”ANIMAL-like” patterns on the web describe a kind of Person, while only 5% of ”PERSON-like” patterns on the web describe a kind of Animal. Category inclusion appears here to be a conservative mechanism, with like describing like in most cases; thus, types of Person are most often used to describe other kinds of Person (comprising 66% of ”PERSON-like” patterns), types of substance to describe other substances, and so on. The clear exception is Animal, with ”ANIMAL-like” phrases more often used to describe people (36%) than other kinds of animal (27%). The anthropomorphic uses of this category demonstrate the importance of folk-knowledge in figurative categorization, of the kind one is more likely to find in real text, and on the web (as in section 3), rather than in resources like WordNet. 62 5 Empirical Evaluation The simile gathering process of section 3, abetted by Google’s practice of ranking pages according to popularity, should reveal the most frequently-used comparative nouns, and thus, the most useful categories to capture in a general-purpose ontology like WordNet. But the descriptive sufficiency of these categories is not guaranteed unless the defining properties ascribed to each can be shown to be collectively rich enough, and individually salient enough, to predict how each category is perceived and applied by a language user. If similes are indeed a good basis for mining the most salient and diagnostic properties of categories, we should expect the set of properties for each category to accurately predict how the category is perceived as a whole. For instance, humans – unlike computers – do not generally adopt a dispassionate view of ideas, but rather tend to associate certain positive or negative feelings, or affective values, with particular ideas. 
Unsavoury activities, people and substances generally possess a negative affect, while pleasant activities and people possess a positive affect. Whissell (1989) reduces the notion of affect to a single numeric dimension, to produce a dictionary of affect that associates a numeric value in the range 1.0 (most unpleasant) to 3.0 (most pleasant) with over 8000 words in a range of syntactic categories (including adjectives, verbs and nouns). So to the extent that the adjectival properties yielded by processing similes paint an accurate picture of each category / noun-sense, we should be able to predict the affective rating of each vehicle via a weighted average of the affective ratings of the adjectival properties ascribed to these nouns (i.e., where the affect rating of each adjective contributes to the estimated rating of a noun in proportion to its frequency of co-occurrence with that noun in our simile data). More specifically, we should expect that ratings estimated via these simile-derived properties should correlate well with the independent ratings contained in Whissell’s dictionary. To determine whether similes do offer the clearest perspective on a category’s most salient properties, we calculate and compare this correlation using the following data sets: A. Adjectives derived from annotated bona-fide (non-ironic) similes only. B. Adjectives derived from all annotated similes (both ironic and non-ironic). C. Adjectives derived from ironic similes only. D. All adjectives used to modify a given noun in a large corpus. We use over 2-gigabytes of text from the online encyclopaedia Wikipedia as our corpus. E. The set of 63,935 unique property-of-noun pairings extracted via shallow-parsing from WordNet glosses in section 2, e.g., strong and black for Espresso. Predictions of affective rating were made from each of these data sources and then correlated with the ratings reported in Whissell’s dictionary of affect using a two-tailed Pearson test (p < 0.01). As expected, property sets derived from bona-fide similes only (A) yielded the best correlation (+0.514) while properties derived from ironic similes only (C) yielded the worst (-0.243); a middling correlation coefficient of 0.347 was found for all similes together, demonstrating the fact that bona-fide similes outnumber ironic similes by a ratio of 4 to 1. A weaker correlation of 0.15 was found using the corpus-derived adjectival modifiers for each noun (D); while this data provides quite large property sets for each noun, these properties merely reflect potential rather than intrinsic properties of each noun and so do not reveal what is most diagnostic about a category. More surprisingly, property sets derived from WordNet glosses (E) are also poorly predictive, yielding a correlation with Whissell’s affect ratings of just 0.278. This suggests that the properties used to define categories in hand-crafted resources like WordNet are not always those that actually reflect how humans think of these categories. 6 Concluding Remarks Much of what we understand about different categories is based on tacit and defeasible knowledge of the outside world, knowledge that cannot easily be shoe-horned into the rigid is-a structure of an ontology like WordNet. This already-complex picture 63 is complicated even further by the often metaphoric relationship between words and the categories they denote, and by the fact that the metaphor/literal distinction is not binary but gradable. 
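The evaluation just described amounts to a frequency-weighted average of adjective affect ratings per noun, correlated against the ratings in Whissell's dictionary. A minimal sketch, assuming the simile data is available as (noun, adjective) frequency counts and the dictionary as a mapping from words to ratings (the data layout is an assumption, not the authors' code):

    from collections import defaultdict
    from scipy.stats import pearsonr

    def predict_noun_affect(simile_counts, adj_affect):
        """Estimate a noun's affect as the average of the affect ratings of the
        adjectives ascribed to it, weighted by co-occurrence frequency.
        simile_counts: {(noun, adj): frequency};
        adj_affect: {adj: rating in [1.0, 3.0]}."""
        totals, weights = defaultdict(float), defaultdict(float)
        for (noun, adj), freq in simile_counts.items():
            if adj in adj_affect:
                totals[noun] += freq * adj_affect[adj]
                weights[noun] += freq
        return {n: totals[n] / weights[n] for n in totals}

    def correlate_with_dictionary(predicted, noun_affect):
        """Two-tailed Pearson correlation between predicted and dictionary
        ratings, over the nouns present in both."""
        shared = sorted(set(predicted) & set(noun_affect))
        return pearsonr([predicted[n] for n in shared],
                        [noun_affect[n] for n in shared])

The same two functions can be re-run with the property sets A-E listed above simply by changing which (noun, adjective) counts are passed in.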
Furthermore, the gradability of category membership is clearly influenced by context: in a corpus describing the exploits of Vikings, an axe will most likely be seen as a kind of weapon, but in a corpus dedicated to forestry, it will likely describe a tool. A resource like WordNet, in which is-a links are reserved for category relationships that are always true, in any context, is going to be inherently limited when dealing with real text. We have described an approach that can be seen as a functional equivalent to the CPA (Corpus Pattern Analysis) approach of Pustejovsky et al. (2004), in which our goal is not that of automated induction of word senses in context (as it is in CPA) but the automated induction of flexible, context-sensitive category structures. As such, our goal is primarily ontological rather than lexicographic, though both approaches are complementary since each views syntagmatic evidence as the key to understanding the use of lexical concepts in context. By defining category membership in terms of syntagmatic expectations, we establish a functional and gradable basis for determining whether one lexical concept (or synset) in WordNet deserves to be seen as a descendant of another in a particular corpus and context. Augmented with ontological constraints derived from the usage of ”X-like Y” patterns on the web, we also show how these membership functions can implement Glucksberg’s (2001) theory of category inclusion. We have focused on just one syntagmatic pattern here – adjectival modification of nouns – but categorization can be inferred from a wide range of productive patterns in text, particularly those concerning verbs and their case-fillers. For instance, verbcentred similes of the form ”to V+inf like a|an N” and ”to be V+past like a|an N” reveal insights into the diagnostic behaviour of entities (e.g., that predators hunt, that prey is hunted, that eagles soar and bombs explode). Taken together, adjective-based properties and verb-based behaviours can paint an even more comprehensive picture of each lexical concept, so that e.g., political agents that kill can be categorized as assassins, loyal entities that fight can be categorized as soldiers, and so on. An important next step, then, is to mine these behaviours from the web and incorporate the corresponding syntagmatic expectations into our category definitions. The symbolic nature of the resulting definitions means these can serve not just as mathematical membership functions, but as ”active glosses”, capable of recruiting their own members in a particular context while demonstrating a flexibility with categorization and a genuine competence with metaphor. References Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based Measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1), pp 1347. Christiane Fellbaum (ed.). 1998. WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA. Cynthia Whissell. 1989. The dictionary of affect in language. In R. Plutchnik & H. Kellerman (Eds.). Emotion: Theory and research. New York, Harcourt Brace, 113-131. James Pustejovsky, Patrick Hanks and Anna Rumshisky. 2004. Automated Induction of Sense in Context. In Proceedings of COLING 2004, Geneva, pp 924-931. Patrick Hanks. 2006. Metaphoricity is a Gradable. In A. Stefanowitsch and S. Gries (eds.). Corpora in Cognitive Linguistics. Vol. 1: Metaphor and Metonymy. Berlin: Mouton. Patrick Hanks. 2004. The syntagmatics of metaphor and idiom. International Journal of Lexicography, 17(3). 
Philipp Cimiano, Andreas Hotho, and Steffen Staab. 2005. Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis. Journal of AI Research, 24: 305-339. Pieter De Leenheer and Aldo de Moor. 2005. Contextdriven Disambiguation in Ontology Elicitation. In Shvaiko P. & Euzenat J. (eds.), Context and Ontologies: Theory, Practice and Applications, AAAI Tech Report WS-05-01. AAAI Press, pp 17-24. Sam Glucksberg. 2001. Understanding figurative language: From metaphors to idioms. Oxford: Oxford University Press. 64
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 632–639, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Constituent Parsing with Incremental Sigmoid Belief Networks Ivan Titov Department of Computer Science University of Geneva 24, rue G´en´eral Dufour CH-1211 Gen`eve 4, Switzerland [email protected] James Henderson School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW, United Kingdom [email protected] Abstract We introduce a framework for syntactic parsing with latent variables based on a form of dynamic Sigmoid Belief Networks called Incremental Sigmoid Belief Networks. We demonstrate that a previous feed-forward neural network parsing model can be viewed as a coarse approximation to inference with this class of graphical model. By constructing a more accurate but still tractable approximation, we significantly improve parsing accuracy, suggesting that ISBNs provide a good idealization for parsing. This generative model of parsing achieves state-of-theart results on WSJ text and 8% error reduction over the baseline neural network parser. 1 Introduction Latent variable models have recently been of increasing interest in Natural Language Processing, and in parsing in particular (e.g. (Koo and Collins, 2005; Matsuzaki et al., 2005; Riezler et al., 2002)). Latent variables provide a principled way to include features in a probability model without needing to have data labeled with those features in advance. Instead, a labeling with these features can be induced as part of the training process. The difficulty with latent variable models is that even small numbers of latent variables can lead to computationally intractable inference (a.k.a. decoding, parsing). In this paper we propose a solution to this problem based on dynamic Sigmoid Belief Networks (SBNs) (Neal, 1992). The dynamic SBNs which we peopose, called Incremental Sigmoid Belief Networks (ISBNs) have large numbers of latent variables, which makes exact inference intractable. However, they can be approximated sufficiently well to build fast and accurate statistical parsers which induce features during training. We use SBNs in a generative history-based model of constituent structure parsing. The probability of an unbounded structure is decomposed into a sequence of probabilities for individual derivation decisions, each decision conditioned on the unbounded history of previous decisions. The most common approach to handling the unbounded nature of the histories is to choose a pre-defined set of features which can be unambiguously derived from the history (e.g. (Charniak, 2000; Collins, 1999)). Decision probabilities are then assumed to be independent of all information not represented by this finite set of features. Another previous approach is to use neural networks to compute a compressed representation of the history and condition decisions on this representation (Henderson, 2003; Henderson, 2004). It is possible that an unbounded amount of information is encoded in the compressed representation via its continuous values, but it is not clear whether this is actually happening due to the lack of any principled interpretation for these continuous values. Like the former approach, we assume that there are a finite set of features which encode the relevant information about the parse history. But unlike that approach, we allow feature values to be ambiguous, and represent each feature as a distribution over (binary) values. 
In other words, these history features are treated as latent variables. Unfortunately, inter632 preting the history representations as distributions over discrete values of latent variables makes the exact computation of decision probabilities intractable. Exact computation requires marginalizing out the latent variables, which involves summing over all possible vectors of discrete values, which is exponential in the length of the vector. We propose two forms of approximation for dynamic SBNs, a neural network approximation and a form of mean field approximation (Saul and Jordan, 1999). We first show that the previous neural network model of (Henderson, 2003) can be viewed as a coarse approximation to inference with ISBNs. We then propose an incremental mean field method, which results in an improved approximation over the neural network but remains tractable. The resulting parser achieves significantly higher accuracy than the neural network parser (90.0% F-measure vs 89.1%). We argue that this correlation between better approximation and better accuracy suggests that dynamic SBNs are a good abstract model for natural language parsing. 2 Sigmoid Belief Networks A belief network, or a Bayesian network, is a directed acyclic graph which encodes statistical dependencies between variables. Each variable Si in the graph has an associated conditional probability distributions P(Si|Par(Si)) over its values given the values of its parents Par(Si) in the graph. A Sigmoid Belief Network (Neal, 1992) is a particular type of belief networks with binary variables and conditional probability distributions in the form of the logistic sigmoid function: P(Si =1|Par(Si)) = 1 1+exp(−P Sj∈Par(Si) JijSj), where Jij is the weight for the edge from variable Sj to variable Si. In this paper we consider a generalized version of SBNs where we allow variables with any range of discrete values. We thus generalize the logistic sigmoid function to the normalized exponential (a.k.a. softmax) function to define the conditional probabilities for non-binary variables. Exact inference with all but very small SBNs is not tractable. Initially sampling methods were used (Neal, 1992), but this is also not feasible for large networks, especially for the dynamic models of the type described in section 2.2. Variational methods have also been proposed for approximating SBNs (Saul and Jordan, 1999). The main idea of variational methods (Jordan et al., 1999) is, roughly, to construct a tractable approximate model with a number of free parameters. The free parameters are set so that the resulting approximate model is as close as possible to the original graphical model for a given inference problem. 2.1 Mean Field Approximation Methods The simplest example of a variation method is the mean field method, originally introduced in statistical mechanics and later applied to unsupervised neural networks in (Hinton et al., 1995). Let us denote the set of visible variables in the model (i.e. the inputs and outputs) by V and hidden variables by H = h1, . . . , hl. The mean field method uses a fully factorized distribution Q as the approximate model: Q(H|V ) = Y i Qi(hi|V ). where each Qi is the distribution of an individual latent variable. The independence between the variables hi in this approximate distribution Q does not imply independence of the free parameters which define the Qi. 
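Before turning to how those free parameters are set, the basic SBN conditional itself is simple to state in code. The sketch below computes P(S_i = 1 | Par(S_i)) for a binary variable and the softmax generalisation for a multi-valued one; representing values and weights as plain arrays is an illustrative simplification, not the authors' implementation.

    import numpy as np

    def sbn_conditional(parent_values, weights):
        """P(S_i = 1 | Par(S_i)) in a Sigmoid Belief Network: a logistic
        sigmoid of the weighted sum of the parent values."""
        return 1.0 / (1.0 + np.exp(-np.dot(weights, parent_values)))

    def softmax_conditional(parent_values, weight_matrix):
        """Generalisation to a variable with several discrete values: one
        weight row per value, normalised exponential over the weighted sums."""
        scores = weight_matrix @ parent_values
        scores -= scores.max()              # numerical stability
        exp = np.exp(scores)
        return exp / exp.sum()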
These parameters are set to minimize the Kullback-Leibler divergence (Cover and Thomas, 1991) between the approximate distribution Q(H|V ) and the true distribution P(H|V ): KL(Q∥P) = X H Q(H|V ) ln Q(H|V ) P(H|V ), (1) or, equivalently, to maximize the expression: LV = X H Q(H|V ) ln P(H, V ) Q(H|V ) . (2) The expression LV is a lower bound on the loglikelihood ln P(V ). It is used in the mean field theory (Saul and Jordan, 1999) to approximate the likelihood. However, in our case of dynamic graphical models, we have to use a different approach which allows us to construct an incremental parsing method without needing to introduce the additional parameters proposed in (Saul and Jordan, 1999). We will describe our modification of the mean field method in section 3.3. 633 2.2 Dynamics Dynamic Bayesian networks are Bayesian networks applied to arbitrarily long sequences. A new set of variables is instantiated for each position in the sequence, but the edges and weights for these variables are the same as in other positions. The edges which connect variables instantiated for different positions must be directed forward in the sequence, thereby allowing a temporal interpretation of the sequence. Typically a dynamic Bayesian Network will only involve edges between adjacent positions in the sequence (i.e. they are Markovian), but in our parsing models the pattern of interconnection is determined by structural locality, rather than sequence locality, as in the neural networks of (Henderson, 2003). Using structural locality to define the graph in a dynamic SBN means that the subgraph of edges with destinations at a given position cannot be determined until all the parser decisions for previous positions have been chosen. We therefore call these models Incremental SBNs, because, at any given position in the parse, we only know the graph of edges for that position and previous positions in the parse. For example in figure 1, discussed below, it would not be possible to draw the portion of the graph after t, because we do not yet know the decision dt k. The incremental specification of model structure means that we cannot use an undirected graphical model, such as Conditional Random Fields. With a directed dynamic model, all edges connecting the known portion of the graph to the unknown portion of the graph are directed toward the unknown portion. Also there are no variables in the unknown portion of the graph whose values are known (i.e. no visible variables), because at each step in a historybased model the decision probability is conditioned only on the parsing history. Only visible variables can result in information being reflected backward through a directed edge, so it is impossible for anything in the unknown portion of the graph to affect the probabilities in the known portion of the graph. Therefore inference can be performed by simply ignoring the unknown portion of the graph, and there is no need to sum over all possible structures for the unknown portion of the graph, as would be necessary for an undirected graphical model. Figure 1: Illustration of an ISBN. 3 The Probabilistic Model of Parsing In this section we present our framework for syntactic parsing with dynamic Sigmoid Belief Networks. We first specify the form of SBN we propose, namely ISBNs, and then two methods for approximating the inference problems required for parsing. 
We only consider generative models of parsing, since generative probability models are simpler and we are focused on probability estimation, not decision making. Although the most accurate parsing models (Charniak and Johnson, 2005; Henderson, 2004; Collins, 2000) are discriminative, all the most accurate discriminative models make use of a generative model. More accurate generative models should make the discriminative models which use them more accurate as well. Also, there are some applications, such as language modeling, which require generative models. 3.1 The Graphical Model In ISBNs, we use a history-based model, which decomposes the probability of the parse as: P(T) = P(D1, ..., Dm) = Y t P(Dt|D1, . . . , Dt−1), where T is the parse tree and D1, . . . , Dm is its equivalent sequence of parser decisions. Instead of treating each Dt as atomic decisions, it is convenient to further split them into a sequence of elementary decisions Dt = dt 1, . . . , dt n: P(Dt|D1, . . . , Dt−1) = Y k P(dt k|h(t, k)), where h(t, k) denotes the parsing history D1, . . . , Dt−1, dt 1, . . . , dt k−1. For example, a 634 decision to create a new constituent can be divided in two elementary decisions: deciding to create a constituent and deciding which label to assign to it. We use a graphical model to define our proposed class of probability models. An example graphical model for the computation of P(dt k|h(t, k)) is illustrated in figure 1. The graphical model is organized into vectors of variables: latent state variable vectors St′ = st′ 1 , . . . , st′ n, representing an intermediate state of the parser at derivation step t′, and decision variable vectors Dt′ = dt′ 1 , . . . , dt′ l , representing a parser decision at derivation step t′, where t′ ≤t. Variables whose value are given at the current decision (t, k) are shaded in figure 1, latent and output variables are left unshaded. As illustrated by the arrows in figure 1, the probability of each state variable st′ i depends on all the variables in a finite set of relevant previous state and decision vectors, but there are no direct dependencies between the different variables in a single state vector. Which previous state and decision vectors are connected to the current state vector is determined by a set of structural relations specified by the parser designer. For example, we could select the most recent state where the same constituent was on the top of the stack, and a decision variable representing the constituent’s label. Each such selected relation has its own distinct weight matrix for the resulting edges in the graph, but the same weight matrix is used at each derivation position where the relation is relevant. As indicated in figure 1, the probability of each elementary decision dt′ k depends both on the current state vector St′ and on the previously chosen elementary action dt′ k−1 from Dt′. This probability distribution has the form of a normalized exponential: P(dt′ k = d|St′, dt′ k−1)= Φh(t′,k)(d) e P j Wdjst′ j P d′Φh(t′,k)(d′) e P jWd′jst′ j , (3) where Φh(t′,k) is the indicator function of a set of elementary decisions that may possibly follow the parsing history h(t′, k), and the Wdj are the weights. For our experiments, we replicated the same pattern of interconnection between state variables as described in (Henderson, 2003).1 We also used the 1In the neural network of (Henderson, 2003), our variables same left-corner parsing strategy, and the same set of decisions, features, and states. 
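The decision distribution in (3) is a softmax restricted to the elementary decisions that are admissible after the current parsing history. A minimal sketch is given below; the state vector is treated as a plain array of values or means, and the dependence on the previous elementary decision is folded into which weight matrix and admissibility mask the caller supplies, which is a simplification of the model as stated.

    import numpy as np

    def decision_distribution(state, W, admissible):
        """Normalised exponential of equation (3), restricted by the
        indicator Phi.  state: state-variable values/means (length n);
        W: weight matrix with one row per elementary decision;
        admissible: boolean mask over decisions (the role of Phi)."""
        scores = W @ state
        scores = np.where(admissible, scores - scores.max(), -np.inf)
        probs = np.exp(scores)
        return probs / probs.sum()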
We refer the reader to (Henderson, 2003) for details. Exact computation with this model is not tractable. Sampling of parse trees from the model is not feasible, because a generative model defines a joint model of both a sentence and a tree, thereby requiring sampling over the space of sentences. Gibbs sampling (Geman and Geman, 1984) is also impossible, because of the huge space of variables and need to resample after making each new decision in the sequence. Thus, we know of no reasonable alternatives to the use of variational methods. 3.2 A Feed-Forward Approximation The first model we consider is a strictly incremental computation of a variational approximation, which we will call the feed-forward approximation. It can be viewed as the simplest form of mean field approximation. As in any mean field approximation, each of the latent variables is independently distributed. But unlike the general case of mean field approximation, in the feed-forward approximation we only allow the parameters of the distributions Qi to depend on the distributions of their parents. This additional constraint increases the potential for a large Kullback-Leibler divergence with the true model, defined in expression (1), but it significantly simplifies the computations. The set of hidden variables H in our graphical model consists of all the state vectors St′, t′ ≤t, and the last decision dt k. All the previously observed decisions h(t, k) comprise the set of visible variables V . The approximate fully factorisable distribution Q(H|V ) can be written as: Q(H|V ) = qt k(dt k) Y t′,i  µt′ i st′ i  1 −µt′ i 1−st′ i . where µt′ i is the free parameter which determines the distribution of state variable i at position t′, namely its mean, and qt k(dt k) is the free parameter which determines the distribution over decisions dt k. Because we are only allowed to use information about the distributions of the parent variables to map to their “units”, and our dependencies/edges map to their “links”. 635 compute the free parameters µt′ i , the optimal assignment of values to the µt′ i is: µt′ i = σ  ηt′ i  , where σ denotes the logistic sigmoid function and ηt′ i is a weighted sum of the parent variables’ means: ηt′ i = X t′′∈RS(t′) X j Jτ(t′,t′′) ij µt′′ j + X t′′∈RD(t′) X k Bτ(t′,t′′) idt′′ k , (4) where RS(t′) is the set of previous positions with edges from their state vectors to the state vector at t′, RD(t′) is the set of previous positions with edges from their decision vectors to the state vector at t′, τ(t′, t′′) is the relevant relation between the position t′′ and the position t′, and Jτ ij and Bτ id are weight matrices. In order to maximize (2), the approximate distribution of the next decisions qt k(d) should be set to qt k(d) = Φh(t,k) (d) e P j Wdjµt j P d′ Φh(t,k) (d′) e P j Wd′jµt j , (5) as follows from expression (3). The resulting estimate of the tree probability is given by: P(T) ≈ Y t,k qt k(dt k). This approximation method replicates exactly the computation of the feed-forward neural network in (Henderson, 2003), where the above means µt′ i are equivalent to the neural network hidden unit activations. Thus, that neural network probability model can be regarded as a simple approximation to the graphical model introduced in section 3.1. In addition to the drawbacks shared by any mean field approximation method, this feed-forward approximation cannot capture backward reasoning. By backward (a.k.a. 
top-down) reasoning we mean the need to update the state vector means µt′ i after observing a decision dt k, for t′ ≤t. The next section discusses how backward reasoning can be incorporated in the approximate model. 3.3 A Mean Field Approximation This section proposes a more accurate way to approximate ISBNs with mean field methods, which we will call the mean field approximation. Again, we are interested in finding the distribution Q which maximizes the quantity LV in expression (2). The decision distribution qt k(dt k) maximizes LV when it has the same dependence on the state vector means µt k as in the feed-forward approximation, namely expression (5). However, as we mentioned above, the feed-forward computation does not allow us to compute the optimal values of state means µt′ i . Optimally, after each new decision dt k, we should recompute all the means µt′ i for all the state vectors St′, t′ ≤t. However, this would make the method intractable, due to the length of derivations in constituent parsing and the interdependence between these means. Instead, after making each decision dt k and adding it to the set of visible variables V , we recompute only means of the current state vector St. The denominator of the normalized exponential function in (3) does not allow us to compute LV exactly. Instead, we use a simple first order approximation: EQ[ln X d Φh(t,k) (d) exp( X j Wdjst j)] ≈ln X d Φh(t,k)(d) exp( X j Wdjµt j), (6) where the expectation EQ[. . .] is taken over the state vector St distributed according to the approximate distribution Q. Unfortunately, even with this assumption there is no analytic way to maximize LV with respect to the means µt k, so we need to use numerical methods. Assuming (6), we can rewrite the expression (2) as follows, substituting the true P(H, V ) defined by the graphical model and the approximate distribution Q(H|V ), omitting parts independent of µt k: Lt,k V = X i −µt i ln µt i −(1 −µt i) ln  1 −µt i  +µt iηt i + X k′<k Φh(t,k′)(dt k′) X j Wdt k′jµt j − X k′<k ln  X d Φh(t,k′)(d) exp( X j Wdjµt j)  , (7) here, ηt i is computed from the previous relevant state means and decisions as in (4). This expression is 636 concave with respect to the parameters µt i, so the global maximum can be found. We use coordinatewise ascent, where each µt i is selected by an efficient line search (Press et al., 1996), while keeping other µt i′ fixed. 3.4 Parameter Estimation We train these models to maximize the fit of the approximate model to the data. We use gradient descent and a maximum likelihood objective function. This requires computation of the gradient of the approximate log-likelihood with respect to the model parameters. In order to compute these derivatives, the error should be propagated all the way back through the structure of the graphical model. For the feed-forward approximation, computation of the derivatives is straightforward, as in neural networks. But for the mean field approximation, it requires computation of the derivatives of the means µt i with respect to the other parameters in expression (7). The use of a numerical search in the mean field approximation makes the analytical computation of these derivatives impossible, so a different method needs to be used to compute their values. If maximization of Lt,k V is done until convergence, then the derivatives of Lt,k V with respect to µt i are close to zero: F t,k i = ∂Lt,k V ∂µt i ≈0 for all i. 
This system of equations allows us to use implicit differentiation to compute the needed derivatives. 4 Experimental Evaluation In this section we evaluate the two approximations to dynamic SBNs discussed in the previous section, the feed-forward method equivalent to the neural network of (Henderson, 2003) (NN method) and the mean field method (MF method). The hypothesis we wish to test is that the more accurate approximation of dynamic SBNs will result in a more accurate model of constituent structure parsing. If this is true, then it suggests that dynamic SBNs of the form proposed here are a good abstract model of the nature of natural language parsing. We used the Penn Treebank WSJ corpus (Marcus et al., 1993) to perform the empirical evaluation of the considered approaches. It is expensive to train R P F1 Bikel, 2004 87.9 88.8 88.3 Taskar et al., 2004 89.1 89.1 89.1 NN method 89.1 89.2 89.1 Turian and Melamed, 2006 89.3 89.6 89.4 MF method 89.3 90.7 90.0 Charniak, 2000 90.0 90.2 90.1 Table 1: Percentage labeled constituent recall (R), precision (P), combination of both (F1) on the testing set. the MF approximation on the whole WSJ corpus, so instead we use only sentences of length at most 15, as in (Taskar et al., 2004) and (Turian and Melamed, 2006). The standard split of the corpus into training (sections 2–22, 9,753 sentences), validation (section 24, 321 sentences), and testing (section 23, 603 sentences) was performed.2 As in (Henderson, 2003; Turian and Melamed, 2006) we used a publicly available tagger (Ratnaparkhi, 1996) to provide the part-of-speech tag for each word in the sentence. For each tag, there is an unknown-word vocabulary item which is used for all those words which are not sufficiently frequent with that tag to be included individually in the vocabulary. We only included a specific tag-word pair in the vocabulary if it occurred at least 20 time in the training set, which (with tag-unknown-word pairs) led to the very small vocabulary of 567 tag-word pairs. During parsing with both the NN method and the MF method, we used beam search with a post-word beam of 10. Increasing the beam size beyond this value did not significantly effect parsing accuracy. For both of the models, the state vector size of 40 was used. All the parameters for both the NN and MF models were tuned on the validation set. A single best model of each type was then applied to the final testing set. Table 1 lists the results of the NN approximation and the MF approximation, along with results of dif2Training of our MF method on this subset of WSJ took less than 6 days on a standard desktop PC. We would expect that a model for the entire WSJ corpus can be trained in about 3 months time. The training time is about linear with the number of words, but a larger state vector is needed to accommodate all the information. The long training times on the entire WSJ would not allow us to tune the model parameters properly, which would have increased the randomness of the empirical comparison, although it would be feasible for building a system. 637 ferent generative and discriminative parsing methods (Bikel, 2004; Taskar et al., 2004; Turian and Melamed, 2006; Charniak, 2000) evaluated in the same experimental setup. The MF model improves over the baseline NN approximation, with an error reduction in F-measure exceeding 8%. 
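The 8% figure follows directly from the F-measures in Table 1, taking error as 100 minus F1:

    nn_f1, mf_f1 = 89.1, 90.0                    # Table 1, sentences of length <= 15
    nn_error, mf_error = 100 - nn_f1, 100 - mf_f1
    reduction = (nn_error - mf_error) / nn_error
    print("relative error reduction: %.1f%%" % (100 * reduction))   # ~8.3%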
This improvement is statically significant.3 The MF model achieves results which do not appear to be significantly different from the results of the best model in the list (Charniak, 2000). It should also be noted that the model (Charniak, 2000) is the most accurate generative model on the standard WSJ parsing benchmark, which confirms the viability of our generative model. These experimental results suggest that Incremental Sigmoid Belief Networks are an appropriate model for natural language parsing. Even approximations such as those tested here, with a very strong factorisability assumption, allow us to build quite accurate parsing models. The main drawback of our proposed mean field approach is the relative computational complexity of the numerical procedure used to maximize Lt,k V . But this approximation has succeeded in showing that a more accurate approximation of ISBNs results in a more accurate parser. We believe this provides strong justification for more accurate approximations of ISBNs for parsing. 5 Related Work There has not been much previous work on graphical models for full parsing, although recently several latent variable models for parsing have been proposed (Koo and Collins, 2005; Matsuzaki et al., 2005; Riezler et al., 2002). In (Koo and Collins, 2005), an undirected graphical model is used for parse reranking. Dependency parsing with dynamic Bayesian networks was considered in (Peshkin and Savova, 2005), with limited success. Their model is very different from ours. Roughly, it considered the whole sentence at a time, with the graphical model being used to decide which words correspond to leaves of the tree. The chosen words are then removed from the sentence and the model is recursively applied to the reduced sentence. Undirected graphical models, in particular Condi3We measured significance of all the experiments in this paper with the randomized significance test (Yeh, 2000). tional Random Fields, are the standard tools for shallow parsing (Sha and Pereira, 2003). However, shallow parsing is effectively a sequence labeling problem and therefore differs significantly from full parsing. As discussed in section 2.2, undirected graphical models do not seem to be suitable for historybased full parsing models. Sigmoid Belief Networks were used originally for character recognition tasks, but later a dynamic modification of this model was applied to the reinforcement learning task (Sallans, 2002). However, their graphical model, approximation method, and learning method differ significantly from those of this paper. 6 Conclusions This paper proposes a new generative framework for constituent parsing based on dynamic Sigmoid Belief Networks with vectors of latent variables. Exact inference with the proposed graphical model (called Incremental Sigmoid Belief Networks) is not tractable, but two approximations are considered. First, it is shown that the neural network parser of (Henderson, 2003) can be considered as a simple feed-forward approximation to the graphical model. Second, a more accurate but still tractable approximation based on mean field theory is proposed. Both methods are empirically compared, and the mean field approach achieves significantly better results, which are non-significantly different from the results of the most accurate generative parsing model (Charniak, 2000) on our testing set. The fact that a more accurate approximation leads to a more accurate parser suggests that ISBNs are a good abstract model for constituent structure parsing. 
This empirical result motivates research into more accurate approximations of dynamic SBNs. We focused in this paper on generative models of parsing. The results of such a generative model can be easily improved by a discriminative reranking model, even without any additional feature engineering. For example, the discriminative training techniques successfully applied in (Henderson, 2004) to the feed-forward neural network model can be directly applied to the mean field model proposed in this paper. The same is true for reranking with data-defined kernels, with which we would 638 expect similar improvements as were achieved with the neural network parser (Henderson and Titov, 2005). Such improvements should situate the resulting model among the best current parsing models. References Dan M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4). Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and MaxEnt discriminative reranking. In Proc. ACL, pages 173–180, Ann Arbor, MI. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. ACL, pages 132–139, Seattle, Washington. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. ICML, pages 175–182, Stanford, CA. Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley, New York, NY. S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741. James Henderson and Ivan Titov. 2005. Data-defined kernels for parse reranking derived from probabilistic models. In Proc. ACL, Ann Arbor, MI. James Henderson. 2003. Inducing history representations for broad coverage statistical parsing. In Proc. HLT-NAACL, pages 103–110, Edmonton, Canada. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proc. ACL, Barcelona, Spain. G. Hinton, P. Dayan, B. Frey, and R. Neal. 1995. The wake-sleep algorithm for unsupervised neural networks. Science, 268:1158–1161. M. I. Jordan, Z.Ghahramani, T. S. Jaakkola, and L. K. Saul. 1999. An introduction to variational methods for graphical models. In Michael I. Jordan, editor, Learning in Graphical Models. MIT Press, Cambridge, MA. Terry Koo and Michael Collins. 2005. Hidden-variable models for discriminative reranking. In Proc. EMNLP, Vancouver, B.C., Canada. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proc. ACL, Ann Arbor, MI. Radford Neal. 1992. Connectionist learning of belief networks. Artificial Intelligence, 56:71–113. Leon Peshkin and Virginia Savova. 2005. Dependency parsing with dynamic bayesian network. In AAAI, 20th National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. 1996. Numerical Recipes. Cambridge University Press, Cambridge, UK. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. EMNLP, pages 133–142, Univ. of Pennsylvania, PA. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell, and Mark Johnson. 2002. 
Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proc. ACL, Philadelphia, PA. Brian Sallans. 2002. Reinforcement Learning for Factored Markov Decision Processes. Ph.D. thesis, University of Toronto, Toronto, Canada. Lawrence K. Saul and Michael I. Jordan. 1999. A mean field learning algorithm for unsupervised neural networks. In Michael I. Jordan, editor, Learning in Graphical Models, pages 541–554. MIT Press, Cambridge, MA. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL, Edmonton, Canada. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. Max-margin parsing. In Proc. EMNLP, Barcelona, Spain. Joseph Turian and Dan Melamed. 2006. Advances in discriminative parsing. In Proc. COLING-ACL, Sydney, Australia. Alexander Yeh. 2000. More accurate tests for the statistical significance of the result differences. In Proc. COLING, pages 947–953, Saarbruken, Germany. 639
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 640–647, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Corpus Effects on the Evaluation of Automated Transliteration Systems Sarvnaz Karimi Andrew Turpin Falk Scholer School of Computer Science and Information Technology RMIT University, GPO Box 2476V, Melbourne 3001, Australia {sarvnaz,aht,fscholer}@cs.rmit.edu.au Abstract Most current machine transliteration systems employ a corpus of known sourcetarget word pairs to train their system, and typically evaluate their systems on a similar corpus. In this paper we explore the performance of transliteration systems on corpora that are varied in a controlled way. In particular, we control the number, and prior language knowledge of human transliterators used to construct the corpora, and the origin of the source words that make up the corpora. We find that the word accuracy of automated transliteration systems can vary by up to 30% (in absolute terms) depending on the corpus on which they are run. We conclude that at least four human transliterators should be used to construct corpora for evaluating automated transliteration systems; and that although absolute word accuracy metrics may not translate across corpora, the relative rankings of system performance remains stable across differing corpora. 1 Introduction Machine transliteration is the process of transforming a word written in a source language into a word in a target language without the aid of a bilingual dictionary. Word pronunciation is preserved, as far as possible, but the script used to render the target word is different from that of the source language. Transliteration is applied to proper nouns and outof-vocabulary terms as part of machine translation and cross-lingual information retrieval (CLIR) (AbdulJaleel and Larkey, 2003; Pirkola et al., 2006). Several transliteration methods are reported in the literature for a variety of languages, with their performance being evaluated on multilingual corpora. Source-target pairs are either extracted from bilingual documents or dictionaries (AbdulJaleel and Larkey, 2003; Bilac and Tanaka, 2005; Oh and Choi, 2006; Zelenko and Aone, 2006), or gathered explicitly from human transliterators (Al-Onaizan and Knight, 2002; Zelenko and Aone, 2006). Some evaluations of transliteration methods depend on a single unique transliteration for each source word, while others take multiple target words for a single source word into account. In their work on transliterating English to Persian, Karimi et al. (2006) observed that the content of the corpus used for evaluating systems could have dramatic affects on the reported accuracy of methods. The effects of corpus composition on the evaluation of transliteration systems has not been specifically studied, with only implicit experiments or claims made in the literature such as introducing the effects of different transliteration models (AbdulJaleel and Larkey, 2003), language families (Lind´en, 2005) or application based (CLIR) evaluation (Pirkola et al., 2006). In this paper, we report our experiments designed to explicitly examine the effect that varying the underlying corpus used in both training and testing systems has on transliteration accuracy. Specifically, we vary the number of human transliterators that are used to construct the corpus; and the origin of the English words used in the corpus. 
Our experiments show that the word accuracy of automated transliteration systems can vary by up to 30% (in absolute terms), depending on the corpus used. Despite the wide range of absolute values 640 in performance, the ranking of our two transliteration systems was preserved on all corpora. We also find that a human’s confidence in the language from which they are transliterating can affect the corpus in such a way that word accuracy rates are altered. 2 Background Machine transliteration methods are divided into grapheme-based (AbdulJaleel and Larkey, 2003; Lind´en, 2005), phoneme-based (Jung et al., 2000; Virga and Khudanpur, 2003) and combined techniques (Bilac and Tanaka, 2005; Oh and Choi, 2006). Grapheme-based methods derive transformation rules for character combinations in the source text from a training data set, while phoneme-based methods use an intermediate phonetic transformation. In this paper, we use two grapheme-based methods for English to Persian transliteration. During a training phase, both methods derive rules for transforming character combinations (segments) in the source language into character combinations in the target language with some probability. During transliteration, the source word si is segmented and rules are chosen and applied to each segment according to heuristics. The probability of a resulting word is the product of the probabilities of the applied rules. The result is a list of target words sorted by their associated probabilities, Li. The first system we use (SYS-1) is an n-gram approach that uses the last character of the previous source segment to condition the choice of the rule for the current source segment. This system has been shown to outperform other n-gram based methods for English to Persian transliteration (Karimi et al., 2006). The second system we employ (SYS-2) makes use of some explicit knowledge of our chosen language pair, English and Persian, and is also on the collapsed-vowel scheme presented by Karimi et al. (2006). In particular, it exploits the tendency for runs of English vowels to be collapsed into a single Persian character, or perhaps omitted from the Persian altogether. As such, segments are chosen based on surrounding consonants and vowels. The full details of this system are not important for this paper; here we focus on the performance evaluation of systems, not the systems themselves. 2.1 System Evaluation In order to evaluate the list Li of target words produced by a transliteration system for source word si, a test corpus is constructed. The test corpus consists of a source word, si, and a list of possible target words {tij}, where 1 ≤j ≤di, the number of distinct target words for source word si. Associated with each tij is a count nij which is the number of human transliterators who transliterated si into tij. Often the test corpus is a proportion of a larger corpus, the remainder of which has been used for training the system’s rule base. In this work we adopt the standard ten-fold cross validation technique for all of our results, where 90% of a corpus is used for training and 10% for testing. The process is repeated ten times, and the mean result taken. Forthwith, we use the term corpus to refer to the single corpus from which both training and test sets are drawn in this fashion. Once the corpus is decided upon, a metric to measure the system’s accuracy is required. The appropriate metric depends on the scenario in which the transliteration system is to be used. 
For example, in a machine translation application where only one target word can be inserted in the text to represent a source word, it is important that the word at the top of the system generated list of target words (by definition the most probable) is one of the words generated by a human in the corpus. More formally, the first word generated for source word si, Li 1, must be one of tij,1 ≤j ≤di. It may even be desirable that this is the target word most commonly used for this source word; that is, Li 1 = tij such that nij ≥nik, for all 1 ≤k ≤di. Alternately, in a CLIR application, all variants of a source word might be required. For example, if a user searches for an English term “Tom” in Persian documents, the search engine should try and locate documents that contain both “ Ð A  K” (3 letters:  H Ð) and ” Õç  '”(2 letters:  HÐ), two possible transliterations of “Tom” that would be generated by human transliterators. In this case, a metric that counts the number of tij that appear in the top di elements of the system generated list, Li, might be appropriate. In this paper we focus on the “Top-1” case, where it is important for the most probable target word generated by the system, Li 1 to be either the most pop641 ular tij (labeled the Majority, with ties broken arbitrarily), or just one of the tij’s (labeled Uniform because all possible transliterations are equally rewarded). A third scheme (labeled Weighted) is also possible where the reward for tij appearing as Li 1 is nij/∑di j=1 nij; here, each target word is given a weight proportional to how often a human transliterator chose that target word. Due to space considerations, we focus on the first two variants only. In general, there are two commonly used metrics for transliteration evaluation: word accuracy (WA) and character accuracy (CA) (Hall and Dowling, 1980). In all of our experiments, CA based metrics closely mirrored WA based metrics, and so conclusions drawn from the data would be the same whether WA metrics or CA metrics were used. Hence we only discuss and report WA based metrics in this paper. For each source word in the test corpus of K words, word accuracy calculates the percentage of correctly transliterated terms. Hence for the majority case, where every source word in the corpus only has one target word, the word accuracy is defined as MWA = |{si|Li 1 = ti1,1 ≤i ≤K}|/K, and for the Uniform case, where every target variant is included with equal weight in the corpus, the word accuracy is defined as UWA = |{si|Li 1 ∈{tij},1 ≤i ≤K,1 ≤j ≤di}|/K. 2.2 Human Evaluation To evaluate the level of agreement between transliterators, we use an agreement measure based on Mun and Eye (2004). For any source word si, there are di different transliterations made by the ni human transliterators (ni = ∑di j=1 nij, where nij is the number of times source word si was transliterated into target word tij). When any two transliterators agree on the same target word, there are two agreements being made: transliterator one agrees with transliterator two, and vice versa. In general, therefore, the total number of agreements made on source word si is ∑di j=1 nij(nij −1). Hence the total number of actual agreements made on the entire corpus of K words is Aact = K ∑ i=1 di ∑ j=1 nij(nij −1). The total number of possible agreements (that is, when all human transliterators agree on a single target word for each source word), is Aposs = K ∑ i=1 ni(ni −1). The proportion of overall agreement is therefore PA = Aact Aposs . 
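The three quantities defined in this section are straightforward to compute from a corpus stored as, for each source word, a mapping from target variants to the number of transliterators who produced them, together with the system's top-ranked output. The data layout below is an assumption made for illustration, not the authors' code.

    def word_accuracy(corpus, top1):
        """corpus: {source: {target: n_ij}};  top1: {source: L_i1}.
        Returns (UWA, MWA) as percentages over the K source words."""
        uniform = majority = 0
        for source, variants in corpus.items():
            best = max(variants, key=variants.get)   # ties broken arbitrarily
            if top1[source] in variants:
                uniform += 1
            if top1[source] == best:
                majority += 1
        k = len(corpus)
        return 100.0 * uniform / k, 100.0 * majority / k

    def agreement_proportion(corpus):
        """PA = A_act / A_poss over the whole corpus."""
        actual = possible = 0
        for variants in corpus.values():
            n_i = sum(variants.values())
            actual += sum(n * (n - 1) for n in variants.values())
            possible += n_i * (n_i - 1)
        return actual / possible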
2.3 Corpora Seven transliterators (T1, T2, ..., T7: all native Persian speakers from Iran) were recruited to transliterate 1500 proper names that we provided. The names were taken from lists of names written in English on English Web sites. Five hundred of these names also appeared in lists of names on Arabic Web sites, and five hundred on Dutch name lists. The transliterators were not told of the origin of each word. The entire corpus, therefore, was easily separated into three sub-corpora of 500 words each based on the origin of each word. To distinguish these collections, we use E7, A7 and D7 to denote the English, Arabic and Dutch sub-corpora, respectively. The whole 1500 word corpus is referred to as EDA7. Dutch and Arabic were chosen with an assumption that most Iranian Persian speakers have little knowledge of Dutch, while their familiarity with Arabic should be in the second rank after English. All of the participants held at least a Bachelors degree. Table 1 summarizes the information about the transliterators and their perception of the given task. Participants were asked to scale the difficulty of the transliteration of each sub-corpus, indicated as a scale from 1 (hard) to 3 (easy). Similarly, the participants’ confidence in performing the task was rated from 1 (no confidence) to 3 (quite confident). The level of familiarity with second languages was also reported based on a scale of zero (not familiar) to 3 (excellent knowledge). The information provided by participants confirms our assumption of transliterators knowledge of second languages: high familiarity with English, some knowledge of Arabic, and little or no prior knowledge of Dutch. Also, the majority of them found the transliteration of English terms of medium difficulty, Dutch was considered mostly hard, and Arabic as easy to medium. 642 Second Language Knowledge Difficulty,Confidence Transliterator English Dutch Arabic Other English Dutch Arabic 1 2 0 1 1,1 1,2 2,3 2 2 0 2 2,2 2,3 3,3 3 2 0 1 2,2 1,2 2,2 4 2 0 1 2,2 2,1 3,3 5 2 0 2 Turkish 2,2 1,1 3,2 6 2 0 1 2,2 1,1 3,3 7 2 0 1 2,2 1,1 2,2 Table 1: Transliterator’s language knowledge (0=not familiar to 3=excellent knowledge), perception of difficulty (1=hard to 3=easy) and confidence (1=no confidence to 3=quite confident) in creating the corpus. E7 D7 A7 EDA7 Corpus 0 20 40 60 80 100 Word Accuracy (%) UWA (SYS-2) UWA (SYS-1) MWA (SYS-2) MWA (SYS-1) Figure 1: Comparison of the two evaluation metrics using the two systems on four corpora. (Lines were added for clarity, and do not represent data points.) 0 20 40 60 80 100 Corpus 0 20 40 60 80 100 Word Accuracy (%) UWA (SYS-2) UWA (SYS-1) MWA (SYS-2) MWA (SYS-1) Figure 2: Comparison of the two evaluation metrics using the two systems on 100 randomly generated sub-corpora. 3 Results Figure 1 shows the values of UWA and MWA for E7, A7, D7 and EDA7 using the two transliteration systems. Immediately obvious is that varying the corpora (x-axis) results in different values for word accuracy, whether by the UWA or MWA method. For example, if you chose to evaluate SYS-2 with the UWA metric on the D7 corpus, you would obtain a result of 82%, but if you chose to evaluate it with the A7 corpus you would receive a result of only 73%. This makes comparing systems that report results obtained on different corpora very difficult. Encouragingly, however, SYS-2 consistently outperforms the SYS-1 on all corpora for both metrics except MWA on E7. 
This implies that ranking system performance on the same corpus most likely yields a system ranking that is transferable to other corpora. To further investigate this, we randomly extracted 100 corpora of 500 word pairs from EDA7 and ran the two systems on them and evaluated the results using both MWA and UWA. Both of the measures ranked the systems consistently using all these corpora (Figure 2). As expected, the UWA metric is consistently higher than the MWA metric; it allows for the top transliteration to appear in any of the possible variants for that word in the corpus, unlike the MWA metric which insists upon a single target word. For example, for the E7 corpus using the SYS-2 approach, UWA is 76.4% and MWA is 47.0%. Each of the three sub-corpora can be further divided based on the seven individual transliterators, in different combinations. That is, construct a subcorpus from T1’s transliterations, T2’s, and so on; then take all combinations of two transliterators, then three, and so on. In general we can construct 7Cr such corpora from r transliterators in this fashion, all of which have 500 source words, but may have between one to seven different transliterations for each of those words. Figure 3 shows the MWA for these sub-corpora. The x-axis shows the number of transliterators used to form the sub-corpora. For example, when x = 3, the performance figures plotted are achieved on corpora when taking all triples of the seven transliterator’s transliterations. From the boxplots it can be seen that performance varies considerably when the number of transliterators used to determine a majority vote is varied. 643 1 2 3 4 5 6 7 20 30 40 50 60 D7                                 1 2 3 4 5 6 7 20 30 40 50 60 Number of Transliterators EDA7 1 2 3 4 5 6 7 20 30 40 50 60 Word Accuracy (%) E7                                 1 2 3 4 5 6 7 20 30 40 50 60 Number of Transliterators Word Accuracy (%) A7 Figure 3: Performance on sub-corpora derived by combining the number of transliterators shown on the xaxis. Boxes show the 25th and 75th percentile of the MWA for all 7Cx combinations of transliterators using SYS-2, with whiskers showing extreme values. However, the changes do not follow a fixed trend across the languages. For E7, the range of accuracies achieved is high when only two or three transliterators are involved, ranging from 37.0% to 50.6% in SYS-2 method and from 33.8% to 48.0% in SYS-1 (not shown) when only two transliterators’ data are available. When more than three transliterators are used, the range of performance is noticeably smaller. Hence if at least four transliterators are used, then it is more likely that a system’s MWA will be stable. This finding is supported by Papineni et al. (2002) who recommend that four people should be used for collecting judgments for machine translation experiments. The corpora derived from A7 show consistent median increases as the number of transliterators increases, but the median accuracy is lower than for other languages. The D7 collection does not show any stable results until at least six transliterator’s are used. 
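The sub-corpora summarized in Figure 3 can be generated as in the sketch below (Python; per_transliterator is an assumed mapping from each transliterator to their individual transliterations of the 500 source words, and the yielded corpora use the same (source, [(target, count), ...]) form as above so that MWA and UWA can be computed on them directly):

    from collections import Counter
    from itertools import combinations

    def sub_corpora(per_transliterator, r):
        """Yield one corpus for every combination of r transliterators."""
        for group in combinations(per_transliterator, r):
            corpus = []
            for source in per_transliterator[group[0]]:
                counts = Counter(per_transliterator[t][source] for t in group)
                corpus.append((source, list(counts.items())))
            yield corpus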
The results indicate that creating a collection used for the evaluation of transliteration systems, based on a “gold standard” created by only one human transliterator may lead to word accuracy results that could show a 10% absolute difference compared to results on a corpus derived using a different translitE7 D7 A7 EDA7 Corpus 0 20 40 60 Word Accuracy (%) T1 T2 T3 T4 T5 T6 T7 SYS-2 Figure 4: Word accuracy on the sub-corpora using only a single transliterator’s transliterations. erator. This is evidenced by the leftmost box in each panel of the figure which has a wide range of results. Figure 4 shows this box in more detail for each collection, plotting the word accuracy for each user for all sub-corpora for SYS-2. The accuracy achieved varies significantly between transliterators; for example, for E7 collections, word accuracy varies from 37.2% for T1 to 50.0% for T5. This variance is more obvious for the D7 dataset where the difference ranges from 23.2% for T1 to 56.2% for T3. Origin language also has an effect: accuracy for the Arabic collection (A7) is generally less than that of English (E7). The Dutch collection (D7), shows an unstable trend across transliterators. In other words, accuracy differs in a narrower range for Arabic and English, but in wider range for Dutch. 644 This is likely due to the fact that most transliterators found Dutch a difficult language to work with, as reported in Table 1. 3.1 Transliterator Consistency To investigate the effect of invididual transliterator consistency on system accuracy, we consider the number of Persian characters used by each transliterator on each sub-corpus, and the average number of rules generated by SYS-2 on the ten training sets derived in the ten-fold cross validation process, which are shown in Table 2. For example, when transliterating words from E7 into Persian, T3 only ever used 21 out of 32 characters available in the Persian alphabet; T7, on the other hand, used 24 different Persian characters. It is expected that an increase in number of characters or rules provides more “noise” for the automated system, hence may lead to lower accuracy. Superficially the opposite seems true for rules: the mean number of rules generated by SYS2 is much higher for the EDA7 corpus than for the A7 corpus, and yet Figure 1 shows that word accuracy is higher on the EDA7 corpus. A correlation test, however, reveals that there is no significant relationship between either the number of characters used, nor the number of rules generated, and the resulting word accuracy of SYS-2 (Spearman correlation, p = 0.09 (characters) and p = 0.98 (rules)). A better indication of “noise” in the corpus may be given by the consistency with which a transliterator applies a certain rule. For example, a large number of rules generated from a particular transliterator’s corpus may not be problematic if many of the rules get applied with a low probability. If, on the other hand, there were many rules with approximately equal probabilities, the system may have difficulty distinguishing when to apply some rules, and not others. One way to quantify this effect is to compute the self entropy of the rule distribution for each segment in the corpus for an individual. If pij is the probability of applying rule 1 ≤j ≤m when confronted with source segment i, then Hi = −∑m j=1 pij log2 pij is the entropy of the probability distribution for that rule. H is maximized when the probabilities pij are all equal, and minimized when the probabilities are very skewed (Shannon, 1948). 
As an example, consider the rules: t →<  H,0.5 >, t →<  ,0.3 > and t →< X,0.2 >; for which Ht = 0.79. The expected entropy can be used to obtain a single entropy value over the whole corpus, E = − R ∑ i=1 fi S Hi, where Hi is the entropy of the rule probabilities for segment i, R is the total number of segments, fi is the frequency with which segment i occurs at any position in all source words in the corpus, and S is the sum of all fi. The expected entropy for each transliterator is shown in Figure 5, separated by corpus. Comparison of this graph with Figure 4 shows that generally transliterators that have used rules inconsistently generate a corpus that leads to low accuracy for the systems. For example, T1 who has the lowest accuracy for all the collections in both methods, also has the highest expected entropy of rules for all the collections. For the E7 collection, the maximum accuracy of 50.0%, belongs to T5 who has the minimum expected entropy. The same applies to the D7 collection, where the maximum accuracy of 56.2% and the minimum expected entropy both belong to T3. These observations are confirmed by a statistically significant Spearman correlation between expected rule entropy and word accuracy (r = −0.54, p = 0.003). Therefore, the consistency with which transliterators employ their own internal rules in developing a corpus has a direct effect on system performance measures. 3.2 Inter-Transliterator Agreement and Perceived Difficulty Here we present various agreement proportions (PA from Section 2.2), which give a measure of consistency in the corpora across all users, as opposed to the entropy measure which gives a consistency measure for a single user. For E7, PA was 33.6%, for A7 it was 33.3% and for D7, agreement was 15.5%. In general, humans agree less than 33% of the time when transliterating English to Persian. In addition, we examined agreement among transliterators based on their perception of the task difficulty shown in Table 1. For A7, agreement among those who found the task easy was higher (22.3%) than those who found it in medium level 645 E7 D7 A7 EDA7 Char Rules Char Rules Char Rules Char Rules T1 23 523 23 623 28 330 31 1075 T2 22 487 25 550 29 304 32 956 T3 21 466 20 500 28 280 31 870 T4 23 497 22 524 28 307 30 956 T5 21 492 22 508 28 296 29 896 T6 24 493 21 563 25 313 29 968 T7 24 495 21 529 28 299 30 952 Mean 23 493 22 542 28 304 30 953 Table 2: Number of characters used and rules generated using SYS-2, per transliterator. (18.8%). PA is 12.0% for those who found the D7 collection hard to transliterate; while the six transliterators who found the E7 collection difficulty medium had PA = 30.2%. Hence, the harder participants rated the transliteration task, the lower the agreement scores tend to be for the derived corpus. Finally, in Table 3 we show word accuracy results for the two systems on corpora derived from transliterators grouped by perceived level of difficulty on A7. It is readily apparent that SYS-2 outperforms SYS-1 on the corpus comprised of human transliterations from people who saw the task as easy with both word accuracy metrics; the relative improvement of over 50% is statistically significant (paired t-test on ten-fold cross validation runs). However, on the corpus composed of transliterations that were perceived as more difficult, “Medium”, the advantage of SYS-2 is significantly eroded, but is still statistically significant for UWA. Here again, using only one transliteration, MWA, did not distinguish the performance of each system. 
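The rule-consistency measure of Section 3.1 can be computed as in the following sketch (Python; the rule probabilities and segment frequencies are assumed to be available from the training phase, base-2 logarithms are used as in the definition of H_i, and the negation is carried inside H_i):

    from math import log2

    def segment_entropy(probs):
        """Self entropy H_i of one segment's rule probability distribution."""
        return -sum(p * log2(p) for p in probs if p > 0)

    def expected_entropy(rule_probs, seg_freq):
        """Expected entropy E over a corpus: the frequency-weighted mean of H_i.

        rule_probs: dict mapping each source segment to the list of rule
                    probabilities p_ij (summing to one per segment).
        seg_freq:   dict mapping each segment to its frequency f_i in the
                    source words of the corpus; S is the sum of all f_i.
        """
        total = sum(seg_freq.values())
        return sum(seg_freq[seg] / total * segment_entropy(probs)
                   for seg, probs in rule_probs.items())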
4 Discussion We have evaluated two English to Persian transliteration systems on a variety of controlled corpora using evaluation metrics that appear in previous transliteration studies. Varying the evaluation corpus in a controlled fashion has revealed several interesting facts. We report that human agreement on the English to Persian transliteration task is about 33%. The effect that this level of disagreement on the evaluation of systems has, can be seen in Figure 4, where word accuracy is computed on corpora derived from single transliterators. Accuracy can vary by up to 30% in absolute terms depending on the transliterator chosen. To our knowledge, this is the first paper E7 D7 A7 EDA7 Corpus 0.0 0.2 0.4 0.6 Entropy T1 T2 T3 T4 T5 T6 T7 Figure 5: Entropy of the generated segments based on the collections created by different transliterators. to report human agreement, and examine its effects on transliteration accuracy. In order to alleviate some of these effects on the stability of word accuracy measures across corpora, we recommend that at least four transliterators are used to construct a corpus. Figure 3 shows that constructing a corpus with four or more transliterators, the range of possible word accuracies achieved is less than that of using fewer transliterators. Some past studies do not use more than a single target word for every source word in the corpus (Bilac and Tanaka, 2005; Oh and Choi, 2006). Our results indicate that it is unlikely that these results would translate onto a corpus other than the one used in these studies, except in rare cases where human transliterators are in 100% agreement for a given language pair. Given the nature of the English language, an English corpus can contain English words from a variety of different origins. In this study we have used English words from an Arabic and Dutch origin to show that word accuracy of the systems can vary by up to 25% (in absolute terms) depending on the origin of English words in the corpus, as demonstrated in Figure 1. In addition to computing agreement, we also in646 Relative Perception SYS-1 SYS-2 Improvement (%) UWA Easy 33.4 55.4 54.4 (p < 0.001) Medium 44.6 48.4 8.52 (p < 0.001) MWA Easy 23.2 36.2 56.0 (p < 0.001) Medium 30.6 37.4 22.2 (p = 0.038) Table 3: System performance when A7 is split into sub-corpora based on transliterators perception of the task (Easy or Medium). vestigated the transliterator’s perception of difficulty of the transliteration task with the ensuing word accuracy of the systems. Interestingly, when using corpora built from transliterators that perceive the task to be easy, there is a large difference in the word accuracy between the two systems, but on corpora built from transliterators who perceive the task to be more difficult, the gap between the systems narrows. Hence, a corpus applied for evaluation of transliteration should either be made carefully with transliterators with a variety of backgrounds, or should be large enough and be gathered from various sources so as to simulate different expectations of its expected non-homogeneous users. The self entropy of rule probability distributions derived by the automated transliteration system can be used to measure the consistency with which individual transliterators apply their own rules in constructing a corpus. It was demonstrated that when systems are evaluated on corpora built by transliterators who are less consistent in their application of transliteration rules, word accuracy is reduced. 
Given the large variations in system accuracy that are demonstrated by the varying corpora used in this study, we recommend that extreme care be taken when constructing corpora for evaluating transliteration systems. Studies should also give details of their corpora that would allow any of the effects observed in this paper to be taken into account. Acknowledgments This work was supported in part by the Australian government IPRS program (SK). References Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross-language information retrieval. In Conference on Information and Knowledge Management, pages 139–146. Yaser Al-Onaizan and Kevin Knight. 2002. Machine transliteration of names in Arabic text. In Proceedings of the ACL02 workshop on Computational approaches to semitic languages, pages 1–13. Slaven Bilac and Hozumi Tanaka. 2005. Direct combination of spelling and pronunciation information for robust backtransliteration. In Conference on Computational Linguistics and Intelligent Text Processing, pages 413–424. Patrick A. V. Hall and Geoff R. Dowling. 1980. Approximate string matching. ACM Computing Survey, 12(4):381–402. Sung Young Jung, Sung Lim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Conference on Computational Linguistics, pages 383–389. Sarvnaz Karimi, Andrew Turpin, and Falk Scholer. 2006. English to Persian transliteration. In String Processing and Information Retrieval, pages 255–266. Krister Lind´en. 2005. Multilingual modeling of cross-lingual spelling variants. Information Retrieval, 9(3):295–310. Eun Young Mun and Alexander Von Eye, 2004. Analyzing Rater Agreement: Manifest Variable Methods. Lawrence Erlbaum Associates. Jong-Hoon Oh and Key-Sun Choi. 2006. An ensemble of transliteration models for information retrieval. Information Processing Management, 42(4):980–1002. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In The 40th Annual Meeting of Association for Computational Linguistics, pages 311–318. Ari Pirkola, Jarmo Toivonen, Heikki Keskustalo, and Kalervo J¨arvelin. 2006. FITE-TRT: a high quality translation technique for OOV words. In Proceedings of the 2006 ACM Symposium on Applied Computing, pages 1043–1049. Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379– 423. Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-language applications. In ACM SIGIR Conference on Research and Development on Information Retrieval, pages 365–366. Dmitry Zelenko and Chinatsu Aone. 2006. Discriminative methods for transliteration. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 612–617. 647
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 648–655, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Collapsed Consonant and Vowel Models: New Approaches for English-Persian Transliteration and Back-Transliteration Sarvnaz Karimi Falk Scholer Andrew Turpin School of Computer Science and Information Technology RMIT University, GPO Box 2476V, Melbourne 3001, Australia {sarvnaz,fscholer,aht}@cs.rmit.edu.au Abstract We propose a novel algorithm for English to Persian transliteration. Previous methods proposed for this language pair apply a word alignment tool for training. By contrast, we introduce an alignment algorithm particularly designed for transliteration. Our new model improves the English to Persian transliteration accuracy by 14% over an n-gram baseline. We also propose a novel back-transliteration method for this language pair, a previously unstudied problem. Experimental results demonstrate that our algorithm leads to an absolute improvement of 25% over standard transliteration approaches. 1 Introduction Translation of a text from a source language to a target language requires dealing with technical terms and proper names. These occur in almost any text, but rarely appear in bilingual dictionaries. The solution is the transliteration of such out-ofdictionary terms: a word from the source language is transformed to a word in the target language, preserving its pronunciation. Recovering the original word from the transliterated target is called backtransliteration. Automatic transliteration is important for many different applications, including machine translation, cross-lingual information retrieval and cross-lingual question answering. Transliteration methods can be categorized into grapheme-based (AbdulJaleel and Larkey, 2003; Li et al., 2004), phoneme-based (Knight and Graehl, 1998; Jung et al., 2000), and combined (Bilac and Tanaka, 2005) approaches. Grapheme-based methods perform a direct orthographical mapping between source and target words, while phonemebased approaches use an intermediate phonetic representation. Both grapheme- or phoneme-based methods usually begin by breaking the source word into segments, and then use a source segment to target segment mapping to generate the target word. The rules of this mapping are obtained by aligning already available transliterated word pairs (training data); alternatively, such rules can be handcrafted. From this perspective, past work is roughly divided into those methods which apply a word alignment tool such as GIZA++ (Och and Ney, 2003), and approaches that combine the alignment step into their main transliteration process. Transliteration is language dependent, and methods that are effective for one language pair may not work as well for another. In this paper, we investigate the English-Persian transliteration problem. Persian (Farsi) is an Indo-European language, written in Arabic script from right to left, but with an extended alphabet and different pronunciation from Arabic. Our previous approach to EnglishPersian transliteration introduced the graphemebased collapsed-vowel method, employing GIZA++ for source to target alignment (Karimi et al., 2006). We propose a new transliteration approach that extends the collapsed-vowel method. 
To meet Persian language transliteration requirements, we also propose a novel alignment algorithm in our training stage, which makes use of statistical information of 648 the corpus, transliteration specifications, and simple language properties. This approach handles possible consequences of elision (omission of sounds to make the word easier to read) and epenthesis (adding extra sounds to a word to make it fluent) in written target words that happen due to the change of language. Our method shows an absolute accuracy improvement of 14.2% over an n-gram baseline. In addition, we investigate the problem of backtransliteration from Persian to English. To our knowledge, this is the first report of such a study. There are two challenges in Persian to English transliteration that makes it particularly difficult. First, written Persian omits short vowels, while only long vowels appear in texts. Second, monophthongization (changing diphthongs to monophthongs) is popular among Persian speakers when adapting foreign words into their language. To take these into account, we propose a novel method to form transformation rules by changing the normal segmentation algorithm. We find that this method significantly improves the Persian to English transliteration effectiveness, demonstrating an absolute performance gain of 25.1% over standard transliteration approaches. 2 Background In general, transliteration consists of a training stage (running on a bilingual training corpus), and a generation – also called testing – stage. The training step of a transliteration develops transformation rules mapping characters in the source to characters in the target language using knowledge of corresponding characters in transliterated pairs provided by an alignment. For example, for the source-target word pair (pat, H   H), an alignment may map “p” to “ H  ” and “a” to “ ”, and the training stage may develop the rule pa → , with “ ” as the transliteration of “a” in the context of “pa”. The generation stage applies these rules on a segmented source word, transforming it to a word in the target language. Previous work on transliteration either employs a word alignment tool (usually GIZA++), or develops specific alignment strategies. Transliteration methods that use GIZA++ as their word pair aligner (AbdulJaleel and Larkey, 2003; Virga and Khudanpur, 2003; Karimi et al., 2006) have based their work on the assumption that the provided alignments are reliable. Gao et al. (2004) argue that precise alignment can improve transliteration effectiveness, experimenting on English-Chinese data and comparing IBM models (Brown et al., 1993) with phonemebased alignments using direct probabilities. Other transliteration systems focus on alignment for transliteration, for example the joint sourcechannel model suggested by Li et al. (2004). Their method outperforms the noisy channel model in direct orthographical mapping for English-Chinese transliteration. Li et al. also find that graphemebased methods that use the joint source-channel model are more effective than phoneme-based methods due to removing the intermediate phonetic transformation step. Alignment has also been investigated for transliteration by adopting Covington’s algorithm on cognate identification (Covington, 1996); this is a character alignment algorithm based on matching or skipping of characters, with a manually assigned cost of association. Covington considers consonant to consonant and vowel to vowel correspondence more valid than consonant to vowel. 
Kang and Choi (2000) revise this method for transliteration where a skip is defined as inserting a null in the target string when two characters do not match based on their phonetic similarities or their consonant and vowel nature. Oh and Choi (2002) revise this method by introducing binding, in which many to many correspondences are allowed. However, all of these approaches rely on the manually assigned penalties that need to be defined for each possible matching. In addition, some recent studies investigate discriminative transliteration methods (Klementiev and Roth, 2006; Zelenko and Aone, 2006) in which each segment of the source can be aligned to each segment of the target, where some restrictive conditions based on the distance of the segments and phonetic similarities are applied. 3 The Proposed Alignment Approach We propose an alignment method based on segment occurrence frequencies, thereby avoiding predefined matching patterns and penalty assignments. We also apply the observed tendency of aligning consonants 649 to consonants, and vowels to vowels, as a substitute for phonetic similarities. Many to many, one to many, one to null and many to one alignments can be generated. 3.1 Formulation Our alignment approach consists of two steps: the first is based on the consonant and vowel nature of the word’s letters, while the second uses a frequency-based sequential search. Definition 1 A bilingual corpus B is the set {(S, T)}, where S = s1..sℓ, T = t1..tm, si is a letter in the source language alphabet, and tj is a letter in the target language alphabet. Definition 2 Given some word, w, the consonantvowel sequence p = (C|V )+ for w is obtained by replacing each consonant with C and each vowel with V . Definition 3 Given some consonant-vowel sequence, p, a reduced consonant-vowel sequence q replaces all runs of C’s with C, and all runs of V ’s with V; hence q = q′|q′′, q′ = V(CV)∗(C|ǫ) and q′′ = C(VC)∗(V|ǫ). For each natural language word, we can determine the consonant-vowel sequence (p) from which the reduced consonant-vowel sequence (q) can be derived, giving a common notation between two different languages, no matter which script either of them use. To simplify, semi-vowels and approximants (sounds intermediate between consonants and vowels, such as “w” and “y” in English) are treated according to their target language counterparts. In general, for all the word pairs (S, T) in a corpus B, an alignment can be achieved using the function f : B →A; (S, T) 7→( ˆS, ˆT , r). The function f maps the word pair (S, T) ∈B to the triple ( ˆS, ˆT, r) ∈A where ˆS and ˆT are substrings of S and T respectively. The frequency of this correspondence is denoted by r. A represents a set of substring alignments, and we use a per word alignment notation of ae2p when aligning English to Persian and ap2e for Persian to English. 3.2 Algorithm Details Our algorithm consists of two steps. Step 1 (Consonant-Vowel based) For any word pair (S, T) ∈B, the corresponding reduced consonant-vowel sequences, qS and qT , are generated. If the sequences match, then the aligned consonant clusters and vowel sequences are added to the alignment set A. If qS does not match with qT , the word pair remains unaligned in Step 1. The assumption in this step is that transliteration of each vowel sequence of the source is a vowel sequence in the target language, and similarly for consonants. 
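A sketch of Definitions 2 and 3 and of Step 1 is given below (Python; the vowel inventory for each language is passed in as a set, with semi-vowels and approximants classified as described above, and the actual implementation may differ in detail):

    def cv_sequence(word, vowels):
        """Definition 2: the consonant-vowel sequence p = (C|V)+."""
        return "".join("V" if ch in vowels else "C" for ch in word)

    def reduced_cv_sequence(word, vowels):
        """Definition 3: collapse every run of C's or V's to a single symbol."""
        p = cv_sequence(word, vowels)
        return "".join(c for i, c in enumerate(p) if i == 0 or c != p[i - 1])

    def runs(word, vowels):
        """Split a word into its maximal consonant clusters and vowel runs."""
        groups = []
        for ch in word:
            is_vowel = ch in vowels
            if groups and groups[-1][1] == is_vowel:
                groups[-1][0] += ch
            else:
                groups.append([ch, is_vowel])
        return [text for text, _ in groups]

    def step1_align(source, target, src_vowels, tgt_vowels):
        """Step 1: if the reduced sequences match, pair the corresponding
        clusters and runs; otherwise return None and defer to Step 2."""
        if (reduced_cv_sequence(source, src_vowels)
                != reduced_cv_sequence(target, tgt_vowels)):
            return None
        return list(zip(runs(source, src_vowels), runs(target, tgt_vowels)))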
However, consonants do not always map to consonants, or vowels to vowels (for example, the English letter “s” may be written as “ € ” in Persian which consists of one vowel and one consonant). Alternatively, they might be omitted altogether, which can be specified as the null string, ε. We therefore require a second step. Step 2 (Frequency based) For most natural languages, the maximum length of corresponding phonemes of each grapheme is a digraph (two letters) or at most a trigraph. Hence, alignment can be defined as a search problem that seeks for units with a maximum length of two or three in both strings that need to be aligned. In our approach, we search based on statistical occurrence data available from Step 1. In Step 2, only those words that remain unaligned at the end of Step 1 need to be considered. For each pair of words (S, T), matching proceeds from left to right, examining one of the three possible options of transliteration: single letter to single letter, digraph to single letter and single letter to digraph. Trigraphs are unnecessary in alignment as they can be effectively captured during transliteration generation, as we explain below. We define four different valid alignments for the source (S = s1s2 . . . si . . . sl) and target (T = t1t2 . . . tj . . . tm) strings: (si, tj, r), (sisi+1, tj, r), (si, tjtj+1, r) and (si, ε, r). These four options are considered as the only possible valid alignments, and the most frequently occurring alignment (highest r) is chosen. These frequencies are dynamically updated after successfully aligning a pair. For exceptional situations, where there is no character in the target string to match with the source character si, it is aligned with the empty string. It is possible that none of the four valid alignment 650 options have occurred previously (that is, r = 0 for each). This situation can arise in two ways: first, such a tuple may simply not have occurred in the training data; and, second, the previous alignment in the current string pair may have been incorrect. To account for this second possibility, a partial backtracking is considered. Most misalignments are derived from the simultaneous comparison of alignment possibilities, giving the highest priority to the most frequent. For example if S=bbc, T= H . € and A = {(b, H . ,100),(bb, H . ,40),(c, €,60)}, starting from the initial position s1 and t1, the first alignment choice is (b, H . ,101). However immediately after, we face the problem of aligning the second “b”. There are two solutions: inserting ε and adding the triple (b,ε,1), or backtracking the previous alignment and substituting that with the less frequent but possible alignment of (bb, H . ,41). The second solution is a better choice as it adds less ambiguous alignments containing ε. At the end, the alignment set is updated as A = {(b, H . ,100),(bb, H . ,41),(c, €,61)}. In case of equal frequencies, we check possible subsequent alignments to decide on which alignment should be chosen. For example, if (b, H . ,100) and (bb, H . ,100) both exist as possible options, we consider if choosing the former leads to a subsequent ε insertion. If so, we opt for the latter. At the end of a string, if just one character in the target string remains unaligned while the last alignment is a ε insertion, that final alignment will be substituted for ε. 
This usually happens when the alignment of final characters is not yet registered in the alignment set, mainly because Persian speakers tend to transliterate the final vowels to consonants to preserve their existence in the word. For example, in the word “Jose” the final “e” might be transliterated to “ è” which is a consonant (“h”) and therefore is not captured in Step 1. Backparsing The process of aligning words explained above can handle words with already known components in the alignment set A (the frequency of occurrence is greater than zero). However, when this is not the case, the system may repeatedly insert ε while part or all of the target characters are left intact (unsuccessful alignment). In such cases, processing the source and target backwards helps to find the problematic substrings: backparsing. The poorly aligned substrings of the source and target are taken as new pairs of strings, which are then reintroduced into the system as new entries. Note that they themselves are not subject to backparsing. Most strings of repeating nulls can be broken up this way, and in the worst case will remain as one tuple in the alignment set. To clarify, consider the example given in Figure 1. For the word pair (patricia, H   H P ø  € ø ), where an association between “c” and “  €” is not yet registered. Forward parsing, as shown in the figure, does not resolve all target characters; after the incorrect alignment of “c” with “ε”, subsequent characters are also aligned with null, and the substring “  € ø ” remains intact. Backward parsing, shown in the next line of the figure, is also not successful. It is able to correctly align the last two characters of the string, before generating repeated null alignments. Therefore, the central region — substrings of the source and target which remained unaligned plus one extra aligned segment to the left and right — is entered as a new pair to the system (ici, ø  € ø), as shown in the line labelled Input 2 in the figure. This new input meets Step 1 requirements, and is aligned successfully. The resulting tuples are then merged with the alignment set A. An advantage of our backparsing strategy is that it takes care of casual transliterations happening due to elision and epenthesis (adding or removing extra sounds). It is not only in translation that people may add extra words to make fluent target text; for transliteration also, it is possible that spurious characters are introduced for fluency. However, this often follows patterns, such as adding vowels to the target form. These irregularities are consistently covered in the backparsing strategy, where they remain connected to their previous character. 4 Transliteration Method Transliteration algorithms use aligned data (the output from the alignment process, ae2p or ap2e alignment tuples) for training to derive transformation rules. These rules are then used to generate a target word T given a new input source word S. 
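Before describing the transliteration models, the core of the frequency-based Step 2 from Section 3.2 can be sketched as follows (Python; the partial backtracking, tie-breaking and backparsing refinements are omitted for brevity, and counts stands for the alignment set A represented as a dictionary from substring pairs to their frequencies r):

    def step2_align(source, target, counts):
        """Greedily choose, left to right, the most frequent of the four
        valid alignment options for each source position, updating the
        frequencies after each successful choice."""
        alignment, i, j = [], 0, 0
        while i < len(source):
            candidates = [
                (source[i],       target[j:j + 1]),  # single letter to single letter
                (source[i:i + 2], target[j:j + 1]),  # digraph to single letter
                (source[i],       target[j:j + 2]),  # single letter to digraph
                (source[i],       ""),               # epsilon insertion
            ]
            best = max(candidates, key=lambda c: counts.get(c, 0))
            alignment.append(best)
            counts[best] = counts.get(best, 0) + 1
            i += len(best[0])
            j += len(best[1])
        return alignment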
651 Initial alignment set: A = {(p, H  ,42),(a, ,320),(a,ε,99),(a, ø,10),(a, ø,35),(r, P,200),(i, ø,60),(i,ε,5),(c, €,80),(c, h  ,25),(t,  H,51)} Input: (patricia, H   H P ø  € ø ) qS = CVCVCV qT = CVCV Step 1: qS ̸= qT Forward alignment: (p, H  ,43), (a,ε,100), (t,  H,52), (r, P,201), (i, ø,61), (c,ε,1), (i,ε,6), (a,ε,100) Backward alignment: (a, ,321), (i, ø,61), (c,ε,1), (i,ε,6), (r,ε,1), (t,ε,1), (a,ε,100), (p,ε,1) Input 2: (ici, ø  € ø) qS = VCV qT = VCV Step 1: (i, ø,61),(c,  €,1), (i, ø,61) Final Alignment: ae2p = ((p, H  ),(a,ε),(t,  H),((r, P),(i, ø),(c,  €),(i, ø),(a, )) Updated alignment set: A = {(p, H  ,43),(a, ,321),(a,ε,100),(a, ø,10),(a, ø,35),(r, P,201),(i, ø,62),(i,ε,5),(c, €,80),(c, h  ,25),(c,  €,1),(t,  H,52)} Figure 1: A backparsing example. Note middle tuples in forward and backward parsings are not merged in A till the alignment is successfully completed. Method Intermediate Sequence Segment(Pattern) Backoff Bigram N/A #s, sh, he, el, ll, le, ey s,h,e,l,e,y CV-MODEL1 CCVCCV sh(CC), hel(CVC), ll(CC), lley(CV) s(C), h(C), e(V), l(C), e(V), y(V) CV-MODEL2 CCVCCV sh(CC), e(CVC), ll(CC), ey(CV) As Above. CV-MODEL3 CVCV #sh(C), e(CVC), ll(C), ey(CV) sh(C), s(C), h(C), e(V), l(C), e(V), y(V) Figure 2: An example of transliteration for the word pair (shelley,  € È ø). Underlined characters are actually transliterated for each segment. 4.1 Baseline Most transliteration methods reported in the literature — either grapheme- or phoneme-based — use n-grams (AbdulJaleel and Larkey, 2003; Jung et al., 2000). The n-gram-based methods differ mainly in the way that words are segmented, both for training and transliteration generation. A simple ngram based method works only on single characters (unigram) and transformation rules are defined as si →tj, while an advanced method may take the surrounding context into account (Jung et al., 2000). We found that using one past symbol (bigram model) works better than other n-gram based methods for English to Persian transliteration (Karimi et al., 2006). Our collapsed-vowel methods consider language knowledge to improve the string segmentation of n-gram techniques (Karimi et al., 2006). The process begins by generating the consonant-vowel sequence (Definition 2) of a source word. For example, the word “shelley” is represented by the sequence p = CCV CCV V . Then, following the collapsed vowel concept (Definition 3), this sequence becomes “CCVCCV”. These approaches, which we refer to as CV-MODEL1 and CV-MODEL2 respectively, partition these sequences using basic patterns (C and V) and main patterns (CC, CVC, VC and CV). In the training phase, transliteration rules are formed according to the boundaries of the defined patterns and their aligned counterparts (based on ae2p or ap2e) in the target language word T. Similar segmentation is applied during the transliteration generation stage. 4.2 The Proposed Transliteration Approach The restriction on the context length of consonants imposed by CV-MODEL1 and CV-MODEL2 makes the transliteration of consecutive consonants mapping to a particular character in the target language difficult. For example, “ght” in English maps to only one character in Persian: “  H”. Dealing with languages which have different alphabets, and for which the number of characters in their alphabets also differs (such as 26 and 32 for English and Persian), increases the possibility of facing these cases, especially when moving from the language with smaller alphabet size to the one with a larger size. 
To more effectively address this, we propose a collapsed consonant and vowel method (CV-MODEL3) which uses the full reduced sequence (Definition 3), rather than simply reduced vowel sequences. Although recognition of consonant segments is based on the vowel positions, consonants are considered as independent blocks in each string. Conversely, vowels are transliterated in the context of surrounding 652 consonants, as demonstrated in the example below. A special symbol is used to indicate the start and/or end of each word if the beginning and end of the word is a consonant respectively. Therefore, for the words starting or ending with consonants, the symbol “#” is added, which is treated as a consonant and therefore grouped in the consonant segment. An example of applying this technique is shown in Figure 2 for the string “shelley”. In this example, “sh” and “ll” are treated as two consonant segments, where the transliteration of individual characters inside a segment is dependent on the other members but not the surrounding segments. However, this is not the case for vowel sequences which incorporate a level of knowledge about any segment neighbours. Therefore, for the example “shelley”, the first segment is “sh” which belongs to C pattern. During transliteration, if “#sh” does not appear in any existing rules, a backoff splits the segment to smaller segments: “#” and “sh”, or “s”and “h”. The second segment contains the vowel “e”. Since this vowel is surrounded by consonants, the segment pattern is CVC. In this case, backoff only applies for vowels as consonants are supposed to be part of their own independent segments. That is, if search in the rules of pattern CVC was unsuccessful, it looks for “e” in V pattern. Similarly, segmentation for this word continues with “ll” in C pattern and “ey” in CV pattern (“y” is an approximant, and therefore considered as a vowel when transliterating English to Persian). 4.3 Rules for Back-Transliteration Written Persian ignores short vowels, and only long vowels appear in text. This causes most English vowels to disappear when transliterating from English to Persian; hence, these vowels must be restored during back-transliteration. When the initial transliteration happens from English to Persian, the transliterator (whether human or machine) uses the rules of transliterating from English as the source language. Therefore, transliterating back to the original language should consider the original process, to avoid losing essential information. In terms of segmentation in collapsed-vowel models, different patterns define segment boundaries in which vowels are necessary clues. Although we do not have most of these vowels in the transliteration generation phase, it is possible to benefit from their existence in the training phase. For example, using CVMODEL3, the pair ( Ð P ¸ È,merkel) with qS=C and ap2e=(( Ð,me),( P,r),( ¸,ke),( È,l)), produces just one transformation rule “ Ð P ¸ È →merkel” based on a C pattern. That is, the Persian string contains no vowel characters. If, during the transliteration generation phase, a source word “ É¿ QÓ” (S= Ð P ¸ È) is entered, there would be one and only one output of “merkel”, while an alternative such as “mercle” might be required instead. To avoid overfitting the system by long consonant clusters, we perform segmentation based on the English q sequence, but categorise the rules based on their Persian segment counterparts. 
That is, for the pair ( Ð P ¸ È,merkel) with ae2p=((m, Ð),(e,ε),(r, P),(k, ¸),(e,ε),(l, È)), these rules are generated (with category patterns given in parenthesis): Ð →m (C), P ¸ →rk (C), È →l (C), Ð P ¸ →merk (C), P ¸ È →rkel (C). We call the suggested training approach reverse segmentation. Reverse segmentation avoids clustering all the consonants in one rule, since many English words might be transliterated to all-consonant Persian words. 4.4 Transliteration Generation and Ranking In the transliteration generation stage, the source word is segmented following the same process of segmenting words in training stage, and a probability is computed for each generated target word: P(T|S) = |K| Y k=1 P( ˆTk| ˆSk), where |K| is the number of distinct source segments. P( ˆTk| ˆSk) is the probability of the ˆSk→ˆTk transformation rule, as obtained from the training stage: P( ˆTk| ˆSk) = frequency of ˆSk →ˆTk frequency of ˆSk , where frequency of ˆSk is the number of its occurrence in the transformation rules. We apply a tree structure, following Dijkstra’s α-shortest path, to generate the α highest scoring (most probable) transliterations, ranked based on their probabilities. 653 Corpus Baseline CV-MODEL3 Bigram CV-MODEL1 CV-MODEL2 GIZA++ New Alignment Small Corpus TOP-1 58.0 (2.2) 61.7 (3.0) 60.0 (3.9) 67.4 (5.5) 72.2 (2.2) TOP-5 85.6 (3.4) 80.9 (2.2) 86.0 (2.8) 90.9 (2.1) 92.9 (1.6) TOP-10 89.4 (2.9) 82.0 (2.1) 91.2 (2.5) 93.8 (2.1) 93.5 (1.7) Large Corpus TOP-1 47.2 (1.0) 50.6 (2.5) 47.4 (1.0) 55.3 (0.8) 59.8 (1.1) TOP-5 77.6 (1.4) 79.8 (3.4) 79.2 (1.0) 84.5 (0.7) 85.4 (0.8) TOP-10 83.3 (1.5) 84.9 (3.1) 87.0 (0.9) 89.5 (0.4) 92.6 (0.7) Table 1: Mean (standard deviation) word accuracy (%) for English to Persian transliteration. 5 Experiments To investigate the effectiveness of CV-MODEL3 and the new alignment approach on transliteration, we first compare CV-MODEL3 with baseline systems, employing GIZA++ for alignment generation during system training. We then evaluate the same systems, using our new alignment approach. Backtransliteration is also investigated, applying both alignment systems and reverse segmentation. In all our experiments, we used ten-fold cross-validation. The statistical significance of different performance levels are evaluated using a paired t-test. The notation TOP-X indicates the first X transliterations prodcued by the automatic methods. We used two corpora of word pairs in English and Persian: the first, called Large, contains 16,670 word pairs; the second, Small, contains 1,857 word pairs, and are described fully in our previous paper (Karimi et al., 2006). The results of transliteration experiments are evaluated using word accuracy (Kang and Choi, 2000) which measures the proportion of transliterations that are correct out of the test corpus. 5.1 Accuracy of Transliteration Approaches The results of our experiments for transliterating English to Persian, using GIZA++ for alignment generation, are shown in Table 1. CV-MODEL3 outperforms all three baseline systems significantly in TOP-1 and TOP-5 results, for both Persian corpora. TOP-1 results were improved by 9.2% to 16.2% (p<0.0001, paired t-test) relative to the baseline systems for the Small corpus. For the Large corpus, CV-MODEL3 was 9.3% to 17.2% (p<0.0001) more accurate relative to the baseline systems. The results of applying our new alignment algorithm are presented in the last column of Table 1, comparing word accuracy of CV-MODEL3 using GIZA++ and the new alignment for English to Persian transliteration. 
Transliteration accuracy increases in TOP-1 for both corpora (a relative increase of 7.1% (p=0.002) for the Small corpus and 8.1% (p<0.0001) for the Large corpus). The TOP-10 results of the Large corpus again show a relative increase of 3.5% (p=0.004). Although the new alignment also increases the performance for TOP-5 and TOP-10 of the Small corpus, these increases are not statistically significant. 5.2 Accuracy of Back-Transliteration The results of back-transliteration are shown in Table 2. We first consider performance improvements gained from using CV-MODEL3: CV-MODEL3 using GIZA++ outperforms Bigram, CV-MODEL1 and CVMODEL2 by 12.8% to 40.7% (p<0.0001) in TOP1 for the Small corpus. The corresponding improvement for the Large corpus is 12.8% to 74.2% (p<0.0001). The fifth column of the table shows the performance increase when using CV-MODEL3 with the new alignment algorithm: for the Large corpus, the new alignment approach gives a relative increase in accuracy of 15.5% for TOP-5 (p<0.0001) and 10% for TOP-10 (p=0.005). The new alignment method does not show a significant difference using CVMODEL3 for the Small corpus. The final column of Table 2 shows the performance of the CV-MODEL3 with the new reverse segmentation approach. Reverse segmentation leads to a significant improvement over the new alignment approach in TOP-1 results for the Small corpus by 40.1% (p<0.0001), and 49.4% (p<0.0001) for the Large corpus. 654 Corpus Bigram CV-MODEL1 CV-MODEL2 CV-MODEL3 GIZA++ New Alignment Reverse Small Corpus TOP-1 23.1 (2.0) 28.8 (4.6) 24.9 (2.8) 32.5 (3.6) 34.4 (3.8) 48.2 (2.9) TOP-5 40.8 (3.1) 51.0 (4.8) 52.9 (3.4) 56.0 (3.5) 54.8 (3.7) 68.1 (4.9) TOP-10 50.1 (4.1) 58.2 (5.3) 63.2 (3.1) 64.2 (3.2) 63.8 (3.6) 75.7 (4.2) Large Corpus TOP-1 10.1 (0.6) 15.6 (1.0) 12.0 (1.0) 17.6 (0.8) 18.0 (1.2) 26.9 (0.7) TOP-5 20.6 (1.2) 31.7 (0.9) 28.0 (0.7) 36.2 (0.5) 41.8 (1.2) 41.3 (1.7) TOP-10 27.2 (1.0) 40.1 (1.1) 37.4 (0.8) 46.0 (0.8) 50.6 (1.1) 49.3 (1.6) Table 2: Comparison of mean (standard deviation) word accuracy (%) for Persian to English transliteration. 6 Conclusions We have presented a new algorithm for English to Persian transliteration, and a novel alignment algorithm applicable for transliteration. Our new transliteration method (CV-MODEL3) outperforms the previous approaches for English to Persian, increasing word accuracy by a relative 9.2% to 17.2% (TOP-1), when using GIZA++ for alignment in training. This method shows further 7.1% to 8.1% increase in word accuracy (TOP-1) with our new alignment algorithm. Persian to English back-transliteration is also investigated, with CV-MODEL3 significantly outperforming other methods. Enriching this model with a new reverse segmentation algorithm gives rise to further accuracy gains in comparison to directly applying English to Persian methods. In future work we will investigate whether phonetic information can help refine our CV-MODEL3, and experiment with manually constructed rules as a baseline system. Acknowledgments This work was supported in part by the Australian government IPRS program (SK) and an ARC Discovery Project Grant (AT). References Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross language information retrieval. In Conference on Information and Knowledge Management, pages 139–146. Slaven Bilac and Hozumi Tanaka. 2005. Direct combination of spelling and pronunciation information for robust backtransliteration. 
In Conferences on Computational Linguistics and Intelligent Text Processing, pages 413–424. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computional Linguistics, 19(2):263–311. Michael A. Covington. 1996. An algorithm to align words for historical comparison. Computational Linguistics, 22(4):481–496. Wei Gao, Kam-Fai Wong, and Wai Lam. 2004. Improving transliteration with precise alignment of phoneme chunks and using contextual features. In Asia Information Retrieval Symposium, pages 106–117. Sung Young Jung, Sung Lim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Conference on Computational Linguistics, pages 383–389. Byung-Ju Kang and Key-Sun Choi. 2000. Automatic transliteration and back-transliteration by decision tree learning. In Conference on Language Resources and Evaluation, pages 1135–1411. Sarvnaz Karimi, Andrew Turpin, and Falk Scholer. 2006. English to Persian transliteration. In String Processing and Information Retrieval, pages 255–266. Alexandre Klementiev and Dan Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Association for Computational Linguistics, pages 817–824. Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4):599–612. Haizhou Li, Min Zhang, and Jian Su. 2004. A joint sourcechannel model for machine transliteration. In Association for Computational Linguistics, pages 159–166. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Jong-Hoon Oh and Key-Sun Choi. 2002. An English-Korean transliteration model using pronunciation and contextual rules. In Conference on Computational Linguistics. Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-language applications. In ACM SIGIR Conference on Research and Development on Information Retrieval, pages 365–366. Dmitry Zelenko and Chinatsu Aone. 2006. Discriminative methods for transliteration. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing., pages 612–617. 655
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 656–663, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Alignment-Based Discriminative String Similarity Shane Bergsma and Grzegorz Kondrak Department of Computing Science University of Alberta Edmonton, Alberta, Canada, T6G 2E8 {bergsma,kondrak}@cs.ualberta.ca Abstract A character-based measure of similarity is an important component of many natural language processing systems, including approaches to transliteration, coreference, word alignment, spelling correction, and the identification of cognates in related vocabularies. We propose an alignment-based discriminative framework for string similarity. We gather features from substring pairs consistent with a character-based alignment of the two strings. This approach achieves exceptional performance; on nine separate cognate identification experiments using six language pairs, we more than double the precision of traditional orthographic measures like Longest Common Subsequence Ratio and Dice’s Coefficient. We also show strong improvements over other recent discriminative and heuristic similarity functions. 1 Introduction String similarity is often used as a means of quantifying the likelihood that two pairs of strings have the same underlying meaning, based purely on the character composition of the two words. Strube et al. (2002) use Edit Distance as a feature for determining if two words are coreferent. Taskar et al. (2005) use French-English common letter sequences as a feature for discriminative word alignment in bilingual texts. Brill and Moore (2000) learn misspelled-word to correctly-spelled-word similarities for spelling correction. In each of these examples, a similarity measure can make use of the recurrent substring pairings that reliably occur between words having the same meaning. Across natural languages, these recurrent substring correspondences are found in word pairs known as cognates: words with a common form and meaning across languages. Cognates arise either from words in a common ancestor language (e.g. light/Licht, night/Nacht in English/German) or from foreign word borrowings (e.g. trampoline/toranporin in English/Japanese). Knowledge of cognates is useful for a number of applications, including sentence alignment (Melamed, 1999) and learning translation lexicons (Mann and Yarowsky, 2001; Koehn and Knight, 2002). We propose an alignment-based, discriminative approach to string similarity and evaluate this approach on cognate identification. Section 2 describes previous approaches and their limitations. In Section 3, we explain our technique for automatically creating a cognate-identification training set. A novel aspect of this set is the inclusion of competitive counter-examples for learning. Section 4 shows how discriminative features are created from a characterbased, minimum-edit-distance alignment of a pair of strings. In Section 5, we describe our bitext and dictionary-based experiments on six language pairs, including three based on non-Roman alphabets. In Section 6, we show significant improvements over traditional approaches, as well as significant gains over more recent techniques by Ristad and Yianilos (1998), Tiedemann (1999), Kondrak (2005), and Klementiev and Roth (2006). 2 Related Work String similarity is a fundamental concept in a variety of fields and hence a range of techniques 656 have been developed. 
We focus on approaches that have been applied to words, i.e., uninterrupted sequences of characters found in natural language text. The most well-known measure of the similarity of two strings is the Edit Distance or Levenshtein Distance (Levenshtein, 1966): the number of insertions, deletions and substitutions required to transform one string into another. In our experiments, we use Normalized Edit Distance (NED): Edit Distance divided by the length of the longer word. Other popular measures include Dice’s Coefficient (DICE) (Adamson and Boreham, 1974), and the length-normalized measures Longest Common Subsequence Ratio (LCSR) (Melamed, 1999), and Longest Common Prefix Ratio (PREFIX) (Kondrak, 2005). These baseline approaches have the important advantage of not requiring training data. We can also include in the non-learning category Kondrak (2005)’s Longest Common Subsequence Formula (LCSF), a probabilistic measure designed to mitigate LCSR’s preference for shorter words. Although simple to use, the untrained measures cannot adapt to the specific spelling differences between a pair of languages. Researchers have therefore investigated adaptive measures that are learned from a set of known cognate pairs. Ristad and Yianilos (1998) developed a stochastic transducer version of Edit Distance learned from unaligned string pairs. Mann and Yarowsky (2001) saw little improvement over Edit Distance when applying this transducer to cognates, even when filtering the transducer’s probabilities into different weight classes to better approximate Edit Distance. Tiedemann (1999) used various measures to learn the recurrent spelling changes between English and Swedish, and used these changes to re-weight LCSR to identify more cognates, with modest performance improvements. Mulloni and Pekar (2006) developed a similar technique to improve NED for English/German. Essentially, all these techniques improve on the baseline approaches by using a set of positive (true) cognate pairs to re-weight the costs of edit operations or the score of sequence matches. Ideally, we would prefer a more flexible approach that can learn positive or negative weights on substring pairings in order to better identify related strings. One system that can potentially provide this flexibility is a discriminative string-similarity approach to named-entity transliteration by Klementiev and Roth (2006). Although not compared to other similarity measures in the original paper, we show that this discriminative technique can strongly outperform traditional methods on cognate identification. Unlike many recent generative systems, the Klementiev and Roth approach does not exploit the known positions in the strings where the characters match. For example, Brill and Moore (2000) combine a character-based alignment with the Expectation Maximization (EM) algorithm to develop an improved probabilistic error model for spelling correction. Rappoport and Levent-Levi (2006) apply this approach to learn substring correspondences for cognates. Zelenko and Aone (2006) recently showed a Klementiev and Roth (2006)-style discriminative approach to be superior to alignment-based generative techniques for name transliteration. Our work successfully uses the alignment-based methodology of the generative approaches to enhance the feature set for discriminative string similarity. 3 The Cognate Identification Task Given two string lists, E and F, the task of cognate identification is to find all pairs of strings (e, f) that are cognate. 
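For reference, the untrained baseline measures discussed in Section 2 can be computed as in the sketch below (Python; both use the standard dynamic programs, and length normalization is by the longer of the two strings):

    def edit_distance(a, b):
        """Levenshtein distance: insertions, deletions and substitutions."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def ned(a, b):
        """Normalized Edit Distance."""
        return edit_distance(a, b) / max(len(a), len(b))

    def lcs_length(a, b):
        """Length of the longest common subsequence."""
        prev = [0] * (len(b) + 1)
        for ca in a:
            cur = [0]
            for j, cb in enumerate(b, 1):
                cur.append(prev[j - 1] + 1 if ca == cb
                           else max(prev[j], cur[j - 1]))
            prev = cur
        return prev[-1]

    def lcsr(a, b):
        """Longest Common Subsequence Ratio."""
        return lcs_length(a, b) / max(len(a), len(b))

For instance, lcsr("light", "licht") evaluates to 0.8 and ned("light", "licht") to 0.2.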
In other similarity-driven applications, E and F could be misspelled and correctly spelled words, or the orthographic and the phonetic representation of words, etc. The task remains to link strings with common meaning in E and F using only the string similarity measure. We can facilitate the application of string similarity to cognates by using a definition of cognation not dependent on etymological analysis. For example, Mann and Yarowsky (2001) define a word pair (e, f) to be cognate if they are a translation pair (same meaning) and their Edit Distance is less than three (same form). We adopt an improved definition (suggested by Melamed (1999) for the French-English Canadian Hansards) that does not over-propose shorter word pairs: (e, f) are cognate if they are translations and their LCSR ≥ 0.58. Note that this cutoff is somewhat conservative: the English/German cognates light/Licht (LCSR=0.8) are included, but not the cognates eight/acht (LCSR=0.4). If two words must have LCSR ≥0.58 to be cog657 Foreign Language F Words f ∈F Cognates Ef+ False Friends Ef− Japanese (Rˆomaji) napukin napkin nanking, pumpkin, snacking, sneaking French abondamment abundantly abandonment, abatement, ... wonderment German prozyklische procyclical polished, prophylactic, prophylaxis Table 1: Foreign-English cognates and false friend training examples. nate, then for a given word f ∈F, we need only consider as possible cognates the subset of words in E having an LCSR with f larger than 0.58, a set we call Ef. The portion of Ef with the same meaning as f, Ef+, are cognates, while the part with different meanings, Ef−, are not cognates. The words Ef−with similar spelling but different meaning are sometimes called false friends. The cognate identification task is, for every word f ∈F, and a list of similarly spelled words Ef, to distinguish the cognate subset Ef+ from the false friend set Ef−. To create training data for our learning approaches, and to generate a high-quality labelled test set, we need to annotate some of the (f, ef ∈Ef) word pairs for whether or not the words share a common meaning. In Section 5, we explain our two high-precision automatic annotation methods: checking if each pair of words (a) were aligned in a word-aligned bitext, or (b) were listed as translation pairs in a bilingual dictionary. Table 1 provides some labelled examples with non-empty cognate and false friend lists. Note that despite these examples, this is not a ranking task: even in highly related languages, most words in F have empty Ef+ lists, and many have empty Ef− as well. Thus one natural formulation for cognate identification is a pairwise (and symmetric) cognation classification that looks at each pair (f, ef) separately and individually makes a decision: +(napukin,napkin) – (napukin,nanking) – (napukin,pumpkin) In this formulation, the benefits of a discriminative approach are clear: it must find substrings that distinguish cognate pairs from word pairs with otherwise similar form. Klementiev and Roth (2006), although using a discriminative approach, do not provide their infinite-attribute perceptron with competitive counter-examples. They instead use transliterations as positives and randomly-paired English and Russian words as negative examples. In the following section, we also improve on Klementiev and Roth (2006) by using a character-based string alignment to focus the features for discrimination. 
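To make the data construction above concrete, one way the pairwise cognation examples might be assembled is sketched below. It assumes a `translations` mapping (derived from the bitext alignments or the bilingual dictionary) and the `lcsr` function from the earlier sketch, and it uses a naive quadratic scan where a real implementation would need an index over E.

```python
def build_pairwise_examples(E, F, translations, cutoff=0.58):
    """For each foreign word f, collect the similarly spelled candidates Ef
    (LCSR >= cutoff) and label each (f, e) pair as cognate (+1) if e is a
    known translation of f, or as a false friend (-1) otherwise."""
    examples = []
    for f in F:
        Ef = [e for e in E if lcsr(f, e) >= cutoff]   # similarly spelled candidates
        for e in Ef:
            label = +1 if e in translations.get(f, set()) else -1
            examples.append((f, e, label))
    return examples

# yields triples such as (napukin, napkin, +1), (napukin, nanking, -1), (napukin, pumpkin, -1)
```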
4 Features for Discriminative Similarity Discriminative learning works by providing a training set of labelled examples, each represented as a set of features, to a module that learns a classifier. In the previous section we showed how labelled word pairs can be collected. We now address methods of representing these word pairs as sets of features useful for determining cognation. Consider the Rˆomaji Japanese/English cognates: (sutoresu,stress). The LCSR is 0.625. Note that the LCSR of sutoresu with the English false friend stories is higher: 0.75. LCSR alone is too weak a feature to pick out cognates. We need to look at the actual character substrings. Klementiev and Roth (2006) generate features for a pair of words by splitting both words into all possible substrings of up to size two: sutoresu ⇒{ s, u, t, o, r, e, s, u, su, ut, to, ... su } stress ⇒{ s, t, r, e, s, s, st, tr, re, es, ss } Then, a feature vector is built from all substring pairs from the two words such that the difference in positions of the substrings is within one: {s-s, s-t, s-st, su-s, su-t, su-st, su-tr... r-s, r-s, r-es...} This feature vector provides the feature representation used in supervised machine learning. This example also highlights the limitations of the Klementiev and Roth approach. The learner can provide weight to features like s-s or s-st at the beginning of the word, but because of the gradual accumulation of positional differences, the learner never sees the tor-tr and es-es correspondences that really help indicate the words are cognate. Our solution is to use the minimum-edit-distance alignment of the two strings as the basis for feature extraction, rather than the positional correspondences. We also include beginning-of-word (ˆ) and end-of-word ($) markers (referred to as boundary 658 markers) to highlight correspondences at those positions. The pair (sutoresu, stress) can be aligned: For the feature representation, we only extract substring pairs that are consistent with this alignment.1 That is, the letters in our pairs can only be aligned to each other and not to letters outside the pairing: { ˆ-ˆ,ˆs-ˆs, s-s, su-s, ut-t, t-t,... es-es, s-s, su-ss...} We define phrase pairs to be the pairs of substrings consistent with the alignment. A similar use of the term “phrase” exists in machine translation, where phrases are often pairs of word sequences consistent with word-based alignments (Koehn et al., 2003). By limiting the substrings to only those pairs that are consistent with the alignment, we generate fewer, more-informative features. Using more precise features allows a larger maximum substring size L than is feasible with the positional approach. Larger substrings allow us to capture important recurring deletions like the “u” in sut-st. Tiedemann (1999) and others have shown the importance of using the mismatching portions of cognate pairs to learn the recurrent spelling changes between two languages. In order to capture mismatching segments longer than our maximum substring size will allow, we include special features in our representation called mismatches. Mismatches are phrases that span the entire sequence of unaligned characters between two pairs of aligned end characters (similar to the “rules” extracted by Mulloni and Pekar (2006)). In the above example, su$-ss$ is a mismatch with “s” and “$” as the aligned end characters. Two sets of features are taken from each mismatch, one that includes the beginning/ending aligned characters as context and one that does not. 
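The following sketch illustrates the alignment-based feature extraction just described: a minimum-edit-distance character alignment is recovered by backtracing, boundary markers are added, and all substring pairs covering contiguous alignment steps (up to a maximum length on each side) are emitted as phrase features. This is a simplified rendering for illustration: ties in the backtrace may produce a slightly different alignment than the one shown above, and mismatch features are omitted.

```python
def align(a, b):
    """Minimum-edit-distance character alignment, as a list of (a_piece, b_piece)
    steps where one side is '' for an insertion or deletion."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    steps, i, j = [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (a[i - 1] != b[j - 1]):
            steps.append((a[i - 1], b[j - 1])); i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            steps.append((a[i - 1], '')); i -= 1
        else:
            steps.append(('', b[j - 1])); j -= 1
    return list(reversed(steps))

def phrase_features(a, b, max_len=3):
    """Substring pairs consistent with the alignment, with ^/$ boundary markers."""
    steps = [('^', '^')] + align(a, b) + [('$', '$')]
    feats = set()
    for start in range(len(steps)):
        for end in range(start + 1, len(steps) + 1):
            sa = ''.join(s for s, _ in steps[start:end])
            sb = ''.join(t for _, t in steps[start:end])
            if 0 < len(sa) <= max_len and 0 < len(sb) <= max_len:
                feats.add(sa + '-' + sb)
    return feats

print(sorted(phrase_features('sutoresu', 'stress')))  # includes 'su-s', 'ut-t', 'es-es', ...
```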
For example, for the endings of the French/English pair (´economique,economic), we include both the substring pairs ique$:ic$ and que:c as features. One consideration is whether substring features should be binary presence/absence, or the count of the feature in the pair normalized by the length of the longer word. We investigate both of these ap1If the words are from different alphabets, we can get the alignment by mapping the letters to their closest Roman equivalent, or by using the EM algorithm to learn the edits (Ristad and Yianilos, 1998). proaches in our experiments. Also, there is no reason not to include the scores of baseline approaches like NED, LCSR, PREFIX or DICE as features in the representation as well. Features like the lengths of the two words and the difference in lengths of the words have also proved to be useful in preliminary experiments. Semantic features like frequency similarity or contextual similarity might also be included to help determine cognation between words that are not present in a translation lexicon or bitext. 5 Experiments Section 3 introduced two high-precision methods for generating labelled cognate pairs: using the word alignments from a bilingual corpus or using the entries in a translation lexicon. We investigate both of these methods in our experiments. In each case, we generate sets of labelled word pairs for training, testing, and development. The proportion of positive examples in the bitext-labelled test sets range between 1.4% and 1.8%, while ranging between 1.0% and 1.6% for the dictionary data.2 For the discriminative methods, we use a popular Support Vector Machine (SVM) learning package called SVMlight (Joachims, 1999). SVMs are maximum-margin classifiers that achieve good performance on a range of tasks. In each case, we learn a linear kernel on the training set pairs and tune the parameter that trades-off training error and margin on the development set. We apply our classifier to the test set and score the pairs by their positive distance from the SVM classification hyperplane (also done by Bilenko and Mooney (2003) with their token-based SVM similarity measure). We also score the test sets using traditional orthographic similarity measures PREFIX, DICE, LCSR, and NED, an average of these four, and Kondrak (2005)’s LCSF. We also use the log of the edit probability from the stochastic decoder of Ristad and Yianilos (1998) (normalized by the length of the longer word) and Tiedemann (1999)’s highest performing system (Approach #3). Both use only the positive examples in our training set. Our evaluation metric is 11-pt average precision on the score-sorted pair lists (also used by Kondrak and Sherif (2006)). 2The cognate data sets used in our experiments are available at http://www.cs.ualberta.ca/˜bergsma/Cognates/ 659 5.1 Bitext Experiments For the bitext-based annotation, we use publiclyavailable word alignments from the Europarl corpus, automatically generated by GIZA++ for FrenchEnglish (Fr), Spanish-English (Es) and GermanEnglish (De) (Koehn and Monz, 2006). Initial cleaning of these noisy word pairs is necessary. We thus remove all pairs with numbers, punctuation, a capitalized English word, and all words that occur fewer than ten times. We also remove many incorrectly aligned words by filtering pairs where the pairwise Mutual Information between the words is less than 7.5. This processing leaves vocabulary sizes of 39K for French, 31K for Spanish, and 60K for German. 
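The mutual-information filter used in this cleaning step could be realized roughly as follows. This is a sketch: the count dictionaries, the base-2 logarithm, and the normalization by the total number of aligned pairs are our assumptions rather than details given in the paper.

```python
import math

def pmi_filter(pair_counts, f_counts, e_counts, total_pairs, threshold=7.5):
    """Keep aligned word pairs whose pointwise mutual information, estimated
    from bitext alignment counts, reaches the threshold."""
    kept = []
    for (f, e), c in pair_counts.items():
        p_fe = c / total_pairs
        p_f = f_counts[f] / total_pairs
        p_e = e_counts[e] / total_pairs
        if math.log2(p_fe / (p_f * p_e)) >= threshold:
            kept.append((f, e))
    return kept
```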
Our labelled set is then generated from pairs with LCSR ≥0.58 (using the cutoff from Melamed (1999)). Each labelled set entry is a triple of a) the foreign word f, b) the cognates Ef+ and c) the false friends Ef−. For each language pair, we randomly take 20K triples for training, 5K for development and 5K for testing. Each triple is converted to a set of pairwise examples for learning and classification. 5.2 Dictionary Experiments For the dictionary-based cognate identification, we use French, Spanish, German, Greek (Gr), Japanese (Jp), and Russian (Rs) to English translation pairs from the Freelang program.3 The latter three pairs were chosen so that we can evaluate on more distant languages that use non-Roman alphabets (although the Rˆomaji Japanese is Romanized by definition). We take 10K labelled-set triples for training, 2K for testing and 2K for development. The baseline approaches and our definition of cognation require comparison in a common alphabet. Thus we use a simple context-free mapping to convert every Russian and Greek character in the word pairs to their nearest Roman equivalent. We then label a translation pair as cognate if the LCSR between the words’ Romanized representations is greater than 0.58. We also operate all of our comparison systems on these Romanized pairs. 6 Results We were interested in whether our working definition of cognation (translations and LCSR ≥0.58) 3http://www.freelang.net/dictionary/ Figure 1: LCSR histogram and polynomial trendline of French-English dictionary pairs. System Prec Klementiev-Roth (KR) L≤2 58.6 KR L≤2 (normalized, boundary markers) 62.9 phrases L≤2 61.0 phrases L≤3 65.1 phrases L≤3 + mismatches 65.6 phrases L≤3 + mismatches + NED 65.8 Table 2: Bitext French-English development set cognate identification 11-pt average precision (%). reflects true etymological relatedness. We looked at the LCSR histogram for translation pairs in one of our translation dictionaries (Figure 1). The trendline suggests a bimodal distribution, with two distinct distributions of translation pairs making up the dictionary: incidental letter agreement gives low LCSR for the larger, non-cognate portion and high LCSR characterizes the likely cognates. A threshold of 0.58 captures most of the cognate distribution while excluding non-cognate pairs. This hypothesis was confirmed by checking the LCSR values of a list of known French-English cognates (randomly collected from a dictionary for another project): 87.4% were above 0.58. We also checked cognation on 100 randomly-sampled, positively-labelled FrenchEnglish pairs (i.e. translated or aligned and having LCSR ≥0.58) from both the dictionary and bitext data. 100% of the dictionary pairs and 93% of the bitext pairs were cognate. Next, we investigate various configurations of the discriminative systems on one of our cognate identification development sets (Table 2). 
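Since the configurations compared in Table 2, and all of the later results, are scored by 11-pt average precision, a sketch of one standard interpolated formulation of that metric is given below. The exact convention used in the cited evaluations may differ slightly, and the function and argument names are ours.

```python
def eleven_point_avg_precision(scored_pairs, gold_positives):
    """11-pt interpolated average precision over a score-sorted pair list.
    scored_pairs: list of ((f, e), score); gold_positives: set of cognate pairs."""
    n_pos = len(gold_positives)
    if n_pos == 0:
        return 0.0
    ranked = sorted(scored_pairs, key=lambda x: -x[1])
    precisions, recalls, tp = [], [], 0
    for k, (pair, _) in enumerate(ranked, 1):
        if pair in gold_positives:
            tp += 1
        precisions.append(tp / k)
        recalls.append(tp / n_pos)
    points = []
    for r in [i / 10 for i in range(11)]:   # recall levels 0.0, 0.1, ..., 1.0
        ps = [p for p, rec in zip(precisions, recalls) if rec >= r]
        points.append(max(ps) if ps else 0.0)
    return sum(points) / 11
```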
The original Klementiev and Roth (2006) (KR) system can 660 Bitext Dictionary System Fr Es De Fr Es De Gr Jp Rs PREFIX 34.7 27.3 36.3 45.5 34.7 25.5 28.5 16.1 29.8 DICE 33.7 28.2 33.5 44.3 33.7 21.3 30.6 20.1 33.6 LCSR 34.0 28.7 28.5 48.3 36.5 18.4 30.2 24.2 36.6 NED 36.5 31.9 32.3 50.1 40.3 23.3 33.9 28.2 41.4 PREFIX+DICE+LCSR+NED 38.7 31.8 39.3 51.6 40.1 28.6 33.7 22.9 37.9 Kondrak (2005): LCSF 29.8 28.9 29.1 39.9 36.6 25.0 30.5 33.4 45.5 Ristad & Yanilos (1998) 37.7 32.5 34.6 56.1 46.9 36.9 38.0 52.7 51.8 Tiedemann (1999) 38.8 33.0 34.7 55.3 49.0 24.9 37.6 33.9 45.8 Klementiev & Roth (2006) 61.1 55.5 53.2 73.4 62.3 48.3 51.4 62.0 64.4 Alignment-Based Discriminative 66.5 63.2 64.1 77.7 72.1 65.6 65.7 82.0 76.9 Table 3: Bitext, Dictionary Foreign-to-English cognate identification 11-pt average precision (%). be improved by normalizing the feature count by the longer string length and including the boundary markers. This is therefore done with all the alignment-based approaches. Also, because of the way its features are constructed, the KR system is limited to a maximum substring length of two (L≤2). A maximum length of three (L≤3) in the KR framework produces millions of features and prohibitive training times, while L≤3 is computationally feasible in the phrasal case, and increases precision by 4.1% over the phrases L≤2 system.4 Including mismatches results in another small boost in performance (0.5%), while using an Edit Distance feature again increases performance by a slight margin (0.2%). This ranking of configurations is consistent across all the bitext-based development sets; we therefore take the configuration of the highest scoring system as our Alignment-Based Discriminative system for the remainder of this paper. We next compare the Alignment-Based Discriminative scorer to the various other implemented approaches across the three bitext and six dictionarybased cognate identification test sets (Table 3). The table highlights the top system among both the non-adaptive and adaptive similarity scorers.5 In 4Preliminary experiments using even longer phrases (beyond L≤3) currently produce a computationally prohibitive number of features for SVM learning. Deploying current feature selection techniques might enable the use of even more expressive and powerful feature sets with longer phrase lengths. 5Using the training data and the SVM to weight the components of the PREFIX+DICE+LCSR+NED scorer resulted in negligible improvements over the simple average on our development data. each language pair, the alignment-based discriminative approach outperforms all other approaches, but the KR system also shows strong gains over non-adaptive techniques and their re-weighted extensions. This is in contrast to previous comparisons which have only demonstrated minor improvements with adaptive over traditional similarity measures (Kondrak and Sherif, 2006). We consistently found that the original KR performance could be surpassed by a system that normalizes the KR feature count and adds boundary markers. Across all the test sets, this modification results in a 6% average gain in performance over baseline KR, but is still on average 5% below the AlignmentBased Discriminative technique, with a statistically significantly difference on each of the nine sets.6 Figure 2 shows the relationship between training data size and performance in our bitext-based French-English data. Note again that the Tiedemann and Ristad & Yanilos systems only use the positive examples in the training data. 
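The statistical comparison mentioned above (footnote 6) could be set up roughly as sketched below. The contingency-table layout and the use of SciPy's `fisher_exact` are our assumptions about the procedure, not a reproduction of Evert (2004).

```python
from scipy.stats import fisher_exact

def compare_nbest(scores_a, scores_b, gold_positives):
    """Compare two systems on their n-best pairs, where n is the number of
    positive pairs in the set, using Fisher's exact test."""
    n = len(gold_positives)
    top_a = {p for p, _ in sorted(scores_a.items(), key=lambda x: -x[1])[:n]}
    top_b = {p for p, _ in sorted(scores_b.items(), key=lambda x: -x[1])[:n]}
    tp_a = len(top_a & gold_positives)
    tp_b = len(top_b & gold_positives)
    table = [[tp_a, n - tp_a],
             [tp_b, n - tp_b]]
    return fisher_exact(table)   # (odds ratio, p-value)
```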
Our alignment-based similarity function outperforms all the other systems across nearly the entire range of training data. Note also that the discriminative learning curves show no signs of slowing down: performance grows logarithmically from 1K to 846K word pairs. For insight into the power of our discriminative approach, we provide some of our classifiers’ highest and lowest-weighted features (Table 4). 6Following Evert (2004), significance was computed using Fisher’s exact test (at p = 0.05) to compare the n-best word pairs from the scored test sets, where n was taken as the number of positive pairs in the set. 661 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 1000 10000 100000 1e+06 11-pt Average Precision Number of training pairs NED Tiedemann Ristad-Yanilos Klementiev-Roth Alignment-Based Discrim. Figure 2: Bitext French-English cognate identification learning curve. Lang. Feat. Wt. Example Fr (Bitext) ´ees-ed +8.0 v´erifi´ees:verified Jp (Dict.) ru-l +5.9 penaruti:penalty De (Bitext) k-c +5.5 kreativ:creative Rs (Dict.) irov+4.9 motivirovat:motivate Gr (Dict.) f-ph +4.1 symfonia:symphony Gr (Dict.) kos-c +3.3 anarchikos:anarchic Gr (Dict.) os$-y$ -2.5 anarchikos:anarchy Jp (Dict.) ou-ou -2.6 handoutai:handout Es (Dict.) -un -3.1 balance:unbalance Fr (Dict.) er$-er$ -5.0 former:former Es (Bitext) mos-s -5.1 toleramos:tolerates Table 4: Example features and weights for various Alignment-Based Discriminative classifiers (Foreign-English, negative pairs in italics). Note the expected correspondences between foreign spellings and English (k-c, f-ph), but also features that leverage derivational and inflectional morphology. For example, Greek-English pairs with the adjective-ending correspondence kos-c, e.g. anarchikos:anarchic, are favoured, but pairs with the adjective ending in Greek and noun ending in English, os$-y$, are penalized; indeed, by our definition, anarchikos:anarchy is not cognate. In a bitext, the feature ´ees-ed captures that feminine-plural inflection of past tense verbs in French corresponds to regular past tense in English. On the other hand, words ending in the Spanish first person plural verb suffix -amos are rarely translated to English words ending with the suffix -s, causing mos-s to be peGr-En (Dict.) Es-En (Bitext) alkali:alkali agenda:agenda makaroni:macaroni natural:natural adrenalini:adrenaline m´argenes:margins flamingko:flamingo hormonal:hormonal spasmodikos:spasmodic rad´on:radon amvrosia:ambrosia higi´enico:hygienic Table 5: Highest scored pairs by Alignment-Based Discriminative classifier (negative pairs in italics). nalized. The ability to leverage negative features, learned from appropriate counter examples, is a key innovation of our discriminative framework. Table 5 gives the top pairs scored by our system on two of the sets. Notice that unlike traditional similarity measures that always score identical words higher than all other pairs, by virtue of our feature weighting, our discriminative classifier prefers some pairs with very characteristic spelling changes. We performed error analysis by looking at all the pairs our system scored quite confidently (highly positive or highly negative similarity), but which were labelled oppositely. Highly-scored false positives arose equally from 1) actual cognates not linked as translations in the data, 2) related words with diverged meanings, e.g. the error in Table 5: makaroni in Greek actually means spaghetti in English, and 3) the same word stem, a different part of speech (e.g. 
the Greek/English adjective/noun synonymos:synonym). Meanwhile, inspection of the highly-confident false negatives revealed some (often erroneously-aligned in the bitext) positive pairs with incidental letter match (e.g. the French/English recettes:proceeds) that we would not actually deem to be cognate. Thus the errors that our system makes are often either linguistically interesting or point out mistakes in our automatically-labelled bitext and (to a lesser extent) dictionary data. 7 Conclusion This is the first research to apply discriminative string similarity to the task of cognate identification. We have introduced and successfully applied an alignment-based framework for discriminative similarity that consistently demonstrates improved performance in both bitext and dictionary-based cog662 nate identification on six language pairs. Our improved approach can be applied in any of the diverse applications where traditional similarity measures like Edit Distance and LCSR are prevalent. We have also made available our cognate identification data sets, which will be of interest to general string similarity researchers. Furthermore, we have provided a natural framework for future cognate identification research. Phonetic, semantic, or syntactic features could be included within our discriminative infrastructure to aid in the identification of cognates in text. In particular, we plan to investigate approaches that do not require the bilingual dictionaries or bitexts to generate training data. For example, researchers have automatically developed translation lexicons by seeing if words from each language have similar frequencies, contexts (Koehn and Knight, 2002), burstiness, inverse document frequencies, and date distributions (Schafer and Yarowsky, 2002). Semantic and string similarity might be learned jointly with a co-training or bootstrapping approach (Klementiev and Roth, 2006). We may also compare alignmentbased discriminative string similarity with a more complex discriminative model that learns the alignments as latent structure (McCallum et al., 2005). Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Alberta Ingenuity Fund, and the Alberta Informatics Circle of Research Excellence. References George W. Adamson and Jillian Boreham. 1974. The use of an association measure based on character structure to identify semantically related pairs of words and document titles. Information Storage and Retrieval, 10:253–260. Mikhail Bilenko and Raymond J. Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In KDD, pages 39–48. Eric Brill and Robert Moore. 2000. An improved error model for noisy channel spelling correction. In ACL. 286–293. Stefan Evert. 2004. Significance tests for the evaluation of ranking methods. In COLING, pages 945–951. Thorsten Joachims. 1999. Making large-scale Support Vector Machine learning practical. In Advances in Kernel Methods: Support Vector Machines, pages 169–184. MIT-Press. Alexandre Klementiev and Dan Roth. 2006. Named entity transliteration and discovery from multilingual comparable corpora. In HLT-NAACL, pages 82–88. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In ACL Workshop on Unsupervised Lexical Acquistion. Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between European languages. In NAACL Workshop on Statistical Machine Translation, pages 102–121. 
Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLT-NAACL, pages 127–133. Grzegorz Kondrak and Tarek Sherif. 2006. Evaluation of several phonetic similarity algorithms on the task of cognate identification. In COLING-ACL Workshop on Linguistic Distances, pages 37–44. Grzegorz Kondrak. 2005. Cognates and word alignment in bitexts. In MT Summit X, pages 305–312. Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707–710. Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In NAACL, pages 151–158. Andrew McCallum, Kedar Bellare, and Fernando Pereira. 2005. A conditional random field for discriminativelytrained finite-state string edit distance. In UAI. 388–395. I. Dan Melamed. 1999. Bitext maps and alignment via pattern recognition. Computational Linguistics, 25(1):107–130. Andrea Mulloni and Viktor Pekar. 2006. Automatic detection of orthographic cues for cognate recognition. In LREC, pages 2387–2390. Ari Rappoport and Tsahi Levent-Levi. 2006. Induction of cross-language affix and letter sequence correspondence. In EACL Workshop on Cross-Language Knowledge Induction. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning stringedit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522–532. Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In CoNLL, pages 207–216. Michael Strube, Stefan Rapp, and Christoph M¨uller. 2002. The influence of minimum edit distance on reference resolution. In EMNLP, pages 312–319. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In HLT-EMNLP, pages 73–80. J¨org Tiedemann. 1999. Automatic construction of weighted string similarity measures. In EMNLP-VLC, pages 213–219. Dmitry Zelenko and Chinatsu Aone. 2006. Discriminative methods for transliteration. In EMNLP, pages 612–617. 663
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 664–671, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Bilingual Terminology Mining – Using Brain, not brawn comparable corpora E. Morin, B. Daille Université de Nantes LINA FRE CNRS 2729 2, rue de la Houssinière BP 92208 F-44322 Nantes Cedex 03 {morin-e,daille-b}@ univ-nantes.fr K. Takeuchi Okayama University 3-1-1, Tsushimanaka Okayama-shi, Okayama, 700-8530, Japan koichi@ cl.it.okayama-u.ac.jp K. Kageura Graduate School of Education The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-0033, Japan [email protected] Abstract Current research in text mining favours the quantity of texts over their quality. But for bilingual terminology mining, and for many language pairs, large comparable corpora are not available. More importantly, as terms are defined vis-à-vis a specific domain with a restricted register, it is expected that the quality rather than the quantity of the corpus matters more in terminology mining. Our hypothesis, therefore, is that the quality of the corpus is more important than the quantity and ensures the quality of the acquired terminological resources. We show how important the type of discourse is as a characteristic of the comparable corpus. 1 Introduction Two main approaches exist for compiling corpora: “Big is beautiful” or “Insecurity in large collections”. Text mining research commonly adopts the first approach and favors data quantity over quality. This is normally justified on the one hand by the need for large amounts of data in order to make use of statistic or stochastic methods (Manning and Schütze, 1999), and on the other by the lack of operational methods to automatize the building of a corpus answering to selected criteria, such as domain, register, media, style or discourse. For lexical alignment from comparable corpora, good results on single words can be obtained from large corpora — several millions words — the accuracy of proposed translation is about 80% for the top 10-20 candidates (Fung, 1998; Rapp, 1999; Chiao and Zweigenbaum, 2002). (Cao and Li, 2002) have achieved 91% accuracy for the top three candidates using the Web as a comparable corpus. But for specific domains, and many pairs of languages, such huge corpora are not available. More importantly, as terms are defined vis-à-vis a specific domain with a restricted register, it is expected that the quality rather than the quantity of the corpus matters more in terminology mining. For terminology mining, therefore, our hypothesis is that the quality of the corpora is more important than the quantity and that this ensures the quality of the acquired terminological resources. Comparable corpora are “sets of texts in different languages, that are not translations of each other” (Bowker and Pearson, 2002, p. 93). The term comparable is used to indicate that these texts share some characteristics or features: topic, period, media, author, register (Biber, 1994), discourse... This corpus comparability is discussed by lexical alignment researchers but never demonstrated: it is often reduced to a specific domain, such as the medical (Chiao and Zweigenbaum, 2002) or financial domains (Fung, 1998), or to a register, such as newspaper articles (Fung, 1998). For terminology 664 mining, the comparability of the corpus should be based on the domain or the sub-domaine, but also on the type of discourse. Indeed, discourse acts semantically upon the lexical units. 
For a defined topic, some terms are specific to one discourse or another. For example, for French, within the subdomain of obesity in the domain of medicine, we find the term excès de poids (overweight) only inside texts sharing a popular science discourse, and the synonym excès pondéral (overweight) only in scientific discourse. In order to evaluate how important the discourse criterion is for building bilingual terminological lists, we carried out experiments on French-Japanese comparable corpora in the domain of medicine, more precisely on the topic of diabetes and nutrition, using texts collected from the Web and manually selected and classified into two discourse categories: one contains only scientific documents and the other contains both scientific and popular science documents. We used a state-of-the-art multilingual terminology mining chain composed of two term extraction programs, one in each language, and an alignment program. The term extraction programs are publicly available and both extract multi-word terms that are more precise and specific to a particular scientific domain than single word terms. The alignment program makes use of the direct context-vector approach (Fung, 1998; Peters and Picchi, 1998; Rapp, 1999) slightly modified to handle both singleand multi-word terms. We evaluated the candidate translations of multi-word terms using a reference list compiled from publicly available resources. We found that taking discourse type into account resulted in candidate translations of a better quality even when the corpus size is reduced by half. Thus, even using a state-of-the-art alignment method wellknown as data greedy, we reached the conclusion that the quantity of data is not sufficient to obtain a terminological list of high quality and that a real comparability of corpora is required. 2 Multilingual terminology mining chain Taking as input a comparable corpora, the multilingual terminology chain outputs a list of single- and multi-word candidate terms along with their candidate translations. Its architecture is summarized in Figure 1 and comprises term extraction and alignment programs. 2.1 Term extraction programs The terminology extraction programs are available for both French1 (Daille, 2003) and Japanese2 (Takeuchi et al., 2004). The terminological units that are extracted are multi-word terms whose syntactic patterns correspond either to a canonical or a variation structure. The patterns are expressed using part-of-speech tags: for French, Brill’s POS tagger3 and the FLEM lemmatiser4 are utilised, and for Japanese, CHASEN5. For French, the main patterns are N N, N Prep N et N Adj and for Japanese, N N, N Suff, Adj N and Pref N. The variants handled are morphological for both languages, syntactical only for French, and compounding only for Japanese. We consider as a morphological variant a morphological modification of one of the components of the base form, as a syntactical variant the insertion of another word into the components of the base form, and as a compounding variant the agglutination of another word to one of the components of the base form. 
For example, in French, the candidate MWT sécrétion d’insuline (insulin secretion) appears in the following forms: base form of N Prep N pattern: sécrétion d’insuline (insulin secretion); inflexional variant: sécrétions d’insuline (insulin secretions); syntactic variant (insertion inside the base form of a modifier): sécrétion pancréatique d’insuline (pancreatic insulin secretion); syntactic variant (expansion coordination of base form): secrétion de peptide et d’insuline (insulin and peptide secretion). The MWT candidates secrétion insulinique (insulin secretion) and hypersécrétion insulinique (insulin 1http://www.sciences.univ-nantes.fr/ info/perso/permanents/daille/ and release LINUX. 2http://research.nii.ac.jp/~koichi/ study/hotal/ 3http://www.atilf.fr/winbrill/ 4http://www.univ-nancy2.fr/pers/namer/ 5http://chasen.org/$\sim$taku/software/ mecab/ 665 WEB dictionary bilingual Japanese documents French documents terminology extraction terminology extraction lexical context extraction lexical context extraction process translated terms to be translations candidate haversting documents lexical alignment Figure 1: Architecture of the multilingual terminology mining chain hypersecretion) have also been identified and lead together with sécrétion d’insuline (insulin secretion) to a cluster of semantically linked MWTs. In Japanese, the MWT  .  6 (insulin secretion) appears in the following forms: base form of NN pattern:     /N  .  /N  (insulin secretion); compounding variant (agglutination of a word at the end of the base form):   /N  .  /N  .  /N  (insulin secretion ability) At present, the Japanese term extraction program does not cluster terms. 2.2 Term alignment The lexical alignment program adapts the direct context-vector approach proposed by (Fung, 1998) for single-word terms (SWTs) to multi-word terms (MWTs). It aligns source MWTs with target single 6For all Japanese examples, we explicitly segment the compound into its component parts through the use of the “.” symbol. words, SWTs or MWTs. From now on, we will refer to lexical units as words, SWTs or MWTs. 2.2.1 Implementation of the direct context-vector method Our implementation of the direct context-vector method consists of the following 4 steps: 1. We collect all the lexical units in the context of each lexical unit  and count their occurrence frequency in a window of  words around  . For each lexical unit  of the source and the target language, we obtain a context vector  which gathers the set of co-occurrence units  associated with the number of times that  and  occur together  !  " . We normalise context vectors using an association score such as Mutual Information or Log-likelihood. In order to reduce the arity of context vectors, we keep only the co-occurrences with the highest association scores. 2. Using a bilingual dictionary, we translate the lexical units of the source context vector. 666 3. For a word to be translated, we compute the similarity between the translated context vector and all target vectors through vector distance measures such as Cosine (Salton and Lesk, 1968) or Jaccard (Tanimoto, 1958). 4. The candidate translations of a lexical unit are the target lexical units closest to the translated context vector according to vector distance. 
2.2.2 Translation of lexical units The translation of the lexical units of the context vectors, which depends on the coverage of the bilingual dictionary vis-à-vis the corpus, is an important step of the direct approach: more elements of the context vector are translated more the context vector will be discrimating for selecting translations in the target language. If the bilingual dictionary provides several translations for a lexical unit, we consider all of them but weight the different translations by their frequency in the target language. If an MWT cannot be directly translated, we generate possible translations by using a compositional method (Grefenstette, 1999). For each element of the MWT found in the bilingual dictionary, we generate all the translated combinations identified by the term extraction program. For example, in the case of the MWT fatigue chronique (chronic fatigue), we have the following four translations for fatigue:  ,  ,  ,  and the following two translations for chronique:  ,  . Next, we generate all combinations of translated elements (See Table 17) and select those which refer to an existing MWT in the target language. Here, only one term has been identified by the Japanese terminology extraction program:  .  . In this approach, when it is not possible to translate all parts of an MWT, or when the translated combinations are not identified by the term extraction program, the MWT is not taken into account in the translation process. This approach differs from that used by (Robitaille et al., 2006) for French/Japanese translation. They first decompose the French MWT into combinations of shorter multi-word units (MWU) elements. This approach makes the direct translation of a subpart of the MWT possible if it is present in the 7the French word order is inverted to take into account the different constraints between French and Japanese. chronique fatigue                       Table 1: Illustration of the compositional method. The underlined Japanese MWT actually exists. bilingual dictionary. For an MWT of length  , (Robitaille et al., 2006) produce all the combinations of MWU elements of a length less than or equal to  . For example, the French term syndrome de fatigue chronique (chronic fatigue disease) yields the following four combinations: i)  syndrome de fatigue chronique  , ii)  syndrome de fatigue  chronique  , iii)  syndrome  fatigue chronique and iv)  syndrome   fatigue  chronique  . We limit ourselves to the combination of type iv) above since 90% of the candidate terms provided by the term extraction process, after clustering, are only composed of two content words. 3 Linguistic resources In this section we outline the different textual resources used for our experiments: the comparable corpora, bilingual dictionary and reference lexicon. 3.1 Comparable corpora The French and Japanese documents were harvested from the Web by native speakers of each language who are not domain specialists. The texts are from the medical domain, within the sub-domain of diabetes and nutrition. Document harvesting was carried out by a domain-based search, then by manual selection. The search for documents sharing the same domain can be achieved using keywords reflecting the specialized domain: for French, diabète and obésité (diabetes and obesity); for Japanese, !" and # $ . Then the documents were classified according to the type of discourse: scientific or popularized science. 
At present, the selection and classification phases are carried out manually although 667 research into how to automatize these two steps is ongoing. Table 2 shows the main features of the harvested comparable corpora: the number of documents, and the number of words for each language and each type of discourse. French Japanese doc. words doc. words Scientific 65 425,781 119 234,857 Popular 183 267,885 419 572,430 science Total 248 693,666 538 807,287 Table 2: Comparable corpora statistics From these documents, we created two comparable corpora:  scientific corpora  , composed only of scientific documents;  mixed corpora  , composed of both popular and scientific documents. 3.2 Bilingual dictionary The French-Japanese bilingual dictionary required for the translation phase is composed of four dictionaries freely available from the Web8, and of the French-Japanese Scientific Dictionary (1989). It contains about 173,156 entries (114,461 single words and 58,695 multi words) with an average of 2.1 translations per entry. 3.3 Terminology reference lists To evaluate the quality of the terminology mining chain, we built two bilingual terminology reference lists which include either SWTs or SMTs and MWTs:  lexicon 1  100 French SWTs of which the translation are Japanese SWTs.  lexicon 2  60 French SWTs and MWTs of which the translation could be Japanese SWTs or MWTs. 8http://kanji.free.fr/, http:// quebec-japon.com/lexique/index.php?a= index&d=25, http://dico.fj.free.fr/index. php, http://quebec-japon.com/lexique/index. php?a=index&d=3 These lexicons contains terms that occur at least twice in the scientific corpus, have been identified monolingually by both the French and the Japanese term extraction programs, and are found in either the UMLS9 thesaurus or in the French part of the Grand dictionnaire terminologique10 in the domain of medicine. These constraints prevented us from obtaining 100 French SWTs and MWTs for lexicon 2. The main reasons for this are the small number of UMLS terms dealing with the sub-domain of diabetes and the great difference between the linguistic structures of French and Japanese terms: French pattern definitions tend to cover more phrasal units while Japanese pattern definitions focus more narrowly on compounds. So, even if monolingually the same percentage of terms are detected in both languages, this does not guarantee a good result in bilingual terminology extraction. For example, the French term diabète de type 1 (Diabetes mellitus type I) extracted by the French term extraction program and found in UMLS was not extracted by the Japanese term extraction program although it appears frequently in the Japanese corpus (  ! " ). In bilingual terminology mining from specialized comparable corpora, the terminology reference lists are often composed of a hundred words (180 SWTs in (Déjean and Gaussier, 2002) and 97 SWTs in (Chiao and Zweigenbaum, 2002)). 4 Experiments In order to evaluate the influence of discourse type on the quality of bilingual terminology extraction, two experiments were carried out. Since the main studies relating to bilingual lexicon extraction from comparable corpora concentrate on finding translation candidates for SWTs, we first perform an experiment using  lexicon 1  , which is composed of SWTs. In order to evaluate the hypothesis of this study, we then conducted a second experiment using  lexicon 2  , which is composed of MWTs. 4.1 Alignment results for  lexicon 1  Table 3 shows the results obtained. 
The first three columns indicate the number of translations found 9http://www.nlm.nih.gov/research/umls 10http://www.granddictionnaire.com/ 668     "! $#  "! #  scientific corpora  64 11.6 20.2 49 52  mixed corpora  76 11.5 16.3 51 60 Table 3: Bilingual terminology extraction results for  lexicon 1         "! $#  "! #  scientific corpora  32 16.1 21.9 18 25  mixed corpora  32 23.9 27.6 17 20 Table 4: Bilingual terminology extraction results for  lexicon 2  (  % & ), and the average (   ) and standard deviation ( '  ) positions for the translations in the ranked list of candidate translations. The other two columns indicate the percentage of French terms for which the correct translation was obtained among the top ten and top twenty candidates ( ! $# ,  "! # ). The results of this experiment (see Table 3) show that the terms belonging to  lexicon 1  were more easily identified in the corpus of scientific and popular documents (51% and 60% respectively for  "! $# and  ! # ) than in the corpus of scientific documents (49% and 52%). Since  lexicon 1 is composed of SWTs, these terms are not more characteristic of popular discourse than scientific discourse. The frequency of the terms to be translated is an important factor in the vectorial approach. In fact, the higher the frequency of the term to be translated, the more the associated context vector will be discriminant. Table 5 confirms this hypothesis since the most frequent terms, such as insuline (#occ. 364 - insulin:     ), obésité (#occ. 333 - obesity: # $ ), and prévention (#occ. 120 - prevention: (*) ), were the best translated. [2,10] [11,50] [51,100] [101,...] fr 3/17 12/29 17/23 28/31 jp 4/26 32/41 14/20 10/13 Table 5: Frequency in  corpus 2  of the terms translated belonging to  lexicon 1  (for  "! # ) As a baseline, (Déjean et al., 2002) obtain 43% and 51% for the first 10 and 20 candidates respectively in a 100,000-word medical corpus, and 79% and 84% in a multi-domain 8 million-word corpus. For single-item French-English words applied on a medical corpus of 0.66 million words, (Chiao and Zweigenbaum, 2002) obtained 61% and 94% precision on the top-10 and top-20 candidates. In our case, we obtained 51% and 60% precision for the top 10 and 20 candidates in a 1.5 million-word French/Japanese corpus. 4.2 Alignment results for  lexicon 2  The analysis results in table 4 indicate only a small number of the terms in  lexicon 2  were found. Since we work with small-size corpora, this result is not surprising. Because multi-word terms are more specific than single-word terms, they tend to occur less frequently in a corpus and are more difficult to translate. Here, the terms belonging  lexicon 2  were more accurately identified from the corpus which consists of scientific documents than the corpus which consists of scientific and popular documents. In this instance, we obtained 30% and 42% precision for the top 10 and top 20 candidates in a 0.84 million-word scientific corpus. Moreover, if we count the number of terms which are correctly translated between  scientific corpora  and  mixed corpora  , we find the majority of the translated terms with  mixed corpora  in those obtained with  scientific corpora 11 By combining parameters 11Here, +,.-0/214357683 9;:<3 = , +,.-?>@1 A4=B6A%C;:D3E5 and F GHJILKEM%N AO6 N A.:PA%5 . 669 C = 3C 3= A C A4= C = 3EC 3= A%C A =        × × × × × × × nbr. win. C = 3C 3= A C A4= C = 3EC 3= A%C A =        × × × × × × × nbr. win. 
(a) parameter : Log-likelihood & cosinus (b) parameter : Log-likelihood & jaccard C = 3C 3= A C A4= C = 3EC 3= A%C A =        × × × × × × × nbr. win. C = 3C 3= A C A4= C = 3EC 3= A%C A =       × × × × × × × nbr. win. (c) parameter : MI & cosinus (d) parameter : MI & jaccard Figure 2: Evolution of the number of translations found in  "! # according to the size of the contextual window for several combinations of parameters with  lexicon 2  (  scientific corpora  —–;  mixed corpora - -, the points indicated are the computed values) such as the window size of the context vector, association score, and vector distance measure, the terms were often identified with more precision from the corpus consisting of scientific documents than the corpus consisting of scientific and popular documents (see Figure 2). Here again, the most frequent terms (see Table 6), such as diabète (#occ. 899 - diabetes: ! . " ), facteur de risque (#occ. 267 - risk factor:  . ), hyperglycémie (#occ. 127 - hyperglycaemia: .  ), tissu adipeux (#occ. 62 - adipose tissue:  .  ) were the best translated. On the other hand, some terms with low frequency, such as édulcorant (#occ. 13 - sweetener:  .  ) and choix alimentaire (#occ. 11 - feeding preferences:  .   ), or very low frequency, such as obésité massive (#occ. 6 - massive obesity:  . #$ ), were also identified with this approach. [2,10] [11,50] [51,100] [101,...] fr 1/11 11/25 6/14 7/10 jp 5/21 13/25 5/9 2/5 Table 6: Frequency in  scientific corpora  of translated terms belonging to  lexicon 2 (for  ! # ) 5 Conclusion This article describes a first attempt at compiling French-Japanese terminology from comparable corpora taking into account both single- and multi-word terms. Our claim was that a real comparability of the corpora is required to obtain relevant terms of the domain. This comparability should be based not only on the domain and the sub-domain but also on the type of discourse, which acts semantically upon the lexical units. The discourse categorization of documents allows lexical acquisition to increase pre670 cision despite the data sparsity problem that is often encountered for terminology mining and for language pairs not involving the English language, such as French-Japanese. We carried out experiments using two corpora of the specialised domain concerning diabetes and nutrition: one gathering documents from both scientific and popular science discourses, the other limited to scientific discourse. Our alignment results are close to previous works involving the English language, and are of better quality for the scientific corpus despite a corpus size that was reduced by half. The results demonstrate that the more frequent a term and its translation, the better the quality of the alignment will be, but also that the data sparsity problem could be partially solved by using comparable corpora of high quality. References Douglas Biber. 1994. Representativeness in corpus design. In A. Zampolli, N. Calzolari, and M. Palmer, editors, Current Issues in Computational Linguistics: in Honour of Don Walker, pages 377–407. Pisa: Giardini/Dordrecht: Kluwer. Lynne Bowker and Jennifer Pearson. 2002. Working with Specialized Language: A Practical Guide to Using Corpora. London/New York: Routledge. Yunbo Cao and Hang Li. 2002. Base Noun Phrase Translation Using Web Data and the EM Algorithm. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 127– 133, Tapei, Taiwan. Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. 
Looking for candidate translational equivalents in specialized, comparable corpora. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 1208–1212, Tapei, Taiwan. Béatrice Daille. 2003. Terminology Mining. In Maria Teresa Pazienza, editor, Information Extraction in the Web Era, pages 29–44. Springer. Hervé Déjean and Éric Gaussier. 2002. Une nouvelle approche l’extraction de lexiques bilingues partir de corpus comparables. Lexicometrica, Alignement lexical dans les corpus multilingues, pages 1–22. Hervé Déjean, Fatia Sadat, and Éric Gaussier. 2002. An approach based on multilingual thesauri and model combination for bilingual lexicon extraction. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 218– 224, Tapei, Taiwan. French-Japanese Scientific Dictionary. 1989. Hakusuisha. 4th edition. Pascale Fung. 1998. A Statistical View on Bilingual Lexicon Extraction: From Parallel Corpora to Nonparallel Corpora. In David Farwell, Laurie Gerber, and Eduard Hovy, editors, Proceedings of the 3rd Conference of the Association for Machine Translation in the Americas (AMTA’98), pages 1–16, Langhorne, PA, USA. Springer. Gregory Grefenstette. 1999. The Word Wide Web as a Resource for Example-Based Machine Translation Tasks. In ASLIB’99 Translating and the Computer 21, London, UK. Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA. Carol Peters and Eugenio Picchi. 1998. Cross-language information retrieval: A system for comparable corpus querying. In Gregory Grefenstette, editor, Crosslanguage information retrieval, chapter 7, pages 81– 90. Kluwer. Reinhard Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL’99), pages 519–526, College Park, Maryland, USA. Xavier Robitaille, Xavier Sasaki, Masatsugu Tonoike, Satoshi Sato, and Satoshi Utsuro. 2006. Compiling French-Japanese Terminologies from the Web. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL’06), pages 225–232, Trento, Italy. Gerard Salton and Michael E. Lesk. 1968. Computer evaluation of indexing and text processing. Journal of the Association for Computational Machinery, 15(1):8–36. Koichi Takeuchi, Kyo Kageura, Béatrice Daille, and Laurent Romary. 2004. Construction of grammar based term extraction model for japanese. In Sophia Ananadiou and Pierre Zweigenbaum, editors, Proceeding of the COLING 2004, 3rd International Workshop on Computational Terminology (COMPUTERM’04), pages 91–94, Geneva, Switzerland. T. T. Tanimoto. 1958. An elementary mathematical theory of classification. Technical report, IBM Research. 671
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 672–679, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Unsupervised Language Model Adaptation Incorporating Named Entity Information Feifan Liu and Yang Liu Department of Computer Science The University of Texas at Dallas, Richardson, TX, USA {ffliu,yangl}@hlt.utdallas.edu Abstract Language model (LM) adaptation is important for both speech and language processing. It is often achieved by combining a generic LM with a topic-specific model that is more relevant to the target document. Unlike previous work on unsupervised LM adaptation, this paper investigates how effectively using named entity (NE) information, instead of considering all the words, helps LM adaptation. We evaluate two latent topic analysis approaches in this paper, namely, clustering and Latent Dirichlet Allocation (LDA). In addition, a new dynamically adapted weighting scheme for topic mixture models is proposed based on LDA topic analysis. Our experimental results show that the NE-driven LM adaptation framework outperforms the baseline generic LM. The best result is obtained using the LDA-based approach by expanding the named entities with syntactically filtered words, together with using a large number of topics, which yields a perplexity reduction of 14.23% compared to the baseline generic LM. 1 Introduction Language model (LM) adaptation plays an important role in speech recognition and many natural language processing tasks, such as machine translation and information retrieval. Statistical N-gram LMs have been widely used; however, they capture only local contextual information. In addition, even with the increasing amount of LM training data, there is often a mismatch problem because of differences in domain, topics, or styles. Adaptation of LM, therefore, is very important in order to better deal with a variety of topics and styles. Many studies have been conducted for LM adaptation. One method is supervised LM adaptation, where topic information is typically available and a topic specific LM is interpolated with the generic LM (Kneser and Steinbiss, 1993; Suzuki and Gao, 2005). In contrast, various unsupervised approaches perform latent topic analysis for LM adaptation. To identify implicit topics from the unlabeled corpus, one simple technique is to group the documents into topic clusters by assigning only one topic label to a document (Iyer and Ostendorf, 1996). Recently several other methods in the line of latent semantic analysis have been proposed and used in LM adaptation, such as latent semantic analysis (LSA) (Bellegarda, 2000), probabilistic latent semantic analysis (PLSA) (Gildea and Hofmann, 1999), and LDA (Blei et al., 2003). Most of these existing approaches are based on the “bag of words” model to represent documents, where all the words are treated equally and no relation or association between words is considered. Unlike prior work in LM adaptation, this paper investigates how to effectively leverage named entity information for latent topic analysis. Named entities are very common in domains such as newswire or broadcast news, and carry valuable information, which we hypothesize is topic indicative and useful for latent topic analysis. We compare different latent topic generation approaches as well as model adaptation methods, and propose an LDA based dynamic weighting method for the topic mixture model. 
Furthermore, we expand 672 named entities by incorporating other content words, in order to capture more topic information. Our experimental results show that the proposed method of incorporating named information in LM adaptation is effective. In addition, we find that for the LDA based adaptation scheme, adding more content words and increasing the number of topics can further improve the performance significantly. The paper is organized as follows. In Section 2 we review some related work. Section 3 describes in detail our unsupervised LM adaptation approach using named entities. Experimental results are presented and discussed in Section 4. Conclusion and future work appear in Section 5. 2 Related Work There has been a lot of previous related work on LM adaptation. Suzuki and Gao (2005) compared different supervised LM adaptation approaches, and showed that three discriminative methods significantly outperform the maximum a posteriori (MAP) method. For unsupervised LM adaptation, an earlier attempt is a cache-based model (Kuhn and Mori, 1990), developed based on the assumption that words appearing earlier in a document are likely to appear again. The cache concept has also been used to increase the probability of unseen but topically related words, for example, the triggerbased LM adaptation using the maximum entropy approach (Rosenfeld, 1996). Latent topic analysis has recently been investigated extensively for language modeling. Iyer and Ostendorf (1996) used hard clustering to obtain topic clusters for LM adaptation, where a single topic is assigned to each document. Bellegarda (2000) employed Latent Semantic Analysis (LSA) to map documents into implicit topic sub-spaces and demonstrated significant reduction in perplexity and word error rate (WER). Its probabilistic extension, PLSA, is powerful for characterizing topics and documents in a probabilistic space and has been used in LM adaptation. For example, Gildea and Hofmann (1999) reported noticeable perplexity reduction via a dynamic combination of many unigram topic models with a generic trigram model. Proposed by Blei et al. (2003), Latent Dirichlet Allocation (LDA) loosens the constraint of the document-specific fixed weights by using a prior distribution and has quickly become one of the most popular probabilistic text modeling techniques. LDA can overcome the drawbacks in the PLSA model, and has been shown to outperform PLSA in corpus perplexity and text classification experiments (Blei et al., 2003). Tam and Schultz (2005) successfully applied the LDA model to unsupervised LM adaptation by interpolating the background LM with the dynamic unigram LM estimated by the LDA model. Hsu and Glass (2006) investigated using hidden Markov model with LDA to allow for both topic and style adaptation. Mrva and Woodland (2006) achieved WER reduction on broadcast conversation recognition using an LDA based adaptation approach that effectively combined the LMs trained from corpora with different styles: broadcast news and broadcast conversation data. In this paper, we investigate unsupervised LM adaptation using clustering and LDA based topic analysis. Unlike the clustering based interpolation method as in (Iyer and Ostendorf, 1996), we explore different distance measure methods for topic analysis. Different from the LDA based framework as in (Tam and Schultz, 2005), we propose a novel dynamic weighting scheme for the topic adapted LM. 
More importantly, the focus of our work is to investigate the role of named entity information in LM adaptation, which to our knowledge has not been explored. 3 Unsupervised LM Adaptation Integrating Named Entities (NEs) 3.1 Overview of the NE-driven LM Adaptation Framework Figure 1 shows our unsupervised LM adaptation framework using NEs. For training, we use the text collection to train the generic word-based N-gram LM. Then we apply named entity recognition (NER) and topic analysis to train multiple topic specific N-gram LMs. During testing, NER is performed on each test document, and then a dynamically adaptive LM based on the topic analysis result is combined with the general LM. Note that in this figure, we evaluate the performance of LM adaptation using the perplexity measure. We will evaluate this framework for N-best or lattice rescoring in speech recognition in the future. In our experiments, different topic analysis methods combined with different topic matching and adaptive schemes result in several LM adapta673 tion paradigms, which are described below in details. Training Text Test Text NER NER Latent Topic Analysis Compute Perplexity Generic N-gram Training Topic Model Training Topic Matching Topic Model Adaptation Model Interpolation Figure 1. Framework of NE-driven LM adaptation. 3.2 NE-based Clustering for LM Adaptation Clustering is a simple unsupervised topic analysis method. We use NEs to construct feature vectors for the documents, rather than considering all the words as in most previous work. We use the CLUTO1 toolkit to perform clustering. It finds a predefined number of clusters based on a specific criterion, for which we chose the following function: ∑∑ = ∈ = K i S u v k i u v sim S S S 1 , * 2 1 ) , ( max arg ) ( L where K is the desired number of clusters, Si is the set of documents belonging to the ith cluster, v and u represent two documents, and sim(v, u) is the similarity between them. We use the cosine distance to measure the similarity between two documents: || || || || ) , ( u v u v u v sim r r r r ⋅ ⋅ = (1) where vr and ur are the feature vectors representing the two documents respectively, in our experiments composed of NEs. For clustering, the elements in every feature vector are scaled based on their term frequency and inverse document fre 1 Available at http://glaros.dtc.umn.edu/gkhome/views/cluto quency, a concept widely used in information retrieval. After clustering, we train an N-gram LM, called a topic LM, for each cluster using the documents in it. During testing, we identify the ‘topic’ for the test document, and interpolate the topic specific LM with the background LM, that is, if the test document belongs to the cluster S*, we can predict a word wk in the document given the word’s history hk using the following equation: ) | ( ) 1( ) | ( ) | ( * k k S Topic k k General k k h w p h w p h w p − − + = λ λ (2) where λ is the interpolation weight. We investigate two approaches to find the topic assignment S* for a given test document. (A) cross-entropy measure For a test document d=w1,w2,…,wn with a word distribution pd(w) and a cluster S with a topic LM ps(w), the cross entropy CE(d, S) can be computed as: ∑ = − = = n i i s i d s d w p w p p p H S d CE 1 2 )) ( ( log ) ( ) , ( ) , ( From the information theoretic perspective, the cluster with the lower cross entropy value is expected to be more topically correlated to the test document. 
For each test document, we compute the cross entropy values according to different clusters, and select the cluster S* that satisfies: ) , ( min arg 1 * i K i S d CE S ≤ ≤ = (B) cosine similarity For each cluster, its centroid can be obtained by: ∑ = = in k ik i i u n cv 1 1 where uik is the vector for the kth document in the ith cluster, and ni is the number of documents in the ith cluster. The distance between the test document and a cluster can then be easily measured by the cosine similarity function as in Equation (1). Our goal here is to find the cluster S* which the test document is closest to, that is, || || || || max arg 1 * i i K i cv d cv d S ⋅ ⋅ = ≤ ≤ r r 674 where d r is the feature vector for the test document. 3.3 NE-based LDA for LM Adaptation LDA model (Blei et al., 2003) has been introduced as a new, semantically consistent generative model, which overcomes overfitting and the problem of generating new documents in PLSA. It is a threelevel hierarchical Bayesian model. Based on the LDA model, a document d is generated as follows. • Sample a vector of K topic mixture weights θ from a prior Dirichlet distribution with parameter α : ∏ = − = K k k k f 1 1 ) ; ( α θ α θ • For each word w in d, pick a topic k from the multinomial distribution θ . • Pick a word w from the multinomial distribution k w, β given the kth topic. For a document d=w1,w2,…wn, the LDA model assigns it the following probability: ∫ ∏∑ ⎟⎟ ⎠ ⎞ ⎜⎜ ⎝ ⎛ ⋅ = = = θ θ α θ θ β d f d p n i K k k k wi ) ; ( ) ( 1 1 We use the MATLAB topic Toolbox 1.3 (Griffiths et al., 2004) in the training set to obtain the document-topic matrix, DP, and the word-topic matrix, WP. Note that here “words” correspond to the elements in the feature vector used to represent the document (e.g., NEs). In the DP matrix, an entry cik represents the counts of words in a document di that are from a topic zk (k=1,2,…,K). In the WP matrix, an entry fjk represents the frequency of a word wj generated from a topic zk (k=1,2,…,K) over the training set. For training, we assign a topic zi * to a document di such that ik K k i c z ≤ ≤ = 1 * max arg . Based on the documents belonging to the different topics, K topic Ngram LMs are trained. This “hard clustering” strategy allows us to train an LM that accounts for all the words rather than simply those NEs used in LDA analysis, as well as use higher order N-gram LMs, unlike the ‘unigram’ based LDA in previous work. For a test document d = w1,w2,…,wn that is generated by multiple topics under the LDA assumption, we formulate a dynamically adapted topic model using the mixture of LMs from different topics: ∑ = − × = K i k k z i k k adapt LDA h w p h w p i 1 ) | ( ) | ( γ where ) | ( k k z h w p i stands for the ith topic LM, and γi is the mixture weight. Different from the idea of dynamic topic adaptation in (Tam and Schultz, 2005), we propose a new weighting scheme to calculate γi that directly uses the two resulting matrices from LDA analysis during training: ∑ = = n j j j k k d w p w z p 1 ) | ( ) | ( γ ∑ ∑ = = = = n q q j j K p jp jk j k w freq w freq d w p f f w z p 1 1 ) ( ) ( ) | ( , ) | ( where freq(wj) is the frequency of a word wj in the document d. Other notations are consistent with the previous definitions. 
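To make the dynamic weighting scheme above concrete, the following Python/NumPy sketch computes the mixture weights γ from the word–topic matrix WP and the feature tokens of a test document, and then mixes the topic LM probabilities. The variable names (`WP`, `vocab`, `topic_lm_probs`) are ours and stand in for whatever the LDA toolkit and the SRILM-trained topic LMs actually expose; this is an illustration of the equations, not the authors' code.

```python
import numpy as np

def topic_mixture_weights(WP, vocab, doc_tokens):
    """gamma_k = sum_j p(z_k | w_j) * p(w_j | d), as defined above.

    WP         -- V x K word-topic frequency matrix from LDA training (entries f_jk)
    vocab      -- maps a feature token (an NE or expanded content word) to its row in WP
    doc_tokens -- feature tokens observed in the test document
    """
    K = WP.shape[1]
    counts = {}
    for w in doc_tokens:
        if w in vocab:                      # tokens unseen in training are skipped
            counts[w] = counts.get(w, 0) + 1
    total = float(sum(counts.values()))
    if total == 0.0:                        # no usable features: fall back to uniform weights
        return np.full(K, 1.0 / K)
    gamma = np.zeros(K)
    for w, freq in counts.items():
        row = WP[vocab[w]].astype(float)            # f_j1 ... f_jK
        gamma += (freq / total) * (row / row.sum()) # p(w_j | d) * p(z_k | w_j)
    return gamma

def lda_adapted_prob(topic_lm_probs, gamma):
    """p_LDA-adapt(w_k | h_k) = sum_i gamma_i * p_{z_i}(w_k | h_k).
    topic_lm_probs[i] is the probability of the word under the i-th topic n-gram LM."""
    return float(np.dot(gamma, topic_lm_probs))
```

Because γ depends only on the features of the test document, the K topic LMs themselves stay fixed; only their mixing proportions change from document to document.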
Then we interpolate this adapted topic model with the generic LM, similar to Equation (2): ) | ( ) 1( ) | ( ) | ( k k adapt LDA k k General k k h w p h w p h w p − − + = λ λ (3) 4 Experiments 4.1 Experimental Setup # of files # of words # of NEs Training Data 23,985 7,345,644 590,656 Test Data 2,661 831,283 65,867 Table 1. Statistics of our experimental data. The data set we used is the LDC Mandarin TDT4 corpus, consisting of 337 broadcast news shows with transcriptions. These files were split into small pieces, which we call documents here, according to the topic segmentation information marked in the LDC’s transcription. In total, there are 26,646 such documents in our data set. We randomly chose 2661 files as the test data (which is balanced for different news sources). The rest was used for topic analysis and also generic LM training. Punctuation marks were used to determine sentences in the transcriptions. We used the NYU NE tagger (Ji and Grishman, 2005) to recognize four kinds of NEs: Person, Location, Organi675 zation, and Geo-political. Table 1 shows the statistics of the data set in our experiments. We trained trigram LMs using the SRILM toolkit (Stolcke, 2002). A fixed weight (i.e., λ in Equation (2) and (3)) was used for the entire test set when interpolating the generic LM with the adapted topic LM. Perplexity was used to measure the performance of different adapted LMs in our experiments. 4.2 Latent Topic Analysis Results Topic # of Files Top 10 Descriptive Items (Translated from Chinese) 1 3526 U.S., Israel, Washington, Palestine, Bush, Clinton, Gore, Voice of America, Mid-East, Republican Party 2 3067 Taiwan, Taipei, Mainland, Taipei City, Chinese People’s Broadcasting Station, Shuibian Chen, the Executive Yuan, the Legislative Yuan, Democratic Progressive Party, Nationalist Party 3 4857 Singapore, Japan, Hong Kong, Indonesia, Asia, Tokyo, Malaysia, Thailand, World, China 4 4495 World, German, Landon, Russia, France, England, Xinhua News Agency, Europe, U.S., Italy Clustering Based 5 7586 China, Beijing, Nation, China Central Television Station, Xinhua News Agency, Shanghai, World, State Council, Zemin Jiang, Beijing City 1 5859 China, Japan, Hong Kong, Beijing, Shanghai, World, Zemin Jiang, Macao, China Central Television Station, Africa 2 3794 U.S., Bush, World, Gore, South Korea, North Korea, Clinton, George Walker Bush, Asia, Thailand 3 4640 Singapore, Indonesia, Team, Israel, Europe, Germany, England, France, Palestine, Wahid 4 4623 Taiwan, Russia, Mainland, India, Taipei, Shuibian Chen, Philippine, Estrada, Communist Party of China, RUS. LDA Based 5 4729 Xinhua News Agency, Nation, Beijing, World, Canada, Sydney, Brazil, Beijing City, Education Ministry, Cuba Table 2. Topic analysis results using clustering and LDA (the number of documents and the top 10 words (NEs) in each cluster). For latent topic analysis, we investigated two approaches using named entities, i.e., clustering and LDA. 5 latent topics were used in both approaches. Table 2 illustrates the resulting topics using the top 10 words in each topic. We can see that the words in the same cluster share some similarity and that the words in different clusters seem to be ‘topically’ different. Note that errors from automatic NE recognition may impact the clustering results. For example, ‘队/team’ in the table (in topic 3 in LDA results) is an error and is less discriminative for topic analysis. 
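As a rough sketch of how the pieces in Sections 3.2 through 4.1 fit together at test time, the Python fragment below selects a cluster by cross entropy (the CL-CE criterion), interpolates the chosen topic LM with the generic LM as in Equations (2) and (3), and scores a document by perplexity. The probability functions are placeholders for the SRILM-trained trigram models; their `p(word, history)` interface is an assumption of this sketch, not the toolkit's actual API, and treating the cross entropy as a per-token average under the cluster LM is our reading of the formula.

```python
import math

def avg_neg_logprob(words, lm_prob):
    """Average negative log2 probability per token; with the empirical word
    distribution p_d this equals the cross entropy H(p_d, p_S) used for CL-CE."""
    logp = 0.0
    for i, w in enumerate(words):
        hist = tuple(words[max(0, i - 2):i])      # trigram history
        logp += math.log(lm_prob(w, hist), 2)
    return -logp / len(words)

def select_cluster_ce(words, cluster_lm_probs):
    """S* = argmin_i CE(d, S_i) over the K cluster LMs."""
    return min(range(len(cluster_lm_probs)),
               key=lambda i: avg_neg_logprob(words, cluster_lm_probs[i]))

def interpolated_prob(w, hist, p_general, p_topic, lam):
    """Equations (2)/(3): lam * p_General(w|h) + (1 - lam) * p_Topic(w|h)."""
    return lam * p_general(w, hist) + (1.0 - lam) * p_topic(w, hist)

def perplexity(words, p_general, p_topic, lam):
    """Perplexity under the interpolated model; assumes both component models
    are smoothed so every token receives non-zero probability."""
    logp = sum(math.log(interpolated_prob(w, tuple(words[max(0, i - 2):i]),
                                          p_general, p_topic, lam), 2)
               for i, w in enumerate(words))
    return 2.0 ** (-logp / len(words))
```

Note that perplexity and cross entropy are related by PPL = 2^CE, which is why the CL-CE selection criterion and the perplexity evaluation can share the same machinery.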
Table 3 shows the perplexity of the test set using the background LM (baseline) and each of the topic LMs, from clustering and LDA respectively. We can see that for the entire test set, a topic LM generally performs much worse than the generic LM. This is expected, since the size of a topic cluster is much smaller than that of the entire training set, and the test set may contain documents from different topics. However, we found that when using an optimal topic model (i.e., the topic LM that yields the lowest perplexity among the 5 topic LMs), 23.45% of the documents in the test set have a lower perplexity value than that obtained from the generic LM. This suggests that a topic model could benefit LM adaptation and motivates a dynamic topic adaptation approach for different test documents. Perplexity Baseline 502.02 CL-1 1054.36 CL-2 1399.16 CL-3 919.237 CL-4 962.996 CL-5 981.072 LDA-1 1224.54 LDA-2 1375.97 LDA-3 1330.44 LDA-4 1328.81 LDA-5 1287.05 Table 3. Perplexity results using the baseline LM vs. the single topic LMs. 4.3 Clustering vs. LDA Based LM Adaptation In this section, we compare three LM adaptation paradigms. As we discussed in Section 3, two of them are clustering based topic analysis, but using different strategies to choose the optimal cluster; and the third one is based on LDA analysis that 676 uses a dynamic weighting scheme for adapted topic mixture model. Figure 2 shows the perplexity results using different interpolation parameters with the general LM. 5 topics were used in both clustering and LDA based approaches (as in Section 4.2). “CLCE” means clustering based topic analysis via cross entropy criterion, “CL-Cos” represents clustering based topic analysis via cosine distance criterion, and “LDA-MIX” denotes LDA based topic mixture model, which uses 5 mixture topic LMs. 440 450 460 470 480 490 500 510 520 530 540 0.4 0.5 0.6 0.7 0.8 λ Perplexity Baseline CL-CE CL-Cos LDA-MIX Figure 2. Perplexity using different LM adaptation approaches and different interpolation weightsλ with the general LM. We observe that all three adaptation approaches outperform the baseline when using a proper interpolation weight. “CL-CE” yields the best perplexity of 469.75 when λ is 0.5, a reduction of 6.46% against the baseline perplexity of 502.02. For clustering based adaptation, between the two strategies used to determine the topic for a test document, “CL-CE” outperforms “CL-Cos”. This indicates that the cosine distance measure using only names is less effective than cross entropy for LM adaptation. In addition, cosine similarity does not match perplexity as well as the CE-based distance measure. Similarly, for the LDA based approach, using only NEs may not be sufficient to find appropriate weights for the topic model. This also explains the bigger interpolation weight for the general LM in CL-Cos and LDA-MIX than that in “CL-CE”. For a fair comparison between the clustering and LDA based LM adaptation approaches, we also evaluated using the topic mixture model for the clustering based approach and using only one topic in the LDA based method. For clustering based adaptation, we constructed topic mixture models using the weights obtained from a linear normalization of the two distance measures presented in Section 3.2. In order to use only one topic model in LDA based adaptation, we chose the topic cluster that has the largest weight in the adapted topic mixture model (as in Sec 3.3). 
Table 4 shows the perplexity for the three approaches (CL-Cos, CL-CE, and LDA) using the mixture topic models versus a single topic LM. We observe similar trends as in Figure 2 when changing the interpolation weight λ with the generic LM; therefore, in Table 4 we only present results for one optimal interpolation weight. Single-Topic Mixture-Topic CL-Cos (λ =0.7) 498.01 497.86 CL-CE (λ =0.5) 469.75 483.09 LDA (λ =0.7) 488.96 489.14 Table 4. Perplexity results using the adapted topic model (single vs. mixture) for clustering and LDA based approaches. We can see from Table 4 that using the mixture model in clustering based adaptation does not improve performance. This may be attributed to how the interpolation weights are calculated. For example, only names are used in cosine distance, and the normalized distance may not be appropriate weights. We also notice negligible difference when only using one topic in the LDA based framework. This might be because of the small number of topics currently used. Intuitively, using a mixture model should yield better performance, since LDA itself is based on the assumption of generating words from multiple topics. We will investigate the impact of the number of topics on LM adaptation in Section 4.5. 4.4 Effect of Different Feature Configurations on LM Adaptation We suspect that using only named entities may not provide enough information about the ‘topics’ of the documents, therefore we investigate expanding the feature vectors with other words. Since generally content words are more indicative of the topic of a document than function words, we used a POS tagger (Hillard et al., 2006) to select words for latent topic analysis. We kept words with three POS classes: noun (NN, NR, NT), verb (VV), and modi677 fier (JJ), selected from the LDC POS set2. This is similar to the removal of stop words widely used in information retrieval. Figure 3 shows the perplexity results for three different feature configurations, namely, all-words (w), names (n), and names plus syntactically filtered items (n+), for the CL-CE and LDA based approaches. The LDA based LM adaptation paradigm supports our hypothesis. Using named information instead of all the words seems to efficiently eliminate redundant information and achieve better performance. In addition, expanding named entities with syntactically filtered items yields further improvement. For CL-CE, using named information achieves the best result among the three configurations. This might be because that the clustering method is less powerful in analyzing the principal components as well as dealing with redundant information than the LDA model. 460 465 470 475 480 485 490 495 500 505 0.4 0.5 0.6 0.7 0.8 λ Perplexity CL-CE(w) CL-CE(n) CL-CE(n+) LDA-MIX(w) LDA-MIX(n) LDA-MIX(n+) Figure 3. Comparison of perplexity using different feature configurations. 4.5 Impact of Predefined Topic Number on LM Adaptation LDA based topic analysis typically uses a large number of topics to capture the fine grained topic space. In this section, we evaluate the effect of the number of topics on LM adaptation. For comparison, we evaluate this for both LDA and CL-CE, similar to Section 4.3. We use the “n+” feature configuration as in Section 4.4, that is, names plus POS filtered items. 
When using a single-topic adapted model in the LDA or CL-CE based approach, finer-grained topic analysis (i.e., increasing the number of topics) leads to worse performance mainly because of the smaller clusters for each topic; therefore, we only show results here using 2 See http://www.cis.upenn.edu/~chinese/posguide.3rd.ch.pdf the mixture topic adapted models. Figure 4 shows the perplexity results using different numbers of topics. The interpolation weightλ with the general LM is 0.5 in all the experiments. For the topic mixture LMs, we used a maximum of 9 mixtures (a limitation in the current SRILM toolkit) when the number of topics is greater than 9. We observe that as the number of topics increases, the perplexity reduces significantly for LDA. When the number of topics is 50, the adapted LM using LDA achieves a perplexity reduction of 11.35% compared to using 5 topics, and 14.23% against the baseline generic LM. Therefore, using finer-grained multiple topics in dynamic adaptation improves system performance. When the number of topics increases further, e.g., to 100, the performance degrades slightly. This might be due to the limitation of the number of the topic mixtures used. A similar trend is observable for the CL-CE approach, but the effect of the topic number is much greater in LDA than CL-CE. 435.2 477.2 430.6 445.8 485.7 477.3 471.8 467.2 483.1 485.1 400 420 440 460 480 500 n=5 n=10 n=20 n=50 n=100 # of Topics Perplexity LDA CL-CE Figure 4. Perplexity results using different predefined numbers of topics for LDA and CL-CE. 4.6 Discussion As we know, although there is an increasing amount of training data available for LM training, it is still only for limited domains and styles. Creating new training data for different domains is time consuming and labor intensive, therefore it is very important to develop algorithms for LM adaptation. We investigate leveraging named entities in the LM adaptation task. Though some errors of NER may be introduced, our experimental results have shown that exploring named information for topic analysis is promising for LM adaptation. Furthermore, this framework may have other advantages. For speech recognition, using NEs for topic analysis can be less vulnerable to recognition 678 errors. For instance, we may add a simple module to compute the similarity between two NEs based on the word tokens or phonetics, and thus compensate the recognition errors inside NEs. Whereas, word-based models, such as the traditional cache LMs, may be more sensitive to recognition errors that are likely to have a negative impact on the prediction of the current word. From this point of view, our framework can potentially be more robust in the speech processing task. In addition, the number of NEs in a document is much smaller than that of the words, as shown in Table 1; hence, using NEs can also reduce the computational complexity, in particular in topic analysis for training. 5 Conclusion and Future Work We compared several unsupervised LM adaptation methods leveraging named entities, and proposed a new dynamic weighting scheme for topic mixture model based on LDA topic analysis. Experimental results have shown that the NE-driven LM adaptation approach outperforms using all the words, and yields perplexity reduction compared to the baseline generic LM. 
In addition, we find that for the LDA based method, adding other content words, combined with an increased number of topics, can further improve the performance, achieving up to 14.23% perplexity reduction compared to the baseline LM. The experiments in this paper combine models primarily through simple linear interpolation. Thus one direction of our future work is to develop algorithms to automatically learn appropriate interpolation weights. In addition, our work in this paper has only showed promising results in perplexity reduction. We will investigate using this framework of LM adaptation for N-best or lattice rescoring in speech recognition. Acknowledgements We thank Mari Ostendorf, Mei-Yuh Hwang, and Wen Wang for useful discussions, and Heng Ji for sharing the Mandarin named entity tagger. This work is supported by DARPA under Contract No. HR0011-06-C-0023. Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. References J. Bellegarda. 2000. Exploiting Latent Semantic Information in Statistical Language Modeling. In IEEE Transactions on Speech and Audio Processing. 88(80):1279-1296. D. Blei, A. Ng, and M. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research. 3:993-1022. D. Gildea and T. Hofmann. 1999. Topic-Based Language Models using EM. In Proc. of Eurospeech. T. Griffiths, M. Steyvers, D. Blei, and J. Tenenbaum. 2004. Integrating Topics and Syntax. Adv. in Neural Information Processing Systems. 17:537-544. D. Hillard, Z. Huang, H. Ji, R. Grishman, D. HakkaniTur, M. Harper, M. Ostendorf, and W. Wang. 2006. Impact of Automatic Comma Prediction on POS/Name Tagging of Speech. In Proc. of the First Workshop on Spoken Language Technology (SLT). P. Hsu and J. Glass. 2006. Style & Topic Language Model Adaptation using HMM-LDA. In Proc. of EMNLP, pp:373-381. R. Iyer and M. Ostendorf. 1996. Modeling Long Distance Dependence in Language: Topic Mixtures vs. Dynamic Cache Models. In Proc. of ICSLP. H. Ji and R. Grishman. 2005. Improving NameTagging by Reference Resolution and Relation Detection. In Proc. of ACL. pp: 411-418. R. Kneser and V. Steinbiss. 1993. On the Dynamic Adaptation of Stochastic language models. In Proc. of ICASSP, Vol 2, pp: 586-589. R. Kuhn and R.D. Mori. 1990. A Cache-Based Natural Language Model for Speech Recognition. In IEEE Transactions on Pattern Analysis and Machine Intelligence, 12: 570-583. D. Mrva and P.C. Woodland. 2006. Unsupervised Language Model Adaptation for Mandarin Broadcast Conversation Transcription. In Proc. of INTERSPEECH, pp:2206-2209. R. Rosenfeld. 1996. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. Computer, Speech and Language, 10:187-228. A. Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. In Proc. of ICSLP. H. Suzuki and J. Gao. 2005. A Comparative Study on Language Model Adaptation Techniques Using New Evaluation Metrics, In Proc. of HLT/EMNLP. Y.C. Tam and T. Schultz. 2005. Dynamic Language Model Adaptation Using Variational Bayes Inference. In Proc. of INTERSPEECH, pp:5-8. 679
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 680–687, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Coordinate Noun Phrase Disambiguation in a Generative Parsing Model Deirdre Hogan∗ Computer Science Department Trinity College Dublin Dublin 2, Ireland [email protected] Abstract In this paper we present methods for improving the disambiguation of noun phrase (NP) coordination within the framework of a lexicalised history-based parsing model. As well as reducing noise in the data, we look at modelling two main sources of information for disambiguation: symmetry in conjunct structure, and the dependency between conjunct lexical heads. Our changes to the baseline model result in an increase in NP coordination dependency f-score from 69.9% to 73.8%, which represents a relative reduction in f-score error of 13%. 1 Introduction Coordination disambiguation is a relatively little studied area, yet the correct bracketing of coordination constructions is one of the most difficult problems for natural language parsers. In the Collins parser (Collins, 1999), for example, dependencies involving coordination achieve an f-score as low as 61.8%, by far the worst performance of all dependency types. Take the phrase busloads of executives and their wives (taken from the WSJ treebank). The coordinating conjunction (CC) and and the noun phrase their wives could attach to the noun phrase executives, as illustrated in Tree 1, Figure 1. Alternatively, their wives could be incorrectly conjoined to the noun phrase busloads of executives as in Tree 2, Figure 1. ∗Now at the National Centre for Language Technology, Dublin City University, Ireland. As with PP attachment, most previous attempts at tackling coordination as a subproblem of parsing have treated it as a separate task to parsing and it is not always obvious how to integrate the methods proposed for disambiguation into existing parsing models. We therefore approach coordination disambiguation, not as a separate task, but from within the framework of a generative parsing model. As noun phrase coordination accounts for over 50% of coordination dependency error in our baseline model we focus on NP coordination. Using a model based on the generative parsing model of (Collins, 1999) Model 1, we attempt to improve the ability of the parsing model to make the correct coordination decisions. This is done in the context of parse reranking, where the n-best parses output from Bikel’s parser (Bikel, 2004) are reranked according to a generative history-based model. In Section 2 we summarise previous work on coordination disambiguation. There is often a considerable bias toward symmetry in the syntactic structure of two conjuncts and in Section 3 we introduce new parameter classes to allow the model to prefer symmetry in conjunct structure. Section 4 is concerned with modelling the dependency between conjunct head words and begins by looking at how the different handling of coordination in noun phrases and base noun phrases (NPB) affects coordination disambiguation.1 We look at how we might improve the model’s handling of coordinate head-head dependencies by altering the model so that a common 1A base noun phrase, as defined in (Collins, 1999), is a noun phrase which does not directly dominate another noun phrase, unless that noun phrase is possessive. 680 1. NP NP NPB busloads PP of NP NP NPB executives and NP NPB their wives 2. 
NP NP NPB busloads PP of NP NPB executives and NP NPB their wives Figure 1: Tree 1. The correct noun phrase parse. Tree 2. The incorrect parse for the noun phrase. parameter class is used for coordinate word probability estimation in both NPs and NPBs. In Section 4.2 we focus on improving the estimation of this parameter class by incorporating BNC data, and a measure of word similarity based on vector cosine similarity, to reduce data sparseness. In Section 5 we suggest a new head-finding rule for NPBs so that the lexicalisation process for coordinate NPBs is more similar to that of other NPs. Section 6 examines inconsistencies in the annotation of coordinate NPs in the Penn Treebank which can lead to errors in coordination disambiguation. We show how some coordinate noun phrase inconsistencies can be automatically detected and cleaned from the data sets. Section 7 details how the model is evaluated, presents the experiments made and gives a breakdown of results. 2 Previous Work Most previous attempts at tackling coordination have focused on a particular type of NP coordination to disambiguate. Both Resnik (1999) and Nakov and Hearst (2005) consider NP coordinations of the form n1 and n2 n3 where two structural analyses are possible: ((n1 and n2) n3) and ((n1) and (n2 n3)). They aim to show more structure than is shown in trees following the Penn guidelines, whereas in our approach we aim to reproduce Penn guideline trees. To resolve the ambiguities, Resnik combines number agreement information of candidate conjoined nouns, an information theoretic measure of semantic similarity, and a measure of the appropriateness of noun-noun modification. Nakov and Hearst (2005) disambiguate by combining Web-based statistics on head word co-occurrences with other mainly heuristic information sources. A probabilistic approach is presented in (Goldberg, 1999), where an unsupervised maximum entropy statistical model is used to disambiguate coordinate noun phrases of the form n1 preposition n2 cc n3. Here the problem is framed as an attachment decision: does n3 attach ‘high’ to the first noun, n1, or ‘low’ to n2? In (Agarwal and Boggess, 1992) the task is to identify pre-CC conjuncts which appear in text that has been part-of-speech (POS) tagged and semiparsed, as well as tagged with semantic labels specific to the domain. The identification of the preCC conjunct is based on heuristics which choose the pre-CC conjunct that maximises the symmetry between pre- and post-CC conjuncts. Insofar as we do not separate coordination disambiguation from the overall parsing task, our approach resembles the efforts to improve coordination disambiguation in (Kurohashi, 1994; Ratnaparkhi, 1994; Charniak and Johnson, 2005). In (Kurohashi, 1994) coordination disambiguation is carried out as the first component of a Japanese dependency parser using a technique which calculates similarity between series of words from the left and right of a conjunction. Similarity is measured based on matching POS tags, matching words and a thesaurus-based measure of semantic similarity. In both the discriminative reranker of Ratnaparkhi et al. (1994) and that of Charniak and Johnson (2005) features are included to capture syntactic parallelism across conjuncts at various depths. 3 Modelling Symmetry Between Conjuncts There is often a considerable bias toward symmetry in the syntactic structure of two conjuncts, see for example (Dubey et al., 2005). 
Take Figure 2: If we take as level 0 the level in the coordinate sub681 NP1(plains) NP2(plains) NP3(plains) DT6 the JJ5 high NNS4 plains PP7(of) IN8 of NP9(T exas) NNP10 Texas CC11 and NP11(states) NP12(states) DT15 the JJ14 northern NNS13 states PP16(of) IN17 of NP18(Delta) DT20 the NNP19 Delta Figure 2: Example of symmetry in conjunct structure in a lexicalised subtree. tree where the coordinating conjunction CC occurs, then there is exact symmetry in the two conjuncts in terms of non-terminal labels and head word part-ofspeech tags for levels 0, 1 and 2. Learning a bias toward parallelism in conjuncts should improve the parsing model’s ability to correctly attach a coordination conjunction and second conjunct to the correct position in the tree. In history-based models, features are limited to being functions of the tree generated so far. The task is to incorporate a feature into the model that captures a particular bias yet still adheres to derivationbased restrictions. Parses are generated top-down, head-first, left-to-right. Each node in the tree in Figure 2 is annotated with the order the nodes are generated (we omit, for the sake of clarity, the generation of the STOP nodes). Note that when the decision to attach the second conjunct to the head conjunct is being made (i.e. Step 11, when the CC and NP(states) nodes are being generated) the subtree rooted at NP(states) has not yet been generated. Thus at the point that the conjunct attachment decision is made it is not possible to use information about symmetry of conjunct structure, as the structure of the second conjunct is not yet known. It is possible, however, to condition on structure of the already generated head conjunct when building the internal structure of the second conjunct. In our model when the structure of the second conjunct is being generated we condition on features which are functions of the first conjunct. When generating a node Ni in the second conjunct, we retrieve the corresponding node NipreCC in the first conjunct, via a left to right traversal of the first conjunct. For example, from Figure 2 the pre-CC node NP(Texas) is the node corresponding to NP(Delta) in the postCC conjunct. From NipreCC we extract information, such as its part-of-speech, for use as a feature when predicting a POS tag for the corresponding node in the post-CC conjunct. When generating a second conjunct, instead of the usual parameter classes for estimating the probability of the head label Ch and the POS label of a dependent node ti, we created two new parameter classes which are used only in the generation of second conjunct nodes: PccCh(Ch|γ(headC), Cp, wp, tp, tgp, depth) (1) Pccti(ti|α(headC), dir, Cp, wp, tp, dist, ti 1, ti 2, depth) (2) where γ(headC) returns the non-terminal label of NipreCC for the node in question and α(headC) returns the POS tag of NipreCC. Both functions return +NOMATCH+ if there is no NipreCCfor the node. Depth is the level of the post-CC conjunct node being generated. 4 Modelling Coordinate Head Words Some noun pairs are more likely to be conjoined than others. Take again the trees in Figure 1. The two head nouns coordinated in Tree 1 are executives and wives, and in Tree 2: busloads and wives. Clearly, the former pair of head nouns is more likely and, for the purpose of discrimination, the model would benefit if it could learn that executives and wives is a more likely combination than busloads and wives. 
Bilexical head-head dependencies of the type found in coordinate structures are a somewhat dif682 ferent class of dependency to modifier-head dependencies. In the fat cat, for example, there is clearly one head to the noun phrase: cat. In cats and dogs however there are two heads, though in the parsing model just one is chosen, somewhat arbitrarily, to head the entire noun phrase. In the baseline model there is essentially one parameter class for the estimation of word probabilities: Pword(wi|H(i)) (3) where wi is the lexical head of constituent i and H(i) is the history of the constituent. The history is made up of conditioning features chosen from structure that has already been determined in the topdown derivation of the tree. In Section 4.1 we discuss how though the coordinate head-head dependency is captured for NPs, it is not captured for NPBs. We look at how we might improve the model’s handling of coordinate headhead dependencies by altering the model so that a common parameter class in (4) is used for coordinate word probability estimation in both NPs and NPBs. PcoordW ord(wi|wp, H(i)) (4) In Section 4.2 we focus on improving the estimation of this parameter class by reducing data sparseness. 4.1 Extending PcoordW ord to Coordinate NPBs In the baseline model each node in the tree is annotated with a coordination flag which is set to true for the node immediately following the coordinating conjunction. For coordinate NPs the head-head dependency is captured when this flag is set to true. In Figure 1, discarding for simplicity the other features in the history, the probability of the coordinate head wives, is estimated in Tree 1 as: Pword(wi = wives|coord = true, wp = executives, ...) (5) and in Tree 2: Pword(wi = wives|coord = true, wp = busloads, ...) (6) where wp is the head word of the node to which the node headed by wi is attaching and coord is the coordination flag. Unlike NPs, in NPBs (i.e. flat, non-recursive NPs) the coordination flag is not used to mark whether a node is a coordinated head or not. This flag is always set to false for NPBs. In addition, modifiers within NPBs are conditioned on the previously generated modifier rather than the head of the phrase.2 This means that in an NPB such as (cats and dogs), the estimate for the word cats will look like: Pword(wi = cats|coord = false, wp = and, ...) (7) In our new model, for NPs, when the coordination flag is set to true, we use the parameter class in (4) to estimate the probability of one lexical head noun, given another. For NPBs, if a noun is generated directly after a CC then it is taken to be a coordinate head, wi, and conditioned on the noun generated before the coordinating conjunction, which is chosen as wp, and also estimated using (4). 4.2 Estimating the PcoordW ord parameter class Data for bilexical statistics are particularly sparse. In order to decrease the sparseness of the coordinate head noun data, we extracted from the BNC examples of coordinate head noun pairs. We extracted all noun pairs occurring in a pattern of the form: noun cc noun, as well as lists of any number of nouns separated by commas and ending in cc noun.3 To this data we added all head noun pairs from the WSJ that occurred together in a coordinate noun phrase, identified when the coordination flag was set to true. Every occurrence ni CC nj was also counted as an occurrence of nj CC ni. This further helps reduce sparseness. 
The probability of one noun, ni being coordinated with another nj can be calculated simply as: Plex(ni|nj) = |ninj| |nj| (8) Again to reduce data sparseness, we introduce a measure of word similarity. A word can be represented as a vector where every dimension of the vector represents another word type. The values of the vector components, the term weights, are derived from word co-occurrence counts. Cosine similarity between two word vectors can then be used to measure the similarity of two words. Measures of 2A full explanation of the handling of coordination in the model is given in (Bikel, 2004). 3Extracting coordinate noun pairs from the BNC in such a fashion follows work on networks of concepts described in (Widdows, 2004). 683 similarity between words based on similarity of cooccurrence vectors have been used before, for example, for word sense disambiguation (Sch¨utze, 1998) and for PP-attachment disambiguation (Zhao and Lin, 2004). Our measure resembles that of (Caraballo, 99) where co-occurrence is also defined with respect to coordination patterns, although the experimental details in terms of data collection and vector term weights differ. We can now incorporate the similarity measure into the probability estimate of (8) to give a new k-NN style method of estimating bilexical statistics based on weighting events according to the word similarity measure: Psim(ni|nj) = P nx∈N(nj) sim(nj, nx)|ninx| P nx∈N(nj) sim(nj, nx)|nx| (9) where sim(nj, nx) is a similarity score between words nj and nx and N(nj) is the set of words in the neighbourhood of nj. This neighbourhood can be based on the k-nearest neighbours of nj, where nearness is measured with the similarity function. In order to smooth the bilexical estimate in (9) we combine it with another estimate, trained from WSJ data, by way of linear interpolation: PcoordW ord(ni|nj) = λnjPsim(ni|nj) + (1 −λnj)PMLE(ni|ti) (10) where ti is the POS tag of word ni, PMLE(ni|ti) is the maximum-likelihood estimate calculated from annotated WSJ data, and λnj is calculated as in (11). In (11) we adapt the Witten-Bell method for the calculation of the weight λ, as used in the Collins parser, so that it incorporates the similarity measure for all words in the neighbourhood of nj. λnj = P nx∈N(nj ) sim(nj, nx)|nx| P nx∈N(nj) sim(nj, nx)(|nx| + CD(nx)) (11) where C is a constant that can be optimised using held-out data and D(nj) is the diversity of a word nj: the number of distinct words with which nj has been coordinated in the training set. The estimate in (9) can be viewed as the estimate with the more general history context than that of (8) because the context includes not only nj but also words similar to nj. The final probability estimate for PcoordW ord is calculated as the most specific estimate, Plex, combined via regular Witten-Bell interpolation with the estimate in (10). 5 NPB Head-Finding Rules Head-finding rules for coordinate NPBs differ from coordinate NPs.4 Take the following two versions of the noun phrase hard work and harmony: (c) (NP (NPB hard work and harmony)) and (d) (NP (NP (NPB hard work)) and (NP (NPB harmony))). In the first example, harmony is chosen as head word of the NP; in example (d) the head of the entire NP is work. The choice of head affects the various dependencies in the model. 
However, in the case of two coordinate NPs which, as in the above example, cover the same span of words and differ only in whether the coordinate noun phrase is flat as in (c) or structured as in (d), the choice of head for the phrase is not particularly informative. In both cases the head words being coordinated are the same and either word could plausibly head the phrase; discrimination between trees in such cases should not be influenced by the choice of head, but rather by other, salient features that distinguish the trees.5 We would like to alter the head-finding rules for coordinate NPBs so that, in cases like those above, the word chosen to head the entire coordinate noun phrase would be the same for both base and nonbase noun phrases. We experiment with slightly modified head-finding rules for coordinate NPBs. In an NPB such as NPB →n1 CC n2 n3, the head rules remain unchanged and the head of the phrase is (usually) the rightmost noun in the phrase. Thus, when n2 is immediately followed by another noun the default is to assume nominal modifier coordination and the head rules stay the same. The modification we make to the head rules for NPBs is as follows: when n2 is not immediately followed by a noun then the noun chosen to head the entire phrase is n1. 6 Inconsistencies in WSJ Coordinate NP Annotation An inspection of NP coordination error in the baseline model revealed inconsistencies in WSJ annota4See (Collins, 1999) for the rules used in the baseline model. 5For example, it would be better if discrimination was largely based on whether hard modifies both work and harmony (c), or whether it modifies work alone (d). 684 tion. In this section we outline some types of coordinate NP inconsistency and outline a method for detecting some of these inconsistencies, which we later use to automatically clean noise from the data. Eliminating noise from treebanks has been previously used successfully to increase overall parser accuracy (Dickinson and Meurers, 2005). The annotation of NPs in the Penn Treebank (Bies et al., 1995) follows somewhat different guidelines to that of other syntactic categories. Because their interpretation is so ambiguous, no internal structure is shown for nominal modifiers. For NPs with more than one head noun, if the only unshared modifiers in the constituent are nominal modifiers, then a flat structure is also given. Thus in (NP the Manhattan phone book and tour guide)6 a flat structure is given because although the is a non-nominal modifier, it is shared, modifying both tour guide and phone book, and all other modifiers in the phrase are nominal. However, we found that out of 1,417 examples of NP coordination in sections 02 to 21, involving phrases containing only nouns (common nouns or a mixture of common and proper nouns) and the coordinating conjunction, as many as 21.3%, contrary to the guidelines, were given internal structure, instead of a flat annotation. When all proper nouns are involved this phenomenon is even more common.7 Another common source of inconsistency in coordinate noun phrase bracketing occurs when a nonnominal modifier appears in the coordinate noun phrase. As previously discussed, according to the guidelines the modifier is annotated flat if it is shared. When the non-nominal modifier is unshared, more internal structure is shown, as in: (NP (NP (NNS fangs)) (CC and) (NP (JJ pointed) (NNS ears))). 
However, the following two structured phrases, for example, were given a completely flat structure in the treebank: (a) (NP (NP (NN oversight))(CC and) (NP (JJ disciplinary)(NNS procedures))), (b) (NP (ADJP (JJ moderate)(CC and)(JJ low-cost))(NN housing)). If we follow the guidelines then any coordinate NPB which ends with the following tag sequence can be automatically detected as incorrectly bracketed: CC/nonnominal modifier/noun. This is because either the 6In this section we do not show the NPB levels. 7In the guidelines it is recognised however that proper names are frequently annotated with internal structure. non-nominal modifier, which is unambiguously unshared, is part of a noun phrase as (a) above, or it conjoined with another modifier as in (b). We found 202 examples of this in the training set, out of a total of 4,895 coordinate base noun phrases. Finally, inconsistencies in POS tagging can also lead to problems with coordination. Take the bigram executive officer. We found 151 examples in the training set of a base noun phrase which ended with this bigram. 48% of the cases were POS tagged JJ NN, 52% tagged NN NN. 8 This has repercussions for coordinate noun phrase structure, as the presence of an adjectival pre-modifier indicates a structured annotation should be given. These inconsistencies pose problems both for training and testing. With a relatively large amount of noise in the training set the model learns to give structures, which should be very unlikely, too high a probability. In testing, given inconsistencies in the gold standard trees, it becomes more difficult to judge how well the model is doing. Although it would be difficult to automatically detect the POS tagging errors, the other inconsistencies outlined above can be detected automatically by simple pattern matching. Automatically eliminating such examples is a simple method of cleaning the data. 7 Experimental Evaluation We use a parsing model similar to that described in (Hogan, 2005) which is based on (Collins, 1999) Model 1 and uses k-NN for parameter estimation. The n-best output from Bikel’s parser (Bikel, 2004) is reranked according to this k-NN parsing model, which achieves an f-score of 89.4% on section 23. For the coordination experiments, sections 02 to 21 are used for training, section 23 for testing and the remaining sections for validation. Results are for sentences containing 40 words or less. As outlined in Section 6, the treebank guidelines are somewhat ambiguous as to the appropriate bracketing for coordinate NPs which consist entirely of proper nouns. We therefore do not include, in the coordination test and validation sets, coordinate NPs where in the gold standard NP the leaf nodes consist entirely of proper nouns (or CCs or commas). In do8According to the POS bracketing guidelines (Santorini, 1991) the correct sequence of POS tags should be NN NN. 685 ing so we hope to avoid a situation whereby the success of the model is measured in part by how well it can predict the often inconsistent bracketing decisions made for a particular portion of the treebank. 
In addition, and for the same reasons, if a gold standard tree is inconsistent with the guidelines in either of the following two ways the tree is not used when calculating coordinate precision and recall of the model: the gold tree is a noun phrase which ends with the sequence CC/non-nominal modifier/noun; the gold tree is a structured coordinate noun phrase where each word in the noun phrase is a noun.9 Call these inconsistencies type a and type b respectively. This left us with a coordination validation set consisting of 1064 coordinate noun phrases and a test set of 416 coordinate NPs from section 23. A coordinate phrase was deemed correct if the parent constituent label, and the two conjunct node labels (at level 0) match those in the gold subtree and if, in addition, each of the conjunct head words are the same in both test and gold tree. This follows the definition of a coordinate dependency in (Collins, 1999). Based on these criteria, the baseline f-scores for test and validation set were 69.1% and 67.1% respectively. The coordination f-score for the oracle trees on section 23 is 83.56%. In other words: if an ‘oracle’ were to choose from each set of n-best trees the tree that maximised constituent precision and recall, then the resulting set of oracle trees would have a NP coordination dependency f-score of 83.56%. For the validation set the oracle trees coordination dependency f-score is 82.47%. 7.1 Experiments and Results We first eliminated from the training set all coordinate noun phrase subtrees, of type a and type b described in Section 7. The effect of this on the validation set is outlined in Table 1, step 2. For the new parameter class in (1) we found that the best results occurred when it was used only in conjuncts of depth 1 and 2, although the case base for this parameter class contained head events from all post-CC conjunct depths. Parameter class (2) was used for predicting POS tags at level 1 in right-ofhead conjuncts, although again the sample contained 9Recall from §6 that for this latter case the noun phrase should be flat - an NPB - rather than a noun phrase with internal structure. Model f-score significance 1. Baseline 67.1 2. NoiseElimination 68.7 ≫1 3. Symmetry 69.9 > 2, ≫1 4. NPB head rule 70.6 NOT > 3, > 2, ≫1 5. PcoordW ord WSJ 71.7 NOT > 4, > 3, ≫2 6. BNC data 72.1 NOT > 5, > 4, ≫3 7. sim(wi, wp) 72.4 NOT > 6, NOT > 5, ≫4 Table 1: Results on the Validation Set. 1064 coordinate noun phrase dependencies. In the significance column > means at level .05 and ≫means at level .005, for McNemar’s test of significance. Results are cumulative. events from all depths. For the PcoordW ord parameter class we extracted 9961 coordinate noun pairs from the WSJ training set and 815,323 pairs from the BNC. As pairs are considered symmetric this resulted in a total of 1,650,568 coordinate noun events. The term weights for the word vectors were dampened co-occurrence counts, of the form: 1 + log(count). For the estimation of Psim(ni|nj) we found it too computationally expensive to calculate similarity measures between nj and each word token collected. The best results were obtained when the neighbourhood of nj was taken to be the k-nearest neighbours of nj from among the set of word that had previously occurred in a coordination pattern with nj, where k is 1000. Table 1 shows the effect of the PcoordW ord parameter class estimated from WSJ data only (step 5), with the addition of BNC data (step 6) and finally with the word similarity measure (step 7). 
The result of these experiments, as well as that involving the change in the head-finding heuristics, outlined in Section 5, was an increase in coordinate noun phrase f-score from 69.9% to 73.8% on the test set. This represents a 13% relative reduction in coordinate f-score error over the baseline, and, using McNemar’s test for significance, is significant at the 0.05 level (p = 0.034). The reranker f-score for all constituents (not excluding any coordinate NPs) for section 23 rose slightly from 89.4% to 89.6%, a small but significant increase in f-score.10 Finally, we report results on an unaltered coordination test set, that is, a test set from which no 10Significance was calculated using the software available at www.cis.upenn.edu/ dbikel/software.html. 686 noisy events were eliminated. The baseline coordination dependency f-score for all NP coordination dependencies (550 dependencies) from section 23 is 69.27%. This rises to 72.74% when all experiments described in Section 7 are applied, which is also a statistically significant increase (p = 0.042). 8 Conclusion and Future Work This paper outlined a novel method for modelling symmetry in conjunct structure, for modelling the dependency between noun phrase conjunct head words and for incorporating a measure of word similarity in the estimation of a model parameter. We also demonstrated how simple pattern matching can be used to reduce noise in WSJ noun phrase coordination data. Combined, these techniques resulted in a statistically significant improvement in noun phrase coordination accuracy. Coordination disambiguation necessitates information from a variety of sources. Another information source important to NP coordinate disambiguation is the dependency between nonnominal modifiers and nouns which cross CCs in NPBs. For example, modelling this type of dependency could help the model learn that the phrase the cats and dogs should be bracketed flat, whereas the phrase the U.S. and Washington should be given structure. Acknowledgements We are grateful to the TCD Broad Curriculum Fellowship scheme and to the SFI Basic Research Grant 04/BR/CS370 for funding this research. Thanks to P´adraig Cunningham, Saturnino Luz, Jennifer Foster and Gerard Hogan for helpful discussions and feedback on this work. References Rajeev Agarwal and Lois Boggess. 1992. A Simple but Useful Approach to Conjunct Identification. In Proceedings of the 30th ACL. Ann Bies, Mark Ferguson, Karen Katz and Robert MacIntyre. 1995. Bracketing Guidelines for Treebank II Style Penn Treebank Project. Technical Report. University of Pennsylvania. Dan Bikel. 2004. On The Parameter Space of Generative Lexicalized Statistical Parsing Models. Ph.D. thesis, University of Pennsylvania. Sharon Caraballo. 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. In Proceedings of the 37th ACL. Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine nbest Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd ACL. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Markus Dickinson and W. Detmar Meurers. 2005. Prune diseased branches to get healthy trees! How to find erroneous local trees in a treebank and why it matters. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories (TLT). Amit Dubey, Patrick Sturt and Frank Keller. 2005. Parallelism in Coordination as an Instance of Syntactic Priming: Evidence from Corpus-based Modeling. 
In Proceedings of the HLT/EMNP-05. Miriam Goldberg. 1999. An Unsupervised Model for Statistically Determining Coordinate Phrase Attachment. In Proceedings of the 27th ACL. Deirdre Hogan. 2005. k-NN for Local Probability Estimation in Generative Parsing Models. In Proceedings of the IWPT05. Sadao Kurohashi and Makoto Nagao. 1994. A Syntactic Analysis Method of Long Japanese Sentences Based on the Detection of Conjunctive Structures. In Computational Linguistics, 20(4). Preslav Nakov and Marti Hearst. 2005. Using the Web as an Implicit Training Set: Application to Structural Ambiguity Resolution. In Proceedings of the HLT/EMNLP-05. Adwait Ratnaparkhi, Salim Roukos and R. Todd Ward. 1994. A Maximum Entropy Model for Parsing. In Proceedings of the International Conference on Spoken Language Processing. Philip Resnik. 1999. Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language. In Journal of Artificial Intelligence Research, 11:95-130, 1999. Beatrice Santorini. 1991. Part-of-Speech Tagging Guidelines for the Penn Treebank Project. Technical Report. University of Pennsylvania. Hinrich Sch¨utze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24(1):97-123. Dominic Widdows. 2004. Geometry and Meaning. CSLI Publications, Stanford, USA. Shaojun Zhao and Dekang Lin. 2004. A Nearest-Neighbor Method for Resolving PP-Attachment Ambiguity. In Proceedings of the IJCNLP-04. 687
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 688–695, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Unified Tagging Approach to Text Normalization Conghui Zhu Harbin Institute of Technology Harbin, China [email protected] Jie Tang Department of Computer Science Tsinghua University, China [email protected] Hang Li Microsoft Research Asia Beijing, China [email protected] Hwee Tou Ng Department of Computer Science National University of Singapore, Singapore [email protected] Tiejun Zhao Harbin Institute of Technology Harbin, China [email protected] Abstract This paper addresses the issue of text normalization, an important yet often overlooked problem in natural language processing. By text normalization, we mean converting ‘informally inputted’ text into the canonical form, by eliminating ‘noises’ in the text and detecting paragraph and sentence boundaries in the text. Previously, text normalization issues were often undertaken in an ad-hoc fashion or studied separately. This paper first gives a formalization of the entire problem. It then proposes a unified tagging approach to perform the task using Conditional Random Fields (CRF). The paper shows that with the introduction of a small set of tags, most of the text normalization tasks can be performed within the approach. The accuracy of the proposed method is high, because the subtasks of normalization are interdependent and should be performed together. Experimental results on email data cleaning show that the proposed method significantly outperforms the approach of using cascaded models and that of employing independent models. 1 Introduction More and more ‘informally inputted’ text data becomes available to natural language processing, such as raw text data in emails, newsgroups, forums, and blogs. Consequently, how to effectively process the data and make it suitable for natural language processing becomes a challenging issue. This is because informally inputted text data is usually very noisy and is not properly segmented. For example, it may contain extra line breaks, extra spaces, and extra punctuation marks; and it may contain words badly cased. Moreover, the boundaries between paragraphs and the boundaries between sentences are not clear. We have examined 5,000 randomly collected emails and found that 98.4% of the emails contain noises (based on the definition in Section 5.1). In order to perform high quality natural language processing, it is necessary to perform ‘normalization’ on informally inputted data first, specifically, to remove extra line breaks, segment the text into paragraphs, add missing spaces and missing punctuation marks, eliminate extra spaces and extra punctuation marks, delete unnecessary tokens, correct misused punctuation marks, restore badly cased words, correct misspelled words, and identify sentence boundaries. Traditionally, text normalization is viewed as an engineering issue and is conducted in a more or less ad-hoc manner. For example, it is done by using rules or machine learning models at different levels. In natural language processing, several issues of text normalization were studied, but were only done separately. This paper aims to conduct a thorough investigation on the issue. First, it gives a formalization of 688 the problem; specifically, it defines the subtasks of the problem. Next, it proposes a unified approach to the whole task on the basis of tagging. 
Specifically, it takes the problem as that of assigning tags to the input texts, with a tag representing deletion, preservation, or replacement of a token. As the tagging model, it employs Conditional Random Fields (CRF). The unified model can achieve better performances in text normalization, because the subtasks of text normalization are often interdependent. Furthermore, there is no need to define specialized models and features to conduct different types of cleaning; all the cleaning processes have been formalized and conducted as assignments of the three types of tags. Experimental results indicate that our method significantly outperforms the methods using cascaded models or independent models on normalization. Our experiments also indicate that with the use of the tags defined, we can conduct most of the text normalization in the unified framework. Our contributions in this paper include: (a) formalization of the text normalization problem, (b) proposal of a unified tagging approach, and (c) empirical verification of the effectiveness of the proposed approach. The rest of the paper is organized as follows. In Section 2, we introduce related work. In Section 3, we formalize the text normalization problem. In Section 4, we explain our approach to the problem and in Section 5 we give the experimental results. We conclude the paper in Section 6. 2 Related Work Text normalization is usually viewed as an engineering issue and is addressed in an ad-hoc manner. Much of the previous work focuses on processing texts in clean form, not texts in informal form. Also, prior work mostly focuses on processing one type or a small number of types of errors, whereas this paper deals with many different types of errors. Clark (2003) has investigated the problem of preprocessing noisy texts for natural language processing. He proposes identifying token boundaries and sentence boundaries, restoring cases of words, and correcting misspelled words by using a source channel model. Minkov et al. (2005) have investigated the problem of named entity recognition in informally inputted texts. They propose improving the performance of personal name recognition in emails using two machine-learning based methods: Conditional Random Fields and Perceptron for learning HMMs. See also (Carvalho and Cohen, 2004). Tang et al. (2005) propose a cascaded approach for email data cleaning by employing Support Vector Machines and rules. Their method can detect email headers, signatures, program codes, and extra line breaks in emails. See also (Wong et al., 2007). Palmer and Hearst (1997) propose using a Neural Network model to determine whether a period in a sentence is the ending mark of the sentence, an abbreviation, or both. See also (Mikheev, 2000; Mikheev, 2002). Lita et al. (2003) propose employing a language modeling approach to address the case restoration problem. They define four classes for word casing: all letters in lower case, first letter in uppercase, all letters in upper case, and mixed case, and formalize the problem as assigning class labels to words in natural language texts. Mikheev (2002) proposes using not only local information but also global information in a document in case restoration. Spelling error correction can be formalized as a classification problem. Golding and Roth (1996) propose using the Winnow algorithm to address the issue. The problem can also be formalized as that of data conversion using the source channel model. 
The source model can be built as an n-gram language model and the channel model can be constructed with confusing words measured by edit distance. Brill and Moore, Church and Gale, and Mayes et al. have developed different techniques for confusing words calculation (Brill and Moore, 2000; Church and Gale, 1991; Mays et al., 1991). Sproat et al. (1999) have investigated normalization of non-standard words in texts, including numbers, abbreviations, dates, currency amounts, and acronyms. They propose a taxonomy of nonstandard words and apply n-gram language models, decision trees, and weighted finite-state transducers to the normalization. 3 Text Normalization In this paper we define text normalization at three levels: paragraph, sentence, and word level. The subtasks at each level are listed in Table 1. For example, at the paragraph level, there are two sub689 tasks: extra line-break deletion and paragraph boundary detection. Similarly, there are six (three) subtasks at the sentence (word) level, as shown in Table 1. Unnecessary token deletion refers to deletion of tokens like ‘-----’ and ‘====’, which are not needed in natural language processing. Note that most of the subtasks conduct ‘cleaning’ of noises, except paragraph boundary detection and sentence boundary detection. Level Task Percentages of Noises Extra line break deletion 49.53 Paragraph Paragraph boundary detection Extra space deletion 15.58 Extra punctuation mark deletion 0.71 Missing space insertion 1.55 Missing punctuation mark insertion 3.85 Misused punctuation mark correction 0.64 Sentence Sentence boundary detection Case restoration 15.04 Unnecessary token deletion 9.69 Word Misspelled word correction 3.41 Table 1. Text Normalization Subtasks As a result of text normalization, a text is segmented into paragraphs; each paragraph is segmented into sentences with clear boundaries; and each word is converted into the canonical form. After normalization, most of the natural language processing tasks can be performed, for example, part-of-speech tagging and parsing. We have manually cleaned up some email data (cf., Section 5) and found that nearly all the noises can be eliminated by performing the subtasks defined above. Table 1 gives the statistics. 1. i’m thinking about buying a pocket 2. pc device for my wife this christmas,. 3. the worry that i have is that she won’t 4. be able to sync it to her outlook express 5. contacts… Figure 1. An example of informal text I’m thinking about buying a Pocket PC device for my wife this Christmas.// The worry that I have is that she won’t be able to sync it to her Outlook Express contacts.// Figure 2. Normalized text Figure 1 shows an example of informally inputted text data. It includes many typical noises. From line 1 to line 4, there are four extra line breaks at the end of each line. In line 2, there is an extra comma after the word ‘Christmas’. The first word in each sentence and the proper nouns (e.g., ‘Pocket PC’ and ‘Outlook Express’) should be capitalized. The extra spaces between the words ‘PC’ and ‘device’ should be removed. At the end of line 2, the line break should be removed and a space is needed after the period. The text should be segmented into two sentences. Figure 2 shows an ideal output of text normalization on the input text in Figure 1. All the noises in Figure 1 have been cleaned and paragraph and sentence endings have been identified. We must note that dependencies (sometimes even strong dependencies) exist between different types of noises. 
For example, word case restoration needs help from sentence boundary detection, and vice versa. An ideal normalization method should consider processing all the tasks together. 4 A Unified Tagging Approach 4.1 Process In this paper, we formalize text normalization as a tagging problem and employ a unified approach to perform the task (no matter whether the processing is at paragraph level, sentence level, or word level). There are two steps in the method: preprocessing and tagging. In preprocessing, (A) we separate the text into paragraphs (i.e., sequences of tokens), (B) we determine tokens in the paragraphs, and (C) we assign possible tags to each token. The tokens form the basic units and the paragraphs form the sequences of units in the tagging problem. In tagging, given a sequence of units, we determine the most likely corresponding sequence of tags by using a trained tagging model. In this paper, as the tagging model, we make use of CRF. Next we describe the steps (A)-(C) in detail and explain why our method can accomplish many of the normalization subtasks in Table 1. (A). We separate the text into paragraphs by taking two or more consecutive line breaks as the endings of paragraphs. (B). We identify tokens by using heuristics. There are five types of tokens: ‘standard word’, ‘non-standard word’, punctuation mark, space, and line break. Standard words are words in natural language. Non-standard words include several general ‘special words’ (Sproat et al., 1999), email address, IP address, URL, date, number, money, percentage, unnecessary tokens (e.g., ‘===‘ and 690 ‘###’), etc. We identify non-standard words by using regular expressions. Punctuation marks include period, question mark, and exclamation mark. Words and punctuation marks are separated into different tokens if they are joined together. Natural spaces and line breaks are also regarded as tokens. (C). We assign tags to each token based on the type of the token. Table 2 summarizes the types of tags defined. Token Type Tag Description PRV Preserve line break RPA Replace line break by space Line break DEL Delete line break PRV Preserve space Space DEL Delete space PSB Preserve punctuation mark and view it as sentence ending PRV Preserve punctuation mark without viewing it as sentence ending Punctuation mark DEL Delete punctuation mark AUC Make all characters in uppercase ALC Make all characters in lowercase FUC Make the first character in uppercase Word AMC Make characters in mixed case PRV Preserve the special token Special token DEL Delete the special token Table 2. Types of tags Figure 3. An example of tagging Figure 3 shows an example of the tagging process. (The symbol ‘’ indicates a space). In the figure, a white circle denotes a token and a gray circle denotes a tag. Each token can be assigned several possible tags. Using the tags, we can perform most of the text normalization processing (conducting seven types of subtasks defined in Table 1 and cleaning 90.55% of the noises). In this paper, we do not conduct three subtasks, although we could do them in principle. These include missing space insertion, missing punctuation mark insertion, and misspelled word correction. In our email data, it corresponds to 8.81% of the noises. Adding tags for insertions would increase the search space dramatically. We did not do that due to computation consideration. Misspelled word correction can be done in the same framework easily. We did not do that in this work, because the percentage of misspelling in the data is small. 
We do not conduct misused punctuation mark correction as well (e.g., correcting ‘.’ with ‘?’). It consists of 0.64% of the noises in the email data. To handle it, one might need to parse the sentences. 4.2 CRF Model We employ Conditional Random Fields (CRF) as the tagging model. CRF is a conditional probability distribution of a sequence of tags given a sequence of tokens, represented as P(Y|X) , where X denotes the token sequence and Y the tag sequence (Lafferty et al., 2001). In tagging, the CRF model is used to find the sequence of tags Y* having the highest likelihood Y* = maxYP(Y|X), with an efficient algorithm (the Viterbi algorithm). In training, the CRF model is built with labeled data and by means of an iterative algorithm based on Maximum Likelihood Estimation. Transition Features yi-1=y’, yi=y yi-1=y’, yi=y, wi=w yi-1=y’, yi=y, ti=t State Features wi=w, yi=y wi-1=w, yi=y wi-2=w, yi=y wi-3=w, yi=y wi-4=w, yi=y wi+1=w, yi=y wi+2=w, yi=y wi+3=w, yi=y wi+4=w, yi=y wi-1=w’, wi=w, yi=y wi+1=w’, wi=w, yi=y ti=t, yi=y ti-1=t, yi=y ti-2=t, yi=y ti-3=t, yi=y ti-4=t, yi=y ti+1=t, yi=y ti+2=t, yi=y ti+3=t, yi=y ti+4=t, yi=y ti-2=t’’, ti-1=t’, yi=y ti-1=t’, ti=t, yi=y ti=t, ti+1=t’, yi=y ti+1=t’, ti+2=t’’, yi=y ti-2=t’’, ti-1=t’, ti=t, yi=y ti-1=t’’, ti=t, ti+1=t’, yi=y ti=t, ti+1=t’, ti+2=t’’, yi=y Table 3. Features used in the unified CRF model 691 4.3 Features Two sets of features are defined in the CRF model: transition features and state features. Table 3 shows the features used in the model. Suppose that at position i in token sequence x, wi is the token, ti the type of token (see Table 2), and yi the possible tag. Binary features are defined as described in Table 3. For example, the transition feature yi-1=y’, yi=y implies that if the current tag is y and the previous tag is y’, then the feature value is true; otherwise false. The state feature wi=w, yi=y implies that if the current token is w and the current label is y, then the feature value is true; otherwise false. In our experiments, an actual feature might be the word at position 5 is ‘PC’ and the current tag is AUC. In total, 4,168,723 features were used in our experiments. 4.4 Baseline Methods We can consider two baseline methods based on previous work, namely cascaded and independent approaches. The independent approach performs text normalization with several passes on the text. All of the processes take the raw text as input and output the normalized/cleaned result independently. The cascaded approach also performs normalization in several passes on the text. Each process carries out cleaning/normalization from the output of the previous process. 4.5 Advantages Our method offers some advantages. (1) As indicated, the text normalization tasks are interdependent. The cascaded approach or the independent approach cannot simultaneously perform the tasks. In contrast, our method can effectively overcome the drawback by employing a unified framework and achieve more accurate performances. (2) There are many specific types of errors one must correct in text normalization. As shown in Figure 1, there exist four types of errors with each type having several correction results. If one defines a specialized model or rule to handle each of the cases, the number of needed models will be extremely large and thus the text normalization processing will be impractical. In contrast, our method naturally formalizes all the tasks as assignments of different types of tags and trains a unified model to tackle all the problems at once. 
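To make steps (A)–(C) and the tag inventory of Table 2 concrete, a minimal Python sketch of the preprocessing is given below. It is only an illustration under simplifying assumptions, not the implementation used in this paper (the actual tag selection is made by the CRF model of Section 4.2, trained with CRF++ on the features of Table 3); the function names, the regular expressions, and the decision to treat commas as punctuation tokens are hypothetical choices made for the example.

import re

# Candidate tags per token type, following the inventory of Table 2.
CANDIDATE_TAGS = {
    "line_break":  ["PRV", "RPA", "DEL"],
    "space":       ["PRV", "DEL"],
    "punctuation": ["PSB", "PRV", "DEL"],
    "word":        ["AUC", "ALC", "FUC", "AMC"],
    "special":     ["PRV", "DEL"],
}

def split_paragraphs(text):
    # Step (A): two or more consecutive line breaks end a paragraph.
    return [p for p in re.split(r"\n{2,}", text) if p.strip()]

def tokenize(paragraph):
    # Step (B): crude token typing -- line breaks, runs of spaces,
    # punctuation marks, 'special' tokens such as '====', and words.
    tokens = []
    for tok in re.findall(r"\n|[ ]+|[.?!,]|[^\s.?!,]+", paragraph):
        if tok == "\n":
            kind = "line_break"
        elif tok.isspace():
            kind = "space"
        elif tok in ".?!,":
            kind = "punctuation"
        elif re.fullmatch(r"[=#*-]{3,}", tok):
            kind = "special"
        else:
            kind = "word"
        tokens.append((tok, kind))
    return tokens

def candidate_tags(tokens):
    # Step (C): attach the admissible tag set to each token; a trained
    # sequence model then selects exactly one tag per token.
    return [(tok, kind, CANDIDATE_TAGS[kind]) for tok, kind in tokens]

raw = "i'm thinking about buying a pocket\npc  device for my wife this christmas,."
for para in split_paragraphs(raw):
    for tok, kind, tags in candidate_tags(tokenize(para)):
        print(repr(tok), kind, tags)

In the full system, the trained CRF chooses one tag from each candidate set, and applying the chosen tags yields the normalized text of Figure 2.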
5 Experimental Results 5.1 Experiment Setting Data Sets We used email data in our experiments. We randomly chose in total 5,000 posts (i.e., emails) from 12 newsgroups. DC, Ontology, NLP, and ML are from newsgroups at Google (http://groupsbeta.google.com/groups). Jena is a newsgroup at Yahoo (http://groups.yahoo.com/group/jena-dev). Weka is a newsgroup at Waikato University (https://list. scms.waikato.ac.nz). Protégé and OWL are from a project at Stanford University (http://protege.stanford.edu/). Mobility, WinServer, Windows, and PSS are email collections from a company. Five human annotators conducted normalization on the emails. A spec was created to guide the annotation process. All the errors in the emails were labeled and corrected. For disagreements in the annotation, we conducted “majority voting”. For example, extra line breaks, extra spaces, and extra punctuation marks in the emails were labeled. Unnecessary tokens were deleted. Missing spaces and missing punctuation marks were added and marked. Mistakenly cased words, misspelled words, and misused punctuation marks were corrected. Furthermore, paragraph boundaries and sentence boundaries were also marked. The noises fell into the categories defined in Table 1. Table 4 shows the statistics in the data sets. From the table, we can see that a large number of noises (41,407) exist in the emails. We can also see that the major noise types are extra line breaks, extra spaces, casing errors, and unnecessary tokens. In the experiments, we conducted evaluations in terms of precision, recall, F1-measure, and accuracy (for definitions of the measures, see for example (van Rijsbergen, 1979; Lita et al., 2003)). Implementation of Baseline Methods We used the cascaded approach and the independent approach as baselines. For the baseline methods, we defined several basic prediction subtasks: extra line break detection, extra space detection, extra punctuation mark detection, sentence boundary detection, unnecessary token detection, and case restoration. We compared the performances of our method with those of the baseline methods on the subtasks. 692 Data Set Number of Email Number of Noises Extra Line Break Extra Space Extra Punc. Missing Space Missing Punc. Casing Error Spelling Error Misused Punc. Unnecessary Token Number of Paragraph Boundary Number of Sentence Boundary DC 100 702 476 31 8 3 24 53 14 2 91 457 291 Ontology 100 2,731 2,132 24 3 10 68 205 79 15 195 677 1,132 NLP 60 861 623 12 1 3 23 135 13 2 49 244 296 ML 40 980 868 17 0 2 13 12 7 0 61 240 589 Jena 700 5,833 3,066 117 42 38 234 888 288 59 1,101 2,999 1,836 Weka 200 1,721 886 44 0 30 37 295 77 13 339 699 602 Protégé 700 3,306 1,770 127 48 151 136 552 116 9 397 1,645 1,035 OWL 300 1,232 680 43 24 47 41 152 44 3 198 578 424 Mobility 400 2,296 1,292 64 22 35 87 495 92 8 201 891 892 WinServer 400 3,487 2,029 59 26 57 142 822 121 21 210 1,232 1,151 Windows 1,000 9,293 3,416 3,056 60 116 348 1,309 291 67 630 3,581 2,742 PSS 1,000 8,965 3,348 2,880 59 153 296 1,331 276 66 556 3,411 2,590 Total 5,000 41,407 20,586 6,474 293 645 1,449 6,249 1,418 265 4,028 16,654 13,580 Table 4. Statistics on data sets For the case restoration subtask (processing on token sequence), we employed the TrueCasing method (Lita et al., 2003). The method estimates a tri-gram language model using a large data corpus with correctly cased words and then makes use of the model in case restoration. We also employed Conditional Random Fields to perform case restoration, for comparison purposes. 
The CRF based casing method estimates a conditional probabilistic model using the same data and the same tags defined in TrueCasing. For unnecessary token deletion, we used rules as follows. If a token consists of non-ASCII characters or consecutive duplicate characters, such as ‘===‘, then we identify it as an unnecessary token. For each of the other subtasks, we exploited the classification approach. For example, in extra line break detection, we made use of a classification model to identify whether or not a line break is a paragraph ending. We employed Support Vector Machines (SVM) as the classification model (Vapnik, 1998). In the classification model we utilized the same features as those in our unified model (see Table 3 for details). In the cascaded approach, the prediction tasks are performed in sequence, where the output of each task becomes the input of each immediately following task. The order of the prediction tasks is: (1) Extra line break detection: Is a line break a paragraph ending? It then separates the text into paragraphs using the remaining line breaks. (2) Extra space detection: Is a space an extra space? (3) Extra punctuation mark detection: Is a punctuation mark a noise? (4) Sentence boundary detection: Is a punctuation mark a sentence boundary? (5) Unnecessary token deletion: Is a token an unnecessary token? (6) Case restoration. Each of steps (1) to (4) uses a classification model (SVM), step (5) uses rules, whereas step (6) uses either a language model (TrueCasing) or a CRF model (CRF). In the independent approach, we perform the prediction tasks independently. When there is a conflict between the outcomes of two classifiers, we adopt the result of the latter classifier, as determined by the order of classifiers in the cascaded approach. To test how dependencies between different types of noises affect the performance of normalization, we also conducted experiments using the unified model by removing the transition features. Implementation of Our Method In the implementation of our method, we used the tool CRF++, available at http://chasen.org/~taku /software/CRF++/. We made use of all the default settings of the tool in the experiments. 5.2 Text Normalization Experiments Results We evaluated the performances of our method (Unified) and the baseline methods (Cascaded and Independent) on the 12 data sets. Table 5 shows the five-fold cross-validation results. Our method outperforms the two baseline methods. Table 6 shows the overall performances of text normalization by our method and the two baseline methods. We see that our method outperforms the two baseline methods. It can also be seen that the performance of the unified method decreases when removing the transition features (Unified w/o Transition Features). 693 We conducted sign tests for each subtask on the results, which indicate that all the improvements of Unified over Cascaded and Independent are statistically significant (p << 0.01). Detection Task Prec. Rec. F1 Acc. 
Independent 95.16 91.52 93.30 93.81 Cascaded 95.16 91.52 93.30 93.81 Extra Line Break Unified 93.87 93.63 93.75 94.53 Independent 91.85 94.64 93.22 99.87 Cascaded 94.54 94.56 94.55 99.89 Extra Space Unified 95.17 93.98 94.57 99.90 Independent 88.63 82.69 85.56 99.66 Cascaded 87.17 85.37 86.26 99.66 Extra Punctuation Mark Unified 90.94 84.84 87.78 99.71 Independent 98.46 99.62 99.04 98.36 Cascaded 98.55 99.20 98.87 98.08 Sentence Boundary Unified 98.76 99.61 99.18 98.61 Independent 72.51 100.0 84.06 84.27 Cascaded 72.51 100.0 84.06 84.27 Unnecessary Token Unified 98.06 95.47 96.75 96.18 Independent 27.32 87.44 41.63 96.22 Case Restoration (TrueCasing) Cascaded 28.04 88.21 42.55 96.35 Independent 84.96 62.79 72.21 99.01 Cascaded 85.85 63.99 73.33 99.07 Case Restoration (CRF) Unified 86.65 67.09 75.63 99.21 Table 5. Performances of text normalization (%) Text Normalization Prec. Rec. F1 Acc. Independent (TrueCasing) 69.54 91.33 78.96 97.90 Independent (CRF) 85.05 92.52 88.63 98.91 Cascaded (TrueCasing) 70.29 92.07 79.72 97.88 Cascaded (CRF) 85.06 92.70 88.72 98.92 Unified w/o Transition Features 86.03 93.45 89.59 99.01 Unified 86.46 93.92 90.04 99.05 Table 6. Performances of text normalization (%) Discussions Our method outperforms the independent method and the cascaded method in all the subtasks, especially in the subtasks that have strong dependencies with each other, for example, sentence boundary detection, extra punctuation mark detection, and case restoration. The cascaded method suffered from ignorance of the dependencies between the subtasks. For example, there were 3,314 cases in which sentence boundary detection needs to use the results of extra line break detection, extra punctuation mark detection, and case restoration. However, in the cascaded method, sentence boundary detection is conducted after extra punctuation mark detection and before case restoration, and thus it cannot leverage the results of case restoration. Furthermore, errors of extra punctuation mark detection can lead to errors in sentence boundary detection. The independent method also cannot make use of dependencies across different subtasks, because it conducts all the subtasks from the raw input data. This is why for detection of extra space, extra punctuation mark, and casing error, the independent method cannot perform as well as our method. Our method benefits from the ability of modeling dependencies between subtasks. We see from Table 6 that by leveraging the dependencies, our method can outperform the method without using dependencies (Unified w/o Transition Features) by 0.62% in terms of F1-measure. Here we use the example in Figure 1 to show the advantage of our method compared with the independent and the cascaded methods. With normalization by the independent method, we obtain: I’m thinking about buying a pocket PC device for my wife this Christmas, The worry that I have is that she won’t be able to sync it to her outlook express contacts.// With normalization by the cascaded method, we obtain: I’m thinking about buying a pocket PC device for my wife this Christmas, the worry that I have is that she won’t be able to sync it to her outlook express contacts.// With normalization by our method, we obtain: I’m thinking about buying a Pocket PC device for my wife this Christmas.// The worry that I have is that she won’t be able to sync it to her Outlook Express contacts.// The independent method can correctly deal with some of the errors. 
For instance, it can capitalize the first word in the first and the third line, remove extra periods in the fifth line, and remove the four extra line breaks. However, it mistakenly removes the period in the second line and it cannot restore the cases of some words, for example ‘pocket’ and ‘outlook express’. In the cascaded method, each process carries out cleaning/normalization from the output of the previous process and thus can make use of the cleaned/normalized results from the previous process. However, errors in the previous processes will also propagate to the later processes. For example, the cascaded method mistakenly removes the period in the second line. The error allows case restoration to make the error of keeping the word ‘the’ in lower case. 694 TrueCasing-based methods for case restoration suffer from low precision (27.32% by Independent and 28.04% by Cascaded), although their recalls are high (87.44% and 88.21% respectively). There are two reasons: 1) About 10% of the errors in Cascaded are due to errors of sentence boundary detection and extra line break detection in previous steps; 2) The two baselines tend to restore cases of words to the forms having higher probabilities in the data set and cannot take advantage of the dependencies with the other normalization subtasks. For example, ‘outlook’ was restored to first letter capitalized in both ‘Outlook Express’ and ‘a pleasant outlook’. Our method can take advantage of the dependencies with other subtasks and thus correct 85.01% of the errors that the two baseline methods cannot handle. Cascaded and Independent methods employing CRF for case restoration improve the accuracies somewhat. However, they are still inferior to our method. Although we have conducted error analysis on the results given by our method, we omit the details here due to space limitation and will report them in a future expanded version of this paper. We also compared the speed of our method with those of the independent and cascaded methods. We tested the three methods on a computer with two 2.8G Dual-Core CPUs and three Gigabyte memory. On average, it needs about 5 hours for training the normalization models using our method and 25 seconds for tagging in the crossvalidation experiments. The independent and the cascaded methods (with TrueCasing) require less time for training (about 2 minutes and 3 minutes respectively) and for tagging (several seconds). This indicates that the efficiency of our method still needs improvement. 6 Conclusion In this paper, we have investigated the problem of text normalization, an important issue for natural language processing. We have first defined the problem as a task consisting of noise elimination and boundary detection subtasks. We have then proposed a unified tagging approach to perform the task, specifically to treat text normalization as assigning tags representing deletion, preservation, or replacement of the tokens in the text. Experiments show that our approach significantly outperforms the two baseline methods for text normalization. References E. Brill and R. C. Moore. 2000. An Improved Error Model for Noisy Channel Spelling Correction, Proc. of ACL 2000. V. R. Carvalho and W. W. Cohen. 2004. Learning to Extract Signature and Reply Lines from Email, Proc. of CEAS 2004. K. Church and W. Gale. 1991. Probability Scoring for Spelling Correction, Statistics and Computing, Vol. 1. A. Clark. 2003. Pre-processing Very Noisy Text, Proc. of Workshop on Shallow Processing of Large Corpora. A. R. Golding and D. 
Roth. 1996. Applying Winnow to Context-Sensitive Spelling Correction, Proc. of ICML’1996. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, Proc. of ICML 2001. L. V. Lita, A. Ittycheriah, S. Roukos, and N. Kambhatla. 2003. tRuEcasIng, Proc. of ACL 2003. E. Mays, F. J. Damerau, and R. L. Mercer. 1991. Context Based Spelling Correction, Information Processing and Management, Vol. 27, 1991. A. Mikheev. 2000. Document Centered Approach to Text Normalization, Proc. SIGIR 2000. A. Mikheev. 2002. Periods, Capitalized Words, etc. Computational Linguistics, Vol. 28, 2002. E. Minkov, R. C. Wang, and W. W. Cohen. 2005. Extracting Personal Names from Email: Applying Named Entity Recognition to Informal Text, Proc. of EMNLP/HLT-2005. D. D. Palmer and M. A. Hearst. 1997. Adaptive Multilingual Sentence Boundary Disambiguation, Computational Linguistics, Vol. 23. C.J. van Rijsbergen. 1979. Information Retrieval. Butterworths, London. R. Sproat, A. Black, S. Chen, S. Kumar, M. Ostendorf, and C. Richards. 1999. Normalization of nonstandard words, WS’99 Final Report. http://www.clsp.jhu.edu/ws99/projects/normal/. J. Tang, H. Li, Y. Cao, and Z. Tang. 2005. Email data cleaning, Proc. of SIGKDD’2005. V. Vapnik. 1998. Statistical Learning Theory, Springer. W. Wong, W. Liu, and M. Bennamoun. 2007. Enhanced Integrated Scoring for Cleaning Dirty Texts, Proc. of IJCAI-2007 Workshop on Analytics for Noisy Unstructured Text Data. 695
2007
87
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 696–703, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Sparse Information Extraction: Unsupervised Language Models to the Rescue Doug Downey, Stefan Schoenmackers, and Oren Etzioni Turing Center, Department of Computer Science and Engineering University of Washington, Box 352350 Seattle, WA 98195, USA {ddowney,stef,etzioni}@cs.washington.edu Abstract Even in a massive corpus such as the Web, a substantial fraction of extractions appear infrequently. This paper shows how to assess the correctness of sparse extractions by utilizing unsupervised language models. The REALM system, which combines HMMbased and n-gram-based language models, ranks candidate extractions by the likelihood that they are correct. Our experiments show that REALM reduces extraction error by 39%, on average, when compared with previous work. Because REALM pre-computes language models based on its corpus and does not require any hand-tagged seeds, it is far more scalable than approaches that learn models for each individual relation from handtagged data. Thus, REALM is ideally suited for open information extraction where the relations of interest are not specified in advance and their number is potentially vast. 1 Introduction Information Extraction (IE) from text is far from infallible. In response, researchers have begun to exploit the redundancy in massive corpora such as the Web in order to assess the veracity of extractions (e.g., (Downey et al., 2005; Etzioni et al., 2005; Feldman et al., 2006)). In essence, such methods utilize extraction patterns to generate candidate extractions (e.g., “Istanbul”) and then assess each candidate by computing co-occurrence statistics between the extraction and words or phrases indicative of class membership (e.g., “cities such as”). However, Zipf’s Law governs the distribution of extractions. Thus, even the Web has limited redundancy for less prominent instances of relations. Indeed, 50% of the extractions in the data sets employed by (Downey et al., 2005) appeared only once. As a result, Downey et al.’s model, and related methods, had no way of assessing which extraction is more likely to be correct for fully half of the extractions. This problem is particularly acute when moving beyond unary relations. We refer to this challenge as the task of assessing sparse extractions. This paper introduces the idea that language modeling techniques such as n-gram statistics (Manning and Sch¨utze, 1999) and HMMs (Rabiner, 1989) can be used to effectively assess sparse extractions. The paper introduces the REALM system, and highlights its unique properties. Notably, REALM does not require any hand-tagged seeds, which enables it to scale to Open IE—extraction where the relations of interest are not specified in advance, and their number is potentially vast (Banko et al., 2007). REALM is based on two key hypotheses. The KnowItAll hypothesis is that extractions that occur more frequently in distinct sentences in the corpus are more likely to be correct. For example, the hypothesis suggests that the argument pair (Giuliani, New York) is relatively likely to be appropriate for the Mayor relation, simply because this pair is extracted for the Mayor relation relatively frequently. 
Second, we employ an instance of the distributional hypothesis (Harris, 1985), which 696 can be phrased as follows: different instances of the same semantic relation tend to appear in similar textual contexts. We assess sparse extractions by comparing the contexts in which they appear to those of more common extractions. Sparse extractions whose contexts are more similar to those of common extractions are judged more likely to be correct based on the conjunction of the KnowItAll and the distributional hypotheses. The contributions of the paper are as follows: • The paper introduces the insight that the subfield of language modeling provides unsupervised methods that can be leveraged to assess sparse extractions. These methods are more scalable than previous assessment techniques, and require no hand tagging whatsoever. • The paper introduces an HMM-based technique for checking whether two arguments are of the proper type for a relation. • The paper introduces a relational n-gram model for the purpose of determining whether a sentence that mentions multiple arguments actually expresses a particular relationship between them. • The paper introduces a novel languagemodeling system called REALM that combines both HMM-based models and relational ngram models, and shows that REALM reduces error by an average of 39% over previous methods, when applied to sparse extraction data. The remainder of the paper is organized as follows. Section 2 introduces the IE assessment task, and describes the REALM system in detail. Section 3 reports on our experimental results followed by a discussion of related work in Section 4. Finally, we conclude with a discussion of scalability and with directions for future work. 2 IE Assessment This section formalizes the IE assessment task and describes the REALM system for solving it. An IE assessor takes as input a list of candidate extractions meant to denote instances of a relation, and outputs a ranking of the extractions with the goal that correct extractions rank higher than incorrect ones. A correct extraction is defined to be a true instance of the relation mentioned in the input text. More formally, the list of candidate extractions for a relation R is denoted as ER = {(a1, b1), . . . , (am, bm)}. An extraction (ai, bi) is an ordered pair of strings. The extraction is correct if and only if the relation R holds between the arguments named by ai and bi. For example, for R = Headquartered, a pair (ai, bi) is correct iff there exists an organization ai that is in fact headquartered in the location bi.1 ER is generated by applying an extraction mechanism, typically a set of extraction “patterns”, to each sentence in a corpus, and recording the results. Thus, many elements of ER are identical extractions derived from different sentences in the corpus. This task definition is notable for the minimal inputs required—IE assessment does not require knowing the relation name nor does it require handtagged seed examples of the relation. Thus, an IE Assessor is applicable to Open IE. 2.1 System Overview In this section, we describe the REALM system, which utilizes language modeling techniques to perform IE Assessment. REALM takes as input a set of extractions ER, and outputs a ranking of those extractions. The algorithm REALM follows is outlined in Figure 1. REALM begins by automatically selecting from ER a set of bootstrapped seeds SR intended to serve as correct examples of the relation R. 
REALM utilizes the KnowItAll hypothesis, setting SR equal to the h elements in ER extracted most frequently from the underlying corpus. This results in a noisy set of seeds, but the methods that use these seeds are noise tolerant. REALM then proceeds to rank the remaining (non-seed) extractions by utilizing two languagemodeling components. An n-gram language model is a probability distribution P(w1, ..., wn) over consecutive word sequences of length n in a corpus. Formally, if we assume a seed (s1, s2) is a correct extraction of a relation R, the distributional hypothesis states that the context distribution around the seed extraction, P(w1, ..., wn|wi = s1, wj = s2) for 1 ≤i, j ≤n tends to be “more similar” to 1For clarity, our discussion focuses on relations between pairs of arguments. However, the methods we propose can be extended to relations of any arity. 697 P(w1, ..., wn|wi = e1, wj = e2) when the extraction (e1, e2) is correct. Naively comparing context distributions is problematic, however, because the arguments to a relation often appear separated by several intervening words. In our experiments, we found that when relation arguments appear together in a sentence, 75% of the time the arguments are separated by at least three words. This implies that n must be large, and for sparse argument pairs it is not possible to estimate such a large language model accurately, because the number of modeling parameters is proportional to the vocabulary size raised to the nth power. To mitigate sparsity, REALM utilizes smaller language models in its two components as a means of “backing-off’ from estimating context distributions explicitly, as described below. First, REALM utilizes an HMM to estimate whether each extraction has arguments of the proper type for the relation. Each relation R has a set of types for its arguments. For example, the relation AuthorOf(a, b) requires that its first argument be an author, and that its second be some kind of written work. Knowing whether extracted arguments are of the proper type for a relation can be quite informative for assessing extractions. The challenge is, however, that this type information is not given to the system since the relations (and the types of the arguments) are not known in advance. REALM solves this problem by comparing the distributions of the seed arguments and extraction arguments. Type checking mitigates data sparsity by leveraging every occurrence of the individual extraction arguments in the corpus, rather than only those cases in which argument pairs occur near each other. Although argument type checking is invaluable for extraction assessment, it is not sufficient for extracting relationships between arguments. For example, an IE system using only type information might determine that Intel is a corporation and that Seattle is a city, and therefore erroneously conclude that Headquartered(Intel, Seattle) is correct. Thus, REALM’s second step is to employ an n-gram-based language model to assess whether the extracted arguments share the appropriate relation. Again, this information is not given to the system, so REALM compares the context distributions of the extractions to those of the seeds. 
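Putting this overview together, a minimal sketch of the control flow, written in Python purely for illustration, is given below; it mirrors the Figure 1 pseudocode that follows, and the names type_rank and relation_rank are hypothetical stand-ins for the HMM-T and REL-GRAMS components described in the next subsections.

from collections import Counter

def realm_rank(extractions, type_rank, relation_rank, h=10):
    # extractions: list of (arg1, arg2) string pairs, repeated once per
    # supporting sentence, as produced by the extraction patterns.
    counts = Counter(extractions)
    seeds = [pair for pair, _ in counts.most_common(h)]        # S_R
    non_seeds = [pair for pair in counts if pair not in seeds]  # U_R

    # Each component maps a non-seed pair to a rank (1 = most likely
    # correct), computed against the bootstrapped seeds.
    t_rank = type_rank(seeds, non_seeds)       # stand-in for HMM-T
    r_rank = relation_rank(seeds, non_seeds)   # stand-in for REL-GRAMS

    # Seeds first (by extraction frequency), then the remaining pairs in
    # ascending order of the product of the two component ranks.
    ordered = sorted(non_seeds, key=lambda p: t_rank[p] * r_rank[p])
    return seeds + ordered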
As described in REALM(Extractions ER = {e1, ..., em}) SR = the h most frequent extractions in ER UR = ER - SR TypeRankings(UR) ←HMM-T(SR, UR) RelationRankings(UR) ←REL-GRAMS(SR, UR) return a ranking of ER with the elements of SR at the top (ranked by frequency) followed by the elements of UR = {u1, ..., um−h} ranked in ascending order of TypeRanking(ui) ∗RelationRanking(ui). Figure 1: Pseudocode for REALM at run-time. The language models used by the HMM-T and REL-GRAMS components are constructed in a preprocessing step. Section 2.3, REALM employs a relational n-gram language model in order to accurately compare context distributions when extractions are sparse. REALM executes the type checking and relation assessment components separately; each component takes the seed and non-seed extractions as arguments and returns a ranking of the non-seeds. REALM then combines the two components’ assessments into a single ranking. Although several such combinations are possible, REALM simply ranks the extractions in ascending order of the product of the ranks assigned by the two components. The following subsections describe REALM’s two components in detail. We identify the proper nouns in our corpus using the LEX method (Downey et al., 2007). In addition to locating the proper nouns in the corpus, LEX also concatenates each multi-token proper noun (e.g.,Los Angeles) together into a single token. Both of REALM’s components construct language models from this tokenized corpus. 2.2 Type Checking with HMM-T In this section, we describe our type-checking component, which takes the form of a Hidden Markov Model and is referred to as HMM-T. HMM-T ranks the set UR of non-seed extractions, with a goal of ranking those extractions with arguments of proper type for R above extractions containing type errors. Formally, let URi denote the set of the ith arguments of the extractions in UR. Let SRi be defined similarly for the seed set SR. Our type checking technique exploits the distributional hypothesis—in this case, the intuition that 698 Intel , headquartered in Santa+Clara Figure 2: Graphical model employed by HMMT. Shown is the case in which k = 2. Corpus pre-processing results in the proper noun Santa Clara being concatenated into a single token. extraction arguments in URi of the proper type will likely appear in contexts similar to those in which the seed arguments SRi appear. In order to identify terms that are distributionally similar, we train a probabilistic generative Hidden Markov Model (HMM), which treats each token in the corpus as generated by a single hidden state variable. Here, the hidden states take integral values from {1, . . . , T}, and each hidden state variable is itself generated by some number k of previous hidden states.2 Formally, the joint distribution of the corpus, represented as a vector of tokens w, given a corresponding vector of states t is: P(w|t) = Y i P(wi|ti)P(ti|ti−1, . . . , ti−k) (1) The distributions on the right side of Equation 1 can be learned from a corpus in an unsupervised manner, such that words which are distributed similarly in the corpus tend to be generated by similar hidden states (Rabiner, 1989). The generative model is depicted as a Bayesian network in Figure 2. The figure also illustrates the one way in which our implementation is distinct from a standard HMM, namely that proper nouns are detected a priori and modeled as single tokens (e.g., Santa Clara is generated by a single hidden state). 
This allows the type checker to compare the state distributions of different proper nouns directly, even when the proper nouns contain differing numbers of words. To generate a ranking of UR using the learned HMM parameters, we rank the arguments ei according to how similar their state distributions P(t|ei) 2Our implementation makes the simplifying assumption that each sentence in the corpus is generated independently. are to those of the seed arguments.3 Specifically, we define a function: f(e) = X ei∈e KL( P w′∈SRi P(t|w′) |SRi| , P(t|ei)) (2) where KL represents KL divergence, and the outer sum is taken over the arguments ei of the extraction e. We rank the elements of UR in ascending order of f(e). HMM-T has two advantages over a more traditional type checking approach of simply counting the number of times in the corpus that each extraction appears in a context in which a seed also appears (cf. (Ravichandran et al., 2005)). The first advantage of HMM-T is efficiency, as the traditional approach involves a computationally expensive step of retrieving the potentially large set of contexts in which the extractions and seeds appear. In our experiments, using HMM-T instead of a context-based approach results in a 10-50x reduction in the amount of data that is retrieved to perform type checking. Secondly, on sparse data HMM-T has the potential to improve type checking accuracy. For example, consider comparing Pickerington, a sparse candidate argument of the type City, to the seed argument Chicago, for which the following two phrases appear in the corpus: (i) “Pickerington, Ohio” (ii) “Chicago, Illinois” In these phrases, the textual contexts surrounding Chicago and Pickerington are not identical, so to the traditional approach these contexts offer no evidence that Pickerington and Chicago are of the same type. For a sparse token like Pickerington, this is problematic because the token may never occur in a context that precisely matches that of a seed. In contrast, in the HMM, the non-sparse tokens Ohio and Illinois are likely to have similar state distributions, as they are both the names of U.S. States. Thus, in the state space employed by the HMM, the contexts in phrases (i) and (ii) are in fact quite similar, allowing HMMT to detect that Pickerington and Chicago are likely of the same type. Our experiments quantify the performance improvements that HMM-T of3The distribution P(t|ei) for any ei can be obtained from the HMM parameters using Bayes Rule. 699 fers over the traditional approach for type checking sparse data. The time required to learn HMM-T’s parameters scales proportional to T k+1 times the corpus size. Thus, for tractability, HMM-T uses a relatively small state space of T = 20 states and a limited k value of 3. While these settings are sufficient for type checking (e.g., determining that Santa Clara is a city) they are too coarse-grained to assess relations between arguments (e.g., determining that Santa Clara is the particular city in which Intel is headquartered). We now turn to the REL-GRAMS component, which performs the latter task. 2.3 Relation Assessment with REL-GRAMS REALM’s relation assessment component, called REL-GRAMS, tests whether the extracted arguments have a desired relationship, but given REALM’s minimal input it has no a priori information about the relationship. REL-GRAMS relies instead on the distributional hypothesis to test each extraction. 
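Before turning to the relational n-gram model, the type-checking score of Equation (2) can be sketched as follows. This is an illustrative fragment only, written under the assumption that the per-token state distributions P(t|·) have already been read off the trained HMM; the function and variable names are hypothetical.

import math

def kl(p, q, eps=1e-12):
    # KL(p || q) for two distributions over the T hidden states.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def hmm_t_score(extraction, seed_args_per_slot, state_dist):
    # Equation (2): for each argument slot i, compare the averaged state
    # distribution of the seed arguments S_Ri with the state distribution
    # of the extracted argument e_i.
    #   extraction         -- tuple of argument strings, e.g. ("Intel", "Santa_Clara")
    #   seed_args_per_slot -- one list of seed argument strings per slot
    #   state_dist         -- dict mapping a token to P(t | token) as a length-T list
    score = 0.0
    for arg, seed_args in zip(extraction, seed_args_per_slot):
        T = len(state_dist[arg])
        avg_seed = [sum(state_dist[s][t] for s in seed_args) / len(seed_args)
                    for t in range(T)]
        score += kl(avg_seed, state_dist[arg])
    return score

Non-seed extractions are then ranked in ascending order of this score, so that better-typed argument pairs rise to the top.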
As argued in Section 2.1, it is intractable to build an accurate language model for context distributions surrounding sparse argument pairs. To overcome this problem, we introduce relational n-gram models. Rather than simply modeling the context distribution around a given argument, a relational n-gram model specifies separate context distributions for an arguments conditioned on each of the other arguments with which it appears. The relational n-gram model allows us to estimate context distributions for pairs of arguments, even when the arguments do not appear together within a fixed window of n words. Further, by considering only consecutive argument pairs, the number of distinct argument pairs in the model grows at most linearly with the number of sentences in the corpus. Thus, the relational n-gram model can scale. Formally, for a pair of arguments (e1, e2), a relational n-gram model estimates the distributions P(w1, ..., wn|wi = e1, e1 ↔e2) for each 1 ≤i ≤ n, where the notation e1 ↔e2 indicates the event that e2 is the next argument to either the right or the left of e1 in the corpus. REL-GRAMS begins by building a relational ngram model of the arguments in the corpus. For notational convenience, we represent the model’s distributions in terms of “context vectors” for each pair of arguments. Formally, for a given sentence containing arguments e1 and e2 consecutively, we define a context of the ordered pair (e1, e2) to be any window of n tokens around e1. Let C = {c1, c2, ..., c|C|} be the set of all contexts of all argument pairs found in the corpus.4 For a pair of arguments (ej, ek), we model their relationship using a |C| dimensional context vector v(ej,ek), whose i-th dimension corresponds to the number of times context ci occurred with the pair (ej, ek) in the corpus. These context vectors are similar to document vectors from Information Retrieval (IR), and we leverage IR research to compare them, as described below. To assess each extraction, we determine how similar its context vector is to a canonical seed vector (created by summing the context vectors of the seeds). While there are many potential methods for determining similarity, in this work we rank extractions by decreasing values of the BM25 distance metric. BM25 is a TF-IDF variant introduced in TREC-3(Robertson et al., 1992), which outperformed both the standard cosine distance and a smoothed KL divergence on our data. 3 Experimental Results This section describes our experiments on IE assessment for sparse data. We start by describing our experimental methodology, and then present our results. The first experiment tests the hypothesis that HMM-T outperforms an n-gram-based method on the task of type checking. The second experiment tests the hypothesis that REALM outperforms multiple approaches from previous work, and also outperforms each of its HMM-T and REL-GRAMS components taken in isolation. 3.1 Experimental Methodology The corpus used for our experiments consisted of a sample of sentences taken from Web pages. From an initial crawl of nine million Web pages, we selected sentences containing relations between proper nouns. The resulting text corpus consisted of about 4Pre-computing the set C requires identifying in advance the potential relation arguments in the corpus. We consider the proper nouns identified by the LEX method (see Section 2.1) to be the potential arguments. 700 three million sentences, and was tokenized as described in Section 2. 
For tractability, before and after performing tokenization, we replaced each token occurring fewer than five times in the corpus with one of two “unknown word” markers (one for capitalized words, and one for uncapitalized words). This preprocessing resulted in a corpus containing about sixty-five million total tokens, and 214,787 unique tokens. We evaluated performance on four relations: Conquered, Founded, Headquartered, and Merged. These four relations were chosen because they typically take proper nouns as arguments, and included a large number of sparse extractions. For each relation R, the candidate extraction list ER was obtained using TEXTRUNNER (Banko et al., 2007). TEXTRUNNER is an IE system that computes an index of all extracted relationships it recognizes, in the form of (object, predicate, object) triples. For each of our target relations, we executed a single query to the TEXTRUNNER index for extractions whose predicate contained a phrase indicative of the relation (e.g., “founded by”, “headquartered in”), and the results formed our extraction list. For each relation, the 10 most frequent extractions served as bootstrapped seeds. All of the non-seed extractions were sparse (no argument pairs were extracted more than twice for a given relation). These test sets contained a total of 361 extractions. 3.2 Type Checking Experiments As discussed in Section 2.2, on sparse data HMM-T has the potential to outperform type checking methods that rely on textual similarities of context vectors. To evaluate this claim, we tested the HMM-T system against an N-GRAMS type checking method on the task of type-checking the arguments to a relation. The N-GRAMS method compares the context vectors of extractions in the same way as the RELGRAMS method described in Section 2.3, but is not relational (N-GRAMS considers the distribution of each extraction argument independently, similar to HMM-T). We tagged an extraction as type correct iff both arguments were valid for the relation, ignoring whether the relation held between the arguments. The results of our type checking experiments are shown in Table 1. For all types, HMM-T outperforms N-GRAMS, and HMM-T reduces error (meaType HMM-T N-GRAMS Conquered 0.917 0.767 Founded 0.827 0.636 Headquartered 0.734 0.589 Merged 0.920 0.854 Average 0.849 0.712 Table 1: Type Checking Performance. Listed is area under the precision/recall curve. HMM-T outperforms N-GRAMS for all relations, and reduces the error in terms of missing area under the curve by 46% on average. sured in missing area under the precision/recall curve) by 46%. The performance difference on each relation is statistically significant (p < 0.01, twosampled t-test), using the methodology for measuring the standard deviation of area under the precision/recall curve given in (Richardson and Domingos, 2006). N-GRAMS, like REL-GRAMS, employs the BM-25 metric to measure distributional similarity between extractions and seeds. Replacing BM25 with cosine distance cuts HMM-T’s advantage over N-GRAMS, but HMM-T’s error rate is still 23% lower on average. 3.3 Experiments with REALM The REALM system combines the type checking and relation assessment components to assess extractions. Here, we test the ability of REALM to improve the ranking of a state of the art IE system, TEXTRUNNER. For these experiments, we evaluate REALM against the TEXTRUNNER frequencybased ordering, a pattern-learning approach, and the HMM-T and REL-GRAMS components taken in isolation. 
The TEXTRUNNER frequency-based ordering ranks extractions in decreasing order of their extraction frequency, and importantly, for our task this ordering is essentially equivalent to that produced by the “Urns” (Downey et al., 2005) and Pointwise Mutual Information (Etzioni et al., 2005) approaches employed in previous work. The pattern-learning approach, denoted as PL, is modeled after Snowball (Agichtein, 2006). The algorithm and parameter settings for PL were those manually tuned for the Headquartered relation in previous work (Agichtein, 2005). A sensitivity analysis of these parameters indicated that the re701 Conquered Founded Headquartered Merged Average Avg. Prec. 0.698 0.578 0.400 0.742 0.605 TEXTRUNNER 0.738 0.699 0.710 0.784 0.733 PL 0.885 0.633 0.651 0.852 0.785 PL+ HMM-T 0.883 0.722 0.727 0.900 0.808 HMM-T 0.830 0.776 0.678 0.864 0.787 REL-GRAMS 0.929 (39%) 0.713 0.758 0.886 0.822 REALM 0.907 (19%) 0.781 (27%) 0.810 (35%) 0.908 (38%) 0.851 (39%) Table 2: Performance of REALM for assessment of sparse extractions. Listed is area under the precision/recall curve for each method. In parentheses is the percentage reduction in error over the strongest baseline method (TEXTRUNNER or PL) for each relation. “Avg. Prec.” denotes the fraction of correct examples in the test set for each relation. REALM outperforms its REL-GRAMS and HMM-T components taken in isolation, as well as the TEXTRUNNER and PL systems from previous work. sults are sensitive to the parameter settings. However, we found no parameter settings that performed significantly better, and many settings performed significantly worse. As such, we believe our results reasonably reflect the performance of a pattern learning system on this task. Because PL performs relation assessment, we also attempted combining PL with HMM-T in a hybrid method (PL+ HMM-T) analogous to REALM. The results of these experiments are shown in Table 2. REALM outperforms the TEXTRUNNER and PL baselines for all relations, and reduces the missing area under the curve by an average of 39% relative to the strongest baseline. The performance differences between REALM and TEXTRUNNER are statistically significant for all relations, as are differences between REALM and PL for all relations except Conquered (p < 0.01, two-sampled t-test). The hybrid REALM system also outperforms each of its components in isolation. 4 Related Work To our knowledge, REALM is the first system to use language modeling techniques for IE Assessment. Redundancy-based approaches to pattern-based IE assessment (Downey et al., 2005; Etzioni et al., 2005) require that extractions appear relatively frequently with a limited set of patterns. In contrast, REALM utilizes all contexts to build a model of extractions, rather than a limited set of patterns. Our experiments demonstrate that REALM outperforms these approaches on sparse data. Type checking using named-entity taggers has been previously shown to improve the precision of pattern-based IE systems (Agichtein, 2005; Feldman et al., 2006), but the HMM-T type-checking component we develop differs from this work in important ways. Named-entity taggers are limited in that they typically recognize only small set of types (e.g., ORGANIZATION, LOCATION, PERSON), and they require hand-tagged training data for each type. HMM-T, by contrast, performs type checking for any type. Finally, HMM-T does not require hand-tagged training data. Pattern learning is a common technique for extracting and assessing sparse data (e.g. 
(Agichtein, 2005; Riloff and Jones, 1999; Pas¸ca et al., 2006)). Our experiments demonstrate that REALM outperforms a pattern learning system closely modeled after (Agichtein, 2005). REALM is inspired by pattern learning techniques (in particular, both use the distributional hypothesis to assess sparse data) but is distinct in important ways. Pattern learning techniques require substantial processing of the corpus after the relations they assess have been specified. Because of this, pattern learning systems are unsuited to Open IE. Unlike these techniques, REALM pre-computes language models which allow it to assess extractions for arbitrary relations at run-time. In essence, pattern-learning methods run in time linear in the number of relations whereas REALM’s run time is constant in the number of relations. Thus, REALM scales readily to large numbers of relations whereas pattern-learning methods do not. 702 A second distinction of REALM is that its type checker, unlike the named entity taggers employed in pattern learning systems (e.g., Snowball), can be used to identify arbitrary types. A final distinction is that the language models REALM employs require fewer parameters and heuristics than pattern learning techniques. Similar distinctions exist between REALM and a recent system designed to assess sparse extractions by bootstrapping a classifier for each target relation (Feldman et al., 2006). As in pattern learning, constructing the classifiers requires substantial processing after the target relations have been specified, and a set of hand-tagged examples per relation, making it unsuitable for Open IE. 5 Conclusions This paper demonstrated that unsupervised language models, as embodied in the REALM system, are an effective means of assessing sparse extractions. Another attractive feature of REALM is its scalability. Scalability is a particularly important concern for Open Information Extraction, the task of extracting large numbers of relations that are not specified in advance. Because HMM-T and REL-GRAMS both pre-compute language models, REALM can be queried efficiently to perform IE Assessment. Further, the language models are constructed independently of the target relations, allowing REALM to perform IE Assessment even when relations are not specified in advance. In future work, we plan to develop a probabilistic model of the information computed by REALM. We also plan to evaluate the use of non-local context for IE Assessment by integrating document-level modeling techniques (e.g., Latent Dirichlet Allocation). Acknowledgements This research was supported in part by NSF grants IIS-0535284 and IIS-0312988, DARPA contract NBCHD030010, ONR grant N00014-05-1-0185 as well as a gift from Google. The first author is supported by an MSR graduate fellowship sponsored by Microsoft Live Labs. We thank Michele Banko, Jeff Bilmes, Katrin Kirchhoff, and Alex Yates for helpful comments. References E. Agichtein. 2005. Extracting Relations From Large Text Collections. Ph.D. thesis, Department of Computer Science, Columbia University. E. Agichtein. 2006. Confidence estimation methods for partially supervised relation extraction. In SDM 2006. M. Banko, M. Cararella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the web. In Procs. of IJCAI 2007. D. Downey, O. Etzioni, and S. Soderland. 2005. A Probabilistic Model of Redundancy in Information Extraction. In Procs. of IJCAI 2005. D. Downey, M. Broadhead, and O. Etzioni. 2007. 
Locating complex named entities in web text. In Procs. of IJCAI 2007.
O. Etzioni, M. Cafarella, D. Downey, S. Kok, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91–134.
R. Feldman, B. Rosenfeld, S. Soderland, and O. Etzioni. 2006. Self-supervised relation extraction from the web. In ISMIS, pages 755–764.
Z. Harris. 1985. Distributional structure. In J. J. Katz, editor, The Philosophy of Linguistics, pages 26–47. New York: Oxford University Press.
C. D. Manning and H. Schütze. 1999. Foundations of Statistical Natural Language Processing.
M. Paşca, D. Lin, J. Bigham, A. Lifchits, and A. Jain. 2006. Names and similarities on the web: Fact extraction in the fast lane. In Procs. of ACL/COLING 2006.
L. R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.
D. Ravichandran, P. Pantel, and E. H. Hovy. 2005. Randomized Algorithms and NLP: Using Locality Sensitive Hash Functions for High Speed Noun Clustering. In Procs. of ACL 2005.
M. Richardson and P. Domingos. 2006. Markov Logic Networks. Machine Learning, 62(1-2):107–136.
E. Riloff and R. Jones. 1999. Learning Dictionaries for Information Extraction by Multi-level Bootstrapping. In Procs. of AAAI-99, pages 1044–1049.
S. E. Robertson, S. Walker, M. Hancock-Beaulieu, A. Gull, and M. Lau. 1992. Okapi at TREC-3. In Text REtrieval Conference, pages 21–30.
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 704–711, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Forest-to-String Statistical Translation Rules Yang Liu , Yun Huang , Qun Liu and Shouxun Lin Key Laboratory of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences P.O. Box 2704, Beijing 100080, China {yliu,huangyun,liuqun,sxlin}@ict.ac.cn Abstract In this paper, we propose forest-to-string rules to enhance the expressive power of tree-to-string translation models. A forestto-string rule is capable of capturing nonsyntactic phrase pairs by describing the correspondence between multiple parse trees and one string. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. Experimental results show that, on the NIST 2005 Chinese-English test set, the tree-to-string model augmented with forest-to-string rules achieves a relative improvement of 4.3% in terms of BLEU score over the original model which allows treeto-string rules only. 1 Introduction The past two years have witnessed the rapid development of linguistically syntax-based translation models (Quirk et al., 2005; Galley et al., 2006; Marcu et al., 2006; Liu et al., 2006), which induce tree-to-string translation rules from parallel texts with linguistic annotations. They demonstrated very promising results when compared with the state of the art phrase-based system (Och and Ney, 2004) in the NIST 2006 machine translation evaluation 1. While Galley et al. (2006) and Marcu et al. (2006) put emphasis on target language analysis, Quirk et al. (2005) and Liu et al. (2006) show benefits from modeling the syntax of source language. 1See http://www.nist.gov/speech/tests/mt/ One major problem with linguistically syntaxbased models, however, is that tree-to-string rules fail to syntactify non-syntactic phrase pairs because they require a syntax tree fragment over the phrase to be syntactified. Here, we distinguish between syntactic and non-syntactic phrase pairs. By “syntactic” we mean that the phrase pair is subsumed by some syntax tree fragment. The phrase pairs without trees over them are non-syntactic. Marcu et al. (2006) report that approximately 28% of bilingual phrases are non-syntactic on their English-Chinese corpus. We believe that it is important to make available to syntax-based models all the bilingual phrases that are typically available to phrase-based models. On one hand, phrases have been proven to be a simple and powerful mechanism for machine translation. They excel at capturing translations of short idioms, providing local re-ordering decisions, and incorporating context information straightforwardly. Chiang (2005) shows significant improvement by keeping the strengths of phrases while incorporating syntax into statistical translation. On the other hand, the performance of linguistically syntax-based models can be hindered by making use of only syntactic phrase pairs. Studies reveal that linguistically syntax-based models are sensitive to syntactic analysis (Quirk and Corston-Oliver, 2006), which is still not reliable enough to handle real-world texts due to limited size and domain of training data. Various solutions are proposed to tackle the problem. Galley et al. (2004) handle non-constituent phrasal translation by traversing the tree upwards until reaches a node that subsumes the phrase. Marcu et al. 
(2006) argue that this choice is inap704 propriate because large applicability contexts are required. For a non-syntactic phrase pair, Marcu et al. (2006) create a xRS rule headed by a pseudo, nonsyntactic nonterminal symbol that subsumes the phrase and corresponding multi-headed syntactic structure; and one sibling xRS rule that explains how the non-syntactic nonterminal symbol can be combined with other genuine nonterminals so as to obtain genuine parse trees. The name of the pseudo nonterminal is designed to reflect how the corresponding rule can be fully realized. However, they neglect alignment consistency when creating sibling rules. In addition, it is hard for the naming mechanism to deal with more complex phenomena. Liu et al. (2006) treat bilingual phrases as lexicalized TATs (Tree-to-string Alignment Template). A bilingual phrase can be used in decoding if the source phrase is subsumed by the input parse tree. Although this solution does help, only syntactic bilingual phrases are available to the TAT-based model. Moreover, it is problematic to combine the translation probabilities of bilingual phrases and TATs, which are estimated independently. In this paper, we propose forest-to-string rules which describe the correspondence between multiple parse trees and a string. They can not only capture non-syntactic phrase pairs but also have the capability of generalization. To integrate these rules into tree-to-string translation models, auxiliary rules are introduced to provide a generalization level. As there is no pseudo node or naming mechanism, the integration of forest-to-string rules is flexible, relying only on their root nodes. The forest-to-string and auxiliary rules enable tree-to-string models to derive in a more general way, while the strengths of conventional tree-to-string rules still remain. 2 Forest-to-String Translation Rules We define a tree-to-string rule r as a triple ⟨˜T, ˜S, ˜A⟩, which describes the alignment ˜A between a source parse tree ˜T = T(f J′ 1 ) and a target string ˜S = eI′ 1 . A source string fJ′ 1 , which is the sequence of leaf nodes of T(fJ′ 1 ), consists of both terminals (source words) and nonterminals (phrasal categories). A target string eI′ 1 is also composed of both terminals (target words) and nonterminals (placeholders). An IP NP NN VP SB VP NP NN VV  PU The gunman was killed by police . Figure 1: An English sentence aligned with a Chinese parse tree. alignment ˜A is defined as a subset of the Cartesian product of source and target symbol positions: ˜A ⊆{(j, i) : j = 1, . . . , J′; i = 1, . . . , I′} A derivation θ = r1 ◦r2 ◦. . . ◦rn is a leftmost composition of translation rules that explains how a source parse tree T = T(fJ 1 ), a target sentence S = eI 1, and the word alignment A are synchronously generated. For example, Table 1 demonstrates a derivation composed of only tree-to-string rules for the ⟨T, S, A⟩tuple in Figure 1 2. As we mentioned before, tree-to-string rules can not syntactify phrase pairs that are not subsumed by any syntax tree fragments. For example, for the phrase pair ⟨“”, “The gunman was”⟩in Figure 1, it is impossible to extract an equivalent treeto-string rule that subsumes the same phrase pair because valid tree-to-string rules can not be multiheaded. To address this problem, we propose forest-tostring rules3 to subsume the non-syntactic phrase pairs. A forest-to-string rule r 4 is a triple ⟨˜F, ˜S, ˜A⟩, which describes the alignment ˜A between K source parse trees ˜F = ˜T K 1 and a target string ˜S. 
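A minimal sketch of one way such rules could be represented in code is given below; the class names, fields, and the flat (source position, target position) alignment encoding are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TreeNode:
    """A node of a source-side parse tree; a leaf carries a terminal or a phrasal category."""
    label: str
    children: List["TreeNode"] = field(default_factory=list)

    def leaves(self) -> List[str]:
        # Left-to-right frontier of this (sub)tree.
        if not self.children:
            return [self.label]
        return [leaf for child in self.children for leaf in child.leaves()]

@dataclass
class Rule:
    """Tree-to-string rule if len(trees) == 1; forest-to-string rule if len(trees) > 1."""
    trees: List[TreeNode]                 # ordered source trees (the forest)
    target: List[str]                     # target terminals and placeholders such as "X1"
    alignment: List[Tuple[int, int]]      # (source symbol position, target symbol position) pairs

    def source_symbols(self) -> List[str]:
        # Concatenated frontiers of the trees, i.e. the source string of the rule.
        return [leaf for tree in self.trees for leaf in tree.leaves()]
```

Under this encoding a tree-to-string rule is simply the one-tree special case of a forest-to-string rule.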
The source string fJ′ 1 is therefore the sequence of leaf nodes of ˜F. Auxiliary rules are introduced to integrate forestto-string rules into tree-to-string translation models. An auxiliary rule is a special unlexicalized tree-tostring rule that allows multiple source nonterminals 2We use “X” to denote a nonterminal in the target string. If there are more than one nonterminals, they are indexed. 3The term “forest” refers to an ordered and finite set of trees. 4We still use “r” to represent a forest-to-string rule to reduce notational overhead. 705 No. Rule (1) ( IP ( NP ) ( VP ) ( PU ) ) X1 X2 X3 1:1 2:2 3:3 (2) ( NP ( NN ) ) The gunman 1:1 1:2 (3) ( VP ( SB ) ( VP ( NP ( NN ) ) ( VV  ) ) ) was killed by X 1:1 2:4 3:2 (4) ( NN ) police 1:1 (5) ( PU ) . 1:1 Table 1: A derivation composed of only tree-to-string rules for Figure 1. No. Rule (1) ( IP ( NP ) ( VP ( SB ) ( VP ) ) ( PU ) ) X1 X2 1:1 2:1 3:2 4:2 (2) ( NP ( NN ) ) ( SB ) The gunman was 1:1 1:2 2:3 (3) ( VP ( NP ) ( VV  ) ) ( PU ) killed by X . 1:3 2:1 3:4 (4) ( NP ( NN ) ) police 1:1 Table 2: A derivation composed of tree-to-string, forest-to-string, and auxiliary rules for Figure 1. to correspond to one target nonterminal, suggesting that the forest-to-string rules that are rooted at such source nonterminals can be integrated. For example, Table 2 shows a derivation composed of tree-to-string, forest-to-string, and auxiliary rules for the ⟨T, S, A⟩tuple in Figure 1. r1 is an auxiliary rule, r2 and r3 are forest-to-string rules, and r4 is a conventional tree-to-string rule. Following Marcu et al. (2006), we define the probability of a tuple ⟨T, S, A⟩as the sum over all derivations θi ∈Θ that are consistent with the tuple, c(Θ) = ⟨T, S, A⟩. The probability of each derivation θi is given by the product of the probabilities of all the rules p(rj) in the derivation. Pr(T, S, A) =  θi∈Θ,c(Θ)=⟨T,S,A⟩  rj∈θi p(rj) (1) 3 Training We obtain tree-to-string and forest-to-string rules from word-aligned, source side parsed bilingual corpus. The extraction algorithm is shown in Figure 2. Note that T ′ denotes either a tree or a forest. For each span, the ⟨tree/forest, string, alignment⟩ triples are identified first. If a triple is consistent with the alignment, the skeleton of the triple is computed then. A skeleton s is a rule satisfying the following: 1. s ∈R(t), s is induced from t. 2. node(T(s)) ≥2, the tree/forest of s contains two or more nodes. 3. ∀r ∈R(t) ∧node(T(r)) ≥2, T(s) ⊆T(r), the tree/forest of s is the subgraph of that of any r containing two or more nodes. 1: Input: a source tree T = T(f J 1 ), a target string S = eI 1, and word alignment A between them 2: R := ∅ 3: for u := 0 to J −1 do 4: for v := 1 to J −u do 5: identify the triple set T corresponding to span (v, v + u) 6: for each triple t = ⟨T ′, S′, A′⟩∈T do 7: if ⟨T ′, S′⟩is not consistent with A then 8: continue 9: end if 10: if u = 0 ∧node(T ′) = 1 then 11: add t to R 12: add ⟨root(T ′), “X”, 1:1⟩to R 13: else 14: compute the skeleton s of the triple t 15: register rules that are built on s using rules extracted from the sub-triples of t: R := R ∪build(s, R) 16: end if 17: end for 18: end for 19: end for 20: Output: rule set R Figure 2: Rule extraction algorithm. Given the skeleton and rules extracted from the sub-triples, the rules for the triple can be acquired. 
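As a rough illustration of the build(s, R) step in the algorithm above, the sketch below simply enumerates the cross-product of previously extracted sub-rules over the skeleton's nonterminal slots. Alignment adjustment and source-tree substitution are deliberately omitted, and the data layout is an assumption made only to reproduce the counts of the worked example that follows.

```python
from itertools import product

def build(skeleton_target, subrules_per_slot):
    """Very simplified view of build(s, R): every way of filling the skeleton's
    nonterminal slots with an already-extracted sub-rule yields one new rule.
    Alignment bookkeeping and source-side substitution are omitted here."""
    new_rules = []
    for choice in product(*subrules_per_slot):
        new_rules.append((skeleton_target, choice))
    return new_rules

# Counts for the worked example below: three rules rooted at NP and two rooted
# at SB expand the skeleton <( NP ) ( SB ), "X1 X2"> into 3 * 2 = 6 new rules.
np_rules = [("( NP )", "X"), ("( NP ( NN ) )", "X"), ("( NP ( NN ) )", "The gunman")]
sb_rules = [("( SB )", "X"), ("( SB )", "was")]
print(len(build("X1 X2", [np_rules, sb_rules])))   # -> 6
```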
For example, the algorithm identifies the following triple for span (1, 2) in Figure 1: ⟨( NP ( NN ) ) ( SB ),“The gunman was”, 1:1 1:2 2:3⟩ The skeleton of the triple is: ⟨( NP ) ( SB ),“X1 X2”, 1:1 2:2⟩ As the algorithm proceeds bottom-up, five rules have already been extracted from the sub-triples, rooted at “NP” and “SB” respectively: ⟨( NP ),“X”, 1:1⟩ ⟨( NP ( NN ) ),“X”, 1:1⟩ ⟨( NP ( NN ) ),“The gunman”, 1:1 1:2⟩ 706 ⟨( SB ),“X”, 1:1⟩ ⟨( SB ),“was”, 1:1⟩ Hence, we can obtain new rules by replacing the source and target symbols of the skeleton with corresponding rules and also by modifying the alignment information. For the above triple, the combination of the five rules produces 2 × 3 = 6 new rules: ⟨( NP ) ( SB ),“X1 X2”, 1:1 2:2⟩ ⟨( NP ) ( SB ),“X was”, 1:1 2:2⟩ ⟨( NP ( NN ) ) ( SB ),“X1 X2”, 1:1 2:2⟩ ⟨( NP ( NN ) ) ( SB ),“X was”, 1:1 2:2⟩ ⟨( NP ( NN ) ) ( SB ),“The gunman X”, 1:1 1:2⟩ ⟨( NP ( NN ) ) ( SB ),“The gunman was”, 1:1 1:2 2:3⟩ Since we need only to check the alignment consistency, in principle all phrase pairs can be captured by tree-to-string and forest-to-string rules. To lower the complexity for both training and decoding, we impose four restrictions: 1. Both the first and the last symbols in the target string must be aligned to some source symbols. 2. The height of a tree or forest is no greater than h. 3. The number of direct descendants of a node is no greater than c. 4. The number of leaf nodes is no greater than l. Although possible, it is infeasible to learn auxiliary rules from training data. To extract an auxiliary rule which integrates at least one forest-to-string rule, one need traverse the parse tree upwards until one reaches a node that subsumes the entire forest without violating the alignment consistency. This usually results in very complex auxiliary rules, especially on real-world training data, making both training and decoding very slow. As a result, we construct auxiliary rules in decoding instead. 4 Decoding Given a source parse tree T(fJ 1 ), our decoder finds the target yield of the single best derivation that has source yield of T(fJ 1 ): ˆS = argmax S,A Pr(T, S, A) = argmax S,A  θi∈Θ,c(Θ)=⟨T,S,A⟩  rj∈θi p(rj) 1: Input: a source parse tree T = T(fJ 1 ) 2: for u := 0 to J −1 do 3: for v := 1 to J −u do 4: for each T ′ spanning from v to v + u do 5: if T ′ is a tree then 6: for each usable tree-to-string rule r do 7: for each derivation θ inferred from r and derivations in matrix do 8: add θ to matrix[v, v + u, root(T ′)] 9: end for 10: end for 11: search subcell divisions D[v, v + u] 12: for each subcell division d ∈D[v, v + u] do 13: if d contains at least one forest cell then 14: construct auxiliary rule ra 15: for each derivation θ inferred from ra and derivations in matrix do 16: add θ to matrix[v, v + u, root(T ′)] 17: end for 18: end if 19: end for 20: else 21: for each usable forest-to-string rule r do 22: for each derivation θ inferred from r and derivations in matrix do 23: add θ to matrix[v, v + u, “”] 24: end for 25: end for 26: search subcell divisions D[v, v + u] 27: end if 28: end for 29: end for 30: end for 31: find the best derivation ˆθ in matrix[1, J, root(T)] and get the best translation ˆS = e(ˆθ) 32: Output: a target string ˆS Figure 3: Decoding algorithm. ≈ argmax S,A,θ  rj∈θ,c(θ)=⟨T,S,A⟩ p(rj) (2) Figure 3 demonstrates the decoding algorithm. It organizes the derivations into an array matrix whose cells matrix[j1, j2, X] are sets of derivations. [j1, j2, X] represents a tree/forest rooted at X spanning from j1 to j2. 
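One way the decoder's matrix could be realized is sketched below: a dictionary keyed by (j1, j2, root) whose cells keep a bounded number of top-scoring derivations, anticipating the pruning limits discussed later. The interface is an illustrative assumption, not the released decoder.

```python
from collections import defaultdict

class Chart:
    """Bottom-up decoding chart: matrix[j1, j2, root] keeps at most `beam`
    highest-scoring derivations for the tree/forest rooted at `root` that
    spans positions j1..j2 of the input parse tree."""
    def __init__(self, beam: int = 100):
        self.beam = beam
        self.cells = defaultdict(list)   # (j1, j2, root) -> [(score, derivation), ...]

    def add(self, j1, j2, root, score, derivation):
        cell = self.cells[(j1, j2, root)]
        cell.append((score, derivation))
        cell.sort(key=lambda entry: entry[0], reverse=True)
        del cell[self.beam:]             # histogram pruning to the beam size

    def best(self, j1, j2, root):
        cell = self.cells[(j1, j2, root)]
        return cell[0][1] if cell else None
```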
We use the empty string “” to denote the pseudo root of a forest. Next, we will explain how to infer derivations for a tree/forest provided a usable rule. If T(r) = T′, there is only one derivation which contains only the rule r. This usually happens for leaf nodes. If T(r) ⊂T ′, the rule r resorts to derivations from subcells to infer new derivations. Suppose that the decoder is to translate the source tree in Figure 1 and finds a usable rule for [1, 5, “IP”]: ⟨( IP ( NP ) ( VP ) ( PU ) ),“X1 X2 X3”, 1:1 2:2 3:3⟩ 707 Subcell Division Auxiliary Rule [1, 1][2, 2][3, 5] ( IP ( NP ) ( VP ( SB ) ( VP ) ) ( PU ) ) X1 X2 X3 1:1 2:2 3:3 4:3 [1, 2][3, 4][5, 5] ( IP ( NP ) ( VP ( SB ) ( VP ) ) ( PU ) ) X1 X2 X3 1:1 2:1 3:2 4:3 [1, 3][4, 5] ( IP ( NP ) ( VP ( SB ) ( VP ( NP ) ( VV ) ) ) ( PU ) ) X1 X2 1:1 2:1 3:1 4:2 5:2 [1, 1][2, 5] ( IP ( NP ) ( VP ) ( PU ) ) X1 X2 1:1 2:2 3:2 Table 3: Subcell divisions and corresponding auxiliary rules for the source tree in Figure 1 Since the decoding algorithm proceeds in a bottom-up fashion, the uncovered portions have already been translated. For [1, 1, “NP”], suppose that we can find a derivation in matrix: ⟨( NP ( NN ) ),“The gunman”, 1:1 1:2⟩ For [2, 4, “VP”], we find a derivation in matrix: ⟨( VP ( SB ) ( VP ( NP ( NN )) (VV ) ) ), “was killed by X”, 1:1 2:4 3:2⟩ ⟨( NN ),“police”, 1:1⟩ For [5, 5, “PU”], we find a derivation in matrix: ⟨( PU ),“.”, 1:1⟩ Henceforth, we get a derivation for [1, 5, “IP”], shown in Table 1. A translation rule r is said to be usable to an input tree/forest T′ if and only if: 1. T(r) ⊆T ′, the tree/forest of r is the subgraph of T ′. 2. root(T(r)) = root(T ′), the root sequence of T(r) is identical to that of T′. For example, the following rules are usable to the tree “( NP ( NR ) ( NN ) )”: ⟨( NP ( NR ) ( NN ) ),“X1 X2”, 1:2 2:1⟩ ⟨( NP ( NR ) ( NN ) ),“China X”, 1:1 2:2⟩ ⟨( NP ( NR ) ( NN  ) ),“China economy”, 1:1 2:2⟩ Similarly, the forest-to-string rule ⟨( ( NP ( NR ) ( NN ) ) ( VP ) ),“X1 X2 X3”, 1:2 2:1 3:3⟩ is usable to the forest ( NP ( NR  ) ( NN ) ) ( VP (VV  )( NN  ) ) As we mentioned before, auxiliary rules are special unlexicalized tree-to-string rules that are built in decoding rather than learnt from real-world data. To get an auxiliary rule for a cell, we need first identify its subcell division. A cell sequence c1, c2, . . . , cn is referred to as a subcell division of a cell c if and only if: 1. c1.begin = c.begin 1: Input: a cell [j1, j2], the derivation array matrix, the subcell division array D 2: if j1 = j2 then 3: ˆp := 0 4: for each derivation θ in matrix[j1, j2, ·] do 5: ˆp := max(p(θ), ˆp) 6: end for 7: add {[j1, j2]} : ˆp to D[j1, j2] 8: else 9: if [j1, j2] is a forest cell then 10: ˆp := 0 11: for each derivation θ in matrix[j1, j2, ·] do 12: ˆp := max(p(θ), ˆp) 13: end for 14: add {[j1, j2]} : ˆp to D[j1, j2] 15: end if 16: for j := j1 to j2 −1 do 17: for each division d1 ∈D[j1, j] do 18: for each division d2 ∈D[j + 1, j2] do 19: create a new division: d := d1 ⊕d2 20: add d to D[j1, j2] 21: end for 22: end for 23: end for 24: end if 25: Output: subcell divisions D[j1, j2] Figure 4: Subcell division search algorithm. 2. cn.end = c.end 3. cj.end + 1 = cj+1.begin, 1 ≤j < n Given a subcell division, it is easy to construct the auxiliary rule for a cell. For each subcell, one need transverse the parse tree upwards until one reaches nodes that subsume it. All descendants of these nodes are dropped. The target string consists of only nonterminals, the number of which is identical to that of subcells. 
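To make subcell divisions concrete, the sketch below enumerates all contiguous divisions of a cell. It reproduces the 2^(n-1) count for a cell of length n, but it omits the probability bookkeeping and the forest-cell priority used by the actual search in Figure 4.

```python
def subcell_divisions(j1, j2):
    """Enumerate all contiguous divisions of the cell [j1, j2]; a cell of
    length n has 2**(n-1) of them."""
    if j1 == j2:
        return [[(j1, j2)]]
    divisions = [[(j1, j2)]]                      # the undivided cell itself
    for j in range(j1, j2):                       # try every split point
        for left in subcell_divisions(j1, j):
            for right in subcell_divisions(j + 1, j2):
                division = left + right
                if division not in divisions:     # deduplicate overlapping splits
                    divisions.append(division)
    return divisions

print(len(subcell_divisions(1, 5)))   # 2**(5-1) = 16 divisions for a length-5 cell
```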
To limit the search space, we assume that the alignment between the source tree and the target string is monotone. Table 3 shows some subcell divisions and corresponding auxiliary rules constructed for the source tree in Figure 1. For simplicity, we ignore the root node label. There are 2n−1 subcell divisions for a cell which has a length of n. We need only consider the sub708 cell divisions which contain at least one forest cell because tree-to-string rules have already explored those contain only tree cells. The actual search algorithm for subcell divisions is shown in Figure 4. We use matrix[j1, j2, ·] to denote all trees or forests spanning from j1 to j2. The subcell divisions and their associated probabilities are stored in an array D. We define an operator ⊕ between two divisions: their cell sequences are concatenated and the probabilities are accumulated. As sometimes there are no usable rules available, we introduce default rules to ensure that we can always get a translation for any input parse tree. A default rule is a tree-to-string rule 5, built in two ways: 1. If the input tree contains only one node, the target string of the default rule is equal to the source string. 2. If the height of the input tree is greater than one, the tree of the default rule contains only the root node and its direct descendants of the input tree, the string contains only nonterminals, and the alignment is monotone. To speed up the decoder, we limit the search space by reducing the number of rules used for each cell. There are two ways to limit the rule table size: by a fixed limit a of how many rules are retrieved for each cell, and by a probability threshold α that specify that the rule probability has to be above some value. Also, instead of keeping the full list of derivations for a cell, we store a top-scoring subset of the derivations. This can also be done by a fixed limit b or a threshold β. The subcell division array D, in which divisions containing forest cells have priority over those composed of only tree cells, is pruned by keeping only a-best divisions. Following Och and Ney (2002), we base our model on log-linear framework and adopt the seven feature functions described in (Liu et al., 2006). It is very important to balance the preference between conventional tree-to-string rules and the newlyintroduced forest-to-string and auxiliary rules. As the probabilities of auxiliary rules are not learnt from training data, we add a feature that sums up the 5There are no default rules for forests because only tree-tostring rules are essential to tree-to-string translation models. node count of auxiliary rules of a derivation to penalize the use of forest-to-string and auxiliary rules. 5 Experiments In this section, we report on experiments with Chinese-to-English translation. The training corpus consists of 31, 149 sentence pairs with 843, 256 Chinese words and 949, 583 English words. For the language model, we used SRI Language Modeling Toolkit (Stolcke, 2002) to train a trigram model with modified Kneser-Ney smoothing (Chen and Goodman, 1998) on the 31, 149 English sentences. We selected 571 short sentences from the 2002 NIST MT Evaluation test set as our development corpus, and used the 2005 NIST MT Evaluation test set as our test corpus. Our evaluation metric is BLEU-4 (Papineni et al., 2002), as calculated by the script mteval-v11b.pl with its default setting except that we used case-sensitive matching of n-grams. 
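Returning briefly to the model score, the sketch below shows how a derivation could be scored under the log-linear framework described above, using only two illustrative features: the accumulated log rule probabilities and the auxiliary-rule node-count penalty. The types, feature names, and interface are assumptions; the full system uses the seven features of Liu et al. (2006) plus the penalty, with weights tuned by the minimum error rate training described next.

```python
import math
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ScoredRule:
    prob: float          # rule translation probability (relative frequency)
    node_count: int      # number of source-side nodes in the rule
    is_auxiliary: bool   # True for auxiliary rules constructed during decoding

def derivation_score(rules: List[ScoredRule], weights: Dict[str, float]) -> float:
    """Log-linear score of a derivation as a weighted sum of feature values.
    The auxiliary node-count feature would typically receive a negative weight
    so that forest-to-string and auxiliary rules are penalized."""
    features = {
        "log_rule_prob": sum(math.log(r.prob) for r in rules),
        "aux_node_count": float(sum(r.node_count for r in rules if r.is_auxiliary)),
    }
    return sum(weights[name] * value for name, value in features.items())
```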
To perform minimum error rate training (Och, 2003) to tune the feature weights to maximize the system’s BLEU score on development set, we used the script optimizeV5IBMBLEU.m (Venugopal and Vogel, 2005). We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions using its default setting, and then applied the refinement rule “diagand” described in (Koehn et al., 2003) to obtain a single many-to-many word alignment for each sentence pair. Next, we employed a Chinese parser written by Deyi Xiong (Xiong et al., 2005) to parse all the 31, 149 Chinese sentences. The parser was trained on articles 1-270 of Penn Chinese Treebank version 1.0 and achieved 79.4% in terms of F1 measure. Given the word-aligned, source side parsed bilingual corpus, we obtained bilingual phrases using the training toolkits publicly released by Philipp Koehn with its default setting. Then, we applied extraction algorithm described in Figure 2 to extract both tree-to-string and forest-to-string rules by restricting h = 3, c = 5, and l = 7. All the rules, including bilingual phrases, tree-to-string rules, and forest-tostring rules, are filtered for the development and test sets. According to different levels of lexicalization, we divide translation rules into three categories: 709 Rule L P U Total BP 251, 173 0 0 251, 173 TR 56, 983 41, 027 3, 529 101, 539 FR 16, 609 254, 346 25, 051 296, 006 Table 4: Number of rules used in experiments (BP: bilingual phrase, TR: tree-to-string rule, FR: forestto-string rule; L: lexicalized, P: partial lexicalized, U: unlexicalized). System Rule Set BLEU4 Pharaoh BP 0.2182 ± 0.0089 BP 0.2059 ± 0.0083 TR 0.2302 ± 0.0089 Lynx TR + BP 0.2346 ± 0.0088 TR + FR + AR 0.2402 ± 0.0087 Table 5: Comparison of Pharaoh and Lynx with different rule sets. 1. lexicalized: all symbols in both the source and target strings are terminals 2. unlexicalized: all symbols in both the source and target strings are nonterminals 3. partial lexicalized: otherwise Table 4 shows the statistics of rules used in our experiments. We find that even though forest-to-string rules are introduced the total number (i.e. 73, 592) of lexicalized tree-to-string and forest-to-string rules is still far less than that (i.e. 251, 173) of bilingual phrases. This difference results from the restriction we impose in training that both the first and last symbols in the target string must be aligned to some source symbols. For the forest-to-string rules, partial lexicalized ones are in the majority. We compared our system Lynx against a freely available phrase-based decoder Pharaoh (Koehn et al., 2003). For Pharaoh, we set a = 20, α = 0, b = 100, β = 10−5, and distortion limit dl = 4. For Lynx, we set a = 20, α = 0, b = 100, and β = 0. Two postprocessing procedures ran to improve the outputs of both systems: OOVs removal and recapitalization. Table 5 shows results on test set using Pharaoh and Lynx with different rule sets. Note that Lynx is capable of using only bilingual phrases plus deForest-to-String Rule Set BLEU4 None 0.2225 ± 0.0085 L 0.2297 ± 0.0081 P 0.2279 ± 0.0083 U 0.2270 ± 0.0087 L + P + U 0.2312 ± 0.0082 Table 6: Effect of lexicalized, partial lexicalized, and unlexicalized forest-to-string rules. fault rules to perform monotone search. The 95% confidence intervals were computed using Zhang’s significance tester (Zhang et al., 2004). We modified it to conform to NIST’s current definition of the BLEU brevity penalty. We find that Lynx outperforms Pharaoh significantly. 
The integration of forest-to-string rules achieves an absolute improvement of 1.0% (4.3% relative) over using tree-tostring rules only. This difference is statistically significant (p < 0.01). It also achieves better result than treating bilingual phrases as lexicalized tree-tostring rules. To produce the best result of 0.2402, Lynx made use of 26, 082 tree-to-string rules, 9, 219 default rules, 5, 432 forest-to-string rules, and 2, 919 auxiliary rules. This suggests that tree-to-string rules still play a central role, although the integration of forest-to-string and auxiliary rules is really beneficial. Table 6 demonstrates the effect of forest-to-string rules with different lexicalization levels. We set a = 3, α = 0, b = 10, and β = 0. The second row “None” shows the result of using only tree-to-string rules. “L” denotes using tree-to-string rules and lexicalized forest-to-string rules. Similarly, “L+P+U” denotes using tree-to-string rules and all forest-tostring rules. We find that lexicalized forest-to-string rules are more useful. 6 Conclusion In this paper, we introduce forest-to-string rules to capture non-syntactic phrase pairs that are usually unaccessible to traditional tree-to-string translation models. With the help of auxiliary rules, forest-tostring rules can be integrated into tree-to-string models to offer more general derivations. Experiment results show that the tree-to-string model augmented with forest-to-string rules significantly outperforms 710 the original model which allows tree-to-string rules only. Our current rule extraction algorithm attaches the unaligned target words to the nearest ascendants that subsume them. This constraint hampers the expressive power of our model. We will try a more general way as suggested in (Galley et al., 2006), making no a priori assumption about assignment and using EM training to learn the probability distribution. We will also conduct experiments on large scale training data to further examine our design philosophy. Acknowledgement This work was supported by National Natural Science Foundation of China, Contract No. 60603095 and 60573188. References Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Harvard University Center for Research in Computing Technology. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263–270, Ann Arbor, Michigan, June. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT/NAACL 2004, pages 273–280, Boston, Massachusetts, USA, May. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING/ACL 2006, pages 961–968, Sydney, Australia, July. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT/NAACL 2003, pages 127–133, Edmonton, Canada, May. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of COLING/ACL 2006, pages 609–616, Sydney, Australia, July. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. Spmt: Statistical machine translation with syntactified target language phrases. In Proceedings of EMNLP 2006, pages 44–52, Sydney, Australia, July. Franz J. Och and Hermann Ney. 2000. 
Improved statistical alignment models. In Proceedings of ACL 2000, pages 440–447. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL 2002, pages 295–302. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL 2003, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311–318, Philadephia, USA, July. Chris Quirk and Simon Corston-Oliver. 2006. The impact of parse quality on syntactically-informed statistical machine translation. In Proceedings of EMNLP 2006, pages 62–69, Sydney, Australia, July. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of ACL 2005, pages 271–279, Ann Arbor, Michigan, June. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of International Conference on Spoken Language Processing, volume 30, pages 901–904. Ashish Venugopal and Stephan Vogel. 2005. Considerations in maximum mutual information and minimum classification error training for statistical machine translation. In Proceedings of the Tenth Conference of the European Association for Machine Translation, pages 271–279. Deyi Xiong, Shuanglong Li, Qun Liu, and Shouxun Lin. 2005. Parsing the penn chinese treebank with semantic knowledge. In Proceedings of IJCNLP 2005, pages 70–81. Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting bleu/nist scores how much improvement do we need to have a better system? In Proceedings of Fourth International Conference on Language Resources and Evaluation, pages 2051–2054. 711
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 65–72, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Bayesian Model for Discovering Typological Implications Hal Daum´e III School of Computing University of Utah [email protected] Lyle Campbell Department of Linguistics University of Utah [email protected] Abstract A standard form of analysis for linguistic typology is the universal implication. These implications state facts about the range of extant languages, such as “if objects come after verbs, then adjectives come after nouns.” Such implications are typically discovered by painstaking hand analysis over a small sample of languages. We propose a computational model for assisting at this process. Our model is able to discover both well-known implications as well as some novel implications that deserve further study. Moreover, through a careful application of hierarchical analysis, we are able to cope with the well-known sampling problem: languages are not independent. 1 Introduction Linguistic typology aims to distinguish between logically possible languages and actually observed languages. A fundamental building block for such an understanding is the universal implication (Greenberg, 1963). These are short statements that restrict the space of languages in a concrete way (for instance “object-verb ordering implies adjective-noun ordering”); Croft (2003), Hawkins (1983) and Song (2001) provide excellent introductions to linguistic typology. We present a statistical model for automatically discovering such implications from a large typological database (Haspelmath et al., 2005). Analyses of universal implications are typically performed by linguists, inspecting an array of 30100 languages and a few pairs of features. Looking at all pairs of features (typically several hundred) is virtually impossible by hand. Moreover, it is insufficient to simply look at counts. For instance, results presented in the form “verb precedes object implies prepositions in 16/19 languages” are nonconclusive. While compelling, this is not enough evidence to decide if this is a statistically well-founded implication. For one, maybe 99% of languages have prepositions: then the fact that we’ve achieved a rate of 84% actually seems really bad. Moreover, if the 16 languages are highly related historically or areally (geographically), and the other 3 are not, then we may have only learned something about geography. In this work, we propose a statistical model that deals cleanly with these difficulties. By building a computational model, it is possible to apply it to a very large typological database and search over many thousands of pairs of features. Our model hinges on two novel components: a statistical noise model a hierarchical inference over language families. To our knowledge, there is no prior work directly in this area. The closest work is represented by the books Possible and Probable Languages (Newmeyer, 2005) and Language Classification by Numbers (McMahon and McMahon, 2005), but the focus of these books is on automatically discovering phylogenetic trees for languages based on Indo-European cognate sets (Dyen et al., 1992). 2 Data The database on which we perform our analysis is the World Atlas of Language Structures (Haspelmath et al., 2005). 
This database contains information about 2150 languages (sampled from across the world; Figure 1 depicts the locations of lan65 Numeral Glottalized Number of Language Classifiers Rel/N Order O/V Order Consonants Tone Genders English Absent NRel VO None None Three Hindi Absent RelN OV None None Two Mandarin Obligatory RelN VO None Complex None Russian Absent NRel VO None None Three Tukang Besi Absent ? Either Implosives None Three Zulu Absent NRel VO Ejectives Simple Five+ Table 1: Example database entries for a selection of diverse languages and features. −150 −100 −50 0 50 100 150 −40 −20 0 20 40 60 Figure 1: Map of the 2150 languages in the database. guages). There are 139 features in this database, broken down into categories such as “Nominal Categories,” “Simple Clauses,” “Phonology,” “Word Order,” etc. The database is sparse: for many language/feature pairs, the feature value is unknown. In fact, only about 16% of all possible language/feature pairs are known. A sample of five languages and six features from the database are shown in Table 1. Importantly, the density of samples is not random. For certain languages (eg., English, Chinese, Russian), nearly all features are known, whereas other languages (eg., Asturian, Omagua, Frisian) that have fewer than five feature values known. Furthermore, some features are known for many languages. This is due to the fact that certain features take less effort to identify than others. Identifying, for instance, if a language has a particular set of phonological features (such as glottalized consonants) requires only listening to speakers. Other features, such as determining the order of relative clauses and nouns require understanding much more of the language. 3 Models In this section, we propose two models for automatically uncovering universal implications from noisy, sparse data. First, note that even well attested implications are not always exceptionless. A common example is that verbs preceding objects (“VO”) implies adjectives following nouns (“NA”). This implication (VO ⊃NA) has one glaring exception: English. This is one particular form of noise. Another source of noise stems from transcription. WALS contains data about languages documented by field linguists as early as the 1900s. Much of this older data was collected before there was significant agreement in documentation style. Different field linguists often had different dimensions along which they segmented language features into classes. This leads to noise in the properties of individual languages. Another difficulty stems from the sampling problem. This is a well-documented issue (see, eg., (Croft, 2003)) stemming from the fact that any set of languages is not sampled uniformly from the space of all probable languages. Politically interesting languages (eg., Indo-European) and typologically unusual languages (eg., Dyirbal) are better documented than others. Moreover, languages are not independent: German and Dutch are more similar than German and Hindi due to history and geography. The first model, FLAT, treats each language as independent. It is thus susceptible to sampling problems. For instance, the WALS database contains a half dozen versions of German. The FLAT model considers these versions of German just as statistically independent as, say, German and Hindi. To cope with this problem, we then augment the FLAT model into a HIERarchical model that takes advantage of known hierarchies in linguistic phylogenetics. 
The HIER model explicitly models the fact that individual languages are not independent and exhibit strong familial dependencies. In both models, we initially restrict our attention to pairs of features. We will describe our models as if all features are binary. We expand any multi-valued feature with K values into K binary features in a “one versus rest” manner. 3.1 The FLAT Model In the FLAT model, we consider a 2 × N matrix of feature values. The N corresponds to the number of languages, while the 2 corresponds to the two features currently under consideration (eg., object/verb order and noun/adjective order). The order of the 66 two features is important: f1 implies f2 is logically different from f2 implies f1. Some of the entries in the matrix will be unknown. We may safely remove all languages from consideration for which both are unknown, but we do not remove languages for which only one is unknown. We do so because our model needs to capture the fact that if f2 is always true, then f1 ⊃f2 is uninteresting. The statistical model is set up as follows. There is a single variable (we will denote this variable “m”) corresponding to whether the implication holds. Thus, m = 1 means that f1 implies f2 and m = 0 means that it does not. Independent of m, we specify two feature priors, π1 and π2 for f1 and f2 respectively. π1 specifies the prior probability that f1 will be true, and π2 specifies the prior probability that f2 will be true. One can then put the model together na¨ıvely as follows. If m = 0 (i.e., the implication does not hold), then the entire data matrix is generated by choosing values for f1 (resp., f2) independently according to the prior probability π1 (resp., π2). On the other hand, if m = 1 (i.e., the implication does hold), then the first column of the data matrix is generated by choosing values for f1 independently by π1, but the second column is generated differently. In particular, if for a particular language, we have that f1 is true, then the fact that the implication holds means that f2 must be true. On the other hand, if f1 is false for a particular language, then we may generate f2 according to the prior probability π2. Thus, having m = 1 means that the model is significantly more constrained. In equations: p(f1 | π1) = πf1 1 (1 −π1)1−f1 p(f2 | f1, π2, m) = f2 m = f1 = 1 πf2 2 (1 −π2)1−f2 otherwise The problem with this na¨ıve model is that it does not take into account the fact that there is “noise” in the data. (By noise, we refer either to misannotations, or to “strange” languages like English.) To account for this, we introduce a simple noise model. There are several options for parameterizing the noise, depending on what independence assumptions we wish to make. One could simply specify a noise rate for the entire data set. One could alternatively specify a language-specific noise rate. Or one could specify a feature-specific noise rate. We opt for a blend between the first and second opFigure 2: Graphical model for the FLAT model. tion. We assume an underlying noise rate for the entire data set, but that, conditioned on this underlying rate, there is a language-specific noise level. We believe this to be an appropriate noise model because it models the fact that the majority of information for a single language is from a single source. Thus, if there is an error in the database, it is more likely that other errors will be for the same languages. 
In order to model this statistically, we assume that there are latent variables e1,n and e2,n for each language n. If e1,n = 1, then the first feature for language n is wrong. Similarly, if e2,n = 1, then the second feature for language n is wrong. Given this model, the probabilities are exactly as in the na¨ıve model, with the exception that instead of using f1 (resp., f2), we use the exclusive-or1 f1 ⊗e1 (resp., f2 ⊗e2) so that the feature values are flipped whenever the noise model suggests an error. The graphical model for the FLAT model is shown in Figure 2. Circular nodes denote random variables and arrows denote conditional dependencies. The rectangular plate denotes the fact that the elements contained within it are replicated N times (N is the number of languages). In this model, there are four “root” nodes: the implication value m; the two feature prior probabilities π1 and π2; and the languagespecific error rate ǫ. On all of these nodes we place Bayesian priors. Since m is a binary random variable, we place a Bernoulli prior on it. The πs are Bernoulli random variables, so they are given independent Beta priors. Finally, the noise rate ǫ is also given a Beta prior. For the two Beta parameters governing the error rate (i.e., aǫ and bǫ) we set these by hand so that the mean expected error rate is 5% and the probability of the error rate being between 0% and 10% is 50% (this number is based on an expert opinion of the noise-rate in the data). For the rest of 1The exclusive-or of a and b, written a ⊗b, is true exactly when either a or b is true but not both. 67 the parameters we use uniform priors. 3.2 The HIER Model A significant difficulty in working with any large typological database is that the languages will be sampled nonuniformly. In our case, this means that implications that seem true in the FLAT model may only be true for, say, Indo-European, and the remaining languages are considered noise. While this may be interesting in its own right, we are more interested in discovering implications that are truly universal. We model this using a hierarchical Bayesian model. In essence, we take the FLAT model and build a notion of language relatedness into it. In particular, we enforce a hierarchy on the m implication variables. For simplicity, suppose that our “hierarchy” of languages is nearly flat. Of the N languages, half of them are Indo-European and the other half are Austronesian. We will use a nearly identical model to the FLAT model, but instead of having a single m variable, we have three: one for IE, one for Austronesian and one for “all languages.” For a general tree, we assign one implication variable for each node (including the root and leaves). The goal of the inference is to infer the value of the m variable corresponding to the root of the tree. All that is left to specify the full HIER model is to specify the probability distribution of the m random variables. We do this as follows. We place a zero mean Gaussian prior with (unknown) variance σ2 on the root m. Then, for a non-root node, we use a Gaussian with mean equal to the “m” value of the parent and tied variance σ2. In our three-node example, this means that the root is distributed Nor(0, σ2) and each child is distributed Nor(mroot, σ2), where mroot is the random variable corresponding to the root. Finally, the leaves (corresponding to the languages themselves) are distributed logistic-binomial. 
Thus, the m random variable corresponding to a leaf (language) is distributed Bin(s(mpar)), where mpar is the m value for the parent (internal) node and s is the sigmoid function s(x) = [1 + exp(−x)]−1. The intuition behind this model is that the m value at each node in the tree (where a node is either “all languages” or a specific language family or an individual language) specifies the extent to which the implication under consideration holds for that node. A large positive m means that the implication is very likely to hold. A large negative value means it is very likely to not hold. The normal distributions across edges in the tree indicate that we expect the m values not to change too much across the tree. At the leaves (i.e., individual languages), the logisticbinomial simply transforms the real-valued ms into the range [0, 1] so as to make an appropriate input to the binomial distribution. 4 Statistical Inference In this section, we describe how we use Markov chain Monte Carlo methods to perform inference in the statistical models described in the previous section; Andrieu et al. (2003) provide an excellent introduction to MCMC techniques. The key idea behind MCMC techniques is to approximate intractable expectations by drawing random samples from the probability distribution of interest. The expectation can then be approximated by an empirical expectation over these sample. For the FLAT model, we use a combination of Gibbs sampling with rejection sampling as a subroutine. Essentially, all sampling steps are standard Gibbs steps, except for sampling the error rates e. The Gibbs step is not available analytically for these. Hence, we use rejection sampling (drawing from the Beta prior and accepting according to the posterior). The sampling procedure for the HIER model is only slightly more complicated. Instead of performing a simple Gibbs sample for m in Step (4), we first sample the m values for the internal nodes using simple Gibbs updates. For the leaf nodes, we use rejection sampling. For this rejection, we draw proposal values from the Gaussian specified by the parent m, and compute acceptance probabilities. In all cases, we run the outer Gibbs sampler for 1000 iterations and each rejection sampler for 20 iterations. We compute the marginal values for the m implication variables by averaging the sampled values after dropping 200 “burn-in” iterations. 5 Data Preprocessing and Search After extracting the raw data from the WALS electronic database (Haspelmath et al., 2005)2, we perform a minor amount of preprocessing. Essentially, we have manually removed certain feature 2This is nontrivial—we are currently exploring the possibility of freely sharing these data. 68 values from the database because they are underrepresented. For instance, the “Glottalized Consonants” feature has eight possible values (one for “none” and seven for different varieties of glottalized consonants). We reduce this to simply two values “has” or “has not.” 313 languages have no glottalized consonants and 139 have some variety of glottalized consonant. We have done something similar with approximately twenty of the features. For the HIER model, we obtain the hierarchy in one of two ways. The first hierarchy we use is the “linguistic hierarchy” specified as part of the WALS data. This hierarchy divides languages into families and subfamilies. This leads to a tree with the leaves at depth four. 
The root has 38 immediate children (corresponding to the major families), and there are a total of 314 internal nodes. The second hierarchy we use is an areal hierarchy obtained by clustering languages according to their latitude and longitude. For the clustering we first cluster all the languages into 6 “macro-clusters.” We then cluster each macro-cluster individually into 25 “micro-clusters.” These micro-clusters then have the languages at their leaves. This yields a tree with 31 internal nodes. Given the database (which contains approximately 140 features), performing a raw search even over all possible pairs of features would lead to over 19, 000 computations. In order to reduce this space to a more manageable number, we filter: • There must be at least 250 languages for which both features are known. • There must be at least 15 languages for which both feature values hold simultaneously. • Whenever f1 is true, at least half of the languages also have f2 true. Performing all these filtration steps reduces the number of pairs under consideration to 3442. While this remains a computationally expensive procedure, we were able to perform all the implication computations for these 3442 possible pairs in about a week on a single modern machine (in Matlab). 6 Results The task of discovering universal implications is, at its heart, a data-mining task. As such, it is difficult to evaluate, since we often do not know the correct answers! If our model only found well-documented implications, this would be interesting but useless from the perspective of aiding linguists focus their energies on new, plausible implications. In this section, we present the results of our method, together with both a quantitative and qualitative evaluation. 6.1 Quantitative Evaluation In this section, we perform a quantitative evaluation of the results based on predictive power. That is, one generally would prefer a system that finds implications that hold with high probability across the data. The word “generally” is important: this quality is neither necessary nor sufficient for the model to be good. For instance, finding 1000 implications of the form A1 ⊃X, A2 ⊃X, . . . , A1000 ⊃X is completely uninteresting if X is true in 99% of the cases. Similarly, suppose that a model can find 1000 implications of the form X ⊃A1, . . . , X ⊃A1000, but X is only true in five languages. In both of these cases, according to a “predictive power” measure, these would be ideal systems. But they are both somewhat uninteresting. Despite these difficulties with a predictive powerbased evaluation, we feel that it is a good way to understand the relative merits of our different models. Thus, we compare the following systems: FLAT (our proposed flat model), LINGHIER (our model using the phylogenetic hierarchy), DISTHIER (our model using the areal hierarchy) and RANDOM (a model that ranks implications—that meet the three qualifications from the previous section—randomly). The models are scored as follows. We take the entire WALS data set and “hide” a random 10% of the entries. We then perform full inference and ask the inferred model to predict the missing values. The accuracy of the model is the accuracy of its predictions. To obtain a sense of the quality of the ranking, we perform this computation on the top k ranked implications provided by each model; k ∈{2, 4, 8, . . . , 512, 1024}. The results of this quantitative evaluation are shown in Figure 3 (on a log-scale for the x-axis). 
The two best-performing models are the two hierarchical models. The flat model does significantly worse and the random model does terribly. The vertical lines are a standard deviation over 100 folds of the experiment (hiding a different 10% each time). The difference between the two hierarchical models is typically not statistically significant. At the top of the ranking, the model based on phylogenetic 69 0 1 2 3 4 5 6 7 8 9 10 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 Number of Implications (log2) Prediction Accuracy LingHier DistHier Flat Random Figure 3: Results of quantitative (predictive) evaluation. Top curves are the hierarchical models; middle is the flat model; bottom is the random baseline. information performs marginally better; at the bottom of the ranking, the order flips. Comparing the hierarchical models to the flat model, we see that adequately modeling the a priori similarity between languages is quite important. 6.2 Cross-model Comparison The results in the previous section support the conclusion that the two hierarchical models are doing something significantly different (and better) than the flat model. This clearly must be the case. The results, however, do not say whether the two hierarchies are substantially different. Moreover, are the results that they produce substantially different. The answer to these two questions is “yes.” We first address the issue of tree similarity. We consider all pairs of languages which are at distance 0 in the areal tree (i.e., have the same parent). We then look at the mean tree-distance between those languages in the phylogenetic tree. We do this for all distances in the areal tree (because of its construction, there are only three: 0, 2 and 4). The mean distances in the phylogenetic tree corresponding to these three distances in the areal tree are: 2.9, 3.5 and 4.0, respectively. This means that languages that are “nearby” in the areal tree are quite often very far apart in the phylogenetic tree. To answer the issue of whether the results obtained by the two trees are similar, we employ Kendall’s τ statistic. Given two ordered lists, the τ statistic computes how correlated they are. τ is always between 0 and 1, with 1 indicating identical ordering and 0 indicated completely reversed ordering. The results are as follows. Comparing FLAT to LINGHIER yield τ = 0.4144, a very low correlation. Between FLAT and DISTHIER, τ = 0.5213, also very low. These two are as expected. Finally, between LINGHIER and DISTHIER, we obtain τ = 0.5369, a very low correlation, considering that both perform well predictively. 6.3 Qualitative Analysis For the purpose of a qualitative analysis, we reproduce the top 30 implications discovered by the LINGHIER model in Table 2 (see the final page).3 Each implication is numbered, then the actual implication is presented. For instance, #7 says that any language that has adjectives preceding their governing nouns also has numerals preceding their nouns. We additionally provide an “analysis” of many of these discovered implications. Many of them (eg., #7) are well known in the typological literature. These are simply numbered according to well-known references. For instance our #7 is implication #18 from Greenberg, reproduced by Song (2001). Those that reference Hawkins (eg., #11) are based on implications described by Hawkins (1983); those that reference Lehmann are references to the principles decided by Lehmann (1981) in Ch 4 & 8. 
Some of the implications our model discovers are obtained by composition of well-known implications. For instance, our #3 (namely, OV ⊃GenitiveNoun) can be obtained by combining Greenberg #4 (OV ⊃Postpositions) and Greenberg #2a (Postpositions ⊃Genitive-Noun). It is quite encouraging that 14 of our top 21 discovered implications are well-known in the literature (and this, not even considering the tautalogically true implications)! This strongly suggests that our model is doing something reasonable and that there is true structure in the data. In addition to many of the known implications found by our model, there are many that are “unknown.” Space precludes attempting explanations of them all, so we focus on a few. Some are easy. Consider #8 (Strongly suffixing ⊃Tense-aspect suffixes): this is quite plausible—if you have a lan3In truth, our model discovers several tautalogical implications that we have removed by hand before presentation. These are examples like “SVO ⊃VO” or “No unusual consonants ⊃ no glottalized consonants.” It is, of course, good that our model discovers these, since they are obviously true. However, to save space, we have withheld them from presentation here. The 30th implication presented here is actually the 83rd in our full list. 70 guage that tends to have suffixes, it will probably have suffixes for tense/aspect. Similarly, #10 states that languages with verb morphology for questions lack question particles; again, this can be easily explained by an appeal to economy. Some of the discovered implications require a more involved explanation. One such example is #20: labial-velars implies no uvulars.4 It turns out that labial-velars are most common in Africa just north of the equator, which is also a place that has very few uvulars (there are a handful of other examples, mostly in Papua New Guinea). While this implication has not been previously investigated, it makes some sense: if a language has one form of rare consonant, it is unlikely to have another. As another example, consider #28: Obligatory suffix pronouns implies no possessive affixes. This means is that in languages (like English) for which pro-drop is impossible, possession is not marked morphologically on the head noun (like English, “book” appears the same regarless of if it is “his book” or “the book”). This also makes sense: if you cannot drop pronouns, then one usually will mark possession on the pronoun, not the head noun. Thus, you do not need marking on the head noun. Finally, consider #25: High and mid front vowels (i.e., / u/, etc.) implies large vowel inventory (≥7 vowels). This is supported by typological evidence that high and mid front vowels are the “last” vowels to be added to a language’s repertoire. Thus, in order to get them, you must also have many other types of vowels already, leading to a large vowel inventory. Not all examples admit a simple explanation and are worthy of further thought. Some of which (like the ones predicated on “SV”) may just be peculiarities of the annotation style: the subject verb order changes frequently between transitive and intransitive usages in many languages, and the annotation reflects just one. Some others are bizzarre: why not having fricatives should mean that you don’t have tones (#27) is not a priori clear. 6.4 Multi-conditional Implications Many implications in the literature have multiple implicants. For instance, much research has gone 4Labial-velars and uvulars are rare consonants (order 100 languages). 
Labial-velars are joined sounds like /kp/ and /gb/ (to English speakers, sounding like chicken noises); uvulars sounds are made in the back of the throat, like snoring. Implicants Implicand Postpositions ⊃Demonstrative-Noun Adjective-Noun Posessive prefixes ⊃Genitive-Noun Tense-aspect suffixes Case suffixes ⊃Genitive-Noun Plural suffix Adjective-Noun ⊃OV Genitive-Noun High cons/vowel ratio ⊃No tones No front-rounded vowels Negative affix ⊃OV Genitive-Noun No front-rounded vowels ⊃Large vowel quality inventory Labial velars Subordinating suffix ⊃Postpositions Tense-aspect suffixes No case affixes ⊃Initial subordinator word Prepositions Strongly suffixing ⊃Genitive-Noun Plural suffix Table 3: Top implications discovered by the LINGHIER multi-conditional model. into looking at which implications hold, considering only “VO” languages, or considering only languages with prepositions. It is straightforward to modify our model so that it searches over triples of features, conditioning on two and predicting the third. Space precludes an in-depth discussion of these results, but we present the top examples in Table 3 (after removing the tautalogically true examples, which are more numerous in this case, as well as examples that are directly obtainable from Table 2). It is encouraging that in the top 1000 multi-conditional implications found, the most frequently used were “OV” (176 times) “Postpositions” (157 times) and “AdjectiveNoun” (89 times). This result agrees with intuition. 7 Discussion We have presented a Bayesian model for discovering universal linguistic implications from a typological database. Our model is able to account for noise in a linguistically plausible manner. Our hierarchical models deal with the sampling issue in a unique way, by using prior knowledge about language families to “group” related languages. Quantitatively, the hierarchical information turns out to be quite useful, regardless of whether it is phylogenetically- or areallybased. Qualitatively, our model can recover many well-known implications as well as many more potential implications that can be the object of future linguistic study. We believe that our model is suf71 # Implicant ⊃Implicand Analysis 1 Postpositions ⊃Genitive-Noun Greenberg #2a 2 OV ⊃Postpositions Greenberg #4 3 OV ⊃Genitive-Noun Greenberg #4 + Greenberg #2a 4 Genitive-Noun ⊃Postpositions Greenberg #2a (converse) 5 Postpositions ⊃OV Greenberg #2b (converse) 6 SV ⊃Genitive-Noun ??? 7 Adjective-Noun ⊃Numeral-Noun Greenberg #18 8 Strongly suffixing ⊃Tense-aspect suffixes Clear explanation 9 VO ⊃Noun-Relative Clause Lehmann 10 Interrogative verb morph ⊃No question particle Appeal to economy 11 Numeral-Noun ⊃Demonstrative-Noun Hawkins XVI (for postpositional languages) 12 Prepositions ⊃VO Greenberg #3 (converse) 13 Adjective-Noun ⊃Demonstrative-Noun Greenberg #18 14 Noun-Adjective ⊃Postpositions Lehmann 15 SV ⊃Postpositions ??? 16 VO ⊃Prepositions Greenberg #3 17 Initial subordinator word ⊃Prepositions Operator-operand principle (Lehmann) 18 Strong prefixing ⊃Prepositions Greenberg #27b 19 Little affixation ⊃Noun-Adjective ??? 20 Labial-velars ⊃No uvular consonants See text 21 Negative word ⊃No pronominal possessive affixes See text 22 Strong prefixing ⊃VO Lehmann 23 Subordinating suffix ⊃Strongly suffixing ??? 24 Final subordinator word ⊃Postpositions Operator-operand principle (Lehmann) 25 High and mid front vowels ⊃Large vowel inventories See text 26 Plural prefix ⊃Noun-Genitive ??? 27 No fricatives ⊃No tones ??? 
28 Obligatory subject pronouns ⊃No pronominal possessive affixes See text 29 Demonstrative-Noun ⊃Tense-aspect suffixes Operator-operand principle (Lehmann) 30 Prepositions ⊃Noun-Relative clause Lehmann, Hawkins Table 2: Top 30 implications discovered by the LINGHIER model. ficiently general that it could be applied to many different typological databases — we attempted not to “overfit” it to WALS. Our hope is that the automatic discovery of such implications not only aid typologically-inclined linguists, but also other groups. For instance, well-attested universal implications have the potential to reduce the amount of data field linguists need to collect. They have also been used computationally to aid in the learning of unsupervised part of speech taggers (Schone and Jurafsky, 2001). Many extensions are possible to this model; for instance attempting to uncover typologically hierarchies and other higher-order structures. We have made the full output of all models available at http://hal3.name/WALS. Acknowledgments. We are grateful to Yee Whye Teh, Eric Xing and three anonymous reviewers for their feedback on this work. References Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. 2003. An introduction to MCMC for machine learning. Machine Learning (ML), 50:5–43. William Croft. 2003. Typology and Univerals. Cambridge University Press. Isidore Dyen, Joseph Kurskal, and Paul Black. 1992. An Indoeuropean classification: A lexicostatistical experiment. Transactions of the American Philosophical Society, 82(5). American Philosophical Society. Joseph Greenberg, editor. 1963. Universals of Languages. MIT Press. Martin Haspelmath, Matthew Dryer, David Gil, and Bernard Comrie, editors. 2005. The World Atlas of Language Structures. Oxford University Press. John A. Hawkins. 1983. Word Order Universals: Quantitative analyses of linguistic structure. Academic Press. Winfred Lehmann, editor. 1981. Syntactic Typology, volume xiv. University of Texas Press. April McMahon and Robert McMahon. 2005. Language Classification by Numbers. Oxford University Press. Frederick J. Newmeyer. 2005. Possible and Probable Languages: A Generative Perspective on Linguistic Typology. Oxford University Press. Patrick Schone and Dan Jurafsky. 2001 Language Independent Induction of Part of Speech Class Labels Using only Language Universals. Machine Learning: Beyond Supervision. Jae Jung Song. 2001. Linguistic Typology: Morphology and Syntax. Longman Linguistics Library. 72
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712–719, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Ordering Phrases with Function Words Hendra Setiawan and Min-Yen Kan School of Computing National University of Singapore Singapore 117543 {hendrase,kanmy}@comp.nus.edu.sg Haizhou Li Institute for Infocomm Research 21 Heng Mui Keng Terrace Singapore 119613 [email protected] Abstract This paper presents a Function Word centered, Syntax-based (FWS) solution to address phrase ordering in the context of statistical machine translation (SMT). Motivated by the observation that function words often encode grammatical relationship among phrases within a sentence, we propose a probabilistic synchronous grammar to model the ordering of function words and their left and right arguments. We improve phrase ordering performance by lexicalizing the resulting rules in a small number of cases corresponding to function words. The experiments show that the FWS approach consistently outperforms the baseline system in ordering function words’ arguments and improving translation quality in both perfect and noisy word alignment scenarios. 1 Introduction The focus of this paper is on function words, a class of words with little intrinsic meaning but is vital in expressing grammatical relationships among words within a sentence. Such encoded grammatical information, often implicit, makes function words pivotal in modeling structural divergences, as projecting them in different languages often result in longrange structural changes to the realized sentences. Just as a foreign language learner often makes mistakes in using function words, we observe that current machine translation (MT) systems often perform poorly in ordering function words’ arguments; lexically correct translations often end up reordered incorrectly. Thus, we are interested in modeling the structural divergence encoded by such function words. A key finding of our work is that modeling the ordering of the dependent arguments of function words results in better translation quality. Most current systems use statistical knowledge obtained from corpora in favor of rich natural language knowledge. Instead of using syntactic knowledge to determine function words, we approximate this by equating the most frequent words as function words. By explicitly modeling phrase ordering around these frequent words, we aim to capture the most important and prevalent ordering productions. 2 Related Work A good translation should be both faithful with adequate lexical choice to the source language and fluent in its word ordering to the target language. In pursuit of better translation, phrase-based models (Och and Ney, 2004) have significantly improved the quality over classical word-based models (Brown et al., 1993). These multiword phrasal units contribute to fluency by inherently capturing intra-phrase reordering. However, despite this progress, interphrase reordering (especially long distance ones) still poses a great challenge to statistical machine translation (SMT). The basic phrase reordering model is a simple unlexicalized, context-insensitive distortion penalty model (Koehn et al., 2003). This model assumes little or no structural divergence between language pairs, preferring the original, translated order by penalizing reordering. 
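To make the baseline concrete, the distance-based distortion penalty just mentioned simply charges a cost that grows with the jump between consecutively translated source phrases. The sketch below is a generic paraphrase of that idea (our own illustration; the spans, the decay factor alpha and the exact cost form are placeholders rather than any particular system's implementation).

def distortion_cost(phrase_spans, alpha=0.6):
    """Distance-based distortion penalty for source spans translated in
    target order.

    phrase_spans: list of (start, end) source word positions, in the order
    the phrases are translated. alpha < 1 penalizes each skipped position."""
    cost = 1.0
    prev_end = 0
    for start, end in phrase_spans:
        jump = abs(start - prev_end)      # 0 when translation is monotone
        cost *= alpha ** jump
        prev_end = end + 1
    return cost

# Monotone order is preferred over a reordering that jumps back and forth.
print(distortion_cost([(0, 1), (2, 3), (4, 5)]))   # 1.0
print(distortion_cost([(4, 5), (0, 1), (2, 3)]))   # heavily penalized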
This simple model works well when properly coupled with a well-trained language 712 model, but is otherwise impoverished without any lexical evidence to characterize the reordering. To address this, lexicalized context-sensitive models incorporate contextual evidence. The local prediction model (Tillmann and Zhang, 2005) models structural divergence as the relative position between the translation of two neighboring phrases. Other further generalizations of orientation include the global prediction model (Nagata et al., 2006) and distortion model (Al-Onaizan and Papineni, 2006). However, these models are often fully lexicalized and sensitive to individual phrases. As a result, they are not robust to unseen phrases. A careful approximation is vital to avoid data sparseness. Proposals to alleviate this problem include utilizing bilingual phrase cluster or words at the phrase boundary (Nagata et al., 2006) as the phrase identity. The benefit of introducing lexical evidence without being fully lexicalized has been demonstrated by a recent state-of-the-art formally syntax-based model1, Hiero (Chiang, 2005). Hiero performs phrase ordering by using linked non-terminal symbols in its synchronous CFG production rules coupled with lexical evidence. However, since it is difficult to specify a well-defined rule, Hiero has to rely on weak heuristics (i.e., length-based thresholds) to extract rules. As a result, Hiero produces grammars of enormous size. Watanabe et al. (2006) further reduces the grammar’s size by enforcing all rules to comply with Greibach Normal Form. Taking the lexicalization an intuitive a step forward, we propose a novel, finer-grained solution which models the content and context information encoded by function words - approximated by high frequency words. Inspired by the success of syntaxbased approaches, we propose a synchronous grammar that accommodates gapping production rules, while focusing on the statistical modeling in relation to function words. We refer to our approach as the Function Word-centered Syntax-based approach (FWS). Our FWS approach is different from Hiero in two key aspects. First, we use only a small set of high frequency lexical items to lexicalize non-terminals in the grammar. This results in a much smaller set of rules compared to Hiero, 1Chiang (2005) used the term “formal” to indicate the use of synchronous grammar but without linguistic commitment ,\ 4  Þ { jâ Qœ ­ { ø\ a form is a coll. of data entry fields on a page ((((((((((((((((        P P P P P P P P P ` ` ` ` ` ` ` ` ` ` ` ` ` ` Figure 1: A Chinese-English sentence pair. greatly reducing the computational overhead that arises when moving from phrase-based to syntaxbased approach. Furthermore, by modeling only high frequency words, we are able to obtain reliable statistics even in small datasets. Second, as opposed to Hiero, where phrase ordering is done implicitly alongside phrase translation and lexical weighting, we directly model the reordering process using orientation statistics. The FWS approach is also akin to (Xiong et al., 2006) in using a synchronous grammar as a reordering constraint. Instead of using Inversion Transduction Grammar (ITG) (Wu, 1997) directly, we will discuss an ITG extension to accommodate gapping. 3 Phrase Ordering around Function Words We use the following Chinese (c) to English (e) translation in Fig.1 as an illustration to conduct an inquiry to the problem. 
Note that the sentence translation requires some translations of English words to be ordered far from their original position in Chinese. Recovering the correct English ordering requires the inversion of the Chinese postpositional phrase, followed by the inversion of the first smaller noun phrase, and finally the inversion of the second larger noun phrase. Nevertheless, the correct ordering can be recovered if the position and the semantic roles of the arguments of the boxed function words were known. Such a function word centered approach also hinges on knowing the correct phrase boundaries for the function words’ arguments and which reorderings are given precedence, in case of conflicts. We propose modeling these sources of knowledge using a statistical formalism. It includes 1) a model to capture bilingual orientations of the left and right arguments of these function words; 2) a model to approximate correct reordering sequence; and 3) a model for finding constituent boundaries of 713 the left and right arguments. Assuming that the most frequent words in a language are function words, we can apply orientation statistics associated with these words to reorder their adjacent left and right neighbors. We follow the notation in (Nagata et al., 2006) and define the following bilingual orientation values given two neighboring source (Chinese) phrases: Monotone-Adjacent (MA); ReverseAdjacent (RA); Monotone-Gap (MG); and ReverseGap (RG). The first clause (monotone, reverse) indicates whether the target language translation order follows the source order; the second (adjacent, gap) indicates whether the source phrases are adjacent or separated by an intervening phrase on the target side. Table 1 shows the orientation statistics for several function words. Note that we separate the statistics for left and right arguments to account for differences in argument structures: some function words take a single argument (e.g., prepositions), while others take two or more (e.g., copulas). To handle other reordering decisions not explicitly encoded (i.e., lexicalized) in our FWS model, we introduce a universal token U, to be used as a backoff statistic when function words are absent. For example, orientation statistics for 4 (to be) overwhelmingly suggests that the English translation of its surrounding phrases is identical to its Chinese ordering. This reflects the fact that the arguments of copulas in both languages are realized in the same order. The orientation statistics for postposition Þ (on) suggests inversion which captures the divergence between Chinese postposition to the English preposition. Similarly, the dominant orientation for particle { (of) suggests the noun-phrase shift from modified-modifier to modifier-modified, which is common when translating Chinese noun phrases to English. Taking all parts of the model, which we detail later, together with the knowledge in Table 1, we demonstrate the steps taken to translate the example in Fig. 2. We highlight the function words with boxed characters and encapsulate content words as indexed symbols. As shown, orientation statistics from function words alone are adequate to recover the English ordering - in practice, content words also influence the reordering through a language model. One can think of the FWS approach as a foreign language learner with limited knowledge about Chinese grammar but fairly knowledgable about the role of Chinese function words. ,\ 4 Þ { jâ Qœ ­ { ø\ X1 4 X2 Þ { X3 { X4 HH j   Þ X2 ?       9 XXXXX z X3 { X5 ?      
) XXXXXXX z X4 { X6 ? ? ? X1 4 X7 X1 4 X4 { X3 { Þ X2 ,\ 4 ø\ { jâQœ­ { Þ  a form is a coll. of data entry fields on a page #1 #2 #3 ? ? ? ? ? ? ? ? ? Figure 2: In Step 1, function words (boxed characters) and content words (indexed symbols) are identified. Step 2 reorders phrases according to knowledge embedded in function words. A new indexed symbol is introduced to indicate previously reordered phrases for conciseness. Step 3 finally maps Chinese phrases to their English translation. 4 The FWS Model We first discuss the extension of standard ITG to accommodate gapping and then detail the statistical components of the model later. 4.1 Single Gap ITG (SG-ITG) The FWS model employs a synchronous grammar to describe the admissible orderings. The utility of ITG as a reordering constraint for most language pairs, is well-known both empirically (Zens and Ney, 2003) and analytically (Wu, 1997), however ITG’s straight (monotone) and inverted (reverse) rules exhibit strong cohesiveness, which is inadequate to express orientations that require gaps. We propose SG-ITG that follows Wellington et al. (2006)’s suggestion to model at most one gap. We show the rules for SG-ITG below. Rules 13 are identical to those defined in standard ITG, in which monotone and reverse orderings are represented by square and angle brackets, respectively. 714 Rank Word unigram MAL RAL MGL RGL MAR RAR MGR RGR 1 { 0.0580 0.45 0.52 0.01 0.02 0.44 0.52 0.01 0.03 2 Ç 0.0507 0.85 0.12 0.02 0.01 0.84 0.12 0.02 0.02 3  0.0550 0.99 0.01 0.00 0.00 0.92 0.08 0.00 0.00 4  0.0155 0.87 0.10 0.02 0.00 0.82 0.12 0.05 0.02 5  0.0153 0.84 0.11 0.01 0.04 0.88 0.11 0.01 0.01 6 Z 0.0138 0.95 0.02 0.01 0.01 0.97 0.02 0.01 0.00 7 Ö 0.0123 0.73 0.12 0.10 0.04 0.51 0.14 0.14 0.20 8 ,1 0.0114 0.78 0.12 0.03 0.07 0.86 0.05 0.08 0.01 9 Ý 0.0099 0.95 0.02 0.02 0.01 0.96 0.01 0.02 0.01 10 R 0.0091 0.87 0.10 0.01 0.02 0.88 0.10 0.01 0.00 21 4 0.0056 0.85 0.11 0.02 0.02 0.85 0.04 0.09 0.02 37 Þ 0.0035 0.33 0.65 0.02 0.01 0.31 0.63 0.03 0.03 U 0.0002 0.76 0.14 0.06 0.05 0.74 0.13 0.07 0.06 Table 1: Orientation statistics and unigram probability of selected frequent Chinese words in the HIT corpus. Subscripts L/R refers to lexical unit’s orientation with respect to its left/right neighbor. U is the universal token used in back-off for N = 128. Dominant orientations of each word are in bold. (1) X →c/e (2) X →[XX] (3) X →⟨XX⟩ (4) X⋄→[X ⋄X] (5) X⋄→⟨X ⋄X⟩ (6) X →[X ∗X] (7) X →⟨X ∗X⟩ SG-ITG introduces two new sets of rules: gapping (Rules 4-5) and dovetailing (Rules 6-7) that deal specifically with gaps. On the RHS of the gapping rules, a diamond symbol (⋄) indicates a gap, while on the LHS, it emits a superscripted symbol X⋄to indicate a gapped phrase (plain Xs without superscripts are thus contiguous phrases). Gaps in X⋄are eventually filled by actual phrases via dovetailing (marked with an ∗on the RHS). Fig.3 illustrates gapping and dovetailing rules using an example where two Chinese adjectival phrases are translated into a single English subordinate clause. 
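One way to read Rules 4-7 concretely is to store a gapped hypothesis as the pair of target-side fragments around its gap: gapping (Rules 4-5) creates such a pair from two sub-translations, and dovetailing (Rules 6-7) interleaves two pairs back into a contiguous phrase. The sketch below is only our own reading of this bookkeeping, with invented phrase content; the interleaving order in the inverted variant is an assumption by analogy with Rules 2-3, not something the rule listing itself states.

# A contiguous hypothesis is a list of target words; a gapped one is a pair
# (left, right) of word lists with an unfilled gap between them.

def gap_monotone(x1, x2):
    """Rule 4, X_gap -> [X <> X]: keep the two translations in order,
    leaving a gap between them on the target side."""
    return (list(x1), list(x2))

def gap_inverted(x1, x2):
    """Rule 5, X_gap -> <X <> X>: as above, but with the target order swapped."""
    return (list(x2), list(x1))

def dovetail_monotone(g1, g2):
    """Rule 6, X -> [X * X]: interleave two gapped hypotheses into one
    contiguous phrase (our reading: left parts first, then right parts)."""
    return g1[0] + g2[0] + g1[1] + g2[1]

def dovetail_inverted(g1, g2):
    """Rule 7, X -> <X * X>: the same interleaving with the two
    hypotheses swapped (assumed by analogy)."""
    return dovetail_monotone(g2, g1)

# Toy run: two gapped noun-phrase translations dovetailed into one clause.
g1 = gap_inverted(["in", "1997"], ["V1"])   # -> (V1, in 1997)
g2 = gap_inverted(["in", "1998"], ["V2"])   # -> (V2, in 1998)
print(" ".join(dovetail_monotone(g1, g2)))  # V1 V2 in 1997 in 1998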
SG-ITG can generate the correct ordering by employing gapping followed by dovetailing, as shown in the following simplified trace: X⋄ 1 →⟨1997 { ‘Ç, V.1 ⋄1997 ⟩ X⋄ 2 →⟨1998 { ‘Ç, V.2 ⋄1998 ⟩ X3 →[X1 ∗X2] →[ 1997 { ‘Ç Z 1998 { ‘Ç, V.1 ⋄1997 ∗V.2 ⋄1998 ] →1997 {‘ÇZ1998 {‘Ç, V.1 and V.2 that were released in 1997 and 1998 where X⋄ 1 and X⋄ 2 each generate the translation of their respective Chinese noun phrase using gapping and X3 generates the English subclause by dovetailing the two gapped phrases together. Thus far, the grammar is unlexicalized, and does 1997#q { ‘Ç Z 1998#q { ‘Ç V.1 and V.2 that were released in 1997 and 1998. !!!!!! ((((((((((((( h h h h h h h h h h h h h P P P P P P P Figure 3: An example of an alignment that can be generated only by allowing gaps. not incorporate any lexical evidence. Now we modify the grammar to introduce lexicalized function words to SG-ITG. In practice, we introduce a new set of lexicalized non-terminal symbols Yi, i ∈ {1...N}, to represent the top N most-frequent words in the vocabulary; the existing unlexicalized X is now reserved for content words. This difference does not inherently affect the structure of the grammar, but rather lexicalizes the statistical model. In this way, although different Yis follow the same production rules, they are associated with different statistics. This is reflected in Rules 8-9. Rule 8 emits the function word; Rule 9 reorders the arguments around the function word, resembling our orientation model (see Section 4.2) where a function word influences the orientation of its left and right arguments. For clarity, we omit notation that denotes which rules have been applied (monotone, reverse; gapping, dovetailing). (8) Yi→c/e (9) X→XYiX In practice, we replace Rule 9 with its equivalent 2-normal form set of rules (Rules 10-13). Finally, we introduce rules to handle back-off (Rules 14-16) and upgrade (Rule 17). These allow SG-ITG to re715 vert function words to normal words and vice versa. (10) R→YiX (11) L →XYi (12) X→LX (13) X→XR (14) Yi→X (15) R→X (16) L →X (17) X→YU Back-off rules are needed when the grammar has to reorder two adjacent function words, where one set of orientation statistics must take precedence over the other. The example in Fig.1 illustrates such a case where the orientation of Þ (on) and { (of) compete for influence. In this case, the grammar chooses to use { (of) and reverts the function word Þ (on) to the unlexicalized form. The upgrade rule is used for cases where there are two adjacent phrases, both of which are not function words. Upgrading allows either phrase to act as a function word, making use of the universal word’s orientation statistics to reorder its neighbor. 4.2 Statistical model We now formulate the FWS model as a statistical framework. We replace the deterministic rules in our SG-ITG grammar with probabilistic ones, elevating it to a stochastic grammar. In particular, we develop the three sub models (see Section 3) which influence the choice of production rules for ordering decision. These models operate on the 2-norm rules, where the RHS contains one function word and its argument (except in the case of the phrase boundary model). We provide the intuition for these models next, but their actual form will be discussed in the next section on training. 1) Orientation Model ori(o|H,Yi): This model captures the preference of a function word Yi to a particular orientation o ∈{MA, RA, MG, RG} in reordering its H ∈{left, right} argument X. 
The parameter H determines which set of Yi’s statistics to use (left or right); the model consults Yi’s left orientation statistic for Rules 11 and 13 where X precedes Yi, otherwise Yi’s right orientation statistic is used for Rules 10 and 12. 2) Preference Model pref(Yi): This model arbitrates reordering in the cases where two function words are adjacent and the backoff rules have to decide which function word takes precedence, reverting the other to the unlexicalized X form. This model prefers the function word with higher unigram probability to take the precedence. 3) Phrase Boundary Model pb(X): This model is a penalty-based model, favoring the resulting alignment that conforms to the source constituent boundary. It penalizes Rule 1 if the terminal rule X emits a Chinese phrase that violates the boundary (pb = e−1), otherwise it is inactive (pb = 1). These three sub models act as features alongside seven other standard SMT features in a log-linear model, resulting in the following set of features {f1, . . . , f10}: f1) orientation ori(o|H, Yi); f2) preference pref(Yi); f3) phrase boundary pb(X); f4) language model lm(e); f5 −f6) phrase translation score φ(e|c) and its inverse φ(c|e); f7 −f8) lexical weight lex(e|c) and its inverse lex(c|e); f9) word penalty wp; and f10) phrase penalty pp. The translation is then obtained from the most probable derivation of the stochastic SG-ITG. The formula for a single derivation is shown in Eq. (18), where X1, X2, ..., XL is a sequence of rules with w(Xl) being the weight of each particular rule Xl. w(Xl) is estimated through a log-linear model, as in Eq. (19), with all the abovementioned features where λj reflects the contribution of each feature fj. P(X1, ..., XL) = YL l=1w(Xl) (18) w(Xl) = Y10 j=1fj(Xl)λj (19) 5 Training We train the orientation and preference models from statistics of a training corpus. To this end, we first derive the event counts and then compute the relative frequency of each event. The remaining phrase boundary model can be modeled by the output of a standard text chunker, as in practice it is simply a constituent boundary detection mechanism together with a penalty scheme. The events of interest to the orientation model are (Yi, o) tuples, where o ∈{MA, RA, MG, RG} is an orientation value of a particular function word Yi. Note that these tuples are not directly observable from training data. Hence, we need an algorithm to derive (Yi, o) tuples from a parallel corpus. Since both left and right statistics share identical training steps, thus we omit references to them. The algorithm to derive (Yi, o) involves several steps. First, we estimate the bi-directional alignment 716 by running GIZA++ and applying the “grow-diagfinal” heuristic. Then, the algorithm enumerates all Yi and determines its orientation o with respect to its argument X to derive (Yi, o). To determine o, the algorithm inspects the monotonicity (monotone or reverse) and adjacency (adjacent or gap) between Yi’s and X’s alignments. Monotonicity can be determined by looking at the Yi’s alignment with respect to the most fine-grained level of X (i.e., word level alignment). However, such a heuristic may inaccurately suggest gap orientation. Figure 1 illustrates this problem when deriving the orientation for the second { (of). Looking only at the word alignment of its left argument ­ (fields) incorrectly suggests a gapped orientation, where the alignment of jâQœ (data entry) intervened. 
It is desirable to look at the alignment of jâQœ­ (data entry fields) at the phrase level, which suggests the correct adjacent orientation instead. To address this issue, the algorithm uses gapping conservatively by utilizing the consistency constraint (Och and Ney, 2004) to suggest phrase level alignment of X. The algorithm exhaustively grows consistent blocks containing the most fine-grained level of X not including Yi. Subsequently, it merges each hypothetical argument with the Yi’s alignment. The algorithm decides that Yi has a gapped orientation only if all merged blocks violate the consistency constraint, concluding an adjacent orientation otherwise. With the event counts C(Yi, o) of tuple (Yi, o), we estimate the orientation model for Yi and U using Eqs. (20) and (21). We also estimate the preference model with word unigram counts C(Yi) using Eqs. (22) and (23), where V indicates the vocabulary size. ori(o|Yi) = C(Yi, o)/C(Yi, ·), i ⩽N (20) ori(o|U) = X i>N C(Yi, o)/ X i>N C(Yi, ·) (21) pref(Yi) = C(Yi)/C(·), i ⩽N (22) pref(U) = 1/(V −N) X i>N C(Yi)/C(·) (23) Samples of these statistics are found in Table 1 and have been used in the running examples. For instance, the statistic ori(RAL|{) = 0.52, which is the dominant one, suggests that the grammar inversely order {(of)’s left argument; while in our illustration of backoff rules in Fig.1, the grammar chooses {(of) to take precedence since pref({) > pref(Þ). 6 Decoding We employ a bottom-up CKY parser with a beam to find the derivation of a Chinese sentence which maximizes Eq. (18). The English translation is then obtained by post-processing the best parse. We set the beam size to 30 in our experiment and further constrain reordering to occur within a window of 10 words. Our decoder also prunes entries that violate the following constraints: 1) each entry contains at most one gap; 2) any gapped entries must be dovetailed at the next level higher; 3) an entry spanning the whole sentence must not contain gaps. The score of each newly-created entry is derived from the scores of its parts accordingly. When scoring entries, we treat gapped entries as contiguous phrases by ignoring the gap symbol and rely on the orientation model to penalize such entries. This allows a fair score comparison between gapped and contiguous entries. 7 Experiments We would like to study how the FWS model affects 1) the ordering of phrases around function words; 2) the overall translation quality. We achieve this by evaluating the FWS model against a baseline system using two metrics, namely, orientation accuracy and BLEU respectively. We define the orientation accuracy of a (function) word as the accuracy of assigning correct orientation values to both its left and right arguments. We report the aggregate for the top 1024 most frequent words; these words cover 90% of the test set. We devise a series of experiments and run it in two scenarios - manual and automatic alignment - to assess the effects of using perfect or real-world input. We utilize the HIT bilingual computer manual corpus, which has been manually aligned, to perform Chinese-to-English translation (see Table 2). Manual alignment is essential as we need to measure orientation accuracy with respect to a gold standard. 717 Chinese English train words 145,731 135,032 (7K sentences) vocabulary 5,267 8,064 dev words 13,986 14,638 (1K sentences) untranslatable 486 (3.47%) test words 27,732 28,490 (2K sentences) untranslatable 935 (3.37%) Table 2: Statistics for the HIT corpus. 
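The relative-frequency estimates in Eqs. (20)-(23) can be computed directly from the event counts once the (Yi, o) tuples have been extracted as described above. The sketch below is our own illustration of those equations (the event list, unigram counts and top-N cutoff are inputs supplied by the caller, not the HIT statistics themselves).

from collections import Counter, defaultdict

ORIENTATIONS = ("MA", "RA", "MG", "RG")

def estimate_models(orientation_events, unigram_counts, n=128):
    """Relative-frequency estimation of Eqs. (20)-(23) for one side (left or right).

    orientation_events: iterable of (word, orientation) tuples extracted from
    the word-aligned corpus; unigram_counts: Counter of word frequencies.
    Words outside the top-n list are pooled into the universal token 'U'."""
    top_n = {w for w, _ in unigram_counts.most_common(n)}
    counts = defaultdict(Counter)
    for word, o in orientation_events:
        counts[word if word in top_n else "U"][o] += 1

    # Eqs. (20)-(21): ori(o|Y_i) and ori(o|U) as relative frequencies.
    ori = {w: {o: c[o] / sum(c.values()) for o in ORIENTATIONS}
           for w, c in counts.items()}

    # Eqs. (22)-(23): pref(Y_i) is the unigram probability; pref(U) is the
    # average unigram probability of the words outside the top-n list.
    total = sum(unigram_counts.values())
    pref = {w: unigram_counts[w] / total for w in top_n}
    rest = [w for w in unigram_counts if w not in top_n]
    if rest:
        pref["U"] = sum(unigram_counts[w] for w in rest) / (total * len(rest))
    return ori, pref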
A language model is trained using the SRILMToolkit, and a text chunker (Chen et al., 2006) is applied to the Chinese sentences in the test and dev sets to extract the constituent boundaries necessary for the phrase boundary model. We run minimum error rate training on dev set using Chiang’s toolkit to find a set of parameters that optimizes BLEU score. 7.1 Perfect Lexical Choice Here, the task is simplified to recovering the correct order of the English sentence from the scrambled Chinese order. We trained the orientation model using manual alignment as input. The aforementioned decoder is used with phrase translation, lexical mapping and penalty features turned off. Table 4 compares orientation accuracy and BLEU between our FWS model and the baseline. The baseline (lm+d) employs a language model and distortion penalty features, emulating the standard Pharaoh model. We study the behavior of the FWS model with different numbers of lexicalized items N. We start with the language model alone (N=0) and incrementally add the orientation (+ori), preference (+ori+pref) and phrase boundary models (+ori+pref+pb). As shown, the language model alone is relatively weak, assigning the correct orientation in only 62.28% of the cases. A closer inspection reveals that the lm component aggressively promotes reverse reorderings. Including a distortion penalty model (the baseline) improves the accuracy to 72.55%. This trend is also apparent for the BLEU score. When we incorporate the FSW model, including just the most frequent word (Y1={), we see improvement. This model promotes non-monotone reordering conservatively around Y1 (where the dominant statistic suggests reverse ordering). Increasing the value of N leads to greater improvement. The most effective improvement is obtained by increaspharaoh (dl=5) 22.44 ± 0.94 +ori 23.80 ± 0.98 +ori+pref 23.85 ± 1.00 +ori+pref+pb 23.86 ± 1.08 Table 3: BLEU score with the 95% confidence intervals based on (Zhang and Vogel, 2004). All improvement over the baseline (row 1) are statistically significant under paired bootstrap resampling. ing N to 128. Additional (marginal) improvement is obtained at the expense of modeling an additional 900+ lexical items. We see these results as validating our claim that modeling the top few most frequent words captures most important and prevalent ordering productions. Lastly, we study the effect of the pref and pb features. The inclusion of both sub models has little affect on orientation accuracy, but it improves BLEU consistently (although not significantly). This suggests that both models correct the mistakes made by the ori model while preserving the gain. They are not as effective as the addition of the basic orientation model as they only play a role when two lexicalized entries are adjacent. 7.2 Full SMT experiments Here, all knowledge is automatically trained on the train set, and as a result, the input word alignment is noisy. As a baseline, we use the state-of-the-art phrase-based Pharaoh decoder. For a fair comparison, we run minimum error rate training for different distortion limits from 0 to 10 and report the best parameter (dl=5) as the baseline. We use the phrase translation table from the baseline and perform an identical set of experiments as the perfect lexical choice scenario, except that we only report the result for N=128, due to space constraint. Table 3 reports the resulting BLEU scores. As shown, the FWS model improves BLEU score significantly over the baseline. 
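The significance statement attached to Table 3 rests on paired bootstrap resampling. The following is a generic sketch of that test (our own illustration, not the script used for these experiments; corpus_bleu stands in for any corpus-level BLEU scorer supplied by the caller).

import random

def paired_bootstrap(sys_a, sys_b, refs, corpus_bleu, samples=1000, seed=0):
    """Paired bootstrap resampling over test sentences.

    sys_a, sys_b: two systems' translations (lists of sentences).
    refs: matching reference translations.
    corpus_bleu(hyps, refs): any corpus-level BLEU scorer (assumed external).
    Returns the fraction of resamples on which system A outscores system B;
    a value above 0.95 is read as significance at the 95% level."""
    rng = random.Random(seed)
    n = len(refs)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        hyp_a = [sys_a[i] for i in idx]
        hyp_b = [sys_b[i] for i in idx]
        sample_refs = [refs[i] for i in idx]
        if corpus_bleu(hyp_a, sample_refs) > corpus_bleu(hyp_b, sample_refs):
            wins += 1
    return wins / samples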
We observe the same trend as the one in perfect lexical choice scenario where top 128 most frequent words provides the majority of improvement. However, the pb features yields no noticeable improvement unlike in prefect lexical choice scenario; this is similar to the findings in (Koehn et al., 2003). 718 N=0 N=1 N=4 N=16 N=64 N=128 N=256 N=1024 Orientation Acc. (%) lm+d 72.55 +ori 62.28 76.52 76.58 77.38 77.54 78.17 77.76 78.38 +ori+pref 76.66 76.82 77.57 77.74 78.13 77.94 78.54 +ori+pref+pb 76.70 76.85 77.58 77.70 78.20 77.94 78.56 BLEU lm+d 75.13 +ori 66.54 77.54 77.57 78.22 78.48 78.76 78.58 79.20 +ori+pref 77.60 77.70 78.29 78.65 78.77 78.70 79.30 +ori+pref+pb 77.69 77.80 78.34 78.65 78.93 78.79 79.30 Table 4: Results using perfect aligned input. Here, (lm+d) is the baseline; (+ori), (+ori+pref) and (+ori+pref+pb) are different FWS configurations. The results of the model (where N is varied) that features the largest gain are bold, whereas the highest score is italicized. 8 Conclusion In this paper, we present a statistical model to capture the grammatical information encoded in function words. Formally, we develop the Function Word Syntax-based (FWS) model, a probabilistic synchronous grammar, to encode the orientation statistics of arguments to function words. Our experimental results shows that the FWS model significantly improves the state-of-the-art phrase-based model. We have touched only the surface benefits of modeling function words. In particular, our proposal is limited to modeling function words in the source language. We believe that conditioning on both source and target pair would result in more finegrained, accurate orientation statistics. From our error analysis, we observe that 1) reordering may span several levels and the preference model does not handle this phenomena well; 2) correctly reordered phrases with incorrect boundaries severely affects BLEU score and the phrase boundary model is inadequate to correct the boundaries especially for cases of long phrase. In future, we hope to address these issues while maintaining the benefits offered by modeling function words. References Benjamin Wellington, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical Lower Bounds on the Complexity of Translational Equivalence. In ACL/COLING 2006, pp. 977–984. Christoph Tillman and Tong Zhang. 2005. A Localized Prediction Model for Statistical Machine Translation. In ACL 2005, pp. 557–564. David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In ACL 2005, pp. 263–270. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377–403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In ACL/COLING 2006, pp. 521–528. Franz J. Och and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417–449. Masaaki Nagata, Kuniko Saito, Kazuhide Yamamoto, and Kazuteru Ohashi. 2006. A Clustered Global Phrase Reordering Model for Statistical Machine Translation. In ACL/COLING 2006, pp. 713–720. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Robert L. Mercer 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In HLT-NAACL 2003, pp. 127–133. 
Richard Zens and Hermann Ney. 2003. A Comparative Study on Reordering Constraints in Statistical Machine Translation. In ACL 2003. Taro Watanabe, Hajime Tsukada, and Hideki Isozaki. 2006. Left-to-Right Target Generation for Hierarchical Phrase-Based Translation. In ACL/COLING 2006, pp. 777–784. Wenliang Chen, Yujie Zhang and Hitoshi Isahara 2006. An Empirical Study of Chinese Chunking In ACL 2006 Poster Sessions, pp. 97–104. Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion Models for Statistical Machine Translation. In ACL/COLING 2006, pp. 529–536. Ying Zhang and Stephan Vogel. 2004. Measuring Confidence Intervals for the Machine Translation Evaluation Metrics. In TMI 2004. 719
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 720–727, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Probabilistic Approach to Syntax-based Reordering for Statistical Machine Translation Chi-Ho Li, Dongdong Zhang, Mu Li, Ming Zhou Microsoft Research Asia Beijing, China chl, [email protected] muli, [email protected] Minghui Li, Yi Guan Harbin Institute of Technology Harbin, China [email protected] [email protected] Abstract Inspired by previous preprocessing approaches to SMT, this paper proposes a novel, probabilistic approach to reordering which combines the merits of syntax and phrase-based SMT. Given a source sentence and its parse tree, our method generates, by tree operations, an n-best list of reordered inputs, which are then fed to standard phrase-based decoder to produce the optimal translation. Experiments show that, for the NIST MT-05 task of Chinese-toEnglish translation, the proposal leads to BLEU improvement of 1.56%. 1 Introduction The phrase-based approach has been considered the default strategy to Statistical Machine Translation (SMT) in recent years. It is widely known that the phrase-based approach is powerful in local lexical choice and word reordering within short distance. However, long-distance reordering is problematic in phrase-based SMT. For example, the distancebased reordering model (Koehn et al., 2003) allows a decoder to translate in non-monotonous order, under the constraint that the distance between two phrases translated consecutively does not exceed a limit known as distortion limit. In theory the distortion limit can be assigned a very large value so that all possible reorderings are allowed, yet in practise it is observed that too high a distortion limit not only harms efficiency but also translation performance (Koehn et al., 2005). In our own experiment setting, the best distortion limit for ChineseEnglish translation is 4. However, some ideal translations exhibit reorderings longer than such distortion limit. Consider the sentence pair in NIST MT2005 test set shown in figure 1(a): after translating the word “V/mend”, the decoder should ‘jump’ across six words and translate the last phrase “ ø ð_/fissures in the relationship”. Therefore, while short-distance reordering is under the scope of the distance-based model, long-distance reordering is simply out of the question. A terminological remark: In the rest of the paper, we will use the terms global reordering and local reordering in place of long-distance reordering and short-distance reordering respectively. The distinction between long and short distance reordering is solely defined by distortion limit. Syntax1 is certainly a potential solution to global reordering. For example, for the last two Chinese phrases in figure 1(a), simply swapping the two children of the NP node will produce the correct word order on the English side. However, there are also reorderings which do not agree with syntactic analysis. Figure 1(b) shows how our phrase-based decoder2 obtains a good English translation by reordering two blocks. It should be noted that the second Chinese block “ˆe ” and its English counterpart “at the end of” are not constituents at all. In this paper, our interest is the value of syntax in reordering, and the major statement is that syntactic information is useful in handling global reordering 1Here by syntax it is meant linguistic syntax rather than formal syntax. 2The decoder is introduced in section 6. 
720 Figure 1: Examples on how syntax (a) helps and (b) harms reordering in Chinese-to-English translation The lines and nodes on the top half of the figures show the phrase structure of the Chinese sentences, while the links on the bottom half of the figures show the alignments between Chinese and English phrases. Square brackets indicate the boundaries of blocks found by our decoder. and it achieves better MT performance on the basis of the standard phrase-based model. To prove it, we developed a hybrid approach which preserves the strength of phrase-based SMT in local reordering as well as the strength of syntax in global reordering. Our method is inspired by previous preprocessing approaches like (Xia and McCord, 2004), (Collins et al., 2005), and (Costa-juss`a and Fonollosa, 2006), which split translation into two stages: S →S′ →T (1) where a sentence of the source language (SL), S, is first reordered with respect to the word order of the target language (TL), and then the reordered SL sentence S′ is translated as a TL sentence T by monotonous translation. Our first contribution is a new translation model as represented by formula 2: S →n × S′ →n × T →ˆT (2) where an n-best list of S′, instead of only one S′, is generated. The reason of such change will be given in section 2. Note also that the translation process S′ →T is not monotonous, since the distance-based model is needed for local reordering. Our second contribution is our definition of the best translation: arg max T exp(λrlogPr(S →S′)+ X i λiFi(S′ →T)) where Fi are the features in the standard phrasebased model and Pr(S →S′) is our new feature, viz. the probability of reordering S as S′. The details of this model are elaborated in sections 3 to 6. The settings and results of experiments on this new model are given in section 7. 2 Related Work There have been various attempts to syntaxbased SMT, such as (Yamada and Knight, 2001) and (Quirk et al., 2005). We do not adopt these models since a lot of subtle issues would then be introduced due to the complexity of syntax-based decoder, and the impact of syntax on reordering will be difficult to single out. There have been many reordering strategies under the phrase-based camp. A notable approach is lexicalized reordering (Koehn et al., 2005) and (Tillmann, 2004). It should be noted that this approach achieves the best result within certain distortion limit and is therefore not a good model for global reordering. There are a few attempts to the preprocessing approach to reordering. The most notable ones are (Xia and McCord, 2004) and (Collins et al., 2005), both of which make use of linguistic syntax in the preprocessing stage. (Collins et al., 2005) analyze German clause structure and propose six types 721 of rules for transforming German parse trees with respect to English word order. Instead of relying on manual rules, (Xia and McCord, 2004) propose a method in learning patterns of rewriting SL sentences. This method parses training data and uses some heuristics to align SL phrases with TL ones. From such alignment it can extract rewriting patterns, of which the units are words and POSs. The learned rewriting rules are then applied to rewrite SL sentences before monotonous translation. Despite the encouraging results reported in these papers, the two attempts share the same shortcoming that their reordering is deterministic. As pointed out in (Al-Onaizan and Papineni, 2006), these strategies make hard decisions in reordering which cannot be undone during decoding. 
That is, the choice of reordering is independent from other translation factors, and once a reordering mistake is made, it cannot be corrected by the subsequent decoding. To overcome this weakness, we suggest a method to ‘soften’ the hard decisions in preprocessing. The essence is that our preprocessing module generates n-best S′s rather than merely one S′. A variety of reordered SL sentences are fed to the decoder so that the decoder can consider, to certain extent, the interaction between reordering and other factors of translation. The entire process can be depicted by formula 2, recapitulated as follows: S →n × S′ →n × T →ˆT. Apart from their deterministic nature, the two previous preprocessing approaches have their own weaknesses. (Collins et al., 2005) count on manual rules and it is suspicious if reordering rules for other language pairs can be easily made. (Xia and McCord, 2004) propose a way to learn rewriting patterns, nevertheless the units of such patterns are words and their POSs. Although there is no limit to the length of rewriting patterns, due to data sparseness most patterns being applied would be short ones. Many instances of global reordering are therefore left unhandled. 3 The Acquisition of Reordering Knowledge To avoid this problem, we give up using rewriting patterns and design a form of reordering knowledge which can be directly applied to parse tree nodes. Given a node N on the parse tree of an SL sentence, the required reordering knowledge should enable the preprocessing module to determine how probable the children of N are reordered.3 For simplicity, let us first consider the case of binary nodes only. Let N1 and N2, which yield phrases p1 and p2 respectively, be the child nodes of N. We want to determine the order of p1 and p2 with respect to their TL counterparts, T(p1) and T(p2). The knowledge for making such a decision can be learned from a wordaligned parallel corpus. There are two questions involved in obtaining training instances: • How to define T(pi)? • How to define the order of T(pi)s? For the first question, we adopt a similar method as in (Fox, 2002): given an SL phrase ps = s1 . . . si . . . sn and a word alignment matrix A, we can enumerate the set of TL words {ti : tiϵA(si)}, and then arrange the words in the order as they appear in the TL sentence. Let first(t) be the first word in this sorted set and last(t) be the last word. T(ps) is defined as the phrase first(t) . . . last(t) in the TL sentence. Note that T(ps) may contain words not in the set {ti}. The question of the order of two TL phrases is not a trivial one. Since a word alignment matrix usually contains a lot of noises as well as one-to-many and many-to-many alignments, two TL phrases may overlap with each other. For the sake of the quality of reordering knowledge, if T(p1) and T(p2) overlap, then the node N with children N1 and N2 is not taken as a training instance. Obviously it will greatly reduce the amount of training input. To remedy data sparseness, less probable alignment points are removed so as to minimize overlapping phrases, since, after removing some alignment point, one of the TL phrases may become shorter and the two phrases may no longer overlap. The implementation is similar to the idea of lexical weight in (Koehn et al., 2003): all points in the alignment matrices of the entire training corpus are collected to calculate the probabilistic distribution, P(t|s), of some TL word 3Some readers may prefer the expression the subtree rooted at node N to node N. 
The latter term is used in this paper for simplicity. 722 t given some SL word s. Any pair of overlapping T(pi)s will be redefined by iteratively removing less probable word alignments until they no longer overlap. If they still overlap after all one/many-to-many alignments have been removed, then the refinement will stop and N, which covers pis, is no longer taken as a training instance. In sum, given a bilingual training corpus, a parser for the SL, and a word alignment tool, we can collect all binary parse tree nodes, each of which may be an instance of the required reordering knowledge. The next question is what kind of reordering knowledge can be formed out of these training instances. Two forms of reordering knowledge are investigated: 1. Reordering Rules, which have the form Z : X Y ⇒ ( X Y Pr(IN-ORDER) Y X Pr(INVERTED) where Z is the phrase label of a binary node and X and Y are the phrase labels of Z’s children, and Pr(INVERTED) and Pr(IN-ORDER) are the probability that X and Y are inverted on TL side and that not inverted, respectively. The probability figures are estimated by Maximum Likelihood Estimation. 2. Maximum Entropy (ME) Model, which does the binary classification whether a binary node’s children are inverted or not, based on a set of features over the SL phrases corresponding to the two children nodes. The features that we investigated include the leftmost, rightmost, head, and context words4, and their POSs, of the SL phrases, as well as the phrase labels of the SL phrases and their parent. 4 The Application of Reordering Knowledge After learning reordering knowledge, the preprocessing module can apply it to the parse tree, tS, of an SL sentence S and obtain the n-best list of S′. Since a ranking of S′ is needed, we need some way to score each S′. Here probability is used as the scoring metric. In this section it is explained 4The context words of the SL phrases are the word to the left of the left phrase and the word to the right of the right phrase. how the n-best reorderings of S and their associated scores/probabilites are computed. Let us first look into the scoring of a particular reordering. Let Pr(p→p′) be the probability of reordering a phrase p into p′. For a phrase q yielded by a non-binary node, there is only one ‘reordering’ of q, viz. q itself, thus Pr(q→q) = 1. For a phrase p yielded by a binary node N, whose left child N1 has reorderings pi 1 and right child N2 has the reorderings pj 2 (1 ≤i, j ≤n), p′ has the form pi 1pj 2 or pj 2pi 1. Therefore, Pr(p→p′) = ( Pr(IN-ORDER) × Pr(pi 1 →pi′ 1 ) × Pr(pj 2 →pj′ 2 ) Pr(INVERTED) × Pr(pj 2 →pj′ 2 ) × Pr(pi 1 →pi′ 1 ) The figures Pr(IN-ORDER) and Pr(INVERTED) are obtained from the learned reordering knowledge. If reordering knowledge is represented as rules, then the required probability is the probability associated with the rule that can apply to N. If reordering knowledge is represented as an ME model, then the required probability is: P(r|N) = exp(P i λifi(N, r)) P r′ exp(P i λifi(N, r′)) where rϵ{IN-ORDER, INVERTED}, and fi’s are features used in the ME model. Let us turn to the computation of the n-best reordering list. Let R(N) be the number of reorderings of the phrase yielded by N, then: R(N) = ( 2R(N1)R(N2) if N has children N1, N2 1 otherwise It is easily seen that the number of S′s increases exponentially. Fortunately, what we need is merely an n-best list rather than a full list of reorderings. 
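The recursion just described can be written down directly. The sketch below is our own illustration (the reorder_prob callback stands in for either the rule table or the ME model); it already keeps only the top n candidates per node, anticipating the bottom-up pruning described next.

import heapq

def nbest_reorderings(node, reorder_prob, n=10):
    """Return up to n (phrase, probability) reorderings for a parse-tree node.

    node: a leaf/non-binary yield given as a string, or a binary node as a
    tuple (label, left_child, right_child).
    reorder_prob(node, pattern): probability of 'IN-ORDER' or 'INVERTED' for
    this binary node -- a stand-in for the learned reordering knowledge."""
    if isinstance(node, str):               # non-binary case: only itself, Pr = 1
        return [(node, 1.0)]
    _, left, right = node
    cands = []
    for lp, lscore in nbest_reorderings(left, reorder_prob, n):
        for rp, rscore in nbest_reorderings(right, reorder_prob, n):
            cands.append((lp + " " + rp,
                          reorder_prob(node, "IN-ORDER") * lscore * rscore))
            cands.append((rp + " " + lp,
                          reorder_prob(node, "INVERTED") * lscore * rscore))
    # At most 2*n*n candidates per node; keep only the top-scored n.
    return heapq.nlargest(n, cands, key=lambda x: x[1])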
Starting from the leaves of tS, for each node N covering phrase p, we only keep track of the n p′s that have the highest reordering probability. Thus R(N) ≤n. There are at most 2n2 reorderings for any node and only the top-scored n reorderings are recorded. The n-best reorderings of S, i.e. the n-best reorderings of the yield of the root node of tS, can be obtained by this efficient bottom-up method. 5 The Generalization of Reordering Knowledge In the last two sections reordering knowledge is learned from and applied to binary parse tree nodes 723 only. It is not difficult to generalize the theory of reordering knowledge to nodes of other branching factors. The case of binary nodes is simple as there are only two possible reorderings. The case of 3-ary nodes is a bit more complicated as there are six.5 In general, an n-ary node has n! possible reorderings of its children. The maximum entropy model has the same form as in the binary case, except that there are more classes of reordering patterns as n increases. The form of reordering rules, and the calculation of reordering probability for a particular node, can also be generalized easily.6 The only problem for the generalized reordering knowledge is that, as there are more classes, data sparseness becomes more severe. 6 The Decoder The last three sections explain how the S →n×S′ part of formula 2 is done. The S′ →T part is simply done by our re-implementation of PHARAOH (Koehn, 2004). Note that nonmonotonous translation is used here since the distance-based model is needed for local reordering. For the n×T →ˆT part, the factors in consideration include the score of T returned by the decoder, and the reordering probability Pr(S →S′). In order to conform to the log-linear model used in the decoder, we integrate the two factors by defining the total score of T as formula 3: exp(λr logPr(S →S′) + X i λiFi(S′ →T)) (3) The first term corresponds to the contribution of syntax-based reordering, while the second term that of the features Fi used in the decoder. All the feature weights (λs) were trained using our implementation of Minimum Error Rate Training (Och, 2003). The final translation ˆT is the T with the highest total score. 5Namely, N1N2N3, N1N3N2, N2N1N3, N2N3N1, N3N1N2, and N3N2N1, if the child nodes in the original order are N1, N2, and N3. 6For example, the reordering probability of a phrase p = p1p2p3 generated by a 3-ary node N is Pr(r)×Pr(pi 1)×Pr(pj 2)×Pr(pk 3) where r is one of the six reordering patterns for 3-ary nodes. It is observed in pilot experiments that, for a lot of long sentences containing several clauses, only one of the clauses is reordered. That is, our greedy reordering algorithm (c.f. section 4) has a tendency to focus only on a particular clause of a long sentence. The problem was remedied by modifying our decoder such that it no longer translates a sentence at once; instead the new decoder does: 1. split an input sentence S into clauses {Ci}; 2. obtain the reorderings among {Ci}, {Sj}; 3. for each Sj, do (a) for each clause Ci in Sj, do i. reorder Ci into n-best C ′ is, ii. translate each C ′ i into T(C ′ i), iii. select ˆT(C ′ i); (b) concatenate { ˆT(C ′ i)} into Tj; 4. select ˆTj. Step 1 is done by checking the parse tree if there are any IP or CP nodes7 immediately under the root node. If yes, then all these IPs, CPs, and the remaining segments are treated as clauses. If no, then the entire input is treated as one single clause. Step 2 and step 3(a)(i) still follow the algorithm in section 4. 
Step 3(a)(ii) is trivial, but there is a subtle point about the calculation of language model score: the language model score of a translated clause is not independent from other clauses; it should take into account the last few words of the previous translated clause. The best translated clause ˆT(C ′ i) is selected in step 3(a)(iii) by equation 3. In step 4 the best translation ˆTj is arg max Tj exp(λrlogPr(S →Sj)+ X i score(T(C ′ i))). 7 Experiments 7.1 Corpora Our experiments are about Chinese-to-English translation. The NIST MT-2005 test data set is used for evaluation. (Case-sensitive) BLEU-4 (Papineni et al., 2002) is used as the evaluation metric. The 7IP stands for inflectional phrase and CP for complementizer phrase. These two types of phrases are clauses in terms of the Government and Binding Theory. 724 Branching Factor 2 3 >3 Count 12294 3173 1280 Percentage 73.41 18.95 7.64 Table 1: Distribution of Parse Tree Nodes with Different Branching Factors Note that nodes with only one child are excluded from the survey as reordering does not apply to such nodes. test set and development set of NIST MT-2002 are merged to form our development set. The training data for both reordering knowledge and translation table is the one for NIST MT-2005. The GIGAWORD corpus is used for training language model. The Chinese side of all corpora are segmented into words by our implementation of (Gao et al., 2003). 7.2 The Preprocessing Module As mentioned in section 3, the preprocessing module for reordering needs a parser of the SL, a word alignment tool, and a Maximum Entropy training tool. We use the Stanford parser (Klein and Manning, 2003) with its default Chinese grammar, the GIZA++ (Och and Ney, 2000) alignment package with its default settings, and the ME tool developed by (Zhang, 2004). Section 5 mentions that our reordering model can apply to nodes of any branching factor. It is interesting to know how many branching factors should be included. The distribution of parse tree nodes as shown in table 1 is based on the result of parsing the Chinese side of NIST MT-2002 test set by the Stanford parser. It is easily seen that the majority of parse tree nodes are binary ones. Nodes with more than 3 children seem to be negligible. The 3ary nodes occupy a certain proportion of the distribution, and their impact on translation performance will be shown in our experiments. 7.3 The decoder The data needed by our Pharaoh-like decoder are translation table and language model. Our 5-gram language model is trained by the SRI language modeling toolkit (Stolcke, 2002). The translation table is obtained as described in (Koehn et al., 2003), i.e. the alignment tool GIZA++ is run over the training data in both translation directions, and the two alignTest Setting BLEU B1 standard phrase-based SMT 29.22 B2 (B1) + clause splitting 29.13 Table 2: Experiment Baseline Test Setting BLEU BLEU 2-ary 2,3-ary 1 rule 29.77 30.31 2 ME (phrase label) 29.93 30.49 3 ME (left,right) 30.10 30.53 4 ME ((3)+head) 30.24 30.71 5 ME ((3)+phrase label) 30.12 30.30 6 ME ((4)+context) 30.24 30.76 Table 3: Tests on Various Reordering Models The 3rd column comprises the BLEU scores obtained by reordering binary nodes only, the 4th column the scores by reordering both binary and 3-ary nodes. The features used in the ME models are explained in section 3. ment matrices are integrated by the GROW-DIAGFINAL method into one matrix, from which phrase translation probabilities and lexical weights of both directions are obtained. 
The most important system parameter is, of course, distortion limit. Pilot experiments using the standard phrase-based model show that the optimal distortion limit is 4, which was therefore selected for all our experiments. 7.4 Experiment Results and Analysis The baseline of our experiments is the standard phrase-based model, which achieves, as shown by table 2, the BLEU score of 29.22. From the same table we can also see that the clause splitting mechanism introduced in section 6 does not significantly affect translation performance. Two sets of experiments were run. The first set, of which the results are shown in table 3, tests the effect of different forms of reordering knowledge. In all these tests only the top 10 reorderings of each clause are generated. The contrast between tests 1 and 2 shows that ME modeling of reordering outperforms reordering rules. Tests 3 and 4 show that phrase labels can achieve as good performance as the lexical features of mere leftmost and rightmost words. However, when more lexical features 725 Input 0 2005# ¤ R ™  é Úá qÖ Z öÌ / äú ÷ =ý Reference Hainan province will continue to increase its investment in the public services and social services infrastructures in 2005 Baseline Hainan Province in 2005 will continue to increase for the public service and social infrastructure investment Translation with Preprocessing Hainan Province in 2005 will continue to increase investment in public services and social infrastructure Table 4: Translation Example 1 Test Setting BLEU a length constraint 30.52 b DL=0 30.48 c n=100 30.78 Table 5: Tests on Various Constraints are added (tests 4 and 6), phrase labels can no longer compete with lexical features. Surprisingly, test 5 shows that the combination of phrase labels and lexical features is even worse than using either phrase labels or lexical features only. Apart from quantitative evaluation, let us consider the translation example of test 6 shown in table 4. To generate the correct translation, a phrasebased decoder should, after translating the word “” as “increase”, jump to the last word “= ý(investment)”. This is obviously out of the capability of the baseline model, and our approach can accomplish the desired reordering as expected. By and large, the experiment results show that no matter what kind of reordering knowledge is used, the preprocessing of syntax-based reordering does greatly improve translation performance, and that the reordering of 3-ary nodes is crucial. The second set of experiments test the effect of some constraints. The basic setting is the same as that of test 6 in the first experiment set, and reordering is applied to both binary and 3-ary nodes. The results are shown in table 5. In test (a), the constraint is that the module does not consider any reordering of a node if the yield of this node contains not more than four words. The underlying rationale is that reordering within distortion limit should be left to the distance-based model during decoding, and syntax-based reordering should focus on global reordering only. The result shows that this hypothesis does not hold. In practice syntax-based reordering also helps local reordering. Consider the translation example of test (a) shown in table 6. Both the baseline model and our model translate in the same way up to the word “Œw” (which is incorrectly translated as “and”). From this point, the proposed preprocessing model correctly jump to the last phrase “Ÿq ê ÿX/discussed”, while the baseline model fail to do so for the best translation. 
It should be noted, however, that there are only four words between “Œw” and the last phrase, and the desired order of decoding is within the capability of the baseline system. With the feature of syntax-based global reordering, a phrase-based decoder performs better even with respect to local reordering. It is because syntaxbased reordering adds more weight to a hypothesis that moves words across longer distance, which is penalized by the distance-based model. In test (b) distortion limit is set as 0; i.e. reordering is done merely by syntax-based preprocessing. The worse result is not surprising since, after all, preprocessing discards many possibilities and thus reduce the search space of the decoder. Some local reordering model is still needed during decoding. Finally, test (c) shows that translation performance does not improve significantly by raising the number of reorderings. This implies that our approach is very efficient in that only a small value of n is capable of capturing the most important global reordering patterns. 8 Conclusion and Future Work This paper proposes a novel, probabilistic approach to reordering which combines the merits of syntax and phrase-based SMT. On the one hand, global reordering, which cannot be accomplished by the 726 Input ¦$3 , ƒ ) Z ÏC Œw O c u ¯ Ÿq ê ÿX Reference Meanwhile , Yushchenko and his assistants discussed issues concerning the establishment of a new government Baseline The same time , Yushchenko assistants and a new Government on issues discussed Translation with Preprocessing The same time , Yushchenko assistants and held discussions on the issue of a new government Table 6: Translation Example 2 phrase-based model, is enabled by the tree operations in preprocessing. On the other hand, local reordering is preserved and even strengthened in our approach. Experiments show that, for the NIST MT05 task of Chinese-to-English translation, the proposal leads to BLEU improvement of 1.56%. Despite the encouraging experiment results, it is still not very clear how the syntax-based and distance-based models complement each other in improving word reordering. In future we need to investigate their interaction and identify the contribution of each component. Moreover, it is observed that the parse trees returned by a full parser like the Stanford parser contain too many nodes which seem not be involved in desired reorderings. Shallow parsers should be tried to see if they improve the quality of reordering knowledge. References Yaser Al-Onaizan, and Kishore Papineni. 2006. Distortion Models for Statistical Machine Translation. Proceedings for ACL 2006. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause Restructuring for Statistical Machine Translation. Proceedings for ACL 2005. M.R. Costa-juss`a, and J.A.R. Fonollosa. 2006. Statistical Machine Reordering. Proceedings for EMNLP 2006. Heidi Fox. 2002. Phrase Cohesion and Statistical Machine Translation. Proceedings for EMNLP 2002. Jianfeng Gao, Mu Li, and Chang-Ning Huang 2003. Improved Source-Channel Models for Chinese Word Segmentation. Proceedings for ACL 2003. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. Proceedings for ACL 2003. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. Proceedings for HLT-NAACL 2003. Philipp Koehn. 2004. Pharaoh: a Beam Search Decoder for Phrase-Based Statistical Machine Translation Models. Proceedings for AMTA 2004. 
Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. Proceedings for IWSLT 2005. Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. Proceedings for ACL 2003. Franz J. Och, and Hermann Ney. 2000. Improved Statistical Alignment Models. Proceedings for ACL 2000. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. Proceedings for ACL 2002. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency Treelet Translation: Syntactically Informed Phrasal SMT. Proceedings for ACL 2005. Andreas Stolcke. 2002. SRILM - An Extensible Language Modeling Toolkit. Proceedings for the International Conference on Spoken Language Understanding 2002. Christoph Tillmann. 2004. A Unigram Orientation Model for Statistical Machine Translation. Proceedings for ACL 2004. Fei Xia, and Michael McCord 2004. Improving a Statistical MT System with Automatically Learned Rewrite Patterns. Proceedings for COLING 2004. Kenji Yamada, and Kevin Knight. 2001. A syntaxbased statistical translation model. Proceedings for ACL 2001. Le Zhang. 2004. Maximum Entropy Modeling Toolkit for Python and C++. http://homepages.inf.ed.ac.uk/s0450736/maxent toolkit.html. 727
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 728–735, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Machine Translation by Triangulation: Making Effective Use of Multi-Parallel Corpora Trevor Cohn and Mirella Lapata Human Computer Research Centre, School of Informatics University of Edinburgh {tcohn,mlap}@inf.ed.ac.uk Abstract Current phrase-based SMT systems perform poorly when using small training sets. This is a consequence of unreliable translation estimates and low coverage over source and target phrases. This paper presents a method which alleviates this problem by exploiting multiple translations of the same source phrase. Central to our approach is triangulation, the process of translating from a source to a target language via an intermediate third language. This allows the use of a much wider range of parallel corpora for training, and can be combined with a standard phrase-table using conventional smoothing methods. Experimental results demonstrate BLEU improvements for triangulated models over a standard phrase-based system. 1 Introduction Statistical machine translation (Brown et al., 1993) has seen many improvements in recent years, most notably the transition from word- to phrase-based models (Koehn et al., 2003). Modern SMT systems are capable of producing high quality translations when provided with large quantities of training data. With only a small training sample, the translation output is often inferior to the output from using larger corpora because the translation algorithm must rely on more sparse estimates of phrase frequencies and must also ‘back-off’ to smaller sized phrases. This often leads to poor choices of target phrases and reduces the coherence of the output. Unfortunately, parallel corpora are not readily available in large quantities, except for a small subset of the world’s languages (see Resnik and Smith (2003) for discussion), therefore limiting the potential use of current SMT systems. In this paper we provide a means for obtaining more reliable translation frequency estimates from small datasets. We make use of multi-parallel corpora (sentence aligned parallel texts over three or more languages). Such corpora are often created by international organisations, the United Nations (UN) being a prime example. They present a challenge for current SMT systems due to their relatively moderate size and domain variability (examples of UN texts include policy documents, proceedings of meetings, letters, etc.). Our method translates each target phrase, t, first to an intermediate language, i, and then into the source language, s. We call this two-stage translation process triangulation (Kay, 1997). We present a probabilistic formulation through which we can estimate the desired phrase translation distribution (phrase-table) by marginalisation, p(s|t) = P i p(s, i|t). As with conventional smoothing methods (Koehn et al., 2003; Foster et al., 2006), triangulation increases the robustness of phrase translation estimates. In contrast to smoothing, our method alleviates data sparseness by exploring additional multiparallel data rather than adjusting the probabilities of existing data. Importantly, triangulation provides us with separately estimated phrase-tables which could be further smoothed to provide more reliable distributions. 
Moreover, the triangulated phrase-tables can be easily combined with the standard sourcetarget phrase-table, thereby improving the coverage over unseen source phrases. As an example, consider Figure 1 which shows the coverage of unigrams and larger n-gram phrases when using a standard source target phrase-table, a triangulated phrase-table with one (it) and nine languages (all), and a combination of standard and triangulated phrase-tables (all+standard). The phrases were harvested from a small French-English bitext 728 and evaluated against a test set. Although very few small phrases are unknown, the majority of larger phrases are unseen. The Italian and all results show that triangulation alone can provide similar or improved coverage compared to the standard sourcetarget model; further improvement is achieved by combining the triangulated and standard models (all+standard). These models and datasets will be described in detail in Section 3. We also demonstrate that triangulation can be used on its own, that is without a source-target distribution, and still yield acceptable translation output. This is particularly heartening, as it provides a means of translating between the many “low density” language pairs for which we don’t yet have a source-target bitext. This allows SMT to be applied to a much larger set of language pairs than was previously possible. In the following section we provide an overview of related work. Section 3 introduces a generative formulation of triangulation. We present our evaluation framework in Section 4 and results in Section 5. 2 Related Work The idea of using multiple source languages for improving the translation quality of the target language dates back at least to Kay (1997), who observed that ambiguities in translating from one language onto another may be resolved if a translation into some third language is available. Systems which have used this notion of triangulation typically create several candidate sentential target translations for source sentences via different languages. A single translation is then selected by finding the candidate that yields the best overall score (Och and Ney, 2001; Utiyama and Isahara, 2007) or by cotraining (Callison-Burch and Osborne, 2003). This ties in with recent work on ensemble combinations of SMT systems, which have used alignment techniques (Matusov et al., 2006) or simple heuristics (Eisele, 2005) to guide target sentence selection and generation. Beyond SMT, the use of an intermediate language as a translation aid has also found application in cross-lingual information retrieval (Gollins and Sanderson, 2001). Callison-Burch et al. (2006) propose the use of paraphrases as a means of dealing with unseen source phrases. Their method acquires paraphrases by identifying candidate phrases in the source lan1 2 3 4 5 6 phrase length proportion of test events in phrase table 0.005 0.01 0.02 0.05 0.1 0.2 0.5 1 standard Italian all all + standard Figure 1: Coverage of fr →en test phrases using a 10,000 sentence bitext. The standard model is shown alongside triangulated models using one (Italian) or nine other languages (all). guage, translating them into multiple target languages, and then back to the source. Unknown source phrases are substituted by the back-translated paraphrases and translation proceeds on the paraphrases. In line with previous work, we exploit multiple source corpora to alleviate data sparseness and increase translation coverage. However, we differ in several important respects. 
Our method operates over phrases rather than sentences. We propose a generative formulation which treats triangulation not as a post-processing step but as part of the translation model itself. The induced phrase-table entries are fed directly into the decoder, thus avoiding the additional inefficiencies of merging the output of several translation systems. Although related to Callison-Burch et al. (2006) our method is conceptually simpler and more general. Phrase-table entries are created via multiple source languages without the intermediate step of paraphrase extraction, thereby reducing the exposure to compounding errors. Our phrase-tables may well contain paraphrases but these are naturally induced as part of our model, without extra processing effort. Furthermore, we improve the translation estimates for both seen and unseen phrase-table entries, whereas Callison-Burch et al. concentrate solely on unknown phrases. In contrast to Utiyama and Isahara (2007), we employ a large number of intermediate languages and demonstrate how triangulated phrase-tables can be combined with standard phrase-tables to improve translation output. 729 en varm kartoffel een hete aardappel uma batata quente une patate une patate chaud délicate une question délicate a hot potato source intermediate target Figure 2: Triangulation between English (source) and French (target), showing three phrases in Dutch, Danish and Portuguese, respectively. Arrows denote phrases aligned in a language pair and also the generative translation process. 3 Triangulation We start with a motivating example before formalising the mechanics of triangulation. Consider translating the English phrase a hot potato1 into French, as shown in Figure 2. In our corpus this English phrase occurs only three times. Due to errors in the word alignment the phrase was not included in the English-French phrase-table. Triangulation first translates hot potato into a set of intermediate languages (Dutch, Danish and Portuguese are shown in the figure), and then these phrases are further translated into the target language (French). In the example, four different target phrases are obtained, all of which are useful phrase-table entries. We argue that the redundancy introduced by a large suite of other languages can correct for errors in the word alignments and also provide greater generalisation, since the translation distribution is estimated from a richer set of data-points. For example, instances of the Danish en varm kartoffel may be used to translate several English phrases, not only a hot potato. In general we expect that a wider range of possible translations are found for any source phrase, simply due to the extra layer of indirection. So, if a source phrase tends to align with two different target phrases, then we would also expect it to align with two phrases in the ‘intermediate’ language. These intermediate phrases should then each align with two target phrases, yielding up to four target phrases. Consequently, triangulation will often produce more varied translation distributions than the standard source-target approach. 3.1 Formalisation We now formalise triangulation as a generative probabilistic process operating independently on phrase pairs. We start with the conditional distribution over three languages, p(s, i|t), where the arguments denote phrases in the source, intermediate 1An idiom meaning a situation for which no one wants to claim responsibility. and target language, respectively. 
From this distribution, we can find the desired conditional over the source-target pair by marginalising out the intermediate phrases:2 p(s|t) = X i p(s|i, t)p(i|t) ≈ X i p(s|i)p(i|t) (1) where (1) imposes a simplifying conditional independence assumption: the intermediate phrase fully represents the information (semantics, syntax, etc.) in the source phrase, rendering the target phrase redundant in p(s|i, t). Equation (1) requires that all phrases in the intermediate-target bitext must also be found in the source-intermediate bitext, such that p(s|i) is defined. Clearly this will often not be the case. In these situations we could back-off to another distribution (by discarding part, or all, of the conditioning context), however we take a more pragmatic approach and ignore the missing phrases. This problem of missing contexts is uncommon in multi-parallel corpora, but is more common when the two bitexts are drawn from different sources. While triangulation is intuitively appealing, it may suffer from a few problems. Firstly, as with any SMT approach, the translation estimates are based on noisy automatic word alignments. This leads to many errors and omissions in the phrase-table. With a standard source-target phrase-table these errors are only encountered once, however with triangulation they are encountered twice, and therefore the errors will compound. This leads to more noisy estimates than in the source-target phrase-table. Secondly, the increased exposure to noise means that triangulation will omit a greater proportion of large or rare phrases than the standard method. An 2Equation (1) is used with the source and target arguments reversed to give p(t|s). 730 alignment error in either of the source-intermediate or intermediate-target bitexts can prevent the extraction of a source-target phrase pair. This effect can be seen in Figure 1, where the coverage of the Italian triangulated phrase-table is worse than the standard source-target model, despite the two models using the same sized bitexts. As we explain in the next section, these problems can be ameliorated by using the triangulated phrase-table in conjunction with a standard phrase-table. Finally, another potential problem stems from the independence assumption in (1), which may be an oversimplification and lead to a loss of information. The experiments in Section 5 show that this effect is only mild. 3.2 Merging the phrase-tables Once induced, the triangulated phrase-table can be usefully combined with the standard source-target phrase-table. The simplest approach is to use linear interpolation to combine the two (or more) distributions, as follows: p(s, t) = X j λjpj(s, t) (2) where each joint distribution, pj, has a non-negative weight, λj, and the weights sum to one. The joint distribution for triangulated phrase-tables is defined in an analogous way to Equation (1). We expect that the standard phrase-table should be allocated a higher weight than triangulated phrase-tables, as it will be less noisy. The joint distribution is now conditionalised to yield p(s|t) and p(t|s), which are both used as features in the decoder. Note that the resulting conditional distribution will be drawn solely from one input distribution when the conditioning context is unseen in the remaining distributions. This may lead to an over-reliance on unreliable distributions, which can be ameliorated by smoothing (e.g., Foster et al. (2006)). 
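The following sketch makes the interpolation of equation (2) and the subsequent conditionalisation concrete; the dictionary representation of the joint phrase-tables is an assumption made for illustration only.

```python
def interpolate_phrase_tables(tables, weights):
    """Linear interpolation of joint phrase distributions, as in equation (2).
    Each table maps (source_phrase, target_phrase) to p_j(s, t); the weights
    are the non-negative lambdas, summing to one."""
    mixed = {}
    for table, lam in zip(tables, weights):
        for pair, prob in table.items():
            mixed[pair] = mixed.get(pair, 0.0) + lam * prob
    return mixed

def conditionalise(joint, axis=0):
    """Turn the joint table into a conditional one: with axis=0 each entry
    (s, t) is divided by the total mass of s, giving p(t | s); axis=1 gives
    p(s | t)."""
    totals = {}
    for pair, prob in joint.items():
        totals[pair[axis]] = totals.get(pair[axis], 0.0) + prob
    return {pair: prob / totals[pair[axis]] for pair, prob in joint.items()}
```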
As an alternative to linear interpolation, we also employ a weighted product for phrase-table combination: p(s|t) ∝ Y j pj(s|t)λj (3) This has the same form used for log-linear training of SMT decoders (Och, 2003), which allows us to treat each distribution as a feature, and learn the mixing weights automatically. Note that we must individually smooth the component distributions in (3) to stop zeros from propagating. For this we use Simple Good-Turing smoothing (Gale and Sampson, 1995) for each distribution, which provides estimates for zero count events. 4 Experimental Design Corpora We used the Europarl corpus (Koehn, 2005) for experimentation. This corpus consists of about 700,000 sentences of parliamentary proceedings from the European Union in eleven European languages. We present results on the full corpus for a range of language pairs. In addition, we have created smaller parallel corpora by sub-sampling 10,000 sentence bitexts for each language pair. These corpora are likely to have minimal overlap — about 1.5% of the sentences will be shared between each pair. However, the phrasal overlap is much greater (10 to 20%), which allows for triangulation using these common phrases. This training setting was chosen to simulate translating to or from a “low density” language, where only a few small independently sourced parallel corpora are available. These bitexts were used for direct translation and triangulation. All experimental results were evaluated on the ACL/WMT 20053 set of 2,000 sentences, and are reported in BLEU percentage-points. Decoding Pharaoh (Koehn, 2003), a beamsearch decoder, was used to maximise: T∗= arg max T Y j fj(T, S)λj (4) where T and S denote a target and source sentence respectively. The parameters, λj, were trained using minimum error rate training (Och, 2003) to maximise the BLEU score (Papineni et al., 2002) on a 150 sentence development set. We used a standard set of features, comprising a 4-gram language model, distance based distortion model, forward and backward translation probabilities, forward and backward lexical translation scores and the phraseand word-counts. The translation models and lexical scores were estimated on the training corpus which was automatically aligned using Giza++ (Och et al., 1999) in both directions between source and target and symmetrised using the growing heuristic (Koehn et al., 2003). 3For details see http://www.statmt.org/wpt05/ mt-shared-task. 731 Lexical weights The lexical translation score is used for smoothing the phrase-table translation estimate. This represents the translation probability of a phrase when it is decomposed into a series of independent word-for-word translation steps (Koehn et al., 2003), and has proven a very effective feature (Zens and Ney, 2004; Foster et al., 2006). Pharaoh’s lexical weights require access to word-alignments; calculating these alignments between the source and target words in a phrase would prove difficult for a triangulated model. Therefore we use a modified lexical score, corresponding to the maximum IBM model 1 score for the phrase pair: lex(t|s) = 1 Z max a Y k p(tk|sak) (5) where the maximisation4 ranges over all one-tomany alignments and Z normalises the score by the number of possible alignments. The lexical probability is obtained by interpolating a relative frequency estimate on the sourcetarget bitext with estimates from triangulation, in the same manner used for phrase translations in (1) and (2). 
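Because the alignment of each target word in equation (5) can be chosen independently, the maximisation factorises into per-word maxima, as the sketch below shows; the word-probability dictionary and the particular reading of the normaliser Z are illustrative assumptions, not the paper's exact implementation.

```python
def max_model1_lex_score(target, source, p_word):
    """Modified lexical score of equation (5): the maximum IBM model 1 score
    of the phrase pair, normalised by the number of one-to-many alignments.
    p_word[t][s] is assumed to hold the word translation probability p(t|s)."""
    score = 1.0
    for t in target:
        # max over alignments = product of per-target-word maxima
        score *= max(p_word.get(t, {}).get(s, 0.0) for s in source)
    z = float(len(source)) ** len(target)  # assumed count of possible alignments
    return score / z
```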
The addition of the lexical probability feature yielded a substantial gain of up to two BLEU points over a basic feature set. 5 Experimental Results The evaluation of our method was motivated by three questions: (1) How do different training requirements affect the performance of the triangulated models presented in this paper? We expect performance gains with triangulation on small and moderate datasets. (2) Is machine translation output influenced by the choice of the intermediate language/s? Here, we would like to evaluate whether the number and choice of intermediate languages matters. (3) What is the quality of the triangulated phrase-table? In particular, we are interested in the resulting distribution and whether it is sufficiently distinct from the standard phrase-table. 5.1 Training requirements Before reporting our results, we briefly discuss the specific choice of model for our experiments. As mentioned in Section 3, our method combines the 4The maximisation in (5) can be replaced with a sum with similar experimental results. standard interp +indic separate en →de 12.03 12.66 12.95 12.25 fr →en 23.02 24.63 23.86 23.43 Table 1: Different feature sets used with the 10K training corpora, using a single language (es) for triangulation. The columns refer to standard, uniform interpolation, interpolation with 0-1 indicator features, and separate phrase-tables, respectively. triangulated phrase-table with the standard sourcetarget one. This is desired in order to compensate for the noise incurred by the triangulation process. We used two combination methods, namely linear interpolation (see (2)) and a weighted geometric mean (see (3)). Table 1 reports the results for two translation tasks when triangulating with a single language (es) using three different feature sets, each with different translation features. The interpolation model uses uniform linear interpolation to merge the standard and triangulated phrase-tables. Non-uniform mixtures did not provide consistent gains, although, as expected, biasing towards the standard phrasetable was more effective than against. The indicator model uses the same interpolated distribution along with a series of 0-1 indicator features to identify the source of each event, i.e., if each (s, t) pair is present in phrase-table j. We also tried per-context features with similar results. The separate model has a separate feature for each phrase-table. All three feature sets improve over the standard source-target system, while the interpolated features provided the best overall performance. The relatively poorer performance of the separate model is perhaps surprising, as it is able to differentially weight the component distributions; this is probably due to MERT not properly handling the larger feature sets. In all subsequent experiments we report results using linear interpolation. As a proof of concept, we first assessed the effect of triangulation on corpora consisting of 10,000 sentence bitexts. We expect triangulation to deliver performance gains on small corpora, since a large number of phrase-table entries will be unseen. In Table 2 each entry shows the BLEU score when using the standard phrase-table and the absolute improvement when using triangulation. Here we have used three languages for triangulation (it ∪{de, en, es, fr}\{s, t}). The source-target languages were chosen so as to mirror the evaluation setup of NAACL/WMT. 
The translation tasks range 732 s ↓t → de en es fr de 17.58 16.84 18.06 +1.20 +1.99 +1.94 en 12.45 23.83 24.05 +1.22 +1.04 +1.48 es 12.31 23.83 32.69 +2.24 +1.35 +0.85 fr 11.76 23.02 31.22 +2.41 +2.24 +1.30 Table 2: BLEU improvements over the standard phrase-table (top) when interpolating with three triangulated phrase-tables (bottom) on the small training sample. from easy (es →fr) to very hard (de →en). In all cases triangulation resulted in an improvement in translation quality, with the highest gains observed for the most difficult tasks (to and from German). For these tasks the standard systems have poor coverage (due in part to the sizeable vocabulary of German phrases) and therefore the gain can be largely explained by the additional coverage afforded by the triangulated phrase-tables. To test whether triangulation can also improve performance of larger corpora we ran six separate translation tasks on the full Europarl corpus. The results are presented in Table 3, for a single triangulation language used alone (triang) or uniformly interpolated with the standard phrase-table (interp). These results show that triangulation can produce high quality translations on its own, which is noteworthy, as it allows for SMT between a much larger set of language pairs. Using triangulation in conjunction with the standard phrase-table improved over the standard system in most instances, and only degraded performance once. The improvement is largest for the German tasks which can be explained by triangulation providing better robustness to noisy alignments (which are often quite poor for German) and better estimates of low-count events. The difficulty of aligning German with the other languages is apparent from the Giza++ perplexity: the final Model 4 perplexities for German are quite high, as much as double the perplexity for more easily aligned language pairs (e.g., Spanish-French). Figure 3 shows the effect of triangulation on different sized corpora for the language pair fr →en. It presents learning curves for the standard system and a triangulated system using one language (es). As can be seen, gains from triangulation only diminish slightly for larger training corpora, and that task standard interm triang interp de →en 23.85 es 23.48 24.36 en →de 17.24 es 16.28 17.42 es →en 30.48 fr 29.06 30.52 en →es 29.09 fr 28.19 29.09 fr →en 29.66 es 29.59 30.36 en →fr 30.07 es 28.94 29.62 Table 3: Results on the full training set showing triangulation with a single language, both alone (triang) and alongside a standard model (interp). G G G G size of training bitext(s) BLEU score 10K 40K 160K 700K 22 24 26 28 30 G standard triang interp Figure 3: Learning curve for fr →en translation for the standard source-target model and a triangulated model using Spanish as an intermediate language. the purely triangulated models have very competitive performance. The gain from interpolation with a triangulated model is roughly equivalent to having twice as much training data. Finally, notice that triangulation may benefit when the sentences in each bitext are drawn from the same source, in that there are no unseen ‘intermediate’ phrases, and therefore (1) can be easily evaluated. We investigate this by examining the robustness of our method in the face of disjoint bitexts. The concepts contained in each bitext will be more varied, potentially leading to better coverage of the target language. 
In lieu of a study on different domain bitexts which we plan for the future, we bisected the Europarl corpus for fr →en, triangulating with Spanish. The triangulated models were presented with fr-es and es-en bitexts drawn from either the same half of the corpus or from different halves, resulting in scores of 28.37 and 28.13, respectively.5 These results indicate that triangulation is effective 5The baseline source-target system on one half has a score of 28.85. 733 triang interp BLEU score 19 20 21 22 23 24 25 fi (−14.26) da da de de el el es es fi it it nl nl pt pt sv sv Figure 4: Comparison of different triangulation languages for fr →en translation, relative to the standard model (10K training sample). The bar for fihas been truncated to fit on the graph. for disjoint bitexts, although ideally we would test this with independently sourced parallel texts. 5.2 The choice of intermediate languages The previous experiments used an ad-hoc choice of ‘intermediate’ language/s for triangulation, and we now examine which languages are most effective. Figure 4 shows the efficacy of the remaining nine languages when translating fr →en. Minimum error-rate training was not used for this experiment, or the next shown in Figure 5, in order to highlight the effect of the changing translation estimates. Romance languages (es, it, pt) give the best results, both on their own and when used together with the standard phrase-table (using uniform interpolation); Germanic languages (de, nl, da, sv) are a distant second, with the less related Greek and Finnish the least useful. Interpolation yields an improvement for all ‘intermediate’ languages, even Finnish, which has a very low score when used alone. The same experiment was repeated for en →de translation with similar trends, except that the Germanic languages out-scored the Romance languages. These findings suggest that ‘intermediate’ languages which exhibit a high degree of similarity with the source or target language are desirable. We conjecture that this is a consequence of better automatic word alignments and a generally easier translation task, as well as a better preservation of information between aligned phrases. Using a single language for triangulation clearly improves performance, but can we realise further improvements by using additional languages? Fig1 2 3 4 5 6 7 8 9 # intermediate languages BLEU score 22 23 24 25 26 triang interp Figure 5: Increasing the number of intermediate languages used for triangulation increases performance for fr →en (10K training sample). The dashed line shows the BLEU score for the standard phrase-table. ure 5 shows the performance profile for fr →en when adding languages in a fixed order. The languages were ordered by family, with Romance before Germanic before Greek and Finnish. Each addition results in an increase in performance, even for the final languages, from which we expect little information. The purely triangulated (triang) and interpolated scores (interp) are converging, suggesting that the source-target bitext is redundant given sufficient triangulated data. We obtained similar results for en →de. 5.3 Evaluating the quality of the phrase-table Our experimental results so far have shown that triangulation is not a mere approximation of the source-target phrase-table, but that it extracts additional useful translation information. We now assess the phrase-table quality more directly. Comparative statistics of a standard and a triangulated phrase-table are given in Table 4. 
The coverage over source and target phrases is much higher in the standard table than the triangulated tables, which reflects the reduced ability of triangulation to extract large phrases — despite the large increase in the number of events. The table also shows the overlapping probability mass which measures the sum of probability in one table for which the events are present in the other. This shows that the majority of mass is shared by both tables (as joint distributions), although there are significant differences. The JensenShannon divergence is perhaps more appropriate for the comparison, giving a relatively high divergence 734 standard triang source phrases (M) 8 2.5 target phrases (M) 7 2.5 events (M) 12 70 overlapping mass 0.646 0.750 Table 4: Comparative statistics of the standard triangulated table on fr →en using the full training set and Spanish as an intermediate language. of 0.3937. This augurs well for the combination of standard and triangulated phrase-tables, where diversity is valued. The decoding results (shown in Table 3 for fr →en) indicate that the two methods have similar efficacy, and that their interpolated combination provides the best overall performance. 6 Conclusion In this paper we have presented a novel method for obtaining more reliable translation estimates from small datasets. The key premise of our work is that multi-parallel data can be usefully exploited for improving the coverage and quality of phrase-based SMT. Our triangulation method translates from a source to a target via one or many intermediate languages. We present a generative formulation of this process and show how it can be used together with the entries of a standard source-target phrase-table. We observe large performance gains when translating with triangulated models trained on small datasets. Furthermore, when combined with a standard phrase-table, our models also yield performance improvements on larger datasets. Our experiments revealed that triangulation benefits from a large set of intermediate languages and that performance is increased when languages of the same family to the source or target are used as intermediates. We have just scratched the surface of the possibilities for the framework discussed here. Important future directions lie in combining triangulation with richer means of conventional smoothing and using triangulation to translate between low-density language pairs. Acknowledgements The authors acknowledge the support of EPSRC (grants GR/T04540/01 and GR/T04557/01). Special thanks to Markus Becker, Chris Callison-Burch, David Talbot and Miles Osborne for their helpful comments. References P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. C. Callison-Burch, M. Osborne. 2003. Bootstrapping parallel corpora. In Proceedings of the NAACL Workshop on Building and Using Parallel Texts: Data Driven Machine Translation and Beyond, Edmonton, Canada. C. Callison-Burch, P. Koehn, M. Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the HLT/NAACL, 17–24, New York, NY. A. Eisele. 2005. First steps towards multi-engine machine translation. In Proceedings of the ACL Workshop on Building and Using Parallel Texts, 155–158, Ann Arbor, MI. G. Foster, R. Kuhn, H. Johnson. 2006. Phrase-table smoothing for statistical machine translation. In Proceedings of the EMNLP, 53–61, Sydney, Australia. W. A. Gale, G. 
Sampson. 1995. Good-turing frequency estimation without tears. Journal of Quantitative Linguistics, 2(3):217–237. T. Gollins, M. Sanderson. 2001. Improving cross language retrieval with triangulated translation. In Proceedings of the SIGIR, 90–95, New Orleans, LA. M. Kay. 1997. The proper place of men and machines in language translation. Machine Translation, 12(1–2):3–23. P. Koehn, F. J. Och, D. Marcu. 2003. Statistical phrasebased translation. In Proceedings of the HLT/NAACL, 48– 54, Edomonton, Canada. P. Koehn. 2003. Noun Phrase Translation. Ph.D. thesis, University of Southern California, Los Angeles, California. P. Koehn. 2005. Europarl: A parallel corpus for evaluation of machine translation. In Proceedings of MT Summit, Phuket, Thailand. E. Matusov, N. Ueffing, H. Ney. 2006. Computing consesus translation from multiple machine translation systems using enhanced hypotheses alignment. In Proceedings of the EACL, 33–40, Trento, Italy. F. J. Och, H. Ney. 2001. Statistical multi-source translation. In Proceedings of the MT Summit, 253–258, Santiago de Compostela, Spain. F. J. Och, C. Tillmann, H. Ney. 1999. Improved alignment models for statistical machine translation. In Proceedings of the EMNLP and VLC, 20–28, University of Maryland, College Park, MD. F. J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the ACL, 160–167, Sapporo, Japan. K. Papineni, S. Roukos, T. Ward, W.-J. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the ACL, 311–318, Philadelphia, PA. P. Resnik, N. A. Smith. 2003. The Web as a parallel corpus. Computational Linguistics, 29(3):349–380. M. Utiyama, H. Isahara. 2007. A comparison of pivot methods for phrase-based statistical machine translation. In Proceedings of the HLT/NAACL, 484–491, Rochester, NY. R. Zens, H. Ney. 2004. Improvements in phrase-based statistical machine translation. In D. M. Susan Dumais, S. Roukos, eds., Proceedings of the HLT/NAACL, 257–264, Boston, MA. 735
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 736–743, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Maximum Expected Utility Framework for Binary Sequence Labeling Martin Jansche∗ [email protected] Abstract We consider the problem of predictive inference for probabilistic binary sequence labeling models under F-score as utility. For a simple class of models, we show that the number of hypotheses whose expected Fscore needs to be evaluated is linear in the sequence length and present a framework for efficiently evaluating the expectation of many common loss/utility functions, including the F-score. This framework includes both exact and faster inexact calculation methods. 1 Introduction 1.1 Motivation and Scope The weighted F-score (van Rijsbergen, 1974) plays an important role in the evaluation of binary classifiers, as it neatly summarizes a classifier’s ability to identify the positive class. A variety of methods exists for training classifiers that optimize the F-score, or some similar trade-off between false positives and false negatives, precision and recall, sensitivity and specificity, type I error and type II error rate, etc. Among the most general methods are those of Mozer et al. (2001), whose constrained optimization technique is similar to those in (Gao et al., 2006; Jansche, 2005). More specialized methods also exist, for example for support vector machines (Musicant et al., 2003) and for conditional random fields (Gross et al., 2007; Suzuki et al., 2006). All of these methods are about classifier training. In this paper we focus primarily on the related, but orthogonal, issue of predictive inference with a fully trained probabilistic classifier. Using the weighted F-score as our utility function, predictive inference amounts to choosing an optimal hypothesis which maximizes the expected utility. We refer to this as ∗Current affiliation: Google Inc. Former affiliation: Center of Computational Learning Systems, Columbia University. the prediction or decoding task. In general, decoding can be a hard computational problem (Casacuberta and de la Higuera, 2000; Knight, 1999). In this paper we show that the maximum expected F-score decoding problem can be solved in polynomial time under certain assumptions about the underlying probability model. One key ingredient in our solution is a very general framework for evaluating the expected F-score, and indeed many other utility functions, of a fixed hypothesis.1 This framework can also be applied to discriminative classifier training. 1.2 Background and Notation We formulate our approach in terms of sequence labeling, although it has applications beyond that. This is motivated by the fact that our framework for evaluating expected utility is indeed applicable to general sequence labeling tasks, while our decoding method is more restricted. Another reason is that the F-score is only meaningful for comparing two (multi)sets or two binary sequences, but the notation for multisets is slightly more awkward. All tasks considered here involve strings of binary labels. We write the length of a given string y ∈ {0,1}n as |y| = n. It is convenient to view such strings as real vectors – whose components happen to be 0 or 1 – with the dot product defined as usual. Then y·y is the number of ones that occur in the string y. For two strings x,y of the same length |x| = |y| the number of ones that occur at corresponding indices is x·y. 
Given a hypothesis z and a gold standard label sequence y, we define the following quantities: 1. T = y·y, the genuine positives; 2. P = z·z, the predicted positives; 3. A = z·y, the true positives (predicted positives that are genuinely positive); 1A proof-of-concept implementation is available at http: //purl.org/net/jansche/meu_framework/. 736 4. Recl = A/T, recall (a.k.a. sensitivity or power); 5. Prec = A/P, precision. The β-weighted F-score is then defined as the weighted harmonic mean of recall and precision. This simplifies to Fβ = (β +1)A P+β T (β > 0) (1) where we assume for convenience that 0/0 def = 1 to avoid explicitly dealing with the special case of the denominator being zero. We will write the weighted F-score from now on as F(z,y) to emphasize that it is a function of z and y. 1.3 Expected F-Score In Section 3 we will develop a method for evaluating the expectation of the F-score, which can also be used as a smooth approximation of the raw F-score during classifier training: in that task (which we will not discuss further in this paper), z are the supervised labels, y is the classifier output, and the challenge is that F(z,y) does not depend smoothly on the parameters of the classifier. Gradient-based optimization techniques are not applicable unless some of the quantities defined above are replaced by approximations that depend smoothly on the classifier’s parameters. For example, the constrained optimization method of (Mozer et al., 2001) relies on approximations of sensitivity (which they call CA) and specificity2 (their CR); related techniques (Gao et al., 2006; Jansche, 2005) rely on approximations of true positives, false positives, and false negatives, and, indirectly, recall and precision. Unlike these methods we compute the expected F-score exactly, without relying on ad hoc approximations of the true positives, etc. Being able to efficiently compute the expected F-score is a prerequisite for maximizing it during decoding. More precisely, we compute the expectation of the function y 7→F(z,y), (2) which is a unary function obtained by holding the first argument of the binary function F fixed. It will henceforth be abbreviated as F(z,·), and we will denote its expected value by E[F(z,·)] = ∑ y∈{0,1}|z| F(z,y) Pr(y). (3) 2Defined as [(⃗1−z)·(⃗1−y)]  [(⃗1−y)·(⃗1−y)]. This expectation is taken with respect to a probability model over binary label sequences, written as Pr(y) for simplicity. This probability model may be conditional, that is, in general it will depend on covariates x and parameters θ. We have suppressed both in our notation, since x is fixed during training and decoding, and we assume that the model is fully identified during decoding. This is for clarity only and does not limit the class of models, though we will introduce additional, limiting assumptions shortly. We are now ready to tackle the inference task formally. 2 Maximum Expected F-Score Inference 2.1 Problem Statement Optimal predictive inference under F-score utility requires us to find an hypothesis ˆz of length n which maximizes the expected F-score relative to a given probabilistic sequence labeling model: ˆz = argmax z∈{0,1}n E[F(z,·)] = argmax z∈{0,1}n ∑ y F(z,y) Pr(y). (4) We require the probability model to factor into independent Bernoulli components (Markov order zero): Pr(y = (y1,...,yn)) = n ∏ i=1 pyi i (1−pi)1−yi. (5) In practical applications we might choose the overall probability distribution to be the product of independent logistic regression models, for example. 
Ordinary classification arises as a special case when the yi are i.i.d., that is, a single probabilistic classifier is used to find Pr(yi = 1 | xi). For our present purposes it is sufficient to assume that the inference algorithm takes as its input the vector (p1,..., pn), where pi is the probability that yi = 1. The discrete maximization problem (4) cannot be solved naively, since the number of hypotheses that would need to be evaluated in a brute-force search for an optimal hypothesis ˆz is exponential in the sequence length n. We show below that in fact only a few hypotheses (n+1 instead of 2n) need to be examined in order to find an optimal one. The inference algorithm is the intuitive one, analogous to the following simple observation: Start with the hypothesis z = 00...0 and evaluate its raw Fscore F(z,y) relative to a fixed but unknown binary 737 string y. Then z will have perfect precision (no positive labels means no chance to make mistakes), and zero recall (unless y = z). Switch on any bit of z that is currently off. Then precision will decrease or remain equal, while recall will increase or remain equal. Repeat until z = 11...1 is reached, in which case recall will be perfect and precision at its minimum. The inference algorithm for expected F-score follows the same strategy, and in particular it switches on the bits of z in order of non-increasing probability: start with 00...0, then switch on the bit i1 = argmaxi pi, etc. until 11...1 is reached. We now show that this intuitive strategy is indeed admissible. 2.2 Outer and Inner Maximization In general, maximization can be carried out piecewise, since argmax x∈X f(x) = argmax x∈{argmaxy∈Y f(y)|Y∈π(X)} f(x), where π(X) is any family (Y1,Y2,...) of nonempty subsets of X whose union S iYi is equal to X. (Recursive application would lead to a divide-and-conquer algorithm.) Duplication of effort is avoided if π(X) is a partition of X. Here we partition the set {0,1}n into equivalence classes based on the number of ones in a string (viewed as a real vector). Define Sm to be the set Sm = {s ∈{0,1}n | s·s = m} consisting of all binary strings of fixed length n that contain exactly m ones. Then the maximization problem (4) can be transformed into an inner maximization ˆs(m) = argmax s∈Sm E[F(s,·)], (6) followed by an outer maximization ˆz = argmax z∈{ˆs(0),...,ˆs(n)} E[F(z,·)]. (7) 2.3 Closed-Form Inner Maximization The key insight is that the inner maximization problem (6) can be solved analytically. Given a vector p = (p1,..., pn) of probabilities, define z(m) to be the binary label sequence with exactly m ones and n−m zeroes where for all indices i,k we have h z(m) i = 1∧z(m) k = 0 i →pi ≥pk. Algorithm 1 Maximizing the Expected F-Score. 1: Input: probabilities p = (p1,..., pn) 2: I ←indices of p sorted by non-increasing probability 3: z ←0...0 4: a ←0 5: v ←expectF(z, p) 6: for j ←1 to n do 7: i ←I[j] 8: z[i] ←1 // switch on the ith bit 9: u ←expectF(z, p) 10: if u > v then 11: a ←j 12: v ←u 13: for j ←a+1 to n do 14: z[I[j]] ←0 15: return (z,v) In other words, the most probable m bits (according to p) in z(m) are set and the least probable n−m bits are off. We rely on the following result, whose proof is deferred to Appendix A: Theorem 1. (∀s ∈Sm) E[F(z(m),·)] ≥E[F(s,·)]. Because z(m) is maximal in Sm, we may equate z(m) = argmaxs∈Sm E[F(s,·)] = ˆs(m) (modulo ties, which can always arise with argmax). 
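In code, the inner maximisation therefore reduces to sorting the probabilities once and switching on the top m bits, as in the following sketch (a direct reading of Theorem 1; the function and variable names are illustrative).

```python
def z_of_m(p, m):
    """Closed-form solution of the inner maximisation (6): the hypothesis
    with exactly m ones, placed on the m most probable positions."""
    top = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:m]
    z = [0] * len(p)
    for i in top:
        z[i] = 1
    return z

# Only the n + 1 candidates z_of_m(p, 0), ..., z_of_m(p, n) remain to be
# compared, which Algorithm 1 does incrementally.
candidates = [z_of_m([0.9, 0.2, 0.6], m) for m in range(4)]
# -> [[0, 0, 0], [1, 0, 0], [1, 0, 1], [1, 1, 1]]
```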
2.4 Pedestrian Outer Maximization With the inner maximization (6) thus solved, the outer maximization (7) can be carried out naively, since only n + 1 hypotheses need to be evaluated. This is precisely what Algorithm 1 does, which keeps track of the maximum value in v. On termination z = argmaxs E[F(s,·)]. Correctness follows directly from our results in this section. Algorithm 1 runs in time O(nlogn + n f(n)). A total of O(nlogn) time is required for accessing the vector p in sorted order (line 2). This dominates the O(n) time required to explicitly generate the optimal hypothesis (lines 13–14). The algorithm invokes a subroutine expectF(z, p) a total of n+1 times. This subroutine, which is the topic of the next section, evaluates, in time f(n), the expected F-score (with respect to p) of a given hypothesis z of length n. 3 Computing the Expected F-Score 3.1 Problem Statement We now turn to the problem of computing the expected value (3) of the F-score for a given hypothesis z relative to a fully identified probability model. The method presented here does not strictly require the 738 zeroth-order Markov assumption (5) instated earlier (a higher-order Markov assumption will suffice), but it shall remain in effect for simplicity. As with the maximization problem (4), the sum in (3) is over exponentially many terms and cannot be computed naively. But observe that the F-score (1) is a (rational) function of integer counts which are bounded, so it can take on only a finite, and indeed small, number of distinct values. We shall see shortly that the function (2) whose expectation we wish to compute has a domain whose cardinality is exponential in n, but the cardinality of its range is polynomial in n. The latter is sufficient to ensure that its expectation can be computed in polynomial time. The method we are about to develop is in fact very general and applies to many other loss and utility functions besides the F-score. 3.2 Expected F-Score as an Integral A few notions from real analysis are helpful because they highlight the importance of thinking about functions in terms of their range, level sets, and the equivalence classes they induce on their domain (the kernel of the function). A function g : Ω→R is said to be simple if it can be expressed as a linear combination of indicator functions (characteristic functions): g(x) = ∑ k∈K ak χBk(x), where K is a finite index set, ak ∈R, and Bk ⊆Ω. (χS : S →{0,1} is the characteristic function of set S.) Let Ωbe a countable set and P be a probability measure on Ω. Then the expectation of g is given by the Lebesgue integral of g. In the case of a simple function g as defined above, the integral, and hence the expectation, is defined as E[g] = Z Ωg dP = ∑ k∈K ak P(Bk). (8) This gives us a general recipe for evaluating E[g] when Ωis much larger than the range of g. Instead of computing the sum ∑y∈Ωg(y)P({y}) we can compute the sum in (8) above. This directly yields an efficient algorithm whenever K is sufficiently small and P(Bk) can be evaluated efficiently. The expected F-score is thus the Lebesgue integral of the function (2). Looking at the definition of the 0,0 Y:n, n:n 1,1 Y:Y 0,1 n:Y Y:n, n:n 2,2 Y:Y 1,2 n:Y Y:n, n:n Y:Y 0,2 n:Y Y:n, n:n 3,3 Y:Y 2,3 n:Y Y:n, n:n Y:Y 1,3 n:Y Y:n, n:n Y:Y 0,3 n:Y Y:n, n:n Y:n, n:n Y:n, n:n Y:n, n:n Figure 1: Finite State Classifier h′. F-score in (1) we see that the only expressions which depend on y are A = z · y and T = y · y (P = z · z is fixed because z is). But 0 ≤z · y ≤y · y ≤n = |z|. 
Therefore F(z,·) takes on at most (n+1)(n+2)/2, i.e. quadratically many, distinct values. It is a simple function with K = {(A,T) ∈N0 ×N0 | A ≤T ≤|z|, A ≤z·z} a(A,T) = (β +1)A z·z+β T where 0/0 def = 1 B(A,T) = {y | z·y = A, y·y = T}. 3.3 Computing Membership in Bk Observe that the family of sets B(A,T)  (A,T)∈K is a partition (namely the kernel of F(z,·)) of the set Ω= {0,1}n of all label sequences of length n. In turn it gives rise to a function h : Ω→K where h(y) = k iff y ∈Bk. The function h can be computed by a deterministic finite automaton, viewed as a sequence classifier: rather than assigning binary accept/reject labels, it assigns arbitrary labels from a finite set, in this case the index set K. For simplicity we show the initial portion of a slightly more general two-tape automaton h′ in Figure 1. It reads the two sequences z and y on its two input tapes and counts the number of matching positive labels (represented as Y) as well as the number of positive labels on the second tape. Its behavior is therefore h′(z,y) = (z · y, y · y). The function h is obtained as a special case when z (the first tape) is fixed. Note that this only applies to the special case when 739 Algorithm 2 Simple Function Instance for F-Score. def start(): return (0,0) def transition(k,z,i,yi): (A,T) ←k if yi = 1 then T ←T +1 if z[i] = 1 then A ←A+1 return (A,T) def a(k,z): (A,T) ←k F ←(β +1)A z·z+β T // where 0/0 def = 1 return F Algorithm 3 Value of a Simple Function. 1: Input: instance g of the simple function interface, strings z and y of length n 2: k ←g.start() 3: for i ←1 to n do 4: k ←g.transition(k,z,i,y[i]) 5: return g.a(k,z) the family B = (Bk)k∈K is a partition of Ω. It is always possible to express any simple function in this way, but in general there may be an exponential increase in the size of K when the family B is required to be a partition. However for the special cases we consider here this problem does not arise. 3.4 The Simple Function Trick In general, what we will call the simple function trick amounts to representing the simple function g whose expectation we want to compute by: 1. a finite index set K (perhaps implicit), 2. a deterministic finite state classifier h : Ω→K, 3. and a vector of coefficients (ak)k∈K. In practice, this means instantiating an interface with three methods: the start and transition function of the transducer which computes h′ (and from which h can be derived), and an accessor method for the coefficients a. Algorithm 2 shows the F-score instance. Any simple function g expressed as an instance of this interface can then be evaluated very simply as g(x) = ah(x). This is shown in Algorithm 3. Evaluating E[g] is also straightforward: Compose the DFA h with the probability model p and use an algebraic path algorithm to compute the total probability mass P(Bk) for each final state k of the resulting automaton. If p factors into independent components as required by (5), the composition is greatly simAlgorithm 4 Expectation of a Simple Function. 1: Input: instance g of the simple function interface, string z and probability vector p of length n 2: M ←Map() 3: M[g.start()] ←1 4: for i ←1 to n do 5: N ←Map() 6: for (k,P) ∈M do 7: // transition on yi = 0 8: k0 ←g.transition(k,z,i,0) 9: if k0 /∈N then 10: N[k0] ←0 11: N[k0] ←N[k0]+P×(1−p[i]) 12: // transition on yi = 1 13: k1 ←g.transition(k,z,i,1) 14: if k1 /∈N then 15: N[k1] ←0 16: N[k1] ←N[k1]+P× p[i] 17: M ←N 18: E ←0 19: for (k,P) ∈M do 20: E ←E +g.a(k,z)×P 21: return E plified. 
If p incorporates label history (higher-order Markov assumption), nothing changes in principle, though the following algorithm assumes for simplicity that the stronger assumption is in effect. Algorithm 4 expands the following composed automaton, represented implicitly: the finite-state transducer h′ specified as part of the simple function object g is composed on the left with the string z (yielding h) and on the right with the probability model p. The outer loop variable i is an index into z and hence a state in the automaton that accepts z; the variable k keeps track of the states of the automaton implemented by g; and the probability model has a single state by assumption, which does not need to be represented explicitly. Exploring the states in order of increasing i puts them in topological order, which means that the algebraic path problem can be solved in time linear in the size of the composed automaton. The maps M and N keep track of the algebraic distance from the start state to each intermediate state. On termination of the first outer loop (lines 4–17), the map M contains the final states together with their distances. The algebraic distance of a final state k is now equal to P(Bk), so the expected value E can be computed in the second loop (lines 18–20) as suggested by (8). When the utility function interface g is instantiated as in Algorithm 2 to represent the F-score, the runtime of Algorithm 4 is cubic in n, with very small 740 constants.3 The first main loop iterates over n. The inner loop iterates over the states expanded at iteration i, of which there are O(i2) many when dealing with the F-score. The second main loop iterates over the final states, whose number is quadratic in n in this case. The overall cubic runtime of the first loop dominates the computation. 3.5 Other Utility Functions With other functions g the runtime of Algorithm 4 will depend on the asymptotic size of the index set K. If there are asymptotically as many intermediate states at any point as there are final states, then the general asymptotic runtime is O(n|K|). Many loss/utility functions are subsumed by the present framework. Zero–one loss is trivial: the automaton has two states (success, failure); it starts and remains in the success state as long as the symbols read on both tapes match; on the first mismatch it transitions to, and remains in, the failure state. Hamming (1950) distance is similar to zero–one loss, but counts the number of mismatches (bounded by n), whereas zero–one loss only counts up to a threshold of one. A more interesting case is given by the Pk-score (Beeferman et al., 1999) and its generalizations, which moves a sliding window of size k over a pair of label sequences (z,y) and counts the number of windows which contain a segment boundary on one of the sequences but not the other. To compute its expectation in our framework, all we have to do is express the sliding window mechanism as an automaton, which can be done very naturally (see the proofof-concept implementation for further details). 4 Faster Inexact Computations Because the exact computation of the expected Fscore by Algorithm 4 requires cubic time, the overall runtime of Algorithm 1 (the decoder) is quartic.4 3A tight upper bound on the total number of states of the composed automaton in the worst case is j 1 12n3 + 5 8n2 + 17 12n+1 k . 4It is possible to speed up the decoding algorithm in absolute terms, though not asymptotically, by exploiting the fact that it explores very similar hypotheses in sequence. 
Algorithm 4 can be modified to store and return all of its intermediate map datastructures. This modified algorithm then requires cubic space instead of quadratic space. This additional storage cost pays off when the algorithm is called a second time, with its formal parameter z bound to a string that differs from the one of the Faster decoding can be achieved by modifying Algorithm 4 to compute an approximation (in fact, a lower bound) of the expected F-score.5 This is done by introducing an additional parameter L which limits the number of intermediate states that get expanded. Instead of iterating over all states and their associated probabilities (inner loop starting at line 6), one iterates over the top L states only. We require that L ≥1 for this to be meaningful. Before entering the inner loop the entries of the map M are expanded and, using the linear time selection algorithm, the top L entries are selected. Because each state that gets expanded in the inner loop has out-degree 2, the new state map N will contain at most 2L states. This means that we have an additional loop invariant: the size of M is always less than or equal to 2L. Therefore the selection algorithm runs in time O(L), and so does the abridged inner loop, as well as the second outer loop. The overall runtime of this modified algorithm is therefore O(nL). If L is a constant function, the inexact computation of the expected F-score runs in linear time and the overall decoding algorithm in quadratic time. In particular if L = 1 the approximate expected F-score is equal to the F-score of the MAP hypothesis, and the modified inference algorithm reduces to a variant of Viterbi decoding. If L is a linear function of n, the overall decoding algorithm runs in cubic time. We experimentally compared the exact quartictime decoding algorithm with the approximate decoding algorithm for L = 2n and for L = 1. We computed the absolute difference between the expected F-score of the optimal hypothesis (as found by the exact algorithm) and the expected F-score of the winning hypothesis found by the approximate decoding algorithm. For different sequence lengths n ∈{1,...,50} we performed 10 runs of the different decoding algorithms on randomly generated probability vectors p, where each pi was randomly drawn from a continuous uniform distribution on (0,1), or, in a second experiment, from a Beta(1/2,1/2) distribution (to simulate an over-trained classifier). For L = 1 there is a substantial difference of about preceding run in just one position. This means that the map data-structures only need to be recomputed from that position forward. However, this does not lead to an asymptotically faster algorithm in the worst case. 5For error bounds, see the proof-of-concept implementation. 741 0.6 between the expected F-scores of the winning hypothesis computed by the exact algorithm and by the approximate algorithm. Nevertheless the approximate decoding algorithm found the optimal hypothesis more than 99% of the time. This is presumably due to the additional regularization inherent in the discrete maximization of the decoder proper: even though the computed expected F-scores may be far from their exact values, this does not necessarily affect the behavior of the decoder very much, since it only needs to find the maximum among a small number of such scores. The error introduced by the approximation would have to be large enough to disturb the order of the hypotheses examined by the decoder in such a way that the true maximum is reordered. 
This generally does not seem to happen. For L = 2n the computed approximate expected Fscores were indistinguishable from their exact values. Consequently the approximate decoder found the true maximum every time. 5 Conclusion and Related Work We have presented efficient algorithms for maximum expected F-score decoding. Our exact algorithm runs in quartic time, but an approximate cubic-time variant is indistinguishable in practice. A quadratic-time approximation makes very few mistakes and remains practically useful. We have further described a general framework for computing the expectations of certain loss/utility functions. Our method relies on the fact that many functions are sparse, in the sense of having a finite range that is much smaller than their codomain. To evaluate their expectations, we can use the simple function trick and concentrate on their level sets: it suffices to evaluate the probability of those sets/ events. The fact that the commonly used utility functions like the F-score have only polynomially many level sets is sufficient (but not necessary) to ensure that our method is efficient. Because the coefficients ak can be arbitrary (in fact, they can be generalized to be elements of a vector space over the reals), we can deal with functions that go beyond simple counts. Like the methods developed by Allauzen et al. (2003) and Cortes et al. (2003) our technique incorporates finite automata, but uses a direct thresholdcounting technique, rather than a nondeterministic counting technique which relies on path multiplicities. This makes it easy to formulate the simultaneous counting of two distinct quantities, such as our A and T, and to reason about the resulting automata. The method described here is similar in spirit to those of Gao et al. (2006) and Jansche (2005), who discuss maximum expected F-score training of decision trees and logistic regression models. However, the present work is considerably more general in two ways: (1) the expected utility computations presented here are not tied in any way to particular classifiers, but can be used with large classes of probabilistic models; and (2) our framework extends beyond the computation of F-scores, which fall out as a special case, to other loss and utility functions, including the Pk score. More importantly, expected F-score computation as presented here can be exact, if desired, whereas the cited works always use an approximation to the quantities we have called A and T. Acknowledgements Most of this research was conducted while I was affilated with the Center for Computational Learning Systems, Columbia University. I would like to thank my colleagues at Google, in particular Ryan McDonald, as well as two anonymous reviewers for valuable feedback. References Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing language models. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics. Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1–3):177–210. Francisco Casacuberta and Colin de la Higuera. 2000. Computational complexity of problems on probabilistic grammars and transducers. In 5th International Colloquium on Grammatical Inference. Corinna Cortes, Patrick Haffner, and Mehryar Mohri. 2003. Rational kernels. In Advances in Neural Information Processing Systems, volume 15. Sheng Gao, Wen Wu, Chin-Hui Lee, and Tai-Seng Chua. 2006. 
A maximal figure-of-merit (MFoM)-learning approach to robust classifier design for text categorization. ACM Transactions on Information Systems, 24(2):190–218. Also in ICML 2004. Samuel S. Gross, Olga Russakovsky, Chuong B. Do, and Serafim Batzoglou. 2007. Training conditional random fields for maximum labelwise accuracy. In Advances in Neural Information Processing Systems, volume 19. R. W. Hamming. 1950. Error detecting and error correcting codes. The Bell System Technical Journal, 26(2):147–160. Martin Jansche. 2005. Maximum expected F-measure training of logistic regression models. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. 742 Kevin Knight. 1999. Decoding complexity in word-replacement translation models. Computational Linguistics, 25(4):607– 615. Michael C. Mozer, Robert Dodier, Michael D. Colagrosso, C´esar Guerra-Salcedo, and Richard Wolniewicz. 2001. Prodding the ROC curve: Constrained optimization of classifier performance. In Advances in Neural Information Processing Systems, volume 14. David R. Musicant, Vipin Kumar, and Aysel Ozgur. 2003. Optimizing F-measure with support vector machines. In Proceedings of the Sixteenth International Florida Artificial Intelligence Research Society Conference. Jun Suzuki, Erik McDermott, and Hideki Isozaki. 2006. Training conditional random fields with multivariate evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. C. J. van Rijsbergen. 1974. Foundation of evaluation. Journal of Documentation, 30(4):365–373. Appendix A Proof of Theorem 1 The proof of Theorem 1 employs the following lemma: Theorem 2. For fixed n and p, let s,t ∈Sm for some m with 1 ≤m < n. Further assume that s and t differ only in two bits, i and k, in such a way that si = 1, sk = 0; ti = 0, tk = 1; and pi ≥pk. Then E[F(s,·)] ≥E[F(t,·)]. Proof. Express the expected F-score E[F(s,·)] as a sum and split the summation into two parts: ∑ y F(s,y) Pr(y) = ∑ y yi=yk F(s,y) Pr(y) +∑ y yi̸=yk F(s,y) Pr(y). If yi = yk then F(s,y) = F(t,y), for three reasons: the number of ones in s and t is the same (namely m) by assumption; y is constant; and the number of true positives is the same, that is s · y = t · y. The latter holds because s and y agree everywhere except on i and k; if yi = yk = 0, then there are no true positives at i and k; and if yi = yk = 1 then si is a true positive but sk is not, and conversely tk is but ti is not. Therefore ∑ y yi=yk F(s,y) Pr(y) = ∑ y yi=yk F(t,y) Pr(y). (9) Focus on those summands where yi ̸= yk. Specifically group them into pairs (y,z) where y and z are identical except that yi = 1 and yk = 0, but zi = 0 and zk = 1. In other words, the two summations on the right-hand side of the following equality are carried out in parallel: ∑ y yi̸=yk F(s,y) Pr(y) = ∑ y yi=1 yk=0 F(s,y) Pr(y)+ ∑ z zi=0 zk=1 F(s,z) Pr(z). Then, focusing on s first: F(s,y) Pr(y)+F(s,z) Pr(z) = (β +1)(A+1) m+βT Pr(y)+ (β +1)A m+βT Pr(z) = [(A+1)pi (1−pk)+A(1−pi)pk] (β +1) m+βT C = [pi +(pi + pk −2pipk)A−pipk] (β +1) m+βT C = [pi +C0] C1, where A = s·z is the number of true positives between s and z (s and y have an additional true positive at i by construction); T = y·y = z·z is the number of positive labels in y and z (identical by assumption); and C = Pr(y) pi (1−pk) = Pr(z) (1−pi) pk is the probability of y and z evaluated on all positions except for i and k. 
This equality holds because of the zeroth-order Markov assumption (5) imposed on Pr(y). C0 and C1 are constants that allow us to focus on the essential aspects. The situation for t is similar, except for the true positives: F(t,y) Pr(y)+F(t,z) Pr(z) = (β +1)A m+βT Pr(y)+ (β +1)(A+1) m+βT Pr(z) = [A pi (1−pk)+(A+1)(1−pi)pk] (β +1) m+βT C = [pk +(pi + pk −2pipk)A−pipk] (β +1) m+βT C = [pk +C0] C1 where all constants have the same values as above. But pi ≥pk by assumption, pk +C0 ≥0, and C1 ≥0, so we have F(s,y) Pr(y)+F(s,z) Pr(z) = [pi +C0] C1 ≥F(t,y) Pr(y)+F(t,z) Pr(z) = [pk +C0] C1, and therefore ∑ y yi̸=yk F(s,y) Pr(y) ≥∑ y yi̸=yk F(t,y) Pr(y). (10) The theorem follows from equality (9) and inequality (10). Proof of Theorem 1: (∀s ∈Sm) E[F(z(m),·)] ≥E[F(s,·)]. Observe that z(m) ∈Sm by definition (see Section 2.3). For m = 0 and m = n the theorem holds trivially because Sm is a singleton set. In the nontrivial cases, Theorem 2 is applied repeatedly. The string z(m) can be transformed into any other string s ∈Sm by repeatedly clearing a more likely set bit and setting a less likely unset bit. In particular this can be done as follows: First, find the indices where z(m) and s disagree. By construction there must be an even number of such indices; indeed there are equinumerous sets n i z(m) i = 1∧si = 0 o ≈ n j z(m) j = 0∧s j = 1 o . This holds because the total number of ones is fixed and identical in z(m) and s, and so is the total number of zeroes. Next, sort those indices by non-increasing probability and represent them as i1,...,ik and j1,..., jk. Let s0 = z(m). Then let s1 be identical to s0 except that si1 = 0 and s j1 = 1. Form s2,...,sk along the same lines and observe that sk = s by construction. By definition of z(m) it must be the case that pir ≥pjr for all r ∈{1,...,k}. Therefore Theorem 2 applies at every step along the way from z(m) = s0 to sk = s, and so the expected utility is non-increasing along that path. 743
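As a sanity check on the results above, the dynamic-programming expectation can be compared against brute-force enumeration of all 2^n label sequences for small n. The sketch below assumes the expect_f routine sketched earlier for Algorithm 4 and the zeroth-order model (5).

    import itertools, random

    def f_score(z, y, beta=1.0):
        A = sum(1 for zi, yi in zip(z, y) if zi == 1 and yi == 1)   # true positives z.y
        denom = sum(z) + beta * sum(y)
        return 1.0 if denom == 0 else (beta + 1) * A / denom

    def brute_force_expectation(z, p, beta=1.0):
        # sum F(z, y) Pr(y) over all 2^n label sequences under independent marginals p
        E = 0.0
        for y in itertools.product([0, 1], repeat=len(z)):
            pr = 1.0
            for yi, pi in zip(y, p):
                pr *= pi if yi == 1 else (1.0 - pi)
            E += f_score(z, y, beta) * pr
        return E

    random.seed(0)
    p = [random.random() for _ in range(8)]
    z = [1 if pi > 0.5 else 0 for pi in p]
    assert abs(expect_f(z, p) - brute_force_expectation(z, p)) < 1e-9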
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744–751, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics A Fully Bayesian Approach to Unsupervised Part-of-Speech Tagging∗ Sharon Goldwater Department of Linguistics Stanford University [email protected] Thomas L. Griffiths Department of Psychology UC Berkeley tom [email protected] Abstract Unsupervised learning of linguistic structure is a difficult problem. A common approach is to define a generative model and maximize the probability of the hidden structure given the observed data. Typically, this is done using maximum-likelihood estimation (MLE) of the model parameters. We show using part-of-speech tagging that a fully Bayesian approach can greatly improve performance. Rather than estimating a single set of parameters, the Bayesian approach integrates over all possible parameter values. This difference ensures that the learned structure will have high probability over a range of possible parameters, and permits the use of priors favoring the sparse distributions that are typical of natural language. Our model has the structure of a standard trigram HMM, yet its accuracy is closer to that of a state-of-the-art discriminative model (Smith and Eisner, 2005), up to 14 percentage points better than MLE. We find improvements both when training from data alone, and using a tagging dictionary. 1 Introduction Unsupervised learning of linguistic structure is a difficult problem. Recently, several new model-based approaches have improved performance on a variety of tasks (Klein and Manning, 2002; Smith and ∗This work was supported by grants NSF 0631518 and ONR MURI N000140510388. We would also like to thank Noah Smith for providing us with his data sets. Eisner, 2005). Nearly all of these approaches have one aspect in common: the goal of learning is to identify the set of model parameters that maximizes some objective function. Values for the hidden variables in the model are then chosen based on the learned parameterization. Here, we propose a different approach based on Bayesian statistical principles: rather than searching for an optimal set of parameter values, we seek to directly maximize the probability of the hidden variables given the observed data, integrating over all possible parameter values. Using part-of-speech (POS) tagging as an example application, we show that the Bayesian approach provides large performance improvements over maximum-likelihood estimation (MLE) for the same model structure. Two factors can explain the improvement. First, integrating over parameter values leads to greater robustness in the choice of tag sequence, since it must have high probability over a range of parameters. Second, integration permits the use of priors favoring sparse distributions, which are typical of natural language. These kinds of priors can lead to degenerate solutions if the parameters are estimated directly. Before describing our approach in more detail, we briefly review previous work on unsupervised POS tagging. Perhaps the most well-known is that of Merialdo (1994), who used MLE to train a trigram hidden Markov model (HMM). More recent work has shown that improvements can be made by modifying the basic HMM structure (Banko and Moore, 2004), using better smoothing techniques or added constraints (Wang and Schuurmans, 2005), or using a discriminative model rather than an HMM 744 (Smith and Eisner, 2005). 
Non-model-based approaches have also been proposed (Brill (1995); see also discussion in Banko and Moore (2004)). All of this work is really POS disambiguation: learning is strongly constrained by a dictionary listing the allowable tags for each word in the text. Smith and Eisner (2005) also present results using a diluted dictionary, where infrequent words may have any tag. Haghighi and Klein (2006) use a small list of labeled prototypes and no dictionary. A different tradition treats the identification of syntactic classes as a knowledge-free clustering problem. Distributional clustering and dimensionality reduction techniques are typically applied when linguistically meaningful classes are desired (Sch¨utze, 1995; Clark, 2000; Finch et al., 1995); probabilistic models have been used to find classes that can improve smoothing and reduce perplexity (Brown et al., 1992; Saul and Pereira, 1997). Unfortunately, due to a lack of standard and informative evaluation techniques, it is difficult to compare the effectiveness of different clustering methods. In this paper, we hope to unify the problems of POS disambiguation and syntactic clustering by presenting results for conditions ranging from a full tag dictionary to no dictionary at all. We introduce the use of a new information-theoretic criterion, variation of information (Meilˇa, 2002), which can be used to compare a gold standard clustering to the clustering induced from a tagger’s output, regardless of the cluster labels. We also evaluate using tag accuracy when possible. Our system outperforms an HMM trained with MLE on both metrics in all circumstances tested, often by a wide margin. Its accuracy in some cases is close to that of Smith and Eisner’s (2005) discriminative model. Our results show that the Bayesian approach is particularly useful when learning is less constrained, either because less evidence is available (corpus size is small) or because the dictionary contains less information. In the following section, we discuss the motivation for a Bayesian approach and present our model and search procedure. Section 3 gives results illustrating how the parameters of the prior affect results, and Section 4 describes how to infer a good choice of parameters from unlabeled data. Section 5 presents results for a range of corpus sizes and dictionary information, and Section 6 concludes. 2 A Bayesian HMM 2.1 Motivation In model-based approaches to unsupervised language learning, the problem is formulated in terms of identifying latent structure from data. We define a model with parameters θ, some observed variables w (the linguistic input), and some latent variables t (the hidden structure). The goal is to assign appropriate values to the latent variables. Standard approaches do so by selecting values for the model parameters, and then choosing the most probable variable assignment based on those parameters. For example, maximum-likelihood estimation (MLE) seeks parameters ˆθ such that ˆθ = argmax θ P(w|θ), (1) where P(w|θ) = P t P(w, t|θ). Sometimes, a non-uniform prior distribution over θ is introduced, in which case ˆθ is the maximum a posteriori (MAP) solution for θ: ˆθ = argmax θ P(w|θ)P(θ). (2) The values of the latent variables are then taken to be those that maximize P(t|w, ˆθ). In contrast, the Bayesian approach we advocate in this paper seeks to identify a distribution over latent variables directly, without ever fixing particular values for the model parameters. 
The distribution over latent variables given the observed data is obtained by integrating over all possible values of θ: P(t|w) = Z P(t|w, θ)P(θ|w)dθ. (3) This distribution can be used in various ways, including choosing the MAP assignment to the latent variables, or estimating expected values for them. To see why integrating over possible parameter values can be useful when inducing latent structure, consider the following example. We are given a coin, which may be biased (t = 1) or fair (t = 0), each with probability .5. Let θ be the probability of heads. If the coin is biased, we assume a uniform distribution over θ, otherwise θ = .5. We observe w, the outcomes of 10 coin flips, and we wish to determine whether the coin is biased (i.e. the value of 745 t). Assume that we have a uniform prior on θ, with p(θ) = 1 for all θ ∈[0, 1]. First, we apply the standard methodology of finding the MAP estimate for θ and then selecting the value of t that maximizes P(t|w, ˆθ). In this case, an elementary calculation shows that the MAP estimate is ˆθ = nH/10, where nH is the number of heads in w (likewise, nT is the number of tails). Consequently, P(t|w, ˆθ) favors t = 1 for any sequence that does not contain exactly five heads, and assigns equal probability to t = 1 and t = 0 for any sequence that does contain exactly five heads — a counterintuitive result. In contrast, using some standard results in Bayesian analysis we can show that applying Equation 3 yields P(t = 1|w) = 1/  1 + 11! nH!nT !210  (4) which is significantly less than .5 when nH = 5, and only favors t = 1 for sequences where nH ≥8 or nH ≤2. This intuitively sensible prediction results from the fact that the Bayesian approach is sensitive to the robustness of a choice of t to the value of θ, as illustrated in Figure 1. Even though a sequence with nH = 6 yields a MAP estimate of ˆθ = 0.6 (Figure 1 (a)), P(t = 1|w, θ) is only greater than 0.5 for a small range of θ around ˆθ (Figure 1 (b)), meaning that the choice of t = 1 is not very robust to variation in θ. In contrast, a sequence with nH = 8 favors t = 1 for a wide range of θ around ˆθ. By integrating over θ, Equation 3 takes into account the consequences of possible variation in θ. Another advantage of integrating over θ is that it permits the use of linguistically appropriate priors. In many linguistic models, including HMMs, the distributions over variables are multinomial. For a multinomial with parameters θ = (θ1, . . . , θK), a natural choice of prior is the K-dimensional Dirichlet distribution, which is conjugate to the multinomial.1 For simplicity, we initially assume that all K parameters (also known as hyperparameters) of the Dirichlet distribution are equal to β, i.e. the Dirichlet is symmetric. The value of β determines which parameters θ will have high probability: when β = 1, all parameter values are equally likely; when β > 1, multinomials that are closer to uniform are 1A prior is conjugate to a distribution if the posterior has the same form as the prior. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 θ P( θ | w ) 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.5 1 θ P( t = 1 | w, θ ) w = HHTHTTHHTH w = HHTHHHTHHH w = HHTHTTHHTH w = HHTHHHTHHH (a) (b) Figure 1: The Bayesian approach to estimating the value of a latent variable, t, from observed data, w, chooses a value of t robust to uncertainty in θ. (a) Posterior distribution on θ given w. (b) Probability that t = 1 given w and θ as a function of θ. 
preferred; and when β < 1, high probability is assigned to sparse multinomials, where one or more parameters are at or near 0. Typically, linguistic structures are characterized by sparse distributions (e.g., POS tags are followed with high probability by only a few other tags, and have highly skewed output distributions). Consequently, it makes sense to use a Dirichlet prior with β < 1. However, as noted by Johnson et al. (2007), this choice of β leads to difficulties with MAP estimation. For a sequence of draws x = (x1, . . . , xn) from a multinomial distribution θ with observed counts n1, . . . , nK, a symmetric Dirichlet(β) prior over θ yields the MAP estimate θk = nk+β−1 n+K(β−1). When β ≥1, standard MLE techniques such as EM can be used to find the MAP estimate simply by adding “pseudocounts” of size β −1 to each of the expected counts nk at each iteration. However, when β < 1, the values of θ that set one or more of the θk equal to 0 can have infinitely high posterior probability, meaning that MAP estimation can yield degenerate solutions. If, instead of estimating θ, we integrate over all possible values, we no longer encounter such difficulties. Instead, the probability that outcome xi takes value k given previous outcomes x−i = (x1, . . . , xi−1) is P(k|x−i, β) = Z P(k|θ)P(θ|x−i, β) dθ = nk + β i −1 + Kβ (5) 746 where nk is the number of times k occurred in x−i. See MacKay and Peto (1995) for a derivation. 2.2 Model Definition Our model has the structure of a standard trigram HMM, with the addition of symmetric Dirichlet priors over the transition and output distributions: ti|ti−1 = t, ti−2 = t′, τ (t,t′) ∼Mult(τ (t,t′)) wi|ti = t, ω(t) ∼Mult(ω(t)) τ (t,t′)|α ∼Dirichlet(α) ω(t)|β ∼Dirichlet(β) where ti and wi are the ith tag and word. We assume that sentence boundaries are marked with a distinguished tag. For a model with T possible tags, each of the transition distributions τ (t,t′) has T components, and each of the output distributions ω(t) has Wt components, where Wt is the number of word types that are permissible outputs for tag t. We will use τ and ω to refer to the entire transition and output parameter sets. This model assumes that the prior over state transitions is the same for all histories, and the prior over output distributions is the same for all states. We relax the latter assumption in Section 4. Under this model, Equation 5 gives us P(ti|t−i, α) = n(ti−2,ti−1,ti) + α n(ti−2,ti−1) + Tα (6) P(wi|ti, t−i, w−i, β) = n(ti,wi) + β n(ti) + Wtiβ (7) where n(ti−2,ti−1,ti) and n(ti,wi) are the number of occurrences of the trigram (ti−2, ti−1, ti) and the tag-word pair (ti, wi) in the i −1 previously generated tags and words. Note that, by integrating out the parameters τ and ω, we induce dependencies between the variables in the model. The probability of generating a particular trigram tag sequence (likewise, output) depends on the number of times that sequence (output) has been generated previously. Importantly, trigrams (and outputs) remain exchangeable: the probability of a set of trigrams (outputs) is the same regardless of the order in which it was generated. The property of exchangeability is crucial to the inference algorithm we describe next. 2.3 Inference To perform inference in our model, we use Gibbs sampling (Geman and Geman, 1984), a stochastic procedure that produces samples from the posterior distribution P(t|w, α, β) ∝P(w|t, β)P(t|α). 
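Concretely, the collapsed predictive probabilities in Equations 6 and 7 are ratios of counts and hyperparameters. The short sketch below is ours (the count dictionaries and argument names are illustrative rather than taken from any released implementation), and it assumes the counts already exclude the position being resampled.

    def p_tag(t, prev1, prev2, trigram_counts, bigram_counts, T, alpha):
        # Eq. 6: P(ti = t | ti-2 = prev2, ti-1 = prev1, t_-i, alpha), with tau integrated out
        return (trigram_counts.get((prev2, prev1, t), 0) + alpha) / \
               (bigram_counts.get((prev2, prev1), 0) + T * alpha)

    def p_word(w, t, emission_counts, tag_counts, W, beta):
        # Eq. 7: P(wi = w | ti = t, t_-i, w_-i, beta), with omega integrated out;
        # W[t] is the number of word types permissible for tag t
        return (emission_counts.get((t, w), 0) + beta) / \
               (tag_counts.get(t, 0) + W[t] * beta)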
We initialize the tags at random, then iteratively resample each tag according to its conditional distribution given the current values of all other tags. Exchangeability allows us to treat the current counts of the other tag trigrams and outputs as “previous” observations. The only complication is that resampling a tag changes the identity of three trigrams at once, and we must account for this in computing its conditional distribution. The sampling distribution for ti is given in Figure 2. In Bayesian statistical inference, multiple samples from the posterior are often used in order to obtain statistics such as the expected values of model variables. For POS tagging, estimates based on multiple samples might be useful if we were interested in, for example, the probability that two words have the same tag. However, computing such probabilities across all pairs of words does not necessarily lead to a consistent clustering, and the result would be difficult to evaluate. Using a single sample makes standard evaluation methods possible, but yields suboptimal results because the value for each tag is sampled from a distribution, and some tags will be assigned low-probability values. Our solution is to treat the Gibbs sampler as a stochastic search procedure with the goal of identifying the MAP tag sequence. This can be done using tempering (annealing), where a temperature of φ is equivalent to raising the probabilities in the sampling distribution to the power of 1 φ. As φ approaches 0, even a single sample will provide a good MAP estimate. 3 Fixed Hyperparameter Experiments 3.1 Method Our initial experiments follow in the tradition begun by Merialdo (1994), using a tag dictionary to constrain the possible parts of speech allowed for each word. (This also fixes Wt, the number of possible words for tag t.) The dictionary was constructed by listing, for each word, all tags found for that word in the entire WSJ treebank. For the experiments in this section, we used a 24,000-word subset of the tree747 P(ti|t−i, w, α, β) ∝ n(ti,wi) + β nti + Wtiβ · n(ti−2,ti−1,ti) + α n(ti−2,ti−1) + Tα · n(ti−1,ti,ti+1) + I(ti−2 = ti−1 = ti = ti+1) + α n(ti−1,ti) + I(ti−2 = ti−1 = ti) + Tα ·n(ti,ti+1,ti+2) + I(ti−2 = ti = ti+2, ti−1 = ti+1) + I(ti−1 = ti = ti+1 = ti+2) + α n(ti,ti+1) + I(ti−2 = ti, ti−1 = ti+1) + I(ti−1 = ti = ti+1) + Tα Figure 2: Conditional distribution for ti. Here, t−i refers to the current values of all tags except for ti, I(.) is a function that takes on the value 1 when its argument is true and 0 otherwise, and all counts nx are with respect to the tag trigrams and tag-word pairs in (t−i, w−i). bank as our unlabeled training corpus. 54.5% of the tokens in this corpus have at least two possible tags, with the average number of tags per token being 2.3. We varied the values of the hyperparameters α and β and evaluated overall tagging accuracy. For comparison with our Bayesian HMM (BHMM) in this and following sections, we also present results from the Viterbi decoding of an HMM trained using MLE by running EM to convergence (MLHMM). Where direct comparison is possible, we list the scores reported by Smith and Eisner (2005) for their conditional random field model trained using contrastive estimation (CRF/CE).2 For all experiments, we ran our Gibbs sampling algorithm for 20,000 iterations over the entire data set. The algorithm was initialized with a random tag assignment and a temperature of 2, and the temperature was gradually decreased to .08. 
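The tempered resampling step can be sketched as follows, using the p_tag and p_word helpers given above. This is our own simplification: it relies on a hypothetical counts object whose trigram, bigram, emission, and tag totals exclude position i, it assumes boundary padding of the tag sequence, and for readability it drops the small indicator corrections in Figure 2 that arise when the three affected trigrams overlap.

    import random

    def resample_tag(i, tags, words, tagset, counts, T, W, alpha, beta, temperature):
        # one annealed Gibbs update for position i; temperature phi raises scores to 1/phi
        scores = []
        for t in tagset:
            s = p_word(words[i], t, counts.emission, counts.tag, W, beta)
            s *= p_tag(t, tags[i-1], tags[i-2], counts.trigram, counts.bigram, T, alpha)
            s *= p_tag(tags[i+1], t, tags[i-1], counts.trigram, counts.bigram, T, alpha)
            s *= p_tag(tags[i+2], tags[i+1], t, counts.trigram, counts.bigram, T, alpha)
            scores.append(s ** (1.0 / temperature))
        r = random.random() * sum(scores)
        for t, s in zip(tagset, scores):
            r -= s
            if r <= 0:
                return t
        return tagset[-1]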
Since our inference procedure is stochastic, our reported results are an average over 5 independent runs. Results from our model for a range of hyperparameters are presented in Table 1. With the best choice of hyperparameters (α = .003, β = 1), we achieve average tagging accuracy of 86.8%. This far surpasses the MLHMM performance of 74.5%, and is closer to the 90.1% accuracy of CRF/CE on the same data set using oracle parameter selection. The effects of α, which determines the probabil2Results of CRF/CE depend on the set of features used and the contrast neighborhood. In all cases, we list the best score reported for any contrast neighborhood using trigram (but no spelling) features. To ensure proper comparison, all corpora used in our experiments consist of the same randomized sets of sentences used by Smith and Eisner. Note that training on sets of contiguous sentences from the beginning of the treebank consistently improves our results, often by 1-2 percentage points or more. MLHMM scores show less difference between randomized and contiguous corpora. Value Value of β of α .001 .003 .01 .03 .1 .3 1.0 .001 85.0 85.7 86.1 86.0 86.2 86.5 86.6 .003 85.5 85.5 85.8 86.6 86.7 86.7 86.8 .01 85.3 85.5 85.6 85.9 86.4 86.4 86.2 .03 85.9 85.8 86.1 86.2 86.6 86.8 86.4 .1 85.2 85.0 85.2 85.1 84.9 85.5 84.9 .3 84.4 84.4 84.6 84.4 84.5 85.7 85.3 1.0 83.1 83.0 83.2 83.3 83.5 83.7 83.9 Table 1: Percentage of words tagged correctly by BHMM as a function of the hyperparameters α and β. Results are averaged over 5 runs on the 24k corpus with full tag dictionary. Standard deviations in most cases are less than .5. ity of the transition distributions, are stronger than the effects of β, which determines the probability of the output distributions. The optimal value of .003 for α reflects the fact that the true transition probability matrix for this corpus is indeed sparse. As α grows larger, the model prefers more uniform transition probabilities, which causes it to perform worse. Although the true output distributions tend to be sparse as well, the level of sparseness depends on the tag (consider function words vs. content words in particular). Therefore, a value of β that accurately reflects the most probable output distributions for some tags may be a poor choice for other tags. This leads to the smaller effect of β, and suggests that performance might be improved by selecting a different β for each tag, as we do in the next section. A final point worth noting is that even when α = β = 1 (i.e., the Dirichlet priors exert no influence) the BHMM still performs much better than the MLHMM. This result underscores the importance of integrating over model parameters: the BHMM identifies a sequence of tags that have high proba748 bility over a range of parameter values, rather than choosing tags based on the single best set of parameters. The improved results of the BHMM demonstrate that selecting a sequence that is robust to variations in the parameters leads to better performance. 4 Hyperparameter Inference In our initial experiments, we experimented with different fixed values of the hyperparameters and reported results based on their optimal values. However, choosing hyperparameters in this way is timeconsuming at best and impossible at worst, if there is no gold standard available. Luckily, the Bayesian approach allows us to automatically select values for the hyperparameters by treating them as additional variables in the model. 
We augment the model with priors over the hyperparameters (here, we assume an improper uniform prior), and use a single Metropolis-Hastings update (Gilks et al., 1996) to resample the value of each hyperparameter after each iteration of the Gibbs sampler. Informally, to update the value of hyperparameter α, we sample a proposed new value α′ from a normal distribution with µ = α and σ = .1α. The probability of accepting the new value depends on the ratio between P(t|w, α) and P(t|w, α′) and a term correcting for the asymmetric proposal distribution. Performing inference on the hyperparameters allows us to relax the assumption that every tag has the same prior on its output distribution. In the experiments reported in the following section, we used two different versions of our model. The first version (BHMM1) uses a single value of β for all word classes (as above); the second version (BHMM2) uses a separate βj for each tag class j. 5 Inferred Hyperparameter Experiments 5.1 Varying corpus size In this set of experiments, we used the full tag dictionary (as above), but performed inference on the hyperparameters. Following Smith and Eisner (2005), we trained on four different corpora, consisting of the first 12k, 24k, 48k, and 96k words of the WSJ corpus. For all corpora, the percentage of ambiguous tokens is 54%-55% and the average number of tags per token is 2.3. Table 2 shows results for the various models and a random baseline (averaged Corpus size Accuracy 12k 24k 48k 96k random 64.8 64.6 64.6 64.6 MLHMM 71.3 74.5 76.7 78.3 CRF/CE 86.2 88.6 88.4 89.4 BHMM1 85.8 85.2 83.6 85.0 BHMM2 85.8 84.4 85.7 85.8 σ < .7 .2 .6 .2 Table 2: Percentage of words tagged correctly by the various models on different sized corpora. BHMM1 and BHMM2 use hyperparameter inference; CRF/CE uses parameter selection based on an unlabeled development set. Standard deviations (σ) for the BHMM results fell below those shown for each corpus size. over 5 random tag assignments). Hyperparameter inference leads to slightly lower scores than are obtained by oracle hyperparameter selection, but both versions of BHMM are still far superior to MLHMM for all corpus sizes. Not surprisingly, the advantages of BHMM are most pronounced on the smallest corpus: the effects of parameter integration and sensible priors are stronger when less evidence is available from the input. In the limit as corpus size goes to infinity, the BHMM and MLHMM will make identical predictions. 5.2 Varying dictionary knowledge In unsupervised learning, it is not always reasonable to assume that a large tag dictionary is available. To determine the effects of reduced or absent dictionary information, we ran a set of experiments inspired by those of Smith and Eisner (2005). First, we collapsed the set of 45 treebank tags onto a smaller set of 17 (the same set used by Smith and Eisner). We created a full tag dictionary for this set of tags from the entire treebank, and also created several reduced dictionaries. Each reduced dictionary contains the tag information only for words that appear at least d times in the training corpus (the 24k corpus, for these experiments). All other words are fully ambiguous between all 17 classes. We ran tests with d = 1, 2, 3, 5, 10, and ∞(i.e., knowledge-free syntactic clustering). 
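The reduced dictionaries are easy to reproduce from token counts. In the sketch below (variable names ours), full_dict maps each word type to its treebank tag set, and d = float('inf') recovers the knowledge-free setting in which every word is ambiguous among all 17 tags.

    from collections import Counter

    def reduced_dictionary(corpus_tokens, full_dict, all_tags, d):
        # words seen at least d times keep their dictionary tags; the rest become fully ambiguous
        freq = Counter(corpus_tokens)
        return {w: (full_dict.get(w, set(all_tags)) if freq[w] >= d else set(all_tags))
                for w in freq}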
With standard accuracy measures, it is difficult to 749 Value of d Accuracy 1 2 3 5 10 ∞ random 69.6 56.7 51.0 45.2 38.6 MLHMM 83.2 70.6 65.5 59.0 50.9 CRF/CE 90.4 77.0 71.7 BHMM1 86.0 76.4 71.0 64.3 58.0 BHMM2 87.3 79.6 65.0 59.2 49.7 σ < .2 .8 .6 .3 1.4 VI random 2.65 3.96 4.38 4.75 5.13 7.29 MLHMM 1.13 2.51 3.00 3.41 3.89 6.50 BHMM1 1.09 2.44 2.82 3.19 3.47 4.30 BHMM2 1.04 1.78 2.31 2.49 2.97 4.04 σ < .02 .03 .04 .03 .07 .17 Corpus stats % ambig. 49.0 61.3 66.3 70.9 75.8 100 tags/token 1.9 4.4 5.5 6.8 8.3 17 Table 3: Percentage of words tagged correctly and variation of information between clusterings induced by the assigned and gold standard tags as the amount of information in the dictionary is varied. Standard deviations (σ) for the BHMM results fell below those shown in each column. The percentage of ambiguous tokens and average number of tags per token for each value of d is also shown. evaluate the quality of a syntactic clustering when no dictionary is used, since cluster names are interchangeable. We therefore introduce another evaluation measure for these experiments, a distance metric on clusterings known as variation of information (Meilˇa, 2002). The variation of information (VI) between two clusterings C (the gold standard) and C′ (the found clustering) of a set of data points is a sum of the amount of information lost in moving from C to C′, and the amount that must be gained. It is defined in terms of entropy H and mutual information I: V I(C, C′) = H(C) + H(C′) −2I(C, C′). Even when accuracy can be measured, VI may be more informative: two different tag assignments may have the same accuracy but different VI with respect to the gold standard if the errors in one assignment are less consistent than those in the other. Table 3 gives the results for this set of experiments. One or both versions of BHMM outperform MLHMM in terms of tag accuracy for all values of d, although the differences are not as great as in earlier experiments. The differences in VI are more striking, particularly as the amount of dictionary information is reduced. When ambiguity is greater, both versions of BHMM show less confusion with respect to the true tags than does MLHMM, and BHMM2 performs the best in all circumstances. The confusion matrices in Figure 3 provide a more intuitive picture of the very different sorts of clusterings produced by MLHMM and BHMM2 when no tag dictionary is available. Similar differences hold to a lesser degree when a partial dictionary is provided. With MLHMM, different tokens of the same word type are usually assigned to the same cluster, but types are assigned to clusters more or less at random, and all clusters have approximately the same number of types (542 on average, with a standard deviation of 174). The clusters found by BHMM2 tend to be more coherent and more variable in size: in the 5 runs of BHMM2, the average number of types per cluster ranged from 436 to 465 (i.e., tokens of the same word are spread over fewer clusters than in MLHMM), with a standard deviation between 460 and 674. Determiners, prepositions, the possessive marker, and various kinds of punctuation are mostly clustered coherently. Nouns are spread over a few clusters, partly due to a distinction found between common and proper nouns. Likewise, modal verbs and the copula are mostly separated from other verbs. Errors are often sensible: adjectives and nouns are frequently confused, as are verbs and adverbs. 
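Variation of information is simple to compute from two token-aligned labelings. The following sketch (ours) estimates the required entropies and mutual information from empirical label frequencies, using natural logarithms.

    import math
    from collections import Counter

    def variation_of_information(gold, found):
        # VI(C, C') = H(C) + H(C') - 2 I(C, C'), with C and C' given as parallel label sequences
        n = len(gold)
        pc = Counter(gold)
        pk = Counter(found)
        joint = Counter(zip(gold, found))
        H_c = -sum(c / n * math.log(c / n) for c in pc.values())
        H_k = -sum(c / n * math.log(c / n) for c in pk.values())
        I = sum(c / n * math.log((c / n) / ((pc[a] / n) * (pk[b] / n)))
                for (a, b), c in joint.items())
        return H_c + H_k - 2 * I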
The kinds of results produced by BHMM1 and BHMM2 are more similar to each other than to the results of MLHMM, but the differences are still informative. Recall that BHMM1 learns a single value for β that is used for all output distributions, while BHMM2 learns separate hyperparameters for each cluster. This leads to different treatments of difficult-to-classify low-frequency items. In BHMM1, these items tend to be spread evenly among all clusters, so that all clusters have similarly sparse output distributions. In BHMM2, the system creates one or two clusters consisting entirely of very infrequent items, where the priors on these clusters strongly prefer uniform outputs, and all other clusters prefer extremely sparse outputs (and are more coherent than in BHMM1). This explains the difference in VI between the two systems, as well as the higher accuracy of BHMM1 for d ≥3: the single β discourages placing lowfrequency items in their own cluster, so they are more likely to be clustered with items that have sim750 1 2 3 4 5 6 7 8 9 1011121314151617 N INPUNC ADJ V DET PREP ENDPUNC VBG CONJ VBN ADV TO WH PRT POS LPUNC RPUNC (a) BHMM2 Found Tags True Tags 1 2 3 4 5 6 7 8 9 1011121314151617 N INPUNC ADJ V DET PREP ENDPUNC VBG CONJ VBN ADV TO WH PRT POS LPUNC RPUNC (b) MLHMM Found Tags True Tags Figure 3: Confusion matrices for the dictionary-free clusterings found by (a) BHMM2 and (b) MLHMM. ilar transition probabilities. The problem of junk clusters in BHMM2 might be alleviated by using a non-uniform prior over the hyperparameters to encourage some degree of sparsity in all clusters. 6 Conclusion In this paper, we have demonstrated that, for a standard trigram HMM, taking a Bayesian approach to POS tagging dramatically improves performance over maximum-likelihood estimation. Integrating over possible parameter values leads to more robust solutions and allows the use of priors favoring sparse distributions. The Bayesian approach is particularly helpful when learning is less constrained, either because less data is available or because dictionary information is limited or absent. For knowledgefree clustering, our approach can also be extended through the use of infinite models so that the number of clusters need not be specified in advance. We hope that our success with POS tagging will inspire further research into Bayesian methods for other natural language learning tasks. References M. Banko and R. Moore. 2004. A study of unsupervised partof-speech tagging. In Proceedings of COLING ’04. E. Brill. 1995. Unsupervised learning of disambiguation rules for part of speech tagging. In Proceedings of the 3rd Workshop on Very Large Corpora, pages 1–13. P. Brown, V. Della Pietra, V. de Souza, J. Lai, and R. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18:467–479. A. Clark. 2000. Inducing syntactic categories by context distribution clustering. In Proceedings of the Conference on Natural Language Learning (CONLL). S. Finch, N. Chater, and M. Redington. 1995. Acquiring syntactic information from distributional statistics. In J. In Levy, D. Bairaktaris, J. Bullinaria, and P. Cairns, editors, Connectionist Models of Memory and Language. UCL Press, London. S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741. W.R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. 1996. Markov Chain Monte Carlo in Practice. 
Chapman and Hall, Suffolk. A. Haghighi and D. Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of HLT-NAACL. M. Johnson, T. Griffiths, and S. Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. D. Klein and C. Manning. 2002. A generative constituentcontext model for improved grammar induction. In Proceedings of the ACL. D. MacKay and L. Bauman Peto. 1995. A hierarchical Dirichlet language model. Natural Language Engineering, 1:289– 307. M. Meilˇa. 2002. Comparing clusterings. Technical Report 418, University of Washington Statistics Department. B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–172. L. Saul and F. Pereira. 1997. Aggregate and mixed-order markov models for statistical language processing. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing (EMNLP). H. Sch¨utze. 1995. Distributional part-of-speech tagging. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). N. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL. I. Wang and D. Schuurmans. 2005. Improved estimation for unsupervised part-of-speech tagging. In Proceedings of the IEEE International Conference on Natural Language Processing and Knowledge Engineering (IEEE NLP-KE). 751
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 752–759, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Computationally Efficient M-Estimation of Log-Linear Structure Models∗ Noah A. Smith and Douglas L. Vail and John D. Lafferty School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 USA {nasmith,dvail2,lafferty}@cs.cmu.edu Abstract We describe a new loss function, due to Jeon and Lin (2006), for estimating structured log-linear models on arbitrary features. The loss function can be seen as a (generative) alternative to maximum likelihood estimation with an interesting information-theoretic interpretation, and it is statistically consistent. It is substantially faster than maximum (conditional) likelihood estimation of conditional random fields (Lafferty et al., 2001; an order of magnitude or more). We compare its performance and training time to an HMM, a CRF, an MEMM, and pseudolikelihood on a shallow parsing task. These experiments help tease apart the contributions of rich features and discriminative training, which are shown to be more than additive. 1 Introduction Log-linear models are a very popular tool in natural language processing, and are often lauded for permitting the use of “arbitrary” and “correlated” features of the data by a model. Users of log-linear models know, however, that this claim requires some qualification: any feature is permitted in principle, but training log-linear models (and decoding under them) is tractable only when the model’s independence assumptions permit efficient inference procedures. For example, in the original conditional random fields (Lafferty et al., 2001), features were con∗This work was supported by NSF grant IIS-0427206 and the DARPA CALO project. The authors are grateful for feedback from David Smith and from three anonymous ACL reviewers, and helpful discussions with Charles Sutton. fined to locally-factored indicators on label bigrams and label unigrams (with any of the observation). Even in cases where inference in log-linear models is tractable, it requires the computation of a partition function. More formally, a log-linear model for random variables X and Y over X, Y defines: pw(x, y) = ew⊤f(x,y) P x′,y′∈X×Yew⊤f(x′,y′) = ew⊤f(x,y) Z(w) (1) where f : X×Y →Rm is the feature vector-function and w ∈Rm is a weight vector that parameterizes the model. In NLP, we rarely train this model by maximizing likelihood, because the partition function Z(w) is expensive to compute exactly. Z(w) can be approximated (e.g., using Gibbs sampling; Rosenfeld, 1997). In this paper, we propose the use of a new loss function that is computationally efficient and statistically consistent (§2). Notably, repeated inference is not required during estimation. This loss function can be seen as a case of M-estimation1 that was originally developed by Jeon and Lin (2006) for nonparametric density estimation. This paper gives an information-theoretic motivation that helps elucidate the objective function (§3), shows how to apply the new estimator to structured models used in NLP (§4), and compares it to a state-of-the-art noun phrase chunker (§5). We discuss implications and future directions in §6. 2 Loss Function As before, let X be a random variable over a highdimensional space X, and similarly Y over Y. X 1“M-estimation” is a generalization of MLE (van der Vaart, 1998); space does not permit a full discussion. 
752 might be the set of all sentences in a language, and Y the set of all POS tag sequences or the set of all parse trees. Let q0 be a “base” distribution that is our first approximation to the true distribution over X × Y. HMMs and PCFGs, while less accurate as predictors than the rich-featured log-linear models we desire, might be used to define q0. The model we estimate will have the form pw(x, y) ∝q0(x, y)ew⊤f(x,y) (2) Notice that pw(x, y) = 0 whenever q0(x, y) = 0. It is therefore important for q0 to be smooth, since the support of pw is a subset of the support of q0. Notice that we have not written the partition function explicitly in Eq. 2; it will never need to be computed during estimation or inference. The unnormalized distribution will suffice for all computation. Suppose we have observations ⟨x1, x2, ..., xn⟩ with annotations ⟨y1, ..., yn⟩. The (unregularized) loss function, due to Jeon and Lin (2006), is2 ℓ(w) = 1 n n X i=1 e−w⊤f(xi,yi) + X x,y q0(x, y)  w⊤f(x, y)  (3) = 1 n n X i=1 e−w⊤f(xi,yi) + w⊤X x,y q0(x, y)f(x, y) = 1 n n X i=1 e−w⊤f(xi,yi) + w⊤Eq0(X,Y )[f(X, Y )] | {z } constant(w) Before explaining this objective, we point out some attractive computational properties. Notice that f(xi, yi) (for all i) and the expectations of the feature vectors under q0 are constant with respect to w. Computing the function in Eq. 3, then, requires no inference and no dynamic programming, only O(nm) floating-point operations. 3 An Interpretation Here we give an account of the loss function as a way of “cleaning up” a mediocre model (q0). We 2We give only the discrete version here, because it is most relevant for an ACL audience. Also, our linear function w⊤f(xi, yi) is a simple case; another kernel (for example) could be used. show that this estimate aims to model a presumed perturbation that created q0, by minimizing the KL divergence between q0 and a perturbed version of the sample distribution ˜p. Consider Eq. 2. Given a training dataset, maximizing likelihood under this model means assuming that there is some w∗for which the true distribution p∗(x, y) = pw∗(x, y). Carrying out MLE, however, would require computing the partition function P x′,y′ q0(x′, y′)ew⊤f(x′,y′), which is in general intractable. Rearranging Eq. 2 slightly, we have q0(x, y) ∝p∗(x, y)e−w⊤f(x,y) (4) If q0 is close to the true model, e−w⊤f(x,y) should be close to 1 and w close to zero. In the sequence model setting, for example, if q0 is an HMM that explains the data well, then the additional features are not necessary (equivalently, their weights should be 0). If q0 is imperfect, we might wish to make it more powerful by adding features (e.g., f), but q0 nonetheless provides a reasonable “starting point” for defining our model. So instead of maximizing likelihood, we will minimize the KL divergence between the two sides of Eq. 4.3 DKL(q0(x, y)∥p∗(x, y)e−w⊤f(x,y)) (5) = X x,y q0(x, y) log q0(x, y) p∗(x, y)e−w⊤f(x,y) (6) + X x,y p∗(x, y)e−w⊤f(x,y) − X x,y q0(x, y) = −H(q0) + X x,y p∗(x, y)e−w⊤f(x,y) −1 − X x,y q0(x, y) log  p∗(x, y)e−w⊤f(x,y) = constant(w) + X x,y p∗(x, y)e−w⊤f(x,y) + X x,y q0(x, y)  w⊤f(x, y)  3The KL divergence here is generalized for unnormalized distributions, following O’Sullivan (1998): DKL(u∥v) = P j “ uj log uj vj −uj + vj ” where u and v are nonnegative vectors defining unnormalized distributions over the same event space. Note that when P j uj = P j vj = 1, this formula takes on the more familiar form, as −P j uj and P j vj cancel. 
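Evaluating ℓ(w) in Eq. 3 requires nothing beyond the stored training feature vectors and the base-model feature expectation; no partition function or dynamic program appears. A minimal NumPy sketch (array names are ours):

    import numpy as np

    def m_loss(w, F_train, E_q0):
        # Eq. 3: (1/n) sum_i exp(-w . f(x_i, y_i)) + w . E_q0[f(X, Y)]
        # F_train is an n-by-m matrix whose rows are f(x_i, y_i); E_q0 has length m
        return np.exp(-F_train.dot(w)).mean() + w.dot(E_q0)

Both terms cost O(nm) floating-point operations, matching the observation above.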
753 If we replace p∗with the empirical (sampled) distribution ˜p, minimizing the above KL divergence is equivalent to minimizing ℓ(w) (Eq. 3). It may be helpful to think of −w as the parameters of a process that “damage” the true model p∗, producing q0, and the estimation of w as learning to undo that damage. In the remainder of the paper, we use the general term “M-estimation” to refer to the minimization of ℓ(w) as a way of training a log-linear model. 4 Algorithms for Models of Sequences and Trees We discuss here some implementation aspects of the application of M-estimation to NLP models. 4.1 Expectations under q0 The base distribution q0 enters into implementation in two places: Eq0(X,Y )[f(X, Y )] must be computed for training, and q0(x, y) is a factor in the model used in decoding. If q0 is a familiar stochastic grammar, such as an HMM or a PCFG, or any generative model from which sampling is straightforward, it is possible to estimate the feature expectations by sampling from the model directly; for sample ⟨(˜xi, ˜yi)⟩s i=1 let: Eq0(X,Y )[fj(X, Y )] ←1 s s X i=1 fj(˜xi, ˜yi) (7) If the feature space is sparse under q0 (likely in most settings), then smoothing may be required. If q0 is an HMM or a PCFG, the expectation vector can be computed exactly by solving a system of equations. We will see that for the common cases where features are local substructures, inference is straightforward. We briefly describe how this can be done for a bigram HMM and a PCFG. 4.1.1 Expectations under an HMM Let S be the state space of a first-order HMM. If s = ⟨s1, ..., sk⟩is a state sequence and x = ⟨x1, ..., xk⟩is an observed sequence of emissions, then: q0(s, x) = k Y i=1 tsi−1(si)esi(xi) ! tsk(stop) (8) (Assume s0 = start is the single, silent, initial state, and stop is the only stop state, also silent. We assume no other states are silent.) The first step is to compute path-sums into and out of each state, under the HMM q0. To do this, define is as the total weight of state-prefixes (beginning in start) ending in s and os as the total weight of statesuffixes beginning in s (and ending in stop):4 istart = ostop = 1 (9) ∀s ∈S \ {start, stop} : is = ∞ X n=1 X ⟨s1,...,sn⟩∈Sn n Y i=1 tsi−1(si) ! tsn(s) = X s′∈S is′ts′(s) (10) os = ∞ X n=1 X ⟨s1,...,sn⟩∈Sn ts(s1) n Y i=2 tsi−1(si) ! = X s′∈S ts(s′)os′ (11) This amounts to two linear systems given the transition probabilities t, where the variables are i• and o•, respectively. In each system there are |S| variables and |S| equations. Once solved, expected counts of transition and emission features under q0 are straightforward: Eq0[s transit → s′] = ists(s′)os′ Eq0[s emit →x] = ises(x)os Given i and o, Eq0 can be computed for other features in the model in a similar way, provided they correspond to contiguous substructures. For example, a feature f627 that counts occurrences of “Si = s and Xi+3 = x” has expected value Eq0[f627] = X s′,s′′,s′′′∈S ists(s′)ts′(s′′)ts′′(s′′′)es′′′(x)os′′′ (12) Non-contiguous substructure features with “gaps” require summing over paths between any pair of states. This is straightforward (we omit it for space), but of course using such features (while interesting) would complicate inference in decoding. 4It may be helpful to think of i as forward probabilities, but for the observation set Y∗rather than a particular observation y. o are like backward probabilities. Note that, because some counted prefixes are prefixes of others, i can be > 1; similarly for o. 
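The path-sum systems (9)–(11) can be solved directly with a linear solver. The sketch below is ours (it uses hypothetical dictionary-shaped inputs, covers only the non-silent states, and omits features involving start and stop for brevity), and it returns the expected transition and emission counts under q0.

    import numpy as np

    def hmm_feature_expectations(states, trans, emit):
        # trans[(s, s2)] = t_s(s2), including entries for 'start' and 'stop';
        # emit[s][x] = e_s(x) for the non-silent states
        n = len(states)
        # i_s = t_start(s) + sum_{s'} i_{s'} t_{s'}(s)   (Eq. 10)
        A = np.array([[trans.get((s2, s), 0.0) for s2 in states] for s in states])
        b = np.array([trans.get(('start', s), 0.0) for s in states])
        i_sum = np.linalg.solve(np.eye(n) - A, b)
        # o_s = t_s(stop) + sum_{s'} t_s(s') o_{s'}      (Eq. 11)
        B = np.array([[trans.get((s, s2), 0.0) for s2 in states] for s in states])
        c = np.array([trans.get((s, 'stop'), 0.0) for s in states])
        o_sum = np.linalg.solve(np.eye(n) - B, c)
        # expected counts: E[s -> s2] = i_s t_s(s2) o_{s2},  E[s emits x] = i_s e_s(x) o_s
        E_trans = {(s, s2): i_sum[j] * trans.get((s, s2), 0.0) * o_sum[k]
                   for j, s in enumerate(states) for k, s2 in enumerate(states)}
        E_emit = {(s, x): i_sum[j] * e * o_sum[j]
                  for j, s in enumerate(states) for x, e in emit[s].items()}
        return E_trans, E_emit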
754 4.1.2 Expectations under a PCFG In general, the expectations for a PCFG require solving a quadratic system of equations. The analogy this time is to inside and outside probabilities. Let the PCFG have nonterminal set N, start symbol S ∈N, terminal alphabet Σ, and rules of the form A →B C and A →x. (We assume Chomsky normal form for clarity; the generalization is straightforward.) Let rA(B C) and rA(x) denote the probabilities of nonterminal A rewriting to child sequence B C or x, respectively. Then ∀A ∈N: oA = X B∈N X C∈N oBiC[rB(A C) + rB(C A)] +  1 if A = S 0 otherwise iA = X B∈N X C∈N rA(B C)iBiC + X x rA(x)ix ox = X A∈N oArA(x), ∀x ∈Σ ix = 1, ∀x ∈Σ In most practical applications, the PCFG will be “tight” (Booth and Thompson, 1973; Chi and Geman, 1998). Informally, this means that the probability of a derivation rooted in S failing to terminate is zero. If that is the case, then iA = 1 for all A ∈N, and the system becomes linear (see also Corazza and Satta, 2006).5 If tightness is not guaranteed, iterative propagation of weights, following Stolcke (1995), works well in our experience for solving the quadratic system, and converges quickly. As in the HMM case, expected counts of arbitrary contiguous tree substructures can be computed as products of probabilities of rules appearing within the structure, factoring in the o value of the structure’s root and the i values of the structure’s leaves. 4.2 Optimization To carry out M-estimation, we minimize the function ℓ(w) in Eq. 3. To apply gradient descent or a quasi-Newton numerical optimization method,6 it suffices to specify the fixed quantities 5The same is true for HMMs: if the probability of nontermination is zero, then for all s ∈S, os = 1. 6We use L-BFGS (Liu and Nocedal, 1989) as implemented in the R language’s optim function. f(xi, yi) (for all i ∈{1, 2, ..., n}) and the vector Eq0(X,Y )[f(X, Y )]. The gradient is:7 ∂ℓ ∂wj = − n X i=1 e−w⊤f(xi,yi)fj(xi, yi) + Eq0[fj] (13) The Hessian (matrix of second derivatives) can also be computed with relative ease, though the space requirement could become prohibitive. For problems where m is relatively small, this would allow the use of second-order optimization methods that are likely to converge in fewer iterations. It is easy to see that Eq. 3 is convex in w. Therefore, convergence to a global optimum is guaranteed and does not depend on the initializing value of w. 4.3 Regularization Regularization is a technique from pattern recognition that aims to keep parameters (like w) from overfitting the training data. It is crucial to the performance of most statistical learning algorithms, and our experiments show it has a major effect on the success of the M-estimator. Here we use a quadratic regularizer, minimizing ℓ(w) + (w⊤w)/2c. Note that this is also convex and differentiable if c > 0. The value of c can be chosen using a tuning dataset. This regularizer aims to keep each coordinate of w close to zero. In the M-estimator, regularization is particularly important when the expectation of some feature fj, Eq0(X,Y )[fj(X, Y )] is equal to zero. This can happen either due to sampling error (fj simply failed to appear with a positive value in the finite sample) or because q0 assigns zero probability mass to any x ∈X, y ∈Y where fj(x, y) ̸= 0. Without regularization, the weight wj will tend toward ±∞, but the quadratic penalty term will prevent that undesirable tendency. 
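Putting Sections 4.2 and 4.3 together, the regularized objective and its gradient take only a few lines once f(x_i, y_i) and E_{q0}[f] have been precomputed. The sketch below is an illustration of ours, not the authors' code: it uses SciPy's L-BFGS in place of R's optim and assumes a dense feature matrix; the 1/n factor on the first term follows Eq. 3.

```python
import numpy as np
from scipy.optimize import minimize

def m_estimation_objective(F, e_q0, c=1.0):
    """Regularized M-estimation loss and gradient, given F (n x m, rows are
    f(x_i, y_i)) and e_q0 (length m, E_{q0}[f]), both precomputed."""
    n = F.shape[0]

    def loss(w):
        expo = np.exp(-F @ w)                      # e^{-w . f(x_i, y_i)}
        return expo.mean() + w @ e_q0 + (w @ w) / (2.0 * c)

    def grad(w):
        expo = np.exp(-F @ w)
        return -(F.T @ expo) / n + e_q0 + w / c

    return loss, grad

# usage sketch:
#   loss, grad = m_estimation_objective(F, e_q0, c=1.0)
#   res = minimize(loss, np.zeros(F.shape[1]), jac=grad,
#                  method="L-BFGS-B", options={"maxiter": 100})
```

Since the objective is convex, the choice of starting point is immaterial.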
Just as the addition of a quadratic regularizer to likelihood can be interpreted as a zero-mean Gaussian prior on w (Chen and Rosenfeld, 2000), it can be so-interpreted here. The regularized objective is analogous to maximum a posteriori estimation. 5 Shallow Parsing We compared M-estimation to a hidden Markov model and other training methods on English noun 7Taking the limit as n →∞and setting equal to zero, we have the basis for a proof that ℓ(w) is statistically consistent. 755 HMM CRF MEMM PL M-est. 2 sec. 64:18 3:40 9:35 1:04 Figure 1: Wall time (hours:minutes) of training the HMM and 100 L-BFGS iterations for each of the extended-feature models on a 2.2 GHz Sun Opteron with 8GB RAM. See discussion in text for details. phrase (NP) chunking. The dataset comes from the Conference on Natural Language Learning (CoNLL) 2000 shallow parsing shared task (Tjong Kim Sang and Buchholz, 2000); we apply the model to NP chunking only. About 900 sentences were reserved for tuning regularization parameters. Baseline/q0 In this experiment, the simple baseline is a second-order HMM. The states correspond to {B, I, O} labels, denoting the beginning, inside, and outside of noun phrases. Each state emits a tag and a word (independent of each other given the state). We replaced the first occurrence of every tag and of every word in the training data with an OOV symbol, giving a fixed tag vocabulary of 46 and a fixed word vocabulary of 9,014. Transition distributions were estimated using MLE, and tag- and wordemission distributions were estimated using add-1 smoothing. The HMM had 27,213 parameters. This HMM achieves 86.3% F1-measure on the development dataset (slightly better than the lowest-scoring of the CoNLL-2000 systems). Heavier or weaker smoothing (an order of magnitude difference in addλ) of the emission distributions had very little effect. Note that HMM training time is negligible (roughly 2 seconds); it requires counting events, smoothing the counts, and normalizing. Extended Feature Set Sha and Pereira (2003) applied a conditional random field to the NP chunking task, achieving excellent results. To improve the performance of the HMM and test different estimation methods, we use Sha and Pereira’s feature templates, which include subsequences of labels, tags, and words of different lengths and offsets. Here, we use only features observed to occur at least once in the training data, accounting (in addition to our OOV treatment) for the slight drop in performance prec. recall F1 HMM features: HMM 85.60 88.68 87.11 CRF 90.40 89.56 89.98 PL 80.31 81.37 80.84 MEMM 86.03 88.62 87.31 M-est. 85.57 88.65 87.08 extended features: CRF 94.04 93.68 93.86 PL 91.88 91.79 91.83 MEMM 90.89 92.15 91.51 M-est. 88.88 90.42 89.64 Table 1: NP chunking accuracy on test data using different training methods. The effects of discriminative training (CRF) and extended feature sets (lower section) are more than additive. compared to what Sha and Pereira report. There are 630,862 such features. Using the original HMM feature set and the extended feature set, we trained four models that can use arbitrary features: conditional random fields (a near-replication of Sha and Pereira, 2003), maximum entropy Markov models (MEMMs; McCallum et al., 2000), pseudolikelihood (Besag, 1975; see Toutanova et al., 2003, for a tagging application), and our M-estimator with the HMM as q0. 
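As a concrete illustration of the baseline just described, the OOV replacement and add-1 emission smoothing can be sketched as follows (simplified to first order; the data layout, names, and helper functions are our own, not the authors' implementation).

```python
from collections import Counter, defaultdict

def estimate_baseline_hmm(sentences):
    """Sketch of the baseline q0 of Sec. 5, simplified to first order.
    `sentences` is a list of [(word, tag, label), ...] with labels in
    {B, I, O}; this layout is our assumption.  The first occurrence of every
    word and tag is replaced by an OOV symbol; transitions use MLE, emissions
    use add-1 smoothing."""
    seen_w, seen_t = set(), set()
    trans = defaultdict(Counter)      # trans[prev][label]
    emit_w = defaultdict(Counter)     # emit_w[label][word]
    emit_t = defaultdict(Counter)     # emit_t[label][tag]
    for sent in sentences:
        prev = "start"
        for word, tag, label in sent:
            if word not in seen_w:
                seen_w.add(word)
                word = "<OOV>"
            if tag not in seen_t:
                seen_t.add(tag)
                tag = "<OOV>"
            trans[prev][label] += 1
            emit_w[label][word] += 1
            emit_t[label][tag] += 1
            prev = label
        trans[prev]["stop"] += 1

    def p_trans(prev, label):                    # MLE transition probability
        return trans[prev][label] / sum(trans[prev].values())

    def p_emit(table, vocab_size, label, sym):   # add-1 smoothed emission
        return (table[label][sym] + 1) / (sum(table[label].values()) + vocab_size)

    return trans, emit_w, emit_t, p_trans, p_emit
```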
CRFs and MEMMs are discriminatively-trained to maximize conditional likelihood (the former is parameterized using a sequence-normalized log-linear model, the latter using a locally-normalized loglinear model). Pseudolikelihood is a consistent estimator for the joint likelihood, like our M-estimator; its objective function is a sum of log probabilities. In each case, we trained seven models for each feature set with quadratic regularizers c ∈ [10−1, 10], spaced at equal intervals in the log-scale, plus an unregularized model (c = ∞). As discussed in §4.2, we trained using L-BFGS; training continued until relative improvement fell within machine precision or 100 iterations, whichever came first. After training, the value of c is chosen that maximizes F1 accuracy on the tuning set. Runtime Fig. 1 compares the wall time of carefully-timed training runs on a dedicated server. Note that Dyna, a high-level programming language, was used for dynamic programming (in the CRF) 756 and summations (MEMM and pseudolikelihood). The runtime overhead incurred by using Dyna is estimated as a slow-down factor of 3–5 against a handtuned implementation (Eisner et al., 2005), though the slow-down factor is almost certainly less for the MEMM and pseudolikelihood. All training (except the HMM, of course) was done using the R language implementation of L-BFGS. In our implementation, the M-estimator trained substantially faster than the other methods. Of the 64 minutes required to train the M-estimator, 6 minutes were spent precomputing Eq0(X,Y )[f(X, Y )] (this need not be repeated if the regularization settings are altered). Accuracy Tab. 1 shows how NP chunking accuracy compares among the models. With HMM features, the M-estimator is about the same as the HMM and MEMM (better than PL and worse than the CRF). With extended features, the M-estimator lags behind the slower methods, but performs about the same as the HMM-featured CRF (2.5–3 points over the HMM). The full-featured CRF improves performance by another 4 points. Performance as a function of training set size is plotted in Fig. 2; the different methods behave relatively similarly as the training data are reduced. Fig. 3 plots accuracy (on tuning data) against training time, for a variety of training dataset sizes and regularizaton settings, under different training methods. This illustrates the training-time/accuracy tradeoff: the Mestimator, when well-regularized, is considerably faster than the other methods, at the expense of accuracy. This experiment gives some insight into the relative importance of extended features versus estimation methods. The M-estimated model is, like the maximum likelihood-estimated HMM, a generative model. Unlike the HMM, it uses a much larger set of features–the same features that the discriminative models use. Our result supports the claim that good features are necessary for state-of-the-art performance, but so is good training. 5.1 Effect of the Base Distribution We now turn to the question of the base distribution q0: how accurate does it need to be? Given that the M-estimator is consistent, it should be clear that, in the limit and assuming that our model family p is correct, q0 should not matter (except in its support). q0 selection prec. recall F1 HMM F1, prec. 88.88 90.42 89.64 l.u. F1 72.91 57.56 64.33 prec. 84.40 37.68 52.10 emp. F1 84.38 89.43 86.83 Table 2: NP chunking accuracy on test data using different base models for the M-estimator. 
The “selection” column shows which accuracy measure was optimized when selecting the hyperparameter c. In NLP, we deal with finite datasets and imperfect models, so q0 may have practical importance. We next consider an alternative q0 that is far less powerful; in fact, it is uninformative about the variable to be predicted. Let x be a sequence of words, t be a sequence of part-of-speech tags, and y be a sequence of {B, I, O}-labels. The model is: ql.u. 0 (x, t, y) def =   |x| Y i=1 puni(xi)puni(ti) 1 Nyi−1   1 Ny|x| (14) where Ny is the number of labels (including stop) that can follow y (3 for O and y0 = start, 4 for B and I). puni are the tag and word unigram distributions, estimated using MLE with add-1 smoothing. This model ignores temporal effects. On its own, this model achieves 0% precision and recall, because it labels every word O (the most likely label sequence is O|x|). We call this model l.u. (“locally uniform”). Tab. 2 shows that, while an M-estimate that uses ql.u. 0 is not nearly as accurate as the one based on an HMM, the M-estimator did manage to improve considerably over ql.u. 0 . So the M-estimator is far better than nothing, and in this case, tuning c to maximize precision (rather than F1) led to an Mestimated model with precision competitive with the HMM. We point this out because, in applications involving very large corpora, a model with good precision may be useful even if its coverage is mediocre. Another question about q0 is whether it should take into account all possible values of the input variables (here, x and t), or only those seen in training. Consider the following model: qemp 0 (x, t, y) def = q0(y | x, t)˜p(x, t) (15) Here we use the empirical distribution over tag/word 757 70 75 80 85 90 95 100 0 2000 4000 6000 8000 10000 training set size F1 CRF PL MEMM M-est. HMM Figure 2: Learning curves for different estimators; all of these estimators except the HMM use the extended feature set. 65 70 75 80 85 90 95 100 0 1 10 100 1000 10000 100000 1000000 training time (seconds) F1 M-est. CRF HMM PL MEMM Figure 3: Accuracy (tuning data) vs. training time. The M-estimator trains notably faster. The points in a given curve correspond to different regularization strengths (c); M-estimation is more damaged by weak than strong regularization. sequences, and the HMM to define the distribution over label sequences. The expectations Eqemp 0 (X)[f(X)] can be computed using dynamic programming over the training data (recall that this only needs to be done once, cf. the CRF). Strictly speaking, qemp 0 assigns probability zero to any sequence not seen in training, but we can ignore the ˜p marginal at decoding time. As shown in Tab. 2, this model slightly improves recall over the HMM, but damages precision; the gains of M-estimation seen with the HMM as q0, are not reproduced. From these experiments, we conclude that the M-estimator might perform considerably better, given a better q0. 5.2 Input-Only Features We present briefly one negative result. Noting that the M-estimator is a modeling technique that estimates a distribution over both input and output variables (i.e., a generative model), we wanted a way to make the objective more discriminative while still maintaining the computational property that inference (of any kind) not be required during the inner loop of iterative training. The idea is to reduce the predictive burden on the feature weights for f. When designing a CRF, features that do not depend on the output variable (here, y) are unnecessary. 
They cannot distinguish between competing labelings for an input, and so their weights will be set to zero during conditional estimation. The feature vector function in Sha and Pereira’s chunking model does not include such features. In M-estimation, however, adding such “input-only” features might permit better modeling of the data and, more importantly, use the original features primarily for the discriminative task of modeling y given the input. Adding unigram, bigram, and trigram features to f for M-estimation resulted in a very small decrease in performance: selecting for F1, this model achieves 89.33 F1 on test data. 6 Discussion M-estimation fills a gap in the plethora of training techniques that are available for NLP models today: it permits arbitrary features (like socalled conditional “maximum entropy” models such as CRFs) but estimates a generative model (permitting, among other things, classification on input variables and meaningful combination with other models). It is similar in spirit to pseudolikelihood (Besag, 1975), to which it compares favorably on training runtime and unfavorably on accuracy. Further, since no inference is required during training, any features really are permitted, so long as their expected values can be estimated under the base model q0. Indeed, M-estimation is considerably easier to implement than conditional estimation. Both require feature counts from the training data; M-estimation replaces repeated calculation and differentiation of normalizing constants with inference or sampling (once) under a base model. So 758 the M-estimator is much faster to train. Generative and discriminative models have been compared and discussed a great deal (Ng and Jordan, 2002), including for NLP models (Johnson, 2001; Klein and Manning, 2002). Sutton and McCallum (2005) present approximate methods that keep a discriminative objective while avoiding full inference. We see M-estimation as a particularly promising method in settings where performance depends on high-dimensional, highly-correlated feature spaces, where the desired features “large,” making discriminative training too time-consuming—a compelling example is machine translation. Further, in some settings a locally-normalized conditional log-linear model (like an MEMM) may be difficult to design; our estimator avoids normalization altogether.8 The M-estimator may also be useful as a tool in designing and selecting feature combinations, since more trials can be run in less time. After selecting a feature set under M-estimation, discriminative training can be applied on that set. The M-estimator might also serve as an initializer to discriminative models, perhaps reducing the number of times inference must be performed—this could be particularly useful in very-large data scenarios. In future work we hope to explore the use of the M-estimator within hidden variable learning, such as the ExpectationMaximization algorithm (Dempster et al., 1977). 7 Conclusions We have presented a new loss function for generatively estimating the parameters of log-linear models. The M-estimator is fast to train, requiring no repeated, expensive calculation of normalization terms. It was shown to improve performance on a shallow parsing task over a baseline (generative) HMM, but it is not competitive with the state-ofthe-art. 
Our sequence modeling experiments support the widely accepted claim that discriminative, richfeature modeling works as well as it does not just because of rich features in the model, but also because of discriminative training. Our technique fills an important gap in the spectrum of learning methods for NLP models and shows promise for application when discriminative methods are too expensive. 8Note that MEMMs also require local partition functions— which may be expensive—to be computed at decoding time. References J. E. Besag. 1975. Statistical analysis of non-lattice data. The Statistician, 24:179–195. T. L. Booth and R. A. Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, 22(5):442–450. S. Chen and R. Rosenfeld. 2000. A survey of smoothing techniques for ME models. IEEE Transactions on Speech and Audio Processing, 8(1):37–50. Z. Chi and S. Geman. 1998. Estimation of probabilistic context-free grammars. Computational Linguistics, 24(2):299–305. A. Corazza and G. Satta. 2006. Cross-entropy and estimation of probabilistic context-free grammars. In Proc. of HLTNAACL. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–38. J. Eisner, E. Goldlust, and N. A. Smith. 2005. Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language. In Proc. of HLT-EMNLP. Y. Jeon and Y. Lin. 2006. An effective method for highdimensional log-density ANOVA estimation, with application to nonparametric graphical model building. Statistical Sinica, 16:353–374. M. Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proc. of ACL. D. Klein and C. D. Manning. 2002. Conditional structure vs. conditional estimation in NLP models. In Proc. of EMNLP. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML. D. C. Liu and J. Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Math. Programming, 45:503–528. A. McCallum, D. Freitag, and F. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proc. of ICML. A. Ng and M. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and na¨ıve Bayes. In NIPS 14. J. A. O’Sullivan. 1998. Alternating minimization algorithms: from Blahut-Armijo to Expectation-Maximization. In A. Vardy, editor, Codes, Curves, and Signals: Common Threads in Communications, pages 173–192. Kluwer. R. Rosenfeld. 1997. A whole sentence maximum entropy language model. In Proc. of ASRU. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of HLT-NAACL. A. Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–201. C. Sutton and A. McCallum. 2005. Piecewise training of undirected models. In Proc. of UAI. E. F. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. of CoNLL. K. Toutanova, D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc. of HLT-NAACL. A. W. van der Vaart. 1998. Asymptotic Statistics. Cambridge University Press. 759
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 760–767, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Guided Learning for Bidirectional Sequence Classification Libin Shen BBN Technologies Cambridge, MA 02138, USA [email protected] Giorgio Satta Dept. of Inf. Eng’g. University of Padua I-35131 Padova, Italy [email protected] Aravind K. Joshi Department of CIS University of Pennsylvania Philadelphia, PA 19104, USA [email protected] Abstract In this paper, we propose guided learning, a new learning framework for bidirectional sequence classification. The tasks of learning the order of inference and training the local classifier are dynamically incorporated into a single Perceptron like learning algorithm. We apply this novel learning algorithm to POS tagging. It obtains an error rate of 2.67% on the standard PTB test set, which represents 3.3% relative error reduction over the previous best result on the same data set, while using fewer features. 1 Introduction Many NLP tasks can be modeled as a sequence classification problem, such as POS tagging, chunking, and incremental parsing. A traditional method to solve this problem is to decompose the whole task into a set of individual tasks for each token in the input sequence, and solve these small tasks in a fixed order, usually from left to right. In this way, the output of the previous small tasks can be used as the input of the later tasks. HMM and MaxEnt Markov Model are examples of this method. Lafferty et al. (2001) showed that this approach suffered from the so called label bias problem (Bottou, 1991). They proposed Conditional Random Fields (CRF) as a general solution for sequence classification. CRF models a sequence as an undirected graph, which means that all the individual tasks are solved simultaneously. Taskar et al. (2003) improved the CRF method by employing the large margin method to separate the gold standard sequence labeling from incorrect labellings. However, the complexity of quadratic programming for the large margin approach prevented it from being used in large scale NLP tasks. Collins (2002) proposed a Perceptron like learning algorithm to solve sequence classification in the traditional left-to-right order. This solution does not suffer from the label bias problem. Compared to the undirected methods, the Perceptron like algorithm is faster in training. In this paper, we will improve upon Collins’ algorithm by introducing a bidirectional searching strategy, so as to effectively utilize more context information at little extra cost. When a bidirectional strategy is used, the main problem is how to select the order of inference. Tsuruoka and Tsujii (2005) proposed the easiest-first approach which greatly reduced the computation complexity of inference while maintaining the accuracy on labeling. However, the easiest-first approach only serves as a heuristic rule. The order of inference is not incorporated into the training of the MaxEnt classifier for individual labeling. Here, we will propose a novel learning framework, namely guided learning, to integrate classification of individual tokens and inference order selection into a single learning task. We proposed a Perceptron like learning algorithm (Collins and Roark, 2004; Daum´e III and Marcu, 2005) for guided learning. We apply this algorithm to POS tagging, a classic sequence learning problem. 
Our system reports an error rate of 2.67% on the standard PTB test set, a relative 3.3% error reduction of the previous best system (Toutanova et al., 2003) by using fewer features. By using deterministic search, it obtains an error rate of 2.73%, a 5.9% relative error reduction 760 over the previous best deterministic algorithm (Tsuruoka and Tsujii, 2005). The new POS tagger is similar to (Toutanova et al., 2003; Tsuruoka and Tsujii, 2005) in the way that we employ context features. We use a bidirectional search strategy (Woods, 1976; Satta and Stock, 1994), and our algorithm is based on Perceptron learning (Collins, 2002). A unique contribution of our work is on the integration of individual classification and inference order selection, which are learned simultaneously. 2 Guided Learning for Bidirectional Labeling We first present an example of POS tagging to show the idea of bidirectional labeling. Then we present the inference algorithm and the learning algorithm. 2.1 An Example of POS tagging Suppose that we have an input sentence Agatha found that book interesting w1 w2 w3 w4 w5 (Step 0) If we scan from left to right, we may find it difficult to resolve the ambiguity of the label for that, which could be either DT (determiner), or IN (preposition or subordinating conjunction) in the Penn Treebank. However, if we resolve the labels for book and interesting, it would be relatively easy to figure out the correct label for that. Now, we show how bidirectional inference works on this sample. Suppose we use beam search with width of 2, and we use a window of (-2, 2) for context features. For the first step, we enumerate hypotheses for each word. For example, found could have a label VBN or VBD. Suppose that at this point the most favorable action, out of the candidate hypotheses, is the assignment of NN to book, according to the context features defined on words. Then, we resolve the label for book first. We maintain the top two hypotheses as shown below. Here, the second most favorable label for book is VB. NN VB Agatha found that book interesting w1 w2 w3 w4 w5 (Step 1) At the second step, assume the most favorable action is the assignment of label JJ to interesting in the context of NN for book. Then we maintain the top two hypotheses for span book interesting as shown below. The second most favorable label for interesting is still JJ, but in the context of VB for book. NN------JJ VB------JJ Agatha found that book interesting w1 w2 w3 w4 w5 (Step 2) Then, suppose we are most confident for assigning labels VBD and VBN to found, in that order. We get two separated tagged spans as shown below. VBD NN------JJ VBN VB------JJ Agatha found that book interesting w1 w2 w3 w4 w5 (Step 3) In the next step, suppose we are most confident for assigning label DT to that under the context of VBD on the left and NN-JJ on the right side, as shown below (second most favorable action, not discussed here, is also displayed). After tagging w3, two separated spans merge into one, starting from found to interesting. VBD---DT---NN------JJ VBD---IN---NN------JJ Agatha found that book interesting w1 w2 w3 w4 w5 (Step 4) For the last step, we assign label NNP to Agatha, which could be an out-of-vocabulary word, under the context of VBD-DT on the right. NNP---VBD---DT---NN------JJ NNP---VBD---IN---NN------JJ Agatha found that book interesting w1 w2 w3 w4 w5 (Step 5) This simple example has shown the advantage of adopting a flexible search strategy. 
However, it is still unclear how we maintain the hypotheses, how we keep candidates and accepted labels and spans, and how we employ dynamic programming. We will answer these questions in the formal definition of the inference algorithm in the next section. 761 2.2 Inference Algorithm Terminology: Let the input sequence be w1w2 · · · wn. For each token wi, we are expected to assign a label ti ∈T, with T the label set. A subsequence wi · · · wj is called a span, and is denoted [i, j]. Each span p considered by the algorithm is associated with one or more hypotheses, that is, sequences over T having the same length as p. Part of the label sequence of each hypothesis is used as a context for labeling tokens outside the span p. For example, if a tri-gram model is adopted, we use the two labels on the left boundary and the two labels on the right boundary of the hypothesis for labeling outside tokens. The left two labels are called the left interface, and the right two labels are called the right interface. Left and right interfaces have only one label in case of spans of length one. A pair s = (Ileft, Iright) with a left and a right interface is called a state. We partition the hypotheses associated with span p into sets compatible with the same state. In practice, for span p, we use a matrix Mp indexed by states, so that Mp(s), s = (Ileft, Iright), is the set of all hypotheses associated with p that are compatible with Ileft and Iright. For a span p and a state s, we denote the associated top hypothesis as s.T = argmax h∈Mp(s) V (h), where V is the score of a hypothesis (defined in (1) below). Similarly, we denote the top state for p as p.S = argmax s: Mp(s)̸=∅ V (s.T). Therefore, for each span p, we have a top hypothesis p.S.T, whose score is the highest among all the hypotheses for span p. Hypotheses are started and grown by means of labeling actions. For each hypothesis h associated with a span p we maintain its most recent labeling action h.A, involving some token within p, as well as the states h.SL and h.SR that have been used as context by such an action, if any. Note that h.SL and h.SR refer to spans that are subsequences of p. We recursively compute the score of h as V (h) = V (h.SL.T) + V (h.SR.T) + U(h.A), (1) Algorithm 1 Inference Algorithm Require: token sequence w1 · · · wn; Require: beam width B; Require: weight vector w; 1: Initialize P, the set of accepted spans; 2: Initialize Q, the queue of candidate spans; 3: repeat 4: span p′ ←argmaxp∈Q U(p.S.T.A); 5: Update P with p′; 6: Update Q with p′ and P; 7: until (Q = ∅) where U is the score of an action. In other words, the score of an hypothesis is the sum of the score of the most recent action h.A and the scores of the top hypotheses of the context states. The score of an action h.A is computed through a linear function whose weight vector is w, as U(h.A) = w · f(h.A), (2) where f(h.A) is the feature vector of action h.A, which depends on h.SL and h.SR. Algorithm: Algorithm 1 is the inference algorithm. We are given the input sequence and two parameters, beam width B to determine the number of states maintained for each span, and weight vector w used to compute the score of an action. We first initialize the set P of accepted spans with the empty set. Then we initialize the queue Q of candidate spans with span [i, i] for each token wi, and for each t ∈T assigned to wi we set M[i,i]((t, t)) = {i →t}, where i →t represents the hypothesis consisting of a single action which assigns label t to wi. 
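The bookkeeping implied by this terminology is light. The sketch below (names of our own choosing, kept to the single top hypothesis per state rather than the full sets M_p(s)) records each hypothesis's most recent action score and its context hypotheses, so that Eq. 1 can be evaluated recursively, and shows the initialization of Q just described.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One labeling hypothesis for a span; fields mirror h.A, h.SL.T, h.SR.T."""
    labels: tuple                    # label sequence over the span
    action_score: float              # U(h.A) = w . f(h.A)
    left_ctx: "Hypothesis" = None    # top hypothesis of the left context span
    right_ctx: "Hypothesis" = None   # top hypothesis of the right context span

    @property
    def score(self):
        # V(h) = V(h.SL.T) + V(h.SR.T) + U(h.A)   (Eq. 1)
        left = self.left_ctx.score if self.left_ctx else 0.0
        right = self.right_ctx.score if self.right_ctx else 0.0
        return left + right + self.action_score

def init_queue(tokens, tagset, action_score):
    """Q with one span [i, i] per token and M[i,i][(t, t)] = {i -> t}.
    `action_score(i, t)` stands in for w . f of the single-token action; it is
    a placeholder of ours, not a function from the paper."""
    Q = {}
    for i, _ in enumerate(tokens):
        M = {}
        for t in tagset:
            M[(t, t)] = Hypothesis(labels=(t,), action_score=action_score(i, t))
        Q[(i, i)] = M    # here M keeps only the top hypothesis per state
    return Q
```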
This provides the set of starting hypotheses. As for the example Agatha found that book interesting in the previous subsection, we have • P = ∅ • Q = {[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]} Suppose NN and VB are the two possible POS tags for w4 book. We have • M[4,4](NN, NN) = {h441 = 4 →NN} • M[4,4](VB, VB) = {h442 = 4 →VB} The most recent action of hypothesis h441 is to assign NN to w4. According to Equation (2), the score 762 of this action U(h441.A) depends on the features defined on the local context of action. For example, f1001(h441.A) =  1 if t = NN ∧w−1 = that 0 otherwise, where w−1 represents the left word. It should be noted that, for all the features depending on the neighboring tags, the value is always 0, since those tags are still unknown in the step of initialization. Since this operation does not depend on solved tags, we have V (h441) = U(h411.A), according to Equation (1). The core of the algorithm repeatedly selects a candidate span from Q, and uses it to update P and Q, until a span covering the whole sequence is added to P and Q becomes empty. This is explained in detail below. At each step, we remove from Q the span p′ such that the action (not hypothesis) score of its top hypothesis, p′.S.T, is the highest. This represents the labeling action for the next move that we are most confident about. Now we need to update P and Q with the selected span p′. We add p′ to P, and remove from P the spans included in p′, if any. Let S be the set of removed spans. We remove from Q each span which takes one of the spans in S as context, and replace it with a new candidate span taking p′ (and another accepted span) as context. We always maintain B different states for each span. Back to the previous example, after Step 3 is completed, w2 found, w4 book and w5 interesting have been tagged and we have • P = {[2, 2], [4, 5]} • Q = {[1, 2], [2, 5]} There are two candidate spans in Q, each with its associated hypotheses and most recent actions. More specifically, we can either solve w1 based on the context hypotheses for [2, 2], resulting in span [1, 2], or else solve w3 based on the context hypotheses in [2, 2] and [4, 5], resulting in span [2, 5]. The top two states for span [2, 2] are • M[2,2](VBD, VBD) = {h221 = 2 →VBD} • M[2,2](VBN, VBN) = {h222 = 2 →VBN} and the top two states for span [4, 5] are • M[4,5](NN-JJ, NN-JJ) = {h451 = (NN,NN)5 →JJ} • M[4,5](VB-JJ, VB-JJ) = {h452 = (VB,VB)5 →JJ} Here (NN,NN)5 →JJ represents the hypothesis coming from the action of assigning JJ to w5 under the left context state of (NN,NN). (VB,VB)5 →JJ has a similar meaning.1 We first compute the hypotheses resulting from all possible POS tag assignments to w3, under all possible state combinations of the neighboring spans [2, 2] and [4, 5]. Suppose the highest score action consists in the assignment of DT under the left context state (VBD, VBD) and the right context state (NN-JJ, NNJJ). We obtain hypothesis h251 = (VBD,VBD)3 → DT(NN-JJ, NN-JJ) with V (h251) = V ((VBD,VBD).T) + V ((NN-JJ,NN-JJ).T) + U(h251.A) = V (h221) + V (h451) + w · f(h251.A) Here, features for action h251.A may depend on the left tag VBD and right tags NN-JJ, which have been solved before. More details of the feature functions are given in Section 4.2. For example, we can have features like f2002(h251.A) =  1 if t = DT ∧t+2 = JJ 0 otherwise, We maintain the top two states with the highest hypothesis scores, if the beam width is set to two. 
We have • M[2,5](VBD-DT, NN-JJ) = {h251 = (VBD,VBD)3 →DT(NN-JJ,NN-JJ)} • M[2,5](VBD-IN, NN-JJ) = {h252 = (VBD,VBD)3 →IN(NN-JJ,NN-JJ)} Similarly, we compute the top hypotheses and states for span [1, 2]. Suppose now the hypothesis with the highest action score is h251. Then we update P by adding [2, 5] and removing [2, 2] and [4, 5], which are covered by [2, 5]. We also update Q by removing [2, 5] and [1, 2],2 and add new candidate span [1, 5] resulting in • P = {[2, 5]} • Q = {[1, 5]} 1It should be noted that, in these cases, each state contains only one hypothesis. However, if the span is longer than 4 words, there may exist multiple hypotheses for the same state. For example, hypotheses DT-NN-VBD-DT-JJ and DTNN-VBN-DT-JJ have the same left interface DT-NN and right interface DT-JJ. 2Span [1, 2] depends on [2, 2] and [2, 2] has been removed from P. So it is no longer a valid candidate given the accepted spans in P. 763 The algorithm is especially designed in such a way that, at each step, some new span is added to P or else some spans already present in P are extended by some token(s). Furthermore, no pair of overlapping spans is ever found in P, and the number of pairs of overlapping spans that may be found in Q is always bounded by a constant. This means that the algorithm performs at most n iterations, and its running time is therefore O(B2n), that is, linear in the length of the input sequence. 2.3 Learning Algorithm In this section, we propose guided learning, a Perceptron like algorithm, to learn the weight vector w, as shown in Algorithm 2. We use p′.G to represent the gold standard hypothesis on span p′. For each input sequence Xr and the gold standard sequence of labeling Yr, we first initialize P and Q as in the inference algorithm. Then we select the span for the next move as in Algorithm 1. If p′.S.T, the top hypothesis of the selected span p′, is compatible with the gold standard, we update P and Q as in Algorithm 1. Otherwise, we update the weight vector in the Perceptron style, by promoting the features of the gold standard action, and demoting the features of the action of the top hypothesis. Then we re-generate the queue Q with P and the updated weight vector w. Specifically, we first remove all the elements in Q, and then generate hypotheses for all the possible spans based on the context spans in P. Hypothesis scores and action scores are calculated with the updated weight vector w. A special aspect of Algorithm 2 is that we maintain two scores: the score of the action represents the confidence for the next move, and the score of the hypothesis represents the overall quality of a partial result. The selection for the next action directly depends on the score of the action, but not on the score of the hypothesis. On the other hand, the score of the hypothesis is used to maintain top partial results for each span. We briefly describe the soundness of the Guided Learning Algorithm in terms of two aspects. First, in Algorithm 2 weight update is activated whenever there exists an incorrect state s, the action score of whose top hypothesis s.T is higher than that of any state in each span. We demote this action and promote the gold standard action on the same span. Algorithm 2 Guided Learning Algorithm Require: training sequence pairs {(Xr, Yr)}1≤r≤R; Require: beam width B and iterations I; 1: w ←0; 2: for (i ←1; i ≤I; i++) do 3: for (r ←1; r ≤R; r++) do 4: Load sequence Xr and gold labeling Yr. 
5: Initialize P, the set of accepted spans 6: Initialize Q, the queue of candidate spans; 7: repeat 8: p′ ←argmaxp∈Q U(p.S.T.A); 9: if (p′.S.T = p′.G) then 10: Update P with p′; 11: Update Q with p′ and P; 12: else 13: promote(w, f(p′.G.A)); 14: demote(w, f(p′.S.T.A)); 15: Re-generate Q with w and P; 16: end if 17: until (Q = ∅) 18: end for 19: end for However, we do not automatically adopt the gold standard action on this span. Instead, in the next step, the top hypothesis of another span might be selected based on the score of action, which means that it becomes the most favorable action according to the updated weights. As a second aspect, if the action score of a gold standard hypothesis is higher than that of any others, this hypothesis and the corresponding span are guaranteed to be selected at line 8 of Algorithm 2. The reason for this is that the scores of the context hypotheses of a gold standard hypothesis must be no less than those of other hypotheses of the same span. This could be shown recursively with respect to Equation 1, because the context hypotheses of a gold standard hypothesis are also compatible with the gold standard. Furthermore, if we take (xi = f(p′.G.A) −f(p′.S.T.A), yi = +1) as a positive sample, and (xj = f(p′.S.T.A) −f(p′.G.A), yj = −1) as a negative sample, the weight updates at lines 13 764 and 14 are a stochastic approximation of gradient descent that minimizes the squared errors of the misclassified samples (Widrow and Hoff, 1960). What is special with our learning algorithm is the strategy used to select samples for training. In general, this novel learning framework lies between supervised learning and reinforcement learning. Guided learning is more difficult than supervised learning, because we do not know the order of inference. The order is learned automatically, and partial output is in turn used to train the local classifier. Therefore, the order of inference and the local classification are dynamically incorporated in the learning phase. Guided learning is not as hard as reinforcement learning. At each local step in learning, we always know the undesirable labeling actions according to the gold standard, although we do not know which is the most desirable. In this approach, we can easily collect the automatically generated negative samples, and use them in learning. These negative samples are exactly those we will face during inference with the current weight vector. In our experiments, we have used Averaged Perceptron (Collins, 2002; Freund and Schapire, 1999) and Perceptron with margin (Krauth and M´ezard, 1987) to improve performance. 3 Related Works Tsuruoka and Tsujii (2005) proposed a bidirectional POS tagger, in which the order of inference is handled with the easiest-first heuristic. Gim´enez and M`arquez (2004) combined the results of a left-toright scan and a right-to-left scan. In our model, the order of inference is dynamically incorporated into the training of the local classifier. Toutanova et al. (2003) reported a POS tagger based on cyclic dependency network. In their work, the order of inference is fixed as from left to right. In this approach, large beam width is required to maintain the ambiguous hypotheses. In our approach, we can handle tokens that we are most confident about first, so that our system does not need a large beam. As shown in Section 4.2, even deterministic inference shows rather good results. Our guided learning can be modeled as a search algorithm with Perceptron like learning (Daum´e III and Marcu, 2005). 
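Returning to Algorithm 2, its weight-update loop can be written compactly. In the sketch below, init_pq, best_candidate, accept, regenerate and features are placeholders of ours for the span and queue machinery of Algorithm 1, not functions from the paper; w is a sparse feature-weight dictionary, and `gold` is assumed to expose the gold-standard hypothesis p'.G for any span.

```python
def guided_learning_epoch(training_data, w, init_pq, best_candidate,
                          accept, regenerate, features):
    """One pass of Algorithm 2, written over abstract helpers (a sketch)."""
    for x, gold in training_data:
        P, Q = init_pq(x)
        while Q:
            # span whose top hypothesis has the highest ACTION score U(p.S.T.A)
            span, top_hyp = best_candidate(Q, w)
            gold_hyp = gold[span]
            if top_hyp.labels == gold_hyp.labels:      # compatible with gold
                accept(P, Q, span, top_hyp)            # lines 10-11
            else:                                      # lines 13-15
                for feat, val in features(gold_hyp).items():
                    w[feat] = w.get(feat, 0.0) + val   # promote gold action
                for feat, val in features(top_hyp).items():
                    w[feat] = w.get(feat, 0.0) - val   # demote predicted action
                Q = regenerate(P, w)
    return w
```

The averaged-Perceptron and margin variants used in the experiments wrap this same loop.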
However, as far as we know, Data Set Sections Sentences Tokens Training 0-18 38,219 912,344 Develop 19-21 5,527 131,768 Test 22-24 5,462 129,654 Table 1: Data set splits the mechanism of bidirectional search with an online learning algorithm has not been investigated before. In (Daum´e III and Marcu, 2005), as well as other similar works (Collins, 2002; Collins and Roark, 2004; Shen and Joshi, 2005), only left-toright search was employed. Our guided learning algorithm provides more flexibility in search with an automatically learned order. In addition, our treatment of the score of action and the score of hypothesis is unique (see discussion in Section 2.3). Furthermore, compared to the above works, our guided learning algorithm is more aggressive on learning. In (Collins and Roark, 2004; Shen and Joshi, 2005), a search stops if there is no hypothesis compatible with the gold standard in the queue of candidates. In (Daum´e III and Marcu, 2005), the search is resumed after some gold standard compatible hypotheses are inserted into a queue for future expansion, and the weights are updated correspondingly. However, there is no guarantee that the updated weights assign a higher score to those inserted gold standard compatible hypotheses. In our algorithm, the gold standard compatible hypotheses are used for weight update only. As a result, after each sentence is processed, the weight vector can usually successfully predict the gold standard parse. Therefore our learning algorithm is aggressive on weight update. As far as this aspect is concerned, our algorithm is similar to the MIRA algorithm in (Crammer and Singer, 2003). In MIRA, one always knows the correct hypothesis. In our case, we do not know the correct order of operations. So we use our form of weight update to implement aggressive learning. 4 Experiments on POS Tagging 4.1 Settings We apply our guided learning algorithm to POS tagging. We carry out experiments on the standard data set of the Penn Treebank (PTB) (Marcus et al., 1994). Following (Ratnaparkhi, 1996; Collins, 2002; Toutanova et al., 2003; Tsuruoka and Tsujii, 2005), 765 Feature Sets Templates Error% A Ratnaparkhi’s 3.05 B A + [t0, t1], [t0, t−1, t1], [t0, t1, t2] 2.92 C B + [t0, t−2], [t0, t2], [t0, t−2, w0], [t0, t−1, w0], [t0, t1, w0], [t0, t2, w0], [t0, t−2, t−1, w0], [t0, t−1, t1, w0], [t0, t1, t2, w0] 2.84 D C + [t0, w−1, w0], [t0, w1, w0] 2.78 E D + [t0, X = prefix or suffix of w0], 4 < |X| ≤9 2.72 Table 2: Experiments on the development data with beam width of 3 we cut the PTB into the training, development and test sets as shown in Table 1. We use tools provided by CoNLL-2005 3 to extract POS tags from the mrg files of PTB. So the data set is the same as previous work. We use the development set to select features and estimate the number of iterations in training. In our experiments, we enumerate all the POS tags for each word instead of using a dictionary as in (Ratnaparkhi, 1996), since the size of the tag set is tractable and our learning algorithm is efficient enough. 4.2 Results Effect of Features: We first run the experiments to evaluate the effect of features. We use templates to define features. For this set of experiments, we set the beam width B = 3 as a balance between speed and accuracy. The guided learning algorithm usually converges on the development data set in 4-8 iterations over the training data. Table 2 shows the error rate on the development set with different features. 
We first use the same feature set used in (Ratnaparkhi, 1996), which includes a set of prefix, suffix and lexical features, as well as some bi-gram and tri-gram context features. Following (Collins, 2002), we do not distinguish rare words. On set A, Ratnaparkhi’s feature set, our system reports an error rate of 3.05% on the development data set. With set B, we include a few feature templates which are symmetric to those in Ratnaparkhi’s set, but are only available with bidirectional search. With set C, we add more bi-gram and tri-gram features. With set D, we include bi-lexical features. With set E, we use prefixes and suffixes of length up to 9, as in (Toutanova et al., 2003; Tsuruoka and Tsujii, 2005). We obtain 2.72% of error rate. We will use this feature set on our final experiments on the test data. Effect of Search and Learning Strategies: For the second set of experiments, we evaluate the effect of 3http://www.lsi.upc.es/˜srlconll/soft.html, package srlconll1.1.tgz. Search Aggressive? Beam=1 Beam=3 L-to-R Yes 2.94 2.82 L-to-R No 3.24 2.75 Bi-Dir Yes 2.84 2.72 Bi-Dir No does not converge Table 3: Experiments on the development data search methods, learning strategies, and beam width. We use feature set E for this set of experiments. Table 3 shows the error rates on the development data set with both left-to-right (L-to-R) and bidirectional (Bi-Dir) search methods. We also tested both aggressive learning and non-aggressive learning strategies with beam width of 1 and 3. First, with non-aggressive learning on bidirectional search, the error rate does not converge to a comparable number. This is due to the fact that the search space is too large in bidirectional search, if we do not use aggressive learning to constrain the samples for learning. With aggressive learning, the bidirectional approach always shows advantages over left-to-right search. However, the gap is not large. This is due to the fact that the accuracy of POS tagging is very high. As a result, we can always keep the gold-standard tags in the beam even with left-to-right search in training. This can also explain why the performance of leftto-right search with non-aggressive learning is close to bidirectional search if the beam is large enough. However, with beam width = 1, non-aggressive learning over left-to-right search performs much worse, because in this case it is more likely that the gold-standard tag is not in the beam. This set of experiments show that guided learning is more preferable for tasks with higher ambiguities. In our recent work (Shen and Joshi, 2007), we have applied a variant of this algorithm to dependency parsing, and showed significant improvement over left-to-right non-aggressive learning strategy. Comparison: Table 4 shows the comparison with the previous works on the PTB test sections. 766 System Beam Error% (Ratnaparkhi, 1996) 5 3.37 (Tsuruoka and Tsujii, 2005) 1 2.90 (Collins, 2002) 2.89 Guided Learning, feature B 3 2.85 (Tsuruoka and Tsujii, 2005) all 2.85 (Gim´enez and M`arquez, 2004) 2.84 (Toutanova et al., 2003) 2.76 Guided Learning, feature E 1 2.73 Guided Learning, feature E 3 2.67 Table 4: Comparison with the previous works According to the experiments shown above, we build our best system by using feature set E with beam width B = 3. The number of iterations on the training data is estimated with respect to the development data. We obtain an error rate of 2.67% on the test data. With deterministic search, or beam with B = 1, we obtain an error rate of 2.73%. 
Compared to previous best result on the same data set, 2.76% by (Toutanova et al., 2003), our best result shows a relative error reduction of 3.3%. This result is very promising, since we have not used any specially designed features in our experiments. It is reported in (Toutanova et al., 2003) that a crude company name detector was used to generate features, and it gave rise to significant improvement in performance. However, it is difficult for us to duplicate exactly the same feature for the purpose of comparison, although it is convenient to use features like that in our framework. 5 Conclusions In this paper, we propose guided learning, a new learning framework for bidirectional sequence classification. The tasks of learning the order of inference and training the local classifier are dynamically incorporated into a single Perceptron like algorithm. We apply this novel algorithm to POS tagging. It obtains an error rate of 2.67% on the standard PTB test set, which represents 3.3% relative error reduction over the previous best result (Toutanova et al., 2003) on the same data set, while using fewer features. By using deterministic search, it obtains an error rate of 2.73%, a 5.9% relative error reduction over the previous best deterministic algorithm (Tsuruoka and Tsujii, 2005). It should be noted that the error rate is close to the inter-annotator discrepancy on PTB, the standard test set for POS tagging, therefore it is very difficult to achieve improvement. References L. Bottou. 1991. Une approche th´eorique de l’apprentissage connexionniste: Applications `a la reconnaissance de la parole. Ph.D. thesis, Universit´e de Paris XI. M. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL-2004. M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP-2002. K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. H. Daum´e III and D. Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In ICML-2005. Y. Freund and R. E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296. J. Gim´enez and L. M`arquez. 2004. Svmtool: A general pos tagger generator based on support vector machines. In LREC2004. W. Krauth and M. M´ezard. 1987. Learning algorithms with optimal stability in neural networks. Journal of Physics A, 20:745–752. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmentation and labeling sequence data. In ICML-2001. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. A. Ratnaparkhi. 1996. A maximum entropy part-of-speech tagger. In EMNLP-1996. G. Satta and O. Stock. 1994. Bi-Directional Context-Free Grammar Parsing for Natural Language Processing. Artificial Intelligence, 69(1-2). L. Shen and A. K. Joshi. 2005. Incremental LTAG Parsing. In EMNLP-2005. L. Shen and A. K. Joshi. 2007. Bidirectional LTAG Dependency Parsing. Technical Report 07-02, IRCS, UPenn. B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin markov networks. In NIPS-2003. K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL-2003. Y. Tsuruoka and J. Tsujii. 2005. 
Bidirectional inference with the easiest-first strategy for tagging sequence data. In EMNLP-2005.
B. Widrow and M. E. Hoff. 1960. Adaptive switching circuits. IRE WESCON Convention Record, part 4.
W. Woods. 1976. Parsers in speech understanding systems. Technical Report 3438, Vol. 4, 1–21, BBN Inc.
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 768–775, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Different Structures for Evaluating Answers to Complex Questions: Pyramids Won’t Topple, and Neither Will Human Assessors Hoa Trang Dang Information Access Division National Institute of Standards and Technology Gaithersburg, MD 20899 [email protected] Jimmy Lin College of Information Studies University of Maryland College Park, MD 20742 [email protected] Abstract The idea of “nugget pyramids” has recently been introduced as a refinement to the nugget-based methodology used to evaluate answers to complex questions in the TREC QA tracks. This paper examines data from the 2006 evaluation, the first large-scale deployment of the nugget pyramids scheme. We show that this method of combining judgments of nugget importance from multiple assessors increases the stability and discriminative power of the evaluation while introducing only a small additional burden in terms of manual assessment. We also consider an alternative method for combining assessor opinions, which yields a distinction similar to micro- and macro-averaging in the context of classification tasks. While the two approaches differ in terms of underlying assumptions, their results are nevertheless highly correlated. 1 Introduction The emergence of question answering (QA) systems for addressing complex information needs has necessitated the development and refinement of new methodologies for evaluating and comparing systems. In the Text REtrieval Conference (TREC) QA tracks organized by the U.S. National Institute of Standards and Technology (NIST), improvements in evaluation processes have kept pace with the evolution of QA tasks. For the past several years, NIST has implemented an evaluation methodology based on the notion of “information nuggets” to assess answers to complex questions. As it has become the de facto standard for evaluating such systems, the research community stands to benefit from a better understanding of the characteristics of this evaluation methodology. This paper explores recent refinements to the nugget-based evaluation methodology developed by NIST. In particular, we examine the recent so-called “pyramid extension” that incorporates relevance judgments from multiple assessors to improve evaluation stability (Lin and Demner-Fushman, 2006). We organize our discussion as follows: The next section begins by providing a brief overview of nugget-based evaluations and the pyramid extension. Section 3 presents results from the first largescale implementation of nugget pyramids for QA evaluation in TREC 2006. Analysis shows that this extension improves both stability and discriminative power. In Section 4, we discuss an alternative for combining multiple judgments that parallels the distinction between micro- and macro-averaging often seen in classification tasks. Experiments reveal that the methods yield almost exactly the same results, despite operating on different granularities (individual nuggets vs. individual users). 2 Evaluating Complex Questions Complex questions are distinguished from factoid questions such as “Who shot Abraham Lincoln?” in that they cannot be answered by named entities (e.g., persons, organizations, dates, etc.). 
Typically, these information needs are embedded in the context of a scenario (i.e., user task) and often require systems to 768 synthesize information from multiple documents or to generate answers that cannot be easily extracted (e.g., by leveraging inference capabilities). To date, NIST has already conducted several large-scale evaluations of complex questions: definition questions in TREC 2003, “Other” questions in TREC 2004–2006, “relationship” questions in TREC 2005, and the complex, interactive QA (ciQA) task in TREC 2006. Definition and Other questions are similar in that they both request novel facts about “targets”, which can be persons, organizations, things, and events. Relationship questions evolved into the ciQA task and focus on information needs such as “What financial relationships exist between South American drug cartels and banks in Liechtenstein?” Such complex questions focus on ties (financial, military, familial, etc.) that connect two or more entities. All of these evaluations have employed the nugget-based methodology, which demonstrates its versatility and applicability to a wide range of information needs. 2.1 Basic Setup In the TREC QA evaluations, an answer to a complex question consists of an unordered set of [document-id, answer string] pairs, where the strings are presumed to provide some relevant information that addresses the question. Although no explicit limit is placed on the length of the answer, the final metric penalizes verbosity (see below). Evaluation of system output proceeds in two steps. First, answer strings from all submissions are gathered together and presented to a single assessor. The source of each answer string is blinded so that the assessor can not obviously tell which systems generated what output. Using these answers and searches performed during question development, the assessor creates a list of relevant nuggets. A nugget is a piece of information (i.e., “fact”) that addresses one aspect of the user’s question. Nuggets should be atomic, in the sense that an assessor should be able to make a binary decision as to whether the nugget appears in an answer string. Although a nugget represents a conceptual entity, the assessor provides a natural language description—primarily as a memory aid for the subsequent evaluation steps. These descriptions range from sentence-length document extracts to r = # of vital nuggets returned a = # of okay nuggets returned R = # of vital nuggets in the answer key l = # of non-whitespace characters in entire run recall: R = r/R allowance: α = 100 × (r + a) precision: P = ( 1 if l < α 1 −l−α l otherwise F(β) = (β2 + 1) × P × R β2 × P + R Figure 1: Official definition of F-score for nugget evaluation in TREC. key phrases to telegraphic short-hand notes—their readability greatly varies from assessor to assessor. The assessor also manually classifies each nugget as either vital or okay (non-vital). Vital nuggets represent concepts that must be present in a “good” answer. Okay nuggets may contain interesting information, but are not essential. In the second step, the same assessor who created the nuggets reads each system’s output in turn and marks the appearance of the nuggets. An answer string contains a nugget if there is a conceptual match; that is, the match is independent of the particular wording used in the system’s output. A nugget match is marked at most once per run—i.e., a system is not rewarded for retrieving a nugget multiple times. 
If the system’s output contains more than one match for a nugget, the best match is selected and the rest are left unmarked. A single [document-id, answer string] pair in a system response can match 0, 1, or multiple nuggets. The final F-score for an answer is calculated in the manner described in Figure 1, and the final score of a run is the average across the F-scores of all questions. The metric is a weighted harmonic mean between nugget precision and nugget recall, where recall is heavily favored (controlled by the β parameter, usually set to three). Nugget recall is calculated solely on vital nuggets, while nugget precision is approximated by a length allowance based on the number of both vital and okay nuggets returned. In an 769 earlier pilot study, researchers discovered that it was not possible for assessors to consistently enumerate the total set of nuggets contained in an answer, which corresponds to the denominator in a precision calculation (Voorhees, 2003). Thus, a penalty for verbosity serves as a surrogate for precision. 2.2 The Pyramid Extension The vital/okay distinction has been identified as a weakness in the TREC nugget-based evaluation methodology (Hildebrandt et al., 2004; Lin and Demner-Fushman, 2005; Lin and DemnerFushman, 2006). There do not appear to be any reliable indicators for predicting nugget importance, which makes it challenging to develop algorithms sensitive to this consideration. Since only vital nuggets affect nugget recall, it is difficult for systems to achieve non-zero scores on topics with few vital nuggets in the answer key. Thus, scores are easily affected by assessor errors and other random variations in evaluation conditions. One direct consequence is that in previous TREC evaluations, the median score for many questions turned out to be zero. A binary distinction on nugget importance is insufficient to discriminate between the quality of runs that return no vital nuggets but different numbers of okay nuggets. Also, a score distribution heavily skewed towards zero makes meta-analyses of evaluation stability difficult to perform (Voorhees, 2005). The pyramid extension (Lin and DemnerFushman, 2006) was proposed to address the issues mentioned above. The idea was relatively simple: by soliciting vital/okay judgments from multiple assessors (after the list of nuggets has been produced by a primary assessor), it is possible to define nugget importance with greater granularity. Each nugget is assigned a weight between zero and one that is proportional to the number of assessors who judged it to be vital. Nugget recall from Figure 1 can be redefined to incorporate these weights: R = P m∈A wm P n∈V wn Where A is the set of reference nuggets that are matched in a system’s output and V is the set of all reference nuggets; wm and wn are the weights of nuggets m and n, respectively.1 The calculation of nugget precision remains the same. 3 Nugget Pyramids in TREC 2006 Lin and Demner-Fushman (2006) present experimental evidence in support of nugget pyramids by applying the proposal to results from previous TREC QA evaluations. Their simulation studies appear to support the assertion that pyramids address many of the issues raised in Section 2.2. Based on the results, NIST proceeded with a trial deployment of nugget pyramids in the TREC 2006 QA track. Although scores based on the binary vital/okay distinction were retained as the “official” metric, pyramid scores were simultaneously computed. 
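The scoring just defined, both the official binary variant and the pyramid-weighted recall, can be made concrete with a short sketch. This is a minimal illustration, not NIST's official scoring code: the data layout (nugget identifiers, a vital/okay flag, optional per-nugget weights) is an assumption made for readability, and beta defaults to three as in the track.

```python
def nugget_fscore(matched, answer_key, length, weights=None, beta=3.0):
    """Sketch of the TREC nugget F-score (Figure 1) and its pyramid variant.

    matched    -- set of nugget ids marked as present in the system response
    answer_key -- dict: nugget id -> True if vital, False if okay
    length     -- number of non-whitespace characters in the entire run
    weights    -- optional dict: nugget id -> pyramid weight in [0, 1];
                  when omitted, binary scoring is used (vital = 1, okay = 0)
    """
    if weights is None:
        weights = {n: 1.0 if vital else 0.0 for n, vital in answer_key.items()}

    total = sum(weights.values())
    recall = sum(weights.get(n, 0.0) for n in matched) / total if total > 0 else 0.0

    # Precision is only approximated: a length allowance of 100 non-whitespace
    # characters per returned nugget (vital and okay alike).
    allowance = 100.0 * len(matched)
    if length < allowance:
        precision = 1.0
    elif length == 0:
        precision = 0.0
    else:
        precision = 1.0 - (length - allowance) / length

    b2 = beta ** 2
    denom = b2 * precision + recall
    return (b2 + 1) * precision * recall / denom if denom > 0 else 0.0
```

With beta set to three, recall dominates the score, which is why topics with few vital nuggets are so sensitive to the decisions of a single assessor.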
This provided an opportunity to compare the two methodologies on a large scale. 3.1 The Data The basic unit of evaluation for the main QA task at TREC 2006 was the “question series”. Each series focused on a “target”, which could be a person, organization, thing, or event. Individual questions in a series inquired about different facets of the target, and were explicitly classified as factoid, list, or Other. One complete series is shown in Figure 2. The Other questions can be best paraphrased as “Tell me interesting things about X that I haven’t already explicitly asked about.” It was the system’s task to retrieve interesting nuggets about the target (in the opinion of the assessor), but credit was not given for retrieving facts already explicitly asked for in the factoid and list questions. The Other questions were evaluated using the nugget-based methodology, and are the subject of this analysis. The QA test set in TREC 2006 contained 75 series. Of the 75 targets, 19 were persons, 19 were organizations, 19 were events, and 18 were things. The series contained a total of 75 Other questions (one per target). Each series contained 6–9 questions (counting the Other question), with most series containing 8 questions. The task employed the AQUAINT collection of newswire text (LDC catalog number LDC2002T31), consisting of English data drawn from three sources: the New York Times, 1Note that this new scoring model captures the existing binary vital/okay distinction in a straightforward way: vital nuggets get a score of one, and okay nuggets zero. 770 147 Britain’s Prince Edward marries 147.1 FACTOID When did Prince Edward engage to marry? 147.2 FACTOID Who did the Prince marry? 147.3 FACTOID Where did they honeymoon? 147.4 FACTOID Where was Edward in line for the throne at the time of the wedding? 147.5 FACTOID What was the Prince’s occupation? 147.6 FACTOID How many people viewed the wedding on television? 147.7 LIST What individuals were at the wedding? 147.8 OTHER Figure 2: Sample question series from TREC 2006. Nugget 0 1 2 3 4 5 6 7 8 The couple had a long courtship 1 0 0 0 0 0 1 1 0 Queen Elizabeth II was delighted with the match 0 1 0 1 0 0 0 0 1 Queen named couple Earl and Contessa of Wessex 0 1 0 0 1 1 1 0 0 All marriages of Edward’s siblings ended in divorce 0 0 0 0 0 1 0 0 1 Edward arranged for William to appear more cheerful in photo 0 0 0 0 0 0 0 0 0 they were married in St. Georges Chapel, Windsor 1 1 1 0 1 0 1 1 0 Figure 3: Multiple assessors’ judgments of nugget importance for Series 147 (vital=1, okay=0). Assessor 2 was the same as the primary assessor (assessor 0), but judgments were elicited at different times. the Associated Press, and the Xinhua News Service. There are approximately one million articles in the collection, totaling roughly three gigabytes. In total, 59 runs from 27 participants were submitted to NIST. For more details, see (Dang et al., 2006). For the Other questions, nine sets of judgments were elicited from eight judges (the primary assessor who originally created the nuggets later annotated the nuggets once again). Each assessor was asked to assign the vital/okay label in a rapid fashion, without giving each decision much thought. Figure 3 gives an example of the multiple judgments for nuggets in Series 147. There is variation in notions of importance not only between different assessors, but also for a single assessor over time. 
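The judgment matrix in Figure 3 is all that is needed to build a nugget pyramid. The sketch below normalizes each nugget's vital votes by the number of judgment sets; that particular normalization is our assumption (dividing by the largest vote count is another way to keep weights in [0, 1]), and the two example rows are taken from Series 147.

```python
def pyramid_weights(judgments):
    """Map each nugget to a weight in [0, 1] proportional to the number of
    assessors who judged it vital (1 = vital, 0 = okay)."""
    return {nugget: (sum(votes) / len(votes) if votes else 0.0)
            for nugget, votes in judgments.items()}


# Two of the Series 147 nuggets from Figure 3, with their nine votes each.
series_147 = {
    "The couple had a long courtship":          [1, 0, 0, 0, 0, 0, 1, 1, 0],
    "they were married in St. Georges Chapel":  [1, 1, 1, 0, 1, 0, 1, 1, 0],
}
print(pyramid_weights(series_147))  # weights of roughly 0.33 and 0.67
```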
3.2 Results After the human annotation process, nugget pyramids were built in the manner described by Lin and Demner-Fushman (2006). Two scores were computed for each run submitted to the TREC 2006 main QA task: one based on the vital/okay judgments of the primary assessor (which we call the binary Fscore) and one based on the nugget pyramids (the pyramid F-score). The characteristics of the pyramid method can be inferred by comparing these two sets of scores. Figure 4 plots the average binary and average pyramid F-scores for each run (which represents average performance across all series). Even though the nugget pyramid does not represent any single real user (a point we return to later), pyramid Fscores do correlate highly with the binary F-scores. The Pearson’s correlation is 0.987, with a 95% confidence interval of [0.980, 1.00]. While the average F-score for a run is stable given a sufficient number of questions, the F-score for a single Other question exhibits greater variability across assessors. This is shown in Figure 5, which plots binary and pyramid F-scores for individual questions from all runs. In this case, the Pearson correlation is 0.870, with a 95% confidence interval of [0.863, 1.00]. For 16.4% of all Other questions, the nugget pyramid assigned a non-zero F-score where the original binary F-score was zero. This can be seen in the band of points on the left edge of the plot in Figure 5. This highlights the strength of nugget 771 0.00 0.05 0.10 0.15 0.20 0.25 0.00 0.05 0.10 0.15 0.20 0.25 Average binary F−score Average pyramid F−score Figure 4: Scatter plot comparing the binary and pyramid F-scores for each run. pyramids—their ability to smooth out assessor differences and more finely discriminate among system outputs. This is a key capability that is useful for system developers, particularly since algorithmic improvements are often incremental and small. Because it is more stable than the single-assessor method of evaluation, the pyramid method also appears to have greater discriminative power. We fit a two-way analysis of variance model with the series and run as factors, and the binary F-score as the dependent variable. We found significant differences between series and between runs (p essentially equal to 0 for both factors). To determine which runs were significantly different from each other, we performed a multiple comparison using Tukey’s honestly significant difference criterion and controlling for the experiment-wise Type I error so that the probability of declaring a difference between two runs to be significant, when it is actually not, is at most 5%. With 59 runs, there are C59 2 = 1711 different pairs that can be compared. The single-assessor method was able to declare one run to be significantly better than the other in 557 of these pairs. Using the pyramid F-scores, it was possible to find significant differences in performance between runs in 617 pairs. 3.3 Discussion Any evaluation represents a compromise between effort (which correlates with cost) and insightfulness of results. The level of detail and meaning0.0 0.2 0.4 0.6 0.8 0.0 0.2 0.4 0.6 0.8 Binary F−score Pyramid F−score Figure 5: Scatter plot comparing the binary and pyramid F-scores for each Other question. fulness of evaluations are constantly in tension with the availability of resources. Modifications to existing processes usually come at a cost that needs to be weighed against potential gains. 
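The correlation and significance figures reported above can be reproduced with standard tools; a hedged sketch follows. The arrays binary_f, pyramid_f, scores and run_ids are hypothetical stand-ins for the TREC 2006 data, and pairwise_tukeyhsd performs a one-way Tukey HSD, so it only approximates the two-factor (series and run) analysis used here.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson's r plus a confidence interval via the Fisher z-transform."""
    r, _ = stats.pearsonr(x, y)
    z, se = np.arctanh(r), 1.0 / np.sqrt(len(x) - 3)
    zcrit = stats.norm.ppf(1.0 - alpha / 2.0)
    return r, (np.tanh(z - zcrit * se), np.tanh(z + zcrit * se))

# r, ci = pearson_with_ci(binary_f, pyramid_f)   # per-run or per-question scores

# One F-score observation per (series, run); Tukey's HSD controls the
# experiment-wise Type I error across all C(59, 2) = 1711 run pairs.
# print(pairwise_tukeyhsd(endog=np.asarray(scores),
#                         groups=np.asarray(run_ids), alpha=0.05))
```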
Based on these considerations, the balance sheet for nugget pyramids shows a favorable orientation. In the TREC 2006 QA evaluation, soliciting vital/okay judgments from multiple assessors was not very time-consuming (a couple of hours per assessor). Analysis confirms that pyramid scores confer many benefits at an acceptable cost, thus arguing for its adoption in future evaluations. Cost considerations precluded exploring other refinements to the nugget-based evaluation methodology. One possible alternative would involve asking multiple assessors to create different sets of nuggets from scratch. Not only would this be timeconsuming, one would then need to deal with the additional complexities of aligning each assessor’s nuggets list. This includes resolving issues such as nugget granularity, overlap in information content, implicature and other relations between nuggets, etc. 4 Exploration of Alternative Structures Despite the demonstrated effectiveness of nugget pyramids, there are a few potential drawbacks that are worth discussing. One downside is that the nugget pyramid does not represent a single assessor. The nugget weights reflect the aggregation of opinions across a sample population, but there is no guar772 antee that the method for computing those weights actually captures any aspect of real user behavior. It can be argued that the binary F-score is more realistic since it reflects the opinion of a real user (the primary assessor), whereas the pyramid F-score tries to model the opinion of a mythical average user. Although this point may seem somewhat counterintuitive, it represents a well-established tradition in the information retrieval literature (Voorhees, 2002). In document retrieval, for example, relevance judgments are provided by a single assessor—even though it is well known that there are large individual differences in notions of relevance. IR researchers believe that human idiosyncrasies are an inescapable fact present in any system designed for human users, and hence any attempt to remove those elements in the evaluation setup is actually undesirable. It is the responsibility of researchers to develop systems that are robust and flexible. This premise, however, does not mean that IR evaluation results are unstable or unreliable. Analyses have shown that despite large variations in human opinions, system rankings are remarkably stable (Voorhees, 2000; Sormunen, 2002)—that is, one can usually be confident about system comparisons. The philosophy in IR sharply contrasts with work in NLP annotation tasks such as parsing, word sense disambiguation, and semantic role labeling—where researchers strive for high levels of interannotator agreement, often through elaborate guidelines. The difference in philosophies arises because unlike these NLP annotation tasks, where the products are used primarily by other NLP system components, IR (and likewise QA) is an end-user task. These systems are intended for real world use. Since people differ, systems must be able to accommodate these differences. Hence, there is a strong preference in QA for evaluations that maintain a model of the individual user. 4.1 Micro- vs. Macro-Averaging The current nugget pyramid method leverages multiple judgments to define a weight for each individual nugget, and then incorporates this weight into the F-score computation. 
As an alternative, we propose another method for combining the opinions of multiple assessors: evaluate system responses individually against N sets of binary judgments, and then compute the mean across those scores. We define the macro-averaged binary F-score over a set A = {a1, ..., aN} of N assessors as: F = P a∈A Fa N Where Fa is the binary F-score according to the vital/okay judgments of assessor a. The differences between the pyramid F-score and the macroaveraged binary F-score correspond to the distinction between micro- and macro-averaging discussed in the context of text classification (Lewis, 1991). In those applications, both measures are meaningful depending on focus: individual instances or entire classes. In tasks where it is important to correctly classify individual instances, microaveraging is more appropriate. In tasks where it is important to correctly identify a class, macroaveraging better quantifies performance. In classification tasks, imbalance in the prevalence of each class can lead to large differences in macro- and micro-averaged scores. Analogizing to our work, the original formulation of nugget pyramids corresponds to micro-averaging (since we focus on individual nuggets), while the alternative corresponds to macro-averaging (since we focus on the assessor). We additionally note that the two methods encode different assumptions. Macro-averaging assumes that there is nothing intrinsically interesting about a nugget—it is simply a matter of a particular user with particular needs finding a particular nugget to be of interest. Micro-averaging, on the other hand, assumes that some nuggets are inherently interesting, independent of the particular interests of users.2 Each approach has characteristics that make it desirable. From the perspective of evaluators, the macro-averaged binary F-score is preferable because it models real users; each set of binary judgments represents the information need of a real user, each binary F-score represents how well an answer will satisfy a real user, and the macro-averaged binary F-score represents how well an answer will satisfy, on average, a sample population of real users. From the perspective of QA system developers, the micro-averaged nugget pyramid F-score is preferable because it allows finer discrimination in in2We are grateful to an anonymous reviewer for this insight. 773 dividual nugget performance, which enables better techniques for system training and optimization. The macro-averaged binary F-score has the same desirable properties as the micro-averaged pyramid F-score in that fewer responses will have zero Fscores as compared to the single-assessor binary Fscore. We demonstrate this as follows. Let X be a response that receives a non-zero pyramid F-score. Let A = {a1, a2, a3, ..., aN} be the set of N assessors. Then it can be proven that X also receives a non-zero macro-averaged binary F-score: 1. There exists some nugget v with weight greater than 0, such that an answer string r in X matches v. (def. of pyramid recall) 2. There exists some assessor ap ∈A who marked v as vital. (def. of pyramid nugget weight) 3. To show that X will also receive a non-zero macro-averaged binary score, it is sufficient to show that there is some assessor am ∈A such that X receives a non-zero F-score when evaluated using just the vital/okay judgments of am. (def. of macro-averaged binary F-score) 4. But, such an assessor does exist, namely assessor ap: Consider the binary F-score assigned to X according to just assessor ap. 
The recall of X is greater than zero, since X contains the response r that matches the nugget v that was marked as vital by ap (from (2), (1), and the def. of recall). The precision must also be greater than zero (def. of precision). Therefore, the macro-averaged binary F-score of X is non-zero. (def. of F-score)

4.2 Analysis from TREC 2006

While the macro-averaged method is guaranteed to produce no more zero-valued scores than the micro-averaged pyramid method, it is not guaranteed that the scores will be the same for any given response. What are the empirical characteristics of each approach? To explore this question, we once again examined data from TREC 2006.

Figure 6 shows a scatter plot of the pyramid F-score and macro-averaged binary F-score for every Other question in all runs from the TREC 2006 QA track main task. Despite focusing on different aspects of the evaluation setup, these measures are highly correlated, even at the level of individual questions.

Figure 6: Scatter plot comparing the pyramid and macro-averaged binary F-scores for all questions.

           binary        micro         macro
binary     1.000/1.000   0.870/0.987   0.861/0.988
micro                    1.000/1.000   0.985/0.996
macro                                  1.000/1.000

Table 1: Pearson's correlation of F-scores, by question and by run.

Table 1 provides a summary of the correlations between the original binary F-score, the (micro-averaged) pyramid F-score, and the macro-averaged binary F-score. Pearson's r is given for F-scores at the individual question level (first number) and at the run level (second number). The correlations between all three variants are about equal at the level of system runs. At the level of individual questions, the micro- and macro-averaged F-scores (using multiple judgments) are still highly correlated with each other, but each is less correlated with the single-assessor binary F-score.

4.3 Discussion

The differences between macro- and micro-averaging methods invoke a more general discussion on notions of nugget importance. There are actually two different issues we are attempting to address with our different approaches: the first is a more granular scale of nugget importance, the second is variations across a population of users. In the micro-averaged pyramid F-scores, we achieve the first by leveraging the second, i.e., binary judgments from a large population are combined to yield weights for individual nuggets. In the macro-averaged binary F-score, we focus solely on population effects without addressing granularity of nugget importance.

Exploring this thread of argument, we can formulate additional approaches for tackling these issues. We could, for example, solicit more granular individual judgments on each nugget from each assessor, perhaps on a Likert scale or as a continuous quantity ranging from zero to one. This would yield two more methods for computing F-scores, both a macro-averaged and a micro-averaged variant. The macro-averaged variant would be especially attractive because it reflects real users and yet individual F-scores remain discriminative. Despite its possible advantages, this extension is rejected based on resource considerations; making snap binary judgments on individual nuggets is much quicker than a multi-scaled value assignment. At least at present, the additional costs are not sufficient to offset the potential gains.
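Reusing nugget_fscore and pyramid_weights from the sketches above, the two ways of combining assessor opinions differ only in where the averaging happens: over per-assessor F-scores (macro) or over per-nugget votes (micro). The sketch assumes a non-empty judgment matrix in the Figure 3 format (nugget -> list of 0/1 votes, one per judgment set).

```python
def macro_averaged_binary_f(matched, judgments, length, beta=3.0):
    """Score the response once per assessor (each column of the judgment
    matrix is one binary vital/okay answer key), then average the F-scores."""
    n_assessors = len(next(iter(judgments.values())))
    scores = []
    for i in range(n_assessors):
        key = {nugget: votes[i] == 1 for nugget, votes in judgments.items()}
        scores.append(nugget_fscore(matched, key, length, beta=beta))
    return sum(scores) / n_assessors


def micro_averaged_pyramid_f(matched, judgments, length, beta=3.0):
    """Pool the votes into per-nugget weights first, then score once."""
    weights = pyramid_weights(judgments)
    dummy_key = {n: True for n in judgments}  # ignored once weights are given
    return nugget_fscore(matched, dummy_key, length, weights=weights, beta=beta)
```

Because any response matching a nugget that some assessor marked vital gets non-zero recall under at least one binary key, the macro-averaged score is non-zero whenever the pyramid score is, mirroring the argument above.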
5 Conclusion The important role that large-scale evaluations play in guiding research in human language technologies means that the community must “get it right.” This would ordinarily call for a more conservative approach to avoid changes that might have unintended consequences. However, evaluation methodologies must evolve to reflect the shifting interests of the research community to remain relevant. Thus, organizers of evaluations must walk a fine line between progress and chaos. Nevertheless, the introduction of nugget pyramids in the TREC QA evaluation provides a case study showing how this fine balance can indeed be achieved. The addition of multiple judgments of nugget importance yields an evaluation that is both more stable and more discriminative than the original single-assessor evaluation, while requiring only a small additional cost in terms of human labor. We have explored two different methods for combining judgments from multiple assessors to address shortcomings in the original nugget-based evaluation setup. Although they make different assumptions about the evaluation, results from both approaches are highly correlated. Thus, we can continue employing the pyramid-based method, which is well-suited for developing systems, and still be assured that the results remain consistent with an evaluation method that maintains a model of real individual users. Acknowledgments This work has been supported in part by DARPA contract HR0011-06-2-0001 (GALE). The second author would like to thank Kiri and Esther for their kind support. References H. Dang, J. Lin, and D. Kelly. 2006. Overview of the TREC 2006 question answering track. In Proc. of TREC 2006. W. Hildebrandt, B. Katz, and J. Lin. 2004. Answering definition questions with multiple knowledge sources. In Proc. HLT/NAACL 2004. D. Lewis. 1991. Evaluating text categorization. In Proc. of the Speech and Natural Language Workshop. J. Lin and D. Demner-Fushman. 2005. Automatically evaluating answers to definition questions. In Proc. of HLT/EMNLP 2005. J. Lin and D. Demner-Fushman. 2006. Will pyramids built of nuggets topple over? In Proc. of HLT/NAACL 2006. E. Sormunen. 2002. Liberal relevance criteria of TREC—counting on negligible documents? In Proc. of SIGIR 2002. E. Voorhees. 2000. Variations in relevance judgments and the measurement of retrieval effectiveness. IP&M, 36(5):697–716. E. Voorhees. 2002. The philosophy of information retrieval evaluation. In Proc. of CLEF Workshop. E. Voorhees. 2003. Overview of the TREC 2003 question answering track. In Proc. of TREC 2003. E. Voorhees. 2005. Using question series to evaluate question answering system effectiveness. In Proc. of HLT/EMNLP 2005. 775
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 776–783, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Exploiting Syntactic and Shallow Semantic Kernels for Question/Answer Classification Alessandro Moschitti University of Trento 38050 Povo di Trento Italy [email protected] Silvia Quarteroni The University of York York YO10 5DD United Kingdom [email protected] Roberto Basili “Tor Vergata” University Via del Politecnico 1 00133 Rome, Italy [email protected] Suresh Manandhar The University of York York YO10 5DD United Kingdom [email protected] Abstract We study the impact of syntactic and shallow semantic information in automatic classification of questions and answers and answer re-ranking. We define (a) new tree structures based on shallow semantics encoded in Predicate Argument Structures (PASs) and (b) new kernel functions to exploit the representational power of such structures with Support Vector Machines. Our experiments suggest that syntactic information helps tasks such as question/answer classification and that shallow semantics gives remarkable contribution when a reliable set of PASs can be extracted, e.g. from answers. 1 Introduction Question answering (QA) is as a form of information retrieval where one or more answers are returned to a question in natural language in the form of sentences or phrases. The typical QA system architecture consists of three phases: question processing, document retrieval and answer extraction (Kwok et al., 2001). Question processing is often centered on question classification, which selects one of k expected answer classes. Most accurate models apply supervised machine learning techniques, e.g. SNoW (Li and Roth, 2005), where questions are encoded using various lexical, syntactic and semantic features. The retrieval and answer extraction phases consist in retrieving relevant documents (Collins-Thompson et al., 2004) and selecting candidate answer passages from them. A further answer re-ranking phase is optionally applied. Here, too, the syntactic structure of a sentence appears to provide more useful information than a bag of words (Chen et al., 2006), although the correct way to exploit it is still an open problem. An effective way to integrate syntactic structures in machine learning algorithms is the use of tree kernel (TK) functions (Collins and Duffy, 2002), which have been successfully applied to question classification (Zhang and Lee, 2003; Moschitti, 2006) and other tasks, e.g. relation extraction (Zelenko et al., 2003; Moschitti, 2006). In more complex tasks such as computing the relatedness between questions and answers in answer re-ranking, to our knowledge no study uses kernel functions to encode syntactic information. Moreover, the study of shallow semantic information such as predicate argument structures annotated in the PropBank (PB) project (Kingsbury and Palmer, 2002) (www.cis.upenn.edu/∼ace) is a promising research direction. We argue that semantic structures can be used to characterize the relation between a question and a candidate answer. In this paper, we extensively study new structural representations, encoding parse trees, bag-of-words, POS tags and predicate argument structures (PASs) for question classification and answer re-ranking. We define new tree representations for both simple and nested PASs, i.e. PASs whose arguments are other predicates (Section 2). 
Moreover, we define new kernel functions to exploit PASs, which we automatically derive with our SRL system (Moschitti et al., 2005) (Section 3). Our experiments using SVMs and the above ker776 nels and data (Section 4) shows the following: (a) our approach reaches state-of-the-art accuracy on question classification. (b) PB predicative structures are not effective for question classification but show promising results for answer classification on a corpus of answers to TREC-QA 2001 description questions. We created such dataset by using YourQA (Quarteroni and Manandhar, 2006), our basic Webbased QA system1. (c) The answer classifier increases the ranking accuracy of our QA system by about 25%. Our results show that PAS and syntactic parsing are promising methods to address tasks affected by data sparseness like question/answer categorization. 2 Encoding Shallow Semantic Structures Traditionally, information retrieval techniques are based on the bag-of-words (BOW) approach augmented by language modeling (Allan et al., 2002). When the task requires the use of more complex semantics, the above approaches are often inadequate to perform fine-level textual analysis. An improvement on BOW is given by the use of syntactic parse trees, e.g. for question classification (Zhang and Lee, 2003), but these, too are inadequate when dealing with definitional answers expressed by long and articulated sentences or even paragraphs. On the contrary, shallow semantic representations, bearing a more “compact” information, could prevent the sparseness of deep structural approaches and the weakness of BOW models. Initiatives such as PropBank (PB) (Kingsbury and Palmer, 2002) have made possible the design of accurate automatic Semantic Role Labeling (SRL) systems (Carreras and M`arquez, 2005). Attempting an application of SRL to QA hence seems natural, as pinpointing the answer to a question relies on a deep understanding of the semantics of both. Let us consider the PB annotation: [ARG1 Antigens] were [AM−T MP originally] [rel defined] [ARG2 as non-self molecules]. Such annotation can be used to design a shallow semantic representation that can be matched against other semantically similar sentences, e.g. [ARG0 Researchers] [rel describe] [ARG1 antigens] [ARG2 as foreign molecules] [ARGM−LOC in 1Demo at: http://cs.york.ac.uk/aig/aqua. PAS rel define ARG1 antigens ARG2 molecules ARGM-TMP originally PAS rel describe ARG0 researchers ARG1 antigens ARG2 molecules ARGM-LOC body Figure 1: Compact predicate argument structures of two different sentences. the body]. For this purpose, we can represent the above annotated sentences using the tree structures described in Figure 1. In this compact representation, hereafter Predicate-Argument Structures (PAS), arguments are replaced with their most important word – often referred to as the semantic head. This reduces data sparseness with respect to a typical BOW representation. However, sentences rarely contain a single predicate; it happens more generally that propositions contain one or more subordinate clauses. For instance let us consider a slight modification of the first sentence: “Antigens were originally defined as non-self molecules which bound specifically to antibodies2.” Here, the main predicate is “defined”, followed by a subordinate predicate “bound”. Our SRL system outputs the following two annotations: (1) [ARG1 Antigens] were [ARGM−T MP originally] [rel defined] [ARG2 as non-self molecules which bound specifically to antibodies]. 
(2) Antigens were originally defined as [ARG1 non-self molecules] [R−A1 which] [rel bound] [ARGM−MNR specifically] [ARG2 to antibodies]. giving the PASs in Figure 2.(a) resp. 2.(b). As visible in Figure 2.(a), when an argument node corresponds to an entire subordinate clause, we label its leaf with PAS, e.g. the leaf of ARG2. Such PAS node is actually the root of the subordinate clause in Figure 2.(b). Taken as standalone, such PASs do not express the whole meaning of the sentence; it is more accurate to define a single structure encoding the dependency between the two predicates as in 2This is an actual answer to ”What are antibodies?” from our question answering system, YourQA. 777 PAS rel define ARG1 antigens ARG2 PAS AM-TMP originally (a) PAS rel bound ARG1 molecules R-ARG1 which AM-ADV specifically ARG2 antibodies (b) PAS rel define ARG1 antigens ARG2 PAS rel bound ARG1 molecules R-ARG1 which AM-ADV specifically ARG2 antibodies AM-TMP originally (c) Figure 2: Two PASs composing a PASN Figure 2.(c). We refer to nested PASs as PASNs. It is worth to note that semantically equivalent sentences syntactically expressed in different ways share the same PB arguments and the same PASs, whereas semantically different sentences result in different PASs. For example, the sentence: “Antigens were originally defined as antibodies which bound specifically to non-self molecules”, uses the same words as (2) but has different meaning. Its PB annotation: (3) Antigens were originally defined as [ARG1 antibodies] [R−A1 which] [rel bound] [ARGM−MNR specifically] [ARG2 to non-self molecules], clearly differs from (2), as ARG2 is now nonself molecules; consequently, the PASs are also different. Once we have assumed that parse trees and PASs can improve on the simple BOW representation, we face the problem of representing tree structures in learning machines. Section 3 introduces a viable approach based on tree kernels. 3 Syntactic and Semantic Kernels for Text As mentioned above, encoding syntactic/semantic information represented by means of tree structures in the learning algorithm is problematic. A first solution is to use all its possible substructures as features. Given the combinatorial explosion of considering subparts, the resulting feature space is usually very large. A tree kernel (TK) function which computes the number of common subtrees between two syntactic parse trees has been given in (Collins and Duffy, 2002). Unfortunately, such subtrees are subject to the constraint that their nodes are taken with all or none of the children they have in the original tree. This makes the TK function not well suited for the PAS trees defined above. For instance, although the two PASs of Figure 1 share most of the subtrees rooted in the PAS node, Collins and Duffy’s kernel would compute no match. In the next section we describe a new kernel derived from the above tree kernel, able to evaluate the meaningful substructures for PAS trees. Moreover, as a single PAS may not be sufficient for text representation, we propose a new kernel that combines the contributions of different PASs. 3.1 Tree kernels Given two trees T1 and T2, let {f1, f2, ..} = F be the set of substructures (fragments) and Ii(n) be equal to 1 if fi is rooted at node n, 0 otherwise. Collins and Duffy’s kernel is defined as TK(T1, T2) = P n1∈NT1 P n2∈NT2 ∆(n1, n2), (1) where NT1 and NT2 are the sets of nodes in T1 and T2, respectively and ∆(n1, n2) = P|F| i=1 Ii(n1)Ii(n2). 
The latter is equal to the number of common fragments rooted in nodes n1 and n2. ∆ can be computed as follows: (1) if the productions (i.e. the nodes with their direct children) at n1 and n2 are different then ∆(n1, n2) = 0; (2) if the productions at n1 and n2 are the same, and n1 and n2 only have leaf children (i.e. they are preterminal symbols) then ∆(n1, n2) = 1; (3) if the productions at n1 and n2 are the same, and n1 and n2 are not pre-terminals then ∆(n1, n2) = Qnc(n1) j=1 (1+∆(cj n1, cj n2)), where nc(n1) is the number of children of n1 and cj n is the j-th child of n. Such tree kernel can be normalized and a λ factor can be added to reduce the weight of large structures (refer to (Collins and Duffy, 2002) for a complete description). The critical aspect of steps (1), (2) and (3) is that the productions of two evaluated nodes have to be identical to allow the match of further descendants. This means that common substructures cannot be composed by a node with only some of its 778 PAS SLOT rel define SLOT ARG1 antigens * SLOT ARG2 PAS * SLOT ARGM-TMP originally * (a) PAS SLOT rel define SLOT ARG1 antigens * SLOT null SLOT null (b) PAS SLOT rel define SLOT null SLOT ARG2 PAS * SLOT null (c) Figure 3: A PAS with some of its fragments. children as an effective PAS representation would require. We solve this problem by designing the Shallow Semantic Tree Kernel (SSTK) which allows to match portions of a PAS. 3.2 The Shallow Semantic Tree Kernel (SSTK) The SSTK is based on two ideas: first, we change the PAS, as shown in Figure 3.(a) by adding SLOT nodes. These accommodate argument labels in a specific order, i.e. we provide a fixed number of slots, possibly filled with null arguments, that encode all possible predicate arguments. For simplicity, the figure shows a structure of just 4 arguments, but more can be added to accommodate the maximum number of arguments a predicate can have. Leaf nodes are filled with the wildcard character * but they may alternatively accommodate additional information. The slot nodes are used in such a way that the adopted TK function can generate fragments containing one or more children like for example those shown in frames (b) and (c) of Figure 3. As previously pointed out, if the arguments were directly attached to the root node, the kernel function would only generate the structure with all children (or the structure with no children, i.e. empty). Second, as the original tree kernel would generate many matches with slots filled with the null label, we have set a new step 0: (0) if n1 (or n2) is a pre-terminal node and its child label is null, ∆(n1, n2) = 0; and subtract one unit to ∆(n1, n2), in step 3: (3) ∆(n1, n2) = Qnc(n1) j=1 (1 + ∆(cj n1, cj n2)) −1, The above changes generate a new ∆which, when substituted (in place of the original ∆) in Eq. 1, gives the new Shallow Semantic Tree Kernel. To show that SSTK is effective in counting the number of relations shared by two PASs, we propose the following: Proposition 1 The new ∆function applied to the modified PAS counts the number of all possible kary relations derivable from a set of k arguments, i.e. Pk i=1 k i  relations of arity from 1 to k (the predicate being considered as a special argument). Proof We observe that a kernel applied to a tree and itself computes all its substructures, thus if we evaluate SSTK between a PAS and itself we must obtain the number of generated k-ary relations. We prove by induction the above claim. For the base case (k = 0): we use a PAS with no arguments, i.e. 
all its slots are filled with null labels. Let r be the PAS root; since r is not a preterminal, step 3 is selected and ∆is recursively applied to all r’s children, i.e. the slot nodes. For the latter, step 0 assigns ∆(cj r, cj r) = 0. As a result, ∆(r, r) = Qnc(r) j=1 (1 + 0) −1 = 0 and the base case holds. For the general case, r is the root of a PAS with k+1 arguments. ∆(r, r) = Qnc(r) j=1 (1 + ∆(cj r, cj r)) −1 =Qk j=1(1+∆(cj r, cj r))×(1+∆(ck+1 r , ck+1 r ))−1. For k arguments, we assume by induction that Qk j=1(1+ ∆(cj r, cj r)) −1 = Pk i=1 k i , i.e. the number of k-ary relations. Moreover, (1 + ∆(ck+1 r , ck+1 r )) = 2, thus ∆(r, r) = Pk i=1 k i  × 2 = 2k × 2 = 2k+1 = Pk+1 i=1 k+1 i , i.e. all the relations until arity k + 1 2 TK functions can be applied to sentence parse trees, therefore their usefulness for text processing applications, e.g. question classification, is evident. On the contrary, the SSTK applied to one PAS extracted from a text fragment may not be meaningful since its representation needs to take into account all the PASs that it contains. We address such problem 779 by defining a kernel on multiple PASs. Let Pt and Pt′ be the sets of PASs extracted from the text fragment t and t′. We define: Kall(Pt, Pt′) = X p∈Pt X p′∈Pt′ SSTK(p, p′), (2) While during the experiments (Sect. 4) the Kall kernel is used to handle predicate argument structures, TK (Eq. 1) is used to process parse trees and the linear kernel to handle POS and BOW features. 4 Experiments The purpose of our experiments is to study the impact of the new representations introduced earlier for QA tasks. In particular, we focus on question classification and answer re-ranking for Web-based QA systems. In the question classification task, we extend previous studies, e.g. (Zhang and Lee, 2003; Moschitti, 2006), by testing a set of previously designed kernels and their combination with our new Shallow Semantic Tree Kernel. In the answer re-ranking task, we approach the problem of detecting description answers, among the most complex in the literature (Cui et al., 2005; Kazawa et al., 2001). The representations that we adopt are: bag-ofwords (BOW), bag-of-POS tags (POS), parse tree (PT), predicate argument structure (PAS) and nested PAS (PASN). BOW and POS are processed by means of a linear kernel, PT is processed with TK, PAS and PASN are processed by SSTK. We implemented the proposed kernels in the SVM-light-TK software available at ai-nlp.info.uniroma2.it/ moschitti/ which encodes tree kernel functions in SVM-light (Joachims, 1999). 4.1 Question classification As a first experiment, we focus on question classification, for which benchmarks and baseline results are available (Zhang and Lee, 2003; Li and Roth, 2005). We design a question multi-classifier by combining n binary SVMs3 according to the ONEvs-ALL scheme, where the final output class is the one associated with the most probable prediction. The PASs were automatically derived by our SRL 3We adopted the default regularization parameter (i.e., the average of 1/||⃗x||) and tried a few cost-factor values to adjust the rate between Precision and Recall on the development set. system which achieves a 76% F1-measure (Moschitti et al., 2005). As benchmark data, we use the question training and test set available at: l2r.cs.uiuc.edu/ ∼cogcomp/Data/QA/QC/, where the test set are the 500 TREC 2001 test questions (Voorhees, 2001). We refer to this split as UIUC. 
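For concreteness, the Delta recursion behind TK (Eq. 1) and the pairwise summation behind Kall (Eq. 2) used in these experiments can be sketched as follows. This is an illustrative re-implementation rather than the SVM-light-TK code actually used: the decay factor lambda, kernel normalization and the SSTK-specific changes (the null-slot check in step 0 and the subtraction of one in step 3) are left out, and the Node class is a minimal stand-in of our own.

```python
class Node:
    """Minimal tree node: a label plus an ordered list of children."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def production(self):
        return (self.label, tuple(c.label for c in self.children))


def delta(n1, n2):
    """Number of common fragments rooted at n1 and n2 (steps (1)-(3))."""
    if not n1.children or not n2.children:    # bare leaves carry no production
        return 0
    if n1.production() != n2.production():    # step (1): different productions
        return 0
    if all(not c.children for c in n1.children) and \
       all(not c.children for c in n2.children):
        return 1                              # step (2): matching pre-terminals
    result = 1                                # step (3)
    for c1, c2 in zip(n1.children, n2.children):
        result *= 1 + delta(c1, c2)
    return result


def nodes(root):
    stack, out = [root], []
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out


def tree_kernel(t1, t2):
    """TK(T1, T2): sum of Delta over all node pairs (Eq. 1)."""
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))


def k_all(pas_list1, pas_list2, pas_kernel):
    """K_all (Eq. 2): sum a PAS-level kernel (e.g. the SSTK) over all pairs of
    predicate-argument structures extracted from two text fragments."""
    return sum(pas_kernel(p, q) for p in pas_list1 for q in pas_list2)
```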
The performance of the multi-classifier and the individual binary classifiers is measured with accuracy resp. F1-measure. To collect statistically significant information, we run 10-fold cross validation on the 6,000 questions. Features Accuracy (UIUC) Accuracy (c.v.) PT 90.4 84.8±1.2 BOW 90.6 84.7±1.2 PAS 34.2 43.0±1.9 POS 26.4 32.4±2.1 PT+BOW 91.8 86.1±1.1 PT+BOW+POS 91.8 84.7±1.5 PAS+BOW 90.0 82.1±1.3 PAS+BOW+POS 88.8 81.0±1.5 Table 1: Accuracy of the question classifier with different feature combinations Question classification results Table 1 shows the accuracy of different question representations on the UIUC split (Column 1) and the average accuracy ± the corresponding confidence limit (at 90% significance) on the cross validation splits (Column 2).(i) The TK on PT and the linear kernel on BOW produce a very high result, i.e. about 90.5%. This is higher than the best outcome derived in (Zhang and Lee, 2003), i.e. 90%, obtained with a kernel combining BOW and PT on the same data. Combined with PT, BOW reaches 91.8%, very close to the 92.5% accuracy reached in (Li and Roth, 2005) using complex semantic information from external resources. (ii) The PAS feature provides no improvement. This is mainly because at least half of the training and test questions only contain the predicate “to be”, for which a PAS cannot be derived by a PB-based shallow semantic parser. (iii) The 10-fold cross-validation experiments confirm the trends observed in the UIUC split. The best model (according to statistical significance) is PT+BOW, achieving an 86.1% average accuracy4. 4This value is lower than the UIUC split one as the UIUC test set is not consistent with the training set (it contains the 780 4.2 Answer classification Question classification does not allow to fully exploit the PAS potential since questions tend to be short and with few verbal predicates (i.e. the only ones that our SRL system can extract). A different scenario is answer classification, i.e. deciding if a passage/sentence correctly answers a question. Here, the semantics to be generated by the classifier are not constrained to a small taxonomy and answer length may make the PT-based representation too sparse. We learn answer classification with a binary SVM which determines if an answer is correct for the target question: here, the classification instances are ⟨question, answer⟩pairs. Each pair component can be encoded with PT, BOW, PAS and PASN representations (processed by previous kernels). As test data, we collected the 138 TREC 2001 test questions labeled as “description” and for each, we obtained a list of answer paragraphs extracted from Web documents using YourQA. Each paragraph sentence was manually evaluated based on whether it contained an answer to the corresponding question. Moreover, to simplify the classification problem, we isolated for each paragraph the sentence which obtained the maximal judgment (in case more than one sentence in the paragraph had the same judgment, we chose the first one). We collected a corpus containing 1309 sentences, 416 of which – labeled “+1” – answered the question either concisely or with noise; the rest – labeled “-1”– were either irrelevant to the question or contained hints relating to the question but could not be judged as valid answers5. 
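The question/answer pairs just described are handled by combining one kernel per representation on each side. The additive combination below is our reading of the Q(...) + A(...) notation used in the results that follow, not a verbatim account of the SVM-light-TK configuration; a tree kernel (such as the sketch above) and an SSTK-based K_all would be plugged in as the "PT" and "PAS"/"PASN" view kernels.

```python
def linear_kernel(x, y):
    """Dot product over sparse feature dicts (word or POS counts)."""
    if len(y) < len(x):
        x, y = y, x
    return float(sum(v * y.get(k, 0) for k, v in x.items()))


def pair_kernel(pair1, pair2, view_kernels,
                q_views=("BOW",), a_views=("PT", "BOW", "PAS")):
    """Kernel between two <question, answer> pairs as a sum of per-view kernels.

    pair         -- {"question": {view: object}, "answer": {view: object}}
    view_kernels -- {view: kernel}, e.g. linear_kernel for "BOW"/"POS",
                    a tree kernel for "PT", an SSTK-based K_all for "PAS"/"PASN"
    q_views/a_views select the model, e.g. Q(BOW) + A(PT, BOW, PAS).
    """
    score = 0.0
    for view in q_views:
        score += view_kernels[view](pair1["question"][view],
                                    pair2["question"][view])
    for view in a_views:
        score += view_kernels[view](pair1["answer"][view],
                                    pair2["answer"][view])
    return score
```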
Answer classification results. To test the impact of our models on answer classification, we ran 5-fold cross-validation, with the constraint that two pairs ⟨q, a1⟩ and ⟨q, a2⟩ associated with the same question q could not be split between training and testing. Hence, each reported value is the average over 5 different outcomes. The standard deviations ranged approximately between 2.5 and 5.

[Footnote 4, continued: TREC 2001 questions) and includes a larger percentage of easily classified question types, e.g. the numeric (22.6%) and description classes (27.6%), whose percentage in training is 16.4% resp. 16.2%.]

[Footnote 5: For instance, given the question "What are invertebrates?", the sentence "At least 99% of all animal species are invertebrates, comprising . . ." was labeled "-1", while "Invertebrates are animals without backbones." was labeled "+1".]

Figure 4: Impact of the BOW and PT features on answer classification.

Figure 5: Impact of the PAS and PASN features combined with the BOW and PT features on answer classification.

Figure 6: Comparison between PAS and PASN when used as standalone features for the answer on answer classification.

The experiments were organized as follows. First, we examined the contributions of BOW and PT representations, as they proved very important for question classification. Figure 4 reports the plot of the F1-measure of answer classifiers trained with all combinations of the above models according to different values of the cost-factor parameter, which adjusts the rate between Precision and Recall. We see here that the most accurate classifiers are the ones using both the answer's BOW and PT features and either the question's PT or BOW feature (i.e. the Q(BOW) + A(PT,BOW) resp. Q(PT) + A(PT,BOW) combinations). When PT is used for the answer, the simple BOW model is outperformed by 2 to 3 points. Hence, we infer that both the answer's PT and BOW features are very useful in the classification task. However, PT does not seem to provide additional information to BOW when used for question representation. This can be explained by considering that answer classification (restricted to description questions) does not require question type classification, since its main purpose is to detect question/answer relations. In this scenario, the question's syntactic structure does not seem to provide much more information than BOW. Secondly, we evaluated the impact of the newly defined PAS and PASN features combined with the best performing previous model, i.e. Q(BOW) + A(PT,BOW). Figure 5 illustrates the F1-measure plots, again according to the cost-factor parameter.
We observe here that model Q(BOW) + A(PT,BOW,PAS) greatly outperforms model Q(BOW) + A(PT,BOW), proving that the PAS feature is very useful for answer classification, i.e. the improvement is about 2 to 3 points while the difference with the BOW model, i.e. Q(BOW) + A(BOW), exceeds 3 points. The Q(BOW) + A(PT,BOW,PASN) model is not more effective than Q(BOW) + A(PT,BOW,PAS). This suggests either that PAS is more effective than PASN or that when the PT information is added, the PASN contribution fades out. To further investigate the previous issue, we finally compared the contribution of the PAS and PASN when combined with the question’s BOW feature alone, i.e. no PT is used. The results, reported in Figure 6, show that this time PASN performs better than PAS. This suggests that the dependencies between the nested PASs are in some way captured by the PT information. Indeed, it should be noted that we join predicates only in case one is subordinate to the other, thus considering only a restricted set of all possible predicate dependencies. However, the improvement over PAS confirms that PASN is the right direction to encode shallow semantics from different sentence predicates. Baseline P R F1-measure Gg@5 39.22±3.59 33.15±4.22 35.92±3.95 QA@5 39.72±3.44 34.22±3.63 36.76±3.56 Gg@all 31.58±0.58 100 48.02±0.67 QA@all 31.58±0.58 100 48.02±0.67 Gg QA Re-ranker MRR 48.97±3.77 56.21±3.18 81.12±2.12 Table 2: Baseline classifiers accuracy and MRR of YourQA (QA), Google (Gg) and the best re-ranker 4.3 Answer re-ranking The output of the answer classifier can be used to re-rank the list of candidate answers of a QA system. Starting from the top answer, each instance can be classified based on its correctness with respect to the question. If it is classified as correct its rank is unchanged; otherwise it is pushed down, until a lower ranked incorrect answer is found. We used the answer classifier with the highest F1measure on the development set according to different cost-factor values6. We applied such model to the Google ranks and to the ranks of our Web-based QA system, i.e. YourQA. The latter uses Web documents corresponding to the top 20 Google results for the question. Then, each sentence in each document is compared to the question via a blend of similarity metrics used in the answer extraction phase to select the most relevant sentence. A passage of up to 750 bytes is then created around the sentence and returned as an answer. Table 2 illustrates the results of the answer classifiers derived by exploiting Google (Gg) and YourQA (QA) ranks: the top N ranked results are considered as correct definitions and the remaining ones as in6However, by observing the curves in Fig. 5, the selected parameters appear as pessimistic estimates for the best model improvement: the one for BOW is the absolute maximum, but an average one is selected for the best model. 782 correct for different values of N. We show N = 5 and the maximum N (all), i.e. all the available answers. Each measure is the average of the Precision, Recall and F1-measure from cross validation. The F1-measure of Google and YourQA are greatly outperformed by our answer classifier. The last row of Table 2 reports the MRR7 achieved by Google, YourQA (QA) and YourQA after re-ranking (Re-ranker). We note that Google is outperformed by YourQA since its ranks are based on whole documents, not on single passages. 
Thus Google may rank a document containing several sparsely distributed question words higher than documents with several words concentrated in one passage, which are more interesting. When the answer classifier is applied to improve the YourQA ranking, the MRR reaches 81.1%, rising by about 25%. Finally, it is worth to note that the answer classifier based on Q(BOW)+A(BOW,PT,PAS) model (parameterized as described) gave a 4% higher MRR than the one based on the simple BOW features. As an example, for question “What is foreclosure?”, the sentence “Foreclosure means that the lender takes possession of your home and sells it in order to get its money back.” was correctly classified by the best model, while BOW failed. 5 Conclusion In this paper, we have introduced new structures to represent textual information in three question answering tasks: question classification, answer classification and answer re-ranking. We have defined tree structures (PAS and PASN) to represent predicateargument relations, which we automatically extract using our SRL system. We have also introduced two functions, SSTK and Kall, to exploit their representative power. Our experiments with SVMs and the above models suggest that syntactic information helps tasks such as question classification whereas semantic information contained in PAS and PASN gives promising results in answer classification. In the future, we aim to study ways to capture relations between predicates so that more general se7The Mean Reciprocal Rank is defined as: MRR = 1 n Pn i=1 1 ranki , where n is the number of questions and ranki is the rank of the first correct answer to question i. mantics can be encoded by PASN. Forms of generalization for predicates and arguments within PASNs like LSA clusters, WordNet synsets and FrameNet (roles and frames) information also appear as a promising research area. Acknowledgments We thank the anonymous reviewers for their helpful suggestions. Alessandro Moschitti would like to thank the AMI2 lab at the University of Trento and the EU project LUNA “spoken Language UNderstanding in multilinguAl communication systems” contract no 33549 for supporting part of his research. References J. Allan, J. Aslam, N. Belkin, and C. Buckley. 2002. Challenges in IR and language modeling. In Report of a Workshop at the University of Amherst. X. Carreras and L. M`arquez. 2005. Introduction to the CoNLL2005 shared task: SRL. In CoNLL-2005. Y. Chen, M. Zhou, and S. Wang. 2006. Reranking answers from definitional QA using language models. In ACL’06. M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In ACL’02. K. Collins-Thompson, J. Callan, E. Terra, and C. L.A. Clarke. 2004. The effect of document retrieval quality on factoid QA performance. In SIGIR’04. ACM. H. Cui, M. Kan, and T. Chua. 2005. Generic soft pattern models for definitional QA. In SIGIR’05. ACM. T. Joachims. 1999. Making large-scale SVM learning practical. In Advances in Kernel Methods - Support Vector Learning. H. Kazawa, H. Isozaki, and E. Maeda. 2001. NTT question answering system in TREC 2001. In TREC’01. P. Kingsbury and M. Palmer. 2002. From Treebank to PropBank. In LREC’02. C. C. T. Kwok, O. Etzioni, and D. S. Weld. 2001. Scaling question answering to the web. In WWW’01. X. Li and D. Roth. 2005. Learning question classifiers: the role of semantic information. Journ. Nat. Lang. Eng. A. Moschitti, B. Coppola, A. Giuglea, and R. Basili. 2005. 
Hierarchical semantic role labeling. In CoNLL 2005 shared task. A. Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In ECML’06. S. Quarteroni and S. Manandhar. 2006. User modelling for Adaptive Question Answering and Information Retrieval. In FLAIRS’06. E. M. Voorhees. 2001. Overview of the TREC 2001 QA track. In TREC’01. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journ. of Mach. Learn. Res. D. Zhang and W. Lee. 2003. Question classification using support vector machines. In SIGIR’03. ACM. 783
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 784–791, Prague, Czech Republic, June 2007. c⃝2007 Association for Computational Linguistics Language-independent Probabilistic Answer Ranking for Question Answering Jeongwoo Ko, Teruko Mitamura, Eric Nyberg Language Technologies Institute School of Computer Science Carnegie Mellon University {jko, teruko, ehn}@cs.cmu.edu Abstract This paper presents a language-independent probabilistic answer ranking framework for question answering. The framework estimates the probability of an individual answer candidate given the degree of answer relevance and the amount of supporting evidence provided in the set of answer candidates for the question. Our approach was evaluated by comparing the candidate answer sets generated by Chinese and Japanese answer extractors with the re-ranked answer sets produced by the answer ranking framework. Empirical results from testing on NTCIR factoid questions show a 40% performance improvement in Chinese answer selection and a 45% improvement in Japanese answer selection. 1 Introduction Question answering (QA) systems aim at finding precise answers to natural language questions from large document collections. Typical QA systems (Prager et al., 2000; Clarke et al., 2001; Harabagiu et al., 2000) adopt a pipeline architecture that incorporates four major steps: (1) question analysis, (2) document retrieval, (3) answer extraction and (4) answer selection. Question analysis is a process which analyzes a question and produces a list of keywords. Document retrieval is a step that searches for relevant documents or passages. Answer extraction extracts a list of answer candidates from the retrieved documents. Answer selection is a process which pinpoints correct answer(s) from the extracted candidate answers. Since the first three steps in the QA pipeline may produce erroneous outputs, the final answer selection step often entails identifying correct answer(s) amongst many incorrect ones. For example, given the question “Which Chinese city has the largest number of foreign financial companies?”, the answer extraction component produces a ranked list of five answer candidates: Beijing (AP880603-0268)1, Hong Kong (WSJ920110-0013), Shanghai (FBIS358), Taiwan (FT942-2016) and Shanghai (FBIS345320). Due to imprecision in answer extraction, an incorrect answer (“Beijing”) can be ranked in the first position, and the correct answer (“Shanghai”) was extracted from two different documents and ranked in the third and the fifth positions. In order to rank “Shanghai” in the top position, we have to address two interesting challenges: • Answer Similarity. How do we exploit similarity among answer candidates? For example, when the candidates list contains redundant answers (e.g., “Shanghai” as above) or several answers which represent a single instance (e.g. “U.S.A.” and “the United States”), how much should we boost the rank of the redundant answers? • Answer Relevance. How do we identify relevant answer(s) amongst irrelevant ones? This task may involve searching for evidence of a relationship between the answer 1Answer candidates are shown with the identifier of the TREC document where they were found. 784 and the answer type or a question keyword. For example, we might wish to query a knowledge base to determine if “Shanghai” is a city (IS-A(Shanghai, city)), or to determine if Shanghai is in China (IS-IN(Shanghai, China)). 
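The kind of knowledge-base lookup mentioned above (e.g. IS-A(Shanghai, city)) can be approximated with WordNet hypernyms. The snippet below is only a rough illustration using NLTK; WordNet's coverage of proper names is limited, which is one reason the framework described later also relies on gazetteers, and an IS-IN check would consult a gazetteer rather than WordNet.

```python
from nltk.corpus import wordnet as wn

def is_a(candidate: str, expected_type: str) -> bool:
    """Rough IS-A test: does some WordNet sense of `candidate` reach a
    hypernym whose lemmas include `expected_type`?"""
    target = expected_type.lower()

    def related(s):
        # instance_hypernyms covers proper nouns such as "Shanghai" -> "city"
        return s.hypernyms() + s.instance_hypernyms()

    for synset in wn.synsets(candidate.replace(" ", "_")):
        for hypernym in synset.closure(related):
            if any(l.name().lower() == target for l in hypernym.lemmas()):
                return True
    return False

# e.g. is_a("Shanghai", "city") should hold if the installed WordNet lists
# Shanghai as an instance of city; is_a("Taiwan", "city") should not.
```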
The first challenge is to exploit redundancy in the set of answer candidates. As answer candidates are extracted from different documents, they may contain identical, similar or complementary text snippets. For example, “U.S.” can appear as “United States” or “USA” in different documents. It is important to detect redundant information and boost answer confidence, especially for list questions that require a set of unique answers. One approach is to perform answer clustering (Nyberg et al., 2002; Jijkoun et al., 2006). However, the use of clustering raises additional questions: how to calculate the score of the clustered answers, and how to select the cluster label. To address the second question, several answer selection approaches have used external knowledge resources such as WordNet, CYC and gazetteers for answer validation or answer reranking. Answer candidates are either removed or discounted if they are not of the expected answer type (Xu et al., 2002; Moldovan et al., 2003; Chu-Carroll et al., 2003; Echihabi et al., 2004). The Web also has been used for answer reranking by exploiting search engine results produced by queries containing the answer candidate and question keywords (Magnini et al., 2002). This approach has been used in various languages for answer validation. Wikipedia’s structured information was used for Spanish answer type checking (Buscaldi and Rosso, 2006). Although many QA systems have incorporated individual features and/or resources for answer selection in a single language, there has been little research on a generalized probabilistic framework that supports answer ranking in multiple languages using any answer relevance and answer similarity features that are appropriate for the language in question. In this paper, we describe a probabilistic answer ranking framework for multiple languages. The framework uses logistic regression to estimate the probability that an answer candidate is correct given multiple answer relevance features and answer similarity features. An existing framework which was originally developed for English (Ko et al., 2007) was extended for Chinese and Japanese answer ranking by incorporating language-specific features. Empirical results on NTCIR Chinese and Japanese factoid questions show that the framework significantly improved answer selection performance; Chinese performance improved by 40% over the baseline, and Japanese performance improved by 45% over the baseline. The remainder of this paper is organized as follows: Section 2 contains an overview of the answer ranking task. Section 3 summarizes the answer ranking framework. In Section 4, we explain how we extended the framework by incorporating languagespecific features. Section 5 describes the experimental methodology and results. Finally, Section 6 concludes with suggestions for future research. 2 Answer Ranking Task The relevance of an answer to a question can be estimated by the probability P(correct(Ai) |Ai, Q), where Q is a question and Ai is an answer candidate. To exploit answer similarity, we estimate the probability P(correct(Ai) |Ai, Aj), where Aj is similar to Ai. Since both probabilities influence overall answer ranking performance, it is important to combine them in a unified framework and estimate the probability of an answer candidate as: P(correct(Ai)|Q, A1, ..., An). The estimated probability is used to rank answer candidates and select final answers from the list. For factoid questions, the top answer is selected as a final answer to the question. 
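Written out, the quantities introduced in this section and the factoid selection rule are as follows; the argmax formulation is simply a restatement of "the top answer is selected".

```latex
% answer relevance and pairwise answer similarity:
P(\mathrm{correct}(A_i) \mid A_i, Q), \qquad P(\mathrm{correct}(A_i) \mid A_i, A_j)
% unified estimate over the full candidate set, used for ranking:
P(\mathrm{correct}(A_i) \mid Q, A_1, \ldots, A_n)
% factoid selection rule:
\hat{A} \;=\; \arg\max_{1 \le i \le n} \, P(\mathrm{correct}(A_i) \mid Q, A_1, \ldots, A_n)
```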
In addition, we can use the estimated probability to classify incorrect answers: if the probability of an answer candidate is lower than 0.5, it is considered to be a wrong answer and is filtered out of the answer list. This is useful in deciding whether or not a valid answer to a question exists in a given corpus (Voorhees, 2002). The estimated probability can also be used in conjunction with a cutoff threshold when selecting multiple answers to list questions. 3 Answer Ranking Framework This section summarizes our answer ranking framework, originally developed for English answers (Ko 785 P(correct(Ai)|Q, A1, ..., An) ≈P(correct(Ai)|rel1(Ai), ..., relK1(Ai), sim1(Ai), ..., simK2(Ai)) = exp(α0 + K1 P k=1 βkrelk(Ai) + K2 P k=1 λksimk(Ai)) 1 + exp(α0 + K1 P k=1 βkrelk(Ai) + K2 P k=1 λksimk(Ai)) where, simk(Ai) = N X j=1(j̸=i) sim′ k(Ai, Aj). Figure 1: Estimating correctness of an answer candidate given a question and a set of answer candidates et al., 2007). The model uses logistic regression to estimate the probability of an answer candidate (Figure 1). Each relk(Ai) is a feature function used to produce an answer relevance score for an answer candidate Ai. Each sim′ k(Ai, Aj) is a similarity function used to calculate an answer similarity between Ai and Aj. K1 and K2 are the number of answer relevance and answer similarity features, respectively. N is the number of answer candidates. To incorporate multiple similarity features, each simk(Ai) is obtained from an individual similarity metric, sim′ k(Ai, Aj). For example, if Levenshtein distance is used as one similarity metric, simk(Ai) is calculated by summing N-1 Levenshtein distances between one answer candidate and all other candidates. The parameters α, β, λ were estimated from training data by maximizing the log likelihood. We used the Quasi-Newton algorithm (Minka, 2003) for parameter estimation. Multiple features were used to generate answer relevance scores and answer similarity scores; these are discussed below. 3.1 Answer Relevance Features Answer relevance features can be classified into knowledge-based features or data-driven features. 1) Knowledge-based features Gazetteers: Gazetteers provide geographic information, which allows us to identify strings as instances of countries, their cities, continents, capitals, etc. For answer ranking, three gazetteer resources were used: the Tipster Gazetteer, the CIA World Factbook and information about the US states provided by 50states.com. These resources were used to assign an answer relevance score between -1 and 1 to each candidate. For example, given the question “Which city in China has the largest number of foreign financial companies?”, the candidate “Shanghai” receives a score of 0.5 because it is a city in the gazetteers. But “Taiwan” receives a score of -1.0 because it is not a city in the gazetteers. A score of 0 means the gazetteers did not contribute to the answer selection process for that candidate. Ontology: Ontologies such as WordNet contain information about relationships between words and general meaning types (synsets, semantic categories, etc.). WordNet was used to identify answer relevance in a manner analogous to the use of gazetteers. For example, given the question “Who wrote the book ’Song of Solomon’?”, the candidate “Mark Twain” receives a score of 0.5 because its hypernyms include “writer”. 2) Data-driven features Wikipedia: Wikipedia was used to generate an answer relevance score. 
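As a concrete illustration of the model in Figure 1, the score of one candidate can be computed as in the sketch below. The relevance and similarity feature functions are placeholders for those described in this section, and parameter fitting (maximizing the log likelihood with the Quasi-Newton method) is not shown; each feature discussed in Sections 3.1 and 3.2, including the Wikipedia score introduced above, supplies one of the rel_k or sim'_k terms.

```python
import math

def candidate_score(i, candidates, rel_feats, sim_feats, alpha0, betas, lambdas):
    """Logistic-regression estimate of P(correct(A_i) | features) as in Figure 1.

    rel_feats: feature functions rel_k(A_i) -> float        (answer relevance)
    sim_feats: feature functions sim'_k(A_i, A_j) -> float  (pairwise similarity)
    """
    a_i = candidates[i]
    # Each aggregated similarity feature sums pairwise scores against all
    # other candidates: sim_k(A_i) = sum over j != i of sim'_k(A_i, A_j).
    sims = [sum(s(a_i, a_j) for j, a_j in enumerate(candidates) if j != i)
            for s in sim_feats]
    rels = [r(a_i) for r in rel_feats]
    z = (alpha0
         + sum(b * r for b, r in zip(betas, rels))
         + sum(l * s for l, s in zip(lambdas, sims)))
    return 1.0 / (1.0 + math.exp(-z))   # equals exp(z) / (1 + exp(z))
```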
If there is a Wikipedia document whose title matches an answer candidate, the document is analyzed to obtain the term frequency (tf) and the inverse term frequency (idf) of the candidate, from which a tf.idf score is calculated. When there is no matched document, each question keyword is also processed as a back-off strategy, and the answer relevance score is calculated by summing the tf.idf scores obtained from individual keywords. Google: Following Magnini et al. (2002), a query consisting of an answer candidate and question key786 words was sent to the Google search engine. Then the top 10 text snippets returned by Google were analyzed to generate an answer relevance score by computing the minimum number of words between a keyword and the answer candidate. 3.2 Answer Similarity Features Answer similarity is calculated using multiple string distance metrics and a list of synonyms. String Distance Metrics: String distance metrics such as Levenshtein, Jaro-Winkler, and Cosine similarity were used to calculate the similarity between two English answer candidates. Synonyms: Synonyms can be used as another metric to calculate answer similarity. If one answer is synonym of another answer, the score is 1. Otherwise the score is 0. To get a list of synonyms, three knowledge bases were used: WordNet, Wikipedia and the CIA World Factbook. In addition, manually generated rules were used to obtain synonyms for different types of answer candidates. For example, “April 12 1914” and “12th Apr. 1914” are converted into “1914-04-12” and treated as synonyms. 4 Extensions for Multiple Languages We extended the framework for Chinese and Japanese QA. This section details how we incorporated language-specific resources into the framework. As logistic regression is based on a probabilistic framework, the model does not need to be changed to support other languages. We only retrained the model for individual languages. To support Chinese and Japanese QA, we incorporated new features for individual languages. 4.1 Answer Relevance Features We replaced the English gazetteers and WordNet with language-specific resources for Japanese and Chinese. As Wikipedia and the Web support multiple languages, the same algorithm was used in searching language-specific corpora for the two languages. 1) Knowledge-based features The knowledge-based features involve searching for facts in a knowledge base such as gazetteers and WordNet. We utilized comparable resources for Chinese and Japanese. Using language-specific re#Articles Language Nov. 2005 Aug. 2006 English 1,811,554 3,583,699 Japanese 201,703 446,122 Chinese 69,936 197,447 Table 1: Articles in Wikipedia for different languages sources, the same algorithms were applied to generate an answer relevance score between -1 and 1. Gazetteers: There are few available gazetteers for Chinese and Japanese. Therefore, we extracted location data from language-specific resources. For Japanese, we extracted Japanese location information from Yahoo2, which contains many location names in Japan and the relationships among them. For Chinese, we extracted location names from the Web. In addition, we translated country names provided by the CIA World Factbook and the Tipster gazetteers into Chinese and Japanese names. As there is more than one translation, top 3 translations were used. Ontology: For Chinese, we used HowNet (Dong, 2000) which is a Chinese version of WordNet. It contains 65,000 Chinese concepts and 75,000 corresponding English equivalents. 
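Whatever the underlying resource (a gazetteer, WordNet or HowNet), the scoring convention described above reduces to a simple lookup. The sketch below uses the -1 / 0 / 0.5 values from the examples given earlier; resource.lookup is an assumed helper, not an actual API of any of these resources.

```python
def knowledge_based_relevance(candidate, expected_type, resource):
    """Score in [-1, 1] from a gazetteer or ontology.

    resource.lookup(candidate) is assumed to return the set of types or
    hypernyms the resource knows for the candidate, or None when the
    candidate is not covered at all.
    """
    known_types = resource.lookup(candidate)
    if known_types is None:
        return 0.0    # the resource contributes nothing for this candidate
    if expected_type in known_types:
        return 0.5    # e.g. "Shanghai" is listed as a city
    return -1.0       # e.g. "Taiwan" is covered, but not as a city
```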
For Japanese, we used semantic classes provided by Gengo GoiTaikei3. Gengo GoiTaikei is a Japanese lexicon containing 300,000 Japanese words with their associated 3,000 semantic classes. The semantic information provided by HowNet and Gengo GoiTaikei was used to assign an answer relevance score between -1 and 1. 2) Data-driven features Wikipedia: As Wikipedia supports more than 200 language editions, the approach used in English can be used for different languages without any modification. Table 1 shows the number of text articles in three different languages. Wikipedia’s current coverage in Japanese and Chinese does not match its coverage in English, but coverage in these languages continues to improve. To supplement the small corpus of Chinese documents available, we used Baidu 2http://map.yahoo.co.jp/ 3http://www.kecl.ntt.co.jp/mtg/resources/GoiTaikei 787 (http://baike.baidu.com), which is similar to Wikipedia but contains more articles written in Chinese. We first search for Chinese Wikipedia. When there is no matching document in Wikipedia, each answer candidate is sent to Baidu and the retrieved document is analyzed in the same way to analyze Wikipedia documents. The idf score was calculated using word statistics from Japanese Yomiuri newspaper corpus and the NTCIR Chinese corpus. Google: The same algorithm was applied to analyze Japanese and Chinese snippets returned from Google. But we restricted the language to Chinese or Japanese so that Google returned only Chinese or Japanese documents. To calculate the distance between an answer candidate and question keywords, segmentation was done with linguistic tools. For Japanese, Chasen4 was used. For Chinese segmentation, a maximum-entropy based parser was used (Wang et al., 2006). 3) Manual Filtering Other than the features mentioned above, we manually created many rules for numeric and temporal questions to filter out invalid answers. For example, when the question is looking for a year as an answer, an answer candidate which contains only the month receives a score of -1. Otherwise, the score is 0. 4.2 Answer Similarity Features The same features used for English were applied to calculate the similarity of Chinese/Japanese answer candidates. To identify synonyms, Wikipedia were used for both Chinese and Japanese. EIJIRO dictionary was used to obtain Japanese synonyms. EIJIRO is a English-Japanese dictionary containing 1,576,138 words and provides synonyms for Japanese words. As there are several different ways to represent temporal and numeric expressions (Nyberg et al., 2002; Greenwood, 2006), language-specific conversion rules were applied to convert them into a canonical format; for example, a rule to convert Japanese Kanji characters to Arabic numbers is shown in Figure 2. 4http://chasen.aist-nara.ac.jp/hiki/ChaSen 0.25 四分の一 1993-07-04 1993 年 7 月4 日 50 % 5割 1993-07-04 一九九三年七月四日 3E+11 円 3,000億円 3E+11 円 三千 億 円 Normalized answer string Original answer string Figure 2: Example of normalized answer strings 5 Experiments This section describes the experiments to evaluate the extended answer ranking framework for Chinese and Japanese QA. 5.1 Experimental Setup We used Chinese and Japanese questions provided by the NTCIR (NII Test Collection for IR Systems), which focuses on evaluating cross-lingual and monolingual QA tasks for Chinese, Japanese and English. For Chinese, a total of 550 factoid questions from the NTCIR5-6 QA evaluations served as the dataset. 
Among them, 200 questions were used to train the Chinese answer extractor and 350 questions were used to evaluate our answer ranking framework. For Japanese, 700 questions from the NTCIR5-6 QA evaluations served as the dataset. Among them, 300 questions were used to train the Japanese answer extractor and 400 questions were used to evaluate our framework. Both the Chinese and Japanese answer extractors use maximum-entropy to extract answer candidates based on multiple features such as named entity, dependency structures and some language-dependent features. Performance of the answer ranking framework was measured by average answer accuracy: the number of correct top answers divided by the number of questions where at least one correct answer exists in the candidate list provided by an extractor. Mean Reciprocal Rank (MRR5) was also used to calculate the average reciprocal rank of the first correct answer in the top 5 answers. The baseline for average answer accuracy was calculated using the answer candidate likelihood scores provided by each individual extractor; the 788 TOP1 TOP3 MRR5 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Japanese Answer Selection Baseline Framework TOP1 TOP3 MRR5 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Chinese Answer Selection Avgerage Accuracy Baseline Framework Figure 3: Performance of the answer ranking framework for Chinese and Japanese answer selection (TOP1: average accuracy of top answer, TOP3: average accuracy of top 3 answers, MRR5: average of mean reciprocal rank of top 5 answers) answer with the best extractor score was chosen, and no validation or similarity processing was performed. 3-fold cross-validation was performed, and we used a version of Wikipedia downloaded in Aug 2006. 5.2 Results and Analysis We first analyzed the average accuracy of top 1, top3 and top 5 answers. Figure 3 compares the average accuracy using the baseline and the answer selection framework. As can be seen, the answer ranking framework significantly improved performance on both Chinese and Japanese answer selection. As for the average top answer accuracy, there were 40% improvement over the baseline (Chinese) and 45% improvement over the baseline (Japanese). We also analyzed the degree to which the average accuracy was affected by answer similarity and relevance features. Table 2 compares the average top answer accuracy using the baseline, the answer relevance features, the answer similarity features and all feature combinations. Both the similarity and the relevance features significantly improved answer selection performance compared to the baseline, and combining both sets of features together produced the best performance. We further analyzed the utility of individual relevance features (Figure 4). For both languages, filtering was useful in ruling out wrong answers. The imBaseline Rel Sim All Chinese 0.442 0.482 0.597 0.619 Japanese 0.367 0.463 0.502 0.532 Table 2: Average top answer accuracy of individual features (Rel: merging relevance features, Sim: merging similarity features, ALL: merging all features). pact of the ontology was more positive for Japanese; we assume that this is because the Chinese ontology (HowNet) contains much less information overall than the Japanese ontology (Gengo GoiTaikei). The comparative impact of Wikipedia was similar. For Chinese, there were many fewer Wikipedia documents available. Even though we used Baidu as a supplemental resource for Chinese, this did not improve answer selection performance. 
On the other hand, the use of Wikipedia was very helpful for Japanese, improving performance by 26% over the baseline. This shows that the quality of answer relevance estimation is significantly affected by resource coverage. When comparing the data-driven features with the knowledge-based features, the data-driven features (such as Wikipedia and Google) tended to increase performance more than the knowledge-based features (such as gazetteers and WordNet). Table 3 shows the effect of individual similarity features on Chinese and Japanese answer selec789 Baseline FIL ONT GAZ GL WIKI All 0.30 0.35 0.40 0.45 0.50 0.55 Avg. Top Answer Accuracy Chinese Japanese Figure 4: Average top answer accuracy of individual answer relevance features.(FIL: filtering, ONT, ontology, GAZ: gazetteers, GL: Google, WIKI: Wikipedia, ALL: combination of all relevance features) Chinese Japanese 0.3 0.5 0.3 0.5 Cosine 0.597 0.597 0.488 0.488 Jaro-Winkler 0.544 0.518 0.410 0.415 Levenshtein 0.558 0.544 0.434 0.449 Synonyms 0.527 0.527 0.493 0.493 All 0.588 0.580 0.502 0.488 Table 3: Average accuracy using individual similarity features under different thresholds: 0.3 and 0.5 (“All”: combination of all similarity metrics) tion. As some string similarity features (e.g., Levenshtein distance) produce a number between 0 and 1 (where 1 means two strings are identical and 0 means they are different), similarity scores less than a threshold can be ignored. We used two thresholds: 0.3 and 0.5. In our experiments, using 0.3 as a threshold produced better results in Chinese. In Japanese, 0,5 was a better threshold for individual features. Among three different string similarity features (Levenshtein, Jaro-Winkler and Cosine similarity), cosine similarity tended to perform better than the others. When comparing synonym features with string similarity features, synonyms performed better than string similarity in Japanese, but not in Chinese. We had many more synonyms available for Japanese Data-driven features All features Chinese 0.606 0.619 Japanese 0.517 0.532 Table 4: Average top answer accuracy when using data-driven features v.s. when using all features. and they helped the system to better exploit answer redundancy. We also analyzed answer selection performance when combining all four similarity features (“All” in Table 3). Combining all similarity features improved the performance in Japanese, but hurt the performance in Chinese, because adding a small set of synonyms to the string metrics worsened the performance of logistic regression. 5.3 Utility of data-driven features In our experiments we used data-driven features as well as knowledge-based features. As knowledge-based features need manual effort to access language-specific resources for individual languages, we conducted an additional experiment only with data-driven features in order to see how much performance gain is available without the manual work. As Google, Wikipedia and string similarity metrics can be used without any additional manual effort when extended to other languages, we used these three features and compared the performance. Table 4 shows the performance when using datadriven features v.s. all features. It can be seen that data-driven features alone achieved significant improvement over the baseline. This indicates that the framework can easily be extended to any language where appropriate data resources are available, even if knowledge-based features and resources for the language are still under development. 
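For completeness, the two measures used throughout this section, average top answer accuracy and MRR5, can be computed as in the sketch below; representing a question by its ranked candidate list plus a correctness test is our own simplification.

```python
def average_accuracy_and_mrr5(questions):
    """questions: iterable of (ranked_candidates, is_correct) pairs.

    Only questions whose candidate list contains at least one correct
    answer are counted, as described in Section 5.1.
    """
    scorable, top1_hits, rr_sum = 0, 0, 0.0
    for ranked, is_correct in questions:
        if not any(is_correct(c) for c in ranked):
            continue                      # the extractor never found the answer
        scorable += 1
        if is_correct(ranked[0]):
            top1_hits += 1                # numerator of average answer accuracy
        for rank, cand in enumerate(ranked[:5], start=1):
            if is_correct(cand):
                rr_sum += 1.0 / rank      # MRR5: first correct answer in top 5
                break
    return top1_hits / scorable, rr_sum / scorable
```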
6 Conclusion In this paper, we presented a generalized answer selection framework which was applied to Chinese and Japanese question answering. An empirical evaluation using NTCIR test questions showed that the framework significantly improves baseline answer selection performance. For Chinese, the performance improved by 40% over the baseline. For Japanese, the performance improved by 45% over 790 the baseline. This shows that our probabilistic framework can be easily extended for multiple languages by reusing data-driven features (with new corpora) and adding language-specific resources (ontologies, gazetteers) for knowledge-based features. In our previous work, we evaluated the performance of the framework for English QA using questions from past TREC evaluations (Ko et al., 2007). The experimental results showed that the combination of all answer ranking features improved performance by an average of 102% over the baseline. The relevance features improved performance by an average of 99% over the baseline, and the similarity features improved performance by an average of 46% over the baseline. Our hypothesis is that answer relevance features had a greater impact for English QA because the quality and coverage of the data resources available for English answer validation is much higher than the quality and coverage of existing resources for Japanese and Chinese. In future work, we will continue to evaluate the robustness of the framework. It is also clear from our comparison with English QA that more work can and should be done in acquiring data resources for answer validation in Chinese and Japanese. Acknowledgments We would like to thank Hideki Shima, Mengqiu Wang, Frank Lin, Justin Betteridge, Matthew Bilotti, Andrew Schlaikjer and Luo Si for their valuable support. This work was supported in part by ARDA/DTO AQUAINT program award number NBCHC040164. References D. Buscaldi and P. Rosso. 2006. Mining Knowledge from Wikipedia for the Question Answering task. In Proceedings of the International Conference on Language Resources and Evaluation. J. Chu-Carroll, J. Prager, C. Welty, K. Czuba, and D. Ferrucci. 2003. A Multi-Strategy and Multi-Source Approach to Question Answering. In Proceedings of Text REtrieval Conference. C. Clarke, G. Cormack, and T. Lynam. 2001. Exploiting redundancy in question answering. In Proceedings of SIGIR. Zhendong Dong. 2000. Hownet: http://www.keenage.com. A. Echihabi, U. Hermjakob, E. Hovy, D. Marcu, E. Melz, and D. Ravichandran. 2004. How to select an answer string? In T. Strzalkowski and S. Harabagiu, editors, Advances in Textual Question Answering. Kluwer. Mark A. Greenwood. 2006. Open-Domain Question Answering. Thesis. S. Harabagiu, D. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. Bunsecu, R. Girju, V. Rus, and P. Morarescu. 2000. Falcon: Boosting knowledge for answer engines. In Proceedings of TREC. V. Jijkoun, J. van Rantwijk, D. Ahn, E. Tjong Kim Sang, and M. de Rijke. 2006. The University of Amsterdam at CLEF@QA 2006. In Working Notes CLEF. J. Ko, L. Si, and E. Nyberg. 2007. A Probabilistic Framework for Answer Selection in Question Answering. In Proceedings of NAACL/HLT. B. Magnini, M. Negri, R. Pervete, and H. Tanev. 2002. Comparing statistical and content-based techniques for answer validation on the web. In Proceedings of the VIII Convegno AI*IA. T. Minka. 2003. A Comparison of Numerical Optimizers for Logistic Regression. Unpublished draft. D. Moldovan, D. Clark, S. Harabagiu, and S. Maiorano. 2003. 
Cogex: A logic prover for question answering. In Proceedings of HLT-NAACL. E. Nyberg, T. Mitamura, J. Carbonell, J. Callan, K. Collins-Thompson, K. Czuba, M. Duggan, L. Hiyakumoto, N. Hu, Y. Huang, J. Ko, L. Lita, S. Murtagh, V. Pedro, and D. Svoboda. 2002. The JAVELIN Question-Answering System at TREC 2002. In Proceedings of Text REtrieval Conference. J. Prager, E. Brown, A. Coden, and D. Radev. 2000. Question answering by predictive annotation. In Proceedings of SIGIR. E. Voorhees. 2002. Overview of the TREC 2002 question answering track. In Proceedings of Text REtrieval Conference. M. Wang, K. Sagae, and T. Mitamura. 2006. A Fast, Accurate Deterministic Parser for Chinese. In Proceedings of COLING/ACL. J. Xu, A. Licuanan, J. May, S. Miller, and R. Weischedel. 2002. TREC 2002 QA at BBN: Answer Selection and Confidence Estimation. In Proceedings of Text REtrieval Conference. 791
2007
99
Proceedings of ACL-08: HLT, pages 1–9, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Mining Wiki Resources for Multilingual Named Entity Recognition Alexander E. Richman Patrick Schone Department of Defense Department of Defense Washington, DC 20310 Fort George G. Meade, MD 20755 [email protected] [email protected] Abstract In this paper, we describe a system by which the multilingual characteristics of Wikipedia can be utilized to annotate a large corpus of text with Named Entity Recognition (NER) tags requiring minimal human intervention and no linguistic expertise. This process, though of value in languages for which resources exist, is particularly useful for less commonly taught languages. We show how the Wikipedia format can be used to identify possible named entities and discuss in detail the process by which we use the Category structure inherent to Wikipedia to determine the named entity type of a proposed entity. We further describe the methods by which English language data can be used to bootstrap the NER process in other languages. We demonstrate the system by using the generated corpus as training sets for a variant of BBN's Identifinder in French, Ukrainian, Spanish, Polish, Russian, and Portuguese, achieving overall F-scores as high as 84.7% on independent, human-annotated corpora, comparable to a system trained on up to 40,000 words of human-annotated newswire. 1 Introduction Named Entity Recognition (NER) has long been a major task of natural language processing. Most of the research in the field has been restricted to a few languages and almost all methods require substantial linguistic expertise, whether creating a rulebased technique specific to a language or manually annotating a body of text to be used as a training set for a statistical engine or machine learning. In this paper, we focus on using the multilingual Wikipedia (wikipedia.org) to automatically create an annotated corpus of text in any given language, with no linguistic expertise required on the part of the user at run-time (and only English knowledge required during development). The expectation is that for any language in which Wikipedia is sufficiently well-developed, a usable set of training data can be obtained with minimal human intervention. As Wikipedia is constantly expanding, it follows that the derived models are continually improved and that increasingly many languages can be usefully modeled by this method. In order to make sure that the process is as language-independent as possible, we declined to make use of any non-English linguistic resources outside of the Wikimedia domain (specifically, Wikipedia and the English language Wiktionary (en.wiktionary.org)). In particular, we did not use any semantic resources such as WordNet or part of speech taggers. We used our automatically annotated corpus along with an internally modified variant of BBN's IdentiFinder (Bikel et al., 1999), specifically modified to emphasize fast text processing, called “PhoenixIDF,” to create several language models that could be tested outside of the Wikipedia framework. We built on top of an existing system, and left existing lists and tables intact. Depending on language, we evaluated our derived models against human or machine annotated data sets to test the system. 2 Wikipedia 2.1 Structure Wikipedia is a multilingual, collaborative encyclopedia on the Web which is freely available for research purposes. 
As of October 2007, there were over 2 million articles in English, with versions available in 250 languages. This includes 30 languages with at least 50,000 articles and another 40 with at least 10,000 articles. Each language is available for download (download.wikimedia.org) in a text format suitable for inclusion in a database. For the remainder of this paper, we refer to this format. 1 Within Wikipedia, we take advantage of five major features: • Article links, links from one article to another of the same language; • Category links, links from an article to special “Category” pages; • Interwiki links, links from an article to a presumably equivalent, article in another language; • Redirect pages, short pages which often provide equivalent names for an entity; and • Disambiguation pages, a page with little content that links to multiple similarly named articles. The first three types are collectively referred to as wikilinks. A typical sentence in the database format looks like the following: “Nescopeck Creek is a [[tributary]] of the [[North Branch Susquehanna River]] in [[Luzerne County, Pennsylvania|Luzerne County]].” The double bracket is used to signify wikilinks. In this snippet, there are three articles links to English language Wikipedia pages, titled “Tributary,” “North Branch Susquehanna River,” and “Luzerne County, Pennsylvania.” Notice that in the last link, the phrase preceding the vertical bar is the name of the article, while the following phrase is what is actually displayed to a visitor of the webpage. Near the end of the same article, we find the following representations of Category links: [[Category:Luzerne County, Pennsylvania]], [[Category:Rivers of Pennsylvania]], {{Pennsylvania-geo-stub}}. The first two are direct links to Category pages. The third is a link to a Template, which (among other things) links the article to “Category:Pennsylvania geography stubs”. We will typically say that a given entity belongs to those categories to which it is linked in these ways. The last major type of wikilink is the link between different languages. For example, in the Turkish language article “Kanuni Sultan Süleyman” one finds a set of links including [[en:Suleiman the Magnificent]] and [[ru:Сулейман I]]. These represent links to the English language article “Suleiman the Magnificent” and the Russian language article “Сулейман I.” In almost all cases, the articles linked in this manner represent articles on the same subject. A redirect page is a short entry whose sole purpose is to direct a query to the proper page. There are a few reasons that redirect pages exist, but the primary purpose is exemplified by the fact that “USA” is an entry which redirects the user to the page entitled “United States.” That is, in the vast majority of cases, redirect pages provide another name for an entity. A disambiguation page is a special article which contains little content but typically lists a number of entries which might be what the user was seeking. For instance, the page “Franklin” contains 70 links, including the singer “Aretha Franklin,” the town “Franklin, Virginia,” the “Franklin River” in Tasmania, and the cartoon character “Franklin (Peanuts).” Most disambiguation pages are in Category:Disambiguation or one of its subcategories. 2.2 Related Studies Wikipedia has been the subject of a considerable amount of research in recent years including Gabrilovich and Markovitch (2007), Strube and Ponzetto (2006), Milne et al. (2006), Zesch et al. (2007), and Weale (2007). 
The most relevant to our work are Kazama and Torisawa (2007), Toral and Muñoz (2006), and Cucerzan (2007). More details follow, but it is worth noting that all known prior results are fundamentally monolingual, often developing algorithms that can be adapted to other languages pending availability of the appropriate semantic resource. In this paper, we emphasize the use of links between articles of different languages, specifically between English (the largest and best linked Wikipedia) and other languages. Toral and Muñoz (2006) used Wikipedia to create lists of named entities. They used the first sentence of Wikipedia articles as likely definitions of the article titles, and used them to attempt to classify the titles as people, locations, organizations, or none. Unlike the method presented in this paper, their algorithm relied on WordNet (or an equivalent resource in another language). The authors noted that their results would need to pass a manual supervision step before being useful for the NER task, and thus did not evaluate their results in the context of a full NER system. Similarly, Kazama and Torisawa (2007) used Wikipedia, particularly the first sentence of each article, to create lists of entities. Rather than building entity dictionaries associating words and 2 phrases to the classical NER tags (PERSON, LOCATION, etc.) they used a noun phrase following forms of the verb “to be” to derive a label. For example, they used the sentence “Franz Fischler ... is an Austrian politician” to associate the label “politician” to the surface form “Franz Fischler.” They proceeded to show that the dictionaries generated by their method are useful when integrated into an NER system. We note that their technique relies upon a part of speech tagger, and thus was not appropriate for inclusion as part of our non-English system. Cucerzan (2007), by contrast to the above, used Wikipedia primarily for Named Entity Disambiguation, following the path of Bunescu and Paşca (2006). As in this paper, and unlike the above mentioned works, Cucerzan made use of the explicit Category information found within Wikipedia. In particular, Category and related listderived data were key pieces of information used to differentiate between various meanings of an ambiguous surface form. Unlike in this paper, Cucerzan did not make use of the Category information to identify a given entity as a member of any particular class. We also note that the NER component was not the focus of the research, and was specific to the English language. 3 Training Data Generation 3.1 Initial Set-up and Overview Our approach to multilingual NER is to pull back the decision-making process to English whenever possible, so that we could apply some level of linguistic expertise. In particular, by focusing on only one language, we could take maximum advantage of the Category structure, something very difficult to do in the general multilingual case. For computational feasibility, we downloaded various language Wikipedias and the English language Wiktionary in their text (.xml) format and stored each language as a table within a single MySQL database. We only stored the title, id number, and body (the portion between the <TEXT> and </TEXT> tags) of each article. We elected to use the ACE Named Entity types PERSON, GPE (Geo-Political Entities), ORGANIZATION, VEHICLE, WEAPON, LOCATION, FACILITY, DATE, TIME, MONEY, and PERCENT. 
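A minimal sketch of this setup combines the type inventory just listed with a scan of the wikilink conventions from Section 2.1; the regular expressions below are our illustrative approximations, not the system's actual parsing code.

```python
import re

# The entity type inventory used when assigning ENAMEX/NUMEX-style tags.
ACE_TYPES = {"PERSON", "GPE", "ORGANIZATION", "VEHICLE", "WEAPON",
             "LOCATION", "FACILITY", "DATE", "TIME", "MONEY", "PERCENT"}

WIKILINK = re.compile(r"\[\[([^\[\]|]+)(?:\|([^\[\]]+))?\]\]")

def scan_wikilinks(body):
    """Split the [[...]] links in a stored article body into article links,
    category links and interwiki (language) links."""
    articles, categories, interwiki = [], [], {}
    for target, display in WIKILINK.findall(body):
        if target.startswith("Category:"):
            categories.append(target[len("Category:"):])
        elif re.match(r"^[a-z]{2,3}:", target):   # e.g. [[en:Suleiman the Magnificent]]
            lang, _, title = target.partition(":")
            interwiki[lang] = title
        else:
            # Display text defaults to the article title when no "|" is present.
            articles.append((target, display or target))
    return articles, categories, interwiki
```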
Of course, if some of these types were not marked in an existing corpus or not needed for a given purpose, the system can easily be adapted. Our goal was to automatically annotate the text portion of a large number of non-English articles with tags like <ENAMEX TYPE=“GPE”>Place Name</ENAMEX> as used in MUC (Message Understanding Conference). In order to do so, our system first identifies words and phrases within the text that might represent entities, primarily through the use of wikilinks. The system then uses category links and/or interwiki links to associate that phrase with an English language phrase or set of Categories. Finally, it determines the appropriate type of the English language data and assumes that the original phrase is of the same type. In practice, the English language categorization should be treated as one-time work, since it is identical regardless of the language model being built. It is also the only stage of development at which we apply substantial linguistic knowledge, even of English. In the sections that follow, we begin by showing how the English language categorization is done. We go on to describe how individual nonEnglish phrases are associated with English language information. Next, we explain how possible entities are initially selected. Finally, we discuss some optional steps as well as how and why they could be used. 3.2 English Language Categorization For each article title of interest (specifically excluding Template pages, Wikipedia admistrative pages, and articles whose title begins with “List of”), we extracted the categories to which that entry was assigned. Certainly, some of these category assignments are much more useful than others For instance, we would expect that any entry in “Category:Living People” or “Category:British Lawyers” will refer to a person while any entry in “Category:Cities in Norway” will refer to a GPE. On the other hand, some are entirely unhelpful, such as “Category:1912 Establishments” which includes articles on Fenway Park (a facility), the Republic of China (a GPE), and the Better Business Bureau (an organization). Other categories can reliably be used to determine that the article does not refer to a named entity, such as “Category:Endangered species.” We manually derived a relatively small set of key phrases, the most important of which are shown in Table 1. 3 Table 1: Some Useful Key Category Phrases PERSON “People by”, “People in”, “People from”, “Living people”, “births”, “deaths”, “by occupation”, “Surname”, “Given names”, “Biography stub”, “human names” ORG “Companies”, “Teams”, “Organizations”, “Businesses”, “Media by”, “Political parties”, “Clubs”, “Advocacy groups”, “Unions”, “Corporations”, “Newspapers”, “Agencies”, “Colleges”, “Universities” , “Legislatures”, “Company stub”, “Team stub”, “University stub”, “Club stub” GPE “Cities”, “Countries”, “Territories”, “Counties”, “Villages”, “Municipalities”, “States” (not part of “United States”), “Republics”, “Regions”, “Settlements” DATE “Days”, “Months”, “Years”, “Centuries” NONE “Lists”, “List of”, “Wars”, “Incidents” For each article, we searched the category hierarchy until a threshold of reliability was passed or we had reached a preset limit on how far we would search. 
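This search can be sketched roughly as follows, using a handful of the Table 1 key phrases; the vote threshold, the depth limit and the level-by-level voting are simplifications (the paper does not give its exact settings), and get_parent_categories is an assumed lookup over the category links.

```python
KEY_PHRASES = {   # a subset of Table 1
    "PERSON": ["People by", "People from", "Living people", "births", "deaths"],
    "ORGANIZATION": ["Companies", "Organizations", "Political parties"],
    "GPE": ["Cities", "Countries", "Villages", "Municipalities"],
    "DATE": ["Days", "Months", "Years", "Centuries"],
    "NONE": ["Lists", "List of", "Wars", "Incidents"],
}

def classify_by_categories(categories, get_parent_categories,
                           threshold=2, max_depth=3):
    """Walk up the category hierarchy until `threshold` key-phrase hits
    agree on a type, or `max_depth` levels have been searched."""
    frontier = list(categories)
    for _ in range(max_depth):
        votes = {}
        for cat in frontier:
            for etype, phrases in KEY_PHRASES.items():
                if any(p.lower() in cat.lower() for p in phrases):
                    votes[etype] = votes.get(etype, 0) + 1
        for etype, count in votes.items():
            if count >= threshold:
                return None if etype == "NONE" else etype
        # No reliable decision yet: move one level up the hierarchy.
        frontier = [parent for cat in frontier
                    for parent in get_parent_categories(cat)]
    return None
```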
For example, when the system tries to classify “Jacqueline Bhabha,” it extracts the categories “British Lawyers,” “Jewish American Writers,” and “Indian Jews.” Though easily identifiable to a human, none of these matched any of our key phrases, so the system proceeded to extract the second order categories “Lawyers by nationality,” “British legal professionals,” “American writers by ethnicity,” “Jewish writers,” “Indian people by religion,” and “Indian people by ethnic or national origin” among others. “People by” is on our key phrase list, and the two occurrences passed our threshold, and she was then correctly identified. If an article is not classified by this method, we check whether it is a disambiguation page (which often are members solely of “Category:Disambiguation”). If it is, the links within are checked to see whether there is a dominant type. For instance, the page “Amanda Foreman” is a disambiguation page, with each link on the page leading to an easily classifiable article. Finally, we use Wiktionary, an online collaborative dictionary, to eliminate some common nouns. For example, “Tributary” is an entry in Wikipedia which would be classified as a Location if viewed solely by Category structure. However, it is found as a common noun in Wiktionary, overruling the category based result. 3.3 Multilingual Categorization When attempting to categorize a non-English term that has an entry in its language’s Wikipedia, we use two techniques to make a decision based on English language information. First, whenever possible, we find the title of an associated English language article by searching for a wikilink beginning with “en:”. If such a title is found, then we categorize the English article as shown in Section 3.2, and decide that the non-English title is of the same type as its English counterpart. We note that links to/from English are the most common interlingual wikilinks. Of course, not all articles worldwide have English equivalents (or are linked to such even if they do exist). In this case, we attempt to make a decision based on Category information, associating the categories with their English equivalents, when possible. Fortunately, many of the most useful categories have equivalents in many languages. For example, the Breton town of Erquy has a substantial article in the French language Wikipedia, but no article in English. The system proceeds by determining that Erquy belongs to four French language categories: “Catégorie:Commune des Côtes-d'Armor,” “Catégorie:Ville portuaire de France,” “Catégorie:Port de plaisance,” and “Catégorie:Station balnéaire.” The system proceeds to associate these, respectively, with “Category:Communes of Côtes-d'Armor,” UNKNOWN, “Category:Marinas,” and “Category:Seaside resorts” by looking in the French language pages of each for wikilinks of the form [[en:...]]. The first is a subcategory of “Category:Cities, towns and villages in France” and is thus easily identified by the system as a category consisting of entities of type GPE. The other two are ambiguous categories (facility and organization elements in addition to GPE). Erquy is then determined to be a GPE by majority vote of useful categories. We note that the second French category actually has a perfectly good English equivalent (Category:Port cities and towns in France), but no one has linked them as of this writing. We also note that the ambiguous categories are much more GPE-oriented in French. The system still makes the correct decision despite these factors. 
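A rough sketch of this decision procedure follows; the helper functions for interwiki lookup and for the English-side classification of Section 3.2 are assumed, and ambiguity handling is reduced to a majority vote over the categories that yield a decision.

```python
from collections import Counter

def classify_foreign_title(title, lang,
                           english_equivalent,     # follows an [[en:...]] interwiki link, or None
                           categories_of,          # categories of a non-English page
                           english_category_type,  # entity type implied by an English category, or None
                           classify_english):      # the Section 3.2 classifier
    """Pull the categorization decision back to English whenever possible."""
    en_title = english_equivalent(lang, title)
    if en_title is not None:
        return classify_english(en_title)

    # Otherwise vote over categories that map to a classifiable English
    # category, e.g. "Catégorie:Commune des Côtes-d'Armor" maps to
    # "Category:Communes of Côtes-d'Armor", which points to GPE for Erquy.
    votes = Counter()
    for cat in categories_of(lang, title):
        en_cat = english_equivalent(lang, cat)
        if en_cat is None:
            continue
        etype = english_category_type(en_cat)
        if etype is not None:
            votes[etype] += 1
    return votes.most_common(1)[0][0] if votes else None
```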
We do not go beyond the first level categories or do any disambiguation in the non-English case. Both are avenues for future improvement. 4 3.4 The Full System To generate a set of training data in a given language, we select a large number of articles from its Wikipedia (50,000 or more is recommended, when possible). We prepare the text by removing external links, links to images, category and interlingual links, as well as some formatting. The main processing of each article takes place in several stages, whose primary purposes are as follows: • The first pass uses the explicit article links within the text. • We then search an associated English language article, if available, for additional information. • A second pass checks for multi-word phrases that exist as titles of Wikipedia articles. • We look for certain types of person and organization instances. • We perform additional processing for alphabetic or space-separated languages, including a third pass looking for single word Wikipedia titles. • We use regular expressions to locate additional entities such as numeric dates. In the first pass, we attempt to replace all wikilinks with appropriate entity tags. We assume at this stage that any phrase identified as an entity at some point in the article will be an entity of the same type throughout the article, since it is common for contributors to make the explicit link only on the first occasion that it occurs. We also assume that a phrase in a bold font within the first 100 characters is an equivalent form of the title of the article as in this start of the article on Erquy: “Erquy (Erge-ar-Mor en breton, Erqi en gallo)”. The parenthetical notation gives alternate names in the Breton and Gallo languages. (In Wiki database format, bold font is indicated by three apostrophes in succession.) If the article has an English equivalent, we search that article for wikilinked phrases as well, on the assumption that both articles will refer to many of the same entities. As the English language Wikipedia is the largest, it frequently contains explicit references to and articles on secondary people and places mentioned, but not linked, within a given non-English article. After this point, the text to be annotated contains no Wikipedia specific information or formatting. In the second pass, we look for strings of 2 to 4 words which were not wikilinked but which have Wikipedia entries of their own or are partial matches to known people and organizations (i.e. “Mary Washington” in an article that contains “University of Mary Washington”). We require that each such string contains something other than a lower case letter (when a language does not use capitalization, nothing in that writing system is considered to be lower case for this purpose). When a word is in more than one such phrase, the longest match is used. We then do some special case processing. When an organization is followed by something in parentheses such as <ENAMEX TYPE=“ORGANIZATION”>Maktab al-Khadamāt</ENAMEX> (MAK), we hypothesize that the text in the parentheses is an alternate name of the organization. We also looked for unmarked strings of the form X.X. followed by a capitalized word, where X represents any capital letter, and marked each occurrence as a PERSON. For space-separated or alphabetic languages, we did some additional processing at this stage to attempt to identify more names of people. 
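Before turning to that additional name processing, the two special-case patterns just mentioned (the parenthesized organization alias and the X.X. initials heuristic) might be approximated with regular expressions along the following lines; these are our own illustrative patterns rather than the system's actual code.

```python
import re

# An organization followed by a parenthesized string is assumed to also be
# named by that string, e.g. ... Maktab al-Khadamat</ENAMEX> (MAK).
ORG_ALIAS = re.compile(
    r'<ENAMEX TYPE="ORGANIZATION">[^<]+</ENAMEX>\s*\(([^()]+)\)')

# Unmarked strings of the form X.X. followed by a capitalized word, where X
# is any capital letter, are marked as a PERSON.
INITIALS_PERSON = re.compile(r'\b[A-Z]\.[A-Z]\.\s+[A-Z]\w+')

def mark_initials_person(text):
    return INITIALS_PERSON.sub(
        lambda m: '<ENAMEX TYPE="PERSON">' + m.group(0) + '</ENAMEX>', text)
```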
Using a list of names derived from Wiktionary (Appendix:Names) and optionally a list derived from Wikipedia (see Section 3.5.1), we mark possible parts of names. When two or more are adjacent, we mark the sequence as a PERSON. Also, we fill in partial lists of names by assuming single nonlower case words between marked names are actually parts of names themselves. That is, we would replace <ENAMEX TYPE=“PERSON”>Fred Smith</ENAMEX>, Somename <ENAMEX TYPE=“PERSON”>Jones </ENAMEX> with <ENAMEX TYPE=“PERSON”> Fred Smith</ENAMEX>, <ENAMEX TYPE= “PERSON”> Somename Jones</ENAMEX>. At this point, we performed a third pass through the article. We marked all non-lower case single words which had their own Wikipedia entry, were part of a known person's name, or were part of a known organization's name. Afterwards, we used a series of simple, language-neutral regular expressions to find additional TIME, PERCENT, and DATE entities such as “05:30” and “12-07-05”. We also executed code that included quantities of money within a NUMEX tag, as in converting 500 <NUMEX TYPE=“MONEY”>USD</NUMEX> into <NUMEX TYPE=“MONEY”>500 USD</NUMEX>. 5 3.5 Optional Processing 3.5.1 Recommended Additions All of the above could be run with almost no understanding of the language being modeled (knowing whether the language was space-separated and whether it was alphabetic or characterbased were the only things used). However, for most languages, we spent a small amount of time (less than one hour) browsing Wikipedia pages to improve performance in some areas. We suggest compiling a small list of stop words. For our purposes, the determiners and the most common prepositions are sufficient, though a longer list could be used for the purpose of computational efficiency. We also recommend compiling a list of number words as well as compiling a list of currencies, since they are not capitalized in many languages, and may not be explicitly linked either. Many languages have a page on ISO 4217 which contains all of the currency information, but the format varies sufficiently from language to language to make automatic extraction difficult. Together, these allow phrases like this (taken from the French Wikipedia) to be correctly marked in its entirety as an entity of type MONEY: “25 millions de dollars.” If a language routinely uses honorifics such as Mr. and Mrs., that information can also be found quickly. Their use can lead to significant improvements in PERSON recognition. During preprocessing, we typically collected a list of people names automatically, using the entity identification methods appropriate to titles of Wikipedia articles. We then used these names along with the Wiktionary derived list of names during the main processing. This does introduce some noise as the person identification is not perfect, but it ordinarily increases recall by more than it reduces precision. 3.5.2 Language Dependent Additions Our usual, language-neutral processing only considers wikilinks within a single article when determining the type of unlinked words and phrases. For example, if an article included the sentence “The [[Delaware River|Delaware]] forms the boundary between [[Pennsylvania]] and [[New Jersey]]”, our system makes the assumption that every occurrence of the unlinked word “Delaware” appearing in the same article is also referring to the river and thus mark it as a LOCATION. 
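The within-article assumption at the end of this example can be sketched as follows; surface_form_types stands for the mapping built from the article's own explicit wikilinks during the first pass, and the bookkeeping needed to avoid re-tagging already tagged spans is omitted.

```python
import re

def propagate_within_article(text, surface_form_types):
    """Mark every unlinked occurrence of a surface form with the type it was
    given when explicitly wikilinked in the same article, e.g.
    {"Delaware": "LOCATION"} after the first pass."""
    for surface, etype in surface_form_types.items():
        pattern = re.compile(r'\b' + re.escape(surface) + r'\b')
        replacement = '<ENAMEX TYPE="%s">%s</ENAMEX>' % (etype, surface)
        text = pattern.sub(replacement, text)
    return text
```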
For some languages, we preferred an alternate approach, best illustrated by an example: The word “Washington” without context could refer to (among others) a person, a GPE, or an organization. We could work through all of the explicit wikilinks in all articles (as a preprocessing step) whose surface form is Washington and count the number pointing to each. We could then decide that every time the word Washington appears without an explicit link, it should be marked as its most common type. This is useful for the Slavic languages, where the nominative form is typically used as the title of Wikipedia articles, while other cases appear frequently (and are rarely wikilinked). At the same time, we can do a second type of preprocessing which allows more surface forms to be categorized. For instance, imagine that we were in a Wikipedia with no article or redirect associated to “District of Columbia” but that someone had made a wikilink of the form [[Washington|District of Columbia]]. We would then make the assumption that for all articles, District of Columbia is of the same type as Washington. For less developed wikipedias, this can be helpful. For languages that have reasonably well developed Wikipedias and where entities rarely, if ever, change form for grammatical reasons (such as French), this type of preprocessing is virtually irrelevant. Worse, this processing is definitely not recommended for languages that do not use capitalization because it is not unheard of for people to include sections like: “The [[Union Station|train station]] is located at ...” which would cause the phrase “train station” to be marked as a FACILITY each time it occurred. Of course, even in languages with capitalization, “train station” would be marked incorrectly in the article in which the above was located, but the mistake would be isolated, and should have minimal impact overall. 4 Evaluation and Results After each data set was generated, we used the text as a training set for input to PhoenixIDF. We had three human annotated test sets, Spanish, French and Ukrainian, consisting of newswire. When human annotated sets were not available, we held out more than 100,000 words of text generated by our wiki-mining process to use as a test set. For the above languages, we included wiki test sets for 6 comparison purposes. We will give our results as F-scores in the Overall, DATE, GPE, ORGANIZATION, and PERSON categories using the scoring metric in (Bikel et. al, 1999). The other ACE categories are much less common, and contribute little to the overall score. 4.1 Spanish Language Evaluation The Spanish Wikipedia is a substantial, well-developed Wikipedia, consisting of more than 290,000 articles as of October 2007. We used two test sets for comparison purposes. The first consists of 25,000 words of human annotated newswire derived from the ACE 2007 test set, manually modified to conform to our extended MUC-style standards. The second consists of 335,000 words of data generated by the Wiki process held-out during training. Table 2: Spanish Results F (prec. / recall) Newswire Wiki test set ALL .827 (.851 / .805) .846 (.843 / .848) DATE .912 (.861 / .970) .925 (.918 / .932) GPE .877 (.914 / .843) .877 (.886 / .868) ORG .629 (.681 / .585) .701 (.703 / .698) PERSON .906 (.921 / .892) .821 (.810 / .833) There are a few particularly interesting results to note. First, because of the optional processing, recall was boosted in the PERSON category at the expense of precision. 
The fact that this category scores higher against newswire than against the wiki data suggests that the not-uncommon, but isolated, occurrences of non-entities being marked as PERSONs in training have little effect on the overall system. Contrarily, we note that deletions are the dominant source of error in the ORGANIZATION category, as seen by the lower recall. The better performance on the wiki set seems to suggest that either Wikipedia is relatively poor in Organizations or that PhoenixIDF underperforms when identifying Organizations relative to other categories or a combination. An important question remains: “How do these results compare to other methodologies?” In particular, while we can get these results for free, how much work would traditional methods require to achieve comparable results? To attempt to answer this question, we trained PhoenixIDF on additional ACE 2007 Spanish language data converted to MUC-style tags, and scored its performance using the same set of newswire. Evidently, comparable performance to our Wikipedia derived system requires between 20,000 and 40,000 words of human-annotated newswire. It is worth noting that Wikipedia itself is not newswire, so we do not have a perfect comparison. Table 3: Traditional Training ~ Words of Training Overall F-score 3500 .746 10,000 .760 20,000 .807 40,000 .847 4.2 French Language Evaluation The French Wikipedia is one of the largest Wikipedias, containing more than 570,000 articles as of October 2007. For this evaluation, we have 25,000 words of human annotated newswire (Agence France Presse, 30 April and 1 May 1997) covering diverse topics. We used 920,000 words of Wiki-derived data for the second test. Table 4: French Results F (prec. / recall) Newswire Wiki test set ALL .847 (.877 / .819) .844 (.847 / .840) DATE .921 (.897 / .947) .910 (.888 / .934) GPE .907 (.933 / .882) .868 (.889 / .849) ORG .700 (.794 / .625) .718 (.747 / .691) PERSON .880 (.874 / .885) .823 (.818 / .827) The overall results seem comparable to the Spanish, with the slightly better overall performance likely correlated to the somewhat more developed Wikipedia. We did not have sufficient quantities of annotated data to run a test of the traditional methods, but Spanish and French are sufficiently similar languages that we expect this model is comparable to one created with about 40,000 words of humanannotated data. 7 4.3 Ukrainian Language Evaluation The Ukrainian Wikipedia is a medium-sized Wikipedia with 74,000 articles as of October 2007. Also, the typical article is shorter and less welllinked to other articles than in the French or Spanish versions. Moreover, entities tend to appear in many surface forms depending on case, leading us to expect somewhat worse results. In the Ukrainian case, the newswire consisted of approximately 25,000 words from various online news sites covering primarily political topics. We also held out around 395,000 words for testing. We were also able to run a comparison test as in Spanish. Table 5: Ukrainian Results F (prec. 
/ recall) Newswire Wiki test set ALL .747 (.863 / .649) .807 (.809 / .806) DATE .780 (.759 / .803) .848 (.842 / .854) GPE .837 (.833 / .841) .887 (.901 / .874) ORG .585 (.800 / .462) .657 (.678 / .637) PERSON .764 (.899 / .664) .690 (.675 / .706) Table 6: Traditional Training ~ Words of Training Overall F-score 5000 .662 10,000 .692 15,000 .740 20,000 .761 The Ukrainian newswire contained a much higher proportion of organizations than the French or Spanish versions, contributing to the overall lower score. The Ukrainian language Wikipedia itself contains very few articles on organizations relative to other types, so the distribution of entities of the two test sets are quite different. We also see that the Wiki-derived model performs comparably to a model trained on 15-20,000 words of humanannotated text. 4.4 Other Languages For Portuguese, Russian, and Polish, we did not have human annotated corpora available for testing. In each case, at least 100,000 words were held out from training to be used as a test set. It seems safe to suppose that if suitable human-annotated sets were available for testing, the PERSON score would likely be higher, and the ORGANIZATION score would likely be lower, while the DATE and GPE scores would probably be comparable. Table 7: Other Language Results F-score Polish Portuguese Russian ALL .859 .804 .802 DATE .891 .861 .822 GPE .916 .826 .867 ORG .785 .706 .712 PERSON .836 .802 .751 5 Conclusions In conclusion, we have demonstrated that Wikipedia can be used to create a Named Entity Recognition system with performance comparable to one developed from 15-40,000 words of human-annotated newswire, while not requiring any linguistic expertise on the part of the user. This level of performance, usable on its own for many purposes, can likely be obtained currently in 20-40 languages, with the expectation that more languages will become available, and that better models can be developed, as Wikipedia grows. Moreover, it seems clear that a Wikipedia-derived system could be used as a supplement to other systems for many more languages. In particular, we have, for all practical purposes, embedded in our system an automatically generated entity dictionary. In the future, we would like to find a way to automatically generate the list of key words and phrases for useful English language categories. This could implement the work of Kazama and Torisawa, in particular. We also believe performance could be improved by using higher order nonEnglish categories and better disambiguation. We could also experiment with introducing automatically generated lists of entities into PhoenixIDF directly. Lists of organizations might be particularly useful, and “List of” pages are common in many languages. 8 References Bikel, D., R. Schwartz, and R. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning, 211-31. Bunescu, R and M. Paşca. 2006. Using Encyclopedic knowledge for named entity disambiguation. In Proceedings of EACL, 9-16. Cucerzan, S. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of EMNLP/CoNLL, 708-16. Gabrilovitch, E. and S. Markovitch. 2007. Computing semantic relatedness using Wikipediabased explicit semantic analysis. In Proceedings of IJCAI, 1606-11. Gabrilovitch, E. and S. Markovitch. 2006. Overcoming the brittleness bottleneck using Wikipedia: enhancing text categorization with encyclopedic knowledge. In Proceedings of AAAI, 1301-06. Gabrilovitch, E. and S. Markovitch. 2005. 
Feature generation for text categorization using world knowledge. In Proceedings of IJCAI, 1048-53. Kazama, J. and K. Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In Proceedings of EMNLP/CoNLL, 698-707. Milne, D., O. Medelyan and I. Witten. 2006. Mining domain-specific thesauri from Wikipedia: a case study. Web Intelligence 2006, 442-48 Strube, M. and S. P. Ponzeto. 2006. WikiRelate! Computing semantic relatedness using Wikipedia. In Proceedings of AAAI, 1419-24. Toral, A. and R. Muñoz. 2006. A proposal to automatically build and maintain gazetteers for named entity recognition by using Wikipedia. In Proceedings of EACL, 56-61. Weale, T. 2006. Using Wikipedia categories for document classification. Ohio St. University, preprint. Zesch, T., I. Gurevych and M. Mühlhäuser. 2007. Analyzing and accessing Wikipedia as a lexical semantic resource. In Proceedings of GLDV, 213-21. 9
Proceedings of ACL-08: HLT, pages 81–88, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Phrase Table Training For Precision and Recall: What Makes a Good Phrase and a Good Phrase Pair? Yonggang Deng∗, Jia Xu+ and Yuqing Gao∗ ∗IBM T.J. Watson Research Center, Yorktown Heights, NY 10598, USA {ydeng,yuqing}@us.ibm.com +Chair of Computer Science VI, RWTH Aachen University, D-52056 Aachen, Germany [email protected] Abstract In this work, the problem of extracting phrase translation is formulated as an information retrieval process implemented with a log-linear model aiming for a balanced precision and recall. We present a generic phrase training algorithm which is parameterized with feature functions and can be optimized jointly with the translation engine to directly maximize the end-to-end system performance. Multiple data-driven feature functions are proposed to capture the quality and confidence of phrases and phrase pairs. Experimental results demonstrate consistent and significant improvement over the widely used method that is based on word alignment matrix only. 1 Introduction Phrase has become the standard basic translation unit in Statistical Machine Translation (SMT) since it naturally captures context dependency and models internal word reordering. In a phrase-based SMT system, the phrase translation table is the defining component which specifies alternative translations and their probabilities for a given source phrase. In learning such a table from parallel corpus, two related issues need to be addressed (either separately or jointly): which pairs are considered valid translations and how to assign weights, such as probabilities, to them. The first problem is referred to as phrase pair extraction, which identifies phrase pairs that are supposed to be translations of each other. Methods have been proposed, based on syntax, that take advantage of linguistic constraints and alignment of grammatical structure, such as in Yamada and Knight (2001) and Wu (1995). The most widely used approach derives phrase pairs from word alignment matrix (Och and Ney, 2003; Koehn et al., 2003). Other methods do not depend on word alignments only, such as directly modeling phrase alignment in a joint generative way (Marcu and Wong, 2002), pursuing information extraction perspective (Venugopal et al., 2003), or augmenting with modelbased phrase pair posterior (Deng and Byrne, 2005). Using relative frequency as translation probability is a common practice to measure goodness of a phrase pair. Since most phrases appear only a few times in training data, a phrase pair translation is also evaluated by lexical weights (Koehn et al., 2003) or term weighting (Zhao et al., 2004) as additional features to avoid overestimation. The translation probability can also be discriminatively trained such as in Tillmann and Zhang (2006). The focus of this paper is the phrase pair extraction problem. As in information retrieval, precision and recall issues need to be addressed with a right balance for building a phrase translation table. High precision requires that identified translation candidates are accurate, while high recall wants as much valid phrase pairs as possible to be extracted, which is important and necessary for online translation that requires coverage. In the word-alignment derived phrase extraction approach, precision can be improved by filtering out most of the entries by using a statistical significance test (Johnson et al., 2007). 
On the other hand, there are valid translation pairs in the training corpus that are not learned due to word alignment errors as shown in Deng and Byrne (2005). 81 We would like to improve phrase translation accuracy and at the same time extract as many as possible valid phrase pairs that are missed due to incorrect word alignments. One approach is to leverage underlying word alignment quality such as in Ayan and Dorr (2006). In this work, we present a generic discriminative phrase pair extraction framework that can integrate multiple features aiming to identify correct phrase translation candidates. A significant deviation from most other approaches is that the framework is parameterized and can be optimized jointly with the decoder to maximize translation performance on a development set. Within the general framework, the main work is on investigating useful metrics. We employ features based on word alignment models and alignment matrix. We also propose information metrics that are derived from both bilingual and monolingual perspectives. All these features are data-driven and independent of languages. The proposed phrase extraction framework is general to apply linguistic features such as semantic, POS tags and syntactic dependency. 2 A Generic Phrase Training Procedure Let e = eI 1 denote an English sentence and let f = fJ 1 denote its translation in a foreign language, say Chinese. Phrase extraction begins with sentence-aligned parallel corpora {(ei, fi)}. We use E = eie ib and F = fje jb to denote an English and foreign phrases respectively, where ib(jb) is the position in the sentence of the beginning word of the English(foreign) phrase and ie(je) is the position of the ending word of the phrase. We first train word alignment models and will use them to evaluate the goodness of a phrase and a phrase pair. Let fk(E, F), k = 1, 2, · · · , K be K feature functions to be used to measure the quality of a given phrase pair (E, F). The generic phrase extraction procedure is an evaluation, ranking, filtering, estimation and tuning process, presented in Algorithm 1. Step 1 (line 1) is the preparation stage. Beginning with a flat lexicon, we train IBM Model-1 word alignment model with 10 iterations for each translation direction. We then train HMM word alignment models (Vogel et al., 1996) in two directions simultaneously by merging statistics collected in the Algorithm 1 A Generic Phrase Training Procedure 1: Train Model-1 and HMM word alignment models 2: for all sentence pair (e, f) do 3: Identify candidate phrases on each side 4: for all candidate phrase pair (E, F) do 5: Calculate its feature function values fk 6: Obtain the score q(E, F) = PK k=1 λkfk(E, F) 7: end for 8: Sort candidate phrase pairs by their final scores q 9: Find the maximum score qm = max q(E, F) 10: for all candidate phrase pair (E, F) do 11: If q(E, F) ≥qm −τ, dump the pair into the pool 12: end for 13: end for 14: Built a phrase translation table from the phrase pair pool 15: Discriminatively train feature weights λk and threshold τ E-step from two directions motivated by Zens et al. (2004) with 5 iterations. We use these models to define the feature functions of candidate phrase pairs such as phrase pair posterior distribution. More details will be given in Section 3. Step 2 (line 2) consists of phrase pair evaluation, ranking and filtering. Usually all n-grams up to a pre-defined length limit are considered as candidate phrases. 
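Line 3 of Algorithm 1 ("Identify candidate phrases on each side") amounts to enumerating every n-gram span up to a length limit. The snippet below is only an illustrative sketch of that enumeration, not the authors' code; the length limit is passed in by the caller (Section 4.2 later uses 8 words for Chinese phrases and 25 for English phrases).

    def candidate_phrases(tokens, max_len):
        """Enumerate all phrase spans (begin, end), half-open, of up to max_len words,
        as in line 3 of Algorithm 1."""
        spans = []
        for begin in range(len(tokens)):
            for end in range(begin + 1, min(begin + max_len, len(tokens)) + 1):
                spans.append((begin, end))      # the phrase is tokens[begin:end]
        return spans

    # Example: all spans of up to 3 words in a 4-word sentence.
    print(candidate_phrases("we want a table".split(), max_len=3))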
This is also the place where linguistic constraints can be applied, say to avoid noncompositional phrases (Lin, 1999). Each normalized feature score derived from word alignment models or language models will be log-linearly combined to generate the final score. Phrase pair filtering is simply thresholding on the final score by comparing to the maximum within the sentence pair. Note that under the log-linear model, applying threshold for filtering is equivalent to comparing the “likelihood” ratio. Step 3 (line 14) pools all candidate phrase pairs that pass the threshold testing and estimates the final phrase translation table by maximum likelihood criterion. For each candidate phrase pair which is above the threshold, we assign HMM-based phrase pair posterior as its soft count when dumping them into the global phrase pair pool. Other possibilities for the weighting include assigning constant one or the exponential of the final score etc. One of the advantages of the proposed phrase training algorithm is that it is a parameterized procedure that can be optimized jointly with the trans82 lation engine to minimize the final translation errors measured by automatic metrics such as BLEU (Papineni et al., 2002). In the final step 4 (line 15), parameters {λk, τ} are discriminatively trained on a development set using the downhill simplex method (Nelder and Mead, 1965). This phrase training procedure is general in the sense that it is configurable and trainable with different feature functions and their parameters. The commonly used phrase extraction approach based on word alignment heuristics (referred as ViterbiExtract algorithm for comparison in this paper) as described in (Och, 2002; Koehn et al., 2003) is a special case of the algorithm, where candidate phrase pairs are restricted to those that respect word alignment boundaries. We rely on multiple feature functions that aim to describe the quality of candidate phrase translations and the generic procedure to figure out the best way of combining these features. A good feature function pops up valid translation pairs and pushes down incorrect ones. 3 Features Now we present several feature functions that we investigated to help extracting correct phrase translations. All these features are data-driven and defined based on models, such as statistical word alignment model or language model. 3.1 Model-based Phrase Pair Posterior In a statistical generative word alignment model (Brown et al., 1993), it is assumed that (i) a random variable a specifies how each target word fj is generated by (therefore aligned to) a source 1 word eaj; and (ii) the likelihood function f(f, a|e) specifies a generative procedure from the source sentence to the target sentence. Given a phrase pair in a sentence pair, there will be many generative paths that align the source phrase to the target phrase. The likelihood of those generative procedures can be accumulated to get the likelihood of the phrase pair (Deng and Byrne, 2005). This is implemented as the summation of the likelihood function over all valid hidden word alignments. 1The word source and target are in the sense of word alignment direction, not as in the source-channel formulation. 
More specifically, let A(j1,j2) (i1,i2) be the set of word alignment a that aligns the source phrase ej1 i1 to the target phrase fj2 j1 (links to NULL word are ignored for simplicity): A(j1,j2) (i1,i2) = {a : aj ∈[i1, i2] iff j ∈[j1, j2]} The alignment set given a phrase pair ignores those pairs with word links across the phrase boundary. Consequently, the phrase-pair posterior distribution is defined as Pθ(ei2 i1 →fj2 j1 |e, f) = P a∈A(j1,j2) (i1,i2) f(a, f|e; θ) P a f(a, f|e; θ) . (1) Switching the source and the target, we can obtain the posterior distribution in another translation direction. This distribution is applicable to all word alignment models that follow assumptions (i) and (ii). However, the complexity of the likelihood function could make it impractical to calculate the summations in Equation 1 unless an approximation is applied. Several feature functions will be defined on top of the posterior distribution. One of them is based on HMM word alignment model. We use the geometric mean of posteriors in two translation directions as a symmetric metric for phrase pair quality evaluation function under HMM alignment models. Table 1 shows the phrase pair posterior matrix of the example. Replacing the word alignment model with IBM Model-1 is another feature function that we added. IBM Model-1 is simple yet has been shown to be effective in many applications (Och et al., 2004). There is a close form solution to calculate the phrase pair posterior under Model-1. Moreover, word to word translation table under HMM is more concentrated than that under Model-1. Therefore, the posterior distribution evaluated by Model-1 is smoother and potentially it can alleviate the overestimation problem in HMM especially when training data size is small. 3.2 Bilingual Information Metric Trying to find phrase translations for any possible ngram is not a good idea for two reasons. First, due to data sparsity and/or alignment model’s capability, there would exist n-grams that cannot be aligned 83 f1 f2 f3 (that) (is) (what) what’s that e1 e2 e1 1 e2 1 e2 2 HBL(f j2 j1 ) f 1 1 0.0006 0.012 0.89 0.08 f 2 1 0.0017 0.035 0.343 0.34 f 3 1 0.07 0.999 0.0004 0.24 f 2 2 0.03 0.0001 0.029 0.7 f 3 2 0.89 0.006 0.006 0.05 f 3 3 0.343 0.002 0.002 0.06 HBL(ei2 i1) 0.869 0.26 0.70 Table 1: Phrase pair posterior distribution for the example well, for instance, n-grams that are part of a paraphrase translation or metaphorical expression. To give an example, the unigram ‘tomorrow’ in ‘the day after tomorrow’ whose Chinese translation is a single word ‘€’. Extracting candidate translations for such kind of n-grams for the sake of improving coverage (recall) might hurt translation quality (precision). We will define a confidence metric to estimate how reliably the model can align an n-gram in one side to a phrase on the other side given a parallel sentence. Second, some n-grams themselves carry no linguistic meaning; their phrase translations can be misleading, for example non-compositional phrases (Lin, 1999). We will address this in section 3.3. Given a sentence pair, the basic assumption is that if the HMM word alignment model can align an English phrase well to a foreign phrase, the posterior distribution of the English phrase generating all foreign phrases on the other side is significantly biased. For instance, the posterior of one foreign phrase is far larger than that of the others. 
We use the entropy of the posterior distribution as the confidence metric: HBL(ei2 i1|e, f) = H( ˆPθHMM (ei2 i1 →∗)) (2) where H(P) = −P x P(x) log P(x) is the entropy of a distribution P(x), ˆPθHMM (ei2 i1 →∗) is the normalized probability (sum up to 1) of the posterior PθHMM (ei2 i1 →∗) as defined in Equation 1. Low entropy signals a high confidence that the English phrase can be aligned correctly. On the other hand, high entropy implies ambiguity presented in discriminating the correct foreign phrase from the others from the viewpoint of the model. Similarly we calculate the confidence metric of aligning a foreign phrase correctly with the word alignment model in foreign to English direction. Table 1 shows the entropy of phrases. The unigram of foreign side f2 2 is unlikely to survive with such high ambiguity. Adding the entropy in two directions defines the bilingual information metric as another feature function, which describes the reliability of aligning each phrase correctly by the model. Note that we used HMM word alignment model to find the posterior distribution. Other models such as Model-1 can be applied in the same way. This feature function quantitatively captures the goodness of phrases. During phrase pair ranking, it can help to move upward phrases that can be aligned well and push downward phrases that are difficult for the model to find correct translations. 3.3 Monolingual Information Metric Now we turn to monolingual resources to evaluate the quality of an n-gram being a good phrase. A phrase in a sentence is specified by its boundaries. We assume that the boundaries of a good phrase should be the “right” place to break. More generally, we want to quantify how effective a word boundary is as a phrase boundary. One would perform say NP-chunking or parsing to avoid splitting a linguistic constituent. We apply a language model (LM) to describe the predictive uncertainty (PU) between words in two directions. Given a history wn−1 1 , a language model specifies a conditional distribution of the future word being predicted to follow the history. We can find the entropy of such pdf: HLM(wn−1 1 ) = H(P(·|wn−1 1 )). So given a sentence wN 1 , the PU of the boundary between word wi and wi+1 is established by two-way entropy sum using a forward and backward language model: PU(wN 1 , i) = HLMF (wi 1) + HLMB(wi+1 N ) We assume that the higher the predictive uncertainty is, the more likely the left or right part of the word boundary can be “cut-and-pasted” to form another reasonable sentence. So a good phrase is characterized with high PU values on the boundaries. For example, in ‘we want to have a table near the window’, the PU value of the point after ‘table’ is 0.61, higher than that between ‘near’ and ‘the’ 0.3, using trigram LMs. With this, the feature function derived from 84 monolingual clue for a phrase pair can be defined as the product of PUs of the four word boundaries. 3.4 Word Alignments Induced Metric The widely used ViterbiExtract algorithm relies on word alignment matrix and no-crossing-link assumption to extract phrase translation candidates. Practically it has been proved to work well. However, discarding correct phrase pairs due to incorrect word links leaves room for improving recall. This is especially true for not significantly large training corpora. Provided with a word alignment matrix, we define within phrase pair consistency ratio (WPPCR) as another feature function. WPPCR was used as one of the scores in (Venugopal et al., 2003) for phrase extraction. 
It is defined as the number of consistent word links associated with any words within the phrase pair divided by the number of all word links associated with any words within the phrase pair. An inconsistent link connects a word within the phrase pair to a word outside the phrase pair. For example, the WPPCR for (e2 1, f2 1 ) in Table 1 is 2/3. As a special case, the ViterbiExtract algorithm extracts only phrase pairs with WPPCR is 1. To further discriminate the pairs with higher WPPCR from those with lower ratio, we apply a BiLinear Transform (BLT) (Oppenheim and Schafer, 1989) mapping. BLT is commonly used in signal processing to attenuate the low frequency parts. When used to map WPPCR, it exaggerates the difference between phrase pairs with high WPPCR and those with low WPPCR, making the pairs with low ratio more unlikely to be selected as translation candidates. One of the nice properties of BLT is that there is a parameter that can be changed to adjust the degree of attenuation, which provides another dimension for system optimization. 4 Experimental Results We evaluate the effect of the proposed phrase extraction algorithm with translation performance. We do experiments on IWSLT (Paul, 2006) 2006 ChineseEnglish corpus. The task is to translate Chinese utterances in travel domain into English. We report only text (speech transcription) translation results. The training corpus consists of 40K ChineseEnglish parallel sentences in travel domain with toEval Set 04dev 04test 05test 06dev 06test # of sentences 506 500 506 489 500 # of words 2808 2906 3209 5214 5550 # of refs 16 16 16 7 7 Table 2: Dev/test set statistics tal 306K English words and 295K Chinese words. In the data processing step, Chinese characters are segmented into words. English text are normalized and lowercased. All punctuation is removed. There are five sets of evaluation sentences in tourism domain for development and test. Their statistics are shown in Table 2. We will tune training and decoding parameters on 06dev and report results on other sets. 4.1 Training and Translation Setup Our decoder is a phrase-based multi-stack implementation of the log-linear model similar to Pharaoh (Koehn et al., 2003). Like other log-linear model based decoders, active features in our translation engine include translation models in two directions, lexicon weights in two directions, language model, lexicalized distortion models, sentence length penalty and other heuristics. These feature weights are tuned on the dev set to achieve optimal translation performance using downhill simplex method. The language model is a statistical trigram model estimated with Modified Kneser-Ney smoothing (Chen and Goodman, 1996) using only English sentences in the parallel training data. Starting from the collection of parallel training sentences, we build word alignment models in two translation directions, from English to Chinese and from Chinese to English, and derive two sets of Viterbi alignments. By combining word alignments in two directions using heuristics (Och and Ney, 2003), a single set of static word alignments is then formed. Based on alignment models and word alignment matrices, we compare different approaches of building a phrase translation table and show the final translation results. We measure translation performance by the BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005) scores with multiple translation references. 
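Before turning to the results, the within-phrase-pair consistency ratio of Section 3.4 can be made concrete with a short sketch. The word links below are a plausible Viterbi alignment for the running example of Table 1 (the links themselves are not given explicitly in the excerpt), and the pair (e_1^2, f_1^2) indeed comes out to 2/3 as stated above. The bilinear-transform warp is shown with a standard first-order linear-fractional form; the exact parametrization used in the paper is not specified here, so that formula is only an assumption for illustration.

    def wppcr(links, src_span, tgt_span):
        """Within phrase pair consistency ratio (Section 3.4): consistent links over
        all links that touch the phrase pair. Spans are half-open (begin, end)."""
        sb, se = src_span
        tb, te = tgt_span
        touching = [(i, j) for (i, j) in links if sb <= i < se or tb <= j < te]
        consistent = [(i, j) for (i, j) in touching if sb <= i < se and tb <= j < te]
        return len(consistent) / len(touching) if touching else 0.0

    def blt(ratio, a=3.0):
        """Bilinear (linear-fractional) warp of [0, 1]; a > 1 attenuates low ratios,
        widening the gap between pairs with WPPCR near 1 and the rest. Illustrative
        choice only; the paper's exact mapping is not given in this excerpt."""
        return ratio / (ratio + a * (1.0 - ratio))

    # "what's that" (e1 e2) vs. "that is what" (f1 f2 f3); assumed Viterbi links
    # e1-f2, e1-f3, e2-f1 (0-based). For (e_1^2, f_1^2), 2 of the 3 links are consistent.
    links = {(0, 1), (0, 2), (1, 0)}
    r = wppcr(links, src_span=(0, 2), tgt_span=(0, 2))
    print(r, blt(r))    # 0.666..., and its attenuated value under the warp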
85 BLEU Scores Table 04dev 04test 05test 06dev 06test HMM 0.367 0.407 0.473 0.200 0.190 Model-4 0.380 0.403 0.485 0.210 0.204 New 0.411 0.427 0.500 0.216 0.208 METEOR Scores Table 04dev 04test 05test 06dev 06test HMM 0.532 0.586 0.675 0.482 0.471 Model-4 0.540 0.593 0.682 0.492 0.480 New 0.568 0.614 0.691 0.505 0.487 Table 3: Translation Results 4.2 Translation Results Our baseline phrase table training method is the ViterbiExtract algorithm. All phrase pairs with respect to the word alignment boundary constraint are identified and pooled to build phrase translation tables with the Maximum Likelihood criterion. We prune phrase translation entries by their probabilities. The maximum number of words in Chinese and English phrases is set to 8 and 25 respectively for all conditions2. We perform online style phrase training, i.e., phrase extraction is not particular for any evaluation set. Two different word alignment models are trained as the baseline, one is symmetric HMM word alignment model, the other is IBM Model-4 as implemented in the GIZA++ toolkit (Och and Ney, 2003). The translation results as measured by BLEU and METEOR scores are presented in Table 3. We notice that Model-4 based phrase table performs roughly 1% better in terms of both BLEU and METEOR scores than that based on HMM. We follow the generic phrase training procedure as described in section 2. The most time consuming part is calculating posteriors, which is carried out in parallel with 30 jobs in less than 1.5 hours. We use the Viterbi word alignments from HMM to define within phrase pair consistency ratio as discussed in section 3.4. Although Table 3 implies that Model-4 word alignment quality is better than that of HMM, we did not get benefits by switching to Model-4 to compute word alignments based feature values. In estimating phrase translation probability, we use accumulated HMM-based phrase pair posteriors 2We chose large numbers for phrase length limit to build a strong baseline and to avoid impact of longer phase length. as their ‘soft’ frequencies and then the final translation probability is the relative frequency. HMMbased posterior was shown to be better than treating each occurrence as count one. Once we have computed all feature values for all phrase pairs in the training corpus, we discriminatively train feature weights λks and the threshold τ using the downhill simplex method to maximize the BLEU score on 06dev set. Since the translation engine implements a log-linear model, the discriminative training of feature weights in the decoder should be embedded in the whole end-to-end system jointly with the discriminative phrase table training process. This is globally optimal but computationally demanding. As a compromise, we fix the decoder feature weights and put all efforts on optimizing phrase training parameters to find out the best phrase table. The translation results with the discriminatively trained phrase table are shown as the row of “New” in Table 3. We observe that the new approach is consistently better than the baseline ViterbiExtract algorithm with either Model-4 or HMM word alignments on all sets. Roughly, it has 0.5% higher BLEU score on 2006 sets and 1.5% to 3% higher on other sets than Model-4 based ViterbiExtract method. Similar superior results are observed when measured with METEOR score. 
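For concreteness, the training loop that produced the "New" table above can be sketched end to end. This is a schematic re-implementation of Algorithm 1 (lines 2 to 14), not the authors' code: the feature functions f_k, their weights lambda_k, the threshold tau, and the phrase-pair posterior used as a soft count are all assumed to be supplied by the caller, with the weights and threshold being exactly the quantities tuned in line 15.

    from collections import defaultdict

    def train_phrase_table(sentence_pairs, feature_fns, weights, tau, posterior_fn,
                           max_src_len=8, max_tgt_len=25):
        """Schematic version of Algorithm 1, lines 2-14.

        sentence_pairs: iterable of (src_tokens, tgt_tokens)
        feature_fns:    callables f_k(E, F, src, tgt) -> float (Section 3)
        weights:        the lambda_k, one per feature function
        tau:            threshold relative to the best score within each sentence pair
        posterior_fn:   soft count of a surviving pair, e.g. the HMM phrase-pair posterior
        Returns {E: {F: p(F | E)}} estimated by relative frequency over the soft counts.
        """
        def spans(tokens, max_len):
            return [(b, e) for b in range(len(tokens))
                    for e in range(b + 1, min(b + max_len, len(tokens)) + 1)]

        counts = defaultdict(lambda: defaultdict(float))
        for src, tgt in sentence_pairs:
            scored = []
            for sb, se in spans(src, max_src_len):          # lines 3-7: score every
                for tb, te in spans(tgt, max_tgt_len):      # candidate phrase pair
                    E, F = tuple(src[sb:se]), tuple(tgt[tb:te])
                    q = sum(w * f(E, F, src, tgt) for w, f in zip(weights, feature_fns))
                    scored.append((q, E, F))
            if not scored:
                continue
            q_max = max(q for q, _, _ in scored)            # line 9
            for q, E, F in scored:
                if q >= q_max - tau:                        # lines 10-12: relative threshold
                    counts[E][F] += posterior_fn(E, F, src, tgt)
        table = {}                                          # line 14: relative frequency
        for E, f_counts in counts.items():
            total = sum(f_counts.values())
            table[E] = {F: c / total for F, c in f_counts.items()}
        return table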
5 Discussions The generic phrase training algorithm follows an information retrieval perspective as in (Venugopal et al., 2003) but aims to improve both precision and recall with the trainable log-linear model. A clear advantage of the proposed approach over the widely used ViterbiExtract method is trainability. Under the general framework, one can put as many features as possible together under the log-linear model to evaluate the quality of a phrase and a phase pair. The phrase table extracting procedure is trainable and can be optimized jointly with the translation engine. Another advantage is flexibility, which is provided partially by the threshold τ. As the figure 1 shows, when we increase the threshold by allowing more candidate phrase pair hypothesized as valid translation, we observe the phrase table size increases monotonically. On the other hand, we notice 86 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 0.17 0.18 0.19 0.2 Threshold Thresholding Effects Translation Performance 0 5 10 155 5.5 6 6.5 Log10 of the number of Entries in the PhraseTable BLEU Phrasetable Size Figure 1: Thresholding effects on translation performance and phrase table size that the translation performance improves gradually. After reaching its peak, the BLEU score drops as the threshold τ increases. When τ is large enough, the translation performance is not changing much but still worse than the peak value. It implies a balancing process between precision and recall. The final optimal threshold τ is around 5. The flexibility is also enabled by multiple configurable features used to evaluate the quality of a phrase and a phrase pair. Ideally, a perfect combination of feature functions divides the correct and incorrect candidate phrase pairs within a parallel sentence into two ordered separate sets. We use feature functions to decide the order and the threshold τ to locate the boundary guided with a development set. So the main issue to investigate now is which features are important and valuable in ranking candidate phrase pairs. We propose several information metrics derived from posterior distribution, language model and word alignments as feature functions. The ViterbiExtract is a special case where a single binary feature function defined from word alignments is used. Its good performance (as shown in Table 3) suggests that word alignments are very indicative of phrase pair quality. So we design comparative experiments to capture word alignment impact only. We start with basic features that include model-based posterior, bilingual and monolingual information metrics. Its results on different test sets are presented in the “basic” row of Table 4. We add word alignment feature (“+align” row), and Features 04dev 04test 05test 06dev 06test basic 0.393 0.406 0.496 0.205 0.199 +align 0.401 0.429 0.502 0.208 0.196 +align BLT 0.411 0.427 0.500 0.216 0.208 Table 4: Translation Results (BLEU) of discriminative phrase training approach using different features 75K 250K 132K PP1 PP3 PP2 Model−4 New Features 04dev 04test 05test 06dev 06test PP2 0.380 0.395 0.480 0.207 0.202 PP1+PP2 0.380 0.403 0.485 0.210 0.204 PP2+PP3 0.411 0.427 0.500 0.216 0.208 PP1+PP2+PP3 0.412 0.432 0.500 0.217 0.214 Table 5: Translation Results (BLEU) of Different Phrase Pair Combination then apply bilinear transform to the consistency ratio WPPCR as described in section 3.4 (“+align BLT” row). The parameter controlling the degree of attenuation in BLT is also optimized together with other feature weights. 
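The joint tuning of the feature weights, the threshold tau, and the BLT attenuation parameter mentioned above uses the downhill simplex method, which an off-the-shelf Nelder-Mead routine is enough to drive. In the sketch below everything inside neg_dev_bleu is a placeholder: the real objective would unpack the parameters, rebuild the phrase table, decode the development set, and return minus its BLEU score.

    import numpy as np
    from scipy.optimize import minimize

    def neg_dev_bleu(params):
        """Placeholder objective standing in for: unpack params into (lambda_k, tau, a),
        rebuild the phrase table, decode the dev set, return -BLEU. The quadratic
        below exists only so the snippet runs."""
        return float(np.sum((params - 1.0) ** 2))

    x0 = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 3.0])   # e.g. four feature weights, tau, a
    result = minimize(neg_dev_bleu, x0, method="Nelder-Mead",
                      options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 200})
    print(result.x)                                  # the tuned parameter vector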
With the basic features, the new phrase extraction approach performs better than the baseline method with HMM word alignment models but similar to the baseline method with Model-4. With the word alignment based feature WPPCR, we obtain a 2% improvement on 04test set but not much on other sets except slight degradation on 06test. Finally, applying BLT transform to WPPCR leads to additional 0.8 BLEU point on 06dev set and 1.2 point on 06test set. This confirms the effectiveness of word alignment based features. Now we compare the phrase table using the proposed method to that extracted using the baseline ViterbiExtract method with Model-4 word alignments. The Venn diagram in Table 5 shows how the two phrase tables overlap with each other and size of each part. As expected, they have a large number of common phrase pairs (PP2). The new method is able to extract more phrase pairs than the baseline with Model-4. PP1 is the set of phrase pairs found by Model-4 alignments. Removing PP1 from the baseline phrase table (comparing the first group of scores) or adding PP1 to the new phrase table 87 (the second group of scores) overall results in no or marginal performance change. On the other hand, adding phrase pairs extracted by the new method only (PP3) can lead to significant BLEU score increases (comparing row 1 vs. 3, and row 2 vs. 4). 6 Conclusions In this paper, the problem of extracting phrase translation is formulated as an information retrieval process implemented with a log-linear model aiming for a balanced precision and recall. We have presented a generic phrase translation extraction procedure which is parameterized with feature functions. It can be optimized jointly with the translation engine to directly maximize the end-to-end translation performance. Multiple feature functions were investigated. Our experimental results on IWSLT ChineseEnglish corpus have demonstrated consistent and significant improvement over the widely used word alignment matrix based extraction method. 3 Acknowledgement We would like to thank Xiaodong Cui, Radu Florian and other IBM colleagues for useful discussions and the anonymous reviewers for their constructive suggestions. References N. Ayan and B. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In Proc. of ACL, pages 9–16. S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72. P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19:263–312. S. F. Chen and J. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proc. of ACL, pages 310–318. Y. Deng and W. Byrne. 2005. HMM word and phrase alignment for statistical machine translation. In Proc. of HLT-EMNLP, pages 169–176. 3By parallelism, we have shown the feasibility and effectiveness (results not presented here) of the proposed method in handling millions of sentence pairs. H. Johnson, J. Martin, G. Foster, and R. Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proc. of EMNLP-CoNLL, pages 967– 975. P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrasebased translation. In Proc. of HLT-NAACL, pages 48– 54. D. Lin. 1999. Automatic identification of noncompositional phrases. In Proc. 
of ACL, pages 317– 324. D. Marcu and D. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. of EMNLP, pages 133–139. J. A. Nelder and R. Mead. 1965. A simplex method for function minimization. Computer Journal, 7:308– 313. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. F. J. Och, D. Gildea, and et al. 2004. A smorgasbord of features for statistical machine translation. In Proc. of HLT-NAACL, pages 161–168. F. Och. 2002. Statistical Machine Translation: From Single Word Models to Alignment Templates. Ph.D. thesis, RWTH Aachen, Germany. A. V. Oppenheim and R. W. Schafer. 1989. DiscreteTime Signal Processing. Prentice-Hall. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311–318. M. Paul. 2006. Overview of the IWSLT 2006 evaluation campaign. In Proc. of IWSLT, pages 1–15. C. Tillmann and T. Zhang. 2006. A discriminative global training algorithm for statistical MT. In Proc. of ACL, pages 721–728. A. Venugopal, S. Vogel, and A. Waibel. 2003. Effective phrase translation extraction from alignment models. In Proc. of ACL, pages 319–326. S. Vogel, H. Ney, and C. Tillmann. 1996. HMM based word alignment in statistical translation. In Proc. of the COLING. D. Wu. 1995. An algorithm for simultaneously bracketing parallel texts by aligning words. In Proc. of ACL, pages 244–251. K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. In Proc. of ACL, pages 523– 530. R. Zens, E. Matusov, and H. Ney. 2004. Improved word alignment using a symmetric lexicon model. In Proc. of COLING, pages 36–42. B. Zhao, S. Vogel, M. Eck, and A. Waibel. 2004. Phrase pair rescoring with term weighting for statistical machine translation. In Proc. of EMNLP, pages 206–213. 88
Proceedings of ACL-08: HLT, pages 879–887, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Analyzing the Errors of Unsupervised Learning Percy Liang Dan Klein Computer Science Division, EECS Department University of California at Berkeley Berkeley, CA 94720 {pliang,klein}@cs.berkeley.edu Abstract We identify four types of errors that unsupervised induction systems make and study each one in turn. Our contributions include (1) using a meta-model to analyze the incorrect biases of a model in a systematic way, (2) providing an efficient and robust method of measuring distance between two parameter settings of a model, and (3) showing that local optima issues which typically plague EM can be somewhat alleviated by increasing the number of training examples. We conduct our analyses on three models: the HMM, the PCFG, and a simple dependency model. 1 Introduction The unsupervised induction of linguistic structure from raw text is an important problem both for understanding language acquisition and for building language processing systems such as parsers from limited resources. Early work on inducing grammars via EM encountered two serious obstacles: the inappropriateness of the likelihood objective and the tendency of EM to get stuck in local optima. Without additional constraints on bracketing (Pereira and Shabes, 1992) or on allowable rewrite rules (Carroll and Charniak, 1992), unsupervised grammar learning was ineffective. Since then, there has been a large body of work addressing the flaws of the EM-based approach. Syntactic models empirically more learnable than PCFGs have been developed (Clark, 2001; Klein and Manning, 2004). Smith and Eisner (2005) proposed a new objective function; Smith and Eisner (2006) introduced a new training procedure. Bayesian approaches can also improve performance (Goldwater and Griffiths, 2007; Johnson, 2007; Kurihara and Sato, 2006). Though these methods have improved induction accuracy, at the core they all still involve optimizing non-convex objective functions related to the likelihood of some model, and thus are not completely immune to the difficulties associated with early approaches. It is therefore important to better understand the behavior of unsupervised induction systems in general. In this paper, we take a step back and present a more statistical view of unsupervised learning in the context of grammar induction. We identify four types of error that a system can make: approximation, identifiability, estimation, and optimization errors (see Figure 1). We try to isolate each one in turn and study its properties. Approximation error is caused by a mis-match between the likelihood objective optimized by EM and the true relationship between sentences and their syntactic structures. Our key idea for understanding this mis-match is to “cheat” and initialize EM with the true relationship and then study the ways in which EM repurposes our desired syntactic structures to increase likelihood. We present a metamodel of the changes that EM makes and show how this tool can shed some light on the undesired biases of the HMM, the PCFG, and the dependency model with valence (Klein and Manning, 2004). Identifiability error can be incurred when two distinct parameter settings yield the same probability distribution over sentences. One type of nonidentifiability present in HMMs and PCFGs is label symmetry, which even makes computing a meaningful distance between parameters NP-hard. 
We present a method to obtain lower and upper bounds on such a distance. Estimation error arises from having too few training examples, and optimization error stems from 879 EM getting stuck in local optima. While it is to be expected that estimation error should decrease as the amount of data increases, we show that optimization error can also decrease. We present striking experiments showing that if our data actually comes from the model family we are learning with, we can sometimes recover the true parameters by simply running EM without clever initialization. This result runs counter to the conventional attitude that EM is doomed to local optima; it suggests that increasing the amount of data might be an effective way to partially combat local optima. 2 Unsupervised models Let x denote an input sentence and y denote the unobserved desired output (e.g., a parse tree). We consider a model family P = {pθ(x, y) : θ ∈Θ}. For example, if P is the set of all PCFGs, then the parameters θ would specify all the rule probabilities of a particular grammar. We sometimes use θ and pθ interchangeably to simplify notation. In this paper, we analyze the following three model families: In the HMM, the input x is a sequence of words and the output y is the corresponding sequence of part-of-speech tags. In the PCFG, the input x is a sequence of POS tags and the output y is a binary parse tree with yield x. We represent y as a multiset of binary rewrites of the form (y →y1 y2), where y is a nonterminal and y1, y2 can be either nonterminals or terminals. In the dependency model with valence (DMV) (Klein and Manning, 2004), the input x = (x1, . . . , xm) is a sequence of POS tags and the output y specifies the directed links of a projective dependency tree. The generative model is as follows: for each head xi, we generate an independent sequence of arguments to the left and to the right from a direction-dependent distribution over tags. At each point, we stop with a probability parametrized by the direction and whether any arguments have already been generated in that direction. See Klein and Manning (2004) for a formal description. In all our experiments, we used the Wall Street Journal (WSJ) portion of the Penn Treebank. We binarized the PCFG trees and created gold dependency trees according to the Collins head rules. We trained 45-state HMMs on all 49208 sentences, 11-state PCFGs on WSJ-10 (7424 sentences) and DMVs on WSJ-20 (25523 sentences) (Klein and Manning, 2004). We ran EM for 100 iterations with the parameters initialized uniformly (always plus a small amount of random noise). We evaluated the HMM and PCFG by mapping model states to Treebank tags to maximize accuracy. 3 Decomposition of errors Now we will describe the four types of errors (Figure 1) more formally. Let p∗(x, y) denote the distribution which governs the true relationship between the input x and output y. In general, p∗does not live in our model family P. We are presented with a set of n unlabeled examples x(1), . . . , x(n) drawn i.i.d. from the true p∗. In unsupervised induction, our goal is to approximate p∗by some model pθ ∈P in terms of strong generative capacity. A standard approach is to use the EM algorithm to optimize the empirical likelihood ˆE log pθ(x).1 However, EM only finds a local maximum, which we denote ˆθEM, so there is a discrepancy between what we get (pˆθEM) and what we want (p∗). 
We will define this discrepancy later, but for now, it suffices to remark that the discrepancy depends on the distribution over y whereas learning depends only on the distribution over x. This is an important property that distinguishes unsupervised induction from more standard supervised learning or density estimation scenarios. Now let us walk through the four types of error bottom up. First, ˆθEM, the local maximum found by EM, is in general different from ˆθ ∈ argmaxθ ˆE log pθ(x), any global maximum, which we could find given unlimited computational resources. Optimization error refers to the discrepancy between ˆθ and ˆθEM. Second, our training data is only a noisy sample from the true p∗. If we had infinite data, we would choose an optimal parameter setting under the model, θ∗ 2 ∈argmaxθ E log pθ(x), where now the expectation E is taken with respect to the true p∗instead of the training data. The discrepancy between θ∗ 2 and ˆθ is the estimation error. Note that θ∗ 2 might not be unique. Let θ∗ 1 denote 1Here, the expectation ˆEf(x) def = 1 n Pn i=1 f(x(i)) denotes averaging some function f over the training data. 880 p∗ = true model Approximation error (Section 4) θ∗ 1 = Best(argmaxθ E log pθ(x)) Identifiability error (Section 5) θ∗ 2 ∈argmaxθ E log pθ(x) Estimation error (Section 6) ˆθ ∈argmaxθ ˆE log pθ(x) Optimization error (Section 7) ˆθEM = EM(ˆE log pθ(x)) P Figure 1: The discrepancy between what we get (ˆθEM) and what we want (p∗) can be decomposed into four types of errors. The box represents our model family P, which is the set of possible parametrized distributions we can represent. Best(S) returns the θ ∈S which has the smallest discrepancy with p∗. the maximizer of E log pθ(x) that has the smallest discrepancy with p∗. Since θ∗ 1 and θ∗ 2 have the same value under the objective function, we would not be able to choose θ∗ 1 over θ∗ 2, even with infinite data or unlimited computation. Identifiability error refers to the discrepancy between θ∗ 1 and θ∗ 2. Finally, the model family P has fundamental limitations. Approximation error refers to the discrepancy between p∗and pθ∗ 1. Note that θ∗ 1 is not necessarily the best in P. If we had labeled data, we could find a parameter setting in P which is closer to p∗by optimizing joint likelihood E log pθ(x, y) (generative training) or even conditional likelihood E log pθ(y | x) (discriminative training). In the remaining sections, we try to study each of the four errors in isolation. In practice, since it is difficult to work with some of the parameter settings that participate in the error decomposition, we use computationally feasible surrogates so that the error under study remains the dominant effect. 4 Approximation error We start by analyzing approximation error, the discrepancy between p∗and pθ∗ 1 (the model found by optimizing likelihood), a point which has been dis20 40 60 80 100 iteration -18.4 -18.0 -17.6 -17.2 -16.7 log-likelihood 20 40 60 80 100 iteration 0.2 0.4 0.6 0.8 1.0 Labeled F1 Figure 2: For the PCFG, when we initialize EM with the supervised estimate ˆθgen, the likelihood increases but the accuracy decreases. cussed by many authors (Merialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006).2 To confront the question of specifically how the likelihood diverges from prediction accuracy, we perform the following experiment: we initialize EM with the supervised estimate3 ˆθgen = argmaxθ ˆE log pθ(x, y), which acts as a surrogate for p∗. 
As we run EM, the likelihood increases but the accuracy decreases (Figure 2 shows this trend for the PCFG; the HMM and DMV models behave similarly). We believe that the initial iterations of EM contain valuable information about the incorrect biases of these models. However, EM is changing hundreds of thousands of parameters at once in a non-trivial way, so we need a way of characterizing the important changes. One broad observation we can make is that the first iteration of EM reinforces the systematic mistakes of the supervised initializer. In the first E-step, the posterior counts that are computed summarize the predictions of the supervised system. If these match the empirical counts, then the M-step does not change the parameters. But if the supervised system predicts too many JJs, for example, then the M-step will update the parameters to reinforce this bias. 4.1 A meta-model for analyzing EM We would like to go further and characterize the specific changes EM makes. An initial approach is to find the parameters that changed the most during the first iteration (weighted by the correspond2Here, we think of discrepancy between p and p′ as the error incurred when using p′ for prediction on examples generated from p; in symbols, E(x,y)∼ploss(y, argmaxy′ p′(y′ | x)). 3For all our models, the supervised estimate is solved in closed form by taking ratios of counts. 881 ing expected counts computed in the E-step). For the HMM, the three most changed parameters are the transitions 2:DT→8:JJ, START→0:NNP, and 8:JJ→3:NN.4 If we delve deeper, we can see that 2:DT→3:NN (the parameter with the 10th largest change) fell and 2:DT→8:JJ rose. After checking with a few examples, we can then deduce that some nouns were retagged as adjectives. Unfortunately, this type of ad-hoc reasoning requires considerable manual effort and is rather subjective. Instead, we propose using a general meta-model to analyze the changes EM makes in an automatic and objective way. Instead of treating parameters as the primary object of study, we look at predictions made by the model and study how they change over time. While a model is a distribution over sentences, a meta-model is a distribution over how the predictions of the model change. Let R(y) denote the set of parts of a prediction y that we are interested in tracking. Each part (c, l) ∈R(y) consists of a configuration c and a location l. For a PCFG, we define a configuration to be a rewrite rule (e.g., c = PP→IN NP), and a location l = [i, k, j] to be a span [i, j] split at k, where the rewrite c is applied. In this work, each configuration is associated with a parameter of the model, but in general, a configuration could be a larger unit such as a subtree, allowing one to track more complex changes. The size of a configuration governs how much the meta-model generalizes from individual examples. Let y(i,t) denote the model prediction on the i-th training example after t iterations of EM. To simplify notation, we write Rt = R(y(i,t)). The metamodel explains how Rt became Rt+1.5 In general, we expect a part in Rt+1 to be explained by a part in Rt that has a similar location and furthermore, we expect the locations of the two parts to be related in some consistent way. The metamodel uses two notions to formalize this idea: a distance d(l, l′) and a relation r(l, l′). 
For the PCFG, d(l, l′) is the number of positions among i,j,k that are the same as the corresponding ones in l′, and r((i, k, j), (i′, k′, j′)) = (sign(i −i′), sign(j − 4Here 2:DT means state 2 of the HMM, which was greedily mapped to DT. 5If the same part appears in both Rt and Rt+1, we remove it from both sets. j′), sign(k −k′)) is one of 33 values. We define a migration as a triple (c, c′, r(l, l′)); this is the unit of change we want to extract from the meta-model. Our meta-model provides the following generative story of how Rt becomes Rt+1: each new part (c′, l′) ∈Rt+1 chooses an old part (c, l) ∈Rt with some probability that depends on (1) the distance between the locations l and l′ and (2) the likelihood of the particular migration. Formally: pmeta(Rt+1 | Rt) = Y (c′,l′)∈Rt+1 X (c,l)∈Rt Z−1 l′ e−αd(l,l′)p(c′ | c, r(l, l′)), where Zl = P (c,l)∈Rt e−αd(l,l′) is a normalization constant, and α is a hyperparameter controlling the possibility of distant migrations (set to 3 in our experiments). We learn the parameters of the meta-model with an EM algorithm similar to the one for IBM model 1. Fortunately, the likelihood objective is convex, so we need not worry about local optima. 4.2 Results of the meta-model We used our meta-model to analyze the approximation errors of the HMM, DMV, and PCFG. For these models, we initialized EM with the supervised estimate ˆθgen and collected the model predictions as EM ran. We then trained the meta-model on the predictions between successive iterations. The metamodel gives us an expected count for each migration. Figure 3 lists the migrations with the highest expected counts. From these migrations, we can see that EM tries to explain x better by making the corresponding y more regular. In fact, many of the HMM migrations on the first iteration attempt to resolve inconsistencies in gold tags. For example, noun adjuncts (e.g., stock-index), tagged as both nouns and adjectives in the Treebank, tend to become consolidated under adjectives, as captured by migration (B). EM also re-purposes under-utilized states to better capture distributional similarities. For example, state 24 has migrated to state 40 (N), both of which are now dominated by proper nouns. State 40 initially contained only #, but was quickly overrun with distributionally similar proper nouns such as Oct. and Chapter, which also precede numbers, just as # does. 882 Iteration 0→1 (A) START 4:NN 24:NNP (B) 4:NN 8:JJ 4:NN (C) 24:NNP 24:NNP 36:NNPS Iteration 1→2 (D) 4:NN 8:JJ 4:NN (E) START 4:NN 24:NNP (F) 8:JJ 11:RB 27:TO Iteration 2→3 (G) 24:NNP 8:JJ U.S. (H) 24:NNP 8:JJ 4:NN (I) 3:DT 24:NNP 8:JJ Iteration 3→4 (J) 11:RB 32:RP up (K) 24:NNP 8:JJ U.S. (L) 19:, 11:RB 32:RP Iteration 4→5 (M) 24:NNP 34:$ 15:CD (N) 2:IN 24:NNP 40:NNP (O) 11:RB 32:RP down (a) Top HMM migrations. Example: migration (D) means a NN→NN transition is replaced by JJ→NN. Iteration 0→1 Iteration 1→2 Iteration 2→3 Iteration 3→4 Iteration 4→5 (A) DT NN NN (D) NNP NNP NNP (G) DT JJ NNS (J) DT JJ NN (M) POS JJ NN (B) JJ NN NN (E) NNP NNP NNP (H) MD RB VB (K) DT NNP NN (N) NNS RB VBP (C) NNP NNP (F) DT NNP NNP (I) VBP RB VB (L) PRP$ JJ NN (O) NNS RB VBD (b) Top DMV migrations. Example: migration (A) means a DT attaches to the closer NN. 
Iteration 0→1 Iteration 1→2 Iteration 2→3 Iteration 3→4 Iteration 4→5 (A) RB 1:VP 4:S RB 1:VP 1:VP (D) NNP 0:NP 0:NP NNP NNP 0:NP (G) DT 0:NP 0:NP DT NN 0:NP (J) TO VB 1:VP TO VB 2:PP (M) CD NN 0:NP CD NN 3:ADJP (B) 0:NP 2:PP 0:NP 1:VP 2:PP 1:VP (E) VBN 2:PP 1:VP 1:VP 2:PP 1:VP (H) 0:NP 1:VP 4:S 0:NP 1:VP 4:S (K) MD 1:VP 1:VP MD VB 1:VP (N) VBD 0:NP 1:VP VBD 3:ADJP 1:VP (C) VBZ 0:NP 1:VP VBZ 0:NP 1:VP (F) 0:NP 1:VP 4:S 0:NP 1:VP 4:S (I) TO VB 1:VP TO VB 2:PP (L) NNP NNP 0:NP NNP NNP 6:NP (O) 0:NP NN 0:NP 0:NP NN 0:NP (c) Top PCFG migrations. Example: migration (D) means a NP→NNP NP rewrite is replaced by NP→NNP NNP, where the new NNP right child spans less than the old NP right child. Figure 3: We show the prominent migrations that occur during the first 5 iterations of EM for the HMM, DMV, and PCFG, as recovered by our meta-model. We sort the migrations across each iteration by their expected counts under the meta-model and show the top 3. Iteration 0 corresponds to the correct outputs. Blue indicates the new iteration, red indicates the old. DMV migrations also try to regularize model predictions, but in a different way—in terms of the number of arguments. Because the stop probability is different for adjacent and non-adjacent arguments, it is statistically much cheaper to generate one argument rather than two or more. For example, if we train a DMV on only DT JJ NN, it can fit the data perfectly by using a chain of single arguments, but perfect fit is not possible if NN generates both DT and JJ (which is the desired structure); this explains migration (J). Indeed, we observed that the variance of the number of arguments decreases with more EM iterations (for NN, from 1.38 to 0.41). In general, low-entropy conditional distributions are preferred. Migration (H) explains how adverbs now consistently attach to verbs rather than modals. After a few iterations, the modal has committed itself to generating exactly one verb to the right, which is statistically advantageous because there must be a verb after a modal, while the adverb is optional. This leaves the verb to generate the adverb. The PCFG migrations regularize categories in a manner similar to the HMM, but with the added complexity of changing bracketing structures. For example, sentential adverbs are re-analyzed as VP adverbs (A). Sometimes, multiple migrations explain the same phenomenon.6 For example, migrations (B) and (C) indicate that PPs that previously attached to NPs are now raised to the verbal level. Tree rotation is another common phenomenon, leading to many left-branching structures (D,G,H). The migrations that happen during one iteration can also trigger additional migrations in the next. For example, the raising of the PP (B,C) inspires more of the 6We could consolidate these migrations by using larger configurations, but at the risk of decreased generalization. 883 same raising (E). As another example, migration (I) regularizes TO VB infinitival clauses into PPs, and this momentum carries over to the next iteration with even greater force (J). In summary, the meta-model facilitates our analyses by automatically identifying the broad trends. We believe that the central idea of modeling the errors of a system is a powerful one which can be used to analyze a wide range of models, both supervised and unsupervised. 5 Identifiability error While approximation error is incurred when likelihood diverges from accuracy, identifiability error is concerned with the case where likelihood is indifferent to accuracy. 
We say a set of parameters S is identifiable (in terms of x) if pθ(x) ̸= pθ′(x) for every θ, θ′ ∈S where θ ̸= θ′.7 In general, identifiability error is incurred when the set of maximizers of E log pθ(x) is non-identifiable.8 Label symmetry is perhaps the most familiar example of non-identifiability and is intrinsic to models with hidden labels (HMM and PCFG, but not DMV). We can permute the hidden labels without changing the objective function or even the nature of the solution, so there is no reason to prefer one permutation over another. While seemingly benign, this symmetry actually presents a serious challenge in measuring discrepancy (Section 5.1). Grenager et al. (2005) augments an HMM to allow emission from a generic stopword distribution at any position with probability q. Their model would definitely not be identifiable if q were a free parameter, since we can set q to 0 and just mix in the stopword distribution with each of the other emission distributions to obtain a different parameter setting yielding the same overall distribution. This is a case where our notion of desired structure is absent in the likelihood, and a prior over parameters could help break ties. 7For our three model families, θ is identifiable in terms of (x, y), but not in terms of x alone. 8We emphasize that non-identifiability is in terms of x, so two parameter settings could still induce the same marginal distribution on x (weak generative capacity) while having different joint distributions on (x, y) (strong generative capacity). Recall that discrepancy depends on the latter. The above non-identifiabilities apply to all parameter settings, but another type of non-identifiability concerns only the maximizers of E log pθ(x). Suppose the true data comes from a K-state HMM. If we attempt to fit an HMM with K + 1 states, we can split any one of the K states and maintain the same distribution on x. Or, if we learn a PCFG on the same HMM data, then we can choose either the left- or right-branching chain structures, which both mimic the true HMM equally well. 5.1 Permutation-invariant distance KL-divergence is a natural measure of discrepancy between two distributions, but it is somewhat nontrivial to compute—for our three recursive models, it requires solving fixed point equations, and becomes completely intractable in face of label symmetry. Thus we propose a more manageable alternative: dµ(θ || θ′) def = P j µj|θj −θ′ j| P j µj , (1) where we weight the difference between the j-th component of the parameter vectors by µj, the jth expected sufficient statistic with respect to pθ (the expected counts computed in the E-step).9 Unlike KL, our distance dµ is only defined on distributions in the model family and is not invariant to reparametrization. Like KL, dµ is asymmetric, with the first argument holding the status of being the “true” parameter setting. In our case, the parameters are conditional probabilities, so 0 ≤dµ(θ || θ′) ≤1, so we can interpret dµ as an expected difference between these probabilities. Unfortunately, label symmetry can wreak havoc on our distance measure dµ. Suppose we want to measure the distance between θ and θ′. If θ′ is simply θ with the labels permuted, then dµ(θ || θ′) would be substantial even though the distance ought to be zero. 
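Equation (1) itself is cheap to evaluate once the parameters and their expected sufficient statistics are laid out as flat vectors. The following is a minimal numpy sketch with made-up inputs, purely for illustration:

    import numpy as np

    def d_mu(theta, theta_prime, mu):
        """Count-weighted L1 distance of Equation (1): each |theta_j - theta'_j| is
        weighted by mu_j, the j-th expected sufficient statistic under p_theta."""
        theta = np.asarray(theta, dtype=float)
        theta_prime = np.asarray(theta_prime, dtype=float)
        mu = np.asarray(mu, dtype=float)
        return float(np.sum(mu * np.abs(theta - theta_prime)) / np.sum(mu))

    # Toy example: three conditional probabilities with very different usage counts.
    theta       = [0.70, 0.20, 0.10]
    theta_prime = [0.10, 0.20, 0.70]
    mu          = [900.0, 90.0, 10.0]     # heavily used vs. rarely used components
    print(d_mu(theta, theta_prime, mu))   # 0.546, dominated by the first component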
We define a revised distance to correct for this by taking the minimum distance over all label permutations: Dµ(θ || θ′) = min π dµ(θ || π(θ′)), (2) 9Without this factor, rarely used components could contribute to the sum as much as frequently used ones, thus, making the distance overly pessimistic. 884 where π(θ′) denotes the parameter setting resulting from permuting the labels according to π. (The DMV has no label symmetries, so just dµ works.) For mixture models, we can compute Dµ(θ || θ′) efficiently as follows. Note that each term in the summation of (1) is associated with one of the K labels. We can form a K ×K matrix M, where each entry Mij is the distance between the parameters involving label i of θ and label j of θ′. Dµ(θ || θ′) can then be computed by finding a maximum weighted bipartite matching on M using the O(K3) Hungarian algorithm (Kuhn, 1955). For models such as the HMM and PCFG, computing Dµ is NP-hard, since the summation in dµ (1) contains both first-order terms which depend on one label (e.g., emission parameters) and higher-order terms which depend on more than one label (e.g., transitions or rewrites). We cannot capture these problematic higher-order dependencies in M. However, we can bound Dµ(θ || θ′) as follows. We create M using only first-order terms and find the best matching (permutation) to obtain a lower bound Dµ and an associated permutation π0 achieving it. Since Dµ(θ || θ′) takes the minimum over all permutations, dµ(θ || π(θ′)) is an upper bound for any π, in particular for π = π0. We then use a local search procedure that changes π to further tighten the upper bound. Let Dµ denote the final value. 6 Estimation error Thus far, we have considered approximation and identifiability errors, which have to do with flaws of the model. The remaining errors have to do with how well we can fit the model. To focus on these errors, we consider the case where the true model is in our family (p∗∈P). To keep the setting as realistic as possible, we do supervised learning on real labeled data to obtain θ∗= argmaxθ ˆE log p(x, y). We then throw away our real data and let p∗= pθ∗. Now we start anew: sample new artificial data from θ∗, learn a model using this artificial data, and see how close we get to recovering θ∗. In order to compute estimation error, we need to compare θ∗with ˆθ, the global maximizer of the likelihood on our generated data. However, we cannot compute ˆθ exactly. Let us therefore first consider the simpler supervised scenario. Here, ˆθgen has a closed form solution, so there is no optimization error. Using our distance Dµ (defined in Section 5.1) to quantify estimation error, we see that, for the HMM, ˆθgen quickly approaches θ∗as we increase the amount of data (Table 1). # examples 500 5K 50K 500K Dµ(θ∗|| ˆθgen) 0.003 6.3e-4 2.7e-4 8.5e-5 Dµ(θ∗|| ˆθgen) 0.005 0.001 5.2e-4 1.7e-4 Dµ(θ∗|| ˆθgen-EM) 0.022 0.018 0.008 0.002 Dµ(θ∗|| ˆθgen-EM) 0.049 0.039 0.016 0.004 Table 1: Lower and upper bounds on the distance from the true θ∗for the HMM as we increase the number of examples. In the unsupervised case, we use the following procedure to obtain a surrogate for ˆθ: initialize EM with the supervised estimate ˆθgen and run EM for 100 iterations. Let ˆθgen-EM denote the final parameters, which should be representative of ˆθ. Table 1 shows that the estimation error of ˆθgen-EM is an order of magnitude higher than that of ˆθgen, which is to expected since ˆθgen-EM does not have access to labeled data. 
However, this error can also be driven down given a moderate number of examples. 7 Optimization error Finally, we study optimization error, which is the discrepancy between the global maximizer ˆθ and ˆθEM, the result of running EM starting from a uniform initialization (plus some small noise). As before, we cannot compute ˆθ, so we use ˆθgen-EM as a surrogate. Also, instead of comparing ˆθgen-EM and ˆθ with each other, we compare each of their discrepancies with respect to θ∗. Let us first consider optimization error in terms of prediction error. The first observation is that there is a gap between the prediction accuracies of ˆθgen-EM and ˆθEM, but this gap shrinks considerably as we increase the number of examples. Figures 4(a,b,c) support this for all three model families: for the HMM, both ˆθgen-EM and ˆθEM eventually achieve around 90% accuracy; for the DMV, 85%. For the PCFG, ˆθEM still lags ˆθgen-EM by 10%, but we believe that more data can further reduce this gap. Figure 4(d) shows that these trends are not particular to artificial data. On real WSJ data, the gap 885 500 5K 50K 500K # examples 0.6 0.7 0.8 0.9 1.0 Accuracy 500 5K 50K 500K # examples 0.6 0.7 0.8 0.9 1.0 Directed F1 500 5K 50K # examples 0.5 0.6 0.8 0.9 1.0 Labeled F1 1K 3K 10K 40K # examples 0.3 0.4 0.6 0.7 0.8 Accuracy (a) HMM (artificial data) (b) DMV (artificial data) (c) PCFG (artificial data) (d) HMM (real data) 500 5K 50K 500K # examples 0.02 0.05 0.07 0.1 0.12 Dµ(θ∗|| ·) ˆθgen-EM ˆθEM (rand 1) ˆθEM (rand 2) ˆθEM (rand 3) 20 40 60 80 100 iteration -173.3 -171.4 -169.4 -167.4 -165.5 log-likelihood 20 40 60 80 100 iteration 0.2 0.4 0.6 0.8 1.0 Accuracy Sup. init. Unif. init. (e) HMM (artificial data) (f) HMM log-likelihood/accuracy on 500K examples Figure 4: Compares the performance of ˆθEM (EM with a uniform initialization) against ˆθgen-EM (EM initialized with the supervised estimate) on (a–c) various models, (d) real data. (e) measures distance instead of accuracy and (f) shows a sample EM run. between ˆθgen-EM and ˆθEM also diminishes for the HMM. To reaffirm the trends, we also measure distance Dµ. Figure 4(e) shows that the distance from ˆθEM to the true parameters θ∗decreases, but the gap between ˆθgen-EM and ˆθEM does not close as decisively as it did for prediction error. It is quite surprising that by simply running EM with a neutral initialization, we can accurately learn a complex model with thousands of parameters. Figures 4(f,g) show how both likelihood and accuracy, which both start quite low, improve substantially over time for the HMM on artificial data. Carroll and Charniak (1992) report that EM fared poorly with local optima. We do not claim that there are no local optima, but only that the likelihood surface that EM is optimizing can become smoother with more examples. With more examples, there is less noise in the aggregate statistics, so it might be easier for EM to pick out the salient patterns. Srebro et al. (2006) made a similar observation in the context of learning Gaussian mixtures. They characterized three regimes: one where EM was successful in recovering the true clusters (given lots of data), another where EM failed but the global optimum was successful, and the last where both failed (without much data). There is also a rich body of theoretical work on learning latent-variable models. 
Specialized algorithms can provably learn certain constrained discrete hidden-variable models, some in terms of weak generative capacity (Ron et al., 1998; Clark and Thollard, 2005; Adriaans, 1999), others in term of strong generative capacity (Dasgupta, 1999; Feldman et al., 2005). But with the exception of Dasgupta and Schulman (2007), there is little theoretical understanding of EM, let alone on complex model families such as the HMM, PCFG, and DMV. 8 Conclusion In recent years, many methods have improved unsupervised induction, but these methods must still deal with the four types of errors we have identified in this paper. One of our main contributions of this paper is the idea of using the meta-model to diagnose the approximation error. Using this tool, we can better understand model biases and hopefully correct for them. We also introduced a method for measuring distances in face of label symmetry and ran experiments exploring the effectiveness of EM as a function of the amount of data. Finally, we hope that setting up the general framework to understand the errors of unsupervised induction systems will aid the development of better methods and further analyses. 886 References P. W. Adriaans. 1999. Learning shallow context-free languages under simple distributions. Technical report, Stanford University. G. Carroll and E. Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Workshop Notes for Statistically-Based NLP Techniques, pages 1–13. A. Clark and F. Thollard. 2005. PAC-learnability of probabilistic deterministic finite state automata. JMLR, 5:473–497. A. Clark. 2001. Unsupervised induction of stochastic context free grammars with distributional clustering. In CoNLL. S. Dasgupta and L. Schulman. 2007. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. JMLR, 8. S. Dasgupta. 1999. Learning mixtures of Gaussians. In FOCS. J. Feldman, R. O’Donnell, and R. A. Servedio. 2005. Learning mixtures of product distributions over discrete domains. In FOCS, pages 501–510. S. Goldwater and T. Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In ACL. T. Grenager, D. Klein, and C. D. Manning. 2005. Unsupervised learning of field segmentation models for information extraction. In ACL. A. Haghighi and D. Klein. 2006. Prototype-based grammar induction. In ACL. M. Johnson. 2007. Why doesn’t EM find good HMM POS-taggers? In EMNLP/CoNLL. D. Klein and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In ACL. H. W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly, 2:83–97. K. Kurihara and T. Sato. 2006. Variational Bayesian grammar induction for natural language. In International Colloquium on Grammatical Inference. B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20:155– 171. F. Pereira and Y. Shabes. 1992. Inside-outside reestimation from partially bracketed corpora. In ACL. D. Ron, Y. Singer, and N. Tishby. 1998. On the learnability and usage of acyclic probabilistic finite automata. Journal of Computer and System Sciences, 56:133– 152. N. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In ACL. N. Smith and J. Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In ACL. N. Srebro, G. Shakhnarovich, and S. Roweis. 2006. 
An investigation of computational and informational limits in Gaussian mixture clustering. In ICML, pages 865–872. 887
2008
100
Proceedings of ACL-08: HLT, pages 888–896, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Joint Word Segmentation and POS Tagging using a Single Perceptron Yue Zhang and Stephen Clark Oxford University Computing Laboratory Wolfson Building, Parks Road Oxford OX1 3QD, UK {yue.zhang,stephen.clark}@comlab.ox.ac.uk Abstract For Chinese POS tagging, word segmentation is a preliminary step. To avoid error propagation and improve segmentation by utilizing POS information, segmentation and tagging can be performed simultaneously. A challenge for this joint approach is the large combined search space, which makes efficient decoding very hard. Recent research has explored the integration of segmentation and POS tagging, by decoding under restricted versions of the full combined search space. In this paper, we propose a joint segmentation and POS tagging model that does not impose any hard constraints on the interaction between word and POS information. Fast decoding is achieved by using a novel multiple-beam search algorithm. The system uses a discriminative statistical model, trained using the generalized perceptron algorithm. The joint model gives an error reduction in segmentation accuracy of 14.6% and an error reduction in tagging accuracy of 12.2%, compared to the traditional pipeline approach. 1 Introduction Since Chinese sentences do not contain explicitly marked word boundaries, word segmentation is a necessary step before POS tagging can be performed. Typically, a Chinese POS tagger takes segmented inputs, which are produced by a separate word segmentor. This two-step approach, however, has an obvious flaw of error propagation, since word segmentation errors cannot be corrected by the POS tagger. A better approach would be to utilize POS information to improve word segmentation. For example, the POS-word pattern “number word” + “Ç (a common measure word)” can help in segmenting the character sequence “Ç|” into the word sequence “ (one) Ç (measure word) | (person)” instead of “ (one) Ç| (personal; adj)”. Moreover, the comparatively rare POS pattern “number word” + “number word” can help to prevent segmenting a long number word into two words. In order to avoid error propagation and make use of POS information for word segmentation, segmentation and POS tagging can be viewed as a single task: given a raw Chinese input sentence, the joint POS tagger considers all possible segmented and tagged sequences, and chooses the overall best output. A major challenge for such a joint system is the large search space faced by the decoder. For a sentence with n characters, the number of possible output sequences is O(2n−1 · T n), where T is the size of the tag set. Due to the nature of the combined candidate items, decoding can be inefficient even with dynamic programming. Recent research on Chinese POS tagging has started to investigate joint segmentation and tagging, reporting accuracy improvements over the pipeline approach. Various decoding approaches have been used to reduce the combined search space. Ng and Low (2004) mapped the joint segmentation and POS tagging task into a single character sequence tagging problem. Two types of tags are assigned to each character to represent its segmentation and POS. For example, the tag “b NN” indicates a character at the beginning of a noun. Using this method, POS features are allowed to interact with segmentation. 
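A minimal sketch of this character-level encoding (the helper name and the b/m/e/s boundary symbols are our own illustrative choices; the exact tag inventory of Ng and Low (2004) may differ slightly):

def to_char_tags(tagged_words):
    # Encode a segmented, POS-tagged sentence as one boundary-POS tag
    # per character, so Joint S&T becomes a character labelling problem.
    char_tags = []
    for word, pos in tagged_words:
        if len(word) == 1:
            char_tags.append((word, "s_" + pos))
        else:
            char_tags.append((word[0], "b_" + pos))
            for c in word[1:-1]:
                char_tags.append((c, "m_" + pos))
            char_tags.append((word[-1], "e_" + pos))
    return char_tags

# Hypothetical three-word sentence (placeholders for Chinese characters):
print(to_char_tags([("AB", "NN"), ("C", "VV"), ("DEF", "NR")]))
# [('A', 'b_NN'), ('B', 'e_NN'), ('C', 's_VV'),
#  ('D', 'b_NR'), ('E', 'm_NR'), ('F', 'e_NR')]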
888 Since tagging is restricted to characters, the search space is reduced to O((4T)n), and beam search decoding is effective with a small beam size. However, the disadvantage of this model is the difficulty in incorporating whole word information into POS tagging. For example, the standard “word + POS tag” feature is not explicitly applicable. Shi and Wang (2007) introduced POS information to segmentation by reranking. N-best segmentation outputs are passed to a separately-trained POS tagger, and the best output is selected using the overall POSsegmentation probability score. In this system, the decoding for word segmentation and POS tagging are still performed separately, and exact inference for both is possible. However, the interaction between POS and segmentation is restricted by reranking: POS information is used to improve segmentation only for the N segmentor outputs. In this paper, we propose a novel joint model for Chinese word segmentation and POS tagging, which does not limiting the interaction between segmentation and POS information in reducing the combined search space. Instead, a novel multiple beam search algorithm is used to do decoding efficiently. Candidate ranking is based on a discriminative joint model, with features being extracted from segmented words and POS tags simultaneously. The training is performed by a single generalized perceptron (Collins, 2002). In experiments with the Chinese Treebank data, the joint model gave an error reduction of 14.6% in segmentation accuracy and 12.2% in the overall segmentation and tagging accuracy, compared to the traditional pipeline approach. In addition, the overall results are comparable to the best systems in the literature, which exploit knowledge outside the training data, even though our system is fully data-driven. Different methods have been proposed to reduce error propagation between pipelined tasks, both in general (Sutton et al., 2004; Daum´e III and Marcu, 2005; Finkel et al., 2006) and for specific problems such as language modeling and utterance classification (Saraclar and Roark, 2005) and labeling and chunking (Shimizu and Haas, 2006). Though our model is built specifically for Chinese word segmentation and POS tagging, the idea of using the perceptron model to solve multiple tasks simultaneously can be generalized to other tasks. 1 word w 2 word bigram w1w2 3 single-character word w 4 a word of length l with starting character c 5 a word of length l with ending character c 6 space-separated characters c1 and c2 7 character bigram c1c2 in any word 8 the first / last characters c1 / c2 of any word 9 word w immediately before character c 10 character c immediately before word w 11 the starting characters c1 and c2 of two consecutive words 12 the ending characters c1 and c2 of two consecutive words 13 a word of length l with previous word w 14 a word of length l with next word w Table 1: Feature templates for the baseline segmentor 2 The Baseline System We built a two-stage baseline system, using the perceptron segmentation model from our previous work (Zhang and Clark, 2007) and the perceptron POS tagging model from Collins (2002). We use baseline system to refer to the system which performs segmentation first, followed by POS tagging (using the single-best segmentation); baseline segmentor to refer to the segmentor from (Zhang and Clark, 2007) which performs segmentation only; and baseline POStagger to refer to the Collins tagger which performs POS tagging only (given segmentation). 
The features used by the baseline segmentor are shown in Table 1. The features used by the POS tagger, some of which are different to those from Collins (2002) and are specific to Chinese, are shown in Table 2. The word segmentation features are extracted from word bigrams, capturing word, word length and character information in the context. The word length features are normalized, with those more than 15 being treated as 15. The POS tagging features are based on contextual information from the tag trigram, as well as the neighboring three-word window. To reduce overfitting and increase the decoding speed, templates 4, 5, 6 and 7 only include words with less than 3 characters. Like the baseline segmentor, the baseline tagger also normalizes word length features. 889 1 tag t with word w 2 tag bigram t1t2 3 tag trigram t1t2t3 4 tag t followed by word w 5 word w followed by tag t 6 word w with tag t and previous character c 7 word w with tag t and next character c 8 tag t on single-character word w in character trigram c1wc2 9 tag t on a word starting with char c 10 tag t on a word ending with char c 11 tag t on a word containing char c (not the starting or ending character) 12 tag t on a word starting with char c0 and containing char c 13 tag t on a word ending with char c0 and containing char c 14 tag t on a word containing repeated char cc 15 tag t on a word starting with character category g 16 tag t on a word ending with character category g Table 2: Feature templates for the baseline POS tagger Templates 15 and 16 in Table 2 are inspired by the CTBMorph feature templates in Tseng et al. (2005), which gave the most accuracy improvement in their experiments. Here the category of a character is the set of tags seen on the character during training. Other morphological features from Tseng et al. (2005) are not used because they require extra web corpora besides the training data. During training, the baseline POS tagger stores special word-tag pairs into a tag dictionary (Ratnaparkhi, 1996). Such information is used by the decoder to prune unlikely tags. For each word occurring more than N times in the training data, the decoder can only assign a tag the word has been seen with in the training data. This method led to improvement in the decoding speed as well as the output accuracy for English POS tagging (Ratnaparkhi, 1996). Besides tags for frequent words, our baseline POS tagger also uses the tag dictionary to store closed-set tags (Xia, 2000) – those associated only with a limited number of Chinese words. 3 Joint Segmentation and Tagging Model In this section, we build a joint word segmentation and POS tagging model that uses exactly the same source of information as the baseline system, by applying the feature templates from the baseline word segmentor and POS tagger. No extra knowledge is used by the joint model. However, because word segmentation and POS tagging are performed simultaneously, POS information participates in word segmentation. 3.1 Formulation of the joint model We formulate joint word segmentation and POS tagging as a single problem, which maps a raw Chinese sentence to a segmented and POS tagged output. Given an input sentence x, the output F(x) satisfies: F(x) = arg max y∈GEN(x) Score(y) where GEN(x) represents the set of possible outputs for x. Score(y) is computed by a feature-based linear model. Denoting the global feature vector for the tagged sentence y with Φ(y), we have: Score(y) = Φ(y) · ⃗w where ⃗w is the parameter vector in the model. 
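To make the linear score concrete, here is a hedged sketch of how the global feature vector Φ(y) can be assembled as counts of template instantiations and dotted with the weights. Only a tiny subset of the templates from Tables 1 and 2 is shown, and the feature-string formats and helper names are our own, not the systems' internal representation.

from collections import Counter

def global_features(tagged_words):
    # Collect a few template instantiations into a global count vector Phi(y).
    phi = Counter()
    prev_word = "<s>"
    for word, tag in tagged_words:
        phi["W=" + word] += 1                      # template: word w
        phi["WW=" + prev_word + "_" + word] += 1   # template: word bigram
        phi["TW=" + tag + "_" + word] += 1         # template: tag t with word w
        prev_word = word
    return phi

def score(tagged_words, weights):
    # Score(y) = Phi(y) . w, with weights stored in a dictionary.
    phi = global_features(tagged_words)
    return sum(weights.get(f, 0.0) * count for f, count in phi.items())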
Each element in ⃗w gives a weight to its corresponding element in Φ(y), which is the count of a particular feature over the whole sentence y. We calculate the ⃗w value by supervised learning, using the averaged perceptron algorithm (Collins, 2002), given in Figure 1. 1 We take the union of feature templates from the baseline segmentor (Table 1) and POS tagger (Table 2) as the feature templates for the joint system. All features are treated equally and processed together according to the linear model, regardless of whether they are from the baseline segmentor or tagger. In fact, most features from the baseline POS tagger, when used in the joint model, represent segmentation patterns as well. For example, the aforementioned pattern “number word” + “Ç”, which is 1In order to provide a comparison for the perceptron algorithm we also tried SVMstruct (Tsochantaridis et al., 2004) for parameter estimation, but this training method was prohibitively slow. 890 Inputs: training examples (xi, yi) Initialization: set ⃗w = 0 Algorithm: for t = 1..T, i = 1..N calculate zi = arg maxy∈GEN(xi) Φ(y) · ⃗w if zi ̸= yi ⃗w = ⃗w + Φ(yi) −Φ(zi) Outputs: ⃗w Figure 1: The perceptron learning algorithm useful only for the POS “number word” in the baseline tagger, is also an effective indicator of the segmentation of the two words (especially “Ç”) in the joint model. 3.2 The decoding algorithm One of the main challenges for the joint segmentation and POS tagging system is the decoding algorithm. The speed and accuracy of the decoder is important for the perceptron learning algorithm, but the system faces a very large search space of combined candidates. Given the linear model and feature templates, exact inference is very hard even with dynamic programming. Experiments with the standard beam-search decoder described in (Zhang and Clark, 2007) resulted in low accuracy. This beam search algorithm processes an input sentence incrementally. At each stage, the incoming character is combined with existing partial candidates in all possible ways to generate new partial candidates. An agenda is used to control the search space, keeping only the B best partial candidates ending with the current character. The algorithm is simple and efficient, with a linear time complexity of O(BTn), where n is the size of input sentence, and T is the size of the tag set (T = 1 for pure word segmentation). It worked well for word segmentation alone (Zhang and Clark, 2007), even with an agenda size as small as 8, and a simple beam search algorithm also works well for POS tagging (Ratnaparkhi, 1996). However, when applied to the joint model, it resulted in a reduction in segmentation accuracy (compared to the baseline segmentor) even with B as large as 1024. One possible cause of the poor performance of the standard beam search method is the combined nature of the candidates in the search space. In the baseInput: raw sentence sent – a list of characters Variables: candidate sentence item – a list of (word, tag) pairs; maximum word-length record maxlen for each tag; the agenda list agendas; the tag dictionary tagdict; start index for current word; end index for current word Initialization: agendas[0] = [“”], agendas[i] = [] (i! 
= 0) Algorithm: for end index = 1 to sent.length: foreach tag: for start index = max(1, end index −maxlen[tag] + 1) to end index: word = sent[start index..end index] if (word, tag) consistent with tagdict: for item ∈agendas[start index −1]: item1 = item item1.append((word,tag)) agendas[end index].insert(item1) Outputs: agendas[sent.length].best item Figure 2: The decoding algorithm for the joint word segmentor and POS tagger line POS tagger, candidates in the beam are tagged sequences ending with the current word, which can be compared directly with each other. However, for the joint problem, candidates in the beam are segmented and tagged sequences up to the current character, where the last word can be a complete word or a partial word. A problem arises in whether to give POS tags to incomplete words. If partial words are given POS tags, it is likely that some partial words are “justified” as complete words by the current POS information. On the other hand, if partial words are not given POS tag features, the correct segmentation for long words can be lost during partial candidate comparison (since many short completed words with POS tags are likely to be preferred to a long incomplete word with no POS tag features).2 2We experimented with both assigning POS features to partial words and omitting them; the latter method performed better but both performed significantly worse than the multiple beam search method described below. 891 Another possible cause is the exponential growth in the number of possible candidates with increasing sentence size. The number increases from O(T n) for the baseline POS tagger to O(2n−1T n) for the joint system. As a result, for an incremental decoding algorithm, the number of possible candidates increases exponentially with the current word or character index. In the POS tagging problem, a new incoming word enlarges the number of possible candidates by a factor of T (the size of the tag set). For the joint problem, however, the enlarging factor becomes 2T with each incoming character. The speed of search space expansion is much faster, but the number of candidates is still controlled by a single, fixed-size beam at any stage. If we assume that the beam is not large enough for all the candidates at at each stage, then, from the newly generated candidates, the baseline POS tagger can keep 1/T for the next processing stage, while the joint model can keep only 1/2T, and has to discard the rest. Therefore, even when the candidate comparison standard is ignored, we can still see that the chance for the overall best candidate to fall out of the beam is largely increased. Since the search space growth is exponential, increasing the fixed beam size is not effective in solving the problem. To solve the above problems, we developed a multiple beam search algorithm, which compares candidates only with complete tagged words, and enables the size of the search space to scale with the input size. The algorithm is shown in Figure 2. In this decoder, an agenda is assigned to each character in the input sentence, recording the B best segmented and tagged partial candidates ending with the character. The input sentence is still processed incrementally. However, now when a character is processed, existing partial candidates ending with any previous characters are available. Therefore, the decoder enumerates all possible tagged words ending with the current character, and combines each word with the partial candidates ending with its previous character. 
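The following Python sketch restates the decoder of Figure 2. The function and argument names, the tagdict.allows interface, and the default length cap are our own assumptions; score stands in for the trained linear model applied to a partial candidate (a list of (word, tag) pairs).

def decode(chars, pos_tags, score, B=16, max_len=20,
           maxlen_by_tag=None, tagdict=None):
    # Multiple-beam search: one agenda of the B best partial candidates
    # per character position.
    n = len(chars)
    agendas = [[] for _ in range(n + 1)]
    agendas[0] = [[]]                                  # one empty candidate
    for end in range(1, n + 1):
        candidates = []
        for tag in pos_tags:
            longest = (maxlen_by_tag.get(tag, max_len)
                       if maxlen_by_tag else max_len)  # prune by max word length
            for start in range(max(1, end - longest + 1), end + 1):
                word = "".join(chars[start - 1:end])
                if tagdict is not None and not tagdict.allows(word, tag):
                    continue                           # prune by tag dictionary
                for item in agendas[start - 1]:
                    candidates.append(item + [(word, tag)])
        candidates.sort(key=score, reverse=True)
        agendas[end] = candidates[:B]                  # keep the B best ending here
    return agendas[n][0] if agendas[n] else []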
All input characters are processed in the same way, and the final output is the best candidate in the final agenda. The time complexity of the algorithm is O(WTBn), with W being the maximum word size, T being the total number of POS tags and n the number of characters in the input. It is also linear in the input size. Moreover, the decoding algorithm gives competent accuracy with a small agenda size of B = 16. To further limit the search space, two optimizations are used. First, the maximum word length for each tag is recorded and used by the decoder to prune unlikely candidates. Because the majority of tags only apply to words with length 1 or 2, this method has a strong effect. Development tests showed that it improves the speed significantly, while having a very small negative influence on the accuracy. Second, like the baseline POS tagger, the tag dictionary is used for Chinese closed set tags and the tags for frequent words. To words outside the tag dictionary, the decoder still tries to assign every possible tag. 3.3 Online learning Apart from features, the decoder maintains other types of information, including the tag dictionary, the word frequency counts used when building the tag dictionary, the maximum word lengths by tag, and the character categories. The above data can be collected by scanning the corpus before training starts. However, in both the baseline tagger and the joint POS tagger, they are updated incrementally during the perceptron training process, consistent with online learning.3 The online updating of word frequencies, maximum word lengths and character categories is straightforward. For the online updating of the tag dictionary, however, the decision for frequent words must be made dynamically because the word frequencies keep changing. This is done by caching the number of occurrences of the current most frequent word M, and taking all words currently above the threshold M/5000 + 5 as frequent words. 5000 is a rough figure to control the number of frequent words, set according to Zipf’s law. The parameter 5 is used to force all tags to be enumerated before a word is seen more than 5 times. 4 Related Work Ng and Low (2004) and Shi and Wang (2007) were described in the Introduction. Both models reduced 3We took this approach because we wanted the whole training process to be online. However, for comparison purposes, we also tried precomputing the above information before training and the difference in performance was negligible. 892 the large search space by imposing strong restrictions on the form of search candidates. In particular, Ng and Low (2004) used character-based POS tagging, which prevents some important POS tagging features such as word + POS tag; Shi and Wang (2007) used an N-best reranking approach, which limits the influence of POS tagging on segmentation to the N-best list. In comparison, our joint model does not impose any hard limitations on the interaction between segmentation and POS information.4 Fast decoding speed is achieved by using a novel multiple-beam search algorithm. Nakagawa and Uchimoto (2007) proposed a hybrid model for word segmentation and POS tagging using an HMM-based approach. Word information is used to process known-words, and character information is used for unknown words in a similar way to Ng and Low (2004). In comparison, our model handles character and word information simultaneously in a single perceptron model. 5 Experiments The Chinese Treebank (CTB) 4 is used for the experiments. 
It is separated into two parts: CTB 3 (420K characters in 150K words / 10364 sentences) is used for the final 10-fold cross validation, and the rest (240K characters in 150K words / 4798 sentences) is used as training and test data for development. The standard F-scores are used to measure both the word segmentation accuracy and the overall segmentation and tagging accuracy, where the overall accuracy is TF = 2pr/(p + r), with the precision p being the percentage of correctly segmented and tagged words in the decoder output, and the recall r being the percentage of gold-standard tagged words that are correctly identified by the decoder. For direct comparison with Ng and Low (2004), the POS tagging accuracy is also calculated by the percentage of correct tags on each character. 5.1 Development experiments The learning curves of the baseline and joint models are shown in Figure 3, Figure 4 and Figure 5, respectively. These curves are used to show the conver4Apart from the beam search algorithm, we do impose some minor limitations on the search space by methods such as the tag dictionary, but these can be seen as optional pruning methods for optimization. 0.88 0.89 0.9 0.91 0.92 1 2 3 4 5 6 7 8 9 10 Number of training iterations F-score Figure 3: The learning curve of the baseline segmentor 0.86 0.87 0.88 0.89 0.9 1 2 3 4 5 6 7 8 9 10 Number of training iterations F-score Figure 4: The learning curve of the baseline tagger 0.8 0.82 0.84 0.86 0.88 0.9 0.92 1 2 3 4 5 6 7 8 9 10 Number of training iterations F-score segmentation accuracy overall accuracy Figure 5: The learning curves of the joint system gence of perceptron and decide the number of training iterations for the test. It should be noticed that the accuracies from Figure 4 and Figure 5 are not comparable because gold-standard segmentation is used as the input for the baseline tagger. According to the figures, the number of training iterations 893 Tag Seg NN NR VV AD JJ CD NN 20.47 – 0.78 4.80 0.67 2.49 0.04 NR 5.95 3.61 – 0.19 0.04 0.07 0 VV 12.13 6.51 0.11 – 0.93 0.56 0.04 AD 3.24 0.30 0 0.71 – 0.33 0.22 JJ 3.09 0.93 0.15 0.26 0.26 – 0.04 CD 1.08 0.04 0 0 0.07 0 – Table 3: Error analysis for the joint model for the baseline segmentor, POS tagger, and the joint system are set to 8, 6, and 7, respectively for the remaining experiments. There are many factors which can influence the accuracy of the joint model. Here we consider the special character category features and the effect of the tag dictionary. The character category features (templates 15 and 16 in Table 2) represent a Chinese character by all the tags associated with the character in the training data. They have been shown to improve the accuracy of a Chinese POS tagger (Tseng et al., 2005). In the joint model, these features also represent segmentation information, since they concern the starting and ending characters of a word. Development tests showed that the overall tagging F-score of the joint model increased from 84.54% to 84.93% using the character category features. In the development test, the use of the tag dictionary improves the decoding speed of the joint model, reducing the decoding time from 416 seconds to 256 seconds. The overall tagging accuracy also increased slightly, consistent with observations from the pure POS tagger. The error analysis for the development test is shown in Table 3. Here an error is counted when a word in the standard output is not produced by the decoder, due to incorrect segmentation or tag assignment. 
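A small sketch of how these evaluation quantities can be computed from gold and predicted (word, tag) sequences; the helper names are ours, and the span-matching convention is the usual one, so a predicted word counts as correct only if both its boundaries and its tag agree with the gold standard.

def tagged_spans(tagged_words):
    # Map a (word, tag) sequence to character spans with tags.
    spans, start = set(), 0
    for word, tag in tagged_words:
        spans.add((start, start + len(word), tag))
        start += len(word)
    return spans

def f_score(gold, predicted):
    # Balanced F-measure F = 2pr/(p+r) over segmented-and-tagged words.
    # Dropping the tag from the spans gives the pure segmentation F-score.
    g, p = tagged_spans(gold), tagged_spans(predicted)
    correct = len(g & p)
    if correct == 0:
        return 0.0
    prec, rec = correct / len(p), correct / len(g)
    return 2 * prec * rec / (prec + rec)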
Statistics about the six most frequently mistaken tags are shown in the table, where each row presents the analysis of one tag from the standard output, and each column gives a wrongly assigned value. The column “Seg” represents segmentation errors. Each figure in the table shows the percentage of the corresponding error from all the errors. It can be seen from the table that the NN-VV and VV-NN mistakes were the most commonly made by the decoder, while the NR-NN mistakes are also freBaseline Joint # SF TF TA SF TF TA 1 96.98 92.91 94.14 97.21 93.46 94.66 2 97.16 93.20 94.34 97.62 93.85 94.79 3 95.02 89.53 91.28 95.94 90.86 92.38 4 95.51 90.84 92.55 95.92 91.60 93.31 5 95.49 90.91 92.57 96.06 91.72 93.25 6 93.50 87.33 89.87 94.56 88.83 91.14 7 94.48 89.44 91.61 95.30 90.51 92.41 8 93.58 88.41 90.93 95.12 90.30 92.32 9 93.92 89.15 91.35 94.79 90.33 92.45 10 96.31 91.58 93.01 96.45 91.96 93.45 Av. 95.20 90.33 92.17 95.90 91.34 93.02 Table 4: The accuracies by 10-fold cross validation SF – segmentation F-score, TF – overall F-score, TA – tagging accuracy by character. quent. These three types of errors significantly outnumber the rest, together contributing 14.92% of all the errors. Moreover, the most commonly mistaken tags are NN and VV, while among the most frequent tags in the corpus, PU, DEG and M had comparatively less errors. Lastly, segmentation errors contribute around half (51.47%) of all the errors. 5.2 Test results 10-fold cross validation is performed to test the accuracy of the joint word segmentor and POS tagger, and to make comparisons with existing models in the literature. Following Ng and Low (2004), we partition the sentences in CTB 3, ordered by sentence ID, into 10 groups evenly. In the nth test, the nth group is used as the testing data. Table 4 shows the detailed results for the cross validation tests, each row representing one test. As can be seen from the table, the joint model outperforms the baseline system in each test. Table 5 shows the overall accuracies of the baseline and joint systems, and compares them to the relevant models in the literature. The accuracy of each model is shown in a row, where “Ng” represents the models from Ng and Low (2004) and “Shi” represents the models from Shi and Wang (2007). Each accuracy measure is shown in a column, including the segmentation F-score (SF), the overall tagging 894 Model SF TF TA Baseline+ (Ng) 95.1 – 91.7 Joint+ (Ng) 95.2 – 91.9 Baseline+* (Shi) 95.85 91.67 – Joint+* (Shi) 96.05 91.86 – Baseline (ours) 95.20 90.33 92.17 Joint (ours) 95.90 91.34 93.02 Table 5: The comparison of overall accuracies by 10-fold cross validation using CTB + – knowledge about sepcial characters, * – knowledge from semantic net outside CTB. F-score (TF) and the tagging accuracy by characters (TA). As can be seen from the table, our joint model achieved the largest improvement over the baseline, reducing the segmentation error by 14.58% and the overall tagging error by 12.18%. The overall tagging accuracy of our joint model was comparable to but less than the joint model of Shi and Wang (2007). Despite the higher accuracy improvement from the baseline, the joint system did not give higher overall accuracy. One likely reason is that Shi and Wang (2007) included knowledge about special characters and semantic knowledge from web corpora (which may explain the higher baseline accuracy), while our system is completely data-driven. However, the comparison is indirect because our partitions of the CTB corpus are different. 
Shi and Wang (2007) also chunked the sentences before doing 10-fold cross validation, but used an uneven split. We chose to follow Ng and Low (2004) and split the sentences evenly to facilitate further comparison. Compared with Ng and Low (2004), our baseline model gave slightly better accuracy, consistent with our previous observations about the word segmentors (Zhang and Clark, 2007). Due to the large accuracy gain from the baseline, our joint model performed much better. In summary, when compared with existing joint word segmentation and POS tagging systems in the literature, our proposed model achieved the best accuracy boost from the cascaded baseline, and competent overall accuracy. 6 Conclusion and Future Work We proposed a joint Chinese word segmentation and POS tagging model, which achieved a considerable reduction in error rate compared to a baseline twostage system. We used a single linear model for combined word segmentation and POS tagging, and chose the generalized perceptron algorithm for joint training. and beam search for efficient decoding. However, the application of beam search was far from trivial because of the size of the combined search space. Motivated by the question of what are the comparable partial hypotheses in the space, we developed a novel multiple beam search decoder which effectively explores the large search space. Similar techniques can potentially be applied to other problems involving joint inference in NLP. Other choices are available for the decoding of a joint linear model, such as exact inference with dynamic programming, provided that the range of features allows efficient processing. The baseline feature templates for Chinese segmentation and POS tagging, when added together, makes exact inference for the proposed joint model very hard. However, the accuracy loss from the beam decoder, as well as alternative decoding algorithms, are worth further exploration. The joint system takes features only from the baseline segmentor and the baseline POS tagger to allow a fair comparison. There may be additional features that are particularly useful to the joint system. Open features, such as knowledge of numbers and European letters, and relationships from semantic networks (Shi and Wang, 2007), have been reported to improve the accuracy of segmentation and POS tagging. Therefore, given the flexibility of the feature-based linear model, an obvious next step is the study of open features in the joint segmentor and POS tagger. Acknowledgements We thank Hwee-Tou Ng and Mengqiu Wang for their helpful discussions and sharing of experimental data, and the anonymous reviewers for their suggestions. This work is supported by the ORS and Clarendon Fund. 895 References Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the EMNLP conference, pages 1–8, Philadelphia, PA. Hal Daum´e III and Daniel Marcu. 2005. Learning as search optimization: Approximate large margin methods for structured prediction. In Proceedings of the ICML Conference, pages 169–176, Bonn, Germany. Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the EMNLP Conference, pages 618–626, Sydney, Australia. Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and pos tagging. 
In Proceedings of ACL Demo and Poster Session, pages 217–220, Prague, Czech Republic. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? Word-based or character-based? In Proceedings of the EMNLP Conference, pages 277–284, Barcelona, Spain. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the EMNLP Conference, pages 133–142, Philadelphia, PA. Murat Saraclar and Brian Roark. 2005. Joint discriminative language modeling and utterance classification. In Proceedings of the ICASSP Conference, volume 1, Philadelphia, USA. Yanxin Shi and Mengqiu Wang. 2007. A dual-layer CRF based joint decoding method for cascade segmentation and labelling tasks. In Proceedings of the IJCAI Conference, Hyderabad, India. Nobuyuki Shimizu and Andrew Haas. 2006. Exact decoding for jointly labeling and chunking sequences. In Proceedings of the COLING/ACL Conference, Poster Sessions, Sydney, Australia. Charles Sutton, Khashayar Rohanimanesh, and Andrew McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. In Proceedings of the ICML Conference, Banff, Canada. Huihsin Tseng, Daniel Jurafsky, and Christopher Manning. 2005. Morphological features help POS tagging of unknown words across language varieties. In Proceedings of the Fourth SIGHAN Workshop, Jeju Island, Korea. I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the ICML Conference, Banff, Canada. Fei Xia. 2000. The part-of-speech tagging guidelines for the Chinese Treebank (3.0). IRCS Report, University of Pennsylvania. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the ACL Conference, pages 840–847, Prague, Czech Republic. 896
2008
101
Proceedings of ACL-08: HLT, pages 897–904, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Cascaded Linear Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging Wenbin Jiang † Liang Huang ‡ Qun Liu † Yajuan L¨u † †Key Lab. of Intelligent Information Processing ‡Department of Computer & Information Science Institute of Computing Technology University of Pennsylvania Chinese Academy of Sciences Levine Hall, 3330 Walnut Street P.O. Box 2704, Beijing 100190, China Philadelphia, PA 19104, USA [email protected] [email protected] Abstract We propose a cascaded linear model for joint Chinese word segmentation and partof-speech tagging. With a character-based perceptron as the core, combined with realvalued features such as language models, the cascaded model is able to efficiently utilize knowledge sources that are inconvenient to incorporate into the perceptron directly. Experiments show that the cascaded model achieves improved accuracies on both segmentation only and joint segmentation and part-of-speech tagging. On the Penn Chinese Treebank 5.0, we obtain an error reduction of 18.5% on segmentation and 12% on joint segmentation and part-of-speech tagging over the perceptron-only baseline. 1 Introduction Word segmentation and part-of-speech (POS) tagging are important tasks in computer processing of Chinese and other Asian languages. Several models were introduced for these problems, for example, the Hidden Markov Model (HMM) (Rabiner, 1989), Maximum Entropy Model (ME) (Ratnaparkhi and Adwait, 1996), and Conditional Random Fields (CRFs) (Lafferty et al., 2001). CRFs have the advantage of flexibility in representing features compared to generative ones such as HMM, and usually behaves the best in the two tasks. Another widely used discriminative method is the perceptron algorithm (Collins, 2002), which achieves comparable performance to CRFs with much faster training, so we base this work on the perceptron. To segment and tag a character sequence, there are two strategies to choose: performing POS tagging following segmentation; or joint segmentation and POS tagging (Joint S&T). Since the typical approach of discriminative models treats segmentation as a labelling problem by assigning each character a boundary tag (Xue and Shen, 2003), Joint S&T can be conducted in a labelling fashion by expanding boundary tags to include POS information (Ng and Low, 2004). Compared to performing segmentation and POS tagging one at a time, Joint S&T can achieve higher accuracy not only on segmentation but also on POS tagging (Ng and Low, 2004). Besides the usual character-based features, additional features dependent on POS’s or words can also be employed to improve the performance. However, as such features are generated dynamically during the decoding procedure, two limitation arise: on the one hand, the amount of parameters increases rapidly, which is apt to overfit on training corpus; on the other hand, exact inference by dynamic programming is intractable because the current predication relies on the results of prior predications. As a result, many theoretically useful features such as higherorder word or POS n-grams are difficult to be incorporated in the model efficiently. To cope with this problem, we propose a cascaded linear model inspired by the log-linear model (Och and Ney, 2004) widely used in statistical machine translation to incorporate different kinds of knowledge sources. 
Shown in Figure 1, the cascaded model has a two-layer architecture, with a characterbased perceptron as the core combined with other real-valued features such as language models. We 897 Core Linear Model (Perceptron) g1 = P i αi × fi ⃗α Outside-layer Linear Model S = P j wj × gj ⃗w f1 f2 f|R| g1 Word LM: g2 = Pwlm(W) g2 POS LM: g3 = Ptlm(T) g3 Labelling: g4 = P(T|W) g4 Generating: g5 = P(W|T) g5 Length: g6 = |W| g6 S Figure 1: Structure of Cascaded Linear Model. |R| denotes the scale of the feature space of the core perceptron. will describe it in detail in Section 4. In this architecture, knowledge sources that are intractable to incorporate into the perceptron, can be easily incorporated into the outside linear model. In addition, as these knowledge sources are regarded as separate features, we can train their corresponding models independently with each other. This is an interesting approach when the training corpus is large as it reduces the time and space consumption. Experiments show that our cascaded model can utilize different knowledge sources effectively and obtain accuracy improvements on both segmentation and Joint S&T. 2 Segmentation and POS Tagging Given a Chinese character sequence: C1:n = C1 C2 .. Cn the segmentation result can be depicted as: C1:e1 Ce1+1:e2 .. Cem−1+1:em while the segmentation and POS tagging result can be depicted as: C1:e1/t1 Ce1+1:e2/t2 .. Cem−1+1:em/tm Here, Ci (i = 1..n) denotes Chinese character, ti (i = 1..m) denotes POS tag, and Cl:r (l ≤r) denotes character sequence ranges from Cl to Cr. We can see that segmentation and POS tagging task is to divide a character sequence into several subsequences and label each of them a POS tag. It is a better idea to perform segmentation and POS tagging jointly in a uniform framework. According to Ng and Low (2004), the segmentation task can be transformed to a tagging problem by assigning each character a boundary tag of the following four types: • b: the begin of the word • m: the middle of the word • e: the end of the word • s: a single-character word We can extract segmentation result by splitting the labelled result into subsequences of pattern s or bm∗e which denote single-character word and multicharacter word respectively. In order to perform POS tagging at the same time, we expand boundary tags to include POS information by attaching a POS to the tail of a boundary tag as a postfix following Ng and Low (2004). As each tag is now composed of a boundary part and a POS part, the joint S&T problem is transformed to a uniform boundary-POS labelling problem. A subsequence of boundary-POS labelling result indicates a word with POS t only if the boundary tag sequence composed of its boundary part conforms to s or bm∗e style, and all POS tags in its POS part equal to t. For example, a tag sequence b NN m NN e NN represents a threecharacter word with POS tag NN. 3 The Perceptron The perceptron algorithm introduced into NLP by Collins (2002), is a simple but effective discriminative training method. It has comparable performance 898 Non-lexical-target Instances Cn (n = −2..2) C−2=e, C−1=…, C0=U, C1=/, C2=¡ CnCn+1 (n = −2..1) C−2C−1=e…, C−1C0=…U, C0C1=U/, C1C2=/¡ C−1C1 C−1C1=…/ Lexical-target Instances C0Cn (n = −2..2) C0C−2=Ue, C0C−1=U…, C0C0=UU, C0C1=U/, C0C2=U¡ C0CnCn+1 (n = −2..1) C0C−2C−1=Ue…, C0C−1C0=U…U, C0C0C1=UU/, C0C1C2=U/¡ C0C−1C1 C0C−1C1 = U…/ Table 1: Feature templates and instances. Suppose we are considering the third character ”U” in ”e… U /¡”. to CRFs, while with much faster training. 
The perceptron has been used in many NLP tasks, such as POS tagging (Collins, 2002), Chinese word segmentation (Ng and Low, 2004; Zhang and Clark, 2007) and so on. We trained a character-based perceptron for Chinese Joint S&T, and found that the perceptron itself could achieve considerably high accuracy on segmentation and Joint S&T. In following subsections, we describe the feature templates and the perceptron training algorithm. 3.1 Feature Templates The feature templates we adopted are selected from those of Ng and Low (2004). To compare with others conveniently, we excluded the ones forbidden by the close test regulation of SIGHAN, for example, Pu(C0), indicating whether character C0 is a punctuation. All feature templates and their instances are shown in Table 1. C represents a Chinese character while the subscript of C indicates its position in the sentence relative to the current character (it has the subscript 0). Templates immediately borrowed from Ng and Low (2004) are listed in the upper column named non-lexical-target. We called them non-lexical-target because predications derived from them can predicate without considering the current character C0. Templates in the column below are expanded from the upper ones. We add a field C0 to each template in the upper column, so that it can carry out predication according to not only the context but also the current character itself. As predications generated from such templates depend on the current character, we name these templates lexical-target. Note that the templates of Ng and Low (2004) have already contained some lexical-target ones. With the two kinds Algorithm 1 Perceptron training algorithm. 1: Input: Training examples (xi, yi) 2: ⃗α ←0 3: for t ←1 .. T do 4: for i ←1 .. N do 5: zi ←argmaxz∈GEN(xi) Φ(xi, z) · ⃗α 6: if zi ̸= yi then 7: ⃗α ←⃗α + Φ(xi, yi) −Φ(xi, zi) 8: Output: Parameters ⃗α of predications, the perceptron model will do exact predicating to the best of its ability, and can back off to approximately predicating if exact predicating fails. 3.2 Training Algorithm We adopt the perceptron training algorithm of Collins (2002) to learn a discriminative model mapping from inputs x ∈X to outputs y ∈Y , where X is the set of sentences in the training corpus and Y is the set of corresponding labelled results. Following Collins, we use a function GEN(x) generating all candidate results of an input x , a representation Φ mapping each training example (x, y) ∈X × Y to a feature vector Φ(x, y) ∈Rd, and a parameter vector ⃗α ∈Rd corresponding to the feature vector. d means the dimension of the vector space, it equals to the amount of features in the model. For an input character sequence x, we aim to find an output F(x) satisfying: F(x) = argmax y∈GEN(x) Φ(x, y) · ⃗α (1) Φ(x, y) · ⃗α represents the inner product of feature vector Φ(x, y) and the parameter vector ⃗α. We used the algorithm depicted in Algorithm 1 to tune the parameter vector ⃗α. 899 To alleviate overfitting on the training examples, we use the refinement strategy called “averaged parameters” (Collins, 2002) to the algorithm in Algorithm 1. 4 Cascaded Linear Model In theory, any useful knowledge can be incorporated into the perceptron directly, besides the characterbased features already adopted. Additional features most widely used are related to word or POS ngrams. However, such features are generated dynamically during the decoding procedure so that the feature space enlarges much more rapidly. 
Figure 2 shows the growing tendency of feature space with the introduction of these features as well as the character-based ones. We noticed that the templates related to word unigrams and bigrams bring to the feature space an enlargement much rapider than the character-base ones, not to mention the higher-order grams such as trigrams or 4-grams. In addition, even though these higher grams were managed to be used, there still remains another problem: as the current predication relies on the results of prior ones, the decoding procedure has to resort to approximate inference by maintaining a list of N-best candidates at each predication position, which evokes a potential risk to depress the training. To alleviate the drawbacks, we propose a cascaded linear model. It has a two-layer architecture, with a perceptron as the core and another linear model as the outside-layer. Instead of incorporating all features into the perceptron directly, we first trained the perceptron using character-based features, and several other sub-models using additional ones such as word or POS n-grams, then trained the outside-layer linear model using the outputs of these sub-models, including the perceptron. Since the perceptron is fixed during the second training step, the whole training procedure need relative small time and memory cost. The outside-layer linear model, similar to those in SMT, can synthetically utilize different knowledge sources to conduct more accurate comparison between candidates. In this layer, each knowledge source is treated as a feature with a corresponding weight denoting its relative importance. Suppose we have n features gj (j = 1..n) coupled with n corre 0 300000 600000 900000 1.2e+006 1.5e+006 1.8e+006 2.1e+006 2.4e+006 2.7e+006 3e+006 3.3e+006 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 Feature space Introduction of features growing curve Figure 2: Feature space growing curve. The horizontal scope X[i:j] denotes the introduction of different templates. X[0:5]: Cn (n = −2..2); X[5:9]: CnCn+1 (n = −2..1); X[9:10]: C−1C1; X[10:15]: C0Cn (n = −2..2); X[15:19]: C0CnCn+1 (n = −2..1); X[19:20]: C0C−1C1; X[20:21]: W0; X[21:22]: W−1W0. W0 denotes the current considering word, while W−1 denotes the word in front of W0. All the data are collected from the training procedure on MSR corpus of SIGHAN bakeoff 2. sponding weights wj (j = 1..n), each feature gj gives a score gj(r) to a candidate r, then the total score of r is given by: S(r) = X j=1..n wj × gj(r) (2) The decoding procedure aims to find the candidate r∗with the highest score: r∗= argmax r S(r) (3) While the mission of the training procedure is to tune the weights wj(j = 1..n) to guarantee that the candidate r with the highest score happens to be the best result with a high probability. As all the sub-models, including the perceptron, are regarded as separate features of the outside-layer linear model, we can train them respectively with special algorithms. In our experiments we trained a 3-gram word language model measuring the fluency of the segmentation result, a 4-gram POS language model functioning as the product of statetransition probabilities in HMM, and a word-POS co-occurrence model describing how much probably a word sequence coexists with a POS sequence. As shown in Figure 1, the character-based perceptron is used as the inside-layer linear model and sends its output to the outside-layer. 
Besides the output of the perceptron, the outside-layer also receive the outputs 900 of the word LM, the POS LM, the co-occurrence model and a word count penalty which is similar to the translation length penalty in SMT. 4.1 Language Model Language model (LM) provides linguistic probabilities of a word sequence. It is an important measure of fluency of the translation in SMT. Formally, an n-gram word LM approximates the probability of a word sequence W = w1:m with the following product: Pwlm(W) = m Y i=1 Pr(wi|wmax(0,i−n+1):i−1) (4) Similarly, the n-gram POS LM of a POS sequence T = t1:m is: Ptlm(T) = m Y i=1 Pr(ti|tmax(0,i−n+1):i−1) (5) Notice that a bi-gram POS LM functions as the product of transition probabilities in HMM. 4.2 Word-POS Co-occurrence Model Given a training corpus with POS tags, we can train a word-POS co-occurrence model to approximate the probability that the word sequence of the labelled result co-exists with its corresponding POS sequence. Using W = w1:m to denote the word sequence, T = t1:m to denote the corresponding POS sequence, P(T|W) to denote the probability that W is labelled as T, and P(W|T) to denote the probability that T generates W, we can define the cooccurrence model as follows: Co(W, T) = P(T|W)λwt × P(W|T)λtw (6) λwt and λtw denote the corresponding weights of the two components. Suppose the conditional probability Pr(t|w) describes the probability that the word w is labelled as the POS t, while Pr(w|t) describes the probability that the POS t generates the word w, then P(T|W) can be approximated by: P(T|W) ≃ m Y k=1 Pr(tk|wk) (7) And P(W|T) can be approximated by: P(W|T) ≃ m Y k=1 Pr(wk|tk) (8) Pr(w|t) and Pr(t|w) can be easily acquired by Maximum Likelihood Estimates (MLE) over the corpus. For instance, if the word w appears N times in training corpus and is labelled as POS t for n times, the probability Pr(t|w) can be estimated by the formula below: Pr(t|w) ≃n N (9) The probability Pr(w|t) could be estimated through the same approach. To facilitate tuning the weights, we use two components of the co-occurrence model Co(W, T) to represent the co-occurrence probability of W and T, rather than use Co(W, T) itself. In the rest of the paper, we will call them labelling model and generating model respectively. 5 Decoder Sequence segmentation and labelling problem can be solved through a viterbi style decoding procedure. In Chinese Joint S&T, the mission of the decoder is to find the boundary-POS labelled sequence with the highest score. Given a Chinese character sequence C1:n, the decoding procedure can proceed in a left-right fashion with a dynamic programming approach. By maintaining a stack of size N at each position i of the sequence, we can preserve the top N best candidate labelled results of subsequence C1:i during decoding. At each position i, we enumerate all possible word-POS pairs by assigning each POS to each possible word formed from the character subsequence spanning length l = 1.. min(i, K) (K is assigned 20 in all our experiments) and ending at position i, then we derive all candidate results by attaching each word-POS pair p (of length l) to the tail of each candidate result at the prior position of p (position i−l), and select for position i a N-best list of candidate results from all these candidates. 
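The enumeration just described (formalized as Algorithm 2 below) can be sketched as follows; the scoring of each extension, which combines the perceptron and the sub-model scores and is detailed next, is abstracted into a single score_extension helper. The K = 20 word-length limit matches the setting used in the experiments; this is a simplified sketch, not the actual decoder.

def decode(chars, pos_tags, score_extension, K=20, N=16):
    """Sketch of N-best decoding over a character sequence (cf. Algorithm 2).

    chars           : list of characters C_1..C_n
    pos_tags        : the POS tag set
    score_extension : hypothetical helper scoring the extension of a partial
                      candidate q with a new (word, tag) pair
    Each stack V[i] holds the N best partial results covering C_1..C_i,
    stored as (score, list of (word, tag) pairs) tuples.
    """
    n = len(chars)
    V = [[] for _ in range(n + 1)]
    V[0] = [(0.0, [])]                       # empty result covering no characters
    for i in range(1, n + 1):
        L = []
        for l in range(1, min(i, K) + 1):    # all word lengths ending at position i
            word = "".join(chars[i - l:i])
            for t in pos_tags:               # all POS tags for this word
                for score, q in V[i - l]:    # extend each candidate at position i-l
                    L.append((score + score_extension(q, word, t), q + [(word, t)]))
        L.sort(key=lambda item: item[0], reverse=True)
        V[i] = L[:N]                         # keep the N best at position i
    return V[n]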
When we derive a candidate result from a word-POS pair p and a candidate q at prior position of p, we calculate the scores of the word LM, the POS LM, the labelling probability and the generating probability, 901 Algorithm 2 Decoding algorithm. 1: Input: character sequence C1:n 2: for i ←1 .. n do 3: L ←∅ 4: for l ←1 .. min(i, K) do 5: w ←Ci−l+1:i 6: for t ∈POS do 7: p ←label w as t 8: for q ∈V[i −l] do 9: append D(q, p) to L 10: sort L 11: V[i] ←L[1 : N] 12: Output: n-best results V[n] as well as the score of the perceptron model. In addition, we add the score of the word count penalty as another feature to alleviate the tendency of LMs to favor shorter candidates. By equation 2, we can synthetically evaluate all these scores to perform more accurately comparing between candidates. Algorithm 2 shows the decoding algorithm. Lines 3 −11 generate a N-best list for each character position i. Line 4 scans words of all possible lengths l (l = 1.. min(i, K), where i points to the current considering character). Line 6 enumerates all POS’s for the word w spanning length l and ending at position i. Line 8 considers each candidate result in N-best list at prior position of the current word. Function D derives the candidate result from the word-POS pair p and the candidate q at prior position of p. 6 Experiments We reported results from two set of experiments. The first was conducted to test the performance of the perceptron on segmentation on the corpus from SIGHAN Bakeoff 2, including the Academia Sinica Corpus (AS), the Hong Kong City University Corpus (CityU), the Peking University Corpus (PKU) and the Microsoft Research Corpus (MSR). The second was conducted on the Penn Chinese Treebank 5.0 (CTB5.0) to test the performance of the cascaded model on segmentation and Joint S&T. In all experiments, we use the averaged parameters for the perceptrons, and F-measure as the accuracy measure. With precision P and recall R, the balance F-measure is defined as: F = 2PR/(P + R). 0.966 0.968 0.97 0.972 0.974 0.976 0.978 0.98 0.982 0.984 0 1 2 3 4 5 6 7 8 9 10 F-meassure number of iterations Perceptron Learning Curve Non-lex + avg Lex + avg Figure 3: Averaged perceptron learning curves with Nonlexical-target and Lexical-target feature templates. AS CityU PKU MSR SIGHAN best 0.952 0.943 0.950 0.964 Zhang & Clark 0.946 0.951 0.945 0.972 our model 0.954 0.958 0.940 0.975 Table 2: F-measure on SIGHAN bakeoff 2. SIGHAN best: best scores SIGHAN reported on the four corpus, cited from Zhang and Clark (2007). 6.1 Experiments on SIGHAN Bakeoff For convenience of comparing with others, we focus only on the close test, which means that any extra resource is forbidden except the designated training corpus. In order to test the performance of the lexical-target templates and meanwhile determine the best iterations over the training corpus, we randomly chosen 2, 000 shorter sentences (less than 50 words) as the development set and the rest as the training set (84, 294 sentences), then trained a perceptron model named NON-LEX using only nonlexical-target features and another named LEX using both the two kinds of features. Figure 3 shows their learning curves depicting the F-measure on the development set after 1 to 10 training iterations. We found that LEX outperforms NON-LEX with a margin of about 0.002 at each iteration, and its learning curve reaches a tableland at iteration 7. Then we trained LEX on each of the four corpora for 7 iterations. 
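As a concrete reference for the evaluation metric used above, here is a minimal sketch of computing the balanced F-measure for segmentation by comparing the character spans of the gold and predicted word sequences; for Joint S&T the spans would additionally carry the POS tags, and corpus-level scores aggregate the counts over all sentences. The span-set formulation is an illustrative choice.

def segmentation_f1(gold_words, pred_words):
    """Word-level precision/recall/F over character spans, F = 2PR/(P+R).

    gold_words, pred_words : lists of word strings for one sentence; a word is
    counted as correct only if both of its boundaries match the gold standard.
    """
    def spans(words):
        out, start = set(), 0
        for w in words:
            out.add((start, start + len(w)))
            start += len(w)
        return out

    gold, pred = spans(gold_words), spans(pred_words)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0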
Test results listed in Table 2 shows that this model obtains higher accuracy than the best of SIGHAN Bakeoff 2 in three corpora (AS, CityU and MSR). On the three corpora, it also outperformed the word-based perceptron model of Zhang and Clark (2007). However, the accuracy on PKU corpus is obvious lower than the best score SIGHAN 902 Training setting Test task F-measure POSSegmentation 0.971 POS+ Segmentation 0.973 POS+ Joint S&T 0.925 Table 3: F-measure on segmentation and Joint S&T of perceptrons. POS-: perceptron trained without POS, POS+: perceptron trained with POS. reported, we need to conduct further research on this problem. 6.2 Experiments on CTB5.0 We turned to experiments on CTB 5.0 to test the performance of the cascaded model. According to the usual practice in syntactic analysis, we choose chapters 1 −260 (18074 sentences) as training set, chapter 271 −300 (348 sentences) as test set and chapter 301 −325 (350 sentences) as development set. At the first step, we conducted a group of contrasting experiments on the core perceptron, the first concentrated on the segmentation regardless of the POS information and reported the F-measure on segmentation only, while the second performed Joint S&T using POS information and reported the F-measure both on segmentation and on Joint S&T. Note that the accuracy of Joint S&T means that a word-POS pair is recognized only if both the boundary tags and the POS’s are correctly labelled. The evaluation results are shown in Table 3. We find that Joint S&T can also improve the segmentation accuracy. However, the F-measure on Joint S&T is obvious lower, about a rate of 95% to the F-measure on segmentation. Similar trend appeared in experiments of Ng and Low (2004), where they conducted experiments on CTB 3.0 and achieved Fmeasure 0.919 on Joint S&T, a ratio of 96% to the F-measure 0.952 on segmentation. As the next step, a group of experiments were conducted to investigate how well the cascaded linear model performs. Here the core perceptron was just the POS+ model in experiments above. Besides this perceptron, other sub-models are trained and used as additional features of the outside-layer linear model. We used SRI Language Modelling Toolkit (Stolcke and Andreas, 2002) to train a 3gram word LM with modified Kneser-Ney smoothing (Chen and Goodman, 1998), and a 4-gram POS Features Segmentation F1 Joint S&T F1 All 0.9785 0.9341 All - PER 0.9049 0.8432 All - WLM 0.9785 0.9340 All - PLM 0.9752 0.9270 All - GPR 0.9774 0.9329 All - LPR 0.9765 0.9321 All - LEN 0.9772 0.9325 Table 4: Contribution of each feture. ALL: all features, PER: perceptron model, WLM: word language model, PLM: POS language model, GPR: generating model, LPR: labelling model, LEN: word count penalty. LM with Witten-Bell smoothing, and we trained a word-POS co-occurrence model simply by MLE without smoothing. To obtain their corresponding weights, we adapted the minimum-error-rate training algorithm (Och, 2003) to train the outside-layer model. In order to inspect how much improvement each feature brings into the cascaded model, every time we removed a feature while retaining others, then retrained the model and tested its performance on the test set. Table 4 shows experiments results. We find that the cascaded model achieves a F-measure increment of about 0.5 points on segmentation and about 0.9 points on Joint S&T, over the perceptron-only model POS+. We also find that the perceptron model functions as the kernel of the outside-layer linear model. 
Without the perceptron, the cascaded model (if we can still call it “cascaded”) performs poorly on both segmentation and Joint S&T. Among other features, the 4-gram POS LM plays the most important role, removing this feature causes F-measure decrement of 0.33 points on segmentation and 0.71 points on Joint S&T. Another important feature is the labelling model. Without it, the F-measure on segmentation and Joint S&T both suffer a decrement of 0.2 points. The generating model, which functions as that in HMM, brings an improvement of about 0.1 points to each test item. However unlike the three features, the word LM brings very tiny improvement. We suppose that the character-based features used in the perceptron play a similar role as the lowerorder word LM, and it would be helpful if we train a higher-order word LM on a larger scale corpus. Finally, the word count penalty gives improvement to the cascaded model, 0.13 points on segmentation 903 and 0.16 points on Joint S&T. In summary, the cascaded model can utilize these knowledge sources effectively, without causing the feature space of the percptron becoming even larger. Experimental results show that, it achieves obvious improvement over the perceptron-only model, about from 0.973 to 0.978 on segmentation, and from 0.925 to 0.934 on Joint S&T, with error reductions of 18.5% and 12% respectively. 7 Conclusions We proposed a cascaded linear model for Chinese Joint S&T. Under this model, many knowledge sources that may be intractable to be incorporated into the perceptron directly, can be utilized effectively in the outside-layer linear model. This is a substitute method to use both local and non-local features, and it would be especially useful when the training corpus is very large. However, can the perceptron incorporate all the knowledge used in the outside-layer linear model? If this cascaded linear model were chosen, could more accurate generative models (LMs, word-POS co-occurrence model) be obtained by training on large scale corpus even if the corpus is not correctly labelled entirely, or by self-training on raw corpus in a similar approach to that of McClosky (2006)? In addition, all knowledge sources we used in the core perceptron and the outside-layer linear model come from the training corpus, whereas many open knowledge sources (lexicon etc.) can be used to improve performance (Ng and Low, 2004). How can we utilize these knowledge sources effectively? We will investigate these problems in the following work. Acknowledgement This work was done while L. H. was visiting CAS/ICT. The authors were supported by National Natural Science Foundation of China, Contracts 60736014 and 60573188, and 863 State Key Project No. 2006AA010108 (W. J., Q. L., and Y. L.), and by NSF ITR EIA-0205456 (L. H.). We would also like to Hwee-Tou Ng for sharing his code, and Yang Liu and Yun Huang for suggestions. References Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8, Philadelphia, USA. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th ICML, pages 282–289, Massachusetts, USA. 
David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of ACL 2006. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-ofspeech tagging: One-at-a-time or all-at-once? wordbased or character-based? In Proceedings of EMNLP. Franz Joseph Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417–449. Franz Joseph Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL 2003, pages 160–167. Lawrence. R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. In Proceedings of IEEE, pages 257–286. Ratnaparkhi and Adwait. 1996. A maximum entropy part-of-speech tagger. In Proceedings of the Empirical Methods in Natural Language Processing Conference. Stolcke and Andreas. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 311–318. Nianwen Xue and Libin Shen. 2003. Chinese word segmentation as lmr tagging. In Proceedings of SIGHAN Workshop. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of ACL 2007. 904
Proceedings of ACL-08: HLT, pages 905–913, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Joint Processing and Discriminative Training for Letter-to-Phoneme Conversion Sittichai Jiampojamarn† Colin Cherry‡ Grzegorz Kondrak† †Department of Computing Science ‡Microsoft Research University of Alberta One Microsoft Way Edmonton, AB, T6G 2E8, Canada Redmond, WA, 98052 {sj,kondrak}@cs.ualberta.ca [email protected] Abstract We present a discriminative structureprediction model for the letter-to-phoneme task, a crucial step in text-to-speech processing. Our method encompasses three tasks that have been previously handled separately: input segmentation, phoneme prediction, and sequence modeling. The key idea is online discriminative training, which updates parameters according to a comparison of the current system output to the desired output, allowing us to train all of our components together. By folding the three steps of a pipeline approach into a unified dynamic programming framework, we are able to achieve substantial performance gains. Our results surpass the current state-of-the-art on six publicly available data sets representing four different languages. 1 Introduction Letter-to-phoneme (L2P) conversion is the task of predicting the pronunciation of a word, represented as a sequence of phonemes, from its orthographic form, represented as a sequence of letters. The L2P task plays a crucial role in speech synthesis systems (Schroeter et al., 2002), and is an important part of other applications, including spelling correction (Toutanova and Moore, 2001) and speech-to-speech machine translation (Engelbrecht and Schultz, 2005). Converting a word into its phoneme representation is not a trivial task. Dictionary-based approaches cannot achieve this goal reliably, due to unseen words and proper names. Furthermore, the construction of even a modestly-sized pronunciation dictionary requires substantial human effort for each new language. Effective rule-based approaches can be designed for some languages such as Spanish. However, Kominek and Black (2006) show that in languages with a less transparent relationship between spelling and pronunciation, such as English, Dutch, or German, the number of letter-to-sound rules grows almost linearly with the lexicon size. Therefore, most recent work in this area has focused on machine-learning approaches. In this paper, we present a joint framework for letter-to-phoneme conversion, powered by online discriminative training. By updating our model parameters online, considering only the current system output and its feature representation, we are able to not only incorporate overlapping features, but also to use the same learning framework with increasingly complex search techniques. We investigate two online updates: averaged perceptron and Margin Infused Relaxed Algorithm (MIRA). We evaluate our system on L2P data sets covering English, French, Dutch and German. In all cases, our system outperforms the current state of the art, reducing the best observed error rate by as much as 46%. 2 Previous work Letter-to-phoneme conversion is a complex task, for which a number of diverse solutions have been proposed. It is a structure prediction task; both the input and output are structured, consisting of sequences of letters and phonemes, respectively. This makes L2P a poor fit for many machine-learning techniques that are formulated for binary classification. 
905 The L2P task is also characterized by the existence of a hidden structure connecting input to output. The training data consists of letter strings paired with phoneme strings, without explicit links connecting individual letters to phonemes. The subtask of inserting these links, called letter-to-phoneme alignment, is not always straightforward. For example, consider the word “phoenix” and its corresponding phoneme sequence [f i n I k s], where we encounter cases of two letters generating a single phoneme (ph→f), and a single letter generating two phonemes (x→k s). Fortunately, alignments between letters and phonemes can be discovered reliably with unsupervised generative models. Originally, L2P systems assumed one-to-one alignment (Black et al., 1998; Damper et al., 2005), but recently many-to-many alignment has been shown to perform better (Bisani and Ney, 2002; Jiampojamarn et al., 2007). Given such an alignment, L2P can be viewed either as a sequence of classification problems, or as a sequence modeling problem. In the classification approach, each phoneme is predicted independently using a multi-class classifier such as decision trees (Daelemans and Bosch, 1997; Black et al., 1998) or instance-based learning (Bosch and Daelemans, 1998). These systems predict a phoneme for each input letter, using the letter and its context as features. They leverage the structure of the input but ignore any structure in the output. L2P can also be viewed as a sequence modeling, or tagging problem. These approaches model the structure of the output, allowing previously predicted phonemes to inform future decisions. The supervised Hidden Markov Model (HMM) applied by Taylor (2005) achieved poor results, mostly because its maximum-likelihood emission probabilities cannot be informed by the emitted letter’s context. Other approaches, such as those of Bisani and Ney (2002) and Marchand and Damper (2000), have shown that better performance can be achieved by pairing letter substrings with phoneme substrings, allowing context to be captured implicitly by these groupings. Recently, two hybrid methods have attempted to capture the flexible context handling of classification-based methods, while also modeling the sequential nature of the output. The constraint satisfaction inference (CSInf) approach (Bosch and Canisius, 2006) improves the performance of instance-based classification (Bosch and Daelemans, 1998) by predicting for each letter a trigram of phonemes consisting of the previous, current and next phonemes in the sequence. The final output sequence is the sequence of predicted phonemes that satisfies the most unigram, bigram and trigram agreement constraints. The second hybrid approach (Jiampojamarn et al., 2007) also extends instance-based classification. It employs a many-to-many letter-to-phoneme alignment model, allowing substrings of letters to be classified into substrings of phonemes, and introducing an input segmentation step before prediction begins. The method accounts for sequence information with post-processing: the numerical scores of possible outputs from an instance-based phoneme predictor are combined with phoneme transition probabilities in order to identify the most likely phoneme sequence. 3 A joint approach By observing the strengths and weaknesses of previous approaches, we can create the following prioritized desiderata for any L2P system: 1. The phoneme predicted for a letter should be informed by the letter’s context in the input word. 2. 
In addition to single letters, letter substrings should also be able to generate phonemes. 3. Phoneme sequence information should be included in the model. Each of the previous approaches focuses on one or more of these items. Classification-based approaches such as the decision tree system (Black et al., 1998) and instance-based learning system (Bosch and Daelemans, 1998) take into account the letter’s context (#1). By pairing letter substrings with phoneme substrings, the joint n-gram approach (Bisani and Ney, 2002) accounts for all three desiderata, but each operation is informed only by a limited amount of left context. The manyto-many classifier of Jiampojamarn et al. (2007) also attempts to account for all three, but it adheres 906                                                                                                        Figure 1: Collapsing the pipeline. strictly to the pipeline approach illustrated in Figure 1a. It applies in succession three separately trained modules for input segmentation, phoneme prediction, and sequence modeling. Similarly, the CSInf approach modifies independent phoneme predictions (#1) in order to assemble them into a cohesive sequence (#3) in post-processing. The pipeline approaches are undesirable for two reasons. First, when decisions are made in sequence, errors made early in the sequence can propagate forward and throw off later processing. Second, each module is trained independently, and the training methods are not aware of the tasks performed later in the pipeline. For example, optimal parameters for a phoneme prediction module may vary depending on whether or not the module will be used in conjunction with a phoneme sequence model. We propose a joint approach to L2P conversion, grounded in dynamic programming and online discriminative training. We view L2P as a tagging task that can be performed with a discriminative learning method, such as the Perceptron HMM (Collins, 2002). The Perceptron HMM naturally handles phoneme prediction (#1) and sequence modeling (#3) simultaneously, as shown in Figure 1b. Furthermore, unlike a generative HMM, it can incorporate many overlapping source n-gram features to represent context. In order to complete the conversion from a pipeline approach to a joint approach, we fold our input segmentation step into the exact search framework by replacing a separate segmentation module (#2) with a monotone phrasal decoder (Zens and Ney, 2004). At this point all three of our desiderata are incorporated into a single module, Algorithm 1 Online discriminative training. 1: α = ⃗0 2: for K iterations over training set do 3: for all letter-phoneme sequence pairs (x, y) in the training set do 4: ˆy = arg maxy′∈Y [α · Φ(x, y′)] 5: update weights α according to ˆy and y 6: end for 7: end for 8: return α as shown in Figure 1c. Our joint approach to L2P lends itself to several refinements. We address an underfitting problem of the perceptron by replacing it with a more robust Margin Infused Relaxed Algorithm (MIRA), which adds an explicit notion of margin and takes into account the system’s current n-best outputs. In addition, with all of our features collected under a unified framework, we are free to conjoin context features with sequence features to create a powerful linearchain model (Sutton and McCallum, 2006). 4 Online discriminative training In this section, we describe our entire L2P system. An outline of our discriminative training process is presented in Algorithm 1. 
An online process repeatedly finds the best output(s) given the current weights, and then updates those weights to make the model favor the correct answer over the incorrect ones. The system consists of the following three main components, which we describe in detail in Sections 4.1, 4.2 and 4.3, respectively. 1. A scoring model, represented by a weighted linear combination of features (α · Φ(x, y)). 2. A search for the highest scoring phoneme sequence for a given input word (Step 4). 3. An online update equation to move the model away from incorrect outputs and toward the correct output (Step 5). 4.1 Model Given an input word x and an output phoneme sequence y, we define Φ(x, y) to be a feature vector 907 representing the evidence for the sequence y found in x, and α to be a feature weight vector providing a weight for each component of Φ(x, y). We assume that both the input and output consist of m substrings, such that xi generates yi, 0 ≤i < m. At training time, these substrings are taken from a many-to-many letter-to-phoneme alignment. At test time, input segmentation is handled by either a segmentation module or a phrasal decoder. Table 1 shows our feature template that we include in Φ(x, y). We use only indicator features; each feature takes on a binary value indicating whether or not it is present in the current (x, y) pair. The context features express letter evidence found in the input string x, centered around the generator xi of each yi. The parameter c establishes the size of the context window. Note that we consider not only letter unigrams but all n-grams that fit within the window, which enables the model to assign phoneme preferences to contexts containing specific sequences, such as ing and tion. The transition features are HMM-like sequence features, which enforce cohesion on the output side. We include only first-order transition features, which look back to the previous phoneme substring generated by the system, because our early development experiments indicated that larger histories had little impact on performance; however, the number of previous substrings that are taken into account could be extended at a polynomial cost. Finally, the linearchain features (Sutton and McCallum, 2006) associate the phoneme transitions between yi−1 and yi with each n-gram surrounding xi. This combination of sequence and context data provides the model with an additional degree of control. 4.2 Search Given the current feature weight vector α, we are interested in finding the highest-scoring phoneme sequence ˆy in the set Y of all possible phoneme sequences. In the pipeline approach (Figure 1b), the input word is segmented into letter substrings by an instance-based classifier (Aha et al., 1991), which learns a letter segmentation model from many-tomany alignments (Jiampojamarn et al., 2007). The search for the best output sequence is then effectively a substring tagging problem, and we can compute the arg max operation in line 4 of Algorithm 1 context xi−c, yi .. . xi+c, yi xi−cxi−c+1, yi .. . xi+c−1xi+c, yi .. . . . . xi−c . . . xi+c, yi transition yi−1, yi linear xi−c, yi−1, yi chain .. . xi+c, yi−1, yi xi−cxi−c+1, yi−1, yi .. . xi+c−1xi+c, yi−1, yi .. . . . . xi−c . . . xi+c, yi−1, yi Table 1: Feature template. with the standard HMM Viterbi search algorithm. In the joint approach (Figure 1c), we perform segmentation and L2P prediction simultaneously by applying the monotone search algorithm developed for statistical machine translation (Zens and Ney, 2004). 
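Before turning to the details of that search, the sketch below illustrates how the indicator features of Table 1 (context, transition and linear-chain) might be instantiated for the i-th substring pair. The feature-name encoding is an illustrative assumption rather than the scheme used in the actual system.

def features_for_position(x_subs, y_phons, i, c=5):
    """Sketch of the indicator features of Table 1 for the i-th substring pair.

    x_subs  : letter substrings x_0..x_{m-1} (from the many-to-many alignment at
              training time, or from the current segmentation at test time)
    y_phons : corresponding phoneme substrings y_0..y_{m-1}
    c       : context window size
    Returns a set of binary feature names; the naming scheme is illustrative.
    """
    feats = set()
    y_i = y_phons[i]
    y_prev = y_phons[i - 1] if i > 0 else "<s>"
    window = []
    for d in range(-c, c + 1):                     # letter context around the generator x_i
        j = i + d
        window.append((d, x_subs[j] if 0 <= j < len(x_subs) else "#"))
    for a in range(len(window)):                   # all n-grams that fit within the window
        for b in range(a, len(window)):
            ngram = "_".join(s for _, s in window[a:b + 1])
            pos = "%d:%d" % (window[a][0], window[b][0])
            feats.add("ctx|%s|%s|%s" % (pos, ngram, y_i))              # context feature
            feats.add("chain|%s|%s|%s>%s" % (pos, ngram, y_prev, y_i)) # linear-chain feature
    feats.add("trans|%s>%s" % (y_prev, y_i))       # first-order transition feature
    return feats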
Thanks to its ability to translate phrases (in our case, letter substrings), we can accomplish the arg max operation without specifying an input segmentation in advance; the search enumerates all possible segmentations. Furthermore, the language model functionality of the decoder allows us to keep benefiting from the transition and linear-chain features, which are explicit in the previous HMM approach. The search can be efficiently performed by the dynamic programming recurrence shown below. We define Q(j, p) as the maximum score of the phoneme sequence ending with the phoneme p generated by the letter sequence x1 . . . xj. Since we are no longer provided an input segmentation in advance, in this framework we view x as a sequence of J letters, as opposed to substrings. The phoneme p′ is the phoneme produced in the previous step. The expression φ(xj j′+1, p′, p) is a convenient way to express the subvector of our complete feature vector Φ(x, y) that describes the substring pair (xi, yi i−1), where xi = xj j′+1, yi−1 = p′ and yi = p. The value N limits the size of the dynamically created 908 substrings. We use N = 2, which reflects a similar limit in our many-to-many aligner. The special symbol $ represents a starting phoneme or ending phoneme. The value in Q(I + 1, $) is the score of highest scoring phoneme sequence corresponding to the input word. The actual sequence can be retrieved by backtracking through the table Q. Q(0, $) = 0 Q(j, p) = max p′,p, j−N≤j′<j {α · φ(xj j′+1, p′, p) + Q(j′, p′)} Q(J + 1, $) = max p′ {α · φ($, p′, $) + Q(J, p′)} 4.3 Online update We investigate two model updates to drive our online discriminative learning. The simple perceptron update requires only the system’s current output, while MIRA allows us to take advantage of the system’s current n-best outputs. Perceptron Learning a discriminative structure prediction model with a perceptron update was first proposed by Collins (2002). The perceptron update process is relatively simple, involving only vector addition. In line 5 of Algorithm 1, the weight vector α is updated according to the best output ˆy under the current weights and the true output y in the training data. If ˆy = y, there is no update to the weights; otherwise, the weights are updated as follows: α = α + Φ(x, y) −Φ(x, ˆy) (1) We iterate through the training data until the system performance drops on a held-out set. In a separable case, the perceptron will find an α such that: ∀ˆy ∈Y −{y} : α · Φ(x, y) > α · Φ(x, ˆy) (2) Since real-world data is not often separable, the average of all α values seen throughout training is used in place of the final α, as the average generalizes better to unseen data. MIRA In the perceptron training algorithm, no update is derived from a particular training example so long as the system is predicting the correct phoneme sequence. The perceptron has no notion of margin: a slim preference for the correct sequence is just as good as a clear preference. During development, we observed that this lead to underfitting the training examples; useful and consistent evidence was ignored because of the presence of stronger evidence in the same example. The MIRA update provides a principled method to resolve this problem. The Margin Infused Relaxed Algorithm or MIRA (Crammer and Singer, 2003) updates the model based on the system’s n-best output. It employs a margin update which can induce an update even when the 1-best answer is correct. 
It does so by finding a weight vector that separates incorrect sequences in the n-best list from the correct sequence by a variable width margin. The update process finds the smallest change in the current weights so that the new weights will separate the correct answer from each incorrect answer by a margin determined by a structured loss function. The loss function describes the distance between an incorrect prediction and the correct one; that is, it quantifies just how wrong the proposed sequence is. This update process can be described as an optimization problem: minαn ∥αn −αo ∥ subject to ∀ˆy ∈Yn : αn · (Φ(x, y) −Φ(x, ˆy)) ≥ℓ(y, ˆy) (3) where Yn is a set of n-best outputs found under the current model, y is the correct answer, αo is the current weight vector, αn is the new weight vector, and ℓ(y, ˆy) is the loss function. Since our direct objective is to produce the correct phoneme sequence for a given word, the most intuitive way to define the loss function ℓ(y, ˆy) is binary: 0 if ˆy = y, and 1 otherwise. We refer to this as 0-1 loss. Another possibility is to base the loss function on the phoneme error rate, calculated as the Levenshtein distance between y and ˆy. We can also compute a combined loss function as an equally-weighted linear combination of the 0-1 and phoneme loss functions. MIRA training is similar to averaged perceptron training, but instead of finding the single best answer, we find the n-best answers (Yn) and update weights according to Equation 3. To find the n-best answers, we modify the HMM and monotone search algorithms to keep track of the n-best phonemes at 909 0.0 10.0 20.0 30.0 40.0 50.0 60.0 70.0 80.0 90.0 0 1 2 3 4 5 6 7 8 Context size Word accuracy (%) Figure 2: Perceptron update with different context size. each cell of the dynamic programming matrix. The optimization in Equation 3 is a standard quadratic programming problem that can be solved by using Hildreth’s algorithm (Censor and Zenios, 1997). The details of our implementation of MIRA within the SVMlight framework (Joachims, 1999) are given in the Appendix A. Like the perceptron algorithm, MIRA returns the average of all weight vectors produced during learning. 5 Evaluation We evaluated our approach on English, German and Dutch CELEX (Baayen et al., 1996), French Brulex, English Nettalk and English CMUDict data sets. Except for English CELEX, we used the data sets from the PRONALSYL letter-to-phoneme conversion challenge1. Each data set is divided into 10 folds: we used the first one for testing, and the rest for training. In all cases, we hold out 5% of our training data to determine when to stop perceptron or MIRA training. We ignored one-to-one alignments included in the PRONALSYL data sets, and instead induced many-to-many alignments using the method of Jiampojamarn et al. (2007). Our English CELEX data set was extracted directly from the CELEX database. After removing duplicate words, phrases, and abbreviations, the data set contained 66,189 word-phoneme pairs, of which 10% was designated as the final test set, and the rest as the training set. We performed our development experiments on the latter part, and then used the final 1Available at http://www.pascal-network.org/ Challenges/PRONALSYL/. The results have not been announced. 83.0 84.0 85.0 86.0 87.0 88.0 89.0 0 10 20 30 40 50 n-best list size Word accuracy (%) Figure 3: MIRA update with different size of n-best list. test set to compare the performance of our system to other results reported in the literature. 
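Before reporting results, the following simplified sketch conveys the margin-based update of Section 4.3 together with a Levenshtein-based phoneme loss. It applies the closed-form single-constraint update, separating only one incorrect output from the truth, whereas the system described above solves the full n-best quadratic program of Equation 3 with Hildreth's algorithm inside the SVMlight framework; the sketch is meant only to make the idea concrete, not to reproduce that implementation.

def levenshtein(a, b):
    """Edit distance between two phoneme sequences; usable as the loss l(y, y_hat)."""
    prev = list(range(len(b) + 1))
    for i, ai in enumerate(a, 1):
        cur = [i]
        for j, bj in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ai != bj)))
        prev = cur
    return prev[-1]

def mira_like_update(alpha, phi_gold, phi_pred, loss):
    """Single-constraint margin update (a simplification of Equation 3).

    alpha              : weight dict, modified in place
    phi_gold, phi_pred : sparse feature dicts Phi(x, y) and Phi(x, y_hat)
    loss               : l(y, y_hat), e.g. levenshtein(y, y_hat) or 0-1 loss
    """
    diff = {f: phi_gold.get(f, 0.0) - phi_pred.get(f, 0.0)
            for f in set(phi_gold) | set(phi_pred)}
    margin = sum(alpha.get(f, 0.0) * v for f, v in diff.items())
    norm_sq = sum(v * v for v in diff.values())
    if norm_sq == 0.0:
        return
    # smallest change to alpha that separates gold from the prediction by `loss`
    tau = max(0.0, (loss - margin) / norm_sq)
    for f, v in diff.items():
        alpha[f] = alpha.get(f, 0.0) + tau * v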
We report the system performance in terms of word accuracy, which rewards only completely correct phoneme sequences. Word accuracy is more demanding than phoneme accuracy, which considers the number of correct phonemes. We feel that word accuracy is a more appropriate error metric, given the quality of current L2P systems. Phoneme accuracy is not sensitive enough to detect improvements in highly accurate L2P systems: Black et al. (1998) report 90% phoneme accuracy is equivalent to approximately 60% word accuracy, while 99% phoneme accuracy corresponds to only 90% word accuracy. 5.1 Development Experiments We began development with a zero-order Perceptron HMM with an external segmenter, which uses only the context features from Table 1. The zero-order Perceptron HMM is equivalent to training a multiclass perceptron to make independent substring-tophoneme predictions; however, this framework allows us to easily extend to structured models. We investigate the effect of augmenting this baseline system in turn with larger context sizes, the MIRA update, joint segmentation, and finally sequence features. We report the impact of each contribution on our English CELEX development set. Figure 2 shows the performance of our baseline L2P system with different context size values (c). Increasing the context size has a dramatic effect on accuracy, but the effect begins to level off for context sizes greater than 5. Henceforth, we report the 910 Perceptron MIRA Separate segmentation 84.5% 85.8% Phrasal decoding 86.6% 88.0% Table 2: Separate segmentation versus phrasal decoding in terms of word accuracy. results with context size c = 5. Figure 3 illustrates the effect of varying the size of n-best list in the MIRA update. n = 1 is equivalent to taking into account only the best answer, which does not address the underfitting problem. A large n-best list makes it difficult for the optimizer to separate the correct and incorrect answers, resulting in large updates at each step. We settle on n = 10 for the subsequent experiments. The choice of MIRA’s loss function has a minimal impact on performance, probably because our baseline system already has a very high phoneme accuracy. We employ the loss function that combines 0-1 and phoneme error rate, due to its marginal improvement over 0-1 loss on the development set. Looking across columns in Table 2, we observe over 8% reduction in word error rate when the perceptron update is replaced with the MIRA update. Since the perceptron is a considerably simpler algorithm, we continue to report the results of both variants throughout this section. Table 2 also shows the word accuracy of our system after adding the option to conduct joint segmentation through phrasal decoding. The 15% relative reduction in error rate in the second row demonstrates the utility of folding the segmentation step into the search. It also shows that the joint framework enables the system to reduce and compensate for errors that occur in a pipeline. This is particularly interesting because our separate instance-based segmenter is highly accurate, achieving 98% segmentation accuracy. Our experiments indicate that the application of joint segmentation recovers more than 60% of the available improvements, according to an upper bound determined by utilizing perfect segmentation.2 Table 3 illustrates the effect of our sequence features on both the perceptron and MIRA systems. 
2Perfect with respect to our many-to-many alignment (Jiampojamarn et al., 2007), but not necessarily in any linguistic sense. Feature Perceptron MIRA zero order 86.6% 88.0% + 1st order HMM 87.1% 88.3% + linear-chain 87.5% 89.3% All features 87.8% 89.4% Table 3: The effect of sequence features on the joint system in terms of word accuracy. Replacing the zero-order HMM with the first-order HMM makes little difference by itself, but combined with the more powerful linear-chain features, it results in a relative error reduction of about 12%. In general, the linear-chain features make a much larger difference than the relatively simple transition features, which underscores the importance of using source-side context when assessing sequences of phonemes. The results reported in Tables 2 and 3 were calculated using cross validation on the training part of the CELEX data set. With the exception of adding the 1st order HMM, the differences between versions are statistically significant according to McNemar’s test at 95% confidence level. On one CPU of AMD Opteron 2.2GHz with 6GB of installed memory, it takes approximately 32 hours to train the MIRA model with all features, compared to 12 hours for the zero-order model. 5.2 System Comparison Table 4 shows the comparison between our approach and other systems on the evaluation data sets. We trained our system using n-gram context, transition, and linear-chain features. All parameters, including the size of n-best list, size of letter context, and the choice of loss functions, were established on the English CELEX development set, as presented in our previous experiments. With the exception of the system described in (Jiampojamarn et al., 2007), which we re-ran on our current test sets, the results of other systems are taken from the original papers. Although these comparisons are necessarily indirect due to different experimental settings, they strongly suggest that our system outperforms all previous published results on all data sets, in some case by large margins. When compared to the current stateof-the-art performance of each data set, the relative reductions in error rate range from 7% to 46%. 911 Corpus MIRA Perceptron M-M HMM Joint n-gram∗ CSInf∗ PbA∗ CART∗ Eng. CELEX 90.51% 88.44% 84.81% 76.3% 84.5% Dutch CELEX 95.32% 95.13% 91.69% 94.5% German CELEX 93.61% 92.84% 90.31% 92.5% 89.38% Nettalk 67.82% 64.87% 59.32% 64.6% 65.35% CMUDict 71.99% 71.03% 65.38% 57.80% Brulex 94.51% 93.89% 89.77% 89.1% Table 4: Word accuracy on the evaluated data sets. MIRA, Perceptron: our systems. M-M HMM: Many-to-Many HMM system (Jiampojamarn et al., 2007). Joint n-gram: Joint n-gram model (Demberg et al., 2007). CSInf: Constraint satisfaction inference (Bosch and Canisius, 2006). PbA: Pronunciation by Analogy (Marchand and Damper, 2006). CART: CART decision tree system (Black et al., 1998). The columns marked with * contain results reported in the literature. “-” indicates no reported results. We have underlined the best previously reported results. 6 Conclusion We have presented a joint framework for letter-tophoneme conversion, powered by online discriminative training. We introduced two methods to convert multi-letter substrings into phonemes: one relying on a separate segmenter, and the other incorporating a unified search that finds the best input segmentation while generating the output sequence. We investigated two online update algorithms: the perceptron, which is straightforward to implement, and MIRA, which boosts performance by avoiding underfitting. 
Our systems employ source n-gram features and linear-chain features, which substantially increase L2P accuracy. Our experimental results demonstrate the power of a joint approach based on online discriminative training with large feature sets. In all cases, our MIRA-based system advances the current state of the art by reducing the best reported error rate. Appendix A. MIRA Implementation We optimize the objective shown in Equation 3 using the SVMlight framework (Joachims, 1999), which provides the quadratic program solver shown in Equation 4. minw,ξ 1 2 ∥w ∥2 +C P i ξi subject to ∀i, w · ti ≥rhsi −ξi (4) In order to approximate a hard margin using the soft-margin optimizer of SVMlight, we assign a very large penalty value to C, thus making the use of any slack variables (ξi) prohibitively expensive. We define the vector w as the difference between the new and previous weights: w = αn −αo. We constrain w to mirror the constraints in Equation 3. Since each ˆy in the n-best list (Yn) needs a constraint based on its feature difference vector, we define a ti for each: ∀ˆy ∈Yn : ti = Φ(x, y) −Φ(x, ˆy) Substituting that equation along with the inferred equation an = ao + w into our original MIRA constraints yields: (αo + w) · ti ≥ℓ(y, ˆy) Moving αo to the right-hand-side to isolate w · ti on the left, we get a set of mappings that implement MIRA in SVMlight’s optimizer: w αn −αo ti Φ(x, y) −Φ(x, ˆy) rhsi ℓ(y, ˆy) −αo · ti The output of the SVMlight optimizer is an update vector w to be added to the current αo. Acknowledgements This research was supported by the Alberta Ingenuity Fund, and the Natural Sciences and Engineering Research Council of Canada. References David W. Aha, Dennis Kibler, and Marc K. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6(1):37–66. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. The CELEX2 lexical database. LDC96L14. 912 Maximilian Bisani and Hermann Ney. 2002. Investigations on joint-multigram models for grapheme-tophoneme conversion. In Proceedings of the 7th International Conference on Spoken Language Processing, pages 105–108. Alan W. Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules. In The Third ESCA Workshop in Speech Synthesis, pages 77– 80. Antal Van Den Bosch and Sander Canisius. 2006. Improved morpho-phonological sequence processing with constraint satisfaction inference. Proceedings of the Eighth Meeting of the ACL Special Interest Group in Computational Phonology, SIGPHON ’06, pages 41–49. Antal Van Den Bosch and Walter Daelemans. 1998. Do not forget: Full memory in memory-based learning of word pronunciation. In Proceedings of NeMLaP3/CoNLL98, pages 195–204, Sydney, Australia. Yair Censor and Stavros A. Zenios. 1997. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press. Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: theory and experiments with perceptron algorithms. In EMNLP ’02: Proceedings of the ACL-02 conference on Empirical methods in natural language processing, pages 1–8, Morristown, NJ, USA. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951–991. Walter Daelemans and Antal Van Den Bosch. 1997. Language-independent data-oriented grapheme-tophoneme conversion. In Progress in Speech Synthesis, pages 77–89. New York, USA. Robert I. Damper, Yannick Marchand, John DS. Marsters, and Alexander I. Bazin. 2005. 
Aligning text and phonemes for speech technology applications using an EM-like algorithm. International Journal of Speech Technology, 8(2):147–160. Vera Demberg, Helmut Schmid, and Gregor M¨ohler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 96–103, Prague, Czech Republic. Herman Engelbrecht and Tanja Schultz. 2005. Rapid development of an afrikaans-english speech-to-speech translator. In International Workshop of Spoken Language Translation (IWSLT), Pittsburgh, PA, USA. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372–379, Rochester, New York, USA. Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. pages 169–184. MIT Press, Cambridge, MA, USA. John Kominek and Alan W Black. 2006. Learning pronunciation dictionaries: Language complexity and word selection strategies. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 232–239, New York City, USA. Yannick Marchand and Robert I. Damper. 2000. A multistrategy approach to improving pronunciation by analogy. Computational Linguistics, 26(2):195–219. Yannick Marchand and Robert I. Damper. 2006. Can syllabification improve pronunciation by analogy of English? Natural Language Engineering, 13(1):1–24. Juergen Schroeter, Alistair Conkie, Ann Syrdal, Mark Beutnagel, Matthias Jilka, Volker Strom, Yeon-Jun Kim, Hong-Goo Kang, and David Kapilow. 2002. A perspective on the next challenges for TTS research. In IEEE 2002 Workshop on Speech Synthesis. Charles Sutton and Andrew McCallum. 2006. An introduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. Paul Taylor. 2005. Hidden Markov Models for grapheme to phoneme conversion. In Proceedings of the 9th European Conference on Speech Communication and Technology. Kristina Toutanova and Robert C. Moore. 2001. Pronunciation modeling for improved spelling correction. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 144–151, Morristown, NJ, USA. Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In HLTNAACL 2004: Main Proceedings, pages 257–264, Boston, Massachusetts, USA. 913
Proceedings of ACL-08: HLT, pages 914–922, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Probabilistic Model for Fine-Grained Expert Search Shenghua Bao1, Huizhong Duan1, Qi Zhou1, Miao Xiong1, Yunbo Cao1,2, Yong Yu1 1Shanghai Jiao Tong University, 2Microsoft Research Asia Shanghai, China, 200240 Beijing, China, 100080 {shhbao,summer,jackson,xiongmiao,yyu} @apex.sjtu.edu.cn [email protected] Abstract Expert search, in which given a query a ranked list of experts instead of documents is returned, has been intensively studied recently due to its importance in facilitating the needs of both information access and knowledge discovery. Many approaches have been proposed, including metadata extraction, expert profile building, and formal model generation. However, all of them conduct expert search with a coarse-grained approach. With these, further improvements on expert search are hard to achieve. In this paper, we propose conducting expert search with a fine-grained approach. Specifically, we utilize more specific evidences existing in the documents. An evidence-oriented probabilistic model for expert search and a method for the implementation are proposed. Experimental results show that the proposed model and the implementation are highly effective. 1 Introduction Nowadays, team work plays a more important role than ever in problem solving. For instance, within an enterprise, people handle new problems usually by leveraging the knowledge of experienced colleagues. Similarly, within research communities, novices step into a new research area often by learning from well-established researchers in the research area. All these scenarios involve asking the questions like “who is an expert on X?” or “who knows about X?” Such questions, which cannot be answered easily through traditional document search, raise a new requirement of searching people with certain expertise. To meet that requirement, a new task, called expert search, has been proposed and studied intensively. For example, TREC 2005, 2006, and 2007 provide the task of expert search within the enterprise track. In the TREC setting, expert search is defined as: given a query, a ranked list of experts is returned. In this paper, we engage our study in the same setting. Many approaches to expert search have been proposed by the participants of TREC and other researchers. These approaches include metadata extraction (Cao et al., 2005), expert profile building (Craswell, 2001, Fu et al., 2007), data fusion (Maconald and Ounis, 2006), query expansion (Macdonald and Ounis, 2007), hierarchical language model (Petkova and Croft, 2006), and formal model generation (Balog et al., 2006; Fang et al., 2006). However, all of them conduct expert search with what we call a coarse-grained approach. The discovering and use of evidence for expert locating is carried out under a grain of document. With it, further improvements on expert search are hard to achieve. This is because different blocks (or segments) of electronic documents usually present different functions and qualities and thus different impacts for expert locating. In contrast, this paper is concerned with proposing a probabilistic model for fine-grained expert search. In fine-grained expert search, we are to extract and use evidence of expert search (usually blocks of documents) directly. Thus, the proposed probabilistic model incorporates evidence of expert search explicitly as a part of it. 
A piece of finegrained evidence is formally defined as a quadruple, <topic, person, relation, document>, which denotes the fact that a topic and a person, with a certain relation between them, are found in a specific document. The intuition behind the quadruple is that a query may be matched with phrases in various forms (denoted as topic here) and an expert candidate may appear with various name masks (denoted as person here), e.g., full name, email, or abbreviated names. Given a topic and person, relation type is used to measure their closeness and 914 document serves as a context indicating whether it is good evidence. Our proposed model for fine-grained expert search results in an implementation of two stages. 1) Evidence Extraction: document segments in various granularities are identified and evidences are extracted from them. For example, we can have segments in which an expert candidate and a queried topic co-occur within a same section of document-001: “…later, Berners-Lee describes a semantic web search engine experience…” As the result, we can extract an evidence by using samesection relation, i.e., <semantic web search engine, Berners-Lee, same-section, document-001>. 2) Evidence Quality Evaluation: the quality (or reliability) of evidence is evaluated. The quality of a quadruple of evidence consists of four aspects, namely topic-matching quality, person-namematching quality, relation quality, and document quality. If we regard evidence as link of expert candidate and queried topic, the four aspects will correspond to the strength of the link to query, the strength of the link to expert candidate, the type of the link, and the document context of the link respectively. All the evidences with their scores of quality are merged together to generate a single score for each expert candidate with regard to a given query. We empirically evaluate our proposed model and implementation on the W3C corpus which is used in the expert search task at TREC 2005 and 2006. Experimental results show that both explored evidences and evaluation of evidence quality can improve the expert search significantly. Compared with existing state-of-the-art expert search methods, the probabilistic model for fine-grained expert search shows promising improvement. The rest of the paper is organized as follows. Section 2 surveys existing studies on expert search. Section 3 and Section 4 present the proposed probabilistic model and its implementation, respectively. Section 5 gives the empirical evaluation. Finally, Section 6 concludes the work. 2 Related Work 2.1 Expert Search Systems One setting for automatic expert search is to assume that data from specific resources are available. For example, Expertise Recommender (Kautz et al., 1996), Expertise Browser (Mockus and Herbsleb, 2002) and the system in (McDonald and Ackerman, 1998) make use of log data in software development systems to find experts. Yet another approach is to mine expert and expertise from email communications (Campbell et al., 2003; Dom et al. 2003; Sihn and Heeren, 2001). Searching expert from general documents has also been studied (Davenport and Prusak, 1998; Mattox et al., 1999; Hertzum and Pejtersen, 2000). P@NOPTIC employs what is referred to as the ‘profile-based’ approach in searching for experts (Craswell et al., 2001). Expert/Expert-Locating (EEL) system (Steer and Lochbaum, 1988) uses the same approach in searching for expert groups. 
DEMOIR (Yimam, 1996) enhances the profilebased approach by separating co-occurrences into different types. In essence, the profile-based approach utilizes the co-occurrences between query words and people within documents. 2.2 Expert Search at TREC A task on expert search was organized within the enterprise track at TREC 2005, 2006 and 2007 (Craswell et al., 2005; Soboroff et al., 2006; Bailey et al., 2007). Many approaches have been proposed for tackling the expert search task within the TREC track. Cao et al. (2005) propose a two-stage model with a set of extracted metadata. Balog et al. (2006) compare two generative models for expert search. Fang et al. (2006) further extend their generative model by introducing the prior of expert distribution and relevance feedback. Petkova and Croft (2006) further extend the profile based method by using a hierarchical language model. Macdonald and Ounis (2006) investigate the effectiveness of the voting approach and the associated data fusion techniques. However, such models are conducted in a coarse-grain scope of document as discussed before. In contrast, our study focuses on proposing a model for conducting expert search in a finegrain scope of evidence (local context). 3 Fine-grained Expert Search Our research is to investigate a direct use of the local contexts for expert search. We call each local context of such kind as fine-grained evidence. In this work, a fine-grained evidence is formally defined as a quadruple, <topic, person, relation, 915 document>. Such a quadruple denotes that a topic and a person occurrence, with a certain relation between them, are found in a specific document. Recall that topic is different from query. For example, given a query “semantic web coordination”, the corresponding topic may be either “semantic web” or “web coordination”. Similarly, person here is different from expert candidate. E.g, given an expert candidate “Ritu Raj Tiwari”, the matched person may be “Ritu Raj Tiwari”, “Tiwari”, or “RRT” etc. Although both the topics and persons may not match the query and expert candidate exactly, they do have certain indication on the connection of query “semantic web coordination” and expert “Ritu Raj Tiwari”. 3.1 Evidence-Oriented Expert Search Model We conduct fine-grained expert search by incorporating evidence of local context explicitly in a probabilistic model which we call an evidenceoriented expert search model. Given a query q, the probability of a candidate c being an expert (or knowing something about q) is estimated as ( | ) ( , | ) ( | , ) ( | ) e e P c q P c e q P c e q P e q = = ! ! , (1) where e denotes a quadruple of evidence. Using the relaxation that the probability of c is independent of a query q given an evidence e, we can reduce Equation (1) as, ( | ) ( | ) ( | ) e P c q P c e P e q =! . (2) Compared to previous work, our model conducts expert search with a new way in which local contexts of evidence are used to bridge a query q and an expert candidate c. The new way enables the expert search system to explore various local contexts in a precise manner. In the following sub-sections, we will detail two sub-models: the expert matching model P(c|e) and the evidence matching model P(e|q). 3.2 Expert Matching Model We expand the evidence e as quadruple <topic, people, relation, document> (<t, p, r, d> for short) for expert matching. Given a set of related evidences, we assume that the generation of an expert candidate c is independent with topic t and omit it in expert matching. 
3.2 Expert Matching Model We expand the evidence e as the quadruple <topic, person, relation, document> (<t, p, r, d> for short) for expert matching. Given a set of related evidences, we assume that the generation of an expert candidate c is independent of the topic t and omit it in expert matching. Therefore, we simplify the expert matching formula as follows: P(c|e) = P(c|p,r,d) = P(c|p) · P(p|r,d), (3) where P(c|p) depends on how an expert candidate c matches a person occurrence p (e.g., the full name or email of a person). The different ways of matching an expert candidate c with a person occurrence p result in varied qualities; P(c|p) represents this quality. P(p|r,d) expresses the probability of an occurrence p given a relation r and a document d. P(p|r,d) is estimated by MLE as P(p|r,d) = freq(p,r,d) / L(r,d), (4) where freq(p,r,d) is the frequency of person p matched by relation r in document d, and L(r,d) is the frequency of all the persons matched by relation r in d. This estimation can further be smoothed by using the evidence collection as follows: P_S(p|r,d) = µ · P(p|r,d) + (1 − µ) · (1/|D|) Σ_{d'∈D} P(p|r,d'), (5) where D denotes the whole document collection and |D| is the total number of documents. We set the smoothing parameter µ using a Dirichlet prior: µ = L(r,d) / (L(r,d) + K), (6) where K is the average frequency of all the experts in the collection. 3.3 Evidence Matching Model By expanding the evidence e and employing an independence assumption, we have the following formula for evidence matching: P(e|q) = P(t,p,r,d|q) = P(t|q) P(p|q) P(r|q) P(d|q). (7) In the following, we explain what these four terms represent and how they can be estimated. The first term P(t|q) represents the probability that a query q matches a topic t in evidence. Recall that a query q may match a topic t in various ways, not necessarily being identical to t. For example, both the topic "semantic web" and the topic "semantic web search engine" can match the query "semantic web search engine". The probability is defined as P(t|q) ∝ P(type(t,q)), (8) where type(t,q) represents the way that q matches t, e.g., phrase matching. Different matching methods are associated with different probabilities. The second term P(p|q) represents the probability that a person p is generated from a query q. The probability is further approximated by the prior probability of p, P(p|q) ∝ P(p). (9) The prior probability can be estimated by MLE, i.e., the ratio of total occurrences of person p in the collection. The third term represents the probability that a relation r is generated from a query q. Here, we approximate the probability as P(r|q) ∝ P(type(r)), (10) where type(r) represents the way r connects the query and the expert, and P(type(r)) represents the reliability of the relation type of r. Following Bayes' rule, the last term can be transformed as P(d|q) = P(q|d) P(d) / P(q) ∝ P(q|d) P(d), (11) where the prior distribution P(d) can be estimated based on static rank, e.g., PageRank (Brin and Page, 1998), and P(q|d) can be estimated by using a standard language model for IR (Ponte and Croft, 1998). In summary, Equation (7) is converted to P(e|q) ∝ P(type(t,q)) · P(p) · P(type(r)) · P(q|d) · P(d). (12) 3.4 Evidence Merging We assume that the ranking score of an expert can be acquired by summing the scores of all the supporting evidences. Thus we calculate experts' scores by aggregating the scores from all evidences as in Equation (1). 4 Implementation The implementation of the proposed model consists of two stages, namely evidence extraction and evidence quality evaluation.
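Before detailing the two stages, here is a simplified reading of how the estimates of Sections 3.2 and 3.3 could be computed; this is a sketch under stated assumptions rather than the authors' code, and all helper callables (freq, L, coll_avg, topic_w, person_prior, relation_w, doc_lm, doc_prior) are hypothetical placeholders.

```python
def p_person_given_rel_doc(p, r, d, freq, L, coll_avg, K):
    """Dirichlet-smoothed P(p|r,d), following Eqs. (4)-(6).

    freq(p, r, d)  : count of person p matched by relation r in document d
    L(r, d)        : count of all persons matched by relation r in d
    coll_avg(p, r) : (1/|D|) * sum over d' of P(p|r,d'), precomputed on the collection
    K              : average expert frequency in the collection (Dirichlet prior)
    """
    l_rd = L(r, d)
    mle = freq(p, r, d) / l_rd if l_rd > 0 else 0.0
    mu = l_rd / (l_rd + K) if (l_rd + K) > 0 else 0.0   # Eq. (6)
    return mu * mle + (1.0 - mu) * coll_avg(p, r)       # Eq. (5)


def p_evidence_given_query(e, q, topic_w, person_prior, relation_w, doc_lm, doc_prior):
    """Factored evidence matching score, proportional to P(e|q) in Eq. (12)."""
    return (topic_w(e.topic, q)         # P(type(t,q)), topic-matching quality
            * person_prior(e.person)    # P(p), prior of the person occurrence
            * relation_w(e.relation)    # P(type(r)), reliability of the relation type
            * doc_lm(q, e.document)     # P(q|d), language-model relevance of the document
            * doc_prior(e.document))    # P(d), static document quality such as PageRank
```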
4.1 Evidence Extraction Recall that we define an evidence for expert search as a quadruple <topic, person, relation, document>. The evidence extraction covers the extraction of the first three elements, namely person identification, topic discovering and relation extraction. 4.1.1 Person Identification The occurrences of an expert can be in various forms, such as name and email address. We call each type of form an expert mask. Table 1 provides a statistic on various masks on the basis of W3C corpus. In Table 1, rate is the proportion of the person occurrences with relevant masks to the person occurrences with any of the masks, and ambiguity is defined as the probability that a mask is shared by more than one expert. Mask Rate/Ambiguity Sample Full Name(NF) 48.2% / 0.0000 Ritu Raj Tiwari Email Name(NE) 20.1% / 0.0000 [email protected] Combined Name (NC) 4.2% /0.3992 Tiwari, Ritu R; R R Tiwari Abbr. Name(NA) 21.2% / 0.4890 Ritu Raj ; Ritu Short Name(NS) 0.7% / 0.6396 RRT Alias, new email (NAE) 7% / 0.4600 Ritiwari [email protected] Table 1. Various masks and their ambiguity 1) Every occurrence of a candidate’s email address is normalized to the appropriate candidate_id. 2) Every occurrence of a candidate’s full_name is normalized to the appropriate candidate_id if there is no ambiguity; otherwise, the occurrence is normalized to the candidate_id of the most frequent candidate with that full_name. 3) Every occurrence of combined name, abbreviated name, and email alias is normalized to the appropriate candidate_id if there is no ambiguity; otherwise, the occurrence may be normalized to the candidate_id of a candidate whose full name also appears in the document. 4) All the personal occurrences other than those covered by Heuristic 1) ~ 3) are ignored. Table 2. Heuristic rules for expert extraction As Table 1 demonstrates, it is not an easy task to identify all the masks with regards to an expert. On one hand, the extraction of full name and email address is straightforward but suffers from low coverage. On the other hand, the extraction of 917 combined name and abbreviated name can complement the coverage, while needs handling of ambiguity. Table 2 provides the heuristic rules that we use for expert identification. In the step 2) and 3), the rules use frequency and context discourse for resolving ambiguities respectively. With frequency, each expert candidate actually is assigned a prior probability. With context discourse, we utilize the intuition that person names appearing similar in a document usually refers to the same person. 4.1.2 Topic Discovering A queried topic can occur within documents in various forms, too. We use a set of query processing techniques to handle the issue. After the processing, a set of topics transformed from an original query will be obtained and then be used in the search for experts. Table 3 shows five forms of topic discovering from a given query. Forms Description Sample Phrase Match(QP) The exact match with original query given by users “semantic web search engine” Bi-gram Match(QB) A set of matches formed by extracting bi-gram of words in the original query “semantic web” “search engine” Proximity Match(QPR) Each query term appears as a neighborhood within a window of specified size “semantic web enhanced search engine” Fuzzy Match(QF) A set of matches, each of which resembles the original query in appearance. “sementic web seerch engine” Stemmed Match(QS) A match formed by stemming the original query. “sementic web seerch engin” Table 3. 
Discovered topics from the query "semantic web search engine". 4.1.3 Relation Extraction We focus on extracting relations between topics and expert candidates within a span of a document. To make the extraction easier, we partition a document into a pre-defined layout. Figure 1 provides a template in Backus–Naur form. Figure 2 provides a practical use of the template. Note that we are not restricting the use of the template to a certain corpus. In fact, the template can be applied to many kinds of documents. For example, for web pages, we can construct the <Title> from either the 'title' metadata or the content of web pages (Hu et al., 2006). As for e-mail, we can use the 'subject' field as the <Title>. Figure 1. A template of document layout (Backus–Naur form; the template itself is not reproduced here). Figure 2. An example use of the layout template: an RDF Primer page, edited by Frank Manola and Eric Miller, segmented into <Title>, <Author>, <Body>, <Section Title>, and <Section Body> blocks. With the layout of partitioned documents, we can then explore many types of relations among different blocks. In this paper, we demonstrate the use of five types of relations by extending the study in (Cao et al., 2005). Section Relation (RS): The queried topic and the expert candidate occur in the same <Section>. Windowed Section Relation (RWS): The queried topic and the expert candidate occur within a fixed window of a <Section>. In our experiment, we used a window of 200 words. Reference Section Relation (RRS): Some <Section>s should be treated specially. For example, a <Section> consisting of reference information, like a list of <book, author> pairs, can serve as a reliable source connecting a topic and an expert candidate. We call a relation appearing in such a special type of <Section> a special reference section relation. It might be argued whether the use of special sections can be generalized. According to our survey, such special <Section>s can be found on various sites, such as Wikipedia as well as W3C. Title-Author Relation (RTA): The queried topic appears in the <Title> and the expert candidate appears in the <Author>. Section Title-Body Relation (RSTB): The queried topic and the expert candidate appear in the <Section Title> and <Section Body> of the same <Section>, respectively. Conversely, the queried topic and the expert candidate can appear in the <Section Body> and <Section Title> of a <Section>. The latter case is used to characterize documents introducing a certain expert, or an expert introducing a certain document. Note that our model is not restricted to these five relations. We use them only to demonstrate the flexibility and effectiveness of fine-grained expert search.
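As an illustration of how such relations could be harvested from a partitioned document, the following simplified sketch emits evidence quadruples for three of the five relation types; the LayoutDoc representation, the find_persons callable, and the lowercase substring matching are our own simplifying assumptions, not the paper's actual extraction code.

```python
from dataclasses import dataclass, field


@dataclass
class Section:
    title: str
    body: str


@dataclass
class LayoutDoc:
    doc_id: str
    title: str
    authors: list                       # person occurrences found in the <Author> block
    sections: list = field(default_factory=list)


def extract_evidence(doc, topics, find_persons):
    """Emit <topic, person, relation, document> quadruples for one partitioned document.

    topics       : discovered (lowercased) topic strings for the current query (Section 4.1.2)
    find_persons : callable returning normalized person occurrences in a text span
    Only RTA, RS, and RSTB are sketched; the windowed and reference-section
    relations would follow the same pattern with additional distance/section checks.
    """
    quads = []
    title = doc.title.lower()
    for t in topics:
        if t in title:
            # RTA: topic in <Title>, candidate in <Author>
            quads += [(t, p, "title-author", doc.doc_id) for p in doc.authors]
        for sec in doc.sections:
            persons = find_persons(sec.body)
            if t in sec.body.lower():
                # RS: topic and person co-occur in the same <Section>
                quads += [(t, p, "same-section", doc.doc_id) for p in persons]
            if t in sec.title.lower():
                # RSTB: topic in <Section Title>, person in <Section Body>
                quads += [(t, p, "section-title-body", doc.doc_id) for p in persons]
    return quads
```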
4.2 Evidence Quality Evaluation In this section, we elaborate the mechanism used for evaluating the quality of evidence. 4.2.1 Topic-Matching Quality In Section 4.1.2, we use five techniques for processing query matches, which yield five sets of match types for a given query. Obviously, the different query matches should be associated with different weights because they represent different qualities. We further note that different bi-grams generated from the same query with the bi-gram matching method may also present different qualities. For example, both "css test" and "test suite" are bi-gram matches for the query "css test suite"; however, the former might be more informative. To model this, we use the number of returned documents to refine the query weight. The intuition is similar to that of IDF, popularly used in IR, as we prefer distinctive bi-grams. Taking the above two factors into consideration, we calculate the topic-matching quality Q_t (corresponding to P(type(t,q)) in Equation (12)) for the given query q as Q_t = W(type(t,q)) · min_{t'}(df_{t'}) / df_t, (13) where t is a topic discovered from a document and type(t,q) is the matching type between topic t and query q, W(type(t,q)) is the weight for a certain match type, and df_t is the number of returned documents matched by topic t. In our experiment, we use the 10 training topics of TREC 2005 as our training data, and the best quality scores for phrase match, bi-gram match, proximity match, fuzzy match, and stemmed match are 1, 0.01, 0.05, 10^{-8}, and 10^{-4}, respectively. 4.2.2 Person-Matching Quality An expert candidate can occur in the documents in various ways. The most confident occurrences are those in full name or email address. Others can include the last name only, the last name plus the initial of the first name, etc. Thus, the action of rejecting or accepting a person from his/her mask (the surface expression of a person in the text) is not simply a Boolean decision, but a probabilistic one with a reliability weight Q_p (corresponding to P(c|p) in Equation (3)). Similarly, the best trained weights for full name, email name, combined name, abbreviated name, short name, and alias email are set to 1, 1, 0.8, 0.2, 0.2, and 0.1, respectively. 4.2.3 Relation Type Quality The relation quality consists of two factors. One factor is the type of the relation. Different types of relations indicate different strengths of the connection between expert candidates and queried topics. In our system, the section title-body relation is given the highest confidence. The other factor is the degree of proximity between a query and an expert candidate. The intuition is that the more distant a query and an expert candidate are within a relation, the looser the connection between them. To include these two factors, the quality score Q_r (corresponding to P(type(r)) in Equation (12)) of a relation r is defined as Q_r = W_r · C_r / (dis(p,t) + 1), (14) where W_r is the weight of relation type r, dis(p,t) is the distance from the person occurrence p to the queried topic t, and C_r is a constant for normalization. Again, we optimize W_r based on the training topics; the best weights for the section relation, windowed section relation, reference section relation, title-author relation, and section title-body relation are 1, 4, 10, 45, and 1000, respectively. 4.2.4 Document Quality The quality of evidence also depends on the quality of the document, the context in which it is found. The document context can affect the credibility of the evidence in two ways: Static quality: indicating the authority of a document. In our experiment, the static quality Q_d (corresponding to P(d) in Equation (12)) is estimated by PageRank, which is calculated using a standard iterative algorithm with a damping factor of 0.85 (Brin and Page, 1998). Dynamic quality: by "dynamic", we mean that the quality score varies for different queries q.
We denote the dynamic quality as QDY(d,q) (corresponding to P(q|d) in Equation (12) ), which is actually the document relevance score returned by a standard language model for IR(Ponte and Croft, 1998). 5 Experimental Results 5.1 The Evaluation Data In our experiment, we used the data set in the expert search task of enterprise search track at TREC 2005 and 2006. The document collection is a crawl of the public W3C sites in June 2004. The crawl comprises in total 331,307 web pages. In the following experiments, we used the training set of 10 topics of TREC 2005 for tuning the parameters aforementioned in Section 4.2, and used the test set of 50 topics of TREC 2005 and 49 topics of TREC 2006 as the evaluation data sets. 5.2 Evaluation Metrics We used three measures in evaluation: Mean average precision (MAP), R-precision (R-P), and Top N precision (P@N). They are also the standard measures used in the expert search task of TREC. 5.3 Evidence Extraction In the following experiments, we constructed the baseline by using the query matching methods of phrase matching, the expert matching methods of full name matching and email matching, and the relation of section relation. To show the contribution of each individual method for evidence extraction, we incrementally add the methods to the baseline method. In the following description, we will use ‘+’ to denote applying new method on the previous setting. 5.3.1 Query Matching Table 4 shows the results of expert search achieved by applying different methods of query matching. QB, QPR, QF, and QS denote bi-gram match, proximity match, fuzzy match, and stemmed match, respectively. The performance of the proposed model increases stably on MAP when new query matches are added incrementally. We also find that the introduction of QF and QS bring some drop on R-Precision and P@10. It is reasonable because both QF and QS bring high recall while affect the precision a bit. The overall relative improvement of using query matching compared to the baseline is presented in the row “Improv.”. We performed ttests on MAP. The p-values (< 0.05) are presented in the “T-test” row, which shows that the improvement is statistically significant. TREC 2005 TREC 2006 MAP R-P P@10 MAP R-P P@10 Baseline 0.1840 0.2136 0.3060 0.3752 0.4585 0.5604 +QB 0.1957 0.2438 0.3320 0.4140 0.4910 0.5799 +QPR 0.2024 0.2501 0.3360 0.4530 0.5137 0.5922 +QF ,QS 0.2030 0.2501 0.3360 0.4580 0.5112 0.5901 Improv. 10.33% 17.09% 9.80% 22.07% 11.49% 5.30% T-test 0.0084 0.0000 Table 4. The effects of query matching 5.3.2 Person Matching For person matching, we considered four types of masks, namely combined name (NC), abbreviated name (NA), short name (NS) and alias and new email (NAE). Table 5 provides the results on person matching at TREC 2005 and 2006. The baseline is the best model achieved in previous section. It seems that there is little improvement on P@10 while an improvement of 6.21% and 14.00% is observed on MAP. This might be due to the fact that the matching method such as NC has a higher recall but lower precision. TREC 2005 TREC 2006 MAP R-P P@10 MAP R-P P@10 Baseline 0.2030 0.2501 0.3360 0.4580 0.5112 0.5901 +NC 0.2056 0.2539 0.3463 0.4709 0.5152 0.5931 +NA 0.2106 0.2545 0.3400 0.5010 0.5181 0.6000 +NS 0.2111 0.2578 0.3400 0.5121 0.5192 0.6000 +NAE 0.2156 0.2591 0.3400 0.5221 0.5212 0.6000 Improv. 6.21% 3.60% 1.19% 14.00% 1.96% 1.68% T-test 0.0064 0.0057 Table 5. 
The effects of person matching 920 5.3.3 Multiple Relations For relation extraction, we experimentally demonstrated the use of each of the five relations proposed in Section 4.1.3, i.e., section relation (RS), windowed section relation (RWS), reference section relation (RRS), title-author relation (RTA), and section title-body relation (RSTB). We used the best model achieved in previous section as the baseline. From Table 6, we can see that the section titlebody relation contributes the most to the improvement of the performance. By using all the discovered relations, a significant improvement of 19.94% and 8.35% is achieved. TREC 2005 TREC 2006 MAP R-P P@10 MAP R-P P@10 Baseline 0.2156 0.2591 0.3400 0.5221 0.5212 0.6000 +RWS 0.2158 0.2633 0.3380 0.5255 0.5311 0.6082 +RRS 0.2160 0.2630 0.3380 0.5272 0.5314 0.6061 +RTA 0.2234 0.2634 0.3580 0.5354 0.5355 0.6245 +RSTB 0.2586 0.3107 0.3740 0.5657 0.5669 0.6510 Improv. 19.94% 19.91% 10.00% 8.35% 8.77% 8.50% T-test 0.0013 0.0043 Table 6. The effects of relation extraction 5.4 Evidence Quality The performance of expert search can be further improved by considering the evidence quality. Table 7 shows the results by considering the differences in quality. We evaluated two kinds of evidence quality: context static quality (Qd) and context dynamic quality (QDY). Each of the evidence quality contributes about 1%-2% improvement for MAP. The improvement from the PageRank that we calculated from the corpus implies that the web scaled rank technique is also effective in the corpus of documents. Finally, we find a significant relative improvement of 6.13% and 2.86% on MAP by using evidence qualities. TREC 2005 TREC 2006 MAP R-P P@10 MAP R-P P@10 Baseline 0.2586 0.3107 0.3740 0.5657 0.5669 0.6510 +Qd 0.2711 0.3188 0.3720 0.5900 0.5813 0.6796 +QDY 0.2755 0.3252 0.3880 0.5943 0.5877 0.7061 Improv. 6.13% 4.67% 3.74% 2.86% 3.67% 8.61% T-test 0.0360 0.0252 Table 7. The effects of using evidence quality 5.5 Comparison with Other Systems In Table 8, we juxtapose the results of our probabilistic model for fine-grained expert search with automatic expert search systems from the TREC evaluation. The performance of our proposed model is rather encouraging, which achieved comparable results to the best automatic systems on the TREC 2005 and 2006. MAP R-prec Prec@10 TREC2005 0.2749 0.3330 0.4520 Rank-1 System TREC20061 0.5947 0.5783 0.7041 TREC2005 0.2755 0.3252 0.3880 Our System TREC2006 0.5943 0.5877 0.7061 Table 8. Comparison with other systems 6 Conclusions This paper proposed to conduct expert search using a fine-grained level of evidence. Specifically, quadruple evidence was formally defined and served as the basis of the proposed model. Different implementations of evidence extraction and evidence quality evaluation were also comprehensively studied. The main contributions are: 1. The proposal of fine-grained expert search, which we believe to be a promising direction for exploring subtle aspects of evidence. 2. The proposal of probabilistic model for finegrained expert search. The model facilitates investigating the subtle aspects of evidence. 3. The extensive evaluation of the proposed probabilistic model and its implementation on the TREC data set. The evaluation shows promising expert search results. In future, we are to explore more domain independent evidences and evaluate the proposed model on the basis of the data from other domains. Acknowledgments The authors would like to thank the three anonymous reviewers for their elaborate and helpful comments. 
The authors also appreciate the valuable suggestions of Hang Li, Nick Craswell, Yangbo Zhu and Linyun Fu. 1 This system, where cluster-based re-ranking is used, is a variation of the fine-grained model proposed in this paper. 921 References Bailey, P., Soboroff , I., Craswell, N., and Vries A.P., Overview of the TREC 2007 Enterprise Track. In: Proc. of TREC 2007. Balog, K., Azzopardi, L., and Rijke, M. D., 2006. Formal models for expert finding in enterprise corpora. In: Proc. of SIGIR’06,pp.43-50. Brin, S. and Page, L., 1998. The anatomy of a rlargescale hypertextual Web search engine, Computer Networks and ISDN Systems (30), pp.107-117. Campbell, C.S., Maglio, P., Cozzi, A. and Dom, B., 2003. Expertise identification using email communications. In: Proc. of CIKM ’03 pp.528–531. Cao, Y., Liu, J., and Bao, S., and Li, H., 2005. Research on expert search at enterprise track of TREC 2005. In: Proc. of TREC 2005. Craswell, N., Hawking, D., Vercoustre, A. M. and Wilkins, P., 2001. P@NOPTIC Expert: searching for experts not just for documents. In: Proc. of Ausweb’01. Craswell, N., Vries, A.P., and Soboroff, I., 2005. Overview of the TREC 2005 Enterprise Track. In: Proc. of TREC 2005. Davenport, T. H. and Prusak, L., 1998. Working Knowledge: how organizations manage what they know. Howard Business, School Press, Boston, MA. Dom, B., Eiron, I., Cozzi A. and Yi, Z., 2003. Graphbased ranking algorithms for e-mail expertise analysis, In: Proc. of SIGMOD’03 workshop on Research issues in data mining and knowledge discovery. Fang, H., Zhou, L., Zhai, C., 2006. Language models for expert finding-UIUC TREC 2006 Enterprise Track Experiments, In: Proc. of TREC2006. Fu, Y., Xiang, R., Liu, Y., Zhang, M., Ma, S., 2007. A CDD-based Formal Model for Expert Finding. In Proc. of CIKM 2007. Hertzum, M. and Pejtersen, A. M., 2000. The information-seeking practices of engineers: searching for documents as well as for people. Information Processing and Management, 36(5), pp.761–778. Hu, Y., Li, H., Cao, Y., Meyerzon, D. Teng, L., and Zheng, Q., 2006. Automatic extraction of titles from general documents using machine learning, IPM. Kautz, H., Selman, B. and Milewski, A., 1996. Agent amplified communication. In: Proc. of AAAI‘96, pp. 3–9. Mattox, D., Maybury, M. and Morey, D., 1999. Enterprise expert and knowledge discovery. Technical Report. McDonald, D. W. and Ackerman, M. S., 1998. Just Talk to Me: a field study of expertise location. In: Proc. of CSCW’98, pp.315-324. Mockus, A. and Herbsleb, J.D., 2002. Expertise Browser: a quantitative approach to identifying expertise, In: Proc. of ICSE’02. Maconald, C. and Ounis, I., 2006. Voting for candidates: adapting data fusion techniques for an expert search task. In: Proc. of CIKM'06, pp.387-396. Macdonald, C. and Ounis, I., 2007. Expertise Drift and Query Expansion in Expert Search. In Proc. of CIKM 2007. Petkova, D., and Croft, W. B., 2006. Hierarchical language models for expert finding in enterprise corpora, In: Proc. of ICTAI’06, pp.599-608. Ponte, J. and Croft, W., 1998. A language modeling approach to information retrieval, In: Proc. of SIGIR’98, pp.275-281. Sihn, W. and Heeren F., 2001. Xpertfinder-expert finding within specified subject areas through analysis of e-mail communication. In: Proc. of the 6th Annual Scientific conference on Web Technology. Soboroff, I., Vries, A.P., and Craswell, N., 2006. Overview of the TREC 2006 Enterprise Track. In: Proc. of TREC 2006. Steer, L.A. and Lochbaum, K.E., 1988. 
An expert/expert locating system based on automatic representation of semantic structure, In: Proc. of the 4th IEEE Conference on Artificial Intelligence Applications. Yimam, D., 1996. Expert finding systems for organizations: domain analysis and the DEMOIR approach. In: ECSCW’99 workshop of beyond knowledge management: managing expertise, pp. 276–283. 922
2008
104
Proceedings of ACL-08: HLT, pages 923–931, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Credibility Improves Topical Blog Post Retrieval Wouter Weerkamp ISLA, University of Amsterdam [email protected] Maarten de Rijke ISLA, University of Amsterdam [email protected] Abstract Topical blog post retrieval is the task of ranking blog posts with respect to their relevance for a given topic. To improve topical blog post retrieval we incorporate textual credibility indicators in the retrieval process. We consider two groups of indicators: post level (determined using information about individual blog posts only) and blog level (determined using information from the underlying blogs). We describe how to estimate these indicators and how to integrate them into a retrieval approach based on language models. Experiments on the TREC Blog track test set show that both groups of credibility indicators significantly improve retrieval effectiveness; the best performance is achieved when combining them. 1 Introduction The growing amount of user generated content available online creates new challenges for the information retrieval (IR) community, in terms of search and analysis tasks for this type of content. The introduction of a blog retrieval track at TREC (Ounis et al., 2007) has created a platform where we can begin to address these challenges. During the 2006 edition of the track, two types of blog post retrieval were considered: topical (retrieve posts about a topic) and opinionated (retrieve opinionated posts about a topic). Here, we consider the former task. Blogs and blog posts offer unique features that may be exploited for retrieval purposes. E.g., Mishne (2007b) incorporates time in a blog post retrieval model to account for the fact that many blog queries and posts are a response to a news event (Mishne and de Rijke, 2006). Data quality is an issue with blogs—the quality of posts ranges from low to edited news article-like. Some approaches to post retrieval use indirect quality measures (e.g., elaborate spam filtering (Java et al., 2007) or counting inlinks (Mishne, 2007a)). Few systems turn the credibility (Metzger, 2007) of blog posts into an aspect that can benefit the retrieval process. Our hypothesis is that more credible blog posts are preferred by searchers. The idea of using credibility in the blogosphere is not new: Rubin and Liddy (2006) define a framework for assessing blog credibility, consisting of four main categories: blogger’s expertise and offline identity disclosure; blogger’s trustworthiness and value system; information quality; and appeals and triggers of a personal nature. Under these four categories the authors list a large number of indicators, some of which can be determined from textual sources (e.g., literary appeal), and some of which typically need non-textual evidence (e.g., curiosity trigger); see Section 2. We give concrete form to Rubin and Liddy (2006)’s indicators and test their impact on blog post retrieval effectiveness. We do not consider all indicators: we only consider indicators that are textual in nature, and to ensure reproducibility of our results, we only consider indicators that can be derived from the TRECBlog06 corpus (and that do not need additional resources such as bloggers’ profiles that may be hard to obtain for technical or legal reasons). 
We detail and implement two groups of credibility indicators: post level (these use information about individual posts) and blog level (these use information from the underlying blogs). Within the post level group, we distinguish between topic dependent and independent indicators. To make matters concrete, consider Figure 1: both posts are relevant to the query “tennis,” but based on obvious surface level features of the posts we quickly determine Post 2 to be more credible than Post 1. The most obvious features are spelling errors, the lack of leading capitals, and the large number of exclamation marks and 923 Post 1 as for today (monday) we had no school! yaay labor day. but we had tennis from 9-11 at the highschool. after that me suzi melis & ashley had a picnic at cecil park and then played tennis. i just got home right now. it was a very very very fun afternoon. (...) we will have a short week. mine will be even shorter b/c i wont be there all day on friday cuz we have the Big 7 Tournament at like keystone oaks or sumthin. so i will miss school the whole day. Post 2 Wimbledon champion Venus Williams has pulled out of next week’s Kremlin Cup with a knee injury, tournament organisers said on Friday. The American has not played since pulling out injured of last month’s China Open. The former world number one has been troubled by various injuries (...) Williams’s withdrawal is the latest blow for organisers after Australian Open champion and home favorite Marat Safin withdrew (...). Figure 1: Two blog posts relevant to the query “tennis.” personal pronouns—i.e., topic independent ones— and the fact that the language usage in the second post is more easily associated with credible information about tennis than the language usage in the first post—i.e., a topic dependent feature. Our main finding is that topical blog post retrieval can benefit from using credibility indicators in the retrieval process. Both post and blog level indicator groups each show a significant improvement over the baseline. When we combine all features we obtain the best retrieval performance, and this performance is comparable to the best performing TREC 2006 and 2007 Blog track participants. The improvement over the baseline is stable across most topics, although topic shift occurs in a few cases. The rest of the paper is organized as follows. In Section 2 we provide information on determining credibility; we also relate previous work to the credibility indicators that we consider. Section 3 specifies our retrieval model, a method for incorporating credibility indicators in our retrieval model, and estimations of credibility indicators. Section 4 gives the results of our experiments aimed at assessing the contribution of credibility towards blog post retrieval effectiveness. We conclude in Section 5. 2 Credibility Indicators In our choice of credibility indicators we use (Rubin and Liddy, 2006)’s work as a reference point. We recall the main points of their framework and relate our indicators to it. We briefly discuss other credibility-related indicators found in the literature. 2.1 Rubin and Liddy (2006)’s work Rubin and Liddy (2006) proposed a four factor analytical framework for blog-readers’ credibility assessment of blog sites, based in part on evidentiality theory (Chafe, 1986), website credibility assessment surveys (Stanford et al., 2002), and Van House (2004)’s observations on blog credibility. The four factors—plus indicators for each of them—are: 1. 
blogger’s expertise and offline identity disclosure (a: name and geographic location; b: credentials; c: affiliations; d: hyperlinks to others; e: stated competencies; f: mode of knowing); 2. blogger’s trustworthiness and value system (a: biases; b: beliefs; c: opinions; d: honesty; e: preferences; f: habits; g: slogans) 3. information quality (a: completeness; b: accuracy; c: appropriateness; d: timeliness; e: organization (by categories or chronology); f: match to prior expectations; g: match to information need); and 4. appeals and triggers of a personal nature (a: aesthetic appeal; b: literary appeal (i.e., writing style); c: curiosity trigger; d: memory trigger; e: personal connection). 2.2 Our credibility indicators We only consider credibility indicators that avoid making use of the searcher’s or blogger’s identity (i.e., excluding 1a, 1c, 1e, 1f, 2e from Rubin and Liddy’s list), that can be estimated automatically from available test collections only so as to facilitate repeatability of our experiments (ruling out 3e, 4a, 4c, 4d, 4e), that are textual in nature (ruling out 2d), and that can be reliably estimated with state-of-theart language technology (ruling out 2a, 2b, 2c, 2g). For reasons that we explain below, we also ignore the “hyperlinks to others” indicator (1d). The indicators that we do consider—1b, 2f, 3a, 3b, 3c, 3d, 3f, 3g, 4b—are organized in two groups, 924 depending on the information source that we use to estimate them, post level and blog level, and the former is further subdivided into topic independent and topic dependent. Table 1 lists the indicators we consider, together with the corresponding Rubin and Liddy indicator(s). Let us quickly explain our indicators. First, we consider the use of capitalization to be an indicator of good writing style, which in turn contributes to a sense of credibility. Second, we identify Western style emoticons (e.g., :-) and :-D) in blog posts, and assume that excessive use indicates a less credible blog post. Third, words written in all caps are considered shouting in a web environment; we consider shouting to be indicative for non-credible posts. Fourth, a credible author should be able to write without (a lot of) spelling errors; the more spelling errors occur in a blog post, the less credible we consider it to be. Fifth, we assume that credible texts have a reasonable length; the text should supply enough information to convince the reader of the author’s credibility. Sixth, assuming that much of what goes on in the blogosphere is inspired by events in the news (Mishne and de Rijke, 2006), we believe that, for news related topics, a blog post is more credible if it is published around the time of the triggering news event (timeliness). Seventh, our semantic indicator also exploits the news-related nature of many blog posts, and “prefers” posts whose language usage is similar to news stories on the topic. Eighth, blogs are a popular place for spammers; spam blogs are not considered credible and we want to demote them in the search results. Ninth, comments are a notable blog feature: readers of a blog post often have the possibility of leaving a comment for other readers or the author. When people comment on a blog post they apparently find the post worth putting effort in, which can be seen as an indicator of credibility (Mishne and Glance, 2006). Tenth, blogs consist of multiple posts in (reverse) chronological order. 
The temporal aspect of blogs may indicate credibility: we assume that bloggers with an irregular posting behavior are less credible than bloggers who post regularly. And, finally, we consider the topical fluctuation of a blogger’s posts. When looking for credible information we would like to retrieve posts from bloggers that have a certain level of (topical) consistency: not the fluctuating indicator topic de- post level/ related Rubin & pendent? blog level Liddy indicator capitalization no post 4b emoticons no post 4b shouting no post 4b spelling no post 4b post length no post 3a timeliness yes post 3d semantic yes post 3b, 3c spam no blog 3b, 3c, 3f, 3g comments no blog 1b regularity no blog 2f consistency no blog 2f Table 1: Credibility indicators behavior of a (personal) blogger, but a solid interest. 2.3 Other work In a web setting, credibility is often couched in terms of authoritativeness and estimated by exploiting the hyperlink structure. Two well-known examples are the PageRank and HITS algorithms (Liu, 2007), that use the link structure in a topic independent or topic dependent way, respectively. Zhou and Croft (2005) propose collection-document distance and signal-to-noise ratio as priors for the indication of quality in web ad hoc retrieval. The idea of using link structure for improving blog post retrieval has been researched, but results do not show improvements. E.g., Mishne (2007a) finds that retrieval performance decreased. This confirms lessons from the TREC web tracks, where participants found no conclusive benefit from the use of link information for ad hoc retrieval tasks (Hawking and Craswell, 2002). Hence, we restrict ourselves to the use of content-based features for blog post retrieval, thus ignoring indicator 1d (hyperlinks to others). Related to credibility in blogs is the automatic assessment of forum post quality discussed by Weimer et al. (2007). The authors use surface, lexical, syntactic and forum-specific features to classify forum posts as bad posts or good posts. The use of forumspecific features (such as whether or not the post contains HTML, and the fraction of characters that are inside quotes of other posts), gives the highest benefits to the classification. Working in the community question/answering domain, Agichtein et al. (2008) use a content features, as well non-content information available, such as links between items and 925 explicit quality ratings from members of the community to identify high-quality content. As we argued above, spam identification may be part of estimating a blog (or blog post’s) credibility. Spam identification has been successfully applied in the blogosphere to improve retrieval effectiveness; see, e.g., (Mishne, 2007b; Java et al., 2007). 3 Modeling In this section we detail the retrieval model that we use, incorporating ranking by relevance and by credibility. We also describe how we estimate the credibility indicators listed in Section 2. 3.1 Baseline retrieval model We address the baseline retrieval task using a language modeling approach (Croft and Lafferty, 2003), where we rank documents given a query: p(d|q) = p(d)p(q|d)p(q)−1. Using Bayes’ Theorem we rewrite this, ignoring expressions that do not influence the ranking, obtaining p(d|q) ∝p(d)p(q|d), (1) and, assuming that query terms are independent, p(d|q) ∝p(d) Q t∈q p(t|θd)n(t,q), (2) where θd is the blog post model, and n(t, q) denotes the number of times term t occurs in query q. 
To prevent numerical underflows, we perform this computation in the log domain: log p(d|q) ∝log p(d) + X t∈q n(t, q) log p(t|θd) (3) In our final formula for ranking posts based on relevance only we substitute n(t, q) by the probability of the term given the query. This allows us to assign different weights to query terms and yields: log p(d|q) ∝log p(d) + X t∈q p(t|q) log p(t|θd). (4) For our baseline experiments we assume that all query terms are equally important and set p(t|q) set to be n(t, q)·|q|−1. The component p(d) is the topic independent (“prior”) probability that the document is relevant; in the baseline model, priors are ignored. 3.2 Incorporating credibility Next, we extend Eq. 4 by incorporating estimations of the credibility indicators listed in Table 1. Recall that our credibility indicators come in two kinds— post level and blog level—and that the post level indicators can be topic indepedent or topic dependent, while all blog level indicators are topic independent. Now, modeling topic independent indicators is easy—they can simply be incorporated in Eq. 4 as a weighted sum of two priors: p(d) = λ · ppl(d) + (1 −λ) · pbl(d), (5) where ppl(d) and pbl(d) are the post level and blog level prior probability of d, respectively. The priors ppl and pbl are defined as equally weighted sums: ppl(d) = P i 1 5 · pi(d) pbl(d) = P j 1 4 · pj(d), where i ∈{capitalization, emoticons, shouting, spelling, post length} and j ∈{spam, comments, regularity, consistency}. Estimations of the priors pi and pj are given below; the weighting parameter λ is determined experimentally. Modeling topic dependent indicators is slighty more involved. Given a query q, we create a query model θq that is a mixture of a temporal query model θtemporal and a semantic query model θsemantic: p(t|θq) = (6) µ · p(t|θtemporal) + (1 −µ) · p(t|θsemantic). The component models θtemporal and θsemantic will be estimated below; the parameter µ will be estimated experimentally. Our final ranking formula, then, is obtained by plugging in Eq. 5 and 6 in Eq. 4: log p(d|q) ∝log p(d) + β (P t p(t|q) · log p(t|θd)) (7) + (1 −β) (P t p(t|θq) · log p(t|θd)) . 3.3 Estimating credibility indicators Next, we specify how each of the credibility indicators is estimated; we do so in two groups: post level and blog level. 926 3.3.1 Post level credibility indicators Capitalization We estimate the capitalization prior as follows: pcapitalization(d) = n(c, s) · |s|−1, (8) where n(c, s) is the number of sentences starting with a capital and |s| is the number of sentences; we only consider sentences with five or more words. Emoticons The emoticons prior is estimated as pemoticons(d) = 1 −n(e, d) · |d|−1, (9) where n(e, d) is the number of emoticons in the post and |d| is the length of the post in words. Shouting We use the following equation to estimate the shouting prior: pshouting(d) = 1 −n(a, d) · |d|−1, (10) where n(a, d) is the number of all caps words in blog post d and |d| is the post length in words. Spelling The spelling prior is estimated as pspelling(d) = 1 −n(m, d) · |d|−1, (11) where n(m, d) is the number of misspelled (or unknown) words and |d| is the post length in words. Post length The post length prior is estimated using |d|, the post length in words: plength(d) = log(|d|). (12) Timeliness We estimate timeliness using the timebased language models θtemporal proposed in (Li and Croft, 2003; Mishne, 2007b). 
3.3 Estimating credibility indicators Next, we specify how each of the credibility indicators is estimated; we do so in two groups: post level and blog level. 3.3.1 Post level credibility indicators Capitalization We estimate the capitalization prior as p_capitalization(d) = n(c,s) / |s|, (8) where n(c,s) is the number of sentences starting with a capital and |s| is the number of sentences; we only consider sentences with five or more words. Emoticons The emoticons prior is estimated as p_emoticons(d) = 1 − n(e,d) / |d|, (9) where n(e,d) is the number of emoticons in the post and |d| is the length of the post in words. Shouting We use the following equation to estimate the shouting prior: p_shouting(d) = 1 − n(a,d) / |d|, (10) where n(a,d) is the number of all-caps words in blog post d and |d| is the post length in words. Spelling The spelling prior is estimated as p_spelling(d) = 1 − n(m,d) / |d|, (11) where n(m,d) is the number of misspelled (or unknown) words and |d| is the post length in words. Post length The post length prior is estimated using |d|, the post length in words: p_length(d) = log(|d|). (12) Timeliness We estimate timeliness using the time-based language models θ_temporal proposed in (Li and Croft, 2003; Mishne, 2007b). That is, we use a news corpus from the same period as the blog corpus that we use for evaluation purposes (see Section 4.2). We assign a timeliness score per post based on p(d|θ_temporal) = (n(date(d), k) + 1) / k, (13) where k is the number of top results from the initial result list, date(d) is the date associated with document d, and n(date(d), k) is the number of documents in k with the same date as d. For our initial result list we perform retrieval on both the blog and the news corpus and take k = 50 for both corpora. Semantic A semantic query model θ_semantic is obtained using ideas due to Diaz and Metzler (2006). Again, we use a news corpus from the same period as the evaluation blog corpus and estimate θ_semantic. We issue the query to the external news corpus, retrieve the top 10 documents, and extract the top 10 distinctive terms from these documents. These terms are added to the original query terms to capture the language usage around the topic. 3.3.2 Blog level credibility indicators Spam filtering To estimate the spamminess of a blog, we take a simple approach. We train an SVM classifier on a labeled splog blog dataset (Kolari et al., 2006), using the top 1500 words for both spam and non-spam blogs as features. For each classified blog d we have a confidence value s(d). If the classifier cannot make a decision (s(d) = 0) we set p_spam(d) to 0; otherwise we use the following to transform s(d) into a spam prior p_spam(d): p_spam(d) = s(d) / (2|s(d)|) − s(d) / (2s(d)^2 + 2|s(d)|) + 1/2. (14) Comments We estimate the comment prior as p_comment(d) = log(n(r,d)), (15) where n(r,d) is the number of comments on post d. Regularity To estimate the regularity prior we use p_regularity(d) = log(σ_interval), (16) where σ_interval expresses the standard deviation of the temporal intervals between two successive posts. Topical consistency Here we use an approach similar to query clarity (Cronen-Townsend and Croft, 2002): based on the list of posts from the same blog, we compare the topic distribution of blog B to the topic distribution in the collection C and assign a 'clarity' value to B; a score further away from zero indicates a higher topical consistency. We estimate the topical consistency prior as p_topic(d) = log(clarity(d)), (17) where clarity(d) is estimated by clarity(d) = [Σ_w p(w|B) · log(p(w|B) / p(w))] / [Σ_w p(w|B)], (18) with p(w) = count(w,C) / |C| and p(w|B) = count(w,B) / |B|. 3.3.3 Efficiency All estimators discussed above can be implemented efficiently: most are document priors and can therefore be calculated offline. The only topic dependent estimators are timeliness and language usage; both can be implemented efficiently as specific forms of query expansion.
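To illustrate, a few of the priors above could be computed along the following lines; this is a simplified sketch rather than the implementation used in the paper, and the regex-based sentence splitting, the zero-comment guard, and the add-one fallback on unseen collection counts are our own simplifying assumptions.

```python
import math
import re


def capitalization_prior(post_text):
    """Eq. (8): fraction of sentences (with >= 5 words) that start with a capital."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", post_text) if len(s.split()) >= 5]
    if not sentences:
        return 0.0
    capitalized = sum(1 for s in sentences if s[0].isupper())
    return capitalized / len(sentences)


def shouting_prior(post_text):
    """Eq. (10): one minus the fraction of all-caps words."""
    words = post_text.split()
    if not words:
        return 0.0
    all_caps = sum(1 for w in words if len(w) > 1 and w.isupper())
    return 1.0 - all_caps / len(words)


def comments_prior(num_comments):
    """Eq. (15): log of the number of comments on the post."""
    return math.log(num_comments) if num_comments > 0 else 0.0


def clarity(blog_counts, collection_counts, collection_size):
    """Eq. (18): divergence of the blog's term distribution from the collection."""
    blog_size = sum(blog_counts.values())
    num, den = 0.0, 0.0
    for w, c in blog_counts.items():
        p_w_b = c / blog_size
        p_w = collection_counts.get(w, 1) / collection_size   # add-one fallback for unseen terms
        num += p_w_b * math.log(p_w_b / p_w)
        den += p_w_b
    return num / den if den > 0 else 0.0
```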
4 Evaluation In this section we describe the experiments we conducted to answer our research questions about the impact of credibility on blog post retrieval. 4.1 Research questions Our research revolves around the contribution of credibility to the effectiveness of topical blog post retrieval: what is the contribution of individual indicators, of the post level indicators (topic dependent or independent), of the blog level indicators, and of all indicators combined? And do different topics benefit from different indicators? To answer our research questions we compared the performance of the baseline retrieval system (as detailed in Section 3.1) with extensions of the baseline system with a single indicator, a set of indicators, or all indicators. 4.2 Setup We apply our models to the TREC Blog06 corpus (Macdonald and Ounis, 2006). This corpus has been constructed by monitoring around 100,000 blog feeds for a period of 11 weeks in early 2006, downloading all posts created in this period. For each permalink (HTML page containing one blog post) the feed id is registered. We can use this id to aggregate post level features to the blog level. In our experiments we use only the HTML documents, 3.2M permalinks, which add up to around 88 GB. The TREC 2006 and 2007 Blog tracks each offer 50 topics and assessments (Ounis et al., 2007; Macdonald et al., 2007). For topical relevancy, assessment was done using a standard two-level scale: the content of the post was judged to be topically relevant or not. The evaluation metrics that we use are standard ones: mean average precision (MAP) and precision@10 (p@10) (Baeza-Yates and Ribeiro-Neto, 1999). For all our retrieval tasks we use the title field (T) of the topic statement as the query. To estimate the timeliness and semantic credibility indicators, we use AQUAINT-2, a set of newswire articles (2.5 GB, about 907K documents) that are roughly contemporaneous with the TREC Blog06 collection (AQUAINT-2, 2007). Articles are in English and come from a variety of sources. Statistical significance is tested using a two-tailed paired t-test. Significant improvements over the baseline are marked with △ (α = 0.05) or ▲ (α = 0.01). We use ▽ and ▼ for a drop in performance (for α = 0.05 and α = 0.01, respectively). 4.3 Parameter estimation The models proposed in Section 3.2 contain the parameters β, λ and µ. These parameters need to be estimated and, hence, require a training and a test set. We use a two-fold parameter estimation process: in the first cycle we estimate the parameters on the TREC 2006 Blog topic set and test these settings on the topics of the TREC 2007 Blog track. The second cycle goes the other way around and trains on the 2007 set, while testing on the 2006 set. Figure 2 shows the optimum values for λ, β, and µ on the 2006 and the 2007 topic sets for both MAP (bottom lines) and p@10 (top lines). When looking at the MAP scores, the optimal setting for λ is almost identical for the two topic sets: 0.4 for the 2006 set and 0.3 for the 2007 set; the optimal setting for β is also very similar for both sets: 0.4 for the 2006 set and 0.5 for the 2007 set. As to µ, it is clear that timeliness does not improve the performance over using the semantic feature alone, and the optimal setting for µ is therefore 0.0. Both µ and β show similar behavior on p@10 as on MAP, but for λ we see a different trend: if early precision is required, the value of λ should be increased, giving more weight to the topic-independent post level features compared to the blog level features. 4.4 Retrieval performance Table 2 lists the retrieval results for the baseline, for each of the credibility indicators (on top of the baseline), for four subsets of indicators, and for all indicators combined. The baseline performs similarly to the median scores at the TREC 2006 Blog track (MAP: 0.2203; p@10: 0.564) and somewhat below the median MAP score at the 2007 Blog track (MAP: 0.3340), but above the median p@10 score: 0.3805. Figure 2: Parameter estimation on the TREC 2006 and 2007 Blog topics. (Left): λ. (Center): β. (Right): µ.
2006 2007 map p@10 map p@10 baseline 0.2156 0.4360 0.2820 0.5160 capitalization 0.2155 0.4500 0.2824 0.5160 emoticons 0.2156 0.4360 0.2820 0.5200 shouting 0.2159 0.4320 0.2833 0.5100 spelling 0.2179△0.4480△0.2839▲0.5220 post length 0.2502▲0.4960▲0.3112▲0.5700▲ timeliness 0.1865▼0.4520 0.2660 0.4860 semantic 0.2840▲0.6240▲0.3379▲0.6640▲ spam filtering 0.2093 0.4700 0.2814 0.5760▲ comments 0.2497▲0.5000▲0.3099▲0.5600▲ regularity 0.1658▼0.4940△0.2353▼0.5640△ consistency 0.2141▼0.4220 0.2785▽0.5040 post level 0.2374▲0.4920▲0.2990▲0.5660▲ (topic indep.) post level 0.2840▲0.6240▲0.3379▲0.6640▲ (topic dep.) post level 0.2911▲0.6380▲0.3369▲0.6620▲ (all) blog level 0.2391▲0.4500 0.3023▲0.5580▲ all 0.3051▲0.6880▲0.3530▲0.6900▲ Table 2: Retrieval performance on 2006 and 2007 topics, using λ = 0.3, β = 0.4, and µ = 0.0. Some (topic independent) post level indicators hurt the MAP score, while others help (for both years, and both measures). Combined, the topic independent post level indicators perform less well than the use of one of them (post length). As to the topic dependent post level indicators, timeliness hurts performance on MAP for both years, while the semantic indicator provides significant improvements across the board (resulting in a top 2 score in terms of MAP and a top 5 score in terms of p@10, when compared to the TREC 2006 Blog track participants that only used the T field). Some of the blog level features hurt more than they help (regularity, consistency), while the comments feature helps, on all measures, and for both years. Combined, the blog level features help less than the use of one of them (comments). As a group, the combined post level features help more than either of the two post level sub groups alone. The blog level features show similar results to the topic-independent post level features, obtaining a significant increase on both MAP and p@10, but lower than the topic-dependent post level features. The grand combination of all credibility indicators leads to a significant improvement over any of the single indicators and over any of the four subsets considered in Table 2. The MAP score of this run is higher than the best performing run in the TREC 2006 Blog track and has a top 3 performance on p@10; its 2007 performance is just within the top half on both MAP and p@10. 4.5 Analysis Next we examine the differences in average precision (per topic) between the baseline and subsets of indicators (post and blog level) and the grand combination. We limit ourselves to an analysis of the MAP scores. Figure 3 displays the per topic average precision scores, where topics are sorted by absolute gain of the grand combination over the baseline. In 2006, 7 (out of 50) topics were negatively affected by the use of credibility indicators; in 2007, 15 (out of 50) were negatively affected. Table 3 lists the topics that displayed extreme behavior (in terms of relative performance gain or drop in AP score). While the extreme drops for both years are in the same range, the gains for 2006 are more extreme than for 2007. The topic that is hurt most (in absolute terms) by the credibility indicators is the 2007 topic 910: aperto network (AP -0.2781). The semantic indicator is to blame for this decrease is: the terms included in the expanded query shift the topic from a wireless broadband provider to television networks. 
Figure 3: Per-topic AP differences between the baseline run and runs with blog level features (triangles), post level features (circles), and all features (squares) on the 2006 (left) and 2007 (right) topics. Table 3: Extreme performance gains/drops of the grand combination over the baseline (MAP). 2006 topics: 900 mcdonalds +525.9%; 866 foods +446.2%; 865 basque +308.6%; 862 blackberry -21.5%; 870 barry bonds -35.2%; 898 business intelligence resources -78.8%. 2007 topics: 923 challenger +162.1%; 926 hawthorne heights +160.7%; 945 bolivia +125.5%; 943 censure -49.4%; 928 big love -80.0%; 904 alterman -84.2%. Topics that gain most (in absolute terms) are 947 (sasha cohen; AP +0.3809) and 923 (challenger; AP +0.3622) from the 2007 topic set. Finally, the combination of all credibility indicators hurts 7 (2006) plus 15 (2007), i.e., 22 topics; for the post level indicators we get a performance drop in AP for 28 topics (10 plus 18, respectively), and for the blog level indicators we get a drop for 15 topics (4 plus 11, respectively). Hence, the combination of all indicators strikes a good balance between overall performance gain and per-topic risk. 5 Conclusions We provided efficient estimations for 11 credibility indicators and assessed their impact on topical blog post retrieval, on top of a content-based retrieval baseline. We compared the contribution of these indicators, both individually and in groups, and found that (combined) they have a significant positive impact on topical blog post retrieval effectiveness. Certain single indicators, like post length and comments, make good credibility indicators on their own; the best performing credibility indicator group consists of the topic dependent post level ones. Other future work concerns indicator selection: instead of taking all indicators on board, consider selected indicators only, in a topic dependent fashion. Our choice of credibility indicators was based on a framework proposed by Rubin and Liddy (2006): the estimators we used are natural implementations of the selected indicators, but by no means the only possible ones. In future work we intend to extend the set of indicators considered so as to include, e.g., stated competencies (1e), by harvesting and analyzing bloggers' profiles, and to extend the set of estimators for indicators that we already consider, such as reading level measures (e.g., Flesch-Kincaid) for the literary appeal indicator (4b). Acknowledgments We would like to thank our reviewers for their feedback. Both authors were supported by the E.U. IST programme of the 6th FP for RTD under project MultiMATCH contract IST-033104. De Rijke was also supported by NWO under project numbers 017.001.190, 220-80-001, 264-70-050, 354-20-005, 600.065.120, 612-13-001, 612.000.106, 612.066.302, 612.069.006, 640.001.501, and 640.002.501. References Agichtein, E., Castillo, C., Donato, D., Gionis, A., and Mishne, G. (2008). Finding high-quality content in social media. In WSDM '08. AQUAINT-2 (2007). URL: http://trec.nist.gov/data/qa/2007_qadata/qa.07.guidelines.html#documents. Baeza-Yates, R. and Ribeiro-Neto, B. (1999). Modern Information Retrieval. Addison Wesley. Chafe, W. (1986). Evidentiality in English conversation and academic writing. In Chafe, W. and Nichols, J., editors, Evidentiality: The Linguistic Coding of Epistemology, volume 20, pages 261–273.
Ablex Publishing Corporation. Croft, W. B. and Lafferty, J., editors (2003). Language Modeling for Information Retrieval. Kluwer. Cronen-Townsend, S. and Croft, W. (2002). Quantifying query ambiguity. In Proceedings of Human Language Technology 2002, pages 94–98. Diaz, F. and Metzler, D. (2006). Improving the estimation of relevance models using large external corpora. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 154–161, New York. ACM Press. Hawking, D. and Craswell, N. (2002). Overview of the TREC-2001 web track. In The Tenth Text Retrieval Conferences (TREC-2001), pages 25–31. Java, A., Kolari, P., Finin, T., Joshi, A., and Martineau, J. (2007). The blogvox opinion retrieval system. In The Fifteenth Text REtrieval Conference (TREC 2006). Kolari, P., Finin, T., Java, A., and Joshi, A. (2006). Splog blog dataset. URL: http: //ebiquity.umbc.edu/resource/html/ id/212/Splog-Blog-Dataset. Li, X. and Croft, W. (2003). Time-based language models. In Proceedings of the 12th International Conference on Information and Knowledge Managment (CIKM), pages 469–475. Liu, B. (2007). Web Data Mining. Springer-Verlag, Heidelberg. Macdonald, C. and Ounis, I. (2006). The trec blogs06 collection: Creating and analyzing a blog test collection. Technical Report TR-2006-224, Department of Computer Science, University of Glasgow. Macdonald, C., Ounis, I., and Soboroff, I. (2007). Overview of the trec 2007 blog track. In TREC 2007 Working Notes, pages 31–43. Metzger, M. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journl of the American Society for Information Science and Technology, 58(13):2078–2091. Mishne, G. (2007a). Applied Text Analytics for Blogs. PhD thesis, University of Amsterdam, Amsterdam. Mishne, G. (2007b). Using blog properties to improve retrieval. In Proceedings of ICWSM 2007. Mishne, G. and de Rijke, M. (2006). A study of blog search. In Lalmas, M., MacFarlane, A., R¨uger, S., Tombros, A., Tsikrika, T., and Yavlinsky, A., editors, Advances in Information Retrieval: Proceedings 28th European Conference on IR Research (ECIR 2006), volume 3936 of LNCS, pages 289–301. Springer. Mishne, G. and Glance, N. (2006). Leave a reply: An analysis of weblog comments. In Proceedings of WWW 2006. Ounis, I., de Rijke, M., Macdonald, C., Mishne, G., and Soboroff, I. (2007). Overview of the trec-2006 blog track. In The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings. Rubin, V. and Liddy, E. (2006). Assessing credibility of weblogs. In Proceedings of the AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs (CAAW). Stanford, J., Tauber, E., Fogg, B., and Marable, L. (2002). Experts vs online consumers: A comparative credibility study of health and finance web sites. URL: http://www.consumerwebwatch.org/ news/report3_credibilityresearch/ slicedbread.pdf. Van House, N. (2004). Weblogs: Credibility and collaboration in an online world. URL: people. ischool.berkeley.edu/˜vanhouse/Van\ %20House\%20trust\%20workshop.pdf. Weimer, M., Gurevych, I., and Mehlhauser, M. (2007). Automatically assessing the post quality in online discussions on software. In Proceedings of the ACL 2007 Demo and Poster Sessions, pages 125–128. Zhou, Y. and Croft, W. B. (2005). Document quality models for web ad hoc retrieval. 
In CIKM ’05: Proceedings of the 14th ACM international conference on Information and knowledge management, pages 331–332.
2008
105
Proceedings of ACL-08: HLT, pages 932–940, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Linguistically Motivated Features for Enhanced Back-of-the-Book Indexing Andras Csomai and Rada Mihalcea Department of Computer Science University of North Texas [email protected],[email protected] Abstract In this paper we present a supervised method for back-of-the-book index construction. We introduce a novel set of features that goes beyond the typical frequency-based analysis, including features based on discourse comprehension, syntactic patterns, and information drawn from an online encyclopedia. In experiments carried out on a book collection, the method was found to lead to an improvement of roughly 140% as compared to an existing state-of-the-art supervised method. 1 Introduction Books represent one of the oldest forms of written communication and have been used since thousands of years ago as a means to store and transmit information. Despite this fact, given that a large fraction of the electronic documents available online and elsewhere consist of short texts such as Web pages, news articles, scientific reports, and others, the focus of natural language processing techniques to date has been on the automation of methods targeting short documents. We are witnessing however a change: more and more books are becoming available in electronic format, in projects such as the Million Books project (http://www.archive.org/details/millionbooks), the Gutenberg project (http://www.gutenberg.org), or Google Book Search (http://books.google.com). Similarly, a large number of the books published in recent years are often available – for purchase or through libraries – in electronic format. This means that the need for language processing techniques able to handle very large documents such as books is becoming increasingly important. This paper addresses the problem of automatic back-of-the-book index construction. A back-ofthe-book index typically consists of the most important keywords addressed in a book, with pointers to the relevant pages inside the book. The construction of such indexes is one of the few tasks related to publishing that still requires extensive human labor. Although there is a certain degree of computer assistance, consisting of tools that help the professional indexer to organize and edit the index, there are no methods that would allow for a complete or nearly-complete automation. In addition to helping professional indexers in their task, an automatically generated back-of-thebook index can also be useful for the automatic storage and retrieval of a document; as a quick reference to the content of a book for potential readers, researchers, or students (Schutze, 1998); or as a starting point for generating ontologies tailored to the content of the book (Feng et al., 2006). In this paper, we introduce a supervised method for back-of-the-book index construction, using a novel set of linguistically motivated features. The algorithm learns to automatically identify important keywords in a book based on an ensemble of syntactic, discourse-based and information-theoretic properties of the candidate concepts. In experiments performed on a collection of books and their indexes, the method was found to exceed by a large margin the performance of a previously proposed state-ofthe-art supervised system for keyword extraction. 
2 Supervised Back-of-the-Book Indexing We formulate the problem of back-of-the-book indexing as a supervised keyword extraction task, by making a binary yes/no classification decision at the 932 level of each candidate index entry. Starting with a set of candidate entries, the algorithm automatically decides which entries should be added to the backof-the-book index, based on a set of linguistic and information theoretic features. We begin by identifying the set of candidate index entries, followed by the construction of a feature vector for each such candidate entry. In the training data set, these feature vectors are also assigned with a correct label, based on the presence/absence of the entry in the gold standard back-of-the-book index provided with the data. Finally, a machine learning algorithm is applied, which automatically classifies the candidate entries in the test data for their likelihood to belong to the back-of-the-book index. The application of a supervised algorithm requires three components: a data set, which is described next; a set of features, which are described in Section 3; and a machine learning algorithm, which is presented in Section 4. 2.1 Data We use a collection of books and monographs from the eScholarship Editions collection of the University of California Press (UC Press),1 consisting of 289 books, each with a manually constructed backof-the-book index. The average length of the books in this collection is 86053 words, and the average length of the indexes is 820 entries. A collection of 56 books was previously introduced in (Csomai and Mihalcea, 2006); however, that collection is too small to be split in training and test data to support supervised keyword extraction experiments. The UC Press collection was provided in a standardized XML format, following the Text Encoding Initiative (TEI) recommendations, and thus it was relatively easy to process the collection and separate the index from the body of the text. In order to use this corpus as a gold standard collection for automatic index construction, we had to eliminate the inversions, which are typical in human-built indexes. Inversion is a method used by professional indexers by which they break the ordering of the words in each index entry, and list the head first, thereby making it easier to find entries in an alphabetically ordered index. As an example, consider the entry indexing of illustrations, which, following inversion, becomes illustrations, indexing of. To eliminate inversion, we use an approach that gen1http://content.cdlib.org/escholarship/ erates each permutation of the composing words for each index entry, looks up the frequency of that permutation in the book, and then chooses the one with the highest frequency as the correct reconstruction of the entry. In this way, we identify the form of the index entries as appearing in the book, which is the form required for the evaluation of extraction methods. Entries that cannot be found in the book, which were most likely generated by the human indexers, are preserved in their original ordering. For training and evaluation purposes, we used a random split of the collection into 90% training and 10% test. This yields a training corpus of 259 documents and a test data set of 30 documents. 2.2 Candidate Index Entries Every sequence of words in a book represents a potential candidate for an entry in the back-of-the-book index. 
Thus, we extract from the training and the test data sets all the n-grams (up to the length of four), not crossing sentence boundaries. These represent the candidate index entries that will be used in the classification algorithm. The training candidate entries are then labeled as positive or negative, depending on whether the given n-gram was found in the back-of-the-book index associated with the book. Using a n-gram-based method to extract candidate entries has the advantage of providing high coverage, but the unwanted effect of producing an extremely large number of entries. In fact, the resulting set is unmanageably large for any machine learning algorithm. Moreover, the set is extremely unbalanced, with a ratio of positive and negative examples of 1:675, which makes it unsuitable for most machine learning algorithms. In order to address this problem, we had to find ways to reduce the size of the data set, possibly eliminating the training instances that will have the least negative effect on the usability of the data set. The first step to reduce the size of the data set was to use the candidate filtering techniques for unsupervised back-of-the-book index construction that we proposed in (Csomai and Mihalcea, 2007). Namely, we use the commonword and comma filters, which are applied to both the training and the test collections. These filters work by eliminating all the ngrams that begin or end with a common word (we use a list of 300 most frequent English words), as well as those n-grams that cross a comma. This results in a significant reduction in the number of neg933 positive negative total positive:negative ratio Training data All (original) 71,853 48,499,870 48,571,723 1:674.98 Commonword/comma filters 66,349 11,496,661 11,563,010 1:173.27 10% undersampling 66,349 1,148,532 1,214,881 1:17.31 Test data All (original) 7,764 6,157,034 6,164,798 1:793.02 Commonword/comma filters 7,225 1,472,820 1,480,045 1:203.85 Table 1: Number of training and test instances generated from the UC Press data set ative examples, from 48 to 11 million instances, with a loss in terms of positive examples of only 7.6%. The second step is to use a technique for balancing the distribution of the positive and the negative examples in the data sets. There are several methods proposed in the existing literature, focusing on two main solutions: undersampling and oversampling (Weiss and Provost, 2001). Undersampling (Kubat and Matwin, 1997) means the elimination of instances from the majority class (in our case negative examples), while oversampling focuses on increasing the number of instances of the minority class. Aside from the fact that oversampling has hard to predict effects on classifier performance, it also has the additional drawback of increasing the size of the data set, which in our case is undesirable. We thus adopted an undersampling solution, where we randomly select 10% of the negative examples. Evidently, the undersampling is applied only to the training set. Table 1 shows the number of positive and negative entries in the data set, for the different preprocessing and balancing phases. 3 Features An important step in the development of a supervised system is the choice of features used in the learning process. Ideally, any property of a word or a phrase indicating that it could be a good keyword should be represented as a feature and included in the training and test examples. 
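Each such property is computed over the filtered candidate set produced in Section 2.2. As a concrete illustration of that step, a minimal sketch follows; the ten-word stoplist stands in for the 300 most frequent English words, sentences are assumed to be pre-tokenized with commas as separate tokens, and this is a simplified sketch rather than the authors' implementation.

```python
# Illustrative stand-in for the 300 most frequent English words used by the filter.
COMMON_WORDS = {"the", "of", "and", "to", "a", "in", "is", "for", "that", "on"}

def candidate_entries(sentences, max_n=4):
    """Candidate index entries: n-grams up to length max_n that stay within a
    sentence, do not cross a comma, and do not begin or end with a common word."""
    candidates = set()
    for tokens in sentences:                      # one token list per sentence
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                gram = tokens[i:i + n]
                if "," in gram:                   # comma filter
                    continue
                if gram[0].lower() in COMMON_WORDS or gram[-1].lower() in COMMON_WORDS:
                    continue                      # commonword filter
                candidates.add(" ".join(gram))
    return candidates
```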
We use a number of features, including information-theoretic features previously used in unsupervised keyword extraction, as well as a novel set of features based on syntactic and discourse properties of the text, or on information extracted from external knowledge repositories. 3.1 Phraseness and Informativeness We use the phraseness and informativeness features that we previously proposed in (Csomai and Mihalcea, 2007). Phraseness refers to the degree to which a sequence of words can be considered a phrase. We use it as a measure of lexical cohesion of the component terms and treat it as a collocation discovery problem. Informativeness represents the degree to which the keyphrase is representative for the document at hand, and it correlates to the amount of information conveyed to the user. To measure the informativeness of a keyphrase, various methods can be used, some of which were previously proposed in the keyword extraction literature: • tf.idf, which is the traditional information retrieval metric (Salton and Buckley, 1997), employed in most existing keyword extraction applications. We measure inverse document frequency using the article collection of the online encyclopedia Wikipedia. • χ2 independence test, which measures the degree to which two events happen together more often than by chance. In our work, we use the χ2 in a novel way. We measure the informativeness of a keyphrase by finding if a phrase occurs in the document more frequently than it would by chance. The information required for the χ2 independence test can be typically summed up in a contingency table (Manning and Schutze, 1999): count(phrase in count(all other phrases document) in document) count(phrase in other count(all other phrases documents) in all other documents) The independence score is calculated based on the observed (O) and expected (E) counts: χ2 = X i,j (Oi,j −Ei,j)2 Ei,j where i, j are the row and column indices of the 934 contingency table. The O counts are the cells of the table. The E counts are calculated from the marginal probabilities (the sum of the values of a column or a row) converted into proportions by dividing them with the total number of observed events (N): N = O1,1 + O1,2 + O2,1 + O2,2 Then the expected count for seeing the phrase in the document is: E1,1 = O1,1 + O1,2 N × O1,1 + O2,1 N × N To measure the phraseness of a candidate phrase we use a technique based on the χ2 independence test. We measure the independence of the events of seeing the components of the phrase in the text. This method was found to be one of the best performing models in collocation discovery (Pecina and Schlesinger, 2006). For n-grams where N > 2 we apply the χ2 independence test by splitting the phrase in two (e.g. for a 4-gram, we measure the independence of the composing bigrams). 3.2 Discourse Comprehension Features Very few existing keyword extraction methods look beyond word frequency. Except for (Turney and Littman, 2003), who uses pointwise mutual information to improve the coherence of the keyword set, we are not aware of any other work that attempts to use the semantics of the text to extract keywords. The fact that most systems rely heavily on term frequency properties poses serious difficulties, since many index entries appear only once in the document, and thus cannot be identified by features based solely on word counts. For instance, as many as 52% of the index entries in our training data set appeared only once in the books they belong to. 
Moreover, another aspect not typically covered by current keyword extraction methods is the coherence of the keyword set, which can also be addressed by discoursebased properties. In this section, we propose a novel feature for keyword extraction inspired by work on discourse comprehension. We use a construction integration framework, which is the backbone used by many discourse comprehension theories. 3.2.1 Discourse Comprehension Discourse comprehension is a field in cognitive science focusing on the modeling of mental processes associated with reading and understanding text. The most widely accepted theory for discourse comprehension is the construction integration theory (Kintsch, 1998). According to this theory, the elementary units of comprehension are propositions, which are defined as instances of a predicateargument schema. As an example, consider the sentence The hemoglobin carries oxygen, which generates the predicate CARRY[HEMOGLOBIN,OXIGEN]. The processing cycle of the construction integration model processes one proposition at a time, and builds a local representation of the text in the working memory, called the propositional network. During the construction phase, propositions are extracted from a segment of the input text (typically a single sentence) using linguistic features. The propositional network is represented as a graph, with nodes consisting of propositions, and weighted edges representing the semantic relations between them. All the propositions generated from the input text are inserted into the graph, as well as all the propositions stored in the short term memory. The short term memory contains the propositions that compose the representation of the previous few sentences. The second phase of the construction step is the addition of past experiences (or background knowledge), which is stored in the long term memory. This is accomplished by adding new elements to the graph, usually consisting of the set of closely related propositions from the long term memory. After processing a sentence, the integration step establishes the role of each proposition in the meaning representation of the current sentence, through a spreading activation applied on the propositions derived from the original sentence. Once the weights are stabilized, the set of propositions with the highest activation values give the mental representation of the processed sentence. The propositions with the highest activation values are added to the short term memory, the working memory is cleared and the process moves to the next sentence. Figure 3.2.1 shows the memory types used in the construction integration process and the main stages of the process. 3.2.2 Keyword Extraction using Discourse Comprehension The main purpose of the short term memory is to ensure the coherence of the meaning representation across sentences. By keeping the most important propositions in the short term memory, the spreading activation process transfers additional weight to se935 Semantic Memory Short-term Memory Add Associates Add Previous Propositions Decay Integration Working Memory Next Proposition Figure 1: The construction integration process mantically related propositions in the sentences that follow. This also represents a way of alleviating one of the main problems of statistical keyword extraction, namely the sole dependence on term frequency. 
Even if a phrase appears only once, the construction integration process ensures the presence of the phrase in the short term memory as long as it is relevant to the current topic, thus being a good indicator of the phrase importance. The construction integration model is not directly applicable to keyword extraction due to a number of practical difficulties. The first implementation problem was the lack of a propositional parser. We solve this problem by using a shallow parser to extract noun phrase chunks from the original text (Munoz et al., 1999). Second, since spreading activation is a process difficult to control, with several parameters that require fine tuning, we use instead a different graph centrality measure, namely PageRank (Brin and Page, 1998). Finally, to represent the relations inside the long term semantic memory, we use a variant of latent semantic analysis (LSA) (Landauer et al., 1998) as implemented in the InfoMap package,2 trained on a corpus consisting of the British National Corpus, the English Wikipedia, and the books in our collection. To alleviate the data sparsity problem, we also use the pointwise mutual information (PMI) to calculate the relatedness of the phrases based on the book being processed. The final system works by iterating the following steps: (1) Read the text sentence by sentence. For each new sentence, a graph is constructed, consisting of the noun phrase chunks extracted from the original text. This set of nodes is augmented with all the phrases from the short term memory. (2) A 2http://infomap.stanford.edu/ weighted edge is added between all the nodes, based on the semantic relatedness measured between the phrases by using LSA and PMI. We use a weighted combination of these two measures, with a weight of 0.9 assigned to LSA and 0.1 to PMI. For the nodes from the short term memory, we adjust the connection weights to account for memory decay, which is a function of the distance from the last occurrence. We implement decay by decreasing the weight of both the outgoing and the incoming edges by n ∗α, where n is the number of sentences since we last saw the phrase and α = 0.1. (3) Apply PageRank on the resulting graph. (4) Select the 10 highest ranked phrases and place them in the short term memory. (5) Read the next sentence and go back to step (1). Three different features are derived based on the construction integration model: • CI short term memory frequency (CI shortterm), which measures the number of iterations that the phrase remains in the short term memory, which is seen as an indication of the phrase importance. • CI normalized short term memory frequency (CI normalized), which is the same as CI shortterm, except that it is normalized by the frequency of the phrase. Through this normalization, we hope to enhance the effect of the semantic relatedness of the phrase to subsequent sentences. • CI maximum score (CI maxscore), which measures the maximum centrality score the phrase achieves across the entire book. This can be thought of as a measure of the importance of the phrase in a smaller coherent segment of the document. 3.3 Syntactic Features Previous work has pointed out the importance of syntactic features for supervised keyword extraction (Hulth, 2003). The construction integration model described before is already making use of syntactic patterns to some extent, through the use of a shallow parser to identify noun phrases. However, that approach does not cover patterns other than noun phrases. 
To address this limitation, we introduce a new feature that captures the part-of-speech of the words composing a candidate phrase. 936 There are multiple ways to represent such a feature. The simplest is to create a string feature consisting of the concatenation of the part-of-speech tags. However, this representation imposes limitations on the machine learning algorithms that can be used, since many learning systems cannot handle string features. The second solution is to introduce a binary feature for each part-of-speech tag pattern found in the training and the test data sets. In our case this is again unacceptable, given the size of the documents we work with and the large number of syntactic patterns that can be extracted. Instead, we decided on a novel solution which, rather than using the part-of-speech pattern directly, determines the probability of a phrase with a certain tag pattern to be selected as a keyphrase. Formally: P(pattern) = C(pattern, positive) C(pattern) where C(pattern, positive) is the number of distinct phrases having the tag pattern pattern and being selected as keyword, and C(pattern) represents the number of distinct phrases having the tag pattern pattern. This probability is estimated based on the training collection, and is used as a numeric feature. We refer to this feature as part-of-speech pattern. 3.4 Encyclopedic Features Recent work has suggested the use of domain knowledge to improve the accuracy of keyword extraction. This is typically done by consulting a vocabulary of plausible keyphrases, usually in the form of a list of subject headings or a domain specific thesaurus. The use of a vocabulary has the additional benefit of eliminating the extraction of incomplete phrases (e.g. ”States of America”). In fact, (Medelyan and Witten, 2006) reported an 110% Fmeasure improvement in keyword extraction when using a domain-specific thesaurus. In our case, since the books can cover several domains, the construction and use of domain-specific thesauruses is not plausible, as the advantage of such resources is offset by the time it usually takes to build them. Instead, we decided to use encyclopedic information, as a way to ensure high coverage in terms of domains and concepts. We use Wikipedia, which is the largest and the fastest growing encyclopedia available today, and whose structure has the additional benefit of being particularly useful for the task of keyword extraction. Wikipedia includes a rich set of links that connect important phrases in an article to their corresponding articles. These links are added manually by the Wikipedia contributors, and follow the general guidelines of annotation provided by Wikipedia. The guidelines coincide with the goals of keyword extraction, and thus the Wikipedia articles and their link annotations can be treated as a vast keyword annotated corpus. We make use of the Wikipedia annotations in two ways. First, if a phrase is used as the title of a Wikipedia article, or as the anchor text in a link, this is a good indicator that the given phrase is well formed. Second, we can also estimate the probability of a term W to be selected as a keyword in a new document by counting the number of documents where the term was already selected as a keyword (count(Dkey)) divided by the total number of documents where the term appeared (count(DW )). These counts are collected from the entire set of Wikipedia articles. 
P(keyword|W) ≈count(Dkey) count(DW ) (1) This probability can be interpreted as “the more often a term was selected as a keyword among its total number of occurrences, the more likely it is that it will be selected again.” In the following, we will refer to this feature as Wikipedia keyphraseness. 3.5 Other Features In addition to the features described before, we add several other features frequently used in keyword extraction: the frequency of the phrase inside the book (term frequency (tf)); the number of documents that include the phrase (document frequency (df)); a combination of the two (tf.idf); the within-document frequency, which divides a book into ten equallysized segments, and counts the number of segments that include the phrase (within document frequency); the length of the phrase (length of phrase); and finally a binary feature indicating whether the given phrase is a named entity, according to a simple heuristic based on word capitalization. 4 Experiments and Evaluation We integrate the features described in the previous section in a machine learning framework. The system is evaluated on the data set described in Section 2.1, consisting of 289 books, randomly split into 937 90% training (259 books) and 10% test (30 books). We experiment with three learning algorithms, selected for the diversity of their learning strategy: multilayer perceptron, SVM, and decision trees. For all three algorithms, we use their implementation as available in the Weka package. For evaluation, we use the standard information retrieval metrics: precision, recall, and F-measure. We use two different mechanisms for selecting the number of entries in the index. In the first evaluation (ratio-based), we use a fixed ratio of 0.45% from the number of words in the text; for instance, if a book has 100,000 words, the index will consist of 450 entries. This number was estimated based on previous observations regarding the typical size of a back-ofthe-book index (Csomai and Mihalcea, 2006). In order to match the required number of entries, we sort all the candidates in reversed order of the confidence score assigned by the machine learning algorithm, and consequently select the top entries in this ranking. In the second evaluation (decision-based), we allow the machine learning algorithm to decide on the number of keywords to extract. Thus, in this evaluation, all the candidates labeled as keywords by the learning algorithm will be added to the index. Note that all the evaluations are run using a training data set with 10% undersampling of the negative examples, as described before. Table 2 shows the results of the evaluation. As seen in the table, the multilayer perceptron and the decision tree provide the best results, for an overall average F-measure of 27%. Interestingly, the results obtained when the number of keywords is automatically selected by the learning method (decisionbased) are comparable to those when the number of keywords is selected a-priori (ratio-based), indicating the ability of the machine learning algorithm to correctly identify the correct keywords. Additionally, we also ran an experiment to determine the amount of training data required by the system. While the learning curve continues to grow with additional amounts of data, the steepest part of the curve is observed for up to 10% of the training data, which indicates that a relatively small amount of data (about 25 books) is enough to sustain the system. 
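To make the ratio-based evaluation concrete, a minimal sketch of the selection step is given below; function and variable names are illustrative. Candidates are ranked by the confidence score assigned by the learning algorithm, and the top 0.45% of the book's word count are kept.

```python
def ratio_based_index(candidates, confidence, book_word_count, ratio=0.0045):
    """Rank candidates by classifier confidence and keep the top
    ratio * book_word_count entries (0.45% of the book length)."""
    size = int(round(ratio * book_word_count))
    ranked = sorted(candidates, key=lambda c: confidence[c], reverse=True)
    return ranked[:size]

# A 100,000-word book therefore receives an index of roughly 450 entries.
```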
It is worth noting that the task of creating backof-the-book indexes is highly subjective. In order to put the performance figures in perspective, one should also look at the inter-annotator agreement between human indexers as an upper bound of performance. Although we are not aware of any comprehensive studies for inter-annotator agreement on back-of-the-book indexing, we can look at the consistency studies that have been carried out on the MEDLINE corpus (Funk and Reid, 1983), where an inter-annotator agreement of 48% was found on an indexing task using a domain-specific controlled vocabulary of subject headings. 4.1 Comparison with Other Systems We compare the performance of our system with two other methods for keyword extraction. One is the tf.idf method, traditionally used in information retrieval as a mechanism to assign words in a text with a weight reflecting their importance. This tf.idf baseline system uses the same candidate extraction and filtering techniques as our supervised systems. The other baseline is the KEA keyword extraction system (Frank et al., 1999), a state-of-the-art algorithm for supervised keyword extraction. Very briefly, KEA is a supervised system that uses a Na¨ıve Bayes learning algorithm and several features, including information theoretic features such as tf.idf and positional features reflecting the position of the words with respect to the beginning of the text. The KEA system was trained on the same training data set as used in our experiments. Table 3 shows the performance obtained by these methods on the test data set. Since none of these methods have the ability to automatically determine the number of keywords to be extracted, the evaluation of these methods is done under the ratio-based setting, and thus for each method the top 0.45% ranked keywords are extracted. Algorithm P R F tf.idf 8.09 8.63 8.35 KEA 11.18 11.48 11.32 Table 3: Baseline systems 4.2 Performance of Individual Features We also carried out experiments to determine the role played by each feature, by using the information gain weight as assigned by the learning algorithm. Note that ablation studies are not appropriate in our case, since the features are not orthogonal (e.g., there is high redundancy between the construction integration and the informativeness features), and thus we cannot entirely eliminate a feature from the system. 938 ratio-based decision-based Algorithm P R F P R F Multilayer perceptron 27.98 27.77 27.87 23.93 31.98 27.38 Decision tree 27.06 27.13 27.09 22.75 34.12 27.30 SVM 20.94 20.35 20.64 21.76 30.27 25.32 Table 2: Evaluation results Feature Weight part-of-speech pattern 0.1935 CI shortterm 0.1744 Wikipedia keyphraseness 0.1731 CI maxscore 0.1689 CI shortterm normalized 0.1379 ChiInformativeness 0.1122 document frequency (df) 0.1031 tf.idf 0.0870 ChiPhraseness 0.0660 length of phrase 0.0416 named entity heuristic 0.0279 within document frequency 0.0227 term frequency (tf) 0.0209 Table 4: Information gain feature weight Table 4 shows the weight associated with each feature. Perhaps not surprisingly, the features with the highest weight are the linguistically motivated features, including syntactic patterns and the construction integration features. The Wikipedia keyphraseness also has a high score. The smallest weights belong to the information theoretic features, including term and document frequency. 
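For readers unfamiliar with the weighting scheme behind Table 4, a minimal sketch of an information gain computation follows; it assumes a discretized feature and is a generic illustration, not the exact procedure used by the learning package.

```python
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(label; feature) = H(label) - H(label | feature)."""
    total = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == value]
        conditional += (len(subset) / total) * entropy(subset)
    return entropy(labels) - conditional
```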
5 Related Work With a few exceptions (Schutze, 1998; Csomai and Mihalcea, 2007), very little work has been carried out to date on methods for automatic back-of-thebook index construction. The task that is closest to ours is perhaps keyword extraction, which targets the identification of the most important words or phrases inside a document. The state-of-the-art in keyword extraction is currently represented by supervised learning methods, where a system is trained to recognize keywords in a text, based on lexical and syntactic features. This approach was first suggested in (Turney, 1999), where parameterized heuristic rules are combined with a genetic algorithm into a system for keyphrase extraction (GenEx) that automatically identifies keywords in a document. A different learning algorithm was used in (Frank et al., 1999), where a Naive Bayes learning scheme is applied on the document collection, with improved results observed on the same data set as used in (Turney, 1999). Neither Turney nor Frank report on the recall of their systems, but only on precision: a 29.0% precision is achieved with GenEx (Turney, 1999) for five keyphrases extracted per document, and 18.3% precision achieved with Kea (Frank et al., 1999) for fifteen keyphrases per document. Finally, in recent work, (Hulth, 2003) proposes a system for keyword extraction from abstracts that uses supervised learning with lexical and syntactic features, which proved to improve significantly over previously published results. 6 Conclusions and Future Work In this paper, we introduced a supervised method for back-of-the-book indexing which relies on a novel set of features, including features based on discourse comprehension, syntactic patterns, and information drawn from an online encyclopedia. According to an information gain measure of feature importance, the new features performed significantly better than the traditional frequency-based techniques, leading to a system with an F-measure of 27%. This represents an improvement of 140% with respect to a state-of-the-art supervised method for keyword extraction. Our system proved to be successful both in ranking the phrases in terms of their suitability as index entries, as well as in determining the optimal number of entries to be included in the index. Future work will focus on developing methodologies for computer-assisted back-of-the-book indexing, as well as on the use of the automatically extracted indexes in improving the browsing of digital libraries. Acknowledgments We are grateful to Kirk Hastings from the California Digital Library for his help in obtaining the UC Press corpus. This research has been partially supported by a grant from Google Inc. and a grant from the Texas Advanced Research Program (#003594). 939 References S. Brin and L. Page. 1998. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1–7). A. Csomai and R. Mihalcea. 2006. Creating a testbed for the evaluation of automatically generated back-ofthe-book indexes. In Proceedings of the International Conference on Computational Linguistics and Intelligent Text Processing, pages 19–25, Mexico City. A. Csomai and R. Mihalcea. 2007. Investigations in unsupervised back-of-the-book indexing. In Proceedings of the Florida Artificial Intelligence Research Society, Key West. D. Feng, J. Kim, E. Shaw, and E. Hovy. 2006. Towards modeling threaded discussions through ontologybased analysis. In Proceedings of National Conference on Artificial Intelligence. E. Frank, G. W. 
Paynter, I. H. Witten, C. Gutwin, and C. G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceedings of the 16th International Joint Conference on Artificial Intelligence. M. E. Funk and C.A. Reid. 1983. Indexing consistency in medline. Bulletin of the Medical Library Association, 71(2). A. Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Japan, August. W. Kintsch. 1998. Comprehension: A paradigm for cognition. Cambridge Uniersity Press. M. Kubat and S. Matwin. 1997. Addressing the curse of imbalanced training sets: one-sided selection. In Proceedings of the 14th International Conference on Machine Learning. T. K. Landauer, P. Foltz, and D. Laham. 1998. Introduction to latent semantic analysis. Discourse Processes, 25. C. Manning and H. Schutze. 1999. Foundations of Natural Language Processing. MIT Press. O. Medelyan and I. H. Witten. 2006. Thesaurus based automatic keyphrase indexing. In Proceedings of the Joint Conference on Digital Libraries. M. Munoz, V. Punyakanok, D. Roth, and D. Zimak. 1999. A learning approach to shallow parsing. In Proceedings of the Conference on Empirical Methods for Natural Language Processing. P. Pecina and P. Schlesinger. 2006. Combining association measures for collocation extraction. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 651–658, Sydney, Australia. G. Salton and C. Buckley. 1997. Term weighting approaches in automatic text retrieval. In Readings in Information Retrieval. Morgan Kaufmann Publishers, San Francisco, CA. H. Schutze. 1998. The hypertext concordance: a better back-of-the-book index. In Proceedings of Computerm, pages 101–104. P. Turney and M. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 4(21):315–346. P. Turney. 1999. Learning to extract keyphrases from text. Technical report, National Research Council, Institute for Information Technology. G. Weiss and F. Provost. 2001. The effect of class distribution on classifier learning. Technical Report ML-TR 43, Rutgers University. 940
2008
106
Proceedings of ACL-08: HLT, pages 941–949, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Resolving Personal Names in Email Using Context Expansion Tamer Elsayed,∗Douglas W. Oard,† and Galileo Namata∗ Human Language Technology Center of Excellence and UMIACS Laboratory for Computational Linguistics and Information Processing (CLIP) University of Maryland, College Park, MD 20742 {telsayed, oard, gnamata}@umd.edu Abstract This paper describes a computational approach to resolving the true referent of a named mention of a person in the body of an email. A generative model of mention generation is used to guide mention resolution. Results on three relatively small collections indicate that the accuracy of this approach compares favorably to the best known techniques, and results on the full CMU Enron collection indicate that it scales well to larger collections. 1 Introduction The increasing prevalence of informal text from which a dialog structure can be reconstructed (e.g., email or instant messaging), raises new challenges if we are to help users make sense of this cacophony. Large collections offer greater scope for assembling evidence to help with that task, but they pose additional challenges as well. With well over 100,000 unique email addresses in the CMU version of the Enron collection (Klimt and Yang, 2004), common names (e.g., John) might easily refer to any one of several hundred people. In this paper, we associate named mentions in unstructured text (i.e., the body of an email and/or the subject line) to modeled identities. We see at least two direct applications for this work: (1) helping searchers who are unfamiliar with the contents of an email collection (e.g., historians or lawyers) better understand the context of emails that they find, and (2) augmenting more typical social networks (based on senders and recipients) with additional links based on references found in unstructured text. Most approaches to resolving identity can be decomposed into four sub-problems: (1) finding a reference that requires resolution, (2) identifying candidates, (3) assembling evidence, and (4) choosing ∗Department of Computer Science †College of Information Studies among the candidates based on the evidence. For the work reported in this paper, we rely on the user to designate references requiring resolution (which we model as a predetermined set of mention-queries for which the correct referent is known). Candidate identification is a computational expedient that permits the evidence assembly effort to be efficiently focused; we use only simple techniques for that task. Our principal contributions are the approaches we take to evidence generation (leveraging three ways of linking to other emails where evidence might be found: reply chains, social interaction, and topical similarity) and our approach to choosing among candidates (based on a generative model of reference production). We evaluate the effectiveness of our approach on four collections, three of which have previously reported results for comparison, and one that is considerably larger than the others. The remainder of this paper is as follows. Section 2 surveys prior work. Section 3 then describes our approach to modeling identity and ranking candidates. Section 4 presents results, and Section 5 concludes. 
2 Related Work The problem of identity resolution in email is a special case of the more general problem referred to as “Entity Resolution.” Entity resolution is generically defined as a process of determining the mapping from references (e.g., names, phrases) observed in data to real-world entities (e.g., persons, locations). In our case, the problem is to map mentions in emails to the identities of the individuals being referred to. Various approaches have been proposed for entity resolution. In structured data (e.g., databases), approaches have included minimizing the number of “matching” and “merging” operations (Benjelloun et al., 2006), using global relational information(Malin, 2005; Bhattacharya and Getoor, 2007; Reuther, 2006) and using a probabilistic generative 941 model (Bhattacharya and Getoor, 2006). None of these approaches, however, both make use of conversational, topical, and time aspects, shown important in resolving personal names (Reuther, 2006), and take into account global relational information. Similarly, approaches in unstructured data (e.g., text) have involved using clustering techniques over biographical facts (Mann and Yarowsky, 2003), within-document resolution (Blume, 2005), and discriminative unsupervised generative models (Li et al., 2005). These too are insufficient for our problem since they suffer from inability scale or to handle early negotiation. Specific to the problem of resolving mentions in email collections, Abadi (Abadi, 2003) used email orders from an online retailer to resolve product mentions in orders and Holzer et al. (Holzer et al., 2005) used the Web to acquire information about individuals mentioned in headers of an email collection. Our work is focused on resolving personal name references in the full email including the message body; a problem first explored by Diehl et al. (Diehl et al., 2006) using header-based traffic analysis techniques. Minkov et al.(Minkov et al., 2006) studied the same problem using a lazy graph walk based on both headers and content. Those two recent studies reported results on different test collections, however, making direct comparisons difficult. We have therefore adopted their test collections in order to establish a common point of reference. 3 Mention Resolution Approach The problem we are interested in is the resolution of a personal-name mention (i.e., a named reference to a person) m, in a specific email em in the given collection of emails E, to its true referent. We assume that the user will designate such mention. This can be formulated as a known-item retrieval problem (Allen, 1989) since there is always only one right answer. Our goal is to develop a system that provides a list of potential candidates, ranked according to how strongly the system believes that a candidate is the true referent meant by the email author. In this paper, we propose a probabilistic approach that ranks the candidates based on the estimated probability of having been mentioned. Formally, we seek to estimate the probability p(c|m) that a potential candidate c is the one referred to by the given mention m, over all candidates C. We define a mention m as a tuple < lm, em >, where lm is the “literal” string of characters that represents m and em is the email where m is observed.1 We assume that m can be resolved to a distinguishable participant for whom at least one email address is present in the collection.2 The probabilistic approach we propose is motivated by a generative scenario of mentioning people in email. 
The scenario begins with the author of the email em, intending to refer to a person in that email. To do that s/he will: 1. Select a person c to whom s/he will refer 2. Select an appropriate context xk to mention c 3. Select a specific lexical reference lm to refer to c given the context xk. For example, suppose “John” is sending an email to “Steve” and wants to mention a common friend “Edward.” “John” knows that he and Steve know 2 people named Edward, one is a friend of both known by “Ed” and the other is his soccer trainer. If “John” would like to talk about the former, he would use “Ed” but he would likely use “Edward” plus some terms (e.g., “soccer”, “team”, etc) for the latter. “John” relies on the social context, or the topical context, for “Steve” to disambiguate the mention. The steps of this scenario impose a certain structure to our solution. First, we need to have a representational model for each candidate identity. Second, we need to reconstruct the context of the queried mention. Third, it requires a computational model of identity that supports reasoning about identities. Finally, it requires a resolution technique that leverages both the identity models and the context to rank the potential candidates. In this section, we will present our resolution approach within that structure. We first discuss how to build both representational and computational models of identity in section 3.1. Next, we introduce a definition of the contextual space and how we can reconstruct it in 1The exact position in em where lm is observed should also be included in the definition, but we ignore it assuming that all matched literal mentions in one email refer to the same identity. 2Resolving mentions that refer to non-participants is outside the scope of this paper. 942 section 3.2. Finally, we link those pieces together by the resolution algorithm in section 3.3. 3.1 Computational Model of Identity Representation: In a collection of emails, individuals often use different email addresses, multiple forms of their proper names, and different nicknames. In order to track references to a person over a large collection, we need to capture as many as possible of these referential attributes in one representation. We extend our simple representation of identity proposed in (Elsayed and Oard, 2006) where an identity is represented by a set of pairwise co-occurrence of referential attributes (i.e., cooccurrence “associations”), and each extracted association has a frequency of occurrence. The attributes are extracted from the headers and salutation and signature lines. For example, an “addressnickname” association < a, n > is inferred whenever a nickname n is usually observed in signature lines of emails sent from email address a. Three types of referential attributes were identified in the original representation: email addresses, names, and nicknames. We add usernames as well to account for the absence of any other type of names. Names, nicknames, and usernames are distinguishable based on where each is extracted: email addresses and names from headers, nicknames from salutation and signature lines, and usernames from email addresses. Since (except in rare cases) an email address is bound to one personal identity, the model leverages email addresses as the basis by mandating that at least one email address must appear in any observed association. As an off-line preprocessing step, we extract the referential attributes from the whole collection and build the identity models. 
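As a minimal sketch of this representational model (class, method, and example names are illustrative, not the authors' implementation), each identity can be stored as a frequency map over address-attribute associations:

```python
from collections import defaultdict

class IdentityModel:
    """Pairwise co-occurrence associations between an email address and another
    referential attribute (name, nickname, or username), with frequencies.
    Anchoring every association on an address enforces the constraint that at
    least one email address appears in each observed association."""

    def __init__(self):
        self.assoc = defaultdict(int)     # (address, attribute, attr_type) -> count

    def add(self, address, attribute, attr_type):
        # attr_type is one of "name", "nickname", "username"
        self.assoc[(address.lower(), attribute.lower(), attr_type)] += 1

# e.g., an address-nickname association inferred from a signature line
# (the address is hypothetical):
# model.add("[email protected]", "Ed", "nickname")
```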
The first step in the resolution process is to determine the list of identity models that are viable candidates as the true referent. For the experiments reported in this paper, any identity model with a first name or nickname that exactly matches the mention is considered a candidate. Labeling Observed Names: For the purpose of resolving name mentions, it is necessary to compute the probability p(l|c) that a person c is referred to by a given “literal” mention l. Intuitively, that probability can be estimated based on the observed “nametype” of l and how often that association occurs in the represented model. We define T as the set of 3 different types of single-token name-types: first, last, and nickname. We did not handle middle names and initials, just for simplicity. Names that are extracted from salutation and signature lines are labeled as nicknames whereas full names extracted from headers are first normalized to “First Last” form and then each single token is labeled based on its relative position as being the first or last name. Usernames are treated similarly to full names if they have more than one token, otherwise they are ignored. Note that the same single-token name may appear as a first name and a nickname. Figure 1: A computational model of identity. Reasoning: Having tokenized and labeled all names, we propose to model the association of a single-token name l of type t to an identity c by a simple 3-node Bayesian network illustrated in Figure 1. In the network, the observed mention l is distributed conditionally on both the identity c and the name-type t. p(c) is the prior probability of observing the identity c in the collection. p(t|c) is the probability that a name-type t is used to refer to c. p(l|t, c) is the probability of referring to c by l of type t. These probabilities can be inferred from the representational model as follows: p(c) = |assoc(c)| P c′∈C |assoc(c ′)| p(t|c) = freq(t, c) P t′∈T freq(t ′, c) p(l|t, c) = freq(l, t, c) P l′∈assoc(c) freq(l ′, t, c) where assoc(c) is the set of observed associations of referential attributes in the represented model c. The probability of observing a mention l given that it belongs to an identity c, without assuming a specific token type, can then be inferred as follows: p(l|c) = X t∈T p(t|c) p(l|t, c) In the case of a multi-token names (e.g., John Smith), we assume that the first is either a first name 943 or nickname and the last is a last name, and compute it accordingly as follows: p(l1l2|c) = { X t∈{f,n} p(t|c) p(l1|t, c)} · p(l2|last, c) where f and n above denotes first name and nickname respectively. Email addresses are also handled, but in a different way. Since we assume each of them uniquely identifies the identity, all email addresses for one identity are mapped to just one of them, which then has half of the probability mass (because it appears in every extracted co-occurrence association). Our computational model of identity can be thought of as a language model over a set of personal references and thus it is important to account for unobserved references. If we know that a specific first name often has a common nickname (by a dictionary of commonly used first to nickname mappings (e.g., Robert to Bob)), but this nickname was not observed in the corpus, we will need to apply smoothing. We achieve that by assuming the nickname would have been observed n times where n is some fraction (0.75 in our experiments) of the frequency of the observed name. 
We repeat that for each unobserved nickname and then treat them as if they were actually observed. 3.2 Contextual Space Figure 2: Contextual Space It is obvious that understanding the context of an ambiguous mention will help with resolving it. Fortunately, the nature of email as a conversational medium and the link-relationships between emails and people over time can reveal clues that can be exploited to partially reconstruct that context. We define the contextual space X(m) of a mention m as a mixture of 4 types of contexts with λk as the mixing coefficient of context xk. The four contexts (illustrated in Figure 2) are: (1) Local Context: the email em where the named person is mentioned. (2) Conversational Context: emails in the broader discussion that includes em, typically the thread that contains it. (3) Social Context: discussions that some or all of the participants (sender and receivers) of em joined or initiated at around the time of the mention-email. These might bear some otherwise-undetected relationship to the mention-email. (4) Topical Context: discussions that are topically similar to the mention-discussion that took place at around the time of em, regardless of whether the discussions share any common participants. These generally represent a growing (although not strictly nested) contextual space around the queried mention. We assume that all mentions in an email share the same contextual space. Therefore, we can treat the context of a mention as the context of its email. However, each email in the collection has its own contextual space that could overlap with another email’s space. 3.2.1 Formal Definition We define K as the set of the 4 types of contexts. A context xk is represented by a probability distribution over all emails in the collection. An email ej belongs to the kth context of another email ei with probability p(ej|xk(ei)). How we actually represent each context and estimate the distribution depends upon the type of the context. We explain that in detail in section 3.2.2. 3.2.2 Context Reconstruction In this section, we describe how each context is constructed. Local Context: Since this is simply em, all of the probability mass is assigned to it. Conversational Context: Threads (i.e., reply chains) are imperfect approximations of focused discussions, since people sometimes switch topics within a thread (and indeed sometimes within the same email). We nonetheless expect threads to exhibit a useful degree of focus and we have therefore adopted them as a computational representation of a discussion in our experiments. To reconstruct threads in the collection, we adopted the technique introduced in (Lewis and Knowles, 1997). Thread 944 reconstruction results in a unique tree containing the mention-email. Although we can distinguish between different paths or subtrees of that tree, we elected to have a uniform distribution over all emails in the same thread. This also applies to threads retrieved in the social and topical contexts as well. Social Context: Discussions that share common participants may also be useful, though we expect their utility to decay somewhat with time. To reconstruct that context, we temporally rank emails that share at least one participant with em in a time period around em and then expand each by its thread (with duplicate removal). Emails in each thread are then each assigned a weight that equals the reciprocal of its thread rank. We do that separately for emails that temporally precede or follow em. 
Finally, weights are normalized to produce one distribution for the whole social context. Topical Context: Identifying topically-similar content is a traditional query-by-example problem that has been well researched in, for example, the TREC routing task (Lewis, 1996) and the Topic Detection and Tracking evaluations (Allan, 2002). Individual emails may be quite terse, but we can exploit the conversational structure to obtain topically related text. In our experiments, we tracked back to the root of the thread in which em was found and used the subject line and the body text of that root email as a query to Lucene3 to identify topically-similar emails. Terms found in the subject line are doubled in the query to emphasize what is sometimes a concise description of the original topic. Subsequent processing is then similar to that used for the social context, except that the emails are first ranked by their topical, rather than temporal, similarity. The approaches we adopted to reconstruct the social and topical contexts were chosen for their relative simplicity, but there are clearly more sophisticated alternatives. For example, topic modeling techniques (McCallum et al., 2005) could be leveraged in the reconstruction of the topical context. 3.3 Mention Resolution Given a specific mention m and the set of identity models C, our goal now is to compute p(c|m) for each candidate c and rank them accordingly. 3http://lucene.apache.org 3.3.1 Context-Free Mention Resolution If we resolve m out of its context, then we can compute p(c|m) by applying Bayes’ rule as follows: p(c|m) ≈p(c|lm) = p(lm|c) p(c) P c′∈C p(lm|c ′) p(c ′) All the terms above are estimated as discussed earlier in section 3.1. We call this approach “backoff” since it can be used as a fall-back strategy. It is considered the baseline approach in our experiments. 3.3.2 Contextual Mention Resolution We now discuss the more realistic situation in which we use the context to resolve m. By expanding the mention with its context, we get p(c|m) = p(c|lm, X(em)) We then apply Bayes’ rule to get p(c|lm, X(em)) = p(c, lm, X(em)) p(lm, X(em)) where p(lm, X(em)) is the probability of observing lm in the context. We can ignore this probability since it is constant across all candidates in our ranking. We now restrict our focus to the numerator p(c, lm, X(em)), that is the probability that the sender chose to refer to c by lm in the contextual space. As we discussed in section 3.2, X is defined as a mixture of contexts therefore we can further expand it as follows: p(c, lm, X(em)) = X k λk p(c, lm, xk(em)) Following the intuitive generative scenario we introduced earlier, the context-specific probability can be decomposed as follows: p(c, lm, xk(em)) = p(c) ∗p(xk(em)|c) ∗p(lm|xk(em), c) where p(c) is the probability of selecting a candidate c, p(xk(em)|c) is the probability of selecting xk as an appropriate context to mention c, and p(lm|xk(em), c) is the probability of choosing to mention c by lm given that xk is the appropriate context. Choosing person to mention: p(c) can be estimated as discussed in section 3.1. Choosing appropriate context: By applying Bayes’ rule to compute p(xk(em)|c) we get p(xk(em)|c) = p(c|xk(em)) p(xk(em)) p(c) 945 p(xk(em)) is the probability of choosing xk to generally mention people. In our experiments, we assumed a uniform distribution over all contexts. p(c|xk(em)) is the probability of mentioning c in xk(em). 
Given that the context is defined as a distribution over emails, this can be expanded to p(c|xk(em)) = X ei∈E p(ei|xk(em) p(c|ei)) where p(c|ei) is the probability that c is mentioned in the email ei. This, in turn, can be estimated using the probability of referring to c by at least one unique reference observed in that email. By assuming that all lexical matches in the same email refer to the same person, and that all lexically-unique references are statistically independent, we can compute that probability as follows: p(c|ei) = 1 −p(c is not mentioned in ei) = 1 − Y m′∈M(ei) (1 −p(c|m′)) where p(c|m ′) is the probability that c is the true referent of m ′. This is the same general problem of resolving mentions, but now concerning a related mention m ′ found in the context of m. To handle this, there are two alternative solutions: (1) break the cycle and compute context-free resolution probabilities for those related mentions, or (2) jointly resolve all mentions. In this paper, we will only consider the first, leaving joint resolution for future work. Choosing a name-mention: To estimate p(lm|xk(em), c), we suggest that the email author would choose either to select a reference (or a modified version of a reference) that was previously mentioned in the context or just ignore the context. Hence, we estimate that probability as follows: p(lm|xk(em), c) = α p(lm ∈xk(em)|c) +(1 −α) p(lm|c) where α ∈[0, 1] is a mixing parameter (set at 0.9 in our experiments), and p(lm|c) is estimated as in section 3.1. p(lm ∈xk(em)|c) can be estimated as follows: p(lm ∈xk(em)|c) = X m′∈xk p(lm|lm ′ )p(lm ′ |xk) p(c|lm ′ ) where p(lm|lm ′ ) is the probability of modifying lm ′ into lm. We assume all possible mentions of c are equally similar to m and estimate p(lm|lm ′ ) by 1 |possible mentions of c|. p(lm ′ |xk) is the probability of observing lm ′ in xk, which we estimate by its relative frequency in that context. Finally, p(c|lm′) is again a mention resolution problem concerning the reference ri which can be resolved as shown earlier. The Aho-Corasick linear-time algorithm (Aho and Corasick, 1975) is used to find mentions of names, using a corpus-based dictionary that includes all names, nicknames, and email addresses extracted in the preprocessing step. 4 Experimental Evaluation We evaluate our mention resolution approach using four test collections, all are based on the CMU version of the Enron collection; each was created by selecting a subset of that collection, selecting a set of query-mentions within emails from that subset, and creating an answer key in which each query-mention is associated with a single email address. The first two test collections were created by Minkov et al (Minkov et al., 2006). These test collections correspond to two email accounts, “sagere” (the “Sager” collection) and “shapiro-r” (the “Shapiro” collection). Their mention-queries and answer keys were generated automatically by identifying name mentions that correspond uniquely to individuals referenced in the cc header, and eliminating that cc entry from the header. The third test collection, which we call the “Enron-subset” is an extended version of the test collection created by Diehl at al (Diehl et al., 2006). Emails from all top-level folders were included in the collection, but only those that were both sent by and received by at least one email address of the form <name1>.<name2>@enron.com were retained. 
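As a brief illustration of the noisy-OR estimate of p(c|e_i) introduced in section 3.3.2 above, the following sketch computes that quantity; p_c_given_mention is a placeholder for the (context-free) resolution probability of a related mention m', and the independence of lexically-unique references is assumed as in the text.

def p_candidate_in_email(c, unique_mentions, p_c_given_mention):
    # p(c | e_i) = 1 - prod over m' in M(e_i) of (1 - p(c | m'))
    prob_not_mentioned = 1.0
    for m_prime in unique_mentions:
        prob_not_mentioned *= 1.0 - p_c_given_mention(c, m_prime)
    return 1.0 - prob_not_mentioned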
A set of 78 mention-queries were manually selected and manually associated with the email address of the true referent by the third author using an interactive search system developed specifically to support that task. The set of queries was limited to those that resolve to an address of the form <name1>.<name2>@enron.com. Names found in salutation or signature lines or that exactly match <name1> or <name2> of any of the email participants were not selected as query-mentions. Those 78 queries include the 54 used by Diehl et al. 946 Table 1: Test collections used in the experiments. Test Coll. Emails IDs Queries Candidates Sager 1,628 627 51 4 (1-11) Shapiro 974 855 49 8 (1-21) Enron-sub 54,018 27,340 78 152 (1-489) Enron-all 248,451 123,783 78 518 (3-1785) For our fourth test collection (“Enron-all”), we used the same 78 mention-queries and the answer key from the Enron-subset collection, but we used the full CMU version of the Enron collection (with duplicates removed). We use this collection to assess the scalability of our techniques. Some descriptive statistics for each test collection are shown in Table 1. The Sager and Shapiro collections are typical of personal collections, while the other two represent organizational collections. These two types of collections differ markedly in the number of known identities and the candidate list sizes as shown in the table (the candidate list size is presented as an average over that collection’s mention-queries and as the full range of values). 4.1 Evaluation Measures There are two commonly used single-valued evaluation measures for “known item”-retrieval tasks. The “Success @ 1” measure characterizes the accuracy of one-best selection, computed as the mean across queries of the precision at the top rank for each query. For a single-valued figure of merit that considers every list position, we use “Mean Reciprocal Rank” (MRR), computed as the mean across queries of the inverse of the rank at which the correct referent is found. 4.2 Results There are four basic questions which we address in our experimental evaluation: (1) How does our approach perform compared to other approaches?, (2) How is it affected by the size of the collection and by increasing the time period?, (3) Which context makes the most important contribution to the resolution task? and (4) Does the mixture help? In our experiments, we set the mixing coefficients λk and the context priors p(xk) to a uniform distribution over all reconstructed contexts. To compare our system performance with results Table 2: Accuracy results with different time periods. Period MRR Success @ 1 (days) Prob. Minkov Prob. Minkov 10 0.899 0.889 0.843 0.804 Sager 100 0.911 0.889 0.863 0.804 200 0.911 0.889 0.863 0.804 10 0.913 0.879 0.857 0.779 Shapiro 100 0.910 0.879 0.837 0.779 200 0.911 0.837 0.878 0.779 10 0.878 0.821 Enron-sub 100 0.911 0.846 200 0.911 0.846 10 0.890 0.821 Enron-all 100 0.888 0.821 200 0.888 0.821 previously reported, we experimented with different (symmetric) time periods for selecting threads in the social and topical contexts. Three representative time periods, in days, were arbitrarily chosen: 10 (i.e., +/- 5) days, 100 (i.e., +/- 50) days, and 200 (i.e., +/- 100) days. In each case, the mention-email defines the center of this period. A summary of the our results (denoted by “Prob.”) are shown in Table 2 with the best results for each test collection highlighted in bold. 
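Both evaluation measures are straightforward to compute. The sketch below assumes rankings maps each mention-query to its ranked list of candidate email addresses and answer_key maps it to the correct address; a query whose correct referent is absent from the list is simply counted as a miss, which is an assumption about boundary cases rather than something stated in the text.

def success_at_1(rankings, answer_key):
    # Precision at the top rank, averaged over mention-queries.
    hits = sum(1 for q, ranked in rankings.items()
               if ranked and ranked[0] == answer_key[q])
    return hits / len(rankings)

def mean_reciprocal_rank(rankings, answer_key):
    # Mean over queries of 1 / (1-based rank of the correct referent).
    total = 0.0
    for q, ranked in rankings.items():
        if answer_key[q] in ranked:
            total += 1.0 / (ranked.index(answer_key[q]) + 1)
    return total / len(rankings)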
The table also includes the results reported in Minkov et al (Minkov et al., 2006) for the small collections for comparison purposes.4 Each score for our system was the best over all combinations of contexts for these collections and time periods. Given these scores, our results compare favorably with the previously reported results for both Sager and Shapiro collections. Another notable thing about our results is that they seem to be good enough for practical applications. Specifically, our one-best selection (over all tried conditions) is correct at least 82% of the time over all collections, including the largest one. Of course, the Enron-focused selection of mentionqueries in every case is an important caveat on these results; we do not yet know how well our techniques will hold up with less evidence, as might be the case for mentions of people from outside Enron. It is encouraging that testing on the largest col4For the “Enron-subset” collection, we do not know which 54 mention-queries Diehl et al used in (Diehl et al., 2006) 947 lection (with all unrelated and thus noisy data) did not hurt the effectiveness much. For the three different time periods we tried, there was no systematic effect. Figure 3: Individual contexts, period set to 100 days. Individual Contexts: Our choice of contexts was motivated by intuition rather than experiments, so we also took this opportunity to characterize the contribution of each context to the results. We did that by setting some of the context mixingcoefficients to zero and leaving the others equallyweighted. Figure 3 shows the MRR achieved with each context. In that figure, the “backoff” curve indicates how well the simple context-free resolution would do. The difference between the two smallest and the two largest collections is immediately apparent–this backoff is remarkably effective for the smaller collections, and almost useless for the larger ones, suggesting that the two smaller collections are essentially much easier. The social context is clearly quite useful, more so than any other single context, for every collection. This tends to support our expectation that social networks can be as informative as content networks in email collections. The topical context also seems to be useful on its own. The conversational context is moderately useful on its own in the larger collections. The local context alone is not very informative for the larger collections. Mixture of Contexts: The principal motivation for combining different types of contexts is that different sources may provide complementary evidence. To characterize that effect, we look at combinations of contexts. Figure 4 shows three such context combinations, anchored by the social context alone, with a 100-day window (the results for 10 and 200 day periods are similar). Reassuringly, adding more contexts (hence more evidence) turns out to be a reaFigure 4: Mixture of contexts, period set to 100 days. sonable choice in most cases. For the full combination, we notice a drop in the effectiveness from the addition of the topical context.5 This suggests that the construction of the topical context may need more careful design, and/or that learned λk’s could yield better evidence combination (since these results were obtained with equal λk’s). 5 Conclusion We have presented an approach to mention resolution in email that flexibly makes use of expanding contexts to accurately resolve the identity of a given mention. 
Our approach focuses on four naturally occurring contexts in email, including a message, a thread, other emails with senders and/or recipients in common, and other emails with significant topical content in common. Our approach outperforms previously reported techniques and it scales well to larger collections. Moreover, our results serve to highlight the importance of social context when resolving mentions in social media, which is an idea that deserves more attention generally. In future work, we plan to extend our test collection with mention queries that must be resolved in the “long tail” of the identity distribution where less evidence is available. We are also interested in exploring iterative approaches to jointly resolving mentions. Acknowledgments The authors would like to thank Lise Getoor for her helpful advice. 5This also occurs even when topical context is combined with only social context. 948 References Daniel J. Abadi. 2003. Comparing domain-specific and non-domain-specific anaphora resolution techniques. Cambridge University MPhil Dissertation. Alfred V. Aho and Margaret J. Corasick. 1975. Efficient string matching: an aid to bibliographic search. In Communications of the ACM. James Allan, editor. 2002. Topic detection and tracking: event-based information organization. Kluwer Academic Publishers, Norwell, MA, USA. Bryce Allen. 1989. Recall cues in known-item retrieval. JASIS, 40(4):246–252. Omar Benjelloun, Hector Garcia-Molina, Hideki Kawai, Tait Eliott Larson, David Menestrina, Qi Su, Sutthipong Thavisomboon, and Jennifer Widom. 2006. Generic entity resolution in the serf project. IEEE Data Engineering Bulletin, June. Indrajit Bhattacharya and Lise Getoor. 2006. A latent dirichlet model for unsupervised entity resolution. In The SIAM International Conference on Data Mining (SIAM-SDM), Bethesda, MD, USA. Indrajit Bhattacharya and Lise Getoor. 2007. Collective entity resolution in relational data. ACM Transactions on Knowledge Discovery from Data, 1(1), March. Matthias Blume. 2005. Automatic entity disambiguation: Benefits to NER, relation extraction, link analysis, and inference. In International Conference on Intelligence Analysis, May. Chris Diehl, Lise Getoor, and Galileo Namata. 2006. Name reference resolution in organizational email archives. In Proceddings of SIAM International Conference on Data Mining, Bethesda, MD , USA, April 20-22. Tamer Elsayed and Douglas W. Oard. 2006. Modeling identity in archival collections of email: A preliminary study. In Proceedings of the 2006 Conference on Email and Anti-Spam (CEAS 06), pages 95–103, Mountain View, California, July. Ralf Holzer, Bradley Malin, and Latanya Sweeney. 2005. Email alias detection using social network analysis. In LinkKDD ’05: Proceedings of the 3rd international workshop on Link discovery, pages 52–57, New York, NY, USA. ACM Press. Bryan Klimt and Yiming Yang. 2004. Introducing the Enron corpus. In Conference on Email and Anti-Spam, Mountain view, CA, USA, July 30-31. David D. Lewis and Kimberly A. Knowles. 1997. Threading electronic mail: a preliminary study. Inf. Process. Manage., 33(2):209–217. David D. Lewis. 1996. The trec-4 filtering track. In The Fourth Text REtrieval Conference (TREC-4), pages 165–180, Gaithersburg, Maryland. Xin Li, Paul Morie, and Dan Roth. 2005. Semantic integration in text: from ambiguous names to identifiable entities. AI Magazine. Special Issue on Semantic Integration, 26(1):45–58. Bradley Malin. 2005. Unsupervised name disambiguation via social network similarity. 
In Workshop on Link Analysis, Counter-terrorism, and Security, in conjunction with the SIAM International Conference on Data Mining, Newport Beach, CA, USA, April 21–23. Gideon S. Mann and David Yarowsky. 2003. Unsupervised personal name disambiguation. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003, pages 33–40, Morristown, NJ, USA. Association for Computational Linguistics. Andrew McCallum, Andres Corrada-Emmanuel, and Xuerui Wang. 2005. Topic and role discovery in social networks. In IJCAI. Einat Minkov, William W. Cohen, and Andrew Y. Ng. 2006. Contextual search and name disambiguation in email using graphs. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 27–34, New York, NY, USA. ACM Press. Patric Reuther. 2006. Personal name matching: New test collections and a social network based approach.
2008
107
Proceedings of ACL-08: HLT, pages 950–958, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Integrating Graph-Based and Transition-Based Dependency Parsers Joakim Nivre V¨axj¨o University Uppsala University Computer Science Linguistics and Philology SE-35195 V¨axj¨o SE-75126 Uppsala [email protected] Ryan McDonald Google Inc. 76 Ninth Avenue New York, NY 10011 [email protected] Abstract Previous studies of data-driven dependency parsing have shown that the distribution of parsing errors are correlated with theoretical properties of the models used for learning and inference. In this paper, we show how these results can be exploited to improve parsing accuracy by integrating a graph-based and a transition-based model. By letting one model generate features for the other, we consistently improve accuracy for both models, resulting in a significant improvement of the state of the art when evaluated on data sets from the CoNLL-X shared task. 1 Introduction Syntactic dependency graphs have recently gained a wide interest in the natural language processing community and have been used for many problems ranging from machine translation (Ding and Palmer, 2004) to ontology construction (Snow et al., 2005). A dependency graph for a sentence represents each word and its syntactic dependents through labeled directed arcs, as shown in figure 1. One advantage of this representation is that it extends naturally to discontinuous constructions, which arise due to long distance dependencies or in languages where syntactic structure is encoded in morphology rather than in word order. This is undoubtedly one of the reasons for the emergence of dependency parsers for a wide range of languages. Many of these parsers are based on data-driven parsing models, which learn to produce dependency graphs for sentences solely from an annotated corpus and can be easily ported to any Figure 1: Dependency graph for an English sentence. language or domain in which annotated resources exist. Practically all data-driven models that have been proposed for dependency parsing in recent years can be described as either graph-based or transitionbased (McDonald and Nivre, 2007). In graph-based parsing, we learn a model for scoring possible dependency graphs for a given sentence, typically by factoring the graphs into their component arcs, and perform parsing by searching for the highest-scoring graph. This type of model has been used by, among others, Eisner (1996), McDonald et al. (2005a), and Nakagawa (2007). In transition-based parsing, we instead learn a model for scoring transitions from one parser state to the next, conditioned on the parse history, and perform parsing by greedily taking the highest-scoring transition out of every parser state until we have derived a complete dependency graph. This approach is represented, for example, by the models of Yamada and Matsumoto (2003), Nivre et al. (2004), and Attardi (2006). Theoretically, these approaches are very different. The graph-based models are globally trained and use exact inference algorithms, but define features over a limited history of parsing decisions. The transitionbased models are essentially the opposite. They use local training and greedy inference algorithms, but 950 define features over a rich history of parsing decisions. This is a fundamental trade-off that is hard to overcome by tractable means. 
Both models have been used to achieve state-of-the-art accuracy for a wide range of languages, as shown in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007), but McDonald and Nivre (2007) showed that a detailed error analysis reveals important differences in the distribution of errors associated with the two models. In this paper, we consider a simple way of integrating graph-based and transition-based models in order to exploit their complementary strengths and thereby improve parsing accuracy beyond what is possible by either model in isolation. The method integrates the two models by allowing the output of one model to define features for the other. This method is simple – requiring only the definition of new features – and robust by allowing a model to learn relative to the predictions of the other. 2 Two Models for Dependency Parsing 2.1 Preliminaries Given a set L = {l1, . . . , l|L|} of arc labels (dependency relations), a dependency graph for an input sentence x = w0, w1, . . . , wn (where w0 = ROOT) is a labeled directed graph G = (V, A) consisting of a set of nodes V = {0, 1, . . . , n}1 and a set of labeled directed arcs A ⊆V ×V ×L, i.e., if (i, j, l) ∈A for i, j ∈V and l ∈L, then there is an arc from node i to node j with label l in the graph. A dependency graph G for a sentence x must be a directed tree originating out of the root node 0 and spanning all nodes in V , as exemplified by the graph in figure 1. This is a common constraint in many dependency parsing theories and their implementations. 2.2 Graph-Based Models Graph-based dependency parsers parameterize a model over smaller substructures in order to search the space of valid dependency graphs and produce the most likely one. The simplest parameterization is the arc-factored model that defines a real-valued score function for arcs s(i, j, l) and further defines the score of a dependency graph as the sum of the 1We use the common convention of representing words by their index in the sentence. score of all the arcs it contains. As a result, the dependency parsing problem is written: G = arg max G=(V,A) X (i,j,l)∈A s(i, j, l) This problem is equivalent to finding the highest scoring directed spanning tree in the complete graph over the input sentence, which can be solved in O(n2) time (McDonald et al., 2005b). Additional parameterizations are possible that take more than one arc into account, but have varying effects on complexity (McDonald and Satta, 2007). An advantage of graph-based methods is that tractable inference enables the use of standard structured learning techniques that globally set parameters to maximize parsing performance on the training set (McDonald et al., 2005a). The primary disadvantage of these models is that scores – and as a result any feature representations – are restricted to a single arc or a small number of arcs in the graph. The specific graph-based model studied in this work is that presented by McDonald et al. (2006), which factors scores over pairs of arcs (instead of just single arcs) and uses near exhaustive search for unlabeled parsing coupled with a separate classifier to label each arc. 
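As an aside, the simplest arc-factored parameterization described above can be sketched as follows; arc_score stands in for the learned scoring function s(i, j, l), and the example arcs are hypothetical rather than taken from Figure 1.

def graph_score(arcs, arc_score):
    # Arc-factored score of a candidate graph G = (V, A): the sum of
    # s(i, j, l) over its labeled arcs; the arg max over all valid graphs
    # is then a maximum spanning tree problem, as noted above.
    return sum(arc_score(i, j, l) for (i, j, l) in arcs)

# Hypothetical usage: a five-word sentence, with node 0 reserved for ROOT.
example_arcs = [(0, 3, "ROOT"), (3, 1, "SBJ"), (1, 2, "NMOD"), (3, 4, "ADV"), (4, 5, "PC")]
print(graph_score(example_arcs, lambda i, j, l: 1.0))  # prints 5.0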
We call this system MSTParser, or simply MST for short, which is also the name of the freely available implementation.2 2.3 Transition-Based Models Transition-based dependency parsing systems use a model parameterized over transitions of an abstract machine for deriving dependency graphs, such that every transition sequence from the designated initial configuration to some terminal configuration derives a valid dependency graph. Given a real-valued score function s(c, t) (for transition t out of configuration c), parsing can be performed by starting from the initial configuration and taking the optimal transition t∗= arg maxt∈T s(c, t) out of every configuration c until a terminal configuration is reached. This can be seen as a greedy search for the optimal dependency graph, based on a sequence of locally optimal decisions in terms of the transition system. Many transition systems for data-driven dependency parsing are inspired by shift-reduce parsing, 2http://mstparser.sourceforge.net 951 where each configuration c contains a stack σc for storing partially processed nodes and a buffer βc containing the remaining input. Transitions in such a system add arcs to the dependency graph and manipulate the stack and buffer. One example is the transition system defined by Nivre (2003), which parses a sentence x = w0, w1, . . . , wn in O(n) time. To learn a scoring function on transitions, these systems rely on discriminative learning methods, such as memory-based learning or support vector machines, using a strictly local learning procedure where only single transitions are scored (not complete transition sequences). The main advantage of these models is that features are not restricted to a limited number of graph arcs but can take into account the entire dependency graph built so far. The major disadvantage is that the greedy parsing strategy may lead to error propagation. The specific transition-based model studied in this work is that presented by Nivre et al. (2006), which uses support vector machines to learn transition scores. We call this system MaltParser, or Malt for short, which is also the name of the freely available implementation.3 2.4 Comparison and Analysis These models differ primarily with respect to three properties: inference, learning, and feature representation. MaltParser uses an inference algorithm that greedily chooses the best parsing decision based on the current parser history whereas MSTParser uses exhaustive search algorithms over the space of all valid dependency graphs to find the graph that maximizes the score. MaltParser trains a model to make a single classification decision (choose the next transition) whereas MSTParser trains a model to maximize the global score of correct graphs. MaltParser can introduce a rich feature history based on previous parser decisions, whereas MSTParser is forced to restrict features to a single decision or a pair of nearby decisions in order to retain efficiency. These differences highlight an inherent trade-off between global inference/learning and expressiveness of feature representations. MSTParser favors the former at the expense of the latter and MaltParser the opposite. This difference was highlighted in the 3http://w3.msi.vxu.se/∼jha/maltparser/ study of McDonald and Nivre (2007), which showed that the difference is reflected directly in the error distributions of the parsers. 
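As an aside, the greedy inference loop of section 2.3 can be summarized in a few lines of schematic Python; the five callables and the arcs attribute of the configuration are placeholders for the components of a concrete transition system such as MaltParser's, not its actual interface.

def greedy_parse(sentence, initial, legal_transitions, apply_transition,
                 score, is_terminal):
    c = initial(sentence)
    while not is_terminal(c):
        # t* = arg max over permissible transitions of s(c, t)
        t_star = max(legal_transitions(c), key=lambda t: score(c, t))
        c = apply_transition(c, t_star)
    return c.arcs  # the dependency arcs accumulated in the final configuration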
Thus, MaltParser is less accurate than MSTParser for long dependencies and those closer to the root of the graph, but more accurate for short dependencies and those farthest away from the root. Furthermore, MaltParser is more accurate for dependents that are nouns and pronouns, whereas MSTParser is more accurate for verbs, adjectives, adverbs, adpositions, and conjunctions. Given that there is a strong negative correlation between dependency length and tree depth, and given that nouns and pronouns tend to be more deeply embedded than (at least) verbs and conjunctions, these patterns can all be explained by the same underlying factors. Simply put, MaltParser has an advantage in its richer feature representations, but this advantage is gradually diminished by the negative effect of error propagation due to the greedy inference strategy as sentences and dependencies get longer. MSTParser has a more even distribution of errors, which is expected given that the inference algorithm and feature representation should not prefer one type of arc over another. This naturally leads one to ask: Is it possible to integrate the two models in order to exploit their complementary strengths? This is the topic of the remainder of this paper. 3 Integrated Models There are many conceivable ways of combining the two parsers, including more or less complex ensemble systems and voting schemes, which only perform the integration at parsing time. However, given that we are dealing with data-driven models, it should be possible to integrate at learning time, so that the two complementary models can learn from one another. In this paper, we propose to do this by letting one model generate features for the other. 3.1 Feature-Based Integration As explained in section 2, both models essentially learn a scoring function s : X →R, where the domain X is different for the two models. For the graph-based model, X is the set of possible dependency arcs (i, j, l); for the transition-based model, X is the set of possible configuration-transition pairs (c, t). But in both cases, the input is represented 952 MSTMalt – defined over (i, j, l) (∗= any label/node) Is (i, j, ∗) in GMalt x ? Is (i, j, l) in GMalt x ? Is (i, j, ∗) not in GMalt x ? Is (i, j, l) not in GMalt x ? Identity of l′ such that (∗, j, l′) is in GMalt x ? Identity of l′ such that (i, j, l′) is in GMalt x ? MaltMST – defined over (c, t) (∗= any label/node) Is (σ0 c, β0 c, ∗) in GMST x ? Is (β0 c, σ0 c, ∗) in GMST x ? Head direction for σ0 c in GMST x (left/right/ROOT) Head direction for β0 c in GMST x (left/right/ROOT) Identity of l such that (∗, σ0 c, l) is in GMST x ? Identity of l such that (∗, β0 c, l) is in GMST x ? Table 1: Guide features for MSTMalt and MaltMST. by a k-dimensional feature vector f : X →Rk. In the feature-based integration we simply extend the feature vector for one model, called the base model, with a certain number of features generated by the other model, which we call the guide model in this context. The additional features will be referred to as guide features, and the version of the base model trained with the extended feature vector will be called the guided model. The idea is that the guided model should be able to learn in which situations to trust the guide features, in order to exploit the complementary strength of the guide model, so that performance can be improved with respect to the base parser. This method of combining classifiers is sometimes referred to as classifier stacking. 
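In code, this integration amounts to little more than concatenating the two representations. The sketch below is schematic: base_features and guide_features stand in for the base model's feature extractor and for the extractor applied to the guide model's predicted parse, each assumed to return a list of (feature, value) pairs.

def guided_feature_vector(x, sentence, guide_parse, base_features, guide_features):
    # f(x) for the base model, extended with m guide features computed from
    # the guide model's predicted graph for the same sentence.
    return base_features(x, sentence) + guide_features(x, guide_parse)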
The exact form of the guide features depend on properties of the base model and will be discussed in sections 3.2–3.3 below, but the overall scheme for the feature-based integration can be described as follows. To train a guided version BC of base model B with guide model C and training set T, the guided model is trained, not on the original training set T, but on a version of T that has been parsed with the guide model C under a cross-validation scheme (to avoid overlap with training data for C). This means that, for every sentence x ∈T, BC has access at training time to both the gold standard dependency graph Gx and the graph GC x predicted by C, and it is the latter that forms the basis for the additional guide features. When parsing a new sentence x′ with BC, x′ is first parsed with model C (this time trained on the entire training set T) to derive GC x′, so that the guide features can be extracted also at parsing time. 3.2 The Guided Graph-Based Model The graph-based model, MSTParser, learns a scoring function s(i, j, l) ∈R over labeled dependencies. More precisely, dependency arcs (or pairs of arcs) are first represented by a high dimensional feature vector f(i, j, l) ∈Rk, where f is typically a binary feature vector over properties of the arc as well as the surrounding input (McDonald et al., 2005a; McDonald et al., 2006). The score of an arc is defined as a linear classifier s(i, j, l) = w · f(i, j, l), where w is a vector of feature weights to be learned by the model. For the guided graph-based model, which we call MSTMalt, this feature representation is modified to include an additional argument GMalt x , which is the dependency graph predicted by MaltParser on the input sentence x. Thus, the new feature representation will map an arc and the entire predicted MaltParser graph to a high dimensional feature representation, f(i, j, l, GMalt x ) ∈Rk+m. These m additional features account for the guide features over the MaltParser output. The specific features used by MSTMalt are given in table 1. All features are conjoined with the part-of-speech tags of the words involved in the dependency to allow the guided parser to learn weights relative to different surface syntactic environments. Though MSTParser is capable of defining features over pairs of arcs, we restrict the guide features over single arcs as this resulted in higher accuracies during preliminary experiments. 3.3 The Guided Transition-Based Model The transition-based model, MaltParser, learns a scoring function s(c, t) ∈R over configurations and transitions. The set of training instances for this learning problem is the set of pairs (c, t) such that t is the correct transition out of c in the transition sequence that derives the correct dependency graph Gx for some sentence x in the training set T. Each training instance (c, t) is represented by a feature vector f(c, t) ∈Rk, where features are defined in terms of arbitrary properties of the configuration c, including the state of the stack σc, the input buffer βc, and the partially built dependency graph Gc. In particular, many features involve properties of the two target tokens, the token on top of the stack σc (σ0 c) and the first token in the input buffer βc (β0 c ), 953 which are the two tokens that may become connected by a dependency arc through the transition out of c. The full set of features used by the base model MaltParser is described in Nivre et al. (2006). 
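The MSTMalt guide features listed in Table 1 reduce to simple queries against MaltParser's predicted graph. A hypothetical sketch is given below, with the graph represented as a set of (head, dependent, label) triples and the conjunction with part-of-speech tags omitted.

def mst_malt_guide_features(i, j, l, malt_graph):
    has_arc = any(h == i and d == j for (h, d, _) in malt_graph)
    has_labeled_arc = (i, j, l) in malt_graph
    label_for_dependent = next((lab for (_, d, lab) in malt_graph if d == j), None)
    label_for_arc = next((lab for (h, d, lab) in malt_graph if h == i and d == j), None)
    return {
        "arc_in_malt": has_arc,                        # is (i, j, *) in G^Malt?
        "labeled_arc_in_malt": has_labeled_arc,        # is (i, j, l) in G^Malt?
        "arc_not_in_malt": not has_arc,
        "labeled_arc_not_in_malt": not has_labeled_arc,
        "malt_label_for_dependent": label_for_dependent,  # l' such that (*, j, l') is in G^Malt
        "malt_label_for_arc": label_for_arc,              # l' such that (i, j, l') is in G^Malt
    }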
For the guided transition-based model, which we call MaltMST, training instances are extended to triples (c, t, GMST x ), where GMST x is the dependency graph predicted by the graph-based MSTParser for the sentence x to which the configuration c belongs. We define m additional guide features, based on properties of GMST x , and extend the feature vector accordingly to f(c, t, GMST x ) ∈Rk+m. The specific features used by MaltMST are given in table 1. Unlike MSTParser, features are not explicitly defined to conjoin guide features with part-of-speech features. These features are implicitly added through the polynomial kernel used to train the SVM. 4 Experiments In this section, we present an experimental evaluation of the two guided models based on data from the CoNLL-X shared task, followed by a comparative error analysis including both the base models and the guided models. The data for the experiments are training and test sets for all thirteen languages from the CoNLL-X shared task on multilingual dependency parsing with training sets ranging in size from from 29,000 tokens (Slovene) to 1,249,000 tokens (Czech). The test sets are all standardized to about 5,000 tokens each. For more information on the data sets, see Buchholz and Marsi (2006). The guided models were trained according to the scheme explained in section 3, with two-fold crossvalidation when parsing the training data with the guide parsers. Preliminary experiments suggested that cross-validation with more folds had a negligible impact on the results. Models are evaluated by their labeled attachment score (LAS) on the test set, i.e., the percentage of tokens that are assigned both the correct head and the correct label, using the evaluation software from the CoNLL-X shared task with default settings.4 Statistical significance was assessed using Dan Bikel’s randomized parsing evaluation comparator with the default setting of 10,000 iterations.5 4http://nextens.uvt.nl/∼conll/software.html 5http://www.cis.upenn.edu/∼dbikel/software.html Language MST MSTMalt Malt MaltMST Arabic 66.91 68.64 (+1.73) 66.71 67.80 (+1.09) Bulgarian 87.57 89.05 (+1.48) 87.41 88.59 (+1.18) Chinese 85.90 88.43 (+2.53) 86.92 87.44 (+0.52) Czech 80.18 82.26 (+2.08) 78.42 81.18 (+2.76) Danish 84.79 86.67 (+1.88) 84.77 85.43 (+0.66) Dutch 79.19 81.63 (+2.44) 78.59 79.91 (+1.32) German 87.34 88.46 (+1.12) 85.82 87.66 (+1.84) Japanese 90.71 91.43 (+0.72) 91.65 92.20 (+0.55) Portuguese 86.82 87.50 (+0.68) 87.60 88.64 (+1.04) Slovene 73.44 75.94 (+2.50) 70.30 74.24 (+3.94) Spanish 82.25 83.99 (+1.74) 81.29 82.41 (+1.12) Swedish 82.55 84.66 (+2.11) 84.58 84.31 (–0.27) Turkish 63.19 64.29 (+1.10) 65.58 66.28 (+0.70) Average 80.83 82.53 (+1.70) 80.74 82.01 (+1.27) Table 2: Labeled attachment scores for base parsers and guided parsers (improvement in percentage points). 10 20 30 40 50 60 Sentence Length 0.7 0.75 0.8 0.85 0.9 Accuracy Malt MST Malt+MST MST+Malt Figure 2: Accuracy relative to sentence length. 4.1 Results Table 2 shows the results, for each language and on average, for the two base models (MST, Malt) and for the two guided models (MSTMalt, MaltMST). First of all, we see that both guided models show a very consistent increase in accuracy compared to their base model, even though the extent of the improvement varies across languages from about half a percentage point (MaltMST on Chinese) up to almost four percentage points (MaltMST on Slovene).6 It is thus quite clear that both models have the capacity to learn from features generated by the other model. 
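For reference, the labeled attachment score used above reduces to a token-level count. In the sketch below, predicted and gold map token positions across the test set to (head, label) pairs; the additional options of the official CoNLL-X scoring script are not modeled.

def labeled_attachment_score(predicted, gold):
    # Percentage of tokens assigned both the correct head and the correct label.
    correct = sum(1 for tok, head_and_label in gold.items()
                  if predicted.get(tok) == head_and_label)
    return 100.0 * correct / len(gold)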
However, it is also clear that the graph-based MST model shows a somewhat larger improvement, both on average and for all languages except Czech, 6The only exception to this pattern is the result for MaltMST on Swedish, where we see an unexpected drop in accuracy compared to the base model. 954 2 4 6 8 10 12 14 15+ Dependency Length 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 Recall Malt MST Malt+MST MST+Malt 2 4 6 8 10 12 14 15+ Dependency Length 0.55 0.6 0.65 0.7 0.75 0.8 0.85 Precision Malt MST Malt+MST MST+Malt 1 2 3 4 5 6 7+ Distance to Root 0.8 0.82 0.84 0.86 0.88 0.9 Recall Malt MST Malt+MST MST+Malt 1 2 3 4 5 6 7+ Distance to Root 0.78 0.8 0.82 0.84 0.86 0.88 0.9 0.92 Precision Malt MST Malt+MST MST+Malt (a) (b) Figure 3: Dependency arc precision/recall relative to predicted/gold for (a) dependency length and (b) distance to root. German, Portuguese and Slovene. Finally, given that the two base models had the previously best performance for these data sets, the guided models achieve a substantial improvement of the state of the art. While there is no statistically significant difference between the two base models, they are both outperformed by MaltMST (p < 0.0001), which in turn has significantly lower accuracy than MSTMalt (p < 0.0005). An extension to the models described so far would be to iteratively integrate the two parsers in the spirit of pipeline iteration (Hollingshead and Roark, 2007). For example, one could start with a Malt model, use it to train a guided MSTMalt model, then use that as the guide to train a MaltMSTMalt model, etc. We ran such experiments, but found that accuracy did not increase significantly and in some cases decreased slightly. This was true regardless of which parser began the iterative process. In retrospect, this result is not surprising. Since the initial integration effectively incorporates knowledge from both parsing systems, there is little to be gained by adding additional parsers in the chain. 4.2 Analysis The experimental results presented so far show that feature-based integration is a viable approach for improving the accuracy of both graph-based and transition-based models for dependency parsing, but they say very little about how the integration benefits the two models and what aspects of the parsing process are improved as a result. In order to get a better understanding of these matters, we replicate parts of the error analysis presented by McDonald and Nivre (2007), where parsing errors are related to different structural properties of sentences and their dependency graphs. For each of the four models evaluated, we compute error statistics for labeled attachment over all twelve languages together. Figure 2 shows accuracy in relation to sentence length, binned into ten-word intervals (1–10, 11-20, etc.). As expected, Malt and MST have very similar accuracy for short sentences but Malt degrades more rapidly with increasing sentence length because of error propagation (McDonald and Nivre, 2007). The guided models, MaltMST and MSTMalt, behave in a very similar fashion with respect to each other but both outperform their base parser over the entire range of sentence lengths. However, except for the two extreme data points (0–10 and 51–60) there is also a slight tendency for MaltMST to improve more for longer sentences and for MSTMalt to improve more for short sentences, which indicates that the feature-based integration allows one parser to exploit the strength of the other. 
Figure 3(a) plots precision (top) and recall (bottom) for dependency arcs of different lengths (predicted arcs for precision, gold standard arcs for recall). With respect to recall, the guided models appear to have a slight advantage over the base mod955 Part of Speech MST MSTMalt Malt MaltMST Verb 82.6 85.1 (2.5) 81.9 84.3 (2.4) Noun 80.0 81.7 (1.7) 80.7 81.9 (1.2) Pronoun 88.4 89.4 (1.0) 89.2 89.3 (0.1) Adjective 89.1 89.6 (0.5) 87.9 89.0 (1.1) Adverb 78.3 79.6 (1.3) 77.4 78.1 (0.7) Adposition 69.9 71.5 (1.6) 68.8 70.7 (1.9) Conjunction 73.1 74.9 (1.8) 69.8 72.5 (2.7) Table 3: Accuracy relative to dependent part of speech (improvement in percentage points). els for short and medium distance arcs. With respect to precision, however, there are two clear patterns. First, the graph-based models have better precision than the transition-based models when predicting long arcs, which is compatible with the results of McDonald and Nivre (2007). Secondly, both the guided models have better precision than their base model and, for the most part, also their guide model. In particular MSTMalt outperforms MST and is comparable to Malt for short arcs. More interestingly, MaltMST outperforms both Malt and MST for arcs up to length 9, which provides evidence that MaltMST has learned specifically to trust the guide features from MST for longer dependencies. The reason that accuracy does not improve for dependencies of length greater than 9 is probably that these dependencies are too rare for MaltMST to learn from the guide parser in these situations. Figure 3(b) shows precision (top) and recall (bottom) for dependency arcs at different distances from the root (predicted arcs for precision, gold standard arcs for recall). Again, we find the clearest patterns in the graphs for precision, where Malt has very low precision near the root but improves with increasing depth, while MST shows the opposite trend (McDonald and Nivre, 2007). Considering the guided models, it is clear that MaltMST improves in the direction of its guide model, with a 5-point increase in precision for dependents of the root and smaller improvements for longer distances. Similarly, MSTMalt improves precision in the range where its base parser is inferior to Malt and for distances up to 4 has an accuracy comparable to or higher than its guide parser Malt. This again provides evidence that the guided parsers are learning from their guide models. Table 3 gives the accuracy for arcs relative to dependent part-of-speech. As expected, we see that MST does better than Malt for all categories except nouns and pronouns (McDonald and Nivre, 2007). But we also see that the guided models in all cases improve over their base parser and, in most cases, also over their guide parser. The general trend is that MST improves more than Malt, except for adjectives and conjunctions, where Malt has a greater disadvantage from the start and therefore benefits more from the guide features. Considering the results for parts of speech, as well as those for dependency length and root distance, it is interesting to note that the guided models often improve even in situations where their base parsers are more accurate than their guide models. This suggests that the improvement is not a simple function of the raw accuracy of the guide model but depends on the fact that labeled dependency decisions interact in inference algorithms for both graph-based and transition-based parsing systems. 
Thus, if a parser can improve its accuracy on one class of dependencies, e.g., longer ones, then we can expect to see improvements on all types of dependencies – as we do. The interaction between different decisions may also be part of the explanation why MST benefits more from the feature-based integration than Malt, with significantly higher accuracy for MSTMalt than for MaltMST as a result. Since inference is global (or practically global) in the graph-based model, an improvement in one type of dependency has a good chance of influencing the accuracy of other dependencies, whereas in the transition-based model, where inference is greedy, some of these additional benefits will be lost because of error propagation. This is reflected in the error analysis in the following recurrent pattern: Where Malt does well, MaltMST does only slightly better. But where MST is good, MSTMalt is often significantly better. Another part of the explanation may have to do with the learning algorithms used by the systems. Although both Malt and MST use discriminative algorithms, Malt uses a batch learning algorithm (SVM) and MST uses an online learning algorithm (MIRA). If the original rich feature representation of Malt is sufficient to separate the training data, regularization may force the weights of the guided features to be small (since they are not needed at training time). On the other hand, an online learn956 ing algorithm will recognize the guided features as strong indicators early in training and give them a high weight as a result. Features with high weight early in training tend to have the most impact on the final classifier due to both weight regularization and averaging. This is in fact observed when inspecting the weights of MSTMalt. 5 Related Work Combinations of graph-based and transition-based models for data-driven dependency parsing have previously been explored by Sagae and Lavie (2006), who report improvements of up to 1.7 percentage points over the best single parser when combining three transition-based models and one graph-based model for unlabeled dependency parsing, evaluated on data from the Penn Treebank. The combined parsing model is essentially an instance of the graph-based model, where arc scores are derived from the output of the different component parsers. Unlike the models presented here, integration takes place only at parsing time, not at learning time, and requires at least three different base parsers. The same technique was used by Hall et al. (2007) to combine six transition-based parsers in the best performing system in the CoNLL 2007 shared task. Feature-based integration in the sense of letting a subset of the features for one model be derived from the output of a different model has been exploited for dependency parsing by McDonald (2006), who trained an instance of MSTParser using features generated by the parsers of Collins (1999) and Charniak (2000), which improved unlabeled accuracy by 1.7 percentage points, again on data from the Penn Treebank. In addition, feature-based integration has been used by Taskar et al. (2005), who trained a discriminative word alignment model using features derived from the IBM models, and by Florian et al. (2004), who trained classifiers on auxiliary data to guide named entity classifiers. Feature-based integration also has points in common with co-training, which have been applied to syntactic parsing by Sarkar (2001) and Steedman et al. (2003), among others. 
The difference, of course, is that standard co-training is a weakly supervised method, where guide features replace, rather than complement, the gold standard annotation during training. Feature-based integration is also similar to parse re-ranking (Collins, 2000), where one parser produces a set of candidate parses and a secondstage classifier chooses the most likely one. However, feature-based integration is not explicitly constrained to any parse decisions that the guide model might make and only the single most likely parse is used from the guide model, making it significantly more efficient than re-ranking. Finally, there are several recent developments in data-driven dependency parsing, which can be seen as targeting the specific weaknesses of graph-based and transition-based models, respectively, though without integrating the two models. Thus, Nakagawa (2007) and Hall (2007) both try to overcome the limited feature scope of graph-based models by adding global features, in the former case using Gibbs sampling to deal with the intractable inference problem, in the latter case using a re-ranking scheme. For transition-based models, the trend is to alleviate error propagation by abandoning greedy, deterministic inference in favor of beam search with globally normalized models for scoring transition sequences, either generative (Titov and Henderson, 2007a; Titov and Henderson, 2007b) or conditional (Duan et al., 2007; Johansson and Nugues, 2007). 6 Conclusion In this paper, we have demonstrated how the two dominant approaches to data-driven dependency parsing, graph-based models and transition-based models, can be integrated by letting one model learn from features generated by the other. Our experimental results show that both models consistently improve their accuracy when given access to features generated by the other model, which leads to a significant advancement of the state of the art in data-driven dependency parsing. Moreover, a comparative error analysis reveals that the improvements are largely predictable from theoretical properties of the two models, in particular the tradeoff between global learning and inference, on the one hand, and rich feature representations, on the other. Directions for future research include a more detailed analysis of the effect of feature-based integration, as well as the exploration of other strategies for integrating different parsing models. 957 References Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of CoNLL, pages 166–170. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL, pages 149–164. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL, pages 132–139. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML, pages 175–182. Yuan Ding and Martha Palmer. 2004. Synchronous dependency insertion grammars: A grammar formalism for syntax based statistical MT. In Proceedings of the Workshop on Recent Advances in Dependency Grammar, pages 90–97. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Probabilistic parsing action models for multi-lingual dependency parsing. In Proceedings of EMNLP-CoNLL, pages 940–946. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. 
In Proceedings of COLING, pages 340–345. Radu Florian, Hany Hassan, Abraham Ittycheriah, Hongyan Jing, Nanda Kambhatla, Xiaoqiang Luo, Nicolas Nicolov, and Salim Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proceedings of NAACL/HLT. Johan Hall, Jens Nilsson, Joakim Nivre, G¨ulsen Eryi˘git, Be´ata Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single malt or blended? A study in multilingual parser optimization. In Proceedings of EMNLPCoNLL. Keith Hall. 2007. K-best spanning tree parsing. In Proceedings of ACL, pages 392–399. Kristy Hollingshead and Brian Roark. 2007. Pipeline iteration. In Proceedings of ACL, pages 952–959. Richard Johansson and Pierre Nugues. 2007. Incremental dependency parsing using online learning. In Proceedings of EMNLP-CoNLL, pages 1134–1138. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL, pages 122– 131. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of IWPT, pages 122–131. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91–98. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP, pages 523–530. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a twostage discriminative parser. In Proceedings of CoNLL, pages 216–220. Ryan McDonald. 2006. Discriminative Learning and Spanning Tree Algorithms for Dependency Parsing. Ph.D. thesis, University of Pennsylvania. Tetsuji Nakagawa. 2007. Multilingual dependency parsing using global features. In Proceedings of EMNLPCoNLL, pages 952–956. Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of CoNLL, pages 49–56. Joakim Nivre, Johan Hall, Jens Nilsson, G¨ulsen Eryi˘git, and Svetoslav Marinov. 2006. Labeled pseudoprojective dependency parsing with support vector machines. In Proceedings of CoNLL, pages 221–225. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of EMNLP-CoNLL, pages 915–932. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT, pages 149–160. Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of NAACL: Short Papers, pages 129–132. Anoop Sarkar. 2001. Applying co-training methods to statistical parsing. In Proceedings of NAACL, pages 175–182. Rion Snow, Dan Jurafsky, and Andrew Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Proceedings of NIPS. Mark Steedman, Rebecca Hwa, Miles Osborne, and Anoop Sarkar. 2003. Corrected co-training for statistical parsers. In Proceedings of ICML, pages 95–102. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of HLT/EMNLP, pages 73–80. Ivan Titov and James Henderson. 2007a. Fast and robust multilingual dependency parsing with a generative latent variable model. In Proceedings of EMNLPCoNLL, pages 947–951. Ivan Titov and James Henderson. 2007b. A latent variable model for generative dependency parsing. In Proceedings of IWPT, pages 144–155. Hiroyasu Yamada and Yuji Matsumoto. 2003. 
Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195–206.
2008
108
Proceedings of ACL-08: HLT, pages 959–967, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Efficient, Feature-based, Conditional Random Field Parsing Jenny Rose Finkel, Alex Kleeman, Christopher D. Manning Department of Computer Science Stanford University Stanford, CA 94305 jrfi[email protected], [email protected], [email protected] Abstract Discriminative feature-based methods are widely used in natural language processing, but sentence parsing is still dominated by generative methods. While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, featurerich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. Our efficiency is primarily due to the use of stochastic optimization techniques, as well as parallelization and chart prefiltering. On WSJ15, we attain a state-of-the-art F-score of 90.9%, a 14% relative reduction in error over previous models, while being two orders of magnitude faster. On sentences of length 40, our system achieves an F-score of 89.0%, a 36% relative reduction in error over a generative baseline. 1 Introduction Over the past decade, feature-based discriminative models have become the tool of choice for many natural language processing tasks. Although they take much longer to train than generative models, they typically produce higher performing systems, in large part due to the ability to incorporate arbitrary, potentially overlapping features. However, constituency parsing remains an area dominated by generative methods, due to the computational complexity of the problem. Previous work on discriminative parsing falls under one of three approaches. One approach does discriminative reranking of the n-best list of a generative parser, still usually depending highly on the generative parser score as a feature (Collins, 2000; Charniak and Johnson, 2005). A second group of papers does parsing by a sequence of independent, discriminative decisions, either greedily or with use of a small beam (Ratnaparkhi, 1997; Henderson, 2004). This paper extends the third thread of work, where joint inference via dynamic programming algorithms is used to train models and to attempt to find the globally best parse. Work in this context has mainly been limited to use of artificially short sentences due to exorbitant training and inference times. One exception is the recent work of Petrov et al. (2007), who discriminatively train a grammar with latent variables and do not restrict themselves to short sentences. However their model, like the discriminative parser of Johnson (2001), makes no use of features, and effectively ignores the largest advantage of discriminative training. It has been shown on other NLP tasks that modeling improvements, such as the switch from generative training to discriminative training, usually provide much smaller performance gains than the gains possible from good feature engineering. For example, in (Lafferty et al., 2001), when switching from a generatively trained hidden Markov model (HMM) to a discriminatively trained, linear chain, conditional random field (CRF) for part-of-speech tagging, their error drops from 5.7% to 5.6%. When they add in only a small set of orthographic features, their CRF error rate drops considerably more to 4.3%, and their out-of-vocabulary error rate drops by more than half. 
This is further supported by Johnson (2001), who saw no parsing gains when switch959 ing from generative to discriminative training, and by Petrov et al. (2007) who saw only small gains of around 0.7% for their final model when switching training methods. In this work, we provide just such a framework for training a feature-rich discriminative parser. Unlike previous work, we do not restrict ourselves to short sentences, but we do provide results both for training and testing on sentences of length ≤15 (WSJ15) and for training and testing on sentences of length ≤40, allowing previous WSJ15 results to be put in context with respect to most modern parsing literature. Our model is a conditional random field based model. For a rule application, we allow arbitrary features to be defined over the rule categories, span and split point indices, and the words of the sentence. It is well known that constituent length influences parse probability, but PCFGs cannot easily take this information into account. Another benefit of our feature based model is that it effortlessly allows smoothing over previously unseen rules. While the rule may be novel, it will likely contain features which are not. Practicality comes from three sources. We made use of stochastic optimization methods which allow us to find optimal model parameters with very few passes through the data. We found no difference in parser performance between using stochastic gradient descent (SGD), and the more common, but significantly slower, L-BFGS. We also used limited parallelization, and prefiltering of the chart to avoid scoring rules which cannot tile into complete parses of the sentence. This speed-up does not come with a performance cost; we attain an F-score of 90.9%, a 14% relative reduction in errors over previous work on WSJ15. 2 The Model 2.1 A Conditional Random Field Context Free Grammar (CRF-CFG) Our parsing model is based on a conditional random field model, however, unlike previous TreeCRF work, e.g., (Cohn and Blunsom, 2005; Jousse et al., 2006), we do not assume a particular tree structure, and instead find the most likely structure and labeling. This is similar to conventional probabilistic context-free grammar (PCFG) parsing, with two exceptions: (a) we maximize conditional likelihood of the parse tree, given the sentence, not joint likelihood of the tree and sentence; and (b) probabilities are normalized globally instead of locally – the graphical models depiction of our trees is undirected. Formally, we have a CFG G, which consists of (Manning and Sch¨utze, 1999): (i) a set of terminals {wk},k = 1,...,V; (ii) a set of nonterminals {Nk},k = 1,...,n; (iii) a designated start symbol ROOT; and (iv) a set of rules, {ρ = Ni →ζ j}, where ζ j is a sequence of terminals and nonterminals. A PCFG additionally assigns probabilities to each rule ρ such that ∀i∑j P(Ni →ζ j) = 1. Our conditional random field CFG (CRF-CFG) instead defines local clique potentials φ(r|s;θ), where s is the sentence, and r contains a one-level subtree of a tree t, corresponding to a rule ρ, along with relevant information about the span of words which it encompasses, and, if applicable, the split position (see Figure 1). These potentials are relative to the sentence, unlike a PCFG where rule scores do not have access to words at the leaves of the tree, or even how many words they dominate. We then define a conditional probability distribution over entire trees, using the standard CRF distribution, shown in (1). 
There is, however, an important subtlety lurking in how we define the partition function. The partition function Zs, which makes the probability of all possible parses sum to unity, is defined over all structures as well as all labelings of those structures. We define τ(s) to be the set of all possible parse trees for the given sentence licensed by the grammar G. P(t|s;θ) = 1 Zs ∏r∈t φ(r|s;θ) (1) where Zs = ∑t∈τ(s)∏r∈t′ φ(r|s;θ) The above model is not well-defined over all CFGs. Unary rules of the form Ni →N j can form cycles, leading to infinite unary chains with infinite mass. However, it is standard in the parsing literature to transform grammars into a restricted class of CFGs so as to permit efficient parsing. Binarization of rules (Earley, 1970) is necessary to obtain cubic parsing time, and closure of unary chains is required for finding total probability mass (rather than just best parses) (Stolcke, 1995). To address this issue, we define our model over a restricted class of 960 S NP NN Factory NNS payrolls VP VBD fell PP IN in NN September Phrasal rules r1 = S0,5 →NP0,2 VP2,5 | Factory payrolls fell in September r3 = VP2,5 →VBD2,3 PP3,5 | Factory payrolls fell in September . . . Lexicon rules r5 = NN0,1 →Factory | Factory payrolls fell in September r6 = NNS1,2 →payrolls | Factory payrolls fell in September . . . (a) PCFG Structure (b) Rules r Figure 1: A parse tree and the corresponding rules over which potentials and features are defined. CFGs which limits unary chains to not have any repeated states. This was done by collapsing all allowed unary chains to single unary rules, and disallowing multiple unary rule applications over the same span.1 We give the details of our binarization scheme in Section 5. Note that there exists a grammar in this class which is weakly equivalent with any arbitrary CFG. 2.2 Computing the Objective Function Our clique potentials take an exponential form. We have a feature function, represented by f(r,s), which returns a vector with the value for each feature. We denote the value of feature fi by fi(r,s) and our model has a corresponding parameter θi for each feature. The clique potential function is then: φ(r|s;θ) = exp∑i θi fi(r,s) (2) The log conditional likelihood of the training data D, with an additional L2 regularization term, is then: L (D;θ) = ∑ (t,s)∈D  ∑ r∈t∑ i θi fi(r,s)  −Zs ! +∑ i θ2 i 2σ 2 (3) And the partial derivatives of the log likelihood, with respect to the model weights are, as usual, the difference between the empirical counts and the model expectations: ∂L ∂θi = ∑ (t,s)∈D  ∑ r∈t fi(r,s)  −Eθ[ fi|s] ! + θi σ 2 (4) 1In our implementation of the inside-outside algorithm, we then need to keep two inside and outside scores for each span: one from before and one from after the application of unary rules. The partition function Zs and the partial derivatives can be efficiently computed with the help of the inside-outside algorithm.2 Zs is equal to the inside score of ROOT over the span of the entire sentence. To compute the partial derivatives, we walk through each rule, and span/split, and add the outside log-score of the parent, the inside log-score(s) of the child(ren), and the log-score for that rule and span/split. Zs is subtracted from this value to get the normalized log probability of that rule in that position. 
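Since the inside pass over feature-based clique potentials underlies both the partition function Zs and the gradient computation just described, a minimal sketch may help make it concrete. This is an illustrative sketch rather than the authors' implementation: the grammar representation, the feature function featfn, and the omission of unary rules (and hence of the unary-chain closure discussed above) are simplifying assumptions, and all scores are kept in log space.

```python
import math
from collections import defaultdict

NEG_INF = float("-inf")

def log_add(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == NEG_INF:
        return b
    if b == NEG_INF:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def rule_log_potential(rule, span, split, sentence, weights, featfn):
    """log phi(r|s; theta) = sum_i theta_i * f_i(r, s), as in Equation 2."""
    feats = featfn(rule, span, split, sentence)   # assumed: dict feature -> value
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def inside_log_z(sentence, lexicon, binary_rules, weights, featfn, root="ROOT"):
    """Inside pass over a binarized grammar; returns log Z_s.

    lexicon:      word -> iterable of POS tags
    binary_rules: iterable of (parent, left_child, right_child) labels
    Unary rules and unary-chain closure are omitted for brevity.
    """
    n = len(sentence)
    inside = defaultdict(lambda: NEG_INF)         # (i, j, label) -> log inside score

    # Width-1 spans: lexicon rules.
    for i, word in enumerate(sentence):
        for tag in lexicon.get(word, ()):
            score = rule_log_potential((tag, word), (i, i + 1), None,
                                       sentence, weights, featfn)
            inside[(i, i + 1, tag)] = log_add(inside[(i, i + 1, tag)], score)

    # Wider spans: binary rules over every split point.
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            for parent, left, right in binary_rules:
                for k in range(i + 1, j):
                    l, r = inside[(i, k, left)], inside[(k, j, right)]
                    if l == NEG_INF or r == NEG_INF:
                        continue
                    score = rule_log_potential((parent, left, right), (i, j), k,
                                               sentence, weights, featfn)
                    inside[(i, j, parent)] = log_add(inside[(i, j, parent)],
                                                     l + r + score)
    return inside[(0, n, root)]                   # log of the partition function Z_s
```

An outside pass of the same shape, combined with these inside scores and Zs, yields the per-rule, per-span probabilities used next for the expected feature counts in Equation 4.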
Using the probabilities of each rule application, over each span/split, we can compute the expected feature values (the second term in Equation 4), by multiplying this probability by the value of the feature corresponding to the weight for which we are computing the partial derivative. The process is analogous to the computation of partial derivatives in linear chain CRFs. The complexity of the algorithm for a particular sentence is O(n3), where n is the length of the sentence. 2.3 Parallelization Unlike (Taskar et al., 2004), our algorithm has the advantage of being easily parallelized (see footnote 7 in their paper). Because the computation of both the log likelihood and the partial derivatives involves summing over each tree individually, the computation can be parallelized by having many clients which each do the computation for one tree, and one central server which aggregates the information to compute the relevant information for a set of trees. Because we use a stochastic optimization method, as discussed in Section 3, we compute the objective for only a small portion of the training data at a time, typically between 15 and 30 sentences. In 2In our case the values in the chart are the clique potentials which are non-negative numbers, but not probabilities. 961 this case the gains from adding additional clients decrease rapidly, because the computation time is dominated by the longest sentences in the batch. 2.4 Chart Prefiltering Training is also sped up by prefiltering the chart. On the inside pass of the algorithm one will see many rules which cannot actually be tiled into complete parses. In standard PCFG parsing it is not worth figuring out which rules are viable at a particular chart position and which are not. In our case however this can make a big difference.We are not just looking up a score for the rule, but must compute all the features, and dot product them with the feature weights, which is far more time consuming. We also have to do an outside pass as well as an inside one, which is sped up by not considering impossible rule applications. Lastly, we iterate through the data multiple times, so if we can compute this information just once, we will save time on all subsequent iterations on that sentence. We do this by doing an insideoutside pass that is just boolean valued to determine which rules are possible at which positions in the chart. We simultaneously compute the features for the possible rules and then save the entire data structure to disk. For all but the shortest of sentences, the disk I/O is easily worth the time compared to recomputation. The first time we see a sentence this method is still about one third faster than if we did not do the prefiltering, and on subsequent iterations the improvement is closer to tenfold. 3 Stochastic Optimization Methods Stochastic optimization methods have proven to be extremely efficient for the training of models involving computationally expensive objective functions like those encountered with our task (Vishwanathan et al., 2006) and, in fact, the on-line backpropagation learning used in the neural network parser of Henderson (2004) is a form of stochastic gradient descent. Standard deterministic optimization routines such as L-BFGS (Liu and Nocedal, 1989) make little progress in the initial iterations, often requiring several passes through the data in order to satisfy sufficient descent conditions placed on line searches. 
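Concretely, a stochastic training loop of this kind can be sketched as follows. The sketch uses the update and gain schedule given in Section 3.2 below (batches drawn with replacement, gain halved after five passes) and stubs out the per-batch gradient, which in this model comes from the inside–outside computation above; the function names and data layout are illustrative assumptions, not the authors' code.

```python
import random

def sgd_train(train_data, init_weights, batch_grad, num_passes=20,
              batch_size=15, eta0=0.1, halving_passes=5):
    """Minimal SGD loop with the gain schedule eta_k = eta0 * tau / (tau + k).

    batch_grad(batch, weights) is assumed to return the gradient of the
    stochastic objective L_hat(D_b; theta), with any prior scaled by b/|D|.
    """
    weights = dict(init_weights)
    updates_per_pass = max(1, len(train_data) // batch_size)
    # Choose tau so the gain is halved after `halving_passes` passes:
    # eta0 * tau / (tau + k) = eta0 / 2  when  k = halving_passes * updates_per_pass.
    tau = float(halving_passes * updates_per_pass)

    k = 0
    for _ in range(num_passes):
        for _ in range(updates_per_pass):
            # Batches are sampled with replacement, so a "pass" just means
            # having seen as many sentences as the training set contains.
            batch = [random.choice(train_data) for _ in range(batch_size)]
            eta_k = eta0 * tau / (tau + k)
            grad = batch_grad(batch, weights)
            for feat, g in grad.items():
                weights[feat] = weights.get(feat, 0.0) - eta_k * g
            k += 1
    return weights
```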
In our experiments SGD converged to a lower objective function value than L-BFGS, however it required far 0 5 10 15 20 25 30 35 40 45 50 −3.5 −3 −2.5 −2 −1.5 −1 −0.5 0 x 10 5 Passes Log Likelihood SGD L−BFGS Figure 2: WSJ15 objective value for L-BFGS and SGD versus passes through the data. SGD ultimately converges to a lower objective value, but does equally well on test data. fewer iterations (see Figure 2) and achieved comparable test set performance to L-BFGS in a fraction of the time. One early experiment on WSJ15 showed a seven time speed up. 3.1 Stochastic Function Evaluation Utilization of stochastic optimization routines requires the implementation of a stochastic objective function. This function, ˆ L is designed to approximate the true function L based off a small subset of the training data represented by Db. Here b, the batch size, means that Db is created by drawing b training examples, with replacement, from the training set D. With this notation we can express the stochastic evaluation of the function as ˆ L (Db;θ). This stochastic function must be designed to ensure that: E h ∑ n i ˆ L (D(i) b ;θ) i = L (D;θ) Note that this property is satisfied, without scaling, for objective functions that sum over the training data, as it is in our case, but any priors must be scaled down by a factor of b/|D|. The stochastic gradient, ∇L (D(i) b ;θ), is then simply the derivative of the stochastic function value. 3.2 Stochastic Gradient Descent SGD was implemented using the standard update: θk+1 = θk −ηk∇L (D(k) b ;θk) 962 And employed a gain schedule in the form ηk = η0 τ τ +k where parameter τ was adjusted such that the gain is halved after five passes through the data. We found that an initial gain of η0 = 0.1 and batch size between 15 and 30 was optimal for this application. 4 Features As discussed in Section 5 we performed experiments on both sentences of length ≤15 and length ≤40. All feature development was done on the length 15 corpus, due to the substantially faster train and test times. This has the unfortunate effect that our features are optimized for shorter sentences and less training data, but we found development on the longer sentences to be infeasible. Our features are divided into two types: lexicon features, which are over words and tags, and grammar features which are over the local subtrees and corresponding span/split (both have access to the entire sentence). We ran two kinds of experiments: a discriminatively trained model, which used only the rules and no other grammar features, and a featurebased model which did make use of grammar features. Both models had access to the lexicon features. We viewed this as equivalent to the more elaborate, smoothed unknown word models that are common in many PCFG parsers, such as (Klein and Manning, 2003; Petrov et al., 2006). We preprocessed the words in the sentences to obtain two extra pieces of information. Firstly, each word is annotated with a distributional similarity tag, from a distributional similarity model (Clark, 2000) trained on 100 million words from the British National Corpus and English Gigaword corpus. Secondly, we compute a class for each word based on the unknown word model of Klein and Manning (2003); this model takes into account capitalization, digits, dashes, and other character-level features. The full set of features, along with an explanation of our notation, is listed in Table 1. 5 Experiments For all experiments, we trained and tested on the Penn treebank (PTB) (Marcus et al., 1993). 
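Returning briefly to the lexicon features described above, the sketch below illustrates how a handful of them can be computed for a single tag–word pair. The helper lookups dist_sim_cluster and unknown_word_class stand in for the Clark (2000) clusters and the Klein and Manning (2003) unknown-word classes and, like the feature names themselves, are assumptions of this sketch rather than the exact templates of Table 1.

```python
def lexicon_features(tag, position, sentence, dist_sim_cluster, unknown_word_class):
    """Return a feature -> value dict for a lexicon rule tag -> word at `position`.

    dist_sim_cluster(word) and unknown_word_class(word) are assumed helpers
    mapping a word to its distributional-similarity cluster and to a class
    based on capitalization, digits, dashes, and similar character features.
    """
    word = sentence[position]
    prev_word = sentence[position - 1] if position > 0 else "<S>"
    next_word = sentence[position + 1] if position + 1 < len(sentence) else "</S>"

    return {
        ("tag", tag): 1.0,
        ("tag+word", tag, word): 1.0,
        ("tag+lc", tag, word.lower()): 1.0,
        ("tag+dscluster", tag, dist_sim_cluster(word)): 1.0,
        ("tag+dscluster-prev", tag, dist_sim_cluster(prev_word)): 1.0,
        ("tag+dscluster-next", tag, dist_sim_cluster(next_word)): 1.0,
        ("tag+unkclass", tag, unknown_word_class(word)): 1.0,
    }
```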
We used Binary Unary Model States Rules Rules WSJ15 1,428 5,818 423 WSJ15 relaxed 1,428 22,376 613 WSJ40 7,613 28,240 823 Table 2: Grammar size for each of our models. the standard splits, training on sections 2 to 21, testing on section 23 and doing development on section 22. Previous work on (non-reranking) discriminative parsing has given results on sentences of length ≤15, but most parsing literature gives results on either sentences of length ≤40, or all sentences. To properly situate this work with respect to both sets of literature we trained models on both length ≤ 15 (WSJ15) and length ≤40 (WSJ40), and we also tested on all sentences using the WSJ40 models. Our results also provide a context for interpreting previous work which used WSJ15 and not WSJ40. We used a relatively simple grammar with few additional annotations. Starting with the grammar read off of the training set, we added parent annotations onto each state, including the POS tags, resulting in rules such as S-ROOT →NP-S VP-S. We also added head tag annotations to VPs, in the same manner as (Klein and Manning, 2003). Lastly, for the WSJ40 runs we used a simple, right branching binarization where each active state is annotated with its previous sibling and first child. This is equivalent to children of a state being produced by a second order Markov process. For the WSJ15 runs, each active state was annotated with only its first child, which is equivalent to a first order Markov process. See Table 5 for the number of states and rules produced. 5.1 Experiments For both WSJ15 and WSJ40, we trained a generative model; a discriminative model, which used lexicon features, but no grammar features other than the rules themselves; and a feature-based model which had access to all features. For the length 15 data we also did experiments in which we relaxed the grammar. By this we mean that we added (previously unseen) rules to the grammar, as a means of smoothing. We chose which rules to add by taking existing rules and modifying the parent annotation on the parent of the rule. We used stochastic gradient descent for 963 Table 1: Lexicon and grammar features. w is the word and t the tag. r represents a particular rule along with span/split information; ρ is the rule itself, rp is the parent of the rule; wb, ws, and we are the first, first after the split (for binary rules) and last word that a rule spans in a particular context. All states, including the POS tags, are annotated with parent information; b(s) represents the base label for a state s and p(s) represents the parent annotation on state s. ds(w) represents the distributional similarity cluster, and lc(w) the lower cased version of the word, and unk(w) the unknown word class. Lexicon Features Grammar Features t Binary-specific features b(t) ρ ⟨t,w⟩ ⟨b(p(rp)),ds(ws)⟩ ⟨b(p(rp)),ds(ws−1,dsws)⟩ ⟨t,lc(w)⟩ ⟨b(p(rp)),ds(we)⟩ PP feature: ⟨b(t),w⟩ unary? if right child is a PP then ⟨r,ws⟩ ⟨b(t),lc(w)⟩ simplified rule: VP features: ⟨t,ds(w)⟩ base labels of states if some child is a verb tag, then rule, ⟨t,ds(w−1)⟩ dist sim bigrams: with that child replaced by the word ⟨t,ds(w+1)⟩ all dist. sim. 
bigrams below ⟨b(t),ds(w)⟩ rule, and base parent state Unaries which span one word: ⟨b(t),ds(w−1)⟩ dist sim bigrams: ⟨b(t),ds(w+1)⟩ same as above, but trigrams ⟨r,w⟩ ⟨p(t),w⟩ heavy feature: ⟨r,ds(w)⟩ ⟨t,unk(w)⟩ whether the constituent is “big” ⟨b(p(r)),w⟩ ⟨b(t),unk(w)⟩ as described in (Johnson, 2001) ⟨b(p(r)),ds(w)⟩ these experiments; the length 15 models had a batch size of 15 and we allowed twenty passes through the data.3 The length 40 models had a batch size of 30 and we allowed ten passes through the data. We used development data to decide when the models had converged. Additionally, we provide generative numbers for training on the entire PTB to give a sense of how much performance suffered from the reduced training data (generative-all in Table 4). The full results for WSJ15 are shown in Table 3 and for WSJ40 are shown in Table 4. The WSJ15 models were each trained on a single Dual-Core AMD OpteronTM using three gigabytes of RAM and no parallelization. The discriminatively trained generative model (discriminative in Table 3) took approximately 12 minutes per pass through the data, while the feature-based model (feature-based in Table 3) took 35 minutes per pass through the data. The feature-based model with the relaxed grammar (relaxed in Table 3) took about four times as long as the regular feature-based model. The discrimina3Technically we did not make passes through the data, because we sampled with replacement to get our batches. By this we mean having seen as many sentences as are in the data, despite having seen some sentences multiple times and some not at all. tively trained generative WSJ40 model (discriminative in Table 4) was trained using two of the same machines, with 16 gigabytes of RAM each for the clients.4 It took about one day per pass through the data. The feature-based WSJ40 model (featurebased in Table 4) was trained using four of these machines, also with 16 gigabytes of RAM each for the clients. It took about three days per pass through the data. 5.2 Discussion The results clearly show that gains came from both the switch from generative to discriminative training, and from the extensive use of features. In Figure 3 we show for an example from section 22 the parse trees produced by our generative model and our feature-based discriminative model, and the correct parse. The parse from the feature-based model better exhibits the right branching tendencies of English. This is likely due to the heavy feature, which encourages long constituents at the end of the sentence. It is difficult for a standard PCFG to learn this aspect of the English language, because the score it assigns to a rule does not take its span into account. 4The server does almost no computation. 964 Model P R F1 Exact Avg CB 0 CB P R F1 Exact Avg CB 0 CB development set – length ≤15 test set – length ≤15 Taskar 2004 89.7 90.2 90.0 – – – 89.1 89.1 89.1 – – – Turian 2007 – – – – – – 89.6 89.3 89.4 – – – generative 86.9 85.8 86.4 46.2 0.34 81.2 87.6 85.8 86.7 49.2 0.33 81.9 discriminative 89.1 88.6 88.9 55.5 0.26 85.5 88.9 88.0 88.5 56.6 0.32 85.0 feature-based 90.4 89.3 89.9 59.5 0.24 88.3 91.1 90.2 90.6 61.3 0.24 86.8 relaxed 91.2 90.3 90.7 62.1 0.24 88.1 91.4 90.4 90.9 62.0 0.22 87.9 Table 3: Development and test set results, training and testing on sentences of length ≤15 from the Penn treebank. 
Model P R F1 Exact Avg CB 0 CB P R F1 Exact Avg CB 0 CB test set – length ≤40 test set – all sentences Petrov 2007 – – 88.8 – – – – – 88.3 – – – generative 83.5 82.0 82.8 25.5 1.57 53.4 82.8 81.2 82.0 23.8 1.83 50.4 generative-all 83.6 82.1 82.8 25.2 1.56 53.3 – – – – – – discriminative 85.1 84.5 84.8 29.7 1.41 55.8 84.2 83.7 83.9 27.8 1.67 52.8 feature-based 89.2 88.8 89.0 37.3 0.92 65.1 88.2 87.8 88.0 35.1 1.15 62.3 Table 4: Test set results, training on sentences of length ≤40 from the Penn treebank. The generative-all results were trained on all sentences regardless of length 6 Comparison With Related Work The most similar related work is (Johnson, 2001), which did discriminative training of a generative PCFG. The model was quite similar to ours, except that it did not incorporate any features and it required the parameters (which were just scores for rules) to be locally normalized, as with a generatively trained model. Due to training time, they used the ATIS treebank corpus , which is much smaller than even WSJ15, with only 1,088 training sentences, 294 testing sentences, and an average sentence length of around 11. They found no significant difference in performance between their generatively and discriminatively trained parsers. There are two probable reasons for this result. The training set is very small, and it is a known fact that generative models tend to work better for small datasets and discriminative models tend to work better for larger datasets (Ng and Jordan, 2002). Additionally, they made no use of features, one of the primary benefits of discriminative learning. Taskar et al. (2004) took a large margin approach to discriminative learning, but achieved only small gains. We suspect that this is in part due to the grammar that they chose – the grammar of (Klein and Manning, 2003), which was hand annotated with the intent of optimizing performance of a PCFG. This grammar is fairly sparse – for any particular state there are, on average, only a few rules with that state as a parent – so the learning algorithm may have suffered because there were few options to discriminate between. Starting with this grammar we found it difficult to achieve gains as well. Additionally, their long training time (several months for WSJ15, according to (Turian and Melamed, 2006)) made feature engineering difficult; they were unable to really explore the space of possible features. More recent is the work of (Turian and Melamed, 2006; Turian et al., 2007), which improved both the training time and accuracy of (Taskar et al., 2004). They define a simple linear model, use boosted decision trees to select feature conjunctions, and a line search to optimize the parameters. They use an agenda parser, and define their atomic features, from which the decision trees are constructed, over the entire state being considered. While they make extensive use of features, their setup is much more complex than ours and takes substantially longer to train – up to 5 days on WSJ15 – while achieving only small gains over (Taskar et al., 2004). The most recent similar research is (Petrov et al., 2007). They also do discriminative parsing of length 40 sentences, but with a substantially different setup. 
Following up on their previous work (Petrov et al., 2006) on grammar splitting, they do discriminative 965 S S NP PRP He VP VBZ adds NP DT This VP VBZ is RB n’t NP NP CD 1987 VP VBN revisited S NP PRP He VP VBZ adds S NP DT This VP VBZ is RB n’t NP CD 1987 VP VBN revisited S NP PRP He VP VBZ adds S NP DT This VP VBZ is RB n’t NP NP CD 1987 VP VBN revisited (a) generative output (b) feature-based discriminative output (c) gold parse Figure 3: Example output from our generative and feature-based discriminative models, along with the correct parse. parsing with latent variables, which requires them to optimize a non-convex function. Instead of using a stochastic optimization technique, they use LBFGS, but do coarse-to-fine pruning to approximate their gradients and log likelihood. Because they were focusing on grammar splitting they, like (Johnson, 2001), did not employ any features, and, like (Taskar et al., 2004), they saw only small gains from switching from generative to discriminative training. 7 Conclusions We have presented a new, feature-rich, dynamic programming based discriminative parser which is simpler, more effective, and faster to train and test than previous work, giving us new state-of-the-art performance when training and testing on sentences of length ≤15 and the first results for such a parser trained and tested on sentences of length ≤40. We also show that the use of SGD for training CRFs performs as well as L-BFGS in a fraction of the time. Other recent work on discriminative parsing has neglected the use of features, despite their being one of the main advantages of discriminative training methods. Looking at how other tasks, such as named entity recognition and part-of-speech tagging, have evolved over time, it is clear that greater gains are to be gotten from developing better features than from better models. We have provided just such a framework for improving parsing performance. Acknowledgments Many thanks to Teg Grenager and Paul Heymann for their advice (and their general awesomeness), and to our anonymous reviewers for helpful comments. This paper is based on work funded in part by the Defense Advanced Research Projects Agency through IBM, by the Disruptive Technology Office (DTO) Phase III Program for Advanced Question Answering for Intelligence (AQUAINT) through Broad Agency Announcement (BAA) N61339-06R-0034, and by a Scottish Enterprise EdinburghStanford Link grant (R37588), as part of the EASIE project. References Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and maxent discriminative reranking. In ACL 43, pages 173–180. Alexander Clark. 2000. Inducing syntactic categories by context distribution clustering. In Proc. of Conference on Computational Natural Language Learning, pages 91–94, Lisbon, Portugal. Trevor Cohn and Philip Blunsom. 2005. Semantic role labelling with tree conditional random fields. In CoNLL 2005. Michael Collins. 2000. Discriminative reranking for natural language parsing. In ICML 17, pages 175–182. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 6(8):451–455. James Henderson. 2004. Discriminative training of a neural network statistical parser. In ACL 42, pages 96– 103. Mark Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Meeting of the Association for Computational Linguistics, pages 314–321. Florent Jousse, R´emi Gilleron, Isabelle Tellier, and Marc Tommasi. 2006. Conditional Random Fields for XML 966 trees. 
In ECML Workshop on Mining and Learning in Graphs. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the Association of Computational Linguistics (ACL). John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In ICML 2001, pages 282–289. Morgan Kaufmann, San Francisco, CA. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Math. Programming, 45(3, (Ser. B)):503–528. Christopher D. Manning and Hinrich Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Andrew Ng and Michael Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Advances in Neural Information Processing Systems (NIPS). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In ACL 44/COLING 21, pages 433–440. Slav Petrov, Adam Pauls, and Dan Klein. 2007. Discriminative log-linear grammars with latent variables. In NIPS. Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In EMNLP 2, pages 1–10. Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21:165– 202. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher D. Manning. 2004. Max-margin parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Joseph Turian and I. Dan Melamed. 2006. Advances in discriminative parsing. In ACL 44, pages 873–880. Joseph Turian, Ben Wellington, and I. Dan Melamed. 2007. Scalable discriminative learning for natural language parsing and translation. In Advances in Neural Information Processing Systems 19, pages 1409–1416. MIT Press. S. V. N. Vishwanathan, Nichol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In ICML 23, pages 969–976. 967
Proceedings of ACL-08: HLT, pages 89–96, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Measure Word Generation for English-Chinese SMT Systems Dongdong Zhang1, Mu Li1, Nan Duan2, Chi-Ho Li1, Ming Zhou1 1Microsoft Research Asia 2Tianjin University Beijing, China Tianjin, China {dozhang,muli,v-naduan,chl,mingzhou}@microsoft.com Abstract Measure words in Chinese are used to indicate the count of nouns. Conventional statistical machine translation (SMT) systems do not perform well on measure word generation due to data sparseness and the potential long distance dependency between measure words and their corresponding head words. In this paper, we propose a statistical model to generate appropriate measure words of nouns for an English-to-Chinese SMT system. We model the probability of measure word generation by utilizing lexical and syntactic knowledge from both source and target sentences. Our model works as a post-processing procedure over output of statistical machine translation systems, and can work with any SMT system. Experimental results show our method can achieve high precision and recall in measure word generation. 1 Introduction In linguistics, measure words (MW) are words or morphemes used in combination with numerals or demonstrative pronouns to indicate the count of nouns1, which are often referred to as head words (HW). Chinese measure words are grammatical units and occur quite often in real text. According to our survey on the measure word distribution in the Chinese Penn Treebank and the test datasets distributed by Linguistic Data Consortium (LDC) for Chinese-to-English machine translation evaluation, the average occurrence is 0.505 and 0.319 measure 1 The uncommon cases of verbs are not considered. words per sentence respectively. Unlike in Chinese, there is no special set of measure words in English. Measure words are usually used for mass nouns and any semantically appropriate nouns can function as the measure words. For example, in the phrase three bottles of water, the word bottles acts as a measure word. Countable nouns are almost never modified by measure words2. Numerals and indefinite articles are directly followed by countable nouns to denote the quantity of objects. Therefore, in the English-to-Chinese machine translation task we need to take additional efforts to generate the missing measure words in Chinese. For example, when translating the English phrase three books into the Chinese phrases “三本书”, where three corresponds to the numeral “三” and books corresponds to the noun “书”, the Chinese measure word “本” should be generated between the numeral and the noun. In most statistical machine translation (SMT) models (Och et al., 2004; Koehn et al., 2003; Chiang, 2005), some of measure words can be generated without modification or additional processing. For example, in above translation, the phrase translation table may suggest the word three be translated into “三”, “三本”, “三只”, etc, and the word books into “书”, “书本”, “名册” (scroll), etc. Then the SMT model selects the most likely combination “三本书” as the final translation result. In this example, a measure word candidate set consisting of “本” and “只” can be generated by bilingual phrases (or synchronous translation rules), and the best measure word “本” from the measure 2 There are some exceptional cases, such as “100 head of cattle”. But they are very uncommon. 89 word candidate set can be selected by the SMT decoder. 
However, as we will show below, existing SMT systems do not deal well with the measure word generation in general due to data sparseness and long distance dependencies between measure words and their corresponding head words. Due to the limited size of bilingual corpora, many measure words, as well as the collocations between a measure and its head word, cannot be well covered by the phrase translation table in an SMT system. Moreover, Chinese measure words often have a long distance dependency to their head words which makes language model ineffective in selecting the correct measure words from the measure word candidate set. For example, in Figure 1 the distance between the measure word “项” and its head word “工程” (undertaking) is 15. In this case, an n-gram language model with n<15 cannot capture the MW-HW collocation. Table 1 shows the relative position’s distribution of head words around measure words in the Chinese Penn Treebank, where a negative position indicates that the head word is to the left of the measure word and a positive position indicates that the head word is to the right of the measure word. Although lots of measure words are close to the head words they modify, more than sixteen percent of measure words are far away from their corresponding head words (the absolute distance is more than 5). To overcome the disadvantage of measure word generation in a general SMT system, this paper proposes a dedicated statistical model to generate measure words for English-to-Chinese translation. We model the probability of measure word generation by utilizing rich lexical and syntactic knowledge from both source and target sentences. Three steps are involved in our method to generate measure words: Identifying the positions to generate measure words, collecting the measure word candidate set and selecting the best measure word. Our method is performed as a post-processing procedure of the output of SMT systems. The advantage is that it can be easily integrated into any SMT system. Experimental results show our method can significantly improve the quality of measure word generation. We also compared the performance of our model based on different contextual information, and show that both large-scale monolingual data and parallel bilingual data can be helpful to generate correct measure words. Position Occurrence Position Occurrence 1 39.5% -1 0 2 15.7% -2 0 3 4.7% -3 8.7% 4 1.4% -4 6.8% 5 2.1% -5 4.3% >5 8.8% <-5 8.0% Table 1. Position distribution of head words 2 Our Method 2.1 Measure word generation in Chinese In Chinese, measure words are obligatory in certain contexts, and the choice of measure word usually depends on the head word’s semantics (e.g., shape or material). The set of Chinese measure words is a relatively close set and can be classified into two categories based on whether they have a corresponding English translation. Those not having an English counterpart need to be generated during translation. For those having English translations, such as “米” (meter), “吨” (ton), we just use the translation produced by the SMT system itself. According to our survey, about 70.4% of measure words in the Chinese Penn Treebank need Figure 1. Example of long distance dependency between MW and its modified HW 浦东/开发/ 开放/ 是/ 一 工程 Pudong 's development and opening up is a century-spanning /跨/世 纪/ for vigorously promoting shanghai and constructing a modern economic , trade , and financial center undertaking 振兴/上海/ ,/ 建设 /现代化 /经济 / 、/ 贸易/ 、 /金融/ 中心/ 的/ 项 . 
。 90 to be explicitly generated during the translation process. In Chinese, there are generally stable linguistic collocations between measure words and their head words. Once the head word is determined, the collocated measure word can usually be selected accordingly. However, there is no easy way to identify head words in target Chinese sentences since for most of the time an SMT output is not a well formed sentence due to translation errors. Mistake of head word identification may cause low quality of measure word generation. In addition, sometimes the head word itself is not enough to determine the measure word. For example, in Chinese sentences “他家有5 口人” (there are five people in his family) and “总共有5 个人参加了会议” (a total of five people attended the meeting), where “人” (people) is the head word collocated with two different measure words “口” and “个”, we cannot determine the measure word just based on the head word “人”. 2.2 Framework In our framework, a statistical model is used to generate measure words. The model is applied to SMT system outputs as a post-processing procedure. Given an English source sentence, an SMT decoder produces a target Chinese translation, in which positions for measure word generation are identified. Based on contextual information contained in both input source sentence and SMT system’s output translation, a measure word candidate set M is constructed. Then a measure word selection model is used to select the best one from M. Finally, the selected measure word is inserted into previously determined measure word slot in the SMT system’s output, yielding the final translation result. 2.3 Measure word position identification To identify where to generate measure words in the SMT outputs, all positions after numerals are marked at first since measure words often follow numerals. For other cases in which measure words do not follow numerals (e.g., “许多/台/电脑” (many computers), where “台” is a measure word and “电脑” (computers) is its head word), we just mine the set of words which can be followed by measure words from training corpus. Most of words in the set are pronouns such as “该” (this), “那” (that) and “若干” (several). In the SMT output, the positions after these words are also identified as candidate positions to generate measure words. 2.4 Candidate measure word generation To avoid high computation cost, the measure word candidate set only consists of those measure words which can form valid MW-HW collocations with their head words. We assume that all the surrounding words within a certain window size centered on the given position to generate a measure word are potential head words, and require that a measure word candidate must collocate with at least one of the surrounding words. Valid MW-HW collocations are mined from the training corpus and a separate lexicon resource. There is a possibility that the real head word is outside the window of given size. To address this problem, we also use a source window centered on the position ps, which is aligned to the target measure word position pt. The link between ps and pt can be inferred from SMT decoding result. Thus, the chance of capturing the best measure word increases with the aid of words located in the source window. For example, given the window size of 10, although the target head word “工程” (undertaking) in Figure 1 is located outside the target window, its corresponding source head word undertaking can be found in the source window. 
Based on this source head word, the best measure word “项” will be included into the candidate measure word set. This example shows how bilingual information can enrich the measure word candidate set. Another special word {NULL} is always included in the measure word candidate set. {NULL} represents those measure words having a corresponding English translation as mentioned in Section 2.1. If {NULL} is selected, it means that we need not generate any measure word at the current position. Thus, no matter what kinds of measure words they are, we can handle the issue of measure word generation in a unified framework. 2.5 Measure word selection model After obtaining the measure word candidate set M, a measure word selection model is employed to select the best one from M. Given the contextual information C in both source window and target 91 window, we model the measure word selection as finding the measure word m* with highest posterior probability given C: 𝑚∗= argmax௠∈ெ𝑃(𝑚|𝐶) (1) To leverage the collocation knowledge between measure words and head words, we extend (1) by introducing a hidden variable h where H represents all candidate head words located within the target window: 𝑚∗= argmax௠∈ெ∑ 𝑃(𝑚, ℎ|𝐶) ௛∈ு = argmax௠∈ெ∑ 𝑃(ℎ|𝐶)𝑃(𝑚|ℎ, 𝐶) ௛∈ு (2) In (2), 𝑃(ℎ|𝐶) is the head word selection probability and is empirically estimated according to the position distribution of head words in Table 1. 𝑃(𝑚|ℎ, 𝐶) is the conditional probability of m given both h and C. We use maximum entropy model to compute 𝑃(𝑚|ℎ, 𝐶): 𝑃(𝑚|ℎ, 𝐶) = exp(∑𝜆𝑖 𝑓𝑖(𝑚,𝐶) 𝑖 ) ∑ exp(∑𝜆𝑖 𝑓𝑖(𝑚′,𝐶) 𝑖 ) 𝑚′∈𝑀 (3) Based on the different features used in the computation of 𝑃(𝑚|ℎ, 𝐶) , we can train two submodels – a monolingual model (Mo-ME) which only uses monolingual (Chinese) features and a bilingual model (Bi-ME) which integrates bilingual features. The advantage of the Mo-ME model is that it can employ an unlimited monolingual target training corpora, while the Bi-ME model leverages rich features including both the source and target information and may improve the precision. Compared to the Mo-ME model, the Bi-ME model suffers from small scale of parallel training data. To leverage advantages of both models, we use a combined model Co-ME, by linearly combing the monolingual and bilingual sub-models: 𝑚∗= argmax௠∈ெ𝜆𝑃ெ௢ିொ + (1 −𝜆)𝑃஻௜ିொ where 𝜆∈[0,1] is a free parameter that can be optimized on held-out data and it was set to 0.39 in our experiments. 2.6 Features The computation of Formula (3) involves the features listed in Table 2 where the Mo-ME model only employs target features and the Bi-ME model leverages both target features and source features. For target features, n-gram language model score is defined as the sum of log n-gram probabilities within the target window after the measure word is filled into the measure word slot. The MW-HW collocation feature is defined to be a function f1 to capture the collocation between a measure word and a head word. For features of surrounding words, the feature function f2 is defined as 1 if a certain word exists at a certain position, otherwise 0. For example, f2(人,-2)=1 means the second word on the left is “人”. f2(书,3)=1 means the third word on the right is “书”. For punctuation position feature function f3, the feature value is 1 when there is a punctuation following the measure word, which indicates the target head word may appear to the left of measure word. Otherwise, it is 0. 
In practice, we can also ignore the position part, i.e., a word appears anywhere within the window is viewed as the same feature. Target features Source features n-gram language model score MW-HW collocation MW-HW collocation surrounding words surrounding words source head word punctuation position POS tags Table 2. Features used in our model For source language side features, MW-HW collocation and surrounding words are used in a similar way as does with target features. The source head word feature is defined to be a function f4 to indicate whether a word ei is the source head word in English according to a parse tree of the source sentence. Similar to the definition of lexical features, we also use a set of features based on POS tags of source language. 3 Model Training and Application 3.1 Training We parsed English and Chinese sentences to get training samples for measure word generation model. Based on the source syntax parse tree, for each measure word, we identified its head word by using a toolkit from (Chiang and Bikel, 2002) which can heuristically identify head words for sub-trees. For the bilingual corpus, we also perform word alignment to get correspondences between source and target words. Then, the collocation between measure words and head words and their surrounding contextual information are extracted to train the measure word selection models. According to word alignment results, we classify 92 measure words into two classes based on whether they have non-null translations. We map Chinese measure words having non-null translations to a unified symbol {NULL} as mentioned in Section 2.4, indicating that we need not generate these kind of measure words since they can be translated from English. In our work, the Berkeley parser (Petrov and Klein, 2007) was employed to extract syntactic knowledge from the training corpus. We ran GIZA++ (Och and Ney, 2000) on the training corpus in both directions with IBM model 4, and then applied the refinement rule described in (Koehn et al., 2003) to obtain a many-to-many word alignment for each sentence pair. We used the SRI Language Modeling Toolkit (Stolcke, 2002) to train a fivegram model with modified Kneser-Ney smoothing (Chen and Goodman, 1998). The Maximum Entropy training toolkit from (Zhang, 2006) was employed to train the measure word selection model. 3.2 Measure word generation As mentioned in previous sections, we apply our measure word generation module into SMT output as a post-processing step. Given a translation from an SMT system, we first determine the position pt at which to generate a Chinese measure word. Centered on pt, a surrounding word window with specified size is determined. From translation alignments, the corresponding source position ps aligned to pt can be referred. In the same way, a source window centered on ps is determined as well. Then, contextual information within the windows in the source and the target sentence is extracted and fed to the measure word selection model. Meanwhile, the candidate set is obtained based on words in both windows. Finally, each measure word in the candidate set is inserted to the position pt, and its score is calculated based on the models presented in Section 2.5. The measure word with the highest probability will be chosen. There are two reasons why we perform measure word generation for SMT systems as a postprocessing step. One is that in this way our method can be easily applied to any SMT system. 
The other is that we can leverage both source and target information during the measure word generation process. We do not integrate our measure word generation module into the SMT decoder since there is only little target contextual information available during SMT decoding. Moreover, as we will show in experiment section, a pre-processing method does not work well when only source information is available. 4 Experiments 4.1 Data In the experiments, the language model is a Chinese 5-gram language model trained with the Chinese part of the LDC parallel corpus and the Xinhua part of the Chinese Gigaword corpus with about 27 million words. We used an SMT system similar to Chiang (2005), in which FBIS corpus is used as the bilingual training data. The training corpus for Mo-ME model consists of the Chinese Peen Treebank and the Chinese part of the LDC parallel corpus with about 2 million sentences. The Bi-ME model is trained with FBIS corpus, whose size is smaller than that used in Mo-ME model training. We extracted both development and test data set from years of NIST Chinese-to-English evaluation data by filtering out sentence pairs not containing measure words. The development set is extracted from NIST evaluation data from 2002 to 2004, and the test set consists of sentence pairs from NIST evaluation data from 2005 to 2006. There are 759 testing cases for measure word generation in our test data consisting of 2746 sentence pairs. We use the English sentences in the data sets as input to the SMT decoder, and apply our proposed method to generate measure words for the output from the decoder. Measure words in Chinese sentences of the development and test sets are used as references. When there are more than one measure words acceptable at some places, we manually augment the references with multiple acceptable measure words. 4.2 Baseline Our baseline is the SMT output where measure words are generated by a Hiero-like SMT decoder as discussed in Section 1. Due to noises in the Chinese translations introduced by the SMT system, we cannot correctly identify all the positions to generate measure words. Therefore, besides precision we examine recall in our experiments. 4.3 Evaluation over SMT output Table 3 and Table 4 show the precision and recall of our measure word generation method. From the 93 experimental results, the Mo-ME, Bi-ME and CoME models all outperform the baseline. Compared with the baseline, the Mo-ME method takes advantage of a large size monolingual training corpus and reduces the data sparseness problem. The advantage of the Bi-ME model is being able to make full use of rich knowledge from both source and target sentences. Also as shown in Table 3 and Table 4, the Co-ME model always achieve the best results when using the same window size since it leverages the advantage of both the Mo-ME and the Bi-ME models. Wsize Baseline Mo-ME Bi-ME Co-ME 6 54.82% 64.29% 67.15% 67.66% 8 64.93% 68.50% 69.00% 10 64.72% 69.40% 69.58% 12 65.46% 69.40% 69.76% 14 65.61% 69.69% 70.03% Table 3. Precision over SMT output Wsize Baseline Mo-ME Bi-ME Co-ME 6 45.61% 51.48% 53.69% 54.09% 8 51.98% 54.75% 55.14% 10 51.81% 55.44% 55.58% 12 52.38% 55.44% 55.72% 14 52.50% 55.67% 55.93% Table 4. Recall over SMT output We can see that the Bi-ME model can achieve better results than the Mo-ME model in both recall and precision metrics although only a small sized bilingual corpus is used for Bi-ME model training. 
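To make the combined model concrete, the sketch below scores each candidate with the selection model of Section 2.5: a head-word prior P(h|C), a maximum-entropy conditional P(m|h,C), and a linear interpolation of the monolingual and bilingual sub-models with λ = 0.39. The trained feature functions and weights are stubbed out, and the function names here are illustrative assumptions rather than the interfaces of the toolkits actually used.

```python
import math

def maxent_prob(mw, head, context, candidates, weights, featfn):
    """P(m | h, C) under the log-linear model of Equation 3."""
    def score(m):
        return sum(weights.get(f, 0.0) * v
                   for f, v in featfn(m, head, context).items())
    scores = {m: score(m) for m in candidates}
    shift = max(scores.values())                     # for numerical stability
    z = sum(math.exp(s - shift) for s in scores.values())
    return math.exp(scores[mw] - shift) / z

def model_prob(mw, context, candidates, head_candidates, head_prior,
               weights, featfn):
    """P(m | C) = sum_h P(h | C) * P(m | h, C), as in Equation 2."""
    return sum(head_prior(h, context) *
               maxent_prob(mw, h, context, candidates, weights, featfn)
               for h in head_candidates)

def select_measure_word(candidates, context, head_candidates, head_prior,
                        mono_model, bi_model, lam=0.39):
    """Co-ME selection: argmax_m  lam * P_Mo(m|C) + (1 - lam) * P_Bi(m|C).

    mono_model / bi_model are assumed (weights, featfn) pairs for the Mo-ME
    and Bi-ME sub-models; a candidate set containing the special {NULL}
    token means 'generate no measure word at this position'.
    """
    def combined(m):
        p_mo = model_prob(m, context, candidates, head_candidates, head_prior,
                          *mono_model)
        p_bi = model_prob(m, context, candidates, head_candidates, head_prior,
                          *bi_model)
        return lam * p_mo + (1.0 - lam) * p_bi
    return max(candidates, key=combined)
```

As the results above show, the bilingual sub-model outperforms the monolingual one in this combination despite its much smaller training corpus.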
The reason is that the Mo-ME model cannot correctly handle the cases where head words are located outside the target window. However, due to word order differences between English and Chinese, when target head words are outside the target window, their corresponding source head words might be within the source window. The capacity of capturing head words is improved when both source and target windows are used, which demonstrates that bilingual knowledge is useful for measure word generation. We compare the results for each model with different window sizes. Larger window size can lead to better results as shown in Table 3 and Table 4 since more contextual knowledge is used to model measure word generation. However, enlarging the window size does not bring significant improvements, The major reason is that even a small window size is already able to cover most of measure word collocations, as indicated by the position distribution of head words in Table 1. The quality of the SMT output also affects the quality of measure word generation since our method is performed in a post-processing step over the SMT output. Although translation errors degrade the measure word generation accuracy, we achieve about 15% improvement in precision and a 10% increase in recall over baseline. We notice that the recall is relatively lower. Part of the reason is some positions to generate measure words are not successfully identified due to translation errors. In addition to precision and recall, we also evaluate the Bleu score (Papineni et al., 2002) changes before and after applying our measure word generation method to the SMT output. For our test data, we only consider sentences containing measure words for Bleu score evaluation. Our measure word generation step leads to a Bleu score improvement of 0.32 where the window size is set to 10, which shows that it can improve the translation quality of an English-to-Chinese SMT system. 4.4 Evaluation over reference data To isolate the impact of the translation errors in SMT output on the performance of our measure word generation model, we conducted another experiment with reference bilingual sentences in which measure words in Chinese sentences are manually removed. This experiment can show the performance upper bound of our method without interference from an SMT system. Table 5 shows the results. Compared to the results in Table 3, the precision improvement in the Mo-ME model is larger than that in the Bi-ME model, which shows that noisy translation of the SMT system has more serious influence on the Mo-ME model than the Bi-ME model. This also indicates that source information without noises is helpful for measure word generation. Wsize Mo-ME Bi-ME Co-ME 6 71.63% 74.92% 75.72% 8 73.80% 75.48% 76.20% 10 73.80% 74.76% 75.48% 12 73.80% 75.24% 75.96% 14 73.56% 75.48% 76.44% Table 5. Results over reference data 94 4.5 Impacts of features In this section, we examine the contribution of both target language based features and source language based features in our model. Table 6 and Table 7 show the precision and recall when using different features. The window size is set to 10. 
In the tables, Lm denotes the n-gram language model feature, Tmh denotes the feature of collocation between target head words and the candidate measure word, Smh denotes the feature of collocation between source head words and the candidate measure word, Hs denotes the feature of source head word selection, Punc denotes the feature of target punctuation position, Tlex denotes surrounding word features in translation, Slex denotes surrounding word features in source sentence, and Pos denotes Part-Of-Speech feature. Feature setting Precision Recall Baseline 54.82% 45.61% Lm 51.11% 41.24% +Tmh 61.43% 49.22% +Punc 62.54% 50.08% +Tlex 64.80% 51.87% Table 6. Feature contribution in Mo-ME model Feature setting Precision Recall Baseline 54.82% 45.61% Lm 51.11% 41.24% +Tmh+Smh 64.50% 51.64% +Hs 65.32% 52.26% +Punc 66.29% 53.10% +Pos 66.53% 53.25% +Tlex 67.50% 54.02% +Slex 69.52% 55.54% Table 7. Feature contribution in Bi-ME model The experimental results show that all the features can bring incremental improvements. The method with only Lm feature performs worse than the baseline. However, with more features integrated, our method outperforms the baseline, which indicates each kind of features we selected is useful for measure word generation. According to the results, the feature of MW-HW collocation has much contribution to reducing the selection error of measure words given head words. The contribution of Slex feature explains that other surrounding words in source sentence are also helpful since head word determination in source language might be incorrect due to errors in English parse trees. Meanwhile, the contribution from Smh, Hs and Slex features demonstrates that bilingual knowledge can play an important role for measure word generation. Compared with lexicalized features, we do not get much benefit from the Pos features. 4.6 Error analysis We conducted an error analysis on 100 randomly selected sentences from the test data. There are four major kinds of errors as listed in Table 8. Most errors are caused by failures in finding positions to generate measure words. The main reason for this is some hint information used to identify measure word positions is missing in the noisy output of SMT systems. Two kinds of errors are introduced by incomplete head word and MW-HW collocation coverage, which can be solved by enlarging the size of training corpus. There are also head word selection errors due to incorrect syntax parsing. Error type Ratio unseen head word 32.14% unseen MW-HW collocation 10.71% missing MW position 39.29% incorrect HW selection 10.71% others 7.14% Table 8. Error distribution 4.7 Comparison with other methods In this section we compare our statistical methods with the pre-processing method and the rule-based methods for measure word generation in a translation task. In pre-processing method, only source language information is available. Given a source sentence, the corresponding syntax parse tree Ts is first constructed with an English parser. Then the preprocessing method chooses the source head word hs based on Ts. The candidate measure word with the highest probability collocated with hs is selected as the best result, where the measure word candidate set corresponding to each head word is mined over a bilingual training corpus in advance. We achieved precision 58.62% and recall 49.25%, which are worse than the results of our postprocessing based methods. The weakness of the pre-processing method is twofold. 
One problem is data sparseness with respect to collocations be95 tween English head words and Chinese measure words. The other problem comes from the English head word selection error introduced by using source parse trees. We also compared our method with a wellknown rule-based machine translation system – SYSTRAN3. We translated our test data with SYSTRAN’s English-to-Chinese translation engine. The precision and recall are 63.82% and 51.09% respectively, which are also lower than our method. 5 Related Work Most existing rule-based English-to-Chinese MT systems have a dedicated module handling measure word generation. In general a rule-based method uses manually constructed rule patterns to predict measure words. Like most rule based approaches, this kind of system requires lots of human efforts of experienced linguists and usually cannot easily be adapted to a new domain. The most relevant work based on statistical methods to our research might be statistical technologies employed to model issues such as morphology generation (Minkov et al., 2007). 6 Conclusion and Future Work In this paper we propose a statistical model for measure word generation for English-to-Chinese SMT systems, in which contextual knowledge from both source and target sentences is involved. Experimental results show that our method not only achieves high precision and recall for generating measure words, but also improves the quality of English-to-Chinese SMT systems. In the future, we plan to investigate more features and enlarge coverage to improve the quality of measure word generation, especially reduce the errors found in our experiments. Acknowledgements Special thanks to David Chiang, Stephan Stiller and the anonymous reviewers for their feedback and insightful comments. References Stanley F. Chen and Joshua Goodman. 1998. An Empirical study of smoothing techniques for language 3 http://www.systransoft.com/ modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology, 1998. David Chiang and Daniel M. Bikel. 2002. Recovering latent information in treebanks. Proceedings of COLING '02, 2002. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263-270. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL 2003, pages 127-133. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Proceedings of 45th Annual Meeting of the ACL, pages 128-135. Franz J. Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of 38th Annual Meeting of the ACL, pages 440-447. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417-449. Kishore Papineni, Salim Roukos, ToddWard, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the ACL, pages 311-318. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLTNAACL, 2007. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of International Conference on Spoken Language Processing, volume 2, pages 901-904. Le Zhang. MaxEnt toolkit. 2006. http://homepages.inf. ed.ac.uk/s0450736/maxent_toolkit.html 96
Proceedings of ACL-08: HLT, pages 968–976, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Deductive Approach to Dependency Parsing∗ Carlos G´omez-Rodr´ıguez Departamento de Computaci´on Universidade da Coru˜na, Spain [email protected] John Carroll and David Weir Department of Informatics University of Sussex, United Kingdom {johnca,davidw}@sussex.ac.uk Abstract We define a new formalism, based on Sikkel’s parsing schemata for constituency parsers, that can be used to describe, analyze and compare dependency parsing algorithms. This abstraction allows us to establish clear relations between several existing projective dependency parsers and prove their correctness. 1 Introduction Dependency parsing consists of finding the structure of a sentence as expressed by a set of directed links (dependencies) between words. This is an alternative to constituency parsing, which tries to find a division of the sentence into segments (constituents) which are then broken up into smaller constituents. Dependency structures directly show head-modifier and head-complement relationships which form the basis of predicate argument structure, but are not represented explicitly in constituency trees, while providing a representation in which no non-lexical nodes have to be postulated by the parser. In addition to this, some dependency parsers are able to represent non-projective structures, which is an important feature when parsing free word order languages in which discontinuous constituents are common. The formalism of parsing schemata (Sikkel, 1997) is a useful tool for the study of constituency parsers since it provides formal, high-level descriptions of parsing algorithms that can be used to prove their formal properties (such as correctness), establish relations between them, derive new parsers from existing ones and obtain efficient implementations automatically (G´omez-Rodr´ıguez et al., 2007). The formalism was initially defined for context-free grammars and later applied to other constituencybased formalisms, such as tree-adjoining grammars ∗Partially supported by Ministerio de Educaci´on y Ciencia and FEDER (TIN2004-07246-C03, HUM2007-66607-C04), Xunta de Galicia (PGIDIT07SIN005206PR, PGIDIT05PXIC10501PN, PGIDIT05PXIC30501PN, Rede Galega de Proc. da Linguaxe e RI) and Programa de Becas FPU. (Alonso et al., 1999). However, since parsing schemata are defined as deduction systems over sets of constituency trees, they cannot be used to describe dependency parsers. In this paper, we define an analogous formalism that can be used to define, analyze and compare dependency parsers. We use this framework to provide uniform, high-level descriptions for a wide range of well-known algorithms described in the literature, and we show how they formally relate to each other and how we can use these relations and the formalism itself to prove their correctness. 1.1 Parsing schemata Parsing schemata (Sikkel, 1997) provide a formal, simple and uniform way to describe, analyze and compare different constituency-based parsers. The notion of a parsing schema comes from considering parsing as a deduction process which generates intermediate results called items. An initial set of items is directly obtained from the input sentence, and the parsing process consists of the application of inference rules (deduction steps) which produce new items from existing ones. 
Each item contains a piece of information about the sentence’s structure, and a successful parsing process will produce at least one final item containing a full parse tree for the sentence or guaranteeing its existence. Items in parsing schemata are formally defined as sets of partial parse trees from a set denoted Trees(G), which is the set of all the possible partial parse trees that do not violate the constraints imposed by a grammar G. More formally, an item set I is defined by Sikkel as a quotient set associated with an equivalence relation on Trees(G).1 Valid parses for a string are represented by items containing complete marked parse trees for that string. Given a context-free grammar G = 1While Shieber et al. (1995) also view parsers as deduction systems, Sikkel formally defines items and related concepts, providing the mathematical tools to reason about formal properties of parsers. 968 (N, Σ, P, S), a marked parse tree for a string w1 . . . wn is any tree τ ∈Trees(G)/root(τ) = S∧yield(τ) = w1 . . . wn 2. An item containing such a tree for some arbitrary string is called a final item. An item containing such a tree for a particular string w1 . . . wn is called a correct final item for that string. For each input string, a parsing schema’s deduction steps allow us to infer a set of items, called valid items for that string. A parsing schema is said to be sound if all valid final items it produces for any arbitrary string are correct for that string. A parsing schema is said to be complete if all correct final items are valid. A correct parsing schema is one which is both sound and complete. A correct parsing schema can be used to obtain a working implementation of a parser by using deductive engines such as the ones described by Shieber et al. (1995) and G´omez-Rodr´ıguez et al. (2007) to obtain all valid final items. 2 Dependency parsing schemata Although parsing schemata were initially defined for context-free parsers, they can be adapted to different constituency-based grammar formalisms, by finding a suitable definition of Trees(G) for each particular formalism and a way to define deduction steps from its rules. However, parsing schemata are not directly applicable to dependency parsing, since their formal framework is based on constituency trees. In spite of this problem, many of the dependency parsers described in the literature are constructive, in the sense that they proceed by combining smaller structures to form larger ones until they find a complete parse for the input sentence. Therefore, it is possible to define a variant of parsing schemata, where these structures can be defined as items and the strategies used for combining them can be expressed as inference rules. However, in order to define such a formalism we have to tackle some issues specific to dependency parsers: • Traditional parsing schemata are used to define grammar-based parsers, in which the parsing process is guided by some set of rules which are used to license deduction steps: for example, an Earley Predictor step is tied to a particular grammar rule, and can only be executed if such a rule exists. Some dependency parsers are also grammar2wi is shorthand for the marked terminal (wi, i). These are used by Sikkel (1997) to link terminal symbols to string positions so that an input sentence can be represented as a set of trees which are used as initial items (hypotheses) for the deduction system. Thus, a sentence w1 . . . wn produces a set of hypotheses {{w1(w1)}, . . . , {wn(wn)}}. 
Figure 1: Representation of a dependency structure with a tree. The arrows below the words correspond to its associated dependency graph. based: for example, those described by Lombardo and Lesmo (1996), Barbero et al. (1998) and Kahane et al. (1998) are tied to the formalizations of dependency grammar using context-free like rules described by Hays (1964) and Gaifman (1965). However, many of the most widely used algorithms (Eisner, 1996; Yamada and Matsumoto, 2003) do not use a formal grammar at all. In these, decisions about which dependencies to create are taken individually, using probabilistic models (Eisner, 1996) or classifiers (Yamada and Matsumoto, 2003). To represent these algorithms as deduction systems, we use the notion of D-rules (Covington, 1990). D-rules take the form a →b, which says that word b can have a as a dependent. Deduction steps in non-grammarbased parsers can be tied to the D-rules associated with the links they create. In this way, we obtain a representation of the semantics of these parsing strategies that is independent of the particular model used to take the decisions associated with each Drule. • The fundamental structures in dependency parsing are dependency graphs. Therefore, as items for constituency parsers are defined as sets of partial constituency trees, it is tempting to define items for dependency parsers as sets of partial dependency graphs. However, predictive grammar-based algorithms such as those of Lombardo and Lesmo (1996) and Kahane et al. (1998) have operations which postulate rules and cannot be defined in terms of dependency graphs, since they do not do any modifications to the graph. In order to make the formalism general enough to include these parsers, we define items in terms of sets of partial dependency trees as shown in Figure 1. Note that a dependency graph can always be extracted from such a tree. • Some of the most popular dependency parsing algorithms, like that of Eisner (1996), work by connecting spans which can represent disconnected dependency graphs. Such spans cannot be represented by a single dependency tree. Therefore, our formalism allows items to be sets of forests of partial dependency trees, instead of sets of trees. 969 Taking these considerations into account, we define the concepts that we need to describe item sets for dependency parsers: Let Σ be an alphabet of terminal symbols. Partial dependency trees: We define the set of partial dependency trees (D-trees) as the set of finite trees where children of each node have a left-to-right ordering, each node is labelled with an element of Σ∪(Σ×N), and the following conditions hold: • All nodes labelled with marked terminals wi ∈ (Σ × N) are leaves, • Nodes labelled with terminals w ∈Σ do not have more than one daughter labelled with a marked terminal, and if they have such a daughter node, it is labelled wi for some i ∈N, • Left siblings of nodes labelled with a marked terminal wk do not have any daughter labelled wj with j ≥k. Right siblings of nodes labelled with a marked terminal wk do not have any daughter labelled wj with j ≤k. We denote the root node of a partial dependency tree t as root(t). If root(t) has a daughter node labelled with a marked terminal wh, we will say that wh is the head of the tree t, denoted by head(t). If all nodes labelled with terminals in t have a daughter labelled with a marked terminal, t is grounded. 
Relationship between trees and graphs: Let t ∈D-trees be a partial dependency tree; g(t), its associated dependency graph, is a graph (V, E) • V ={wi ∈(Σ × N) | wi is the label of a node in t}, • E ={(wi, wj) ∈(Σ × N)2 | C, D are nodes in t such that D is a daughter of C, wj the label of a daughter of C, wi the label of a daughter of D}. Projectivity: A partial dependency tree t ∈ D-trees is projective iff yield(t) cannot be written as . . . wi . . . wj . . . where i ≥j. It is easy to verify that the dependency graph g(t) is projective with respect to the linear order of marked terminals wi, according to the usual definition of projectivity found in the literature (Nivre, 2006), if and only if the tree t is projective. Parse tree: A partial dependency tree t ∈ D-trees is a parse tree for a given string w1 . . . wn if its yield is a permutation of w1 . . . wn. If its yield is exactly w1 . . . wn, we will say it is a projective parse tree for the string. Item set: Let δ ⊆D-trees be the set of dependency trees which are acceptable according to a given grammar G (which may be a grammar of Drules or of CFG-like rules, as explained above). We define an item set for dependency parsing as a set I ⊆Π, where Π is a partition of 2δ. Once we have this definition of an item set for dependency parsing, the remaining definitions are analogous to those in Sikkel’s theory of constituency parsing (Sikkel, 1997), so we will not include them here in full detail. A dependency parsing system is a deduction system (I, H, D) where I is a dependency item set as defined above, H is a set containing initial items or hypotheses, and D ⊆(2(H∪I) × I) is a set of deduction steps defining an inference relation ⊢. Final items in this formalism will be those containing some forest F containing a parse tree for some arbitrary string. An item containing such a tree for a particular string w1 . . . wn will be called a correct final item for that string in the case of nonprojective parsers. When defining projective parsers, correct final items will be those containing projective parse trees for w1 . . . wn. This distinction is relevant because the concepts of soundness and correctness of parsing schemata are based on correct final items (cf. section 1.1), and we expect correct projective parsers to produce only projective structures, while nonprojective parsers should find all possible structures including nonprojective ones. 3 Some practical examples 3.1 Col96 (Collins, 96) One of the most straightforward projective dependency parsing strategies is the one described by Collins (1996), directly based on the CYK parsing algorithm. This parser works with dependency trees which are linked to each other by creating links between their heads. Its item set is defined as ICol96 = {[i, j, h] | 1 ≤i ≤h ≤j ≤n}, where an item [i, j, h] is defined as the set of forests containing a single projective dependency tree t such that t is grounded, yield(t) = wi . . . wj and head(t) = wh. For an input string w1 . . . wn, the set of hypotheses is H = {[i, i, i] | 0 ≤i ≤n + 1}, i.e., the set of forests containing a single dependency tree of the form wi(wi). This same set of hypotheses can be used for all the parsers, so we will not make it explicit for subsequent schemata.3 The set of final items is {[1, n, h] | 1 ≤h ≤n}: these items trivially represent parse trees for the input sentence, where wh is the sentence’s head. The deduction steps are shown in Figure 2. 
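Since a correct schema can be executed directly by a deductive engine (cf. Section 1.1), the following is a minimal Python sketch of Col96 run as a recogniser — a toy illustration, not the optimized engines of Shieber et al. (1995) or Gómez-Rodríguez et al. (2007). It assumes boolean D-rules given as a set of (dependent, head) word pairs, and all function and variable names are illustrative.

```python
def col96_recognise(words, d_rules):
    """words: input tokens w1..wn (stored 0-indexed as words[0..n-1]).
    d_rules: set of (dependent, head) token pairs, i.e. boolean D-rules a -> b."""
    n = len(words)
    chart = {(i, i, i) for i in range(1, n + 1)}        # hypotheses [i, i, i]
    agenda = list(chart)
    while agenda:
        trigger = agenda.pop()
        new_items = set()
        for other in list(chart):
            # try the trigger as left and as right antecedent
            for (i, j, h1), (j2, k, h2) in ((trigger, other), (other, trigger)):
                if j + 1 != j2:
                    continue                             # antecedents must be adjacent
                if (words[h1 - 1], words[h2 - 1]) in d_rules:
                    new_items.add((i, k, h2))            # R-Link: w_h1 -> w_h2
                if (words[h2 - 1], words[h1 - 1]) in d_rules:
                    new_items.add((i, k, h1))            # L-Link: w_h2 -> w_h1
        for item in new_items - chart:
            chart.add(item)
            agenda.append(item)
    return any((1, n, h) in chart for h in range(1, n + 1))   # final items [1, n, h]
```

For example, with words = ['the', 'dog', 'barks'] and d_rules = {('the', 'dog'), ('dog', 'barks')}, the engine derives the final item (1, 3, 3) and accepts the string.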
3Note that the words w0 and wn+1 used in the definition do not appear in the input: these are dummy terminals that we will call beginning of sentence (BOS) and end of sentence (EOS) marker, respectively; and will be needed by some parsers. 970 Col96 (Collins,96): R-Link [i, j, h1] [j + 1, k, h2] [i, k, h2] wh1 →wh2 L-Link [i, j, h1] [j + 1, k, h2] [i, k, h1] wh2 →wh1 Eis96 (Eisner, 96): Initter [i, i, i] [i + 1, i + 1, i + 1] [i, i + 1, F, F] R-Link [i, j, F, F] [i, j, T, F] wi →wj L-Link [i, j, F, F] [i, j, F, T] wj →wi CombineSpans [i, j, b, c] [j, k, not(c), d] [i, k, b, d] ES99 (Eisner and Satta, 99): R-Link [i, j, i] [j + 1, k, k] [i, k, k] wi →wk L-Link [i, j, i] [j + 1, k, k] [i, k, i] wk →wi R-Combiner [i, j, i] [j, k, j] [i, k, i] L-Combiner [i, j, j] [j, k, k] [i, k, k] YM03 (Yamada and Matsumoto, 2003): Initter [i, i, i] [i + 1, i + 1, i + 1] [i, i + 1] R-Link [i, j] [j, k] [i, k] wj →wk L-Link [i, j] [j, k] [i, k] wj →wi LL96 (Lombardo and Lesmo, 96): Initter [(.S), 1, 0] ∗(S)∈P Predictor [A(α.Bβ), i, j] [B(.γ), j + 1, j] B(γ)∈P Scanner [A(α. ⋆β), i, h −1] [h, h, h] [A(α ⋆.β), i, h] wh IS A Completer [A(α.Bβ), i, j] [B(γ.), j + 1, k] [A(αB.β), i, k] Figure 2: Deduction steps of the parsing schemata for some well-known dependency parsers. As we can see, we use D-rules as side conditions for deduction steps, since this parsing strategy is not grammar-based. Conceptually, the schema we have just defined describes a recogniser: given a set of Drules and an input string wi . . . wn, the sentence can be parsed (projectively) under those D-rules if and only if this deduction system can infer a correct final item. However, when executing this schema with a deductive engine, we can recover the parse forest by following back pointers in the same way as is done with constituency parsers (Billot and Lang, 1989). Of course, boolean D-rules are of limited interest in practice. However, this schema provides a formalization of a parsing strategy which is independent of the way linking decisions are taken in a particular implementation. In practice, statistical models can be used to decide whether a step linking words a and b (i.e., having a →b as a side condition) is executed or not, and probabilities can be attached to items in order to assign different weights to different analyses of the sentence. The same principle applies to the rest of D-rule-based parsers described in this paper. 3.2 Eis96 (Eisner, 96) By counting the number of free variables used in each deduction step of Collins’ parser, we can conclude that it has a time complexity of O(n5). This complexity arises from the fact that a parentless word (head) may appear in any position in the partial results generated by the parser; the complexity can be reduced to O(n3) by ensuring that parentless words can only appear at the first or last position of an item. This is the principle behind the parser defined by Eisner (1996), which is still in wide use today (Corston-Oliver et al., 2006; McDonald et al., 2005a). The item set for Eisner’s parsing schema is IEis96 = {[i, j, T, F] | 0 ≤i ≤j ≤n} ∪ {[i, j, F, T] | 0 ≤i ≤j ≤n} ∪{[i, j, F, F] | 0 ≤i ≤j ≤n}, where each item [i, j, T, F] is defined as the item [i, j, j] ∈ ICol96, each item [i, j, F, T] is defined as the item [i, j, i] ∈ICol96, and each item [i, j, F, F] is defined as the set of forests of the form {t1, t2} such that t1 and t2 are grounded, head(t1) = wi, head(t2) = wj, and ∃k ∈N(i ≤k < j)/yield(t1) = wi . . . wk ∧ yield(t2) = wk+1 . . . wj. 
Note that the flags b, c in an item [i, j, b, c] indicate whether the words in positions i and j, respectively, have a parent in the item or not. Items with one of the flags set to T represent dependency trees where the word in position i or j is the head, while items with both flags set to F represent pairs of trees headed at positions i and j, and therefore correspond to disconnected dependency graphs. Deduction steps4 are shown in Figure 2. The set of final items is {[0, n, F, T]}. Note that these items represent dependency trees rooted at the BOS marker w0, which acts as a “dummy head” for the sentence. In order for the algorithm to parse sentences correctly, we will need to define D-rules to allow w0 to be linked to the real sentence head. 3.3 ES99 (Eisner and Satta, 99) Eisner and Satta (1999) define an O(n3) parser for split head automaton grammars that can be used 4Alternatively, we could consider items of the form [i, i + 1, F, F] to be hypotheses for this parsing schema, so we would not need an Initter step. However, we have chosen to use a standard set of hypotheses valid for all parsers because this allows for more straightforward proofs of relations between schemata. 971 for dependency parsing. This algorithm is conceptually simpler than Eis96, since it only uses items representing single dependency trees, avoiding items of the form [i, j, F, F]. Its item set is IES99 = {[i, j, i] | 0 ≤i ≤j ≤n} ∪{[i, j, j] | 0 ≤i ≤j ≤n}, where items are defined as in Collins’ parsing schema. Deduction steps are shown in Figure 2, and the set of final items is {[0, n, 0]}. (Parse trees have w0 as their head, as in the previous algorithm). Note that, when described for head automaton grammars as in Eisner and Satta (1999), this algorithm seems more complex to understand and implement than the previous one, as it requires four different kinds of items in order to keep track of the state of the automata used by the grammars. However, this abstract representation of its underlying semantics as a dependency parsing schema shows that this parsing strategy is in fact conceptually simpler for dependency parsing. 3.4 YM03 (Yamada and Matsumoto, 2003) Yamada and Matsumoto (2003) define a deterministic, shift-reduce dependency parser guided by support vector machines, which achieves over 90% dependency accuracy on section 23 of the Penn treebank. Parsing schemata are not suitable for directly describing deterministic parsers, since they work at a high abstraction level where a set of operations are defined without imposing order constraints on them. However, many deterministic parsers can be viewed as particular optimisations of more general, nondeterministic algorithms. In this case, if we represent the actions of the parser as deduction steps while abstracting from the deterministic implementation details, we obtain an interesting nondeterministic parser. Actions in Yamada and Matsumoto’s parser create links between two target nodes, which act as heads of neighbouring dependency trees. One of the actions creates a link where the left target node becomes a child of the right one, and the head of a tree located directly to the left of the target nodes becomes the new left target node. The other action is symmetric, performing the same operation with a right-to-left link. 
An O(n3) nondeterministic parser generalising this behaviour can be defined by using an item set IY M03 = {[i, j] | 0 ≤i ≤j ≤n + 1}, where each item [i, j] is defined as the item [i, j, F, F] in IEis96; and the deduction steps are shown in Figure 2. The set of final items is {[0, n + 1]}. In order for this set to be well-defined, the grammar must have no D-rules of the form wi →wn+1, i.e., it must not allow the EOS marker to govern any words. If this is the case, it is trivial to see that every forest in an item of the form [0, n + 1] must contain a parse tree rooted at the BOS marker and with yield w0 . . . wn. As can be seen from the schema, this algorithm requires less bookkeeping than any other of the parsers described here. 3.5 LL96 (Lombardo and Lesmo, 96) and other Earley-based parsers The algorithms in the above examples are based on taking individual decisions about dependency links, represented by D-rules. Other parsers, such as that of Lombardo and Lesmo (1996), use grammars with context-free like rules which encode the preferred order of dependents for each given governor, as defined by Gaifman (1965). For example, a rule of the form N(Det ∗PP) is used to allow N to have Det as left dependent and PP as right dependent. The algorithm by Lombardo and Lesmo (1996) is a version of Earley’s context-free grammar parser (Earley, 1970) using Gaifman’s dependency grammar, and can be written by using an item set ILomLes = {[A(α.β), i, j] | A(αβ) ∈ P ∧ 1 ≤i ≤j ≤n}, where each item [A(α.β), i, j] represents the set of partial dependency trees rooted at A, where the direct children of A are αβ, and the subtrees rooted at α have yield wi . . . wj. The deduction steps for the schema are shown in Figure 2, and the final item set is {[(S.), 1, n]}. As we can see, the schema for Lombardo and Lesmo’s parser resembles the Earley-style parser in Sikkel (1997), with some changes to adapt it to dependency grammar (for example, the Scanner always moves the dot over the head symbol ∗). Analogously, other dependency parsing schemata based on CFG-like rules can be obtained by modifying context-free grammar parsing schemata of Sikkel (1997) in a similar way. The algorithm by Barbero et al. (1998) can be obtained from the leftcorner parser, and the one by Courtin and Genthial (1998) is a variant of the head-corner parser. 3.6 Pseudo-projectivity Pseudo-projective parsers can generate nonprojective analyses in polynomial time by using a projective parsing strategy and postprocessing the results to establish nonprojective links. For example, the algorithm by Kahane et al. (1998) uses a projective parsing strategy like that of LL96, but using the following initializer step instead of the 972 Initter and Predictor:5 Initter [A(α), i, i −1] A(α) ∈P ∧1 ≤i ≤n 4 Relations between dependency parsers The framework of parsing schemata can be used to establish relationships between different parsing algorithms and to obtain new algorithms from existing ones, or derive formal properties of a parser (such as soundness or correctness) from the properties of related algorithms. Sikkel (1994) defines several kinds of relations between schemata, which fall into two categories: generalisation relations, which are used to obtain more fine-grained versions of parsers, and filtering relations, which can be seen as the reverse of generalisation and are used to reduce the number of items and/or steps needed for parsing. He gives a formal definition of each kind of relation. 
Informally, a parsing schema can be generalised from another via the following transformations: • Item refinement: We say that P1 ir −→P2 (P2 is an item refinement of P1) if there is a mapping between items in both parsers such that single items in P1 are broken into multiple items in P2 and individual deductions are preserved. • Step refinement: We say that P1 sr −→P2 if the item set of P1 is a subset of that of P2 and every single deduction step in P1 can be emulated by a sequence of inferences in P2. On the other hand, a schema can be obtained from another by filtering in the following ways: • Static/dynamic filtering: P1 sf/df −−−→P2 if the item set of P2 is a subset of that of P1 and P2 allows a subset of the direct inferences in P16. • Item contraction: The inverse of item refinement. P1 ic −→P2 if P2 ir −→P1. • Step contraction: The inverse of step refinement. P1 sc −→P2 if P2 sr −→P1. All the parsers described in section 3 can be related via generalisation and filtering, as shown in Figure 3. For space reasons we cannot show formal proofs of all the relations, but we sketch the proofs for some of the more interesting cases: 5The initialization step as reported in Kahane’s paper is different from this one, as it directly consumes a nonterminal from the input. However, using this step results in an incomplete algorithm. The problem can be fixed either by using the step shown here instead (bottom-up Earley strategy) or by adding an additional step turning it into a bottom-up Left-Corner parser. 6Refer to Sikkel (1994) for the distinction between static and dynamic filtering, which we will not use here. 4.1 YM03 sr −→Eis96 It is easy to see from the schema definitions that IY M03 ⊆IEis96. In order to prove the relation between these parsers, we need to verify that every deduction step in YM03 can be emulated by a sequence of inferences in Eis96. In the case of the Initter step this is trivial, since the Initters of both parsers are equivalent. If we write the R-Link step in the notation we have used for Eisner items, we have R-Link [i, j, F, F] [j, k, F, F] [i, k, F, F] wj →wk This can be emulated in Eisner’s parser by an R-Link step followed by a CombineSpans step: [j, k, F, F] ⊢[j, k, T, F] (by R-Link), [j, k, T, F], [i, j, F, F] ⊢[i, k, F, F] (by CombineSpans). Symmetrically, the L-Link step in YM03 can be emulated by an L-Link followed by a CombineSpans in Eis96. 4.2 ES99 sr −→Eis96 If we write the R-Link step in Eisner and Satta’s parser in the notation for Eisner items, we have R-Link [i, j, F, T] [j + 1, k, T, F] [i, k, T, F] wi →wk This inference can be emulated in Eisner’s parser as follows: ⊢[j, j + 1, F, F] (by Initter), [i, j, F, T], [j, j + 1, F, F] ⊢[i, j + 1, F, F] (CombineSpans), [i, j + 1, F, F], [j + 1, k, T, F] ⊢[i, k, F, F] (CombineSpans), [i, k, F, F] ⊢[i, k, T, F] (by R-Link). The proof corresponding to the L-Link step is symmetric. As for the R-Combiner and L-Combiner steps in ES99, it is easy to see that they are particular cases of the CombineSpans step in Eis96, and therefore can be emulated by a single application of CombineSpans. Note that, in practice, the relations in sections 4.1 and 4.2 mean that the ES99 and YM03 parsers are superior to Eis96, since they generate fewer items and need fewer steps to perform the same deductions. 
These two parsers also have the interesting property that they use disjoint item sets (one uses items representing trees while the other uses items representing pairs of trees); and the union of these disjoint sets is the item set used by Eis96. Also note that the optimisation in YM03 comes from contracting deductions in Eis96 so that linking operations are immediately followed by combining operations; while ES99 does the opposite, forcing combining operations to be followed by linking operations. 4.3 Other relations If we generalise the linking steps in ES99 so that the head of each item can be in any position, we obtain a 973 Figure 3: Formal relations between several well-known dependency parsers. Arrows going upwards correspond to generalisation relations, while those going downwards correspond to filtering. The specific subtype of relation is shown in each arrow’s label, following the notation in Section 4. correct O(n5) parser which can be filtered to Col96 just by eliminating the Combiner steps. From Col96, we can obtain an O(n5) head-corner parser based on CFG-like rules by an item refinement in which each Collins item [i, j, h] is split into a set of items [A(α.β.γ), i, j, h]. Of course, the formal refinement relation between these parsers only holds if the D-rules used for Collins’ parser correspond to the CFG rules used for the head-corner parser: for every D-rule B →A there must be a corresponding CFG-like rule A →. . . B . . . in the grammar used by the head-corner parser. Although this parser uses three indices i, j, h, using CFG-like rules to guide linking decisions makes the h indices unnecessary, so they can be removed. This simplification is an item contraction which results in an O(n3) head-corner parser. From here, we can follow the procedure in Sikkel (1994) to relate this head-corner algorithm to parsers analogous to other algorithms for context-free grammars. In this way, we can refine the head-corner parser to a variant of de Vreught and Honig’s algorithm (Sikkel, 1997), and by successive filters we reach a left-corner parser which is equivalent to the one described by Barbero et al. (1998), and a step contraction of the Earley-based dependency parser LL96. The proofs for these relations are the same as those described in Sikkel (1994), except that the dependency variants of each algorithm are simpler (due to the absence of epsilon rules and the fact that the rules are lexicalised). 5 Proving correctness Another useful feature of the parsing schemata framework is that it provides a formal way to define the correctness of a parser (see last paragraph of Section 1.1) which we can use to prove that our parsers are correct. Furthermore, relations between schemata can be used to derive the correctness of a schema from that of related ones. In this section, we will show how we can prove that the YM03 and ES99 algorithms are correct, and use that fact to prove the correctness of Eis96. 5.1 ES99 is correct In order to prove the correctness of a parser, we must prove its soundness and completeness (see section 1.1). Soundness is generally trivial to verify, since we only need to check that every individual deduction step in the parser infers a correct consequent item when applied to correct antecedents (i.e., in this case, that steps always generate non-empty items that conform to the definition in 3.3). The difficulty is proving completeness, for which we need to prove that all correct final items are valid (i.e., can be inferred by the schema). 
To show this, we will prove the stronger result that all correct items are valid. We will show this by strong induction on the length of items, where the length of an item ι = [i, k, h] is defined as length(ι) = k −i + 1. Correct items of length 1 are the hypotheses of the schema (of the form [i, i, i]) which are trivially valid. We will prove that, if all correct items of length m are valid for all 1 ≤m < l, then items of length l are also valid. Let [i, k, i] be an item of length l in IES99 (thus, l = k −i+1). If this item is correct, then it contains a grounded dependency tree t such that yield(t) = wi . . . wk and head(t) = wi. By construction, the root of t is labelled wi. Let wj be the rightmost daughter of wi in t. Since t is projective, we know that the yield of wj must be of the form wl . . . wk, where i < l ≤j ≤k. If l < j, then wl is the leftmost transitive dependent of wj in t, and if k > j, then we know that wk is the rightmost transitive dependent of wj in t. Let tj be the subtree of t rooted at wj. Let t1 be the tree obtained from removing tj from t. Let t2 be 974 the tree obtained by removing all the children to the right of wj from tj, and t3 be the tree obtained by removing all the children to the left of wj from tj. By construction, t1 belongs to a correct item [i, l −1, i], t2 belongs to a correct item [l, j, j] and t3 belongs to a correct item [j, k, j]. Since these three items have a length strictly less than l, by the inductive hypothesis, they are valid. This allows us to prove that the item [i, k, i] is also valid, since it can be obtained from these valid items by the following inferences: [i, l −1, i], [l, j, j] ⊢[i, j, i] (by the L-Link step), [i, j, i], [j, k, j] ⊢[i, k, i] (by the L-Combiner step). This proves that all correct items of length l which are of the form [i, k, i] are correct under the inductive hypothesis. The same can be proved for items of the form [i, k, k] by symmetric reasoning, thus proving that the ES99 parsing schema is correct. 5.2 YM03 is correct In order to prove correctness of this parser, we follow the same procedure as above. Soundness is again trivial to verify. To prove completeness, we use strong induction on the length of items, where the length of an item [i, j] is defined as j −i + 1. The induction step is proven by considering any correct item [i, k] of length l > 2 (l = 2 is the base case here since items of length 2 are generated by the Initter step) and proving that it can be inferred from valid antecedents of length less than l, so it is valid. To show this, we note that, if l > 2, either wi has at least a right dependent or wk has at least a left dependent in the item. Supposing that wi has a right dependent, if t1 and t2 are the trees rooted at wi and wk in a forest in [i, k], we call wj the rightmost daughter of wi and consider the following trees: v = the subtree of t1 rooted at wj, u1 = the tree obtained by removing v from t1, u2 = the tree obtained by removing all children to the right of wj from v, u3 = the tree obtained by removing all children to the left of wj from v. We observe that the forest {u1, u2} belongs to the correct item [i, j], while {u3, t2} belongs to the correct item [j, k]. From these two items, we can obtain [i, k] by using the L-Link step. Symmetric reasoning can be applied if wi has no right dependents but wk has at least a left dependent, and analogously to the case of the previous parser, we conclude that the YM03 parsing schema is correct. 
5.3 Eis96 is correct By using the previous proofs and the relationships between schemata that we explained earlier, it is easy to prove that Eis96 is correct: soundness is, as always, straightforward, and completeness can be proven by using the properties of other algorithms. Since the set of final items in Eis96 and ES99 are the same, and the former is a step refinement of the latter, the completeness of ES99 directly implies the completeness of Eis96. Alternatively, we can use YM03 to prove the correctness of Eis96 if we redefine the set of final items in the latter to be of the form [0, n + 1, F, F], which are equally valid as final items since they always contain parse trees. This idea can be applied to transfer proofs of completeness across any refinement relation. 6 Conclusions We have defined a variant of Sikkel’s parsing schemata formalism which allows us to represent dependency parsing algorithms in a simple, declarative way7. We have clarified relations between parsers which were originally described very differently. For example, while Eisner presented his algorithm as a dynamic programming algorithm which combines spans into larger spans, Yamada and Matsumoto’s works by sequentially executing parsing actions that move a focus point in the input one position to the left or right, (possibly) creating a dependency link. However, in the parsing schemata for these algorithms we can see (and formally prove) that they are related: one is a refinement of the other. Parsing schemata are also a formal tool that can be used to prove the correctness of parsing algorithms. The relationships between dependency parsers can be exploited to derive properties of a parser from those of others, as we have seen in several examples. Although the examples in this paper are centered in projective dependency parsing, the formalism does not require projectivity and can be used to represent nonprojective algorithms as well8. An interesting line for future work is to use relationships between schemata to find nonprojective parsers that can be derived from existing projective counterparts. 7An alternative framework that formally describes some dependency parsers is that of transition systems (McDonald and Nivre, 2007). This model is based on parser configurations and transitions, and has no clear relationship with the approach described here. 8Note that spanning tree parsing algorithms based on edgefactored models, such as the one by McDonald et al. (2005b) are not constructive in the sense outlined in Section 2, so the approach described here does not directly apply to them. However, other nonprojective parsers such as (Attardi, 2006) follow a constructive approach and can be analysed deductively. 975 References Miguel A. Alonso, Eric de la Clergerie, David Cabrero, and Manuel Vilares. 1999. Tabular algorithms for TAG parsing. In Proc. of the Ninth Conference on European chapter of the Association for Computational Linguistics, pages 150–157, Bergen, Norway. ACL. Giuseppe Attardi. 2006. Experiments with a Multilanguage Non-Projective Dependency Parser. In Proc. of the Tenth Conference on Natural Language Learning (CoNLL-X), pages 166–170, New York, USA. ACL. Cristina Barbero, Leonardo Lesmo, Vincenzo Lombarlo, and Paola Merlo. 1998. Integration of syntactic and lexical information in a hierarchical dependency grammar. In Proc. of the Workshop on Dependency Grammars, pages 58–67, ACL-COLING, Montreal, Canada. Sylvie Billot and Bernard Lang. 1989. The structure of shared forest in ambiguous parsing. 
In Proc. of the 27th Annual Meeting of the Association for Computational Linguistics, pages 143–151, Vancouver, British Columbia, Canada, June. ACL. Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proc. of the 34th annual meeting on Association for Computational Linguistics, pages 184–191, Morristown, NJ, USA. ACL. Simon Corston-Oliver, Anthony Aue, Kevin Duh, and Eric Ringger. 2006. Multilingual dependency parsing using Bayes Point Machines. In Proc. of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 160–167, Morristown, NJ, USA. ACL. Jacques Courtin and Damien Genthial. 1998. Parsing with dependency relations and robust parsing. In Proc. of the Workshop on Dependency Grammars, pages 88– 94, ACL-COLING, Montreal, Canada. Michael A. Covington. 1990. A dependency parser for variable-word-order languages. Technical Report AI1990-01, Athens, GA. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proc. of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 457–464, Morristown, NJ, USA. ACL. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. of the 16th International Conference on Computational Linguistics (COLING-96), pages 340–345, Copenhagen, August. Haim Gaifman. 1965. Dependency systems and phrasestructure systems. Information and Control, 8:304– 337. Carlos G´omez-Rodr´ıguez, Jes´us Vilares, and Miguel A. Alonso. 2007. Compiling declarative specifications of parsing algorithms. In Database and Expert Systems Applications, volume 4653 of Lecture Notes in Computer Science, pages 529–538, Springer-Verlag. David Hays. 1964. Dependency theory: a formalism and some observations. Language, 40:511–525. Sylvain Kahane, Alexis Nasr, and Owen Rambow. 1998. Pseudo-projectivity: A polynomially parsable nonprojective dependency grammar. In COLING-ACL, pages 646–652. Vincenzo Lombardo and Leonardo Lesmo. 1996. An Earley-type recognizer for dependency grammar. In Proc. of the 16th conference on Computational linguistics, pages 723–728, Morristown, NJ, USA. ACL. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In ACL ’05: Proc. of the 43rd Annual Meeting on Association for Computational Linguistics, pages 91–98, Morristown, NJ, USA. ACL. Ryan McDonald, Fernando Pereira, Kiril Ribarov and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In HLT ’05: Proc. of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 523–530. ACL. Ryan McDonald and Joakim Nivre. 2007. Characterizing the Errors of Data-Driven Dependency Parsing Models. In Proc. of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 122–131. Joakim Nivre. 2006. Inductive Dependency Parsing (Text, Speech and Language Technology). SpringerVerlag New York, Inc., Secaucus, NJ, USA. Stuart M. Shieber, Yves Schabes, and Fernando C.N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3– 36. Klaas Sikkel. 1994. 
How to compare the structure of parsing algorithms. In G. Pighizzini and P. San Pietro, editors, Proc. of ASMICS Workshop on Parsing Theory. Milano, Italy, Oct 1994, pages 21–39. Klaas Sikkel. 1997. Parsing Schemata — A Framework for Specification and Analysis of Parsing Algorithms. Texts in Theoretical Computer Science — An EATCS Series. Springer-Verlag, Berlin/Heidelberg/New York. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proc. of 8th International Workshop on Parsing Technologies, pages 195–206. 976
Proceedings of ACL-08: HLT, pages 977–985, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Evaluating a Crosslinguistic Grammar Resource: A Case Study of Wambaya Emily M. Bender University of Washington Department of Linguistics Box 354340 Seattle WA 98195-4340 [email protected] Abstract This paper evaluates the LinGO Grammar Matrix, a cross-linguistic resource for the development of precision broad coverage grammars, by applying it to the Australian language Wambaya. Despite large typological differences between Wambaya and the languages on which the development of the resource was based, the Grammar Matrix is found to provide a significant jump-start in the creation of the grammar for Wambaya: With less than 5.5 person-weeks of development, the Wambaya grammar was able to assign correct semantic representations to 76% of the sentences in a naturally occurring text. While the work on Wambaya identified some areas of refinement for the Grammar Matrix, 59% of the Matrix-provided types were invoked in the final Wambaya grammar, and only 4% of the Matrix-provided types required modification. 1 Introduction Hand-built grammars are often dismissed as too expensive to build on the one hand, and too brittle on the other. Nevertheless, they are key to various NLP applications, including those benefiting from deep natural language understanding (e.g., textual inference (Bobrow et al., 2007)), generation of wellformed output (e.g., natural language weather alert systems (Lareau and Wanner, 2007)) or both (as in machine translation (Oepen et al., 2007)). Of particular interest here are applications concerning endangered languages: Endangered languages represent a case of minimal linguistic resources, typically lacking even moderately-sized corpora, let alone treebanks. In the best case, one finds well-crafted descriptive grammars, bilingual dictionaries, and a handful of translated texts. The methods of precision grammar engineering are well-suited to taking advantage of such resources. At the same time, the applications of interest in the context of endangered languages emphasize linguistic precision: implemented grammars can be used to enrich existing linguistic documentation, to build grammar checkers in the context of language standardization, and to create software language tutors in the context of language preservation efforts. The LinGO Grammar Matrix (Bender et al., 2002; Bender and Flickinger, 2005; Drellishak and Bender, 2005) is a toolkit for reducing the cost of creating broad-coverage precision grammars by prepackaging both a cross-linguistic core grammar and a series of libraries of analyses of cross-linguistically variable phenomena, such as major-constituent word order or question formation. The Grammar Matrix was developed initially on the basis of broadcoverage grammars for English (Flickinger, 2000) and Japanese (Siegel and Bender, 2002), and has since been extended and refined as it has been used in the development of broad-coverage grammars for Norwegian (Hellan and Haugereid, 2003), Modern Greek (Kordoni and Neu, 2005), and Spanish (Marimon et al., 2007), as well as being applied to 42 other languages from a variety of language families in a classroom context (Bender, 2007). 
This paper aims to evaluate both the utility of the Grammar Matrix in jump-starting precision grammar development and the current state of its crosslinguistic hypotheses through a case study of a 977 language typologically very different from any of the languages above: the non-Pama-Nyungan Australian language Wambaya (Nordlinger, 1998). The remainder of this paper is structured as follows: §2 provides background on the Grammar Matrix and Wambaya, and situates the project with respect to related work. §3 presents the implemented grammar of Wambaya, describes its development, and evaluates it against unseen, naturally occurring text. §4 uses the Wambaya grammar and its development as one point of reference to measure the usefulness and cross-linguistic validity of the Grammar Matrix. §5 provides further discussion. 2 Background 2.1 The LinGO Grammar Matrix The LinGO Grammar Matrix is situated theoretically within Head-Driven Phrase Structure Grammar (HPSG; Pollard and Sag, 1994), a lexicalist, constraint-based framework. Grammars in HPSG are expressed as a collection of typed feature structures which are arranged into a hierarchy such that information shared across multiple lexical entries or construction types is represented only on a single supertype. The Matrix is written in the TDL (type description language) formalism, which is interpreted by the LKB parser, generator, and grammar development environment (Copestake, 2002). It is compatible with the broader range of DELPH-IN tools, e.g., for machine translation (Lønning and Oepen, 2006), treebanking (Oepen et al., 2004) and parse selection (Toutanova et al., 2005). The Grammar Matrix consists of a crosslinguistic core type hierarchy and a collection of phenomenon-specific libraries. The core type hierarchy defines the basic feature geometry, the ways that heads combine with arguments and adjuncts, linking types for relating syntactic to semantic arguments, and the constraints required to compositionally build up semantic representations in the format of Minimal Recursion Semantics (Copestake et al., 2005; Flickinger and Bender, 2003). The libraries provide collections of analyses for cross-linguistically variable phenomena. The current libraries include analyses of major constituent word order (SOV, SVO, etc), sentential negation, coordination, and yes-no question formation. The Matrix is accessed through a web-based configuration system1 which elicits typological information from the user-linguist through a questionnaire and then outputs a grammar consisting of the Matrix core plus selected types and constraints from the libraries according to the specifications in the questionnaire. 2.2 Wambaya Wambaya is a recently extinct language of the West Barkly family from the Northern Territory in Australia (Nordlinger, 1998). Wambaya was selected for this project because of its typological properties and because it is extraordinarily well-documented by Nordlinger in her 1998 descriptive grammar. Perhaps the most striking feature of Wambaya is its word order: it is a radically non-configurational language with a second position auxiliary/clitic cluster. That is, aside from the constraint that verbal clauses require a clitic cluster (marking subject and object agreement and tense, aspect and mood) in second position, the word order is otherwise free, to the point that noun phrases can be non-contiguous, with head nouns and their modifiers separated by unrelated words. 
Furthermore, head nouns are generally not required: argument positions can be instantiated by modifiers only, or, if the referent is clear from the context, by no nominal constituent of any kind. It has a rich system of case marking, and adnominal modifiers agree with the heads they modify in case, number, and four genders. An example is given in (1) (Nordlinger, 1998, 223).2 (1) Ngaragana-nguja grog-PROP.IV.ACC ngiy-a 3.SG.NM.A-PST gujinganjanga-ni mother.II.ERG jiyawu give ngabulu. milk.IV.ACC ‘(His) mother gave (him) milk with grog in it.’ [wmb] In (1), ngaragana-nguja (‘grog-proprietive’, or ‘having grog’) is a modifier of ngabulu milk. They agree in case (accusative) and gender (class IV), but they are not contiguous within the sentence. To relate such discontinuous noun phrases to appropriate semantic representations where ‘having1http://www.delph-in.net/matrix/customize/matrix.cgi 2In this example, the glosses II, IV, and NM indicate gender and ACC and ERG indicate case. A stands for ‘agent’, PST for ‘past’, and PROP for ‘proprietive’. 978 grog’ and ‘milk’ are predicated of the same entity requires a departure from the ordinary way that heads are combined with arguments and modifiers combined with heads in HPSG in general and in the Matrix in particular.3 In the Grammar Matrix, as in most work in HPSG, lexical heads record the dependents they require in valence lists (SUBJ, COMPS, SPR). When a head combines with one of its arguments, the result is a phrase with the same valence requirements as the head daughter, minus the one corresponding to the argument that was just satisfied. In contrast, the project described here has explored a non-cancellation analysis for Wambaya: even after a head combines with one of its arguments, that argument remains on the appropriate valence list of the mother, so that it is visible for further combination with modifiers. In addition, heads can combine directly with modifiers of their arguments (as opposed to just modifiers of themselves). Argument realization and the combination of heads and modifiers are fairly fundamental aspects of the system implemented in the Matrix. In light of the departure described above, it is interesting to see to what extent the Matrix can still support rapid development of a precision grammar for Wambaya. 2.3 Related Work There are currently many multilingual grammar engineering projects under active development, including ParGram, (Butt et al., 2002; King et al., 2005), the MetaGrammar project (Kinyon et al., 2006), KPML (Bateman et al., 2005), Grammix (M¨uller, 2007) and OpenCCG (Baldridge et al., 2007). Among approaches to multilingual grammar engineering, the Grammar Matrix’s distinguishing characteristics include the deployment of a shared core grammar for crosslinguistically consistent constraints and a series of libraries modeling varying linguistic properties. Thus while other work has successfully exploited grammar porting between typologically related languages (e.g., Kim et al., 2003), to my knowledge, no other grammar porting project has covered the same typological dis3A linearization-based analysis as suggested by Donohue and Sag (1999) for discontinuous constituents in Warlpiri (another Australian language), is not available, because it relies on disassociating the constituent structure from the surface order of words in a way that is not compatible with the TDL formalism. tance attempted here. 
The current project is also situated within a broader trend of using computational linguistics in the service of endangered language documentation (e.g., Robinson et al., 2007, see also www.emeld.org). 3 Wambaya grammar 3.1 Development The Wambaya grammar was developed on the basis of the grammatical description in Nordlinger 1998, including the Wambaya-English translation lexicon and glosses of individual example sentences. The development test suite consisted of all 794 distinct positive examples from Ch. 3–8 of the descriptive grammar. This includes elicited examples as well as (sometimes simplified) naturally occurring examples. They range in length from one to thirteen words (mean: 3.65). The test suite was extracted from the descriptive grammar at the beginning of the project and used throughout with only minor refinements as errors in formatting were discovered. The regression testing facilities of [incr tsdb()] allowed for rapid experimentation with alternative analyses as new phenomena were brought into the grammar (cf. Oepen et al., 2002). With no prior knowledge of this language beyond its most general typological properties, we were able to develop in under 5.5 person-weeks of development time (210 hours) a grammar able to assign appropriate analyses to 91% of the examples in the development set.4 The 210 hours include 25 hours of an RA’s time entering lexical entries, 7 hours spent preparing the development test suite, and 15 hours treebanking (using the LinGO Redwoods software (Oepen et al., 2004) to annotate the intended parse for each item). The remainder of the time was ordinary grammar development work.5 In addition, this grammar has relatively low ambiguity, assigning on average 11.89 parses per item in the development set. This reflects the fact that the grammar is modeling grammaticality: the rules are 4An additional 6% received some analysis, but not one that matched the translation given in the reference grammar. 5These numbers do not include the time put into the original field work and descriptive grammar work. Nordlinger (p.c.) estimates that as roughly 28 linguist-months, plus the native speaker consultants’ time. 979 meant to exclude ungrammatical strings as well as are unwarranted analyses of grammatical strings. 
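The regression testing itself was done with [incr tsdb()]; purely as a schematic Python illustration of the bookkeeping involved (coverage and average ambiguity over a fixed test suite), one might write something like the following, with a hypothetical parse() callable standing in for the LKB parser.

```python
def profile_test_suite(items, parse):
    """items: the test-suite sentences; parse: a callable returning the list
    of analyses the grammar assigns to a sentence (empty list = no parse)."""
    results = [(item, parse(item)) for item in items]
    covered = [analyses for _, analyses in results if analyses]
    coverage = len(covered) / len(items)
    avg_ambiguity = sum(len(a) for a in covered) / len(covered) if covered else 0.0
    return coverage, avg_ambiguity
```

The comparable figures reported above are 91% coverage of the development set and an average of 11.89 parses per item.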
3.2 Scope The grammar encodes mutually interoperable analyses of a wide variety of linguistic phenomena, including: • Word order: second position clitic cluster, otherwise free word order, discontinuous noun phrases • Argument optionality: argument positions with no overt head • Linking of syntactic to semantic arguments • Case: case assignment by verbs to dependents • Agreement: subject and object agreement in person and number (and to some extent gender) marked in the clitic cluster, agreement between nouns and adnominal modifiers in case, number and gender • Lexical adverbs, including manner, time, and location, and adverbs of negation, which vary by clause type (declarative, imperative, or interrogative) • Derived event modifiers: nominals (nouns, adjectives, noun phrases) used as event modifiers with meaning dependent on their case marking • Lexical adjectives, including demonstratives adverbs, numerals, and possessive adjectives, as well as ordinary intersective adjectives • Derived nominal modifiers: modifiers of nouns derived from nouns, adjectives and verbs, including the proprietive, privative, and ‘origin’ constructions • Subordinate clauses: clausal complements of verbs like “tell” and “remember”, non-finite subordinate clauses such as purposives (“in order to”) and clauses expressing prior or simultaneous events • Verbless clauses: nouns, adjectives, and adverbs, lexical or derived, functioning as predicates • Illocutionary force: imperatives, declaratives, and interrogatives (including wh questions) • Coordination: of clauses and noun phrases • Other: inalienable possession, secondary predicates, causatives of verbs and adjectives 3.3 Sample Analysis This section provides a brief description of the analysis of radical non-configurationality in order to give a sense of the linguistic detail encoded in the Wambaya grammar and give context for the evaluation of the Wambaya grammar and the Grammar Matrix in later sections. The linguistic analyses encoded in the grammar serve to map the surface strings to semantic representations (in Minimal Recursion Semantics (MRS) format (Copestake et al., 2005)). The MRS in Figure 1 is assigned to the example in (1).6 It includes the basic propositional structure: a situation of ‘giving’ in which the first argument, or agent, is ‘mother’, the second (recipient) is some third-person entity, and the third (patient), is ‘milk’ which is also related to ‘grog’ through the proprietive relation. It is marked as past tense, and as potentially a statement or a question, depending on the intonation.7,8 A simple tree display of the parse giving rise to this MRS is given in Figure 2. The non-branching nodes at the bottom of the tree represent the lexical rules which associate morphosyntactic information with a word according to its suffixes. The general left-branching structure of the tree is a result of the analysis of the second-position clitic cluster: The clitic clusters are treated as argument-composition auxiliaries, which combine with a lexical verb and ‘inherit’ all of the verb’s arguments. The auxiliaries first pick up all dependents to the right, and then combine with exactly one constituent to the left. 
The grammar is able to connect x7 (the index of ‘milk’) to both the ARG3 position of the ‘give’ relation and the ARG1 position of the proprietive relation, despite the separation between ngaraganaguja (‘grog-PROP.IV.ACC’) and ngabulu (‘milk.IV.ACC’) in the surface structure, as follows: The auxiliary ngiya is subject to the constraints in (2), meaning that it combines with a verb as its first complement and then the verb’s complements as its remaining complements.9 The auxiliary can combine with its complements in any order, thanks to a series of headcomplement rules which realize the nth element of 6The grammar in fact finds 42 parses for this example. The one associated with the MRS in Figure 1 best matches the intended interpretation as indicated by the gloss of the example. 7The relations are given English predicate names for the convenience of the grammar developer, and these are not intended as any kind of interlingua. 8This MRS is ‘fragmented’ in the sense that the labels of several of the elementary predications (eps) are not related to any argument position of any other ep. This is related to the fact that the grammar doesn’t yet introduce quantifiers for any of the nominal arguments. 9In this and other attribute value matrices displayed, feature paths are abbreviated and detail not relevant to the current point is suppressed. 980   LTOP h1 INDEX e2 (prop-or-ques, past) RELS *  grog n rel LBL h3 ARG0 x4 (3, iv)  ,   proprietive a rel LBL h5 ARG0 e6 ARG1 x7 (3, iv) ARG2 x4   ,   mother n rel LBL h8 ARG0 x9 (3sg, ii)  ,   give v rel LBL h1 ARG0 e2 ARG1 x9 ARG2 x10 (3) ARG3 x7   ,   milk n rel LBL h5 ARG0 x7   + HCONS ⟨⟩   Figure 1: MRS for (1) V V ADJ ADJ ADJ N N Ngaraganaguja V V V V V V V ngiya N N N gujinganjangani V V jiyawu N N N ngabulu Figure 2: Phrase structure tree for (1) the COMPS list. It this example, it first picks up the subject gujinganjangani (‘mother-ERG’), then the main verb jiyawu (‘give’), and then the object ngabulu (‘milk-ACC’). (2)   lexeme HEAD verb [AUX +] SUBJ ⟨1 ⟩ COMPS *  HEAD verb [AUX −] SUBJ ⟨1 ⟩ COMPS 2   + ⊕2   The resulting V node over ngiya gujinganjangani jiyawu ngabulu is associated with the constraints sketched in (3). (3)   phrase HEAD verb [AUX +] SUBJ *   1 N:‘mother’ INDEX x9 CASE erg INST +   + COMPS *   V:‘give’ SUBJ ⟨1 ⟩ COMPS ⟨2 , 3 ⟩ INST +  ,   2 N INDEX x10 CASE acc INST −  ,   3 N:‘milk’ INDEX x7 CASE acc INST +   +   Unlike in typical HPSG approaches, the information about the realized arguments is still exposed in the COMPS and SUBJ lists of this constituent.10 This makes the necessary information available to separately-attaching modifiers (such as ngaraganaguja (‘grog-PROP.IV.ACC’)) so that they can check for case and number/gender compatibility and connect the semantic index of the argument they modify to a role in their own semantic contribution (in this case, the ARG1 of the ‘proprietive’ relation). 3.4 Evaluation The grammar was evaluated against a sample of naturally occurring data taken from one of the texts transcribed and translated by Nordlinger (1998) (“The two Eaglehawks”, told by Molly Nurlanyma Grueman). Of the 92 sentences in this text, 20 overlapped with items in the development set, so the 10The feature INST, newly proposed for this analysis, records the fact that they have been instantiated by lexical heads. 
981 correct parsed unparsed average incorrect ambiguity Existing 50% 8% 42% 10.62 vocab w/added 76% 8% 14% 12.56 vocab Table 1: Grammar performance on held-out data evaluation was carried out only on the remaining 72 sentences. The evaluation was run twice: once with the grammar exactly as is, including the existing lexicon, and a second time after new lexical entries were added, using only existing lexical types. In some cases, the orthographic components of the lexical rules were also adjusted to accommodate the new lexical entries. In both test runs, the analyses of each test item were hand-checked against the translation provided by Nordlinger (1998). An item is counted as correctly analyzed if the set of analyses returned by the parser includes at least one with an MRS that matches the dependency structure, illocutionary force, tense, aspect, mood, person, number, and gender information indicated. The results are shown in Table 1: With only lexical additions, the grammar was able to assign a correct parse to 55 (76%) of the test sentences, with an average ambiguity over these sentences of 12.56 parses/item. 3.5 Parse selection The parsed portion of the development set (732 items) constitutes a sufficiently large corpus to train a parse selection model using the Redwoods disambiguation technology (Toutanova et al., 2005). As part of the grammar development process, the parses were annotated using the Redwoods parse selection tool (Oepen et al., 2004). The resulting treebank was used to select appropriate parameters by 10-fold cross-validation, applying the experimentation environment and feature templates of (Velldal, 2007). The optimal feature set included 2-level grandparenting, 3-grams of lexical entry types, and both constituent weight features. In the cross-validation trials on the development set, this model achieved a parse selection accuracy of 80.2% (random choice baseline: 23.9%). A model with the same features was then trained on all 544 ambiguous examples from the development set and used to rank the parses of the test set. It ranked the correct parse (exact match) highest in 75.0% of the test sentences. This is well above the random-choice baseline of 18.4%, and affirms the cross-linguistic validity of the parseselection techniques. 3.6 Summary This section has presented the Matrix-derived grammar of Wambaya, illustrating its semantic representations and analyses and measuring its performance against held-out data. I hope to have shown the grammar to be reasonably substantial, and thus an interesting case study with which to evaluate the Grammar Matrix itself. 4 Evaluation of Grammar Matrix It is not possible to directly compare the development of a grammar for the same language, by the same grammar engineer, with and without the assistance of the Grammar Matrix. Therefore, in this section, I evaluate the usefulness of the Grammar Matrix by measuring the extent to which the Wambaya grammar as developed makes use of types defined in Matrix as well as the extent to which Matrix-defined types had to be modified. The former is in some sense a measure of the usefulness of the Matrix, and the latter is a measure of its correctness. While the libraries and customization system were used in the initial grammar development, this evaluation primarily concerns itself with the Matrix core type hierarchy. 
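The parse-selection step described in Section 3.5 amounts to scoring each candidate parse with a weighted feature vector and ranking. The sketch below uses invented features and weights; the actual models use grandparented rule applications, lexical-type trigrams, and constituent-weight features estimated from the treebank.

from collections import Counter

def score(features, weights):
    return sum(weights.get(f, 0.0) * count for f, count in features.items())

def rank(parses, weights):
    """parses: list of (parse id, feature Counter); returns ids best-first."""
    return [pid for pid, feats in
            sorted(parses, key=lambda p: score(p[1], weights), reverse=True)]

# Invented feature names and weights, for illustration only.
weights = {"rule:head-comp^gp:aux": 1.2, "lextype-3gram:n-v-v": 0.4,
           "rule:adjunct^gp:root": -0.3}
parses = [("p1", Counter({"rule:head-comp^gp:aux": 2, "lextype-3gram:n-v-v": 1})),
          ("p2", Counter({"rule:adjunct^gp:root": 3}))]
print(rank(parses, weights))                       # ['p1', 'p2']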
The customization-provided Wambaya-specific type definitions for word order, lexical types, and coordination constructions were used for inspiration, but most needed fairly extensive modification. This is particularly unsurprising for basic word order, where the closest available option (“free word order”) was taken, in the absence of a pre-packaged analysis of non-configurationality and second-position phenomena. The other changes to the library output were largely side-effects of this fundamental difference. Table 2 presents some measurements of the overall size of the Wambaya grammar. Since HPSG grammars consist of types organized into a hierarchy and instances of those types, the unit of measure for these evaluations will be types and/or instances. The 982 N Matrix types 891 ordinary 390 pos disjunctions 591 Wambaya-specific types 911 Phrase structure rules 83 Lexical rules 161 Lexical entries 1528 Table 2: Size of Wambaya grammar Matrix core types w/ POS types Directly used 132 34% 136 15% Indirectly used 98 25% 584 66% Total types used 230 59% 720 81% Types unused 160 41% 171 19% Types modified 16 4% 16 2% Total 390 100% 891 100% Table 3: Matrix core types used in Wambaya grammar Wambaya grammar includes 891 types defined in the Matrix core type hierarchy. These in turn include 390 ordinary types, and 591 ‘disjunctive’ types, the powerset of 9 part of speech types. These are provided in the Matrix so that Matrix users can easily refer to classes of, say, “nouns and verbs” or “nouns and verbs and adjectives”. The Wambaya-specific portion of the grammar includes 911 types. These types are invoked in the definitions of the phrase structure rules, lexical rules, and lexical entries. Including the disjunctive part-of-speech types, just under half (49%) of the types in the grammar are provided by the Matrix. However, it is necessary to look more closely; just because a type is provided in the Matrix core hierarchy doesn’t mean that it is invoked by any rules or lexical entries of the Wambaya grammar. The breakdown of types used is given in Table 3. Types that are used directly are either called as supertypes for types defined in the Wambayaspecific portion of the grammar, or used as the value of some feature in a type constraint in the Wambayaspecific portion of the grammar. Types that are used indirectly are either ancestor types to types that are used directly, or types that are used as the value of a feature in a constraint in the Matrix core types on a type that is used (directly or indirectly) by the Wambaya-specific portion of the grammar. Relatively few (16) of the Matrix-provided types needed to be modified. These were types that were useful, but somehow unsuitable, and typically deeply interwoven into the type system, such that not using and them and defining parallel types in their place would be inconvenient. Setting aside the types for part of speech disjunctions, 59% of the Matrix-provided types are invoked by the Wambaya-specific portion of the grammar. While further development of the Wambaya grammar might make use of some of the remaining 41% of the types, this work suggests that there is a substantial amount of information in the Matrix core type hierarchy which would better be stored as part of the typological libraries. In particular, the analyses of argument realization implemented in the Matrix were not used for this grammar. 
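The direct and indirect usage counts in Table 3 can be computed as a closure over the type system: a Matrix type counts as directly used when the Wambaya-specific grammar names it as a supertype or feature value, and as indirectly used when it is reached from a used type through supertype links or feature constraints in the Matrix itself. The toy hierarchy below is invented for illustration.

def used_types(direct, supertypes, constraints):
    """direct: Matrix types named by the language-specific grammar.
    supertypes: type -> set of its supertypes.
    constraints: type -> set of types appearing as feature values on it."""
    reached, frontier = set(direct), list(direct)
    while frontier:
        t = frontier.pop()
        for nxt in supertypes.get(t, set()) | constraints.get(t, set()):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached

# Invented fragment of a type hierarchy.
supertypes = {"basic-head-comp-phrase": {"head-valence-phrase"},
              "head-valence-phrase": {"phrase"}}
constraints = {"basic-head-comp-phrase": {"canonical-synsem"}}
direct = {"basic-head-comp-phrase"}
print(sorted(used_types(direct, supertypes, constraints) - direct))
# -> ['canonical-synsem', 'head-valence-phrase', 'phrase']: indirectly used types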
The types associated with argument realization in configurational languages should be moved into the wordorder library, which should also be extended to include an analysis of Wambaya-style radical nonconfigurationality. At the same time, the lexical amalgamation analysis of the features used in longdistance dependencies (Sag, 1997) was found to be incompatible with the approach to argument realization in Wambaya, and a phrasal amalgamation analysis was implemented instead. This again suggests that lexical v. phrasal amalgamation should be encoded in the libraries, and selected according to the word order pattern of the language. As for parts of speech, of the nine types provided by the Matrix, five were used in the Wambaya grammar (verb, noun, adj, adv, and det) and four were not (num, conj, comp, and adp(osition)). Four disjunctive types were directly invoked, to describe phenomena applying to nouns and adjectives, verbs and adverbs, anything but nouns, and anything but determiners. While it was convenient to have the disjunctive types predefined, it also seems that a much smaller set of types would suffice in this case. Since the nine proposed part of speech types have varying crosslinguistic validity (e.g., not all languages have conjunctions), it might be better to provide software support for creating the disjunctive types as the need arises, rather than predefining them. Even though the number of Matrix-provided types is small compared to the grammar as a whole, the relatively short development time indicates that the types that were incorporated were quite useful. In providing the fundamental organization of the gram983 mar, to the extent that that organization is consistent with the language modeled, these types significantly ease the path to creating a working grammar. The short development time required to create the Wambaya grammar presents a qualitative evaluation of the Grammar Matrix as a crosslinguistic resource, as one goal of the Grammar Matrix is to reduce the cost of developing precision grammars. The fact that a grammar capable of assigning valid analyses to an interesting portion of sentences from naturally occurring text could be developed in less than 5.5 person-weeks of effort suggests that this goal is indeed met. This is particularly encouraging in the case of endangered and other resource-poor languages. A grammar such as the one described here could be a significant aide in analyzing additional texts as they are collected, and in identifying constructions that have not yet been analyzed (cf. Baldwin et al, 2005). 5 Conclusion This paper has presented a precision, hand-built grammar for the Australian language Wambaya, and through that grammar a case study evaluation of the LinGO Grammar Matrix. True validation of the Matrix qua hypothesized linguistic universals requires many more such case studies, but this first test is promising. Even though Wambaya is in some respects very different from the well-studied languages on which the Matrix is based, the existing machinery otherwise worked quite well, providing a significant jump-start to the grammar development process. While the Wambaya grammar has a long way to go to reach the complexity and range of linguistic phenomena handled by, for example, the LinGO English Resource Grammar, it was shown to provide analyses of an interesting portion of a naturally occurring text. This suggests that the methodology of building such grammars could be profitably incorporated into language documentation efforts. 
The Grammar Matrix allows new grammars to directly leverage the expertise in grammar engineering gained in extensive work on previous grammars of better-studied languages. Furthermore, the design of the Matrix is such that it is not a static object, but intended to evolve and be refined as more languages are brought into its purview. Generalizing the core hierarchy and libraries of the Matrix to support languages like Wambaya can extend its typological reach and further its development as an investigation in computational linguistic typology. Acknowledgments I would like to thank Rachel Nordlinger for providing access to the data used in this work in electronic form, as well as for answering questions about Wambaya; Russ Hugo for data entry of the lexicon; Stephan Oepen for assistance with the parse ranking experiments; and Scott Drellishak, Stephan Oepen, and Laurie Poulson for general discussion. This material is based upon work supported by the National Science Foundation under Grant No. BCS-0644097. References J. Baldridge, S. Chatterjee, A. Palmer, and B. Wing. 2007. DotCCG and VisCCG: Wiki and programming paradigms for improved grammar engineering with OpenCCG. In T.H. King and E.M. Bender, editors, GEAF 2007, Stanford, CA. CSLI. T. Baldwin, J. Beavers, E.M. Bender, D. Flickinger, Ara Kim, and S. Oepen. 2005. Beauty and the beast: What running a broad-coverage precision grammar over the BNC taught us about the grammar — and the corpus. In S. Kepser and M. Reis, editors, Linguistic Evidence: Empirical, Theoretical, and Computational Perspectives, pages 49–70. Mouton de Gruyter, Berlin. J.A. Bateman, I. Kruijff-Korbayov´a, and G.-J. Kruijff. 2005. Multilingual resource sharing across both related and unrelated languages: An implemented, opensource framework for practical natural language generation. Research on Language and Computation, 3(2):191–219. E.M. Bender and D. Flickinger. 2005. Rapid prototyping of scalable grammars: Towards modularity in extensions to a language-independent core. In IJCNLP-05 (Posters/Demos), Jeju Island, Korea. E.M. Bender, D. Flickinger, and S. Oepen. 2002. The grammar matrix: An open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In J. Carroll, N. Oostdijk, and R. Sutcliffe, editors, Proceedings of the Workshop on Grammar Engineering and Evaluation, COLING 19, pages 8–14, Taipei, Taiwan. E.M. Bender. 2007. Combining research and pedagogy in the development of a crosslinguistic grammar resource. In T.H. King and E.M. Bender, editors, GEAF 2007, Stanford, CA. CSLI. 984 D.G. Bobrow, C. Condoravdi, R.S. Crouch, V. de Paiva, L. Karttunen, T.H. King, R. Nairn, L. Price, and A Zaenen. 2007. Precision-focused textual inference. In ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, Prague, Czech Republic. M. Butt, H. Dyvik, T.H. King, H. Masuichi, and C. Rohrer. 2002. The parallel grammar project. In J. Carroll, N. Oostdijk, and R. Sutcliffe, editors, Proceedings of the Workshop on Grammar Engineering and Evaluation at COLING 19, pages 1–7. A. Copestake, D. Flickinger, C. Pollard, and I.A. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language & Computation, 3(2–3):281– 332. A. Copestake. 2002. Implementing Typed Feature Structure Grammars. CSLI, Stanford, CA. C. Donohue and I.A. Sag. 1999. Domains in Warlpiri. Paper presented at HPSG 99, University of Edinburgh. S. Drellishak and E.M. Bender. 2005. A coordination module for a crosslinguistic grammar resource. 
In Stefan M¨uller, editor, HPSG 2005, pages 108–128, Stanford. CSLI. D. Flickinger and E.M. Bender. 2003. Compositional semantics in a multilingual grammar resource. In E.M. Bender, D. Flickinger, F. Fouvry, and M. Siegel, editors, Proceedings of the Workshop on Ideas and Strategies for Multilingual Grammar Development, ESSLLI 2003, pages 33–42, Vienna, Austria. D. Flickinger. 2000. On building a more efficient grammar by exploiting types. Natural Language Engineering, 6 (1):15 – 28. L. Hellan and P. Haugereid. 2003. NorSource: An exercise in Matrix grammar-building design. In E.M. Bender, D. Flickinger, F. Fouvry, and M. Siegel, editors, Proceedings of the Workshop on Ideas and Strategies for Multilingual Grammar Development, ESSLLI 2003, pages 41–48, Vienna, Austria. R. Kim, M. Dalrymple, R.M. Kaplan, T.H. King, H. Masuichi, and T. Ohkuma. 2003. Multilingual grammar development via grammar porting. In E.M. Bender, D. Flickinger, F. Fouvry, and M. Siegel, editors, Proceedings of the Workshop on Ideas and Strategies for Multilingual Grammar Development, ESSLLI 2003, pages 49–56, Vienna, Austria. T.H. King, M. Forst, J. Kuhn, and M. Butt. 2005. The feature space in parallel grammar writing. Research on Language and Computation, 3(2):139–163. A. Kinyon, O. Rambow, T. Scheffler, S.W. Yoon, and A.K. Joshi. 2006. The metagrammar goes multilingual: A cross-linguistic look at the V2-phenomenon. In TAG+8, Sydney, Australia. V. Kordoni and J. Neu. 2005. Deep analysis of Modern Greek. In K-Y Su, J. Tsujii, and J-H Lee, editors, Lecture Notes in Computer Science, volume 3248, pages 674–683. Springer-Verlag, Berlin. F. Lareau and L. Wanner. 2007. Towards a generic multilingual dependency grammar for text generation. In T.H. King and E.M. Bender, editors, GEAF 2007, pages 203–223, Stanford, CA. CSLI. J.T. Lønning and S. Oepen. 2006. Re-usable tools for precision machine translation. In COLING|ACL 2006 Interactive Presentation Sessions, pages 53 – 56, Sydney, Australia. M. Marimon, N. Bel, and N. Seghezzi. 2007. Test-suite construction for a Spanish grammar. In T.H. King and E.M. Bender, editors, GEAF 2007, Stanford, CA. CSLI. Stefan M¨uller. 2007. The Grammix CD-ROM: A software collection for developing typed feature structure grammars. In T.H. King and E.M. Bender, editors, GEAF 2007, Stanford, CA. CSLI. R. Nordlinger. 1998. A Grammar of Wambaya, Northern Australia. Research School of Pacific and Asian Studies, The Australian National University, Canberra. S. Oepen, E.M. Bender, U. Callmeier, D. Flickinger, and M. Siegel. 2002. Parallel distributed grammar engineering for practical applications. In Proceedings of the Workshop on Grammar Engineering and Evaluation, COLING 19, Taipei, Taiwan. S. Oepen, D. Flickinger, K. Toutanova, and C.D. Manning. 2004. LinGO Redwoods. A rich and dynamic treebank for HPSG. Journal of Research on Language and Computation, 2(4):575 – 596. Stephan Oepen, Erik Velldal, Jan Tore Lnning, Paul Meurer, Victoria Rosn, and Dan Flickinger. 2007. Towards hybrid quality-oriented machine translation. On linguistics and probabilities in MT. In TMI 2007, Skvde, Sweden. C. Pollard and I.A. Sag. 1994. Head-Driven Phrase Structure Grammar. CSLI, Stanford, CA. S. Robinson, G. Aumann, and S. Bird. 2007. Managing fieldwork data with Toolbox and the Natural Language Toolkit. Language Documentation and Conservation, 1:44–57. I.A. Sag. 1997. English relative clause constructions. Journal of Linguistics, 33(2):431 – 484. M. Siegel and E.M. Bender. 2002. Efficient deep processing of Japanese. 
In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization, COLING 19, Taipei, Taiwan. K. Toutanova, C.D. Manning, D. Flickinger, and S. Oepen. 2005. Stochastic HPSG parse selection using the Redwoods corpus. Journal of Research on Language and Computation, 3(1):83 – 105. E. Velldal. 2007. Empirical Realization Ranking. Ph.D. thesis, University of Oslo, Department of Informatics. 985
Proceedings of ACL-08: HLT, pages 986–993, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Better Alignments = Better Translations? Kuzman Ganchev Computer & Information Science University of Pennsylvania [email protected] Jo˜ao V. Grac¸a L2F INESC-ID Lisboa, Portugal [email protected] Ben Taskar Computer & Information Science University of Pennsylvania [email protected] Abstract Automatic word alignment is a key step in training statistical machine translation systems. Despite much recent work on word alignment methods, alignment accuracy increases often produce little or no improvements in machine translation quality. In this work we analyze a recently proposed agreement-constrained EM algorithm for unsupervised alignment models. We attempt to tease apart the effects that this simple but effective modification has on alignment precision and recall trade-offs, and how rare and common words are affected across several language pairs. We propose and extensively evaluate a simple method for using alignment models to produce alignments better-suited for phrase-based MT systems, and show significant gains (as measured by BLEU score) in end-to-end translation systems for six languages pairs used in recent MT competitions. 1 Introduction The typical pipeline for a machine translation (MT) system starts with a parallel sentence-aligned corpus and proceeds to align the words in every sentence pair. The word alignment problem has received much recent attention, but improvements in standard measures of word alignment performance often do not result in better translations. Fraser and Marcu (2007) note that none of the tens of papers published over the last five years has shown that significant decreases in alignment error rate (AER) result in significant increases in translation performance. In this work, we show that by changing the way the word alignment models are trained and used, we can get not only improvements in alignment performance, but also in the performance of the MT system that uses those alignments. We present extensive experimental results evaluating a new training scheme for unsupervised word alignment models: an extension of the Expectation Maximization algorithm that allows effective injection of additional information about the desired alignments into the unsupervised training process. Examples of such information include “one word should not translate to many words” or that directional translation models should agree. The general framework for the extended EM algorithm with posterior constraints of this type was proposed by (Grac¸a et al., 2008). Our contribution is a large scale evaluation of this methodology for word alignments, an investigation of how the produced alignments differ and how they can be used to consistently improve machine translation performance (as measured by BLEU score) across many languages on training corpora with up to hundred thousand sentences. In 10 out of 12 cases we improve BLEU score by at least 1 4 point and by more than 1 point in 4 out of 12 cases. After presenting the models and the algorithm in Sections 2 and 3, in Section 4 we examine how the new alignments differ from standard models, and find that the new method consistently improves word alignment performance, measured either as alignment error rate or weighted F-score. 
Section 5 explores how the new alignments lead to consistent and significant improvement in a state of the art phrase base machine translation by using posterior decoding rather than Viterbi decoding. We propose a heuristic for tuning posterior decoding in the absence of annotated alignment data and show improvements over baseline systems for six different 986 language pairs used in recent MT competitions. 2 Statistical word alignment Statistical word alignment (Brown et al., 1994) is the task identifying which words are translations of each other in a bilingual sentence corpus. Figure 2 shows two examples of word alignment of a sentence pair. Due to the ambiguity of the word alignment task, it is common to distinguish two kinds of alignments (Och and Ney, 2003). Sure alignments (S), represented in the figure as squares with borders, for single-word translations and possible alignments (P), represented in the figure as alignments without boxes, for translations that are either not exact or where several words in one language are translated to several words in the other language. Possible alignments can can be used either to indicated optional alignments, such as the translation of an idiom, or disagreement between annotators. In the figure red/black dots indicates correct/incorrect predicted alignment points. 2.1 Baseline word alignment models We focus on the hidden Markov model (HMM) for alignment proposed by (Vogel et al., 1996). This is a generalization of IBM models 1 and 2 (Brown et al., 1994), where the transition probabilities have a first-order Markov dependence rather than a zerothorder dependence. The model is an HMM, where the hidden states take values from the source language words and generate target language words according to a translation table. The state transitions depend on the distance between the source language words. For source sentence s the probability of an alignment a and target sentence t can be expressed as: p(t, a | s) = Y j pd(aj|aj −aj−1)pt(tj|saj), (1) where aj is the index of the hidden state (source language index) generating the target language word at index j. As usual, a “null” word is added to the source sentence. Figure 1 illustrates the mapping between the usual HMM notation and the HMM alignment model. 2.2 Baseline training All word alignment models we consider are normally trained using the Expectation Maximization s1 s1 s2 s3 we know the way sabemos el camino null usual HMM word alignment meaning Si (hidden) source language word i Oj (observed) target language word j aij (transition) distortion model bij (emission) translation model Figure 1: Illustration of an HMM for word alignment. (EM) algorithm (Dempster et al., 1977). The EM algorithm attempts to maximize the marginal likelihood of the observed data (s, t pairs) by repeatedly finding a maximal lower bound on the likelihood and finding the maximal point of the lower bound. The lower bound is constructed by using posterior probabilities of the hidden alignments (a) and can be optimized in closed form from expected sufficient statistics computed from the posteriors. For the HMM alignment model, these posteriors can be efficiently calculated by the Forward-Backward algorithm. 3 Adding agreement constraints Grac¸a et al. (2008) introduce an augmentation of the EM algorithm that uses constraints on posteriors to guide learning. Such constraints are useful for several reasons. 
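For reference, Equation 1 translates directly into code. The sketch below computes the joint probability of a target sentence and one alignment under the directional HMM, with toy distortion and translation tables standing in for the learned parameters; in training, the forward-backward algorithm sums this quantity over all alignments to obtain the posteriors used in the E-step.

def hmm_joint(src, tgt, align, p_d, p_t):
    """align[j] is the source index generating tgt[j]; src[0] is the null word.
    Distortion is modeled on the jump distance between successive alignments."""
    prob, prev = 1.0, 0
    for j, a_j in enumerate(align):
        prob *= p_d.get(a_j - prev, 1e-6) * p_t.get((tgt[j], src[a_j]), 1e-6)
        prev = a_j
    return prob

# Toy parameters for the sentence pair in Figure 1.
src = ["NULL", "sabemos", "el", "camino"]
tgt = ["we", "know", "the", "way"]
p_t = {("we", "sabemos"): 0.4, ("know", "sabemos"): 0.5,
       ("the", "el"): 0.9, ("way", "camino"): 0.8}
p_d = {0: 0.5, 1: 0.3, 2: 0.1}
print(hmm_joint(src, tgt, [1, 1, 2, 3], p_d, p_t))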
As with any unsupervised induction method, there is no guarantee that the maximum likelihood parameters correspond to the intended meaning for the hidden variables, that is, more accurate alignments using the resulting model. Introducing additional constraints into the model often results in intractable decoding and search errors (e.g., IBM models 4+). The advantage of only constraining the posteriors during training is that the model remains simple while respecting more complex requirements. For example, constraints might include “one word should not translate to many words” or that translation is approximately symmetric. The modification is to add a KL-projection step after the E-step of the EM algorithm. For each sentence pair instance x = (s, t), we find the posterior 987 distribution pθ(z|x) (where z are the alignments). In regular EM, pθ(z|x) is used to complete the data and compute expected counts. Instead, we find the distribution q that is as close as possible to pθ(z|x) in KL subject to constraints specified in terms of expected values of features f(x, z) arg min q KL(q(z) || pθ(z|x)) s.t. Eq[f(x, z)] ≤b. (2) The resulting distribution q is then used in place of pθ(z|x) to compute sufficient statistics for the M-step. The algorithm converges to a local maximum of the log of the marginal likelihood, pθ(x) = P z pθ(z, x), penalized by the KL distance of the posteriors pθ(z|x) from the feasible set defined by the constraints (Grac¸a et al., 2008): Ex[log pθ(x) − min q:Eq[f(x,z)]≤b KL(q(z) || pθ(z|x))], where Ex is expectation over the training data. They suggest how this framework can be used to encourage two word alignment models to agree during training. We elaborate on their description and provide details of implementation of the projection in Equation 2. 3.1 Agreement Most MT systems train an alignment model in each direction and then heuristically combine their predictions. In contrast, Grac¸a et al. encourage the models to agree by training them concurrently. The intuition is that the errors that the two models make are different and forcing them to agree rules out errors only made by one model. This is best exhibited in the rare word alignments, where onesided “garbage-collection” phenomenon often occurs (Moore, 2004). This idea was previously proposed by (Matusov et al., 2004; Liang et al., 2006) although the the objectives differ. In particular, consider a feature that takes on value 1 whenever source word i aligns to target word j in the forward model and -1 in the backward model. If this feature has expected value 0 under the mixture of the two models, then the forward model and backward model agree on how likely source word i is to align to target word j. More formally denote the forward model −→p (z) and backward model ←−p (z) where −→p (z) = 0 for z /∈−→ Z and ←−p (z) = 0 for z /∈←− Z (−→ Z and ←− Z are possible forward and backward alignments). Define a mixture p(z) = 1 2−→p (z) + 1 2←−p (z) for z ∈←− Z ∪−→ Z. Restating the constraints that enforce agreement in this setup: Eq[f(x, z)] = 0 with fij(x, z) = 8 > < > : 1 z ∈−→ Z and zij = 1 −1 z ∈← − Z and zij = 1 0 otherwise . 3.2 Implementation EM training of hidden Markov models for word alignment is described elsewhere (Vogel et al., 1996), so we focus on the projection step: arg min q KL(q(z) || pθ(z|x)) s.t. Eq[f(x, z)] = 0. 
(3) The optimization problem in Equation 3 can be efficiently solved in its dual formulation: arg min λ log X z pθ(z | x) exp {λ⊤f(x, z)} (4) where we have solved for the primal variables q as: qλ(z) = pθ(z | x) exp{λ⊤f(x, z)}/Z, (5) with Z a normalization constant that ensures q sums to one. We have only one dual variable per constraint, and we optimize them by taking a few gradient steps. The partial derivative of the objective in Equation 4 with respect to feature i is simply Eqλ[fi(x, z)]. So we have reduced the problem to computing expectations of our features under the model q. It turns out that for the agreement features, this reduces to computing expectations under the normal HMM model. To see this, we have by the definition of qλ and pθ, qλ(z) = −→p (z | x) + ←−p (z | x) 2 exp{λ⊤f(x, z)}/Z = −→q (z) + ←−q (z) 2 . (To make the algorithm simpler, we have assumed that the expectation of the feature f0(x, z) = {1 if z ∈−→ Z; −1 if z ∈←− Z} is set to zero to ensure that the two models −→q , ←−q are each properly normalized.) For −→q , we have: (←−q is analogous) −→p (z | x)eλ⊤f(x,z) = Y j −→p d(aj|aj −aj−1)−→p t(tj|saj) Y ij eλijfij(x,zij) = Y j,i=aj −→p d(i|i −aj−1)−→p t(tj|si)eλijfij(x,zij) = Y j,i=aj −→p d(i|i −aj−1)−→p ′ t(tj|si). 988 Where we have let −→p ′ t(tj|si) = −→p t(tj|si)eλij, and retained the same form for the model. The final projection step is detailed in Algorithm1. Algorithm 1 AgreementProjection(−→p , ←−p ) 1: λij ←0 ∀i, j 2: for T iterations do 3: −→p ′ t(j|i) ←−→p t(tj|si)eλij ∀i, j 4: ←−p ′ t(i|j) ←←−p t(si|tj)e−λij ∀i, j 5: −→q ←forwardBackward(−→p ′ t, −→p d) 6: ←−q ←forwardBackward(←−p ′ t, ←−p d) 7: λij ←λij −E− →q [ai = j] + E← −q [aj = i] ∀i, j 8: end for 9: return (−→q , ←−q ) 3.3 Decoding After training, we want to extract a single alignment from the distribution over alignments allowable for the model. The standard way to do this is to find the most probable alignment, using the Viterbi algorithm. Another alternative is to use posterior decoding. In posterior decoding, we compute for each source word i and target word j the posterior probability under our model that i aligns to j. If that probability is greater than some threshold, then we include the point i −j in our final alignment. There are two main differences between posterior decoding and Viterbi decoding. First, posterior decoding can take better advantage of model uncertainty: when several likely alignment have high probability, posteriors accumulate confidence for the edges common to many good alignments. Viterbi, by contrast, must commit to one high-scoring alignment. Second, in posterior decoding, the probability that a 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 8 0 · · · · · · · · · 0 · · · · · · · · · it 1 · · · · · · · · · 1 · · · · · · · · · was 2 · · • · · · · · · 2 · · • · · · · · · an 3 · · · · • · · · · 3 • · · · • · · · · animated 4 · · · · · • · · · 4 · · · · · • · · · , 5 · · • · · · • · · 5 · · · · · · • · · very 6 • · · • • • · • · 6 · · · · · · · • · convivial 7 • · · · · · · · · 7 • · · · · · · · · game 8 · · · · · · · · • 8 · · · · · · · · • . jugaban de una manera animada y muy cordial . jugaban de una manera animada y muy cordial . Figure 2: An example of the output of HMM trained on 100k the EPPS data. Left: Baseline training. Right: Using agreement constraints. target word aligns to none or more than one word is much more flexible: it depends on the tuned threshold. 
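Algorithm 1 can be rendered as the following sketch. The forward-backward routine is stubbed out (a real implementation would run the HMM dynamic program and return, for each source position i and target position j, the posterior probability of a link), and the translation tables are toy dictionaries keyed by (i, j); only the structure of the multiplicative reweighting and the dual gradient updates on the lambdas is meant to be faithful.

import math
from collections import defaultdict

def forward_backward(p_t, p_d):
    # Placeholder: a real version computes HMM link posteriors; here we just normalize.
    total = sum(p_t.values()) or 1.0
    return {ij: v / total for ij, v in p_t.items()}

def agreement_projection(fwd_pt, fwd_pd, bwd_pt, bwd_pd, T=10, step=1.0):
    lam = defaultdict(float)                                  # one dual variable per (i, j)
    q_fwd, q_bwd = {}, {}
    for _ in range(T):
        fwd = {ij: p * math.exp(lam[ij]) for ij, p in fwd_pt.items()}   # line 3 of Algorithm 1
        bwd = {ij: p * math.exp(-lam[ij]) for ij, p in bwd_pt.items()}  # line 4
        q_fwd = forward_backward(fwd, fwd_pd)                           # lines 5-6
        q_bwd = forward_backward(bwd, bwd_pd)
        for ij in set(q_fwd) | set(q_bwd):                              # line 7: gradient step
            lam[ij] -= step * (q_fwd.get(ij, 0.0) - q_bwd.get(ij, 0.0))
    return q_fwd, q_bwd

fwd_pt = {(1, 0): 0.4, (1, 1): 0.5, (2, 2): 0.9, (3, 3): 0.8}
bwd_pt = {(1, 0): 0.3, (1, 1): 0.6, (2, 2): 0.9, (3, 3): 0.7}
q_fwd, q_bwd = agreement_projection(fwd_pt, None, bwd_pt, None)

The returned q distributions then stand in for the usual posteriors when the expected counts for the M-step are collected.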
4 Word alignment results We evaluated the agreement HMM model on two corpora for which hand-aligned data are widely available: the Hansards corpus (Och and Ney, 2000) of English/French parliamentary proceedings and the Europarl corpus (Koehn, 2002) with EPPS annotation (Lambert et al., 2005) of English/Spanish. Figure 2 shows two machine-generated alignments of a sentence pair. The black dots represent the machine alignments and the shading represents the human annotation (as described in the previous section), on the left using the regular HMM model and on the right using our agreement constraints. The figure illustrates a problem known as garbage collection (Brown et al., 1993), where rare source words tend to align to many target words, since the probability mass of the rare word translations can be hijacked to fit the sentence pair. Agreement constraints solve this problem, because forward and backward models cannot agree on the garbage collection solution. Grac¸a et al. (2008) show that alignment error rate (Och and Ney, 2003) can be improved with agreement constraints. Since AER is the standard metric for alignment quality, we reproduce their results using all the sentences of length at most 40. For the Hansards corpus we improve from 15.35 to 7.01 for the English →French direction and from 14.45 to 6.80 for the reverse. For English →Spanish we improve from 28.20 to 19.86 and from 27.54 to 19.18 for the reverse. These values are competitive with other state of the art systems (Liang et al., 2006). Unfortunately, as was shown by Fraser and Marcu (2007) AER can have weak correlation with translation performance as measured by BLEU score (Papineni et al., 2002), when the alignments are used to train a phrase-based translation system. Consequently, in addition to AER, we focus on precision and recall. Figure 3 shows the change in precision and recall with the amount of provided training data for the Hansards corpus. We see that agreement constraints improve both precision and recall when we 989 65 70 75 80 85 90 95 100 1 10 100 1000 Thousands of training sentences Agreement Baseline 65 70 75 80 85 90 95 100 1 10 100 1000 Thousands of training sentences Agreement Baseline Figure 3: Effect of posterior constraints on precision (left) and recall (right) learning curves for Hansards En→Fr. 10 20 30 40 50 60 70 80 90 100 1 10 100 1000 Thousands of training sentences Rare Common Agreement Baseline 10 20 30 40 50 60 70 80 90 100 1 10 100 1000 Thousands of training sentences Rare Common Agreement Baseline Figure 4: Left: Precision. Right: Recall. Learning curves for Hansards En→Fr split by rare (at most 5 occurances) and common words. use Viterbi decoding, with larger improvements for small amounts of training data. We see a similar improvement on the EPPS corpus. Motivated by the garbage collection problem, we also analyze common and rare words separately. Figure 4 shows precision and recall learning curves for rare and common words. We see that agreement constraints improve precision but not recall of rare words and improve recall but not precision of common words. As described above an alternative to Viterbi decoding is to accept all alignments that have probability above some threshold. By changing the threshold, we can trade off precision and recall. Figure 5 compares this tradeoff for the baseline and agreement model. 
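The threshold sweep behind these trade-off curves, together with the precision, recall, and AER computations, can be sketched as follows. Following Och and Ney (2003), precision is measured against the possible (P) links, recall against the sure (S) links, and AER = 1 - (|A∩S| + |A∩P|) / (|A| + |S|); the posterior matrix and gold sets below are toy values.

def posterior_decode(posterior, threshold):
    return {ij for ij, p in posterior.items() if p >= threshold}

def alignment_metrics(A, S, P):
    precision = len(A & P) / len(A) if A else 0.0
    recall = len(A & S) / len(S) if S else 0.0
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, aer

posterior = {(0, 0): 0.9, (1, 1): 0.8, (1, 2): 0.4, (2, 3): 0.2}   # P(link i-j | sentence pair)
S = {(0, 0), (1, 1)}                                               # sure gold links
P = S | {(1, 2)}                                                   # possible gold links
for t in (0.1, 0.5, 0.9):                       # raising t trades recall for precision
    A = posterior_decode(posterior, t)
    print(t, alignment_metrics(A, S, P))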
We see that the precision/recall curve for agreement is entirely above the baseline curve, so for any recall value we can achieve higher precision than the baseline for either corpus. In Figure 6 we break down the same analysis into rare and non rare words. Figure 7 shows an example of the same sentence, using the same model where in one case Viterbi decoding was used and in the other case Posterior decoding tuned to minimize AER on a development set 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Recall Precision Baseline Agreement 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Recall Precision Baseline Agreement Figure 5: Precision and recall trade-off for posterior decoding with varying threshold. Left: Hansards En→Fr. Right: EPPS En→Es. 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Recall Precision Baseline Agreement 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Recall Precision Baseline Agreement Figure 6: Precision and recall trade-off for posterior on Hansards En→Fr. Left: rare words only. Right: common words only. was used. An interesting difference is that by using posterior decoding one can have n-n alignments as shown in the picture. A natural question is how to tune the threshold in order to improve machine translation quality. In the next section we evaluate and compare the effects of the different alignments in a phrase based machine translation system. 5 Phrase-based machine translation In this section we attempt to investigate whether our improved alignments produce improved machine 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 8 0 • • • · · · · · · 0 • • • · · · · · · firstly 1 · · · • · · · · · 1 · · · • · · · · · , 2 · · · · • · · · · 2 · · · · • · · · · we 3 · · · · • · · · · 3 · · · · • · · · · have 4 · · · · · • · · · 4 · · · · · • · · · a 5 · · · · · · · • · 5 · · · · · · · • · legal 6 · · · · · · • · · 6 · · · · · · • · · framework 8 · · · · · · · · • 8 · · · · · · · · • . en primero lugar , tenemos un marco jur´ıdico . en primero lugar , tenemos un marco jur´ıdico . Figure 7: An example of the output of HMM trained on 100k the EPPS data using agreement HMM. Left: Viterbi decoding. Right: Posterior decoding tuned to minimize AER. The addition is en-firstly and tenemos-have. 990 translation. In particular we fix a state of the art machine translation system1 and measure its performance when we vary the supplied word alignments. The baseline system uses GIZA model 4 alignments and the open source Moses phrase-based machine translation toolkit2, and performed close to the best at the competition last year. For all experiments the experimental setup is as follows: we lowercase the corpora, and train language models from all available data. The reasoning behind this is that even if bilingual texts might be scarce in some domain, monolingual text should be relatively abundant. We then train the competing alignment models and compute competing alignments using different decoding schemes. For each alignment model and decoding type we train Moses and use MERT optimization to tune its parameters on a development set. Moses is trained using the grow-diag-final-and alignment symmetrization heuristic and using the default distance base distortion model. We report BLEU scores using a script available with the baseline system. The competing alignment models are GIZA Model 4, our implementation of the baseline HMM alignment and our agreement HMM. 
We would like to stress that the fair comparison is between the performance of the baseline HMM and the agreement HMM, since Model 4 is more complicated and can capture more structure. However, we will see that for moderate sized data the agreement HMM performs better than both its baseline and GIZA Model 4. 5.1 Corpora In addition to the Hansards corpus and the Europarl English-Spanish corpus, we used four other corpora for the machine translation experiments. Table 1 summarizes some statistics of all corpora. The German and Finnish corpora are also from Europarl, while the Czech corpus contains news commentary. All three were used in recent ACL workshop shared tasks and are available online3. The Italian corpus consists of transcribed speech in the travel domain and was used in the 2007 workshop on spoken language translation4. We used the development and 1www.statmt.org/wmt07/baseline.html 2www.statmt.org/moses/ 3http://www.statmt.org 4http://iwslt07.itc.it/ Corpus Train Len Test Rare (%) Unk (%) En, Fr 1018 17.4 1000 0.3, 0.4 0.1, 0.2 En, Es 126 21.0 2000 0.3, 0.5 0.2, 0.3 En, Fi 717 21.7 2000 0.4, 2.5 0.2, 1.8 En, De 883 21.5 2000 0.3, 0.5 0.2, 0.3 En, Cz 57 23.0 2007 2.3, 6.6 1.3, 3.9 En, It 20 9.4 500 3.1, 6.2 1.4, 2.9 Table 1: Statistics of the corpora used in MT evaluation. The training size is measured in thousands of sentences and Len refers to average (English) sentence length. Test is the number of sentences in the test set. Rare and Unk are the percentage of tokens in the test set that are rare and unknown in the training data, for each language. 26 28 30 32 34 36 10000 100000 1e+06 Training data size (sentences) Agreement Post-pts Model 4 Baseline Viterbi Figure 8: BLEU score as the amount of training data is increased on the Hansards corpus for the best decoding method for each alignment model. tests sets from the workshops when available. For Italian corpus we used dev-set 1 as development and dev-set 2 as test. For Hansards we randomly chose 1000 and 500 sentences from test 1 and test 2 to be testing and development sets respectively. Table 1 summarizes the size of the training corpus in thousands of sentences, the average length of the English sentences as well as the size of the testing corpus. We also report the percentage of tokens in the test corpus that are rare or not encountered in the training corpus. 5.2 Decoding Our initial experiments with Viterbi decoding and posterior decoding showed that for our agreement model posterior decoding could provide better alignment quality. When labeled data is available, we can tune the threshold to minimize AER. When labeled data is not available we use a different heuristic to 991 tune the threshold: we choose a threshold that gives the same number of aligned points as Viterbi decoding produces. In principle, we would like to tune the threshold by optimizing BLEU score on a development set, but that is impractical for experiments with many pairs of languages. We call this heuristic posterior-points decoding. As we shall see, it performs well in practice. 5.3 Training data size The HMM alignment models have a smaller parameter space than GIZA Model 4, and consequently we would expect that they would perform better when the amount of training data is limited. We found that this is generally the case, with the margin by which we beat model 4 slowly decreasing until a crossing point somewhere in the range of 105 - 106 sentences. 
We will see in section 5.3.1 that the Viterbi decoding performs best for the baseline HMM model, while posterior decoding performs best for our agreement HMM model. Figure 8 shows the BLEU score for the baseline HMM, our agreement model and GIZA Model 4 as we vary the amount of training data from 104 - 106 sentences. For all but the largest data sizes we outperform Model 4, with a greater margin at lower training data sizes. This trend continues as we lower the amount of training data further. We see a similar trend with other corpora. 5.3.1 Small to Medium Training Sets Our next set of experiments look at our performance in both directions across our 6 corpora, when we have small to moderate amounts of training data: for the language pairs with more than 100,000 sentences, we use only the first 100,000 sentences. Table 2 shows the performance of all systems on these datasets. In the table, post-pts and post-aer stand for posterior-points decoding and posterior decoding tuned for AER. With the notable exception of Czech and Italian, our system performs better than or comparable to both baselines, even though it uses a much more limited model than GIZA’s Model 4. The small corpora for which our models do not perform as well as GIZA are the ones with a lot of rare words. We suspect that the reason for this is that we do not implement smoothing, which has been shown to be important, especially in situations with a lot of rare words. X →En En →X Base Agree Base Agree GIZA M4 23.92 17.89 De Viterbi 24.08 23.59 18.15 18.13 post-pts 24.24 24.65(+) 18.18 18.45(+) GIZA M4 18.29 11.05 Fi Viterbi 18.79 18.38 11.17 11.54 post-pts 18.88 19.45(++) 11.47 12.48(++) GIZA M4 33.12 26.90 Fr Viterbi 32.42 32.15 25.85 25.48 post-pts 33.06 33.09(≈) 25.94 26.54(+) post-aer 31.81 33.53(+) 26.14 26.68(+) GIZA M4 30.24 30.09 Es Viterbi 29.65 30.03 29.76 29.85 post-pts 29.91 30.22(++) 29.71 30.16(+) post-aer 29.65 30.34(++) 29.78 30.20(+) GIZA M4 51.66 41.99 It Viterbi 52.20 52.09 41.40 41.28 post-pts 51.06 51.14(−−) 41.63 41.79(≈) GIZA M4 22.78 12.75 Cz Viterbi 21.25 21.89 12.23 12.33 post-pts 21.37 22.51(++) 12.16 12.47(+) Table 2: BLEU scores for all language pairs using up to 100k sentences. Results are after MERT optimization. The marks (++)and (+)denote that agreement with posterior decoding is better by 1 BLEU point and 0.25 BLEU points respectively than the best baseline HMM model; analogously for (−−), (−); while (≈)denotes smaller differences. 5.3.2 Larger Training Sets For four of the corpora we have more than 100 thousand sentences. The performance of the systems on all the data is shown in Table 3. German is not included because MERT optimization did not complete in time. We see that even on over a million instances, our model sometimes performs better than GIZA model 4, and always performs better than the baseline HMM. 6 Conclusions In this work we have evaluated agreementconstrained EM training for statistical word alignment models. We carefully studied its effects on word alignment recall and precision. Agreement training has a different effect on rare and common words, probably because it fixes different types of errors. It corrects the garbage collection problem for rare words, resulting in a higher precision. 
The recall improvement in common words 992 X →En En →X Base Agree Base Agree GIZA M4 22.78 14.72 Fi Viterbi 22.92 22.89 14.21 14.09 post-pts 23.15 23.43 (+) 14.57 14.74 (≈) GIZA M4 35.65 31.15 Fr Viterbi 35.19 35.17 30.57 29.97 post-pts 35.49 35.95 (+) 29.78 30.02 (≈) post-aer 34.85 35.48 (+) 30.15 30.07 (≈) GIZA M4 31.62 32.40 Es Viterbi 31.75 31.84 31.17 31.09 post-pts 31.88 32.19 (+) 31.16 31.56 (+) post-aer 31.93 32.29 (+) 31.23 31.36 (≈) Table 3: BLEU scores for all language pairs using all available data. Markings as in Table 2. can be explained by the idea that ambiguous common words are different in the two languages, so the un-ambiguous choices in one direction can force the choice for the ambiguous ones in the other through agreement constraints. To our knowledge this is the first extensive evaluation where improvements in alignment accuracy lead to improvements in machine translation performance. We tested this hypothesis on six different language pairs from three different domains, and found that the new alignment scheme not only performs better than the baseline, but also improves over a more complicated, intractable model. In order to get the best results, it appears that posterior decoding is required for the simplistic HMM alignment model. The success of posterior decoding using our simple threshold tuning heuristic is fortunate since no labeled alignment data are needed: Viterbi alignments provide a reasonable estimate of aligned words needed for phrase extraction. The nature of the complicated relationship between word alignments, the corresponding extracted phrases and the effects on the final MT system still begs for better explanations and metrics. We have investigated the distribution of phrase-sizes used in translation across systems and languages, following recent investigations (Ayan and Dorr, 2006), but unfortunately found no consistent correlation with BLEU improvement. Since the alignments we extracted were better according to all metrics we used, it should not be too surprising that they yield better translation performance, but perhaps a better tradeoff can be achieved with a deeper understanding of the link between alignments and translations. Acknowledgments J. V. Grac¸a was supported by a fellowship from Fundac¸˜ao para a Ciˆencia e Tecnologia (SFRH/ BD/ 27528/ 2006). K. Ganchev was partially supported by NSF ITR EIA 0205448. References N. F. Ayan and B. J. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In Proc. ACL. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, M. J. Goldsmith, J. Hajic, R. L. Mercer, and S. Mohanty. 1993. But dictionaries are data too. In Proc. HLT. P. F. Brown, S. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1994. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Royal Statistical Society, Ser. B, 39(1):1– 38. A. Fraser and D. Marcu. 2007. Measuring word alignment quality for statistical machine translation. Comput. Linguist., 33(3):293–303. J. Grac¸a, K. Ganchev, and B. Taskar. 2008. Expectation maximization and posterior constraints. In Proc. NIPS. P. Koehn. 2002. Europarl: A multilingual corpus for evaluation of machine translation. P. Lambert, A.De Gispert, R. Banchs, and J. B. Mari˜no. 2005. Guidelines for word alignment evaluation and manual alignment. 
In Language Resources and Evaluation, Volume 39, Number 4. P. Liang, B. Taskar, and D. Klein. 2006. Alignment by agreement. In Proc. HLT-NAACL. E. Matusov, Zens. R., and H. Ney. 2004. Symmetric word alignments for statistical machine translation. In Proc. COLING. R. C. Moore. 2004. Improving IBM word-alignment model 1. In Proc. ACL. F. J. Och and H. Ney. 2000. Improved statistical alignment models. In ACL. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19–51. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proc. ACL. S. Vogel, H. Ney, and C. Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proc. COLING. 993
Proceedings of ACL-08: HLT, pages 994–1002, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Mining Parenthetical Translations from the Web by Word Alignment Dekang Lin Shaojun Zhao† Benjamin Van Durme† Marius Paşca Google, Inc. University of Rochester University of Rochester Google, Inc. Mountain View Rochester Rochester Mountain View CA, 94043 NY, 14627 NY, 14627 CA, 94043 [email protected] [email protected] [email protected] [email protected] Abstract Documents in languages such as Chinese, Japanese and Korean sometimes annotate terms with their translations in English inside a pair of parentheses. We present a method to extract such translations from a large collection of web documents by building a partially parallel corpus and use a word alignment algorithm to identify the terms being translated. The method is able to generalize across the translations for different terms and can reliably extract translations that occurred only once in the entire web. Our experiment on Chinese web pages produced more than 26 million pairs of translations, which is over two orders of magnitude more than previous results. We show that the addition of the extracted translation pairs as training data provides significant increase in the BLEU score for a statistical machine translation system. 1 Introduction In natural language documents, a term (word or phrase) is sometimes followed by its translation in another language in a pair of parentheses. We call these parenthetical translations. The following examples are from Chinese web pages (we added underlines to indicate what is being translated): (1) 美国智库布鲁金斯学会(Brookings Institution)专研 跨大西洋恐怖主义的美欧中心研究部主任杰若米·夏皮 罗(Jeremy Shapiro)却认为,... (2) 消化性溃疡的症状往往与消化不良(indigestion),胃 炎(gastritis)等其他胃部疾病症状相似. (3) 殊不知美国是不会接受(not going to fly)这一想法的 (4) …当是一次式时,叫线性规划(linear programming). †Contributions made during an internship at Google The parenthetically translated terms are typically new words, technical terminologies, idioms, products, titles of movies, books, songs, and names of persons, organizations locations, etc. Commonly, an author might use such a parenthetical when a given term has no standard translation (or transliteration), and does not appear in conventional dictionaries. That is, an author might expect a term to be an out-of-vocabulary item for the target reader, and thus helpfully provides a reference translation in situ. For example, in (1), the name Shapiro was transliterated as 夏皮罗. The name has many other transliterations in web documents, such as 夏皮洛, 夏比洛, 夏布洛, 夏皮羅, 沙皮罗, 夏皮若, 夏庇罗, 夏皮諾, 夏畢洛, 夏比羅, 夏比罗, 夏普羅, 夏批羅, 夏批罗, 夏彼羅, 夏彼罗, 夏培洛, 夏卜尔, 夏匹若 ..., where the three Chinese characters corresponds to the three syllables in Sha-pi-ro respectively. Each syllable may be mapped into different characters: 'Sha' into 夏 or 沙, 'pi' into 皮, 比, 批, and 'ro' into 罗, 洛, 若, .... Variation is not limited to the effects of phonetic similarity. Story titles, for instance, are commonly translated semantically, often leading to a number of translations that have similar meaning, yet differ greatly in lexicographic form. For example, while the movie title Syriana is sometimes phonetically transliterated as 辛瑞那, 辛瑞纳, it may also be translated semantically according to the plot of the movie, e.g., 迷中迷 (mystery in mystery), 实录 (real log), 谍对谍 (spy against spy), 油激暗战 (oiltriggered secret war), 叙利亚 (Syria), 迷经 (mystery journey), ... 
The parenthetical translations are extremely valuable both as a stand-alone on-line dictionary and as training data for statistical machine translation systems. They provide fresh data (new words) and cover a much wider range of topics than typical parallel training data for statistical machine translation systems. 994 The main contribution of this paper is a method for mining parenthetical translations by treating text snippets containing candidate pairs as a partially parallel corpus and using a word alignment algorithm to establish the correspondences between in-parenthesis and pre-parenthesis words. This technique allows us to identify translation pairs even if they only appeared once on the entire web. As a result, we were able to obtain 26.7 million Chinese-English translation pairs from web documents in Chinese. This is over two orders of magnitude more than the number of extracted translation pairs in the previously reported results (Cao, et al. 2007). The next section presents an overview of our algorithm, which is then detailed in Sections 3 and 4. We evaluate our results in Section 5 by comparison with bilingually linked Wikipedia titles and by using the extracted pairs as additional training data in a statistical machine translation system. 2 Mining Parenthetical Translations A parenthetical translation matches the pattern: (4) f1f2…fm (e1e2…en) which is a sequence of m non-English words followed by a sequence of n English words in parentheses. In the remainder of the paper, we assume the non-English text is Chinese, but our technique works for other languages as well. There have been two approaches to finding such parenthetical translations. One is to assume that the English term e1e2…en is given and use a search engine to retrieve text snippets containing e1e2…en from predominately non-English web pages (Nagata et al, 2001, Kwok et al, 2005). Another method (Cao et al, 2007) is to go through a nonEnglish corpus and collect all instances that match the parenthetical pattern in (4). We followed the second approach since it does not require a predefined list of English terms and is amendable for extraction at large scale. In both cases, one can obtain a list of candidate pairs, where the translation of the in-parenthesis terms is a suffix of the pre-parenthesis text. The lengths and frequency counts of the suffixes have been used to determine what is the translation of the in-parenthesis term (Kwok et al, 2005). For example, Table 1 lists a set of Chinese segments (with word-to-word translation underneath) that precede the English term Lower Egypt. Owing to the frequency with which 下埃及 appears as a candidate, and in varying contexts, one has a good reason to believe下埃及is the correct translation of Lower Egypt. … 下游 地区 为 下 埃及 downstream region is down Egypt … 中心 位于 下 埃及 center located-at down Egypt … 以及 所谓 的 下 埃及 and so-called of down Egypt … 叫做 下 埃及 called down Egypt Table 1: Chinese text preceding Lower Egypt Unfortunately, this heuristic does not hold as often as one might imagine. Consider the candidates for Channel Spacing in Table 2. The suffix间隔 (gap) has the highest frequency count. It is nonetheless an incomplete translation of Channel Spacing. The correct translations in rows c to h occurred with Channel Spacing only once. 
a … 为 频道 间距 λ is channel distance b … 其 频道 间距 its channel distance c … 除了 降低 波道 间距 in-addition-to reducing wave-passage distance d … 亦 展示 具 波道 间隔 also showed have wave-passage gap e … 也 就 是 频道 间隔 also therefore is channel gap f … 且 频道 的 间隔 and channel ’s gap g … 一个 重要 特性 是 信道 间隔 an important property is signal-passage gap h … 已经 能够 达到 通道 间隔 already able reach passage gap Table 2: Text preceding Channel Spacing The crucial observation we make here is that although the words like 信道 (in row g) co-occurred with Channel Spacing only once, there are many co-occurrences of 信道and Channel in other candidate pairs, such as: … 而 不 是 语音 信道 (Speech Channel) … 块 平坦 衰落 信道 (Block Flat Fading Channel) … 信道 B (Channel B) … 光纤 信道 探针 (Fiber Channel Probes) 995 … 反向 信道 (Reverse Channel) … 基带 滤波 反向 信道 (Reverse Channel) Unlike previous approaches that rely solely on the preceding text of a single English term to determine its translation, we treat the entire collection of candidate pairs as a partially parallel corpus and establish the correspondences between the words using a word alignment algorithm. At first glance, word alignment appears to be a more difficult problem than the extraction of parenthetical translations. Extraction of parenthetical translations need only determine the first preparenthesis word aligned with an in-parenthesis word, whereas word alignment requires the respective linking of all such (pre,in)-parenthesis word pairs. However, by casting the problem as word alignment, we are able to generalize across instances involving different in-parenthesis terms, giving us a larger number of, and more varied, example contexts per word. For the examples in Table 2, the words频道 (channel), 波道(wave passage), 信道(signal passage), and 通道 (passage) are aligned with Channel, and the words间距(distance) and 间隔 (gap) are aligned with Spacing. Given these alignments, the left boundary of the translated Chinese term is simply the leftmost word that is linked to one of the English words. Our algorithm consists of two steps: Step 1 constructs a partially parallel corpus. This step takes as input a large collection of Chinese web pages and converts the sentences with parentheses containing English text into pairs of candidates. Step 2 uses an unsupervised algorithm to align English and Chinese and identify the term being translated according to the left-most aligned Chinese word. If no word alignments can be established, the pair is not considered a translation. The next two sections present the details of each of the two steps. 3 Constructing a Partially Parallel Corpus 3.1 Filtering out non-translations The first step of our algorithm is to extract parentheticals and then filter out those that are not translations. This filtering is required as parenthetical translations represent only a small fraction of the usages for parentheses (see Sec. 5.1). Table 3 shows some example of parentheses that are not translations. The input to Step 1 is a collection of arbitrary web documents. We used the following criteria to identify candidate pairs: • The pre-parenthesis text (Tp) is predominantly in Chinese and the in-parenthesis text (Ti) is predominantly in English. • The concatenation of the digits in Tp must be identical to the concatenation of the digits in Ti. For example, rows a, b and c in Table 3 can be ruled out this way. • If Tp contains some text in English, the same text must also appear in Ti. This filters out row d. • Remove the pairs where Ti is part of anchor text. 
This rule is often applied to instances like row e where the file type tends to be inside a clickable link to a media file. • The punctuation characters in Tp must also appear in Ti, unless they are quotation marks. The example in row f is ruled out because ‘/’ is not found in the pre-parenthesis text. Examples with translations in italic Function of the inparenthesis text a 其数值通常在1.4~3.0之间 (MacArthur, 1967) The range of its values is within 1.4~3.0 (MacArthur, 1967) to provide citation b 越航北京/胡志明 (VN901 15:20-22:30) Vietnam Airlines Beijing/Ho Chi Minh (VN901 15:20-22:30) flight information c 銷售台球桌(255-8FT) sale of pool table (255-8FT) product Id. d // 主程序 // void main ( void ) // main program // void main (void ) function declaration e 电影名称: 千年湖 (DVD) movie title: Thousand Year Lake (DVD) DVD is the file type f 水样 所 消耗 的 质量 ( g/L) mass consumed by water sample (g/L) measurement unit g 柔和保养面油 (Sensitive) gentle protective facial cream (Sensitive) to indicate the type of the cream h 美国九大搜索引擎评测第四章 (Ask Jeeves) Evaluation of Nine Main Search Engines in the US: Chapter 4 (Ask Jeeves) Chapter 4 is about Ask Jeeves Table 3: Other uses of parentheses 996 The instances in rows g and h cannot be eliminated by these simple rules, and are filtered only later, as we fail to discover a convincing word alignment. 3.2 Constraining term boundaries Similar to (Cao et al. 2007), we segmented the preparenthesis Chinese text and restrict the term boundary to be one of the segmentation boundaries. Since parenthetical translations are mostly translation of terms, it makes sense to further constrain the left boundary of the Chinese side to be a term boundary. Determining what should be counted as a term is a difficult task and there are not yet well-accepted solutions (Sag et al, 2003). We compiled an approximate term vocabulary by taking the top 5 million most frequent Chinese queries as according to a fully anonymized collection of search engine query logs. Given a Chinese sentence, we first identify all (possibly overlapping) sequences of words in the sentence that match one of the top-5M queries. A matching sequence is called a maximal match if it is not properly contained in another matching sequence. We then define the potential boundary positions to be the boundaries of maximal matches or words that are not covered by any of the top-5M queries. 3.3 Length-based trimming If there are numerous Chinese words preceding a pair of parentheses containing two English words, it is very unlikely for all but the right-most few Chinese words to be part of the translation of the English words. Including extremely long sequences as potential candidates introduces significantly more noise and makes word alignment harder than necessary. We therefore trimmed the pre-parenthesis text with a length-based constraint. The cut-off point is the first (counting from right to left) potential boundary position (see Sec. 3.2) such that C ≥ 2 E + K, where C is the length of the Chinese text, E is the length of the English text in the parentheses and K is a constant (we used K=6 in our experiments). The lengths C and E are measured in bytes, except when the English text is an abbreviation (in that case, E is multiplied by 5). 4 Word Alignment Word alignment is a well-studied topic in Machine Translation with many algorithms having been proposed (Brown et al, 1993; Och and Ney 2003). We used a modified version of one of the simplest word alignment algorithms called Competitive Linking (Melamed, 2000). 
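As a rough illustration of the Step-1 filters (Sec. 3.1) and the length-based trimming threshold (Sec. 3.3) just described, here is a small Python sketch. The function names, the punctuation set, and the way the cut-off is applied are our simplifications; in particular, the real system snaps the cut-off to a potential term boundary from Sec. 3.2 rather than to a raw byte offset.

```python
# Illustrative sketch of the Step-1 filters and the C >= 2E + K trimming
# threshold; not the authors' implementation.
import re

def passes_filters(tp, ti):
    """Apply the digit, embedded-English and punctuation filters to a
    (pre-parenthesis text Tp, in-parenthesis text Ti) candidate."""
    # Digits in Tp and Ti must concatenate to the same string.
    if ''.join(re.findall(r'\d', tp)) != ''.join(re.findall(r'\d', ti)):
        return False
    # Any English token appearing in Tp must also appear in Ti.
    for tok in re.findall(r'[A-Za-z]+', tp):
        if tok not in ti:
            return False
    # Punctuation in Tp (quotation marks aside) must also appear in Ti;
    # the character set here is an assumption.
    for ch in tp:
        if ch in '/:;,!?' and ch not in ti:
            return False
    return True

def trim_point(chinese_bytes_len, english_bytes_len, is_abbrev=False, k=6):
    """Approximate kept length (in bytes) of the pre-parenthesis text,
    following C >= 2E + K, with E scaled by 5 for abbreviations.
    The real system moves this cut-off to the nearest potential boundary."""
    e = english_bytes_len * (5 if is_abbrev else 1)
    return min(chinese_bytes_len, 2 * e + k)

if __name__ == '__main__':
    print(passes_filters('越航北京/胡志明', 'VN901 15:20-22:30'))        # False: digits differ
    print(passes_filters('消化性溃疡的症状往往与消化不良', 'indigestion'))  # True
    print(trim_point(60, 11))                                           # 28
```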
The algorithm assumes that there is a score associated with each pair of words in a bi-text. It sorts the word pairs in descending order of their scores, selecting pairs based on the resultant order. A pair of words is linked if neither of the two words was previously linked to any other word. The algorithm terminates when there are no more links to make. Tiedemann (2004) compared a variety of alignment algorithms and found Competitive Linking to have one of the highest precision scores. A disadvantage of Competitive Linking, however, is that the alignments are restricted to word-to-word alignments, which implies that multi-word expressions can only be partially linked at best. 4.1 Dealing with multi-word alignment We made a small change to Competitive Linking to allow a consecutive sequence of words on one side to be linked to the same word on the other side. Specifically, instead of requiring both ei and fj to have no previous linkages, we only require that at least one of them be unlinked and that (suppose ei is unlinked and fj is linked to ek) none of the words between ei and ek be linked to any word other than fj. 4.2 Link scoring We used φ2 (Gale and Church, 1991) as the link score in the modified competitive linking algorithm, although there are many other possible choices for the link scores, such as χ2 (Zhang and Vogel, 2005), log-likelihood ratio (Dunning, 1993) and discriminatively trained weights (Taskar et al, 2005). The φ2 statistic for a pair of words ei and fj is computed as φ2 = (ad − bc)2 / ((a + b)(a + c)(b + d)(c + d)), where a is the number of sentence pairs containing both ei and fj; a+b is the number of sentence pairs containing ei; a+c is the number of sentence pairs containing fj; d is the number of sentence pairs containing neither ei nor fj. 997
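The following Python sketch restates the φ2 score and basic Competitive Linking, together with the left-most-aligned-word rule from Section 2 for recovering the translated Chinese term. It is a simplified reading of the description above: the multi-word extension of Sec. 4.1, the position-based tie-breaking of Sec. 4.3, and the prefix/suffix scores of Sec. 4.4 are not shown.

```python
# Compact sketch of phi-squared scoring and Competitive Linking as described
# above; names and the toy scores in __main__ are illustrative.
def phi_squared(a, b, c, d):
    """phi^2 = (ad - bc)^2 / ((a+b)(a+c)(b+d)(c+d)); a..d are the usual
    2x2 co-occurrence counts over candidate pairs."""
    denom = (a + b) * (a + c) * (b + d) * (c + d)
    return 0.0 if denom == 0 else ((a * d - b * c) ** 2) / denom

def competitive_linking(en_words, zh_words, score, threshold=0.001):
    """Greedy one-to-one linking: sort word pairs by score and link a pair
    only if neither word is already linked; stop below the 0.001 cut-off."""
    pairs = sorted(((score(e, f), i, j)
                    for i, e in enumerate(en_words)
                    for j, f in enumerate(zh_words)), reverse=True)
    linked_e, linked_f, links = set(), set(), []
    for s, i, j in pairs:
        if s < threshold:
            break
        if i not in linked_e and j not in linked_f:
            linked_e.add(i); linked_f.add(j); links.append((i, j))
    return links

def translated_term(zh_words, links):
    """The extracted Chinese term spans from the leftmost linked Chinese
    word to the end of the pre-parenthesis text; no links means no pair."""
    if not links:
        return None
    left = min(j for _, j in links)
    return ''.join(zh_words[left:])

if __name__ == '__main__':
    toy = {('Channel', '信道'): 0.4, ('Spacing', '间隔'): 0.3}   # invented scores
    score = lambda e, f: toy.get((e, f), 0.0)
    zh = ['一个', '重要', '特性', '是', '信道', '间隔']
    links = competitive_linking(['Channel', 'Spacing'], zh, score)
    print(translated_term(zh, links))   # 信道间隔
```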
In addition to computing the φ2 scores between words, we also compute the φ2 scores of prefixes and suffixes of Chinese and English words. For both languages, the prefix of a word is defined as the first three bytes of the word and the suffix is defined as the last three bytes. Since we used UTF8 encoding, the first and last three bytes of a Chinese word, except in very rare cases, correspond to the first and last Chinese character of the word. Table 4 lists the English prefixes and suffixes that have the highest φ2 scores with the Chinese prefix 夏and suffix洛. Type Chinese English prefix 夏 sha, amo, cha, sum, haw, lav, lun, xia, xal, hnl, shy, eve, she, cfh, … suffix 洛 rlo, llo, ouh, low, ilo, owe, lol, lor, zlo, klo, gue, ude, vir, row, oro, olo, aro, ulo, ero, iro, rro, loh, lok, … Table 4: Example prefixes and suffixes with top φ2 In our modified version of the competitive linking algorithm, the link score of a pair of words is the sum of the φ2 scores of the words themselves, their prefixes and their suffixes. In addition to syllable-level correspondences in transliterations, the φ2 scores of prefixes and suffixes can also capture correlations in morphologically composed words. For example, the Chinese prefix 三 (three) has a relatively high φ2 score with the English prefix tri. Such scores enable word alignments to be made that may otherwise be missed. Consider the following text snippet: ...... 三 嗪 氟草胺 (triaziflam) The correct translation for triaziflam is三嗪氟草胺 . However, the Chinese term is segmented as 三 + 嗪 + 氟草胺. The association between三 (three) and triaziflam is very weak because 三is a very frequent word, whereas triaziflam is an extremely rare word. With the addition of the φ2 score between 三and tri, we were able to correctly establish the connection between triaziflam and 三. It turns out to be quite effective to assume prefixes and suffixes of words consist of three bytes, despite its apparent simplicity. The benefit of φ2 scores for prefixes and suffixes is not limited to morphemes that happen to be three bytes long. For example, the English morpheme “du-” corresponds to the Chinese character 二 (two). Although the φ2 between du and二 won’t be computed, we do find high φ2 scores between二 and due and between二 and dua. The three letter prefixes account for many of the words with the du- prefix. 5 Experimental Results We extracted from Chinese web pages about 1.58 billion unique sentences with parentheses that contain ASCII text. We removed duplicate sentences so that duplications of web documents will not skew the statistics. By applying the filtering algorithm in Sec. 3.1, we constructed a partially paral998 lel corpus with 126,612,447 candidate pairs (46,791,841 unique), which is about 8% of the number of sentences. Using the word alignment algorithm in Sec. 4, we extracted 26,753,972 translation pairs between 13,471,221 unique English terms and 11,577,206 unique Chinese terms. Parenthetical translations mined from the Web have mostly been evaluated by manual examination of a small sample of results (usually a few hundred entries) or in a Cross Lingual Information Retrieval setup. There does not yet exist a common evaluation data set. 5.1 Evaluation with Wikipedia Our first evaluation is based on translations in Wikipedia, which contains far more terminology and proper names than bilingual dictionaries. We extracted the titles of Chinese and English Wikipedia articles that are linked to each other and treated them as gold standard translations. There are 79,714 such pairs. 
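A small sketch of the prefix/suffix idea described above: in UTF-8, the first and last three bytes of a Chinese word usually correspond to its first and last character, so 3-byte prefixes and suffixes can be correlated with English prefixes and suffixes. The score tables below are stand-ins for the precomputed φ2 statistics; the numbers are invented for illustration.

```python
# Sketch of 3-byte prefixes/suffixes and the combined link score
# (word phi^2 + prefix phi^2 + suffix phi^2); tables are illustrative.
def prefix3(word):
    return word.encode('utf-8')[:3]

def suffix3(word):
    return word.encode('utf-8')[-3:]

def link_score(en, zh, word_phi2, affix_phi2):
    """Combined score used in the modified Competitive Linking: the phi^2
    of the word pair plus the phi^2 of their prefixes and of their suffixes."""
    return (word_phi2.get((en, zh), 0.0)
            + affix_phi2.get((prefix3(en), prefix3(zh)), 0.0)
            + affix_phi2.get((suffix3(en), suffix3(zh)), 0.0))

if __name__ == '__main__':
    # First/last three bytes of 夏皮罗 pick out 夏 and 罗.
    print(prefix3('夏皮罗').decode('utf-8'), suffix3('夏皮罗').decode('utf-8'))
    word = {}                                   # no whole-word evidence
    affix = {(b'sha', prefix3('夏')): 0.02,     # syllable-level evidence (invented)
             (b'iro', suffix3('罗')): 0.01}
    print(link_score('shapiro', '夏皮罗', word, affix))   # approximately 0.03
```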
We removed the following types of pairs because they are not translations or are not terms: • Pairs with identical strings. For example, both English and Chinese versions have an entry titled “.ch”; • Pairs where the English term begins with a digit, e.g., “245”, “300 BC”, “1991 in film”; • Pairs where the English term matches the regular expression ‘List of .*’, e.g., “List of birds”, “List of cinemas in Hong Kong”; • Pairs where the Chinese title does not have any non-ASCII code. For example, the English entry “Syncfusion” is linked to “.NET Framework” in the Chinese Wikipedia. The resulting data set contains 68,131 translation pairs between 62,581 Chinese terms and 67,613 English terms. Only a small percentage of terms have more than one translation. Whenever there is more than one translation, we randomly pick one as the answer key. For each Chinese and English word in the Wikipedia data, we first find whether there is a translation for the word in the extracted translation pairs. The Coverage of the Wikipedia data is measured by the percentage of words for which one or more translations are found. We then see whether our most frequent translation is an Exact Match of the answer key in the Wikipedia data. Coverage Exact Match Full 70.8% 36.4% -term 67.1% 34.8% -pre-suffix 67.6% 34.4% IBM 67.6% 31.2% LDC 10.8% 4.8% Table 5: Chinese to English Results Coverage Exact Match Full 59.6% 27.9% -term 59.6% 27.5% -pre-suffix 58.9% 27.4% IBM 52.4% 13.4% LDC 3.0% 1.4% Table 6: English to Chinese Results Table 5 and 6 show the Chinese-to-English and English-to-Chinese results for the following systems: Full refers to our system described in Sec. 3 and 4; -term is the system without the use of query logs to restrict potential term boundary positions (Sec. 3.2); -pre-suffix is the system without using the φ2 score of the prefixes and suffixes; IBM refers to a system where we substitute our word alignment algorithm with IBM Model 1 and Model 2 followed by the HMM alignment (Och and Ney 2003), which is a common configuration for the word alignment components in machine translations systems; LDC refers to the LDC2.0 English to Chinese bilingual dictionary with 161,117 translation pairs. It can be seen that the use of queries to constrain boundary positions and the addition of φ2 scores of prefixes/suffixes improve the percentage of Exact Match. The IBM Model tends to make many more alignments than Completive Linking. While this is often beneficial for machine translation systems, it is not very suitable for creating bilingual dictionaries, where precision is of paramount importance. The LDC dictionary was manually compiled from diverse resources within LDC and (mostly) from the Internet. Its coverage of Wikipedia data is extremely low, compared to our method. 
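For concreteness, here is one way the Coverage and Exact Match numbers reported below could be computed for the Chinese-to-English direction; the data structures are our own assumptions about how the gold Wikipedia pairs and the mined translations might be stored.

```python
# Sketch of the Coverage / Exact Match evaluation against Wikipedia title
# pairs; not the authors' evaluation code.
def evaluate(gold, mined):
    """gold: dict zh_term -> en_translation (one answer key per term).
    mined: dict zh_term -> list of en translations, most frequent first."""
    covered = exact = 0
    for zh, en_gold in gold.items():
        translations = mined.get(zh)
        if translations:
            covered += 1
            if translations[0].lower() == en_gold.lower():
                exact += 1
    n = len(gold)
    return covered / n, exact / n

if __name__ == '__main__':
    gold = {'下埃及': 'Lower Egypt', '泵引理': 'Pumping lemma'}
    mined = {'下埃及': ['Lower Egypt', 'lower egypt region']}
    cov, em = evaluate(gold, mined)
    print(f'Coverage {cov:.0%}  Exact Match {em:.0%}')   # Coverage 50%  Exact Match 50%
```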
999 English Wikipedia Translation Parenthetical Translation Pumping lemma 泵引理 引理1 Topic-prominent language 话题优先语言 突出性语言1 Yoido Full Gospel Church 汝矣岛纯福音教 会 全备福音教会1 First Bulgarian Empire 第一保加利亚帝 国 强大的保加利 亚帝国2 Vespid 黄蜂 针对境内胡蜂2 Ibrahim Rugova 易卜拉欣·鲁戈瓦 鲁戈瓦3 Jerry West 杰里·韦斯特 威斯特3 Nicky Butt 尼基·巴特 巴特3 Benito Mussolini 贝尼托·墨索里尼 墨索里尼3 Ecology of Hong Kong 香港生态 本文介绍的* Paracetamol 对乙酰氨基酚 扑热息痛* Thermidor 热月 必杀* Udo 独活 乌多 Public opinion 舆论 公众舆论 Michael Bay 麦可·贝 迈克尔·贝 Dagestan 达吉斯坦共和国 达吉斯坦 Battle of Leyte Gulf 莱特湾海战 莱伊特海湾战 役 Glock 格洛克手枪 格洛克 Ergonomics 人因工程学 工效学 Frank Sinatra 法兰·仙纳杜拉 法兰克辛纳屈 Zaragoza 萨拉戈萨省 萨拉戈萨 Komodo 科莫多岛 科摩多岛 Eli Vance 伊莱·万斯 伊莱‧凡斯博士 Manitoba 缅尼托巴 曼尼托巴省 Giant Bottlenose Whale 阿氏贝喙鲸 巨瓶鼻鲸 Exclusionary rule 证据排除法则 证据排除规则 Computer worm 蠕虫病毒 计算机蠕虫 Social network 社会性网络 社会网络 Glasgow School of Art 格拉斯哥艺术学 校 格拉斯哥艺术 学院 Dee Hock 狄伊·哈克 迪伊·霍克 Bondage 绑缚 束缚 The China Post 英文中国邮报 中国邮报 Rachel 拉结 瑞秋 John Nash 约翰·纳西 约翰·纳什 Hattusa 哈图沙 哈图萨 Bangladesh 孟加拉国 孟加拉 Table 7: A random sample of non-exact-matches 1the extracted translation is too short 2the extracted translation is too long 3the extracted translation contains only the last name *the extracted term is completely wrong. Note that Exact Match is a rather stringent criterion. Table 7 shows a random sample of extracted parenthetical translations that failed the Exact Match test. Only a small percentage of them are genuine errors. We nonetheless adopted this measure because it has the advantage of automated evaluation and our goal is mainly to compare the relative performances. To determine the upper bound of the coverage of our web data, for each Wikipedia English term we searched within the total set of available parenthesized text fragments (our English candidate set before filtering as by Step 1). We discovered 81% of the Wikipedia titles, which is approximately 10% above the coverage of our final output. This indicates a minor loss of recall because of mistakes made in filtering (Sec. 3.1) and/or word alignment. 5.2 Evaluation with term translation requests To evaluate the coverage of output produced by their method, Cao et al (2007) extracted English queries from the query log of a Chinese search engine. They assume that the reason why users typed the English queries in a Chinese search box is mostly to find out their Chinese translations. Examining our own Chinese query logs, however, the most-frequent English queries appear to be navigational queries instead of translation requests. We therefore used the following regular expression to identify queries that are unambiguously translation requests: /^[a-zA-Z ]* 的中文$/ where的中文means “’s Chinese”. This regular expression matched 1579 unique queries in the logs. We manually judged the translation for 200 of them. A small random sample of the 200 is shown in Table 8. The empty cells indicate that the English term is missing from our translation pairs. We use * to mark incorrect translations. When compared with the sample queries in (Cao et al., 2007), the queries in our sample seem to contain more phrasal words and technical terminology. It is interesting to see that even though parenthetical translations tend to be out-of-vocabulary words, as we have remarked in the introduction, the sheer size of the web means that occasionally translations of common words such as ‘use’ are sometimes included as well. 1000 We compared our results with translations obtained from Google and Yahoo’s translation services. 
The numbers of correct translations for the random sample of 200 queries are as follows: Systems Google Yahoo! Mined Mined+G Correct 115 84 116 135 Our system’s outputs (Mined) have the same accuracy as the Google Translate. Our outputs have results for 154 out of the 200 queries. The 46 missing results are considered incorrect. If we combine our results with Google Translate by looking up Google results for missing entries, the accuracy increases from 56% to 68% (Mined+G). If we treat the LDC Chinese-English Dictionary 2.0 as a translator, it only covers 20.5% of the 200 queries. 5.3 Evaluation with SMT The extracted translations may serve as training data for statistical machine translation systems. To evaluate their effectiveness for this purpose, we trained a baseline phrase-based SMT system (Koehn et al, 2003; Brants et al, 2007) with the FBIS Chinese-English parallel text (NIST, 2003). We then added the extracted translation pairs as additional parallel training corpus. This resulted in a 0.57 increase of BLEU score based on the test data in the 2006 NIST MT Evaluation Workshop. 6 Related Work Nagata et al. (2001) made the first proposal to mine translations from the web. Their work was concentrated on terminologies, and assumed the English terms were given as input. Wu and Chang (2007), Kwok et al. (2005) also employed search engines and assumed the English term given as input, but their focus was on name transliteration. It is difficult to build a truly large-scale translation lexicon this way because the English terms themselves may be hard to come by. Cao et al. (2007), like us, used a 300GB collection of web documents as input. They used supervised learning to build models that deal with phonetic transliterations and semantic translations separately. Our work relies on unsupervised learning and does not make a distinction between translations and transliterations. Furthermore, we are able to extract two orders of magnitude more translations from than (Cao et al., 2007). 7 Conclusion We presented a method to apply a word alignment algorithm on a partially parallel corpus to extract translation pairs from the web. Treating the translation extraction problem as a word alignment problem allowed us to generalize across instances involving different in-parenthesis terms. Our algorithm extends Competitive Linking to deal with multi-word alignments and takes advantage of word-internal correspondences between transliterated words or morphologically composed words. Finally, through our discussion of parallel Wikipedia topic titles as a gold standard, we presented the first evaluation of such an extraction system that went beyond manual judgments on small sized samples. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. buckingham palace 白金汉宫 chinadaily 中国日报 coo 首席运营官 diammonium sulfate emilio pucci 埃米里奥·普奇 finishing school 精修学校 gloria 格洛丽亚 horny 长角收割者* jam 詹姆 lean six sigma 精益六西格玛 meiosis 减数分裂 near miss 迹近错失 pachycephalosaurus 肿头龙 pops 持久性有机污染物 recreation vehicle 休闲露营车 shanghai ethylene cracker complex stenonychosaurus 细爪龙 theanine 茶氨酸 use 使用 with you all the time 回想和你在一起的日子里 Table 8: A small sample of manually judged query translations 1001 References T. Brants, A. Popat, P. Xu, F. Och and J. Dean, Large Language Models for Machine Translation, EMNLPCoNLL-2007. P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. 
G. Cao, J. Gao and J.Y. Nie. 2007. A system to mine large-scale bilingual dictionaries from monolingual Web pages, MT Summit, pp. 57-64. T. Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics 19, 1. W. Gale and K. Church. 1991. Identifying word correspondence in parallel text. In Proceedings of the DARPA NLP Workshop. L. Jiang, M. Zhou, L.F. Chien, C. Niu. 2007. Named Entity Translation with Web Mining and Transliteration. In Proc. of IJCAI-2007. pp. 1629-1634. P. Koehn, F. Och and D. Marcu, Statistical Phrasebased Translation, In Proc. of HLT-NAACL 2003. K.L. Kwok, P. Deng, N. Dinstl, H.L. Sun, W. Xu, P. Peng, and J. Doyon. 2005. CHINET: a Chinese name finder system for document triage. In Proceedings of 2005 International Conference on Intelligence Analysis. I.D. Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221–249. M. Nagata, T. Saito, and K. Suzuki. 2001. Using the Web as a bilingual dictionary. In Proc. of ACL 2001 DD-MT Workshop, pp.95-102. NIST. 2003. The NIST machine translation evaluations. http://www.nist.gov/speech/tests/mt/. F.J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. I.A. Sag, T. Baldwin, F. Bond, A. Copestake, and D. Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proc. of CICLing-2002, pp 1– 15, Mexico City, Mexico. B. Taskar, S. Lacoste-Julien, and D. Klein. 2005. A discriminative matching approach to word alignment. In Proc. of HLT/EMNLP-05. Vancouver, BC. J. Tiedemann. 2004. Word to word alignment strategies. In Proceedings of the 20th international Conference on Computational Linguistics. Geneva, Switzerland. J.C. Wu and J.S. Chang. 2007. Learning to Find English to Chinese Transliterations on the Web. In Proc. of EMNLP-CoNLL-2007. pp.996-1004. Prague, Czech Republic. Y. Zhang, S. Vogel. 2005 Competitive Grouping in Integrated Phrase Segmentation and Alignment Model. in Proceedings of ACL-05 Workshop on Building and Parallel Text. Ann Arbor, MI. 1002
2008
113
Proceedings of ACL-08: HLT, pages 1003–1011, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Soft Syntactic Constraints for Hierarchical Phrased-Based Translation Yuval Marton and Philip Resnik Department of Linguistics and the Laboratory for Computational Linguistics and Information Processing (CLIP) at the Institute for Advanced Computer Studies (UMIACS) University of Maryland, College Park, MD 20742-7505, USA {ymarton, resnik} @t umiacs.umd.edu Abstract In adding syntax to statistical MT, there is a tradeoff between taking advantage of linguistic analysis, versus allowing the model to exploit linguistically unmotivated mappings learned from parallel training data. A number of previous efforts have tackled this tradeoff by starting with a commitment to linguistically motivated analyses and then finding appropriate ways to soften that commitment. We present an approach that explores the tradeoff from the other direction, starting with a context-free translation model learned directly from aligned parallel text, and then adding soft constituent-level constraints based on parses of the source language. We obtain substantial improvements in performance for translation from Chinese and Arabic to English. 1 Introduction The statistical revolution in machine translation, beginning with (Brown et al., 1993) in the early 1990s, replaced an earlier era of detailed language analysis with automatic learning of shallow source-target mappings from large parallel corpora. Over the last several years, however, the pendulum has begun to swing back in the other direction, with researchers exploring a variety of statistical models that take advantage of source- and particularly target-language syntactic analysis (e.g. (Cowan et al., 2006; Zollmann and Venugopal, 2006; Marcu et al., 2006; Galley et al., 2006) and numerous others). Chiang (2005) distinguishes statistical MT approaches that are “syntactic” in a formal sense, going beyond the finite-state underpinnings of phrasebased models, from approaches that are syntactic in a linguistic sense, i.e. taking advantage of a priori language knowledge in the form of annotations derived from human linguistic analysis or treebanking.1 The two forms of syntactic modeling are doubly dissociable: current research frameworks include systems that are finite state but informed by linguistic annotation prior to training (e.g., (Koehn and Hoang, 2007; Birch et al., 2007; Hassan et al., 2007)), and also include systems employing contextfree models trained on parallel text without benefit of any prior linguistic analysis (e.g. (Chiang, 2005; Chiang, 2007; Wu, 1997)). Over time, however, there has been increasing movement in the direction of systems that are syntactic in both the formal and linguistic senses. In any such system, there is a natural tension between taking advantage of the linguistic analysis, versus allowing the model to use linguistically unmotivated mappings learned from parallel training data. The tradeoff often involves starting with a system that exploits rich linguistic representations and relaxing some part of it. For example, DeNeefe et al. (2007) begin with a tree-to-string model, using treebank-based target language analysis, and find it useful to modify it in order to accommodate useful “phrasal” chunks that are present in parallel training data but not licensed by linguistically motivated parses of the target language. Similarly, Cowan et al. 
(2006) focus on using syntactically rich representations of source and target parse trees, but they resort to phrase-based translation for modifiers within clauses. 1See (Lopez, to appear) for a comprehensive survey. 1003 Finding the right way to balance linguistic analysis with unconstrained data-driven modeling is clearly a key challenge. In this paper we address this challenge from a less explored direction. Rather than starting with a system based on linguistically motivated parse trees, we begin with a model that is syntactic only in the formal sense. We then introduce soft constraints that take source-language parses into account to a limited extent. Introducing syntactic constraints in this restricted way allows us to take maximal advantage of what can be learned from parallel training data, while effectively factoring in key aspects of linguistically motivated analysis. As a result, we obtain substantial improvements in performance for both Chinese-English and Arabic-English translation. In Section 2, we briefly review the Hiero statistical MT framework (Chiang, 2005, 2007), upon which this work builds, and we discuss Chiang's initial effort to incorporate soft source-language constituency constraints for Chinese-English translation. In Section 3, we suggest that an insufficiently fine-grained view of constituency constraints was responsible for Chiang's lack of strong results, and introduce finer-grained constraints into the model. Section 4 demonstrates the value of these constraints via substantial improvements in Chinese-English translation performance, and extends the approach to Arabic-English. Section 5 discusses the results, and Section 6 considers related work. Finally, we conclude in Section 7 with a summary and potential directions for future work. 2 Hierarchical Phrase-based Translation 2.1 Hiero Hiero (Chiang, 2005; Chiang, 2007) is a hierarchical phrase-based statistical MT framework that generalizes phrase-based models by permitting phrases with gaps. Formally, Hiero's translation model is a weighted synchronous context-free grammar. Hiero employs a generalization of the standard non-hierarchical phrase extraction approach in order to acquire the synchronous rules of the grammar directly from word-aligned parallel text. Rules have the form X →⟨¯e, ¯f⟩, where ¯e and ¯f are phrases containing terminal symbols (words) and possibly co-indexed instances of the nonterminal symbol X.2 Associated with each rule is a set of translation model features, φi( ¯f, ¯e); for example, one intuitively natural feature of a rule is the phrase translation (log-)probability φ( ¯f, ¯e) = log p(¯e| ¯f), directly analogous to the corresponding feature in non-hierarchical phrase-based models like Pharaoh (Koehn et al., 2003). In addition to this phrase translation probability feature, Hiero's feature set includes the inverse phrase translation probability log p( ¯f|¯e), lexical weights lexwt( ¯f|¯e) and lexwt(¯e| ¯f), which are estimates of translation quality based on word-level correspondences (Koehn et al., 2003), and a rule penalty allowing the model to learn a preference for longer or shorter derivations; see (Chiang, 2007) for details. These features are combined using a log-linear model, with each synchronous rule contributing Σi λiφi( ¯f, ¯e) (1) to the total log-probability of a derived hypothesis. Each λi is a weight associated with feature φi, and these weights are typically optimized using minimum error rate training (Och, 2003).
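Expression (1) is just a weighted sum of rule-local feature values. The following sketch shows that computation for a single rule; the feature names and the weight values are illustrative, not Hiero's actual identifiers or tuned weights.

```python
# Minimal sketch of the log-linear rule score in expression (1).
import math

def rule_score(features, weights):
    """sum_i lambda_i * phi_i(f, e) for one synchronous rule."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

if __name__ == '__main__':
    features = {
        'log_p_e_given_f': math.log(0.25),   # phrase translation probability
        'log_p_f_given_e': math.log(0.10),   # inverse phrase translation probability
        'lexwt_e_given_f': math.log(0.30),   # lexical weights
        'lexwt_f_given_e': math.log(0.20),
        'rule_penalty': 1.0,
    }
    weights = {'log_p_e_given_f': 0.2, 'log_p_f_given_e': 0.2,
               'lexwt_e_given_f': 0.15, 'lexwt_f_given_e': 0.15,
               'rule_penalty': -0.3}          # invented weights for illustration
    print(rule_score(features, weights))
```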
2.2 Soft Syntactic Constraints When looking at Hiero rules, which are acquired automatically by the model from parallel text, it is easy to find many cases that seem to respect linguistically motivated boundaries. For example, X →⟨jingtian X1, X1 this year⟩, seems to capture the use of jingtian/this year as a temporal modifier when building linguistic constituents such as noun phrases (the election this year) or verb phrases (voted in the primary this year). However, it is important to observe that nothing in the Hiero framework actually requires nonterminal symbols to cover linguistically sensible constituents, and in practice they frequently do not.3 2This is slightly simplified: Chiang’s original formulation of Hiero, which we use, has two nonterminal symbols, X and S. The latter is used only in two special “glue” rules that permit complete trees to be constructed via concatenation of subtrees when there is no better way to combine them. 3For example, this rule could just as well be applied with X1 covering the “phrase” submitted and to produce non-constituent substring submitted and this year in a hypothesis like The budget was submitted and this year cuts are likely. 1004 Chiang (2005) conjectured that there might be value in allowing the Hiero model to favor hypotheses for which the synchronous derivation respects linguistically motivated source-language constituency boundaries, as identified using a parser. He tested this conjecture by adding a soft constraint in the form of a “constituency feature”: if a synchronous rule X →⟨¯e, ¯f⟩is used in a derivation, and the span of ¯f is a constituent in the sourcelanguage parse, then a term λc is added to the model score in expression (1).4 Unlike a hard constraint, which would simply prevent the application of rules violating syntactic boundaries, using the feature to introduce a soft constraint allows the model to boost the “goodness” for a rule if it is constitent with the source language constituency analysis, and to leave its score unchanged otherwise. The weight λc, like all other λi, is set via minimum error rate training, and that optimization process determines empirically the extent to which the constituency feature should be trusted. Figure 1 illustrates the way the constituency feature worked, treating English as the source language for the sake of readability. In this example, λc would be added to the hypothesis score for any rule used in the hypothesis whose source side spanned the minister, a speech, yesterday, gave a speech yesterday, or the minister gave a speech yesterday. A rule translating, say, minister gave a as a unit would receive no such boost. Chiang tested the constituency feature for Chinese-English translation, and obtained no significant improvement on the test set. The idea then seems essentially to have been abandoned; it does not appear in later discussions (Chiang, 2007). 3 Soft Syntactic Constraints, Revisited On the face of it, there are any number of possible reasons Chiang’s (2005) soft constraint did not work – including, for example, practical issues like the quality of the Chinese parses.5 However, we focus here on two conceptual issues underlying his use of source language syntactic constituents. 4Formally, φc( ¯f, ¯e) is defined as a binary feature, with value 1 if ¯f spans a source constituent and 0 otherwise. In the latter case λcφc( ¯f, ¯e) = 0 and the score in expression (1) is unaffected. 5In fact, this turns out not to be the issue; see Section 4. 
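The constituency feature is simple to state operationally: a rule fires the feature exactly when its source span coincides with some constituent span in the source parse, and only then is λc added to the hypothesis score. The sketch below uses word-offset spans for the example of Figure 1; the span indices and the value of λc are illustrative.

```python
# Sketch of Chiang's (2005) undifferentiated constituency feature.
def constituency_feature(rule_span, constituent_spans):
    """Return 1 if rule_span coincides with a source constituent, else 0."""
    return 1 if rule_span in constituent_spans else 0

if __name__ == '__main__':
    # "the minister gave a speech yesterday":
    # NP (0,2), NP (3,5), ADVP (5,6), VP (2,6), S (0,6) -- spans assumed for illustration.
    constituents = {(0, 2), (3, 5), (5, 6), (2, 6), (0, 6)}
    lambda_c = 0.5                                # illustrative weight
    for span in [(0, 2), (1, 4)]:                 # "the minister" vs "minister gave a"
        bonus = lambda_c * constituency_feature(span, constituents)
        print(span, '->', bonus)                  # (0, 2) -> 0.5,  (1, 4) -> 0.0
```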
Figure 1: Illustration of Chiang's (2005) syntactic constituency feature, which does not distinguish among constituent types. First, the constituency feature treats all syntactic constituent types equally, making no distinction among them. For any given language pair, however, there might be some source constituents that tend to map naturally to the target language as units, and others that do not (Fox, 2002; Eisner, 2003). Moreover, a parser may tend to be more accurate for some constituents than for others. Second, the Chiang (2005) constituency feature gives a rule additional credit when the rule's source side overlaps exactly with a source-side syntactic constituent. Logically, however, it might make sense not just to give a rule X →⟨¯e, ¯f⟩ extra credit when ¯f matches a constituent, but to incur a cost when ¯f violates a constituent boundary. Using the example in Figure 1, we might want to penalize hypotheses containing rules where ¯f is the minister gave a (and other cases, such as minister gave, minister gave a, and so forth).6 These observations suggest a finer-grained approach to the constituency feature idea, retaining the idea of soft constraints, but applying them using various soft-constraint constituency features. Our first observation argues for distinguishing among constituent types (NP, VP, etc.). Our second observation argues for distinguishing the benefit of matching constituents from the cost of crossing constituent boundaries. 6This accomplishes coverage of the logically complete set of possibilities, which include not only ¯f matching a constituent exactly or crossing its boundaries, but also ¯f being properly contained within the constituent span, properly containing it, or being outside it entirely. Whenever these latter possibilities occur, ¯f will exactly match or cross the boundaries of some other constituent. 1005 We therefore define a space of new features as the cross product {CP, IP, NP, VP, . . .} × {=, +}, where = and + signify matching and crossing boundaries, respectively. For example, φNP= would denote a binary feature that matches whenever the span of ¯f exactly covers an NP in the source-side parse tree, resulting in λNP= being added to the hypothesis score (expression (1)). Similarly, φVP+ would denote a binary feature that matches whenever the span of ¯f crosses a VP boundary in the parse tree, resulting in λVP+ being subtracted from the hypothesis score.7 For readability from this point forward, we will omit φ from the notation and refer to features such as NP= (which one could read as "NP match"), VP+ (which one could read as "VP crossing"), etc. In addition to these individual features, we define three more variants: • For each constituent type, e.g. NP, we define a feature NP_ that ties the weights of NP= and NP+. If NP= matches a rule, the model score is incremented by λNP_, and if NP+ matches, the model score is decremented by the same quantity. • For each constituent type, e.g. NP, we define a version of the model, NP2, in which NP= and NP+ are both included as features, with separate weights λNP= and λNP+. • We define a set of "standard" linguistic labels containing {CP, IP, NP, VP, PP, ADJP, ADVP, QP, LCP, DNP} and excluding other labels such as PRN (parentheses), FRAG (fragment), etc.8 We define feature XP= as the disjunction of {CP=, IP=, . . ., DNP=}; i.e. its value equals 1 for a rule if the span of ¯f exactly covers a constituent having any of the standard labels.
The definitions of XP+, XP_, and XP2 are analogous. • Similarly, since Chiang's original constituency feature can be viewed as a disjunctive "all-labels=" feature, we also defined "all-labels+", "all-labels2", and "all-labels_" analogously (a schematic sketch of these match/cross features follows Table 1 below). 7Formally, λVP+ simply contributes to the sum in expression (1), as with all features in the model, but weight optimization using minimum error rate training should, and does, automatically assign this feature a negative weight. 8We map SBAR and S labels in Arabic parses to CP and IP, respectively, consistent with the Chinese parses. We map Chinese DP labels to NP. DNP and LCP appear only in Chinese. We ran no ADJP experiment in Chinese, because this label virtually always spans only one token in the Chinese parses. 4 Experiments We carried out MT experiments for translation from Chinese to English and from Arabic to English, using a descendant of Chiang's Hiero system. Language models were built using the SRI Language Modeling Toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing (Chen and Goodman, 1998). Word-level alignments were obtained using GIZA++ (Och and Ney, 2000). The baseline model in both languages used the feature set described in Section 2; for the Chinese baseline we also included a rule-based number translation feature (Chiang, 2007). In order to compute syntactic features, we analyzed source sentences using state of the art, tree-bank trained constituency parsers ((Huang et al., 2008) for Chinese, and the Stanford parser v.2007-08-19 for Arabic (Klein and Manning, 2003a; Klein and Manning, 2003b)). In addition to the baseline condition, and baseline plus Chiang's (2005) original constituency feature, experimental conditions augmented the baseline with additional features as described in Section 3. All models were optimized and tested using the BLEU metric (Papineni et al., 2002) with the NIST-implemented ("shortest") effective reference length, on lowercased, tokenized outputs/references. Statistical significance of difference from the baseline BLEU score was measured by using paired bootstrap re-sampling (Koehn, 2004).9 4.1 Chinese-English For the Chinese-English translation experiments, we trained the translation model on the corpora in Table 1, totalling approximately 2.1 million sentence pairs after GIZA++ filtering for length ratio. Chinese text was segmented using the Stanford segmenter (Tseng et al., 2005). 9Whenever we use the word "significant", we mean "statistically significant" (at p < .05 unless specified otherwise). 1006 LDC ID Description LDC2002E18 Xinhua Ch/Eng Par News V1 beta LDC2003E07 Ch/En Treebank Par Corpus LDC2005T10 Ch/En News Mag Par Txt (Sinorama) LDC2003E14 FBIS Multilanguage Txts LDC2005T06 Ch News Translation Txt Pt 1 LDC2004T08 HK Par Text (only HKNews) Table 1: Training corpora for Chinese-English translation
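The sketch below (referred to above) spells out one way the finer-grained match/cross features could be computed for a rule's source span; the "crosses a boundary" test encodes the partial-overlap reading suggested by footnote 6, and the details are our own reconstruction rather than the authors' code.

```python
# Sketch of the {label} x {=, +} features; spans are (start, end) word
# offsets, and the parse constituents would come from the source parser.
STANDARD_LABELS = ['CP', 'IP', 'NP', 'VP', 'PP', 'ADJP', 'ADVP', 'QP', 'LCP', 'DNP']

def crosses(span, const):
    """True if span straddles exactly one boundary of const, i.e. it neither
    contains const, is contained by it, nor lies entirely outside it."""
    s, e = span
    cs, ce = const
    return (s < cs < e < ce) or (cs < s < ce < e)

def syntactic_features(rule_span, parse_constituents):
    """parse_constituents: list of (label, (start, end)).
    Returns e.g. {'NP=': 1, 'VP+': 1, ...} for the given source span."""
    feats = {}
    for label, const in parse_constituents:
        if rule_span == const:
            feats[label + '='] = 1
        elif crosses(rule_span, const):
            feats[label + '+'] = 1
    # XP= fires if any standard-label constituent is matched exactly.
    if any(feats.get(lab + '=') for lab in STANDARD_LABELS):
        feats['XP='] = 1
    return feats

if __name__ == '__main__':
    parse = [('NP', (0, 2)), ('VP', (2, 6)), ('NP', (3, 5)), ('IP', (0, 6))]
    print(syntactic_features((0, 2), parse))   # {'NP=': 1, 'XP=': 1}
    print(syntactic_features((1, 4), parse))   # crosses NP and VP boundaries
```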
However, we find that several of the finer-grained constraints (IP=, VP=, VP+, QP+, and NP=) achieve statistically significant improvements over baseline (up to .74 BLEU), and the latter three also improve significantly on the undifferentiated constituency feature. By combining multiple finer-grained syntactic features, we obtain significant improvements of up to 1.65 BLEU points (NP_, VP2, IP2, all-labels_, and XP+). We also obtained further gains using combinations of features that had performed well; e.g., condition IP2.VP2.NP_ augments the baseline features with IP2 and VP2 (i.e. IP=, IP+, VP= and VP+), and NP_ (tying weights of NP= and NP+; see Section 3). Since component features in those combinations were informed by individual-feature performance on the test set, we tested the best performing conditions from MT06 on a new test set, NIST MT08. NP= and VP+ yielded significant improvements of up to 1.53 BLEU. Combination conditions replicated the pattern of results from MT06, including the same increasing order of gains, with improvements up to 1.11 BLEU. 4.2 Arabic-English For Arabic-English translation, we used the training corpora in Table 3, approximately 100,000 senChinese MT06 MT08 Baseline .2624 .2064 Chiang-05 .2634 .2065 PP= .2607 DNP+ .2621 CP+ .2622 AP+ .2633 AP= .2634 DNP= .2640 IP+ .2643 PP+ .2644 LCP= .2649 LCP+ .2654 CP= .2657 NP+ .2662 QP= .2674^+ .2071 IP= .2680*+ .2061 VP= .2683* .2072 VP+ .2693**++ .2109*+ QP+ .2694**++ .2091 NP= .2698**++ .2217**++ Multiple / conflated features: QP2 .2614 NP2 .2621 XP= .2630 XP2 .2633 all-labels+ .2633 VP_ .2637 QP_ .2641 NP.VP.IP=.QP.VP+ .2646 IP_ .2647 IP2+VP2 .2649 all-labels2 .2673*.2070 NP_ .2690**++ .2101^+ IP2.VP2.NP_ .2699**++ .2105*+ VP2 .2722**++ .2123**++ all-labels_ .2731**++ .2125*++ IP2 .2750**++ .2132**+ XP+ .2789**++ .2175**++ Table 2: Chinese-English results. *: Significantly better than baseline (p < .05). **: Significantly better than baseline (p < .01). ^: Almost significantly better than baseline (p < .075). +: Significantly better than Chiang05 (p < .05). ++: Significantly better than Chiang-05 (p < .01). -: Almost significantly better than Chiang-05 (p < .075). 1007 LDC ID Description LDC2004T17 Ar News Trans Txt Pt 1 LDC2004T18 Ar/En Par News Pt 1 LDC2005E46 Ar/En Treebank En Translation LDC2004E72 eTIRR Ar/En News Txt Table 3: Training corpora for Arabic-English translation tence pairs after GIZA++ length-ratio filtering. We trained a trigram language model using the English side of this training set, plus the English Gigaword v2 AFP and Gigaword v1 Xinhua corpora. Development and minimum error rate training were done using the NIST MT02 set. Table 4 presents our results. We first tested on on the NIST MT03 and MT06 (nist-text) sets. On MT03, the original, undifferentiated constituency feature did not improve over baseline. Two individual finer-grained features (PP+ and AdvP=) yielded statistically significant gains up to .42 BLEU points, and feature combinations AP2, XP2 and all-labels2 yielded significant gains up to 1.03 BLEU points. XP2 and all-labels2 also improved significantly on the undifferentiated constituency feature, by .72 and 1.11 BLEU points, respectively. For MT06, Chiang’s original feature improved the baseline significantly — this is a new result using his feature, since he did not experiment with Arabic — as did our our IP=, PP=, and VP= conditions. 
Adding individual features PP+ and AdvP= yielded significant improvements up to 1.4 BLEU points over baseline, and in fact the improvement for individual feature AdvP= over Chiang’s undifferentiated constituency feature approaches significance (p < .075). More important, several conditions combining features achieved statistically significant improvements over baseline of up 1.94 BLEU points: XP2, IP2, IP, VP=.PP+.AdvP=, AP2, PP+.AdvP=, and AdvP2. Of these, AdvP2 is also a significant improvement over the undifferentiated constituency feature (Chiang-05), with p < .01. As we did for Chinese, we tested the best-performing models on a new test set, NIST MT08. Consistent patterns reappeared: improvements over the baseline up to 1.69 BLEU (p < .01), with AdvP2 again in the lead (also outperforming the undifferentiated constituency feature, p < .05). Arabic MT03 MT06 MT08 Baseline .4795 .3571 .3571 Chiang-05 .4787 .3679** .3678** VP+ .4802 .3481 AP+ .4856 .3495 IP+ .4818 .3516 CP= .4815 .3523 NP= .4847 .3537 NP+ .4800 .3548 AP= .4797 .3569 AdvP+ .4852 .3572 CP+ .4758 .3578 IP= .4811 .3636** .3647** PP= .4801 .3651** .3662** VP= .4803 .3655** .3694** PP+ .4837** .3707** .3700** AdvP= .4823** .3711**.3717** Multiple / conflated features: XP+ .4771 .3522 all-labels2 .4898**+ .3536 .3572 all-labels_ .4828 .3548 VP2 .4826 .3552 NP2 .4832 .3561 AdvP.VP.PP.IP= .4826 .3571 VP_ .4825 .3604 all-labels+ .4825 .3600 XP2 .4859**+ .3605^ .3613** IP2 .4793 .3611* .3593 IP_ .4791 .3635* .3648** XP= .4808 .3659** .3704**+ VP=.PP+.AdvP= .4833** .3677** .3718** AP2 .4840** .3692** .3719** PP+.AdvP= .4777 .3708** .3680** AdvP2 .4803 .3765**++ .3740**+ Table 4: Arabic-English Experiments. Results are sorted by MT06 BLEU score. *: Better than baseline (p < .05). **: Better than baseline (p < .01). +: Better than Chiang-05 (p < .05). ++: Better than Chiang-05 (p < .01). -: Almost significantly better than Chiang-05 (p < .075) 1008 5 Discussion The results in Section 4 demonstrate, to our knowledge for the first time, that significant and sometimes substantial gains over baseline can be obtained by incorporating soft syntactic constraints into Hiero’s translation model. Within language, we also see considerable consistency across multiple test sets, in terms of which constraints tend to help most. Furthermore, our results provide some insight into why the original approach may have failed to yield a positive outcome. For Chinese, we found that when we defined finer-grained versions of the exact-match features, there was value for some constituency types in biasing the model to favor matching the source language parse. Moreover, we found that there was significant value in allowing the model to be sensitive to violations (crossing boundaries) of source parses. These results confirm that parser quality was not the limitation in the original work (or at least not the only limitation), since in our experiments the parser was held constant. Looking at combinations of new features, some “double-feature” combinations (VP2, IP2) achieved large gains, although note that more is not necessarily better: combinations of more features did not yield better scores, and some did not yield any gain at all. No conflated feature reached significance, but it is not the case that all conflated features are worse than their same-constituent “double-feature” counterparts. We found no simple correlation between finer-grained feature scores (and/or boundary condition type) and combination or conflation scores. 
Since some combinations seem to cancel individual contributions, we can conclude that the higher the number of participant features (of the kinds described here), the more likely a cancellation effect is; therefore, a “double-feature” combination is more likely to yield higher gains than a combination containing more features. We also investigated whether non-canonical linguistic constituency labels such as PRN, FRAG, UCP and VSB introduce “noise”, by means of the XP features — the XP= feature is, in fact, simply the undifferentiated constituency feature, but sensitive only to “standard” XPs. Although performance of XP=, XP2 and all-labels+ were similar to that of the undifferentiated constituency feature, XP+ achieved the highest gain. Intuitively, this seems plausible: the feature says, at least for Chinese, that a translation hypothesis should incur a penalty if it is translating a substring as a unit when that substring is not a canonical source constituent. Having obtained positive results with Chinese, we explored the extent to which the approach might improve translation using a very different source language. The approach on Arabic-English translation yielded large BLEU gains over baseline, as well as significant improvements over the undifferentiated constituency feature. Comparing the two sets of experiments, we see that there are definitely language-specific variations in the value of syntactic constraints; for example, AdvP, the top performer in Arabic, cannot possibly perform well for Chinese, since in our parses the AdvP constituents rarely include more than a single word. At the same time, some IP and VP variants seem to do generally well in both languages. This makes sense, since — at least for these language pairs and perhaps more generally — clauses and verb phrases seem to correspond often on the source and target side. We found it more surprising that no NP variant yielded much gain in Arabic; this question will be taken up in future work. 6 Related Work Space limitations preclude a thorough review of work attempting to navigate the tradeoff between using language analyzers and exploiting unconstrained data-driven modeling, although the recent literature is full of variety and promising approaches. We limit ourselves here to several approaches that seem most closely related. Among approaches using parser-based syntactic models, several researchers have attempted to reduce the strictness of syntactic constraints in order to better exploit shallow correspondences in parallel training data. Our introduction has already briefly noted Cowan et al. (2006), who relax parse-tree-based alignment to permit alignment of non-constituent subphrases on the source side, and translate modifiers using a separate phrase-based model, and DeNeefe et al. (2007), who modify syntax-based extraction and binarize trees (following (Wang et al., 2007b)) to improve phrasal cov1009 erage. Similarly, Marcu et al. (2006) relax their syntax-based system by rewriting target-side parse trees on the fly in order to avoid the loss of “nonsyntactifiable” phrase pairs. Setiawan et al. (2007) employ a “function-word centered syntax-based approach”, with synchronous CFG and extended ITG models for reordering phrases, and relax syntactic constraints by only using a small number function words (approximated by high-frequency words) to guide the phrase-order inversion. 
Zollman and Venugopal (2006) start with a target language parser and use it to provide constraints on the extraction of hierarchical phrase pairs. Unlike Hiero, their translation model uses a full range of named nonterminal symbols in the synchronous grammar. As an alternative way to relax strict parser-based constituency requirements, they explore the use of phrases spanning generalized, categorial-style constituents in the parse tree, e.g. type NP/NN denotes a phrase like the great that lacks only a head noun (say, wall) in order to comprise an NP. In addition, various researchers have explored the use of hard linguistic constraints on the source side, e.g. via “chunking” noun phrases and translating them separately (Owczarzak et al., 2006), or by performing hard reorderings of source parse trees in order to more closely approximate target-language word order (Wang et al., 2007a; Collins et al., 2005). Finally, another soft-constraint approach that can also be viewed as coming from the data-driven side, adding syntax, is taken by Riezler and Maxwell (2006). They use LFG dependency trees on both source and target sides, and relax syntactic constraints by adding a “fragment grammar” for unparsable chunks. They decode using Pharaoh, augmented with their own log-linear features (such as p(esnippet|fsnippet) and its converse), side by side to “traditional” lexical weights. Riezler and Maxwell (2006) do not achieve higher BLEU scores, but do score better according to human grammaticality judgments for in-coverage cases. 7 Conclusion When hierarchical phrase-based translation was introduced by Chiang (2005), it represented a new and successful way to incorporate syntax into statistical MT, allowing the model to exploit non-local dependencies and lexically sensitive reordering without requiring linguistically motivated parsing of either the source or target language. An approach to incorporating parser-based constituents in the model was explored briefly, treating syntactic constituency as a soft constraint, with negative results. In this paper, we returned to the idea of linguistically motivated soft constraints, and we demonstrated that they can, in fact, lead to substantial improvements in translation performance when integrated into the Hiero framework. We accomplished this using constraints that not only distinguish among constituent types, but which also distinguish between the benefit of matching the source parse bracketing, versus the cost of using phrases that cross relevant bracketing boundaries. We demonstrated improvements for ChineseEnglish translation, and succeed in obtaining substantial gains for Arabic-English translation, as well. Our results contribute to a growing body of work on combining monolingually based, linguistically motivated syntactic analysis with translation models that are closely tied to observable parallel training data. Consistent with other researchers, we find that “syntactic constituency” may be too coarse a notion by itself; rather, there is value in taking a finergrained approach, and in allowing the model to decide how far to trust each element of the syntactic analysis as part of the system’s optimization process. Acknowledgments This work was supported in part by DARPA prime agreement HR0011-06-2-0001. 
The authors would like to thank David Chiang and Adam Lopez for making their source code available; the Stanford Parser team and Mary Harper for making their parsers available; David Chiang, Amy Weinberg, and CLIP Laboratory colleagues, particularly Chris Dyer, Adam Lopez, and Smaranda Muresan, for discussion and invaluable assistance. 1010 References Alexandra Birch, Miles Osborne, and Philipp Koehn. 2007. CCG supertags in factored statistical machine translation. In Proceedings of the ACL Workshop on Statistical Machine Translation 2007. P.F. Brown, S.A.D. Pietra, V.J.D. Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation. Computational Linguistics, 19(2):263–313. S. F. Chen and J. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Tech. Report TR-10-98, Comp. Sci. Group, Harvard U. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL-05, pages 263–270. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL-05. Brooke Cowan, Ivona Kucerova, and Michael Collins. 2006. A discriminative model for tree-to-tree translation. In Proc. EMNLP. S DeNeefe, K. Knight, W. Wang, and D. Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proceedings of EMNLP-CoNLL. J. Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In ACL Companion Vol. Heidi Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. EMNLP 2002. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING/ACL-06. H. Hassan, K. Sima’an, and A. Way. 2007. Integrating supertags into phrase-based statistical machine translation. In Proc. ACL-07, pages 288–295. Zhongqiang Huang, Denis Filimonov, and Mary Harper. 2008. Accuracy enhancements for mandarin parsing. Tech. report, University of Maryland. Dan Klein and Christopher D. Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of ACL-03, pages 423–430. Dan Klein and Christopher D. Manning. 2003b. Fast exact inference with a factored model for natural language parsing. Advances in Neural Information Processing Systems, 15(NIPS 2002):3–10. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proc. EMNLP+CoNLL, pages 868– 876, Prague. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, pages 127–133. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP. Adam Lopez. (to appear). Statistical machine translation. ACM Computing Surveys. Earlier version: A Survey of Statistical Machine Translation. U. of Maryland, UMIACS tech. report 2006-47. Apr 2007. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. EMNLP, pages 44–52. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the ACL, pages 440–447. GIZA++. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the ACL, pages 160–167. K. 
Owczarzak, B. Mellebeek, D. Groves, J. Van Genabith, and A. Way. 2006. Wrapper syntax for example-based machine translation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas, pages 148–155. Kishore Papineni, Salim Roukos, Todd Ward, John Henderson, and Florence Reeder. 2002. Corpusbased comprehensive and diagnostic MT evaluation: Initial Arabic, Chinese, French, and Spanish results. In Proceedings of the Human Language Technology Conference (ACL’2002), pages 124–127, San Diego, CA. Stefan Riezler and John Maxwell. 2006. Grammatical machine translation. In Proc. HLT-NAACL, New York, NY. Hendra Setiawan, Min-Yen Kan, and Haizhou Li. 2007. Ordering phrases with function words. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712–719. Andreas Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, volume 2, pages 901–904. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing. Chao Wang, Michael Collins, and Phillip Koehn. 2007a. Chinese syntactic reordering for statistical machine translation. In Proceedings of EMNLP. Wei Wang, Kevin Knight, and Daniel Marcu. 2007b. Binarizing syntax trees to improve syntax-based machine translation accuracy. In Proc. EMNLP+CoNLL 2007. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377–404. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the SMT Workshop, HLT-NAACL. 1011
Proceedings of ACL-08: HLT, pages 1012–1020, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Generalizing Word Lattice Translation Christopher Dyer∗, Smaranda Muresan, Philip Resnik∗ Laboratory for Computational Linguistics and Information Processing Institute for Advanced Computer Studies ∗Department of Linguistics University of Maryland College Park, MD 20742, USA redpony, smara, resnik AT umd.edu Abstract Word lattice decoding has proven useful in spoken language translation; we argue that it provides a compelling model for translation of text genres, as well. We show that prior work in translating lattices using finite state techniques can be naturally extended to more expressive synchronous context-free grammarbased models. Additionally, we resolve a significant complication that non-linear word lattice inputs introduce in reordering models. Our experiments evaluating the approach demonstrate substantial gains for ChineseEnglish and Arabic-English translation. 1 Introduction When Brown and colleagues introduced statistical machine translation in the early 1990s, their key insight – harkening back to Weaver in the late 1940s – was that translation could be viewed as an instance of noisy channel modeling (Brown et al., 1990). They introduced a now standard decomposition that distinguishes modeling sentences in the target language (language models) from modeling the relationship between source and target language (translation models). Today, virtually all statistical translation systems seek the best hypothesis e for a given input f in the source language, according to ˆe = arg max e Pr(e|f) (1) An exception is the translation of speech recognition output, where the acoustic signal generally underdetermines the choice of source word sequence f. There, Bertoldi and others have recently found that, rather than translating a single-best transcription f, it is advantageous to allow the MT decoder to consider all possibilities for f by encoding the alternatives compactly as a confusion network or lattice (Bertoldi et al., 2007; Bertoldi and Federico, 2005; Koehn et al., 2007). Why, however, should this advantage be limited to translation from spoken input? Even for text, there are often multiple ways to derive a sequence of words from the input string. Segmentation of Chinese, decompounding in German, morphological analysis for Arabic — across a wide range of source languages, ambiguity in the input gives rise to multiple possibilities for the source word sequence. Nonetheless, state-of-the-art systems commonly identify a single analysis f during a preprocessing step, and decode according to the decision rule in (1). In this paper, we go beyond speech translation by showing that lattice decoding can also yield improvements for text by preserving alternative analyses of the input. In addition, we generalize lattice decoding algorithmically, extending it for the first time to hierarchical phrase-based translation (Chiang, 2005; Chiang, 2007). 
Formally, the approach we take can be thought of as a “noisier channel”, where an observed signal o gives rise to a set of source-language strings f′ ∈ F(o) and we seek ˆe = arg max e max f′∈F(o) Pr(e, f′|o) (2) = arg max e max f′∈F(o) Pr(e)Pr(f′|e, o) (3) = arg max e max f′∈F(o) Pr(e)Pr(f′|e)Pr(o|f′).(4) Following Och and Ney (2002), we use the maximum entropy framework (Berger et al., 1996) to directly model the posterior Pr(e, f′|o) with parameters tuned to minimize a loss function representing 1012 the quality only of the resulting translations. Thus, we make use of the following general decision rule: ˆe = arg max e max f′∈F(o) M X m=1 λmφm(e, f′, o) (5) In principle, one could decode according to (2) simply by enumerating and decoding each f′ ∈ F(o); however, for any interestingly large F(o) this will be impractical. We assume that for many interesting cases of F(o), there will be identical substrings that express the same content, and therefore a lattice representation is appropriate. In Section 2, we discuss decoding with this model in general, and then show how two classes of translation models can easily be adapted for lattice translation; we achieve a unified treatment of finite-state and hierarchical phrase-based models by treating lattices as a subcase of weighted finite state automata (FSAs). In Section 3, we identify and solve issues that arise with reordering in non-linear FSAs, i.e. FSAs where every path does not pass through every node. Section 4 presents two applications of the noisier channel paradigm, demonstrating substantial performance gains in Arabic-English and Chinese-English translation. In Section 5 we discuss relevant prior work, and we conclude in Section 6. 2 Decoding Most statistical machine translation systems model translational equivalence using either finite state transducers or synchronous context free grammars (Lopez, to appear 2008). In this section we discuss the issues associated with adapting decoders from both classes of formalism to process word lattices. The first decoder we present is a SCFG-based decoder similar to the one described in Chiang (2007). The second is a phrase-based decoder implementing the model of Koehn et al. (2003). 2.1 Word lattices A word lattice G = ⟨V, E⟩is a directed acyclic graph that formally is a weighted finite state automaton (FSA). We further stipulate that exactly one node has no outgoing edges and is designated the ‘end node’. Figure 1 illustrates three classes of word lattices. 0 1 x 2 a y 3 b c 0 1 a x ε 2 b 3 d c 0 1 a 2 b 3 c Figure 1: Three examples of word lattices: (a) sentence, (b) confusion network, and (c) non-linear word lattice. A word lattice is useful for our purposes because it permits any finite set of strings to be represented and allows for substrings common to multiple members of the set to be represented with a single piece of structure. Additionally, all paths from one node to another form an equivalence class representing, in our model, alternative expressions of the same underlying communicative intent. For translation, we will find it useful to encode G in a chart based on a topological ordering of the nodes, as described by Cheppalier et al. (1999). The nodes in the lattices shown in Figure 1 are labeled according to an appropriate numbering. The chart-representation of the graph is a triple of 2-dimensional matrices ⟨F, p, R⟩, which can be constructed from the numbered graph. Fi,j is the word label of the jth transition leaving node i. The corresponding transition cost is pi,j. 
Ri,j is the node number of the node on the right side of the jth transition leaving node i. Note that Ri,j > i for all i, j. Table 1 shows the word lattice from Figure 1 represented in matrix form as ⟨F, p, R⟩. 0 1 2 a 1 1 b 1 2 c 1 3 a 1 3 1 b 1 2 c 1 2 3 x 1 3 1 d 1 2 3 ϵ 1 3 1 x 1 2 1 y 1 2 b 1 2 3 a 1 2 2 c 1 2 3 Table 1: Topologically ordered chart encoding of the three lattices in Figure 1. Each cell ij in this table is a triple ⟨Fij, pij, Rij⟩ 1013 2.2 Parsing word lattices Chiang (2005) introduced hierarchical phrase-based translation models, which are formally based on synchronous context-free grammars (SCFGs). Translation proceeds by parsing the input using the source language side of the grammar, simultaneously building a tree on the target language side via the target side of the synchronized rules. Since decoding is equivalent to parsing, we begin by presenting a parser for word lattices, which is a generalization of a CKY parser for lattices given in Cheppalier et al. (1999). Following Goodman (1999), we present our lattice parser as a deductive proof system in Figure 2. The parser consists of two kinds of items, the first with the form [X →α • β, i, j] representing rules that have yet to be completed and span node i to node j. The other items have the form [X, i, j] and indicate that non-terminal X spans [i, j]. As with sentence parsing, the goal is a deduction that covers the spans of the entire input lattice [S, 0, |V | −1]. The three inference rules are: 1) match a terminal symbol and move across one edge in the lattice 2) move across an ϵ-edge without advancing the dot in an incomplete rule 3) advance the dot across a nonterminal symbol given appropriate antecedents. 2.3 From parsing to MT decoding A target language model is necessary to generate fluent output. To do so, the grammar is intersected with an n-gram LM. To mitigate the effects of the combinatorial explosion of non-terminals the LM intersection entails, we use cube-pruning to only consider the most promising expansions (Chiang, 2007). 2.4 Lattice translation with FSTs A second important class of translation models includes those based formally on FSTs. We present a description of the decoding process for a word lattice using a representative FST model, the phrase-based translation model described in Koehn et al. (2003). Phrase-based models translate a foreign sentence f into the target language e by breaking up f into a sequence of phrases f I 1, where each phrase fi can contain one or more contiguous words and is translated into a target phrase ei of one or more contiguous words. Each word in f must be translated exactly once. To generalize this model to word lattices, it is necessary to choose both a path through the lattice and a partitioning of the sentence this induces into a sequence of phrases f I 1. Although the number of source phrases in a word lattice can be exponential in the number of nodes, enumerating the possible translations of every span in a lattice is in practice tractable, as described by Bertoldi et al. (2007). 2.5 Decoding with phrase-based models We adapted the Moses phrase-based decoder to translate word lattices (Koehn et al., 2007). The unmodified decoder builds a translation hypothesis from left to right by selecting a range of untranslated words and adding translations of this phrase to the end of the hypothesis being extended. When no untranslated words remain, the translation process is complete. 
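Section 2.4 notes that enumerating the possible translations of every span in a lattice is tractable in practice. As a concrete illustration, the sketch below (a hedged sketch rather than the actual implementation; the epsilon marker and the maximum phrase length are assumptions) walks the chart encoding ⟨F, p, R⟩ of Table 1 and collects every source phrase together with its start node, end node, and accumulated edge cost, so that translation options can be looked up for each lattice span before search begins.

# Hedged sketch: enumerate source phrases readable along lattice paths, using
# the chart encoding of Table 1. F[i][j] is the word on the j-th edge leaving
# node i, p[i][j] its cost, and R[i][j] the node that edge leads to.

EPSILON = "eps"   # assumed representation of epsilon edges

def enumerate_source_phrases(F, p, R, max_len=7):
    """Return (start_node, end_node, words, cost) for every path that spells
    out at most max_len words; max_len is an assumed limit."""
    phrases = []
    for start in range(len(F)):
        stack = [(start, (), 1.0)]            # depth-first walk from `start`
        while stack:
            node, words, cost = stack.pop()
            if words:
                phrases.append((start, node, words, cost))
            if len(words) == max_len or node >= len(F):
                continue
            for j, word in enumerate(F[node]):
                new_words = words if word == EPSILON else words + (word,)
                stack.append((R[node][j], new_words, cost * p[node][j]))
    return phrases

A depth-first walk suffices here because the lattice is acyclic, so the enumeration terminates without any cycle checking.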
The word lattice decoder works similarly, only now the decoder keeps track not of the words that have been covered, but of the nodes, given a topological ordering of the nodes. For example, assuming the third lattice in Figure 1 is our input, if the edge with word a is translated, this will cover two untranslated nodes [0,1] in the coverage vector, even though it is only a single word. As with sentencebased decoding, a translation hypothesis is complete when all nodes in the input lattice are covered. 2.6 Non-monotonicity and unreachable nodes The changes described thus far are straightforward adaptations of the underlying phrase-based sentence decoder; however, dealing properly with non-monotonic decoding of word lattices introduces some minor complexity that is worth mentioning. In the sentence decoder, any translation of any span of untranslated words is an allowable extension of a partial translation hypothesis, provided that the coverage vectors of the extension and the partial hypothesis do not intersect. In a non-linear word lattice, a further constraint must be enforced ensuring that there is always a path from the starting node of the translation extension’s source to the node representing the nearest right edge of the already-translated material, as well as a path from the ending node of the translation extension’s source to future translated spans. Figure 3 illustrates the problem. If [0,1] is translated, the decoder must not consider translating 1014 Axioms: [X →•γ, i, i] : w (X w −→⟨γ, α⟩) ∈G, i ∈[0, |V | −2] Inference rules: [X →α • Fj,kβ, i, j] : w [X →αFj,k • β, i, Rj,k] : w × pj,k [X →α • β, i, j] : w [X →α • β, i, Rj,k] : w × pj,k Fj,k = ϵ [Z →α • Xβ, i, k] : w1 [X →γ•, k, j] : w2 [Z →αX • β, i, j] : w1 × w2 Goal state: [S →γ•, 0, |V | −1] Figure 2: Word lattice parser for an unrestricted context free grammar G. 0 1 a 2 3 x Figure 3: The span [0, 3] has one inconsistent covering, [0, 1] + [2, 3]. [2,3] as a possible extension of this hypothesis since there is no path from node 1 to node 2 and therefore the span [1,2] would never be covered. In the parser that forms the basis of the hierarchical decoder described in Section 2.3, no such restriction is necessary since grammar rules are processed in a strictly left-to-right fashion without any skips. 3 Distortion in a non-linear word lattice In both hierarchical and phrase-based models, the distance between words in the source sentence is used to limit where in the target sequence their translations will be generated. In phrase based translation, distortion is modeled explicitly. Models that support non-monotonic decoding generally include a distortion cost, such as |ai −bi−1 −1| where ai is the starting position of the foreign phrase fi and bi−1 is the ending position of phrase fi−1 (Koehn et al., 2003). The intuition behind this model is that since most translation is monotonic, the cost of skipping ahead or back in the source should be proportional to the number of words that are skipped. Additionally, a maximum distortion limit is used to restrict 0 1 a 2 x 3 b y 4 c Figure 4: Distance-based distortion problem. What is the distance between node 4 to node 0? the size of the search space. In linear word lattices, such as confusion networks, the distance metric used for the distortion penalty and for distortion limits is well defined; however, in a non-linear word lattice, it poses the problem illustrated in Figure 4. 
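One practical way to handle both the reachability constraint of Section 2.6 and the distance question raised by Figure 4 is to precompute path information over the lattice before decoding starts. The sketch below is illustrative only: edge counts are used as path lengths, ϵ-edges are not treated specially, and the node representation follows the chart encoding above. It runs a standard Floyd-Warshall pass; a finite entry dist[a][b] means node b is reachable from node a, and its value gives a path-independent measure of how far apart the two nodes are.

# Hedged sketch: all-pairs shortest path lengths (in edges) between lattice
# nodes. R[i] lists, for node i, the target node of each outgoing edge,
# following the chart encoding of Table 1.

INF = float("inf")

def all_pairs_shortest_paths(R, num_nodes):
    dist = [[0 if a == b else INF for b in range(num_nodes)]
            for a in range(num_nodes)]
    for i in range(len(R)):
        for nxt in R[i]:
            dist[i][nxt] = min(dist[i][nxt], 1)      # one edge between adjacent nodes
    for k in range(num_nodes):                        # Floyd-Warshall relaxation
        for a in range(num_nodes):
            for b in range(num_nodes):
                if dist[a][k] + dist[k][b] < dist[a][b]:
                    dist[a][b] = dist[a][k] + dist[k][b]
    return dist

Because the matrix depends only on the input lattice, it can be filled once per sentence and consulted in constant time during search.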
Assuming the leftto-right decoding strategy described in the previous section, if c is generated by the first target word, the distortion penalty associated with “skipping ahead” should be either 3 or 2, depending on what path is chosen to translate the span [0,3]. In large lattices, where a single arc may span many nodes, the possible distances may vary quite substantially depending on what path is ultimately taken, and handling this properly therefore crucial. Although hierarchical phrase-based models do not model distortion explicitly, Chiang (2007) suggests using a span length limit to restrict the window in which reordering can take place.1 The decoder enforces the constraint that a synchronous rule learned from the training data (the only mechanism by which reordering can be introduced) can span 1This is done to reduce the size of the search space and because hierarchical phrase-based translation models are inaccurate models of long-distance distortion. 1015 Distance metric MT05 MT06 Difference 0.2943 0.2786 Difference+LexRO 0.2974 0.2890 ShortestP 0.2993 0.2865 ShortestP+LexRO 0.3072 0.2992 Table 2: Effect of distance metric on phrase-based model performance. maximally Λ words in f. Like the distortion cost used in phrase-based systems, Λ is also poorly defined for non-linear lattices. Since we want a distance metric that will restrict as few local reorderings as possible on any path, we use a function ξ(a, b) returning the length of the shortest path between nodes a and b. Since this function is not dependent on the exact path chosen, it can be computed in advance of decoding using an allpairs shortest path algorithm (Cormen et al., 1989). 3.1 Experimental results We tested the effect of the distance metric on translation quality using Chinese word segmentation lattices (Section 4.1, below) using both a hierarchical and phrase-based system modified to translate word lattices. We compared the shortest-path distance metric with a baseline which uses the difference in node number as the distortion distance. For an additional datapoint, we added a lexicalized reordering model that models the probability of each phrase pair appearing in three different orientations (swap, monotone, other) in the training corpus (Koehn et al., 2005). Table 2 summarizes the results of the phrasebased systems. On both test sets, the shortest path metric improved the BLEU scores. As expected, the lexicalized reordering model improved translation quality over the baseline; however, the improvement was more substantial in the model that used the shortest-path distance metric (which was already a higher baseline). Table 3 summarizes the results of our experiment comparing the performance of two distance metrics to determine whether a rule has exceeded the decoder’s span limit. The pattern is the same, showing a clear increase in BLEU for the shortest path metric over the baseline. Distance metric MT05 MT06 Difference 0.3063 0.2957 ShortestP 0.3176 0.3043 Table 3: Effect of distance metric on hierarchical model performance. 4 Exploiting Source Language Alternatives Chinese word segmentation. A necessary first step in translating Chinese using standard models is segmenting the character stream into a sequence of words. Word-lattice translation offers two possible improvements over the conventional approach. 
First, a lattice may represent multiple alternative segmentations of a sentence; input represented in this way will be more robust to errors made by the segmenter.2 Second, different segmentation granularities may be more or less optimal for translating different spans. By encoding alternatives in the input in a word lattice, the decision as to which granularity to use for a given span can be resolved during decoding rather than when constructing the system. Figure 5 illustrates a lattice based on three different segmentations. Arabic morphological variation. Arabic orthography is problematic for lexical and phrase-based MT approaches since a large class of functional elements (prepositions, pronouns, tense markers, conjunctions, definiteness markers) are attached to their host stems. Thus, while the training data may provide good evidence for the translation of a particular stem by itself, the same stem may not be attested when attached to a particular conjunction. The general solution taken is to take the best possible morphological analysis of the text (it is often ambiguous whether a piece of a word is part of the stem or merely a neighboring functional element), and then make a subset of the bound functional elements in the language into freestanding tokens. Figure 6 illustrates the unsegmented Arabic surface form as well as the morphological segmentation variant we made use of. The limitation of this approach is that as the amount and variety of training data increases, the optimal segmentation strategy changes: more aggressive segmentation results 2The segmentation process is ambiguous, even for native speakers of Chinese. 1016 0 1 硬 2 硬质 4 硬质合金 质 3 合 合金 金 5 号 6 号称 称 7 " 8 工 9 工业 业 10 牙 11 牙齿 齿 12 " Figure 5: Sample Chinese segmentation lattice using three segmentations. in fewer OOV tokens, but automatic evaluation metrics indicate lower translation quality, presumably because the smaller units are being translated less idiomatically (Habash and Sadat, 2006). Lattices allow the decoder to make decisions about what granularity of segmentation to use subsententially. 4.1 Chinese Word Segmentation Experiments In our experiments we used two state-of-the-art Chinese word segmenters: one developed at Harbin Institute of Technology (Zhao et al., 2001), and one developed at Stanford University (Tseng et al., 2005). In addition, we used a character-based segmentation. In the remaining of this paper, we use cs for character segmentation, hs for Harbin segmentation and ss for Stanford segmentation. We built two types of lattices: one that combines the Harbin and Stanford segmenters (hs+ss), and one which uses all three segmentations (hs+ss+cs). Data and Settings. The systems used in these experiments were trained on the NIST MT06 Eval corpus without the UN data (approximatively 950K sentences). The corpus was analyzed with the three segmentation schemes. For the systems using word lattices, the training data contained the versions of the corpus appropriate for the segmentation schemes used in the input. That is, for the hs+ss condition, the training data consisted of two copies of the corpus: one segmented with the Harbin segmenter and the other with the Stanford segmenter.3 A trigram English language model with modified Kneser-Ney smoothing (Kneser and Ney, 1995) was trained on the English side of our training data as well as portions of the Gigaword v2 English Corpus, and was used for all experiments. 
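Returning to the segmentation lattices illustrated in Figure 5, one straightforward way to combine the outputs of several segmenters for the same sentence is to index lattice nodes by character offsets and add one edge per word from each segmentation, which matches the node numbering used in Figure 5. The sketch below is an assumed construction given for illustration, not necessarily the exact procedure used to build the hs+ss and hs+ss+cs inputs.

# Hedged sketch: merge several segmentations of one sentence into a word
# lattice whose nodes are character offsets. Each segmentation is a list of
# words whose characters concatenate to the same original string.

def build_segmentation_lattice(segmentations):
    """Return {start_offset: set of (word, end_offset)} edges."""
    edges = {}
    for seg in segmentations:              # e.g. the cs, hs and ss outputs
        offset = 0
        for word in seg:
            edges.setdefault(offset, set()).add((word, offset + len(word)))
            offset += len(word)
    return edges

# Hypothetical example with three segmentations of the string "abcd":
# build_segmentation_lattice([["a", "b", "c", "d"], ["ab", "cd"], ["abcd"]])
# -> {0: {("a", 1), ("ab", 2), ("abcd", 4)}, 1: {("b", 2)},
#     2: {("c", 3), ("cd", 4)}, 3: {("d", 4)}}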
The NIST MT03 test set was used as a development set for optimizing the interpolation weights using minimum error rate train3The corpora were word-aligned independently and then concatenated for rule extraction. ing (Och, 2003). The testing was done on the NIST 2005 and 2006 evaluation sets (MT05, MT06). Experimental results: Word-lattices improve translation quality. We used both a phrase-based translation model, decoded using our modified version of Moses (Koehn et al., 2007), and a hierarchical phrase-based translation model, using our modified version of Hiero (Chiang, 2005; Chiang, 2007). These two translation model types illustrate the applicability of the theoretical contributions presented in Section 2 and Section 3. We observed that the coverage of named entities (NEs) in our baseline systems was rather poor. Since names in Chinese can be composed of relatively long strings of characters that cannot be translated individually, when generating the segmentation lattices that included cs arcs, we avoided segmenting NEs of type PERSON, as identified using a Chinese NE tagger (Florian et al., 2004). The results are summarized in Table 4. We see that using word lattices improves BLEU scores both in the phrase-based model and hierarchical model as compared to the single-best segmentation approach. All results using our word-lattice decoding for the hierarchical models (hs+ss and hs+ss+cs) are significantly better than the best segmentation (ss).4 For the phrase-based model, we obtain significant gains using our word-lattice decoder using all three segmentations on MT05. The other results, while better than the best segmentation (hs) by at least 0.3 BLEU points, are not statistically significant. Even if the results are not statistically significant for MT06, there is a high decrease in OOV items when using word-lattices. For example, for MT06 the number of OOVs in the hs translation is 484. 4Significance testing was carried out using the bootstrap resampling technique advocated by Koehn (2004). Unless otherwise noted, all reported improvements are signficant at at least p < 0.05. 1017 surface wxlAl ftrp AlSyf kAn mEZm AlDjyj AlAElAmy m&ydA llEmAd . segmented w- xlAl ftrp Al- Syf kAn mEZm Al- Djyj Al- AElAmy m&ydA l- Al- EmAd . (English) During the summer period , most media buzz was supportive of the general . Figure 6: Example of Arabic morphological segmentation. The number of OOVs decreased by 19% for hs+ss and by 75% for hs+ss+cs. As mentioned in Section 3, using lexical reordering for word-lattices further improves the translation quality. 4.2 Arabic Morphology Experiments We created lattices from an unsegmented version of the Arabic test data and generated alternative arcs where clitics as well as the definiteness marker and the future tense marker were segmented into tokens. We used the Buckwalter morphological analyzer and disambiguated the analysis using a simple unigram model trained on the Penn Arabic Treebank. Data and Settings. For these experiments we made use of the entire NIST MT08 training data, although for training of the system, we used a subsampling method proposed by Kishore Papineni that aims to include training sentences containing ngrams in the test data (personal communication). For all systems, we used a 5-gram English LM trained on 250M words of English training data. The NIST MT03 test set was used as development set for optimizing the interpolation weights using MER training (Och, 2003). 
Evaluation was carried out on the NIST 2005 and 2006 evaluation sets (MT05, MT06). Experimental results: Word-lattices improve translation quality. Results are presented in Table 5. Using word-lattices to combine the surface forms with morphologically segmented forms significantly improves BLEU scores both in the phrase-based and hierarchical models. 5 Prior work Lattice Translation. The ‘noisier channel’ model of machine translation has been widely used in spoken language translation as an alternative to selecting the single-best hypothesis from an ASR system and translating it (Ney, 1999; Casacuberta et al., 2004; Zhang et al., 2005; Saleem et al., 2005; Matusov et al., 2005; Bertoldi et al., 2007; Mathias, 2007). Several authors (e.g. Saleem et al. (2005) and Bertoldi et al. (2007)) comment directly on the impracticality of using n-best lists to translate speech. Although translation is fundamentally a nonmonotonic relationship between most language pairs, reordering has tended to be a secondary concern to the researchers who have worked on lattice translation. Matusov et al. (2005) decodes monotonically and then uses a finite state reordering model on the single-best translation, along the lines of Bangalore and Riccardi (2000). Mathias (2007) and Saleem et al. (2004) only report results of monotonic decoding for the systems they describe. Bertoldi et al. (2007) solve the problem by requiring that their input be in the format of a confusion network, which enables the standard distortion penalty to be used. Finally, the system described by Zhang et al. (2005) uses IBM Model 4 features to translate lattices. For the distortion model, they use the maximum probability value over all possible paths in the lattice for each jump considered, which is similar to the approach we have taken. Mathias and Byrne (2006) build a phrase-based translation system as a cascaded series of FSTs which can accept any input FSA; however, the only reordering that is permitted is the swapping of two adjacent phrases. Applications of source lattices outside of the domain of spoken language translation have been far more limited. Costa-juss`a and Fonollosa (2007) take steps in this direction by using lattices to encode multiple reorderings of the source language. Dyer (2007) uses confusion networks to encode morphological alternatives in Czech-English translation, and Xu et al. (2005) takes an approach very similar to ours for Chinese-English translation and encodes multiple word segmentations in a lattice, but which is decoded with a conventionally trained translation model and without a sophisticated reordering model. 
The Arabic-English morphological segmentation lattices are similar in spirit to backoff translation models (Yang and Kirchhoff, 2006), which consider alternative morphological segmentations and simpli1018 MT05 MT06 (Source Type) BLEU BLEU cs 0.2833 0.2694 hs 0.2905 0.2835 ss 0.2894 0.2801 hs+ss 0.2938 0.2870 hs+ss+cs 0.2993 0.2865 hs+ss+cs.lexRo 0.3072 0.2992 MT05 MT06 (Source Type) BLEU BLEU cs 0.2904 0.2821 hs 0.3008 0.2907 ss 0.3071 0.2964 hs+ss 0.3132 0.3006 hs+ss+cs 0.3176 0.3043 (a) Phrase-based model (b) Hierarchical model Table 4: Chinese Word Segmentation Results MT05 MT06 (Source Type) BLEU BLEU surface 0.4682 0.3512 morph 0.5087 0.3841 morph+surface 0.5225 0.4008 MT05 MT06 (Source Type) BLEU BLEU surface 0.5253 0.3991 morph 0.5377 0.4180 morph+surface 0.5453 0.4287 (a) Phrase-based model (b) Hierarchical model Table 5: Arabic Morphology Results fications of a surface token when the surface token can not be translated. Parsing and formal language theory. There has been considerable work on parsing word lattices, much of it for language modeling applications in speech recognition (Ney, 1991; Cheppalier and Rajman, 1998). Additionally, Grune and Jacobs (2008) refines an algorithm originally due to Bar-Hillel for intersecting an arbitrary FSA (of which word lattices are a subset) with a CFG. Klein and Manning (2001) formalize parsing as a hypergraph search problem and derive an O(n3) parser for lattices. 6 Conclusions We have achieved substantial gains in translation performance by decoding compact representations of alternative source language analyses, rather than single-best representations. Our results generalize previous gains for lattice translation of spoken language input, and we have further generalized the approach by introducing an algorithm for lattice decoding using a hierarchical phrase-based model. Additionally, we have shown that although word lattices complicate modeling of word reordering, a simple heuristic offers good performance and enables many standard distortion models to be used directly with lattice input. Acknowledgments This research was supported by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR0011-06-2-0001. The authors wish to thank Niyu Ge for the Chinese named-entity analysis, Pi-Chuan Chang for her assistance with the Stanford Chinese segmenter, and Tie-Jun Zhao and Congui Zhu for making the Harbin Chinese segmenter available to us. References S. Bangalore and G. Riccardi. 2000. Finite state models for lexical reordering in spoken language translation. In Proc. Int. Conf. on Spoken Language Processing, pages 422–425, Beijing, China. A.L. Berger, V.J. Della Pietra, and S.A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Comput. Linguist., 22(1):39–71. N. Bertoldi and M. Federico. 2005. A new decoder for spoken language translation based on confusion networks. In Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop. N. Bertoldi, R. Zens, and M. Federico. 2007. Speech translation by confusion network decoding. In Proceeding of ICASSP 2007, Honolulu, Hawaii, April. P.F. Brown, J. Cocke, S. Della-Pietra, V.J. Della-Pietra, F. Jelinek, J.D. Lafferty, R.L. Mercer, and P.S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16:79–85. F. Casacuberta, H. Ney, F. J. Och, E. Vidal, J. M. Vilar, S. Barrachina, I. Garcia-Varea, D. Llorens, C. Mar1019 tinez, S. Molau, F. Nevado, M. Pastor, D. Pico, A. Sanchis, and C. Tillmann. 2004. 
Some approaches to statistical and finite-state speech-to-speech translation. Computer Speech & Language, 18(1):25–47, January. J. Cheppalier and M. Rajman. 1998. A generalized CYK algorithm for parsing stochastic CFG. In Proceedings of the Workshop on Tabulation in Parsing and Deduction (TAPD98), pages 133–137, Paris, France. J. Cheppalier, M. Rajman, R. Aragues, and A. Rozenknop. 1999. Lattice parsing for speech recognition. In Sixth Conference sur le Traitement Automatique du Langage Naturel (TANL’99), pages 95–104. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 263–270. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. T.H. Cormen, C. E. Leiserson, and R. L. Rivest, 1989. Introduction to Algorithms, pages 558–565. The MIT Press and McGraw-Hill Book Company. M. Costa-juss`a and J.A.R. Fonollosa. 2007. Analysis of statistical and morphological classes to generate weighted reordering hypotheses on a statistical machine translation system. In Proc. of the Second Workshop on SMT, pages 171–176, Prague. C. Dyer. 2007. Noisier channel translation: translation from morphologically complex languages. In Proceedings of the Second Workshop on Statistical Machine Translation, Prague, June. R. Florian, H. Hassan, A. Ittycheriah, H. Jing, N. Kambhatla, X. Luo, N Nicolov, and S Roukos. 2004. A statistical model for multilingual entity detection and tracking. In Proc. of HLT-NAACL 2004, pages 1–8. J. Goodman. 1999. Semiring parsing. Computational Linguistics, 25:573–605. D. Grune and C.J. H. Jacobs. 2008. Parsing as intersection. Parsing Techniques, pages 425–442. N. Habash and F. Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In Proc. of NAACL, New York. D. Klein and C. D. Manning. 2001. Parsing with hypergraphs. In Proceedings of IWPT 2001. R. Kneser and H. Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of IEEE Internation Conference on Acoustics, Speech, and Signal Processing, pages 181–184. P. Koehn, F.J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL 2003, pages 48–54. P. Koehn, A. Axelrod, A. Birch Mayne, C. CallisonBurch, M. Osborne, and D. Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In Proc. of IWSLT 2005, Pittsburgh. P. Koehn, H. Hoang, A. Birch Mayne, C. CallisonBurch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computation Linguistics (ACL), Demonstration Session, pages 177–180, Jun. P. Koehn. 2004. Statistical significance tests for machine translation evluation. In Proc. of the 2004 Conf. on EMNLP, pages 388–395. A. Lopez. to appear 2008. Statistical machine translation. ACM Computing Surveys. L. Mathias and W. Byrne. 2006. Statistical phrasebased speech translation. In IEEE Conf. on Acoustics, Speech and Signal Processing. L. Mathias. 2007. Statistical Machine Translation and Automatic Speech Recognition under Uncertainty. Ph.D. thesis, The Johns Hopkins University. E. Matusov, S. Kanthak, and H. Ney. 2005. On the integration of speech recognition and statistical machine translation. In Proceedings of Interspeech 2005. H. Ney. 1991. 
Dynamic programming parsing for context-free grammars in continuous speech recognition. IEEE Transactions on Signal Processing, 39(2). H. Ney. 1999. Speech translation: Coupling of recognition and translation. In Proc. of ICASSP, pages 517– 520, Phoenix. F. Och and H. Ney. 2002. Discriminitive training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting of the ACL, pages 295–302. S. Saleem, S.-C. Jou, S. Vogel, and T. Schulz. 2005. Using word lattice information for a tighter coupling in speech translation systems. In Proc. of ICSLP, Jeju Island, Korea. H. Tseng, P. Chang, G. Andrew, D. Jurafsky, and C. Manning. 2005. A conditional random field word segmenter. In Fourth SIGHAN Workshop on Chinese Language Processing. J. Xu, E. Matusov, R. Zens, and H. Ney. 2005. Integrated Chinese word segmentation in statistical machine translation. In Proc. of IWSLT 2005, Pittsburgh. M. Yang and K. Kirchhoff. 2006. Phrase-based backoff models for machine translation of highly inflected languages. In Proceedings of the EACL 2006, pages 41–48. R. Zhang, G. Kikui, H. Yamamoto, and W. Lo. 2005. A decoding algorithm for word lattice translation in speech translation. In Proceedings of the 2005 International Workshop on Spoken Language Translation. T. Zhao, L. Yajuan, Y. Muyun, and Y. Hao. 2001. Increasing accuracy of chinese segmentation with strategy of multi-step processing. In J Chinese Information Processing (Chinese Version), volume 1, pages 13–18. 1020
Proceedings of ACL-08: HLT, pages 1021–1029, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Combining Multiple Resources to Improve SMT-based Paraphrasing Model∗ Shiqi Zhao1, Cheng Niu2, Ming Zhou2, Ting Liu1, Sheng Li1 1Harbin Institute of Technology, Harbin, China {zhaosq,tliu,lisheng}@ir.hit.edu.cn 2Microsoft Research Asia, Beijing, China {chengniu,mingzhou}@microsoft.com Abstract This paper proposes a novel method that exploits multiple resources to improve statistical machine translation (SMT) based paraphrasing. In detail, a phrasal paraphrase table and a feature function are derived from each resource, which are then combined in a log-linear SMT model for sentence-level paraphrase generation. Experimental results show that the SMT-based paraphrasing model can be enhanced using multiple resources. The phrase-level and sentence-level precision of the generated paraphrases are above 60% and 55%, respectively. In addition, the contribution of each resource is evaluated, which indicates that all the exploited resources are useful for generating paraphrases of high quality. 1 Introduction Paraphrases are alternative ways of conveying the same meaning. Paraphrases are important in many natural language processing (NLP) applications, such as machine translation (MT), question answering (QA), information extraction (IE), multidocument summarization (MDS), and natural language generation (NLG). This paper addresses the problem of sentencelevel paraphrase generation, which aims at generating paraphrases for input sentences. An example of sentence-level paraphrases can be seen below: S1: The table was set up in the carriage shed. S2: The table was laid under the cart-shed. ∗This research was finished while the first author worked as an intern in Microsoft Research Asia. Paraphrase generation can be viewed as monolingual machine translation (Quirk et al., 2004), which typically includes a translation model and a language model. The translation model can be trained using monolingual parallel corpora. However, acquiring such corpora is not easy. Hence, data sparseness is a key problem for the SMT-based paraphrasing. On the other hand, various methods have been presented to extract phrasal paraphrases from different resources, which include thesauri, monolingual corpora, bilingual corpora, and the web. However, little work has been focused on using the extracted phrasal paraphrases in sentence-level paraphrase generation. In this paper, we exploit multiple resources to improve the SMT-based paraphrase generation. In detail, six kinds of resources are utilized, including: (1) an automatically constructed thesaurus, (2) a monolingual parallel corpus from novels, (3) a monolingual comparable corpus from news articles, (4) a bilingual phrase table, (5) word definitions from Encarta dictionary, and (6) a corpus of similar user queries. Among the resources, (1), (2), (3), and (4) have been investigated by other researchers, while (5) and (6) are first used in this paper. From those resources, six phrasal paraphrase tables are extracted, which are then used in a log-linear SMTbased paraphrasing model. Both phrase-level and sentence-level evaluations were carried out in the experiments. In the former one, phrase substitutes occurring in the paraphrase sentences were evaluated. While in the latter one, the acceptability of the paraphrase sentences was evaluated. Experimental results show that: (1) The 1021 SMT-based paraphrasing is enhanced using multiple resources. 
The phrase-level and sentence-level precision of the generated paraphrases exceed 60% and 55%, respectively. (2) Although the contributions of the resources differ a lot, all the resources are useful. (3) The performance of the method varies greatly on different test sets and it performs best on the test set of news sentences, which are from the same source as most of the training data. The rest of the paper is organized as follows: Section 2 reviews related work. Section 3 introduces the log-linear model for paraphrase generation. Section 4 describes the phrasal paraphrase extraction from different resources. Section 5 presents the parameter estimation method. Section 6 shows the experiments and results. Section 7 draws the conclusion. 2 Related Work Paraphrases have been used in many NLP applications. In MT, Callison-Burch et al. (2006) utilized paraphrases of unseen source phrases to alleviate data sparseness. Kauchak and Barzilay (2006) used paraphrases of the reference translations to improve automatic MT evaluation. In QA, Lin and Pantel (2001) and Ravichandran and Hovy (2002) paraphrased the answer patterns to enhance the recall of answer extraction. In IE, Shinyama et al. (2002) automatically learned paraphrases of IE patterns to reduce the cost of creating IE patterns by hand. In MDS, McKeown et al. (2002) identified paraphrase sentences across documents before generating summarizations. In NLG, Iordanskaja et al. (1991) used paraphrases to generate more varied and fluent texts. Previous work has examined various resources for acquiring paraphrases, including thesauri, monolingual corpora, bilingual corpora, and the web. Thesauri, such as WordNet, have been widely used for extracting paraphrases. Some researchers extract synonyms as paraphrases (Kauchak and Barzilay, 2006), while some others use looser definitions, such as hypernyms and holonyms (Barzilay and Elhadad, 1997). Besides, the automatically constructed thesauri can also be used. Lin (1998) constructed a thesaurus by automatically clustering words based on context similarity. Barzilay and McKeown (2001) used monolingual parallel corpora for identifying paraphrases. They exploited a corpus of multiple English translations of the same source text written in a foreign language, from which phrases in aligned sentences that appear in similar contexts were extracted as paraphrases. In addition, Finch et al. (2005) applied MT evaluation methods (BLEU, NIST, WER and PER) to build classifiers for paraphrase identification. Monolingual parallel corpora are difficult to find, especially in non-literature domains. Alternatively, some researchers utilized monolingual comparable corpora for paraphrase extraction. Different news articles reporting on the same event are commonly used as monolingual comparable corpora, from which both paraphrase patterns and phrasal paraphrases can be derived (Shinyama et al., 2002; Barzilay and Lee, 2003; Quirk et al., 2004). Lin and Pantel (2001) learned paraphrases from a parsed monolingual corpus based on an extended distributional hypothesis, where if two paths in dependency trees tend to occur in similar contexts it is hypothesized that the meanings of the paths are similar. The monolingual corpus used in their work is not necessarily parallel or comparable. Thus it is easy to obtain. However, since this resource is used to extract paraphrase patterns other than phrasal paraphrases, we do not use it in this paper. 
Bannard and Callison-Burch (2005) learned phrasal paraphrases using bilingual parallel corpora. The basic idea is that if two phrases are aligned to the same translation in a foreign language, they may be paraphrases. This method has been demonstrated effective in extracting large volume of phrasal paraphrases. Besides, Wu and Zhou (2003) exploited bilingual corpora and translation information in learning synonymous collocations. In addition, some researchers extracted paraphrases from the web. For example, Ravichandran and Hovy (2002) retrieved paraphrase patterns from the web using hand-crafted queries. Pasca and Dienes (2005) extracted sentence fragments occurring in identical contexts as paraphrases from one billion web documents. Since web mining is rather time consuming, we do not exploit the web to extract paraphrases in this paper. So far, two kinds of methods have been proposed for sentence-level paraphrase generation, i.e., the pattern-based and SMT-based methods. Automatically learned patterns have been used in para1022 phrase generation. For example, Barzilay and Lee (2003) applied multiple-sequence alignment (MSA) to parallel news sentences and induced paraphrasing patterns for generating new sentences. Pang et al. (2003) built finite state automata (FSA) from semantically equivalent translation sets based on syntactic alignment and used the FSAs in paraphrase generation. The pattern-based methods can generate complex paraphrases that usually involve syntactic variation. However, the methods were demonstrated to be of limited generality (Quirk et al., 2004). Quirk et al. (2004) first recast paraphrase generation as monolingual SMT. They generated paraphrases using a SMT system trained on parallel sentences extracted from clustered news articles. In addition, Madnani et al. (2007) also generated sentence-level paraphrases based on a SMT model. The advantage of the SMT-based method is that it achieves better coverage than the pattern-based method. The main difference between their methods and ours is that they only used bilingual parallel corpora as paraphrase resource, while we exploit and combine multiple resources. 3 SMT-based Paraphrasing Model The SMT-based paraphrasing model used by Quirk et al. (2004) was the noisy channel model of Brown et al. (1993), which identified the optimal paraphrase T ∗of a sentence S by finding: T ∗= arg max T {P(T|S)} = arg max T {P(S|T)P(T)} (1) In contrast, we adopt a log-linear model (Och and Ney, 2002) in this work, since multiple paraphrase tables can be easily combined in the loglinear model. Specifically, feature functions are derived from each paraphrase resource and then combined with the language model feature1: T ∗= arg max T { N X i=1 λT M ihT M i(T, S)+ λLMhLM(T, S)} (2) where N is the number of paraphrase tables. hTM i(T, S) is the feature function based on the ith paraphrase table PTi. hLM(T, S) is the language 1The reordering model is not considered in our model. model feature. λTM i and λLM are the weights of the feature functions. hTM i(T, S) is defined as: hT M i(T, S) = log Ki Y k=1 Scorei(Tk, Sk) (3) where Ki is the number of phrase substitutes from S to T based on PTi. Tk in T and Sk in S are phrasal paraphrases in PTi. Scorei(Tk, Sk) is the paraphrase likelihood according to PTi2. A 5-gram language model is used, therefore: hLM(T, S) = log J Y j=1 p(tj|tj−4, ..., tj−1) (4) where J is the length of T, tj is the j-th word of T. 
4 Exploiting Multiple Resources This section describes the extraction of phrasal paraphrases using various resources. Similar to Pharaoh (Koehn, 2004), our decoder3 uses top 20 paraphrase options for each input phrase in the default setting. Therefore, we keep at most 20 paraphrases for a phrase when extracting phrasal paraphrases using each resource. 1 - Thesaurus: The thesaurus4 used in this work was automatically constructed by Lin (1998). The similarity of two words e1 and e2 was calculated through the surrounding context words that have dependency relations with the investigated words: Sim(e1, e2) = P (r,e)∈Tr(e1)∩Tr(e2)(I(e1, r, e) + I(e2, r, e)) P (r,e)∈Tr(e1) I(e1, r, e) + P (r,e)∈Tr(e2) I(e2, r, e) (5) where Tr(ei) denotes the set of words that have dependency relation r with word ei. I(ei, r, e) is the mutual information between ei, r and e. For each word, we keep 20 most similar words as paraphrases. In this way, we extract 502,305 pairs of paraphrases. The paraphrasing score Score1(p1, p2) used in Equation (3) is defined as the similarity based on Equation (5). 2If none of the phrase substitutes from S to T is from PTi (i.e., Ki = 0), we cannot compute hT M i(T, S) as in Equation (3). In this case, we assign hT M i(T, S) a minimum value. 3The decoder used here is a re-implementation of Pharaoh. 4http://www.cs.ualberta.ca/ lindek/downloads.htm. 1023 2 - Monolingual parallel corpus: Following Barzilay and McKeown (2001), we exploit a corpus of multiple English translations of foreign novels, which contains 25,804 parallel sentence pairs. We find that most paraphrases extracted using the method of Barzilay and McKeown (2001) are quite short. Thus we employ a new approach for paraphrase extraction. Specifically, we parse the sentences with CollinsParser5 and extract the chunks from the parsing results. Let S1 and S2 be a pair of parallel sentences, p1 and p2 two chunks from S1 and S2, we compute the similarity of p1 and p2 as: Sim(p1, p2) = αSimcontent(p1, p2)+ (1 −α)Simcontext(p1, p2) (6) where, Simcontent(p1, p2) is the content similarity, which is the word overlapping rate of p1 and p2. Simcontext(p1, p2) is the context similarity, which is the word overlapping rate of the contexts of p1 and p26. If the similarity of p1 and p2 exceeds a threshold Th1, they are identified as paraphrases. We extract 18,698 pairs of phrasal paraphrases from this resource. The paraphrasing score Score2(p1, p2) is defined as the similarity in Equation (6). For the paraphrases occurring more than once, we use their maximum similarity as the paraphrasing score. 3 - Monolingual comparable corpus: Similar to the methods in (Shinyama et al., 2002; Barzilay and Lee, 2003), we construct a corpus of comparable documents from a large corpus D of news articles. The corpus D contains 612,549 news articles. Given articles d1 and d2 from D, if their publication date interval is less than 2 days and their similarity7 exceeds a threshold Th2, they are recognized as comparable documents. In this way, a corpus containing 5,672,864 pairs of comparable documents is constructed. From the comparable corpus, parallel sentences are extracted. Let s1 and s2 be two sentences from comparable documents d1 and d2, if their similarity based on word overlapping rate is above a threshold Th3, s1 and s2 are identified as parallel sentences. In this way, 872,330 parallel sentence pairs are extracted. 
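The overlap-based similarities used in this section can be made concrete with a small sketch. The code below is illustrative only: it assumes one common definition of the word overlapping rate, and the interpolation weight and threshold are placeholders for the tuned parameters α and Th3 mentioned above, not the values actually used.

# Hedged sketch of the overlap-based similarities described above.

def overlap(tokens1, tokens2):
    """Word overlapping rate, here taken as shared word types relative to the
    average number of types (an assumed definition)."""
    s1, s2 = set(tokens1), set(tokens2)
    if not s1 or not s2:
        return 0.0
    return 2.0 * len(s1 & s2) / (len(s1) + len(s2))

def chunk_similarity(chunk1, context1, chunk2, context2, alpha=0.5):
    # Equation (6): interpolate content overlap and context overlap;
    # alpha here is a placeholder, not the tuned value.
    return alpha * overlap(chunk1, chunk2) + (1 - alpha) * overlap(context1, context2)

def is_parallel(sentence1, sentence2, th3=0.5):
    # Sentence pairing from comparable documents: keep pairs whose word
    # overlapping rate exceeds the threshold Th3 (value assumed here).
    return overlap(sentence1.split(), sentence2.split()) > th3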
5http://people.csail.mit.edu/mcollins/code.html 6The context of a chunk is made up of 6 words around the chunk, 3 to the left and 3 to the right. 7The similarity of two documents is computed using the vector space model and the word weights are based on tf·idf. We run Giza++ (Och and Ney, 2000) on the parallel sentences and then extract aligned phrases as described in (Koehn, 2004). The generated paraphrase table is pruned by keeping the top 20 paraphrases for each phrase. After pruning, 100,621 pairs of paraphrases are extracted. Given phrase p1 and its paraphrase p2, we compute Score3(p1, p2) by relative frequency (Koehn et al., 2003): Score3(p1, p2) = p(p2|p1) = count(p2, p1) P p′ count(p′, p1) (7) People may wonder why we do not use the same method on the monolingual parallel and comparable corpora. This is mainly because the volumes of the two corpora differ a lot. In detail, the monolingual parallel corpus is fairly small, thus automatical word alignment tool like Giza++ may not work well on it. In contrast, the monolingual comparable corpus is quite large, hence we cannot conduct the timeconsuming syntactic parsing on it as we do on the monolingual parallel corpus. 4 - Bilingual phrase table: We first construct a bilingual phrase table that contains 15,352,469 phrase pairs from an English-Chinese parallel corpus. We extract paraphrases from the bilingual phrase table and compute the paraphrasing score of phrases p1 and p2 as in (Bannard and CallisonBurch, 2005): Score4(p1, p2) = X f p(f|p1)p(p2|f) (8) where f denotes a Chinese translation of both p1 and p2. p(f|p1) and p(p2|f) are the translation probabilities provided by the bilingual phrase table. For each phrase, the top 20 paraphrases are kept according to the score in Equation (8). As a result, 3,177,600 pairs of phrasal paraphrases are extracted. 5 - Encarta dictionary definitions: Words and their definitions can be regarded as paraphrases. Here are some examples from Encarta dictionary: “hurricane: severe storm”, “clever: intelligent”, “travel: go on journey”. In this work, we extract words’ definitions from Encarta dictionary web pages8. If a word has more than one definition, all of them are extracted. Note that the words and definitions in the 8http://encarta.msn.com/encnet/features/dictionary/dictionaryhome.aspx 1024 dictionary are lemmatized, but words in sentences are usually inflected. Hence, we expand the word - definition pairs by providing the inflected forms. Here we use an inflection list and some rules for inflection. After expanding, 159,456 pairs of phrasal paraphrases are extracted. Let < p1, p2 > be a word - definition pair, the paraphrasing score is defined according to the rank of p2 in all of p1’s definitions: Score5(p1, p2) = γi−1 (9) where γ is a constant (we empirically set γ = 0.9) and i is the rank of p2 in p1’s definitions. 6 - Similar user queries: Clusters of similar user queries have been used for query expansion and suggestion (Gao et al., 2007). Since most queries are at the phrase level, we exploit similar user queries as phrasal paraphrases. In our experiment, we use the corpus of clustered similar MSN queries constructed by Gao et al. (2007). The similarity of two queries p1 and p2 is computed as: Sim(p1, p2) = βSimcontent(p1, p2)+ (1 −β)Simclick−through(p1, p2) (10) where Simcontent(p1, p2) is the content similarity, which is computed as the word overlapping rate of p1 and p2. 
Simclick-through(p1, p2) is the click-through similarity, which is the overlapping rate of the user-clicked documents for p1 and p2. For each query q, we keep the top 20 similar queries whose similarity with q exceeds a threshold Th4. As a result, 395,284 pairs of paraphrases are extracted. The score Score6(p1, p2) is defined as the similarity in Equation (10). 7 - Self-paraphrase: In addition to the six resources introduced above, a special paraphrase table is used, which is made up of pairs of identical words. This paraphrase table is necessary because a word should be allowed to remain unchanged in paraphrasing. This is a difference between paraphrasing and MT, since all words should be translated in MT. In our experiments, all the words that occur in the six paraphrase tables extracted above are gathered to form the self-paraphrase table, which contains 110,403 word pairs. The score Score7(p1, p2) is set to 1 for each identical word pair.
5 Parameter Estimation
The weights of the feature functions, namely λTMi (i = 1, 2, ..., 7) and λLM, need estimation9. In MT, the max-BLEU algorithm is widely used to estimate parameters. However, it may not work in our case, since it is more difficult to create a reference set of paraphrases. We propose a new technique to estimate parameters in paraphrasing. The assumption is that, since an SMT-based paraphrase is generated through phrase substitution, we can measure the quality of a generated paraphrase by measuring its phrase substitutes. Generally, the paraphrases containing more correct phrase substitutes are judged as better paraphrases10. We therefore present the phrase substitution error rate (PSER) to score a generated paraphrase T:
PSER(T) = \|PS_0(T)\| / \|PS(T)\|   (11)
where PS(T) is the set of phrase substitutes in T and PS_0(T) is the set of incorrect substitutes. In practice, we keep the top n paraphrases for each sentence S. Thus we calculate the PSER for each source sentence S as:
PSER(S) = \|\bigcup_{i=1}^{n} PS_0(T_i)\| / \|\bigcup_{i=1}^{n} PS(T_i)\|   (12)
where T_i is the i-th generated paraphrase of S. Suppose there are N sentences in the development set; the overall PSER is computed as:
PSER = \sum_{j=1}^{N} PSER(S_j)   (13)
where S_j is the j-th sentence in the development set. Our development set contains 75 sentences (described in detail in Section 6). For each sentence, all possible phrase substitutes are extracted from the six paraphrase tables above. The extracted phrase substitutes are then manually labeled as "correct" or "incorrect". A phrase substitute is considered correct only if the two phrases have the same meaning in the given sentence and the sentence generated by substituting the source phrase with the target phrase remains grammatical.
9Note that we also use some other parameters when extracting phrasal paraphrases from different resources, such as the thresholds Th1, Th2, Th3, Th4, as well as α and β in Equations (6) and (10). These parameters are estimated using different development sets from the investigated resources. We do not describe their estimation due to space limitations.
10Paraphrasing a word to itself (based on the 7-th paraphrase table above) is not regarded as a substitute.
In decoding, the phrase substitutes are printed out and then the PSER is computed based on the labeled data. Using each set of parameters, we generate paraphrases for the sentences in the development set based on Equation (2). PSER is then computed as in Equation (13).
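Concretely, the PSER bookkeeping of Equations (11)-(13) amounts to something like the following sketch. The data layout (one set of phrase substitutes per generated paraphrase, plus the set of substitutes labeled incorrect on the development set) is an assumption made here for illustration; it is not taken from the paper.

```python
# Illustrative PSER computation (Equations (11)-(13)); not the authors' code.

def pser_for_sentence(paraphrase_substitutes, incorrect_substitutes):
    """Equation (12): PSER over the union of substitutes of the top-n outputs.

    paraphrase_substitutes: list of sets, one set of phrase substitutes per
    generated paraphrase T_i of the source sentence S.
    incorrect_substitutes: substitutes labeled "incorrect" on the dev set.
    """
    all_subs = set().union(*paraphrase_substitutes) if paraphrase_substitutes else set()
    bad_subs = all_subs & incorrect_substitutes
    return len(bad_subs) / len(all_subs) if all_subs else 0.0

def overall_pser(substitutes_per_sentence, incorrect_per_sentence):
    """Equation (13): accumulate the per-sentence PSER over the dev set."""
    return sum(pser_for_sentence(subs, bad)
               for subs, bad in zip(substitutes_per_sentence, incorrect_per_sentence))
```

The parameter search described next simply treats this quantity as the objective to be minimized.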
We use the gradient descent algorithm (Press et al., 1992) to minimize PSER on the development set and obtain the optimal parameters.
6 Experiments
To evaluate the performance of the method on different types of test data, we used three kinds of sentences for testing, which were randomly extracted from Google news, free online novels, and forums, respectively. For each type, 50 sentences were extracted as test data and another 25 were extracted as development data. For each test sentence, the top 10 generated paraphrases were kept for evaluation.
6.1 Phrase-level Evaluation
The phrase-level evaluation was carried out to investigate the contributions of the paraphrase tables. For each test sentence, all possible phrase substitutes were first extracted from the paraphrase tables and manually labeled as "correct" or "incorrect". Here, the criterion for identifying paraphrases is the same as that described in Section 5. Then, in the decoding stage, the phrase substitutes were printed out and evaluated using the labeled data. Two metrics were used here. The first is the number of distinct correct substitutes (#DCS). Obviously, the more distinct correct phrase substitutes a paraphrase table can provide, the more valuable it is. The second is the accuracy of the phrase substitutes, which is computed as:
Accuracy = #correct phrase substitutes / #all phrase substitutes   (14)
To evaluate the PTs learned from different resources, we first used each PT (from 1 to 6) along with PT-7 in decoding. The results are shown in Table 1.
PT combination   #DCS   Accuracy
1+7              178    14.61%
2+7              94     25.06%
3+7              202    18.35%
4+7              553    56.93%
5+7              231    20.48%
6+7              21     14.42%
Table 1: Contributions of the paraphrase tables. PT-1: from the thesaurus; PT-2: from the monolingual parallel corpus; PT-3: from the monolingual comparable corpus; PT-4: from the bilingual parallel corpus; PT-5: from the Encarta dictionary definitions; PT-6: from the similar MSN user queries; PT-7: self-paraphrases.
It can be seen that PT-4 is the most useful, as it provides the most correct substitutes and its accuracy is the highest. We believe this is because PT-4 is much larger than the other PTs. Compared with PT-4, the accuracies of the other PTs are fairly low. This is because those PTs are smaller and can thus provide fewer correct phrase substitutes. As a result, plenty of incorrect substitutes were included in the top 10 generated paraphrases. PT-6 provides the fewest correct phrase substitutes and its accuracy is the lowest. There are several reasons. First, many phrases in PT-6 are not real phrases but only sets of keywords (e.g., "lottery results ny"), which may not appear in sentences. Second, many words in this table have spelling mistakes (e.g., "widows vista"). Third, some phrase pairs in PT-6 are not paraphrases but only "related queries" (e.g., "back tattoo" vs. "butterfly tattoo"). Fourth, many phrases in PT-6 contain proper names or out-of-vocabulary words, which are difficult to match. The accuracy based on PT-1 is also quite low. We found that this is mainly because the phrase pairs in PT-1 are automatically clustered, and many of them are merely "similar" words rather than synonyms (e.g., "borrow" vs. "buy"). Next, we try to find out whether it is necessary to combine all PTs. Thus we conducted several runs, each of which added the most useful of the remaining PTs. The results are shown in Table 2.
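Both phrase-level metrics reported in Tables 1 and 2 (#DCS and the accuracy of Equation (14)) reduce to simple counting once the substitutes have been labeled. The sketch below assumes the labeled substitutes are available as (source phrase, target phrase, correct?) triples; that representation is an illustrative choice, not the paper's.

```python
# Sketch of the phrase-level metrics: number of distinct correct
# substitutes (#DCS) and substitute accuracy (Equation (14)).

def phrase_level_metrics(labeled_substitutes):
    """labeled_substitutes: iterable of (source, target, is_correct) triples
    collected from decoding the test sentences with a given PT combination."""
    labeled_substitutes = list(labeled_substitutes)
    total = len(labeled_substitutes)
    correct = [(s, t) for s, t, ok in labeled_substitutes if ok]
    dcs = len(set(correct))                     # distinct correct substitutes
    accuracy = len(correct) / total if total else 0.0
    return dcs, accuracy

# e.g. two distinct correct substitutes out of four applications -> (2, 0.75)
print(phrase_level_metrics([("quick", "fast", True), ("quick", "fast", True),
                            ("buy", "purchase", True), ("buy", "borrow", False)]))
```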
We can see that all the PTs are useful, as each PT provides some new correct phrase substitutes and the accuracy increases when adding each PT except PT-1. Since the PTs are extracted from different resources, they have different contributions. Here we only discuss the contributions of PT-5 and PT-6, which are first used in paraphrasing in this paper. PT-5 is useful for paraphrasing uncommon concepts since it can “explain” concepts with their definitions. 1026 PT combination #DCS Accuracy 4+7 553 56.93% 4+5+7 581 58.97% 4+5+3+7 638 59.42% 4+5+3+2+7 649 60.15% 4+5+3+2+1+7 699 60.14% 4+5+3+2+1+6+7 711 60.16% Table 2: Performances of different combinations of paraphrase tables. For instance, in the following test sentence S1, the word “amnesia” is a relatively uncommon word, especially for the people using English as the second language. Based on PT-5, S1 can be paraphrased into T1, which is much easier to understand. S1: I was suffering from amnesia. T1: I was suffering from memory loss. The disadvantage of PT-5 is that substituting words with the definitions sometimes leads to grammatical errors. For instance, substituting “heat shield” in the sentence S2 with “protective barrier against heat” keeps the meaning unchanged. However, the paraphrased sentence T2 is ungrammatical. S2: The U.S. space agency has been cautious about heat shield damage. T2: The U.S. space administration has been cautious about protective barrier against heat damage. As previously mentioned, PT-6 is less effective compared with the other PTs. However, it is useful for paraphrasing some special phrases, such as digital products, computer software, etc, since these phrases often appear in user queries. For example, S3 below can be paraphrased into T3 using PT-6. S3: I have a canon powershot S230 that uses CF memory cards. T3: I have a canon digital camera S230 that uses CF memory cards. The phrase “canon powershot” can hardly be paraphrased using the other PTs. It suggests that PT6 is useful for paraphrasing new emerging concepts and expressions. Test sentences Top-1 Top-5 Top-10 All 150 55.33% 45.20% 39.28% 50 from news 70.00% 62.00% 57.03% 50 from novel 56.00% 46.00% 37.42% 50 from forum 40.00% 27.60% 23.34% Table 3: Top-n accuracy on different test sentences. 6.2 Sentence-level Evaluation In this section, we evaluated the sentence-level quality of the generated paraphrases11. In detail, each generated paraphrase was manually labeled as “acceptable” or “unacceptable”. Here, the criterion for counting a sentence T as an acceptable paraphrase of sentence S is that T is understandable and its meaning is not evidently changed compared with S. For example, for the sentence S4, T4 is an acceptable paraphrase generated using our method. S4: The strain on US forces of fighting in Iraq and Afghanistan was exposed yesterday when the Pentagon published a report showing that the number of suicides among US troops is at its highest level since the 1991 Gulf war. T4: The pressure on US troops of fighting in Iraq and Afghanistan was revealed yesterday when the Pentagon released a report showing that the amount of suicides among US forces is at its top since the 1991 Gulf conflict. We carried out sentence-level evaluation using the top-1, top-5, and top-10 results of each test sentence. The accuracy of the top-n results was computed as: Accuracytop−n = PN i=1 ni N × n (15) where N is the number of test sentences. ni is the number of acceptable paraphrases in the top-n paraphrases of the i-th test sentence. 
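The top-n accuracy of Equation (15) can likewise be sketched in a few lines; the representation of the judgements (one list of acceptable/unacceptable labels per test sentence) is assumed here for illustration.

```python
# Sketch of the sentence-level top-n accuracy (Equation (15)).

def top_n_accuracy(judgements, n):
    """judgements[i] is the list of acceptability labels (True/False) for the
    generated paraphrases of the i-th test sentence, best-scored first."""
    N = len(judgements)
    if N == 0 or n == 0:
        return 0.0
    acceptable = sum(sum(1 for ok in per_sentence[:n] if ok)
                     for per_sentence in judgements)
    return acceptable / (N * n)

# Two test sentences, top-2 evaluation: 3 acceptable out of 4 -> 0.75
print(top_n_accuracy([[True, False], [True, True]], n=2))
```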
We computed the accuracy on the whole test set (150 sentences) as well as on the three subsets, i.e., the 50 news sentences, 50 novel sentences, and 50 forum sentences. The results are shown in table 3. It can be seen that the accuracy varies greatly on different test sets. The accuracy on the news sentences is the highest, while that on the forum sentences is the lowest. There are several reasons. First, 11The evaluation was based on the paraphrasing results using the combination of all seven PTs. 1027 the largest PT used in the experiments is extracted using the bilingual parallel data, which are mostly from news documents. Thus, the test set of news sentences is more similar to the training data. Second, the news sentences are formal while the novel and forum sentences are less formal. Especially, some of the forum sentences contain spelling mistakes and grammar mistakes. Third, we find in the results that, most phrases paraphrased in the novel and forum sentences are commonly used phrases or words, such as “food”, “good”, “find”, etc. These phrases are more difficult to paraphrase than the less common phrases, since they usually have much more paraphrases in the PTs. Therefore, it is more difficult to choose the right paraphrase from all the candidates when conducting sentence-level paraphrase generation. Fourth, the forum sentences contain plenty of words such as “board (means computer board)”, “site (means web site)”, “mouse (means computer mouse)”, etc. These words are polysemous and have particular meanings in the domains of computer science and internet. Our method performs poor when paraphrasing these words since the domain of a context sentence is hard to identify. After observing the results, we find that there are three types of errors: (1) syntactic errors: the generated sentences are ungrammatical. About 32% of the unacceptable results are due to syntactic errors. (2) semantic errors: the generated sentences are incomprehensible. Nearly 60% of the unacceptable paraphrases have semantic errors. (3) non-paraphrase: the generated sentences are well formed and comprehensible but are not paraphrases of the input sentences. 8% of the unacceptable results are of this type. We believe that many of the errors above can be avoided by applying syntactic constraints and by making better use of context information in decoding, which is left as our future work. 7 Conclusion This paper proposes a method that improves the SMT-based sentence-level paraphrase generation using phrasal paraphrases automatically extracted from different resources. Our contribution is that we combine multiple resources in the framework of SMT for paraphrase generation, in which the dictionary definitions and similar user queries are first used as phrasal paraphrases. In addition, we analyze and compare the contributions of different resources. Experimental results indicate that although the contributions of the exploited resources differ a lot, they are all useful to sentence-level paraphrase generation. Especially, the dictionary definitions and similar user queries are effective for paraphrasing some certain types of phrases. In the future work, we will try to use syntactic and context constraints in paraphrase generation to enhance the acceptability of the paraphrases. In addition, we will extract paraphrase patterns that contain more structural variation and try to combine the SMT-based and pattern-based systems for sentencelevel paraphrase generation. 
Acknowledgments We would like to thank Mu Li for providing us with the SMT decoder. We are also grateful to Dongdong Zhang for his help in the experiments. References Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with Bilingual Parallel Corpora. In Proceedings of ACL, pages 597-604. Regina Barzilay and Michael Elhadad. 1997. Using Lexical Chains for Text Summarization. In Proceedings of the ACL Workshop on Intelligent Scalable Text Summarization, pages 10-17. Regina Barzilay and Lillian Lee. 2003. Learning to Paraphrase: An Unsupervised Approach Using MultipleSequence Alignment. In Proceedings of HLT-NAACL, pages 16-23. Regina Barzilay and Kathleen R. McKeown. 2001. Extracting Paraphrases from a Parallel Corpus. In Proceedings of ACL, pages 50-57. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. In Computational Linguistics 19(2): 263-311. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved Statistical Machine Translation Using Paraphrases. In Proceedings of HLTNAACL, pages 17-24. Andrew Finch, Young-Sook Hwang, and Eiichiro Sumita. 2005. Using Machine Translation Evaluation Techniques to Determine Sentence-level Semantic Equivalence. In Proceedings of IWP, pages 17-24. 1028 Wei Gao, Cheng Niu, Jian-Yun Nie, Ming Zhou, Jian Hu, Kam-Fai Wong, and Hsiao-Wuen Hon. 2007. CrossLingual Query Suggestion Using Query Logs of Different Languages. In Proceedings of SIGIR, pages 463-470. Lidija Iordanskaja, Richard Kittredge, and Alain Polgu`ere. 1991. Lexical Selection and Paraphrase in a Meaning-Text Generation Model. In Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 293-312. David Kauchak and Regina Barzilay. 2006. Paraphrasing for Automatic Evaluation. In Proceedings of HLTNAACL, pages 455-462. Philipp Koehn. 2004. Pharaoh: a Beam Search Decoder for Phrase-Based Statistical Machine Translation Models: User Manual and Description for Version 1.2. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT-NAACL, pages 127-133. De-Kang Lin. 1998. Automatic Retrieval and Clustering of Similar Words. In Proceedings of COLING/ACL, pages 768-774. De-Kang Lin and Patrick Pantel. 2001. Discovery of Inference Rules for Question Answering. In Natural Language Engineering 7(4): 343-360. Nitin Madnani, Necip Fazil Ayan, Philip Resnik, and Bonnie J. Dorr. 2007. Using Paraphrases for Parameter Tuning in Statistical Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 120-127. Kathleen R. Mckeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and Summarizing News on a Daily Basis with Columbia’s Newsblaster. In Proceedings of HLT, pages 280-285. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of ACL, pages 440-447. Franz Josef Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In Proceedings of ACL, pages 295-302. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences. In Proceedings of HLT-NAACL, pages 102-109. Marius Pasca and P´eter Dienes. 2005. 
Aligning Needles in a Haystack: Paraphrase Acquisition Across the Web. In Proceedings of IJCNLP, pages 119-130. William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, U.K., 1992, 412-420. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual Machine Translation for Paraphrase Generation. In Proceedings of EMNLP, pages 142149. Deepak Ravichandran and Eduard Hovy. 2002. Learning Surface Text Patterns for a Question Answering System. In Proceedings of ACL, pages 41-47. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic Paraphrase Acquisition from News Articles. In Proceedings of HLT, pages 40-46. Hua Wu and Ming Zhou. 2003. Synonymous Collocation Extraction Using Translation Information. In Proceedings of ACL, pages 120-127. 1029
2008
116
Proceedings of ACL-08: HLT, pages 1030–1038, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Extraction of Entailed Semantic Relations Through Syntax-based Comma Resolution Vivek Srikumar 1 Roi Reichart2 Mark Sammons1 Ari Rappoport2 Dan Roth1 1University of Illinois at Urbana-Champaign {vsrikum2|mssammon|danr}@uiuc.edu 2Institute of Computer Science, Hebrew University of Jerusalem {roiri|arir}@cs.huji.ac.il Abstract This paper studies textual inference by investigating comma structures, which are highly frequent elements whose major role in the extraction of semantic relations has not been hitherto recognized. We introduce the problem of comma resolution, defined as understanding the role of commas and extracting the relations they imply. We show the importance of the problem using examples from Textual Entailment tasks, and present A Sentence Transformation Rule Learner (ASTRL), a machine learning algorithm that uses a syntactic analysis of the sentence to learn sentence transformation rules that can then be used to extract relations. We have manually annotated a corpus identifying comma structures and relations they entail and experimented with both gold standard parses and parses created by a leading statistical parser, obtaining F-scores of 80.2% and 70.4% respectively. 1 Introduction Recognizing relations expressed in text sentences is a major topic in NLP, fundamental in applications such as Textual Entailment (or Inference), Question Answering and Text Mining. In this paper we address this issue from a novel perspective, that of understanding the role of the commas in a sentence, which we argue is a key component in sentence comprehension. Consider for example the following three sentences: 1. Authorities have arrested John Smith, a retired police officer. 2. Authorities have arrested John Smith, his friend and his brother. 3. Authorities have arrested John Smith, a retired police officer announced this morning. Sentence (1) states that John Smith is a retired police officer. The comma and surrounding sentence structure represent the relation ‘IsA’. In (2), the comma and surrounding structure signifies a list, so the sentence states that three people were arrested: (i) John Smith, (ii) his friend, and (iii) his brother. In (3), a retired police officer announced that John Smith has been arrested. Here, the comma and surrounding sentence structure indicate clause boundaries. In all three sentences, the comma and the surrounding sentence structure signify relations essential to comprehending the meaning of the sentence, in a way that is not easily captured using lexicalor even shallow parse-level information. As a human reader, we understand them easily, but automated systems for Information Retrieval, Question Answering, and Textual Entailment are likely to encounter problems when comparing structures like these, which are lexically similar, but whose meanings are so different. In this paper we present an algorithm for comma resolution, a task that we define to consist of (1) disambiguating comma type and (2) determining the relations entailed from the sentence given the commas’ interpretation. Specifically, in (1) we assign each comma to one of five possible types, and in (2) we generate a set of natural language sentences that express the relations, if any, signified by each comma structure. The algorithm uses information extracted from parse trees. 
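Before turning to the motivation and the algorithm, it may help to fix a concrete picture of what a resolved comma structure looks like. The sketch below is purely illustrative: the class, field names and token indexing are ours, not the paper's; only the five type labels (defined in Section 4) and the example relations come from the paper.

```python
# Illustrative representation of the output of comma resolution; the class
# and field names are not from the paper, only the type labels are.
from dataclasses import dataclass
from enum import Enum
from typing import List

class CommaType(Enum):
    SUBSTITUTE = "substitute"   # apposition / IS-A
    ATTRIBUTE = "attribute"
    LOCATION = "location"
    LIST = "list"
    OTHER = "other"

@dataclass
class ResolvedCommaStructure:
    comma_indices: List[int]    # token positions of the commas in the structure
    comma_type: CommaType
    relations: List[str]        # entailed simple sentences, if any

# Sentence (1) above, assuming whitespace tokenization with the comma as
# its own token (so the comma is token 5):
example = ResolvedCommaStructure(
    comma_indices=[5],
    comma_type=CommaType.SUBSTITUTE,
    relations=["John Smith is a retired police officer",
               "Authorities have arrested John Smith"],
)
```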
This work, in addition to having immediate significance for natural language processing systems that use semantic content, has potential applications in improving a range of auto1030 mated analysis by decomposing complex sentences into a set of simpler sentences that capture the same meaning. Although there are many other widelyused structures that express relations in a similar way, commas are one of the most commonly used symbols1. By addressing comma resolution, we offer a promising first step toward resolving relations in sentences. To evaluate the algorithm, we have developed annotation guidelines, and manually annotated sentences from the WSJ PennTreebank corpus. We present a range of experiments showing the good performance of the system, using gold-standard and parser-generated parse trees. In Section 2 we motivate comma resolution through Textual Entailment examples. Section 3 describes related work. Sections 4 and 5 present our corpus annotation and learning algorithm. Results are given in Section 6. 2 Motivating Comma Resolution Through Textual Entailment Comma resolution involves not only comma disambiguation but also inference of the arguments (and argument boundaries) of the relationship represented by the comma structure, and the relationships holding between these arguments and the sentence as a whole. To our knowledge, this is the first paper that deals with this problem, so in this section we motivate it in depth by showing its importance to the semantic inference task of Textual Entailment (TE) (Dagan et al., 2006), which is increasingly recognized as a crucial direction for improving a range of NLP tasks such as information extraction, question answering and summarization. TE is the task of deciding whether the meaning of a text T (usually a short snippet) can be inferred from the meaning of another text S. If this is the case, we say that S entails T. For example2, we say that sentence (1) entails sentence (2): 1. S: Parviz Davudi was representing Iran at a meeting of the Shanghai Co-operation Organization (SCO), the fledgling association that 1For example, the WSJ corpus has 49K sentences, among which 32K with one comma or more, 17K with two or more, and 7K with three or more. 2The examples of this section are variations of pairs taken from the Pascal RTE3 (Dagan et al., 2006) dataset. binds two former Soviet republics of central Asia, Russia and China to fight terrorism. 2. T: SCO is the fledgling association that binds several countries. To see that (1) entails (2), one must understand that the first comma structure in sentence (1) is an apposition structure, and does not indicate the beginning of a list. The second comma marks a boundary between entities in a list. To make the correct inference one must determine that the second comma is a list separator, not an apposition marker. Misclassifying the second comma in (1) as an apposition leads to the conclusion that (1) entails (3): 3. T: Russia and China are two former Soviet republics of central Asia . Note that even to an educated native speaker of English, sentence 1 may be initially confusing; during the first reading, one might interpret the first comma as indicating a list, and that ‘the Shanghai Co-operation Organization’ and ‘the fledgling association that binds...’ are two separate entities that are meeting, rather than two representations of the same entity. From these examples we draw the following conclusions: 1. Comma resolution is essential in comprehending natural language text. 2. 
Explicitly representing relations derived from comma structures can assist a wide range of NLP tasks; this can be done by directly augmenting the lexical-level representation, e.g., by bringing surface forms of two text fragments with the same meaning closer together. 3. Comma structures might be highly ambiguous, nested and overlapping, and consequently their interpretation is a difficult task. The argument boundaries of the corresponding extracted relations are also not easy to detect. The output of our system could be used to augment sentences with an explicit representation of entailed relations that hold in them. In Textual Entailment systems this can increase the likelihood of correct identification of entailed sentences, and in other NLP systems it can help understanding the shallow lexical/syntactic content of a sentence. A similar approach has been taken in (Bar-Haim et al., 2007; de Salvo Braz et al., 2005), which augment the source sentence with entailed relations. 1031 3 Related Work Since we focus on extracting the relations represented by commas, there are two main strands of research with similar goals: 1) systems that directly analyze commas, whether labeling them with syntactic information or correcting inappropriate use in text; and 2) systems that extract relations from text, typically by trying to identify paraphrases. The significance of interpreting the role of commas in sentences has already been identified by (van Delden and Gomez, 2002; Bayraktar et al., 1998) and others. A review of the first line of research is given in (Say and Akman, 1997). In (Bayraktar et al., 1998) the WSJ PennTreebank corpus (Marcus et al., 1993) is analyzed and a very detailed list of syntactic patterns that correspond to different roles of commas is created. However, they do not study the extraction of entailed relations as a function of the comma’s interpretation. Furthermore, the syntactic patterns they identify are unlexicalized and would not support the level of semantic relations that we show in this paper. Finally, theirs is a manual process completely dependent on syntactic patterns. While our comma resolution system uses syntactic parse information as its main source of features, the approach we have developed focuses on the entailed relations, and does not limit implementations to using only syntactic information. The most directly comparable prior work is that of (van Delden and Gomez, 2002), who use finite state automata and a greedy algorithm to learn comma syntactic roles. However, their approach differs from ours in a number of critical ways. First, their comma annotation scheme does not identify arguments of predicates, and therefore cannot be used to extract complete relations. Second, for each comma type they identify, a new Finite State Automaton must be hand-encoded; the learning component of their work simply constrains which FSAs that accept a given, comma containing, text span may co-occur. Third, their corpus is preprocessed by hand to identify specialized phrase types needed by their FSAs; once our system has been trained, it can be applied directly to raw text. Fourth, they exclude from their analysis and evaluation any comma they deem to have been incorrectly used in the source text. We include all commas that are present in the text in our annotation and evaluation. There is a large body of NLP literature on punctuation. 
Most of it, however, is concerned with aiding syntactic analysis of sentences and with developing comma checkers, much based on (Nunberg, 1990). Pattern-based relation extraction methods (e.g., (Davidov and Rappoport, 2008; Davidov et al., 2007; Banko et al., 2007; Pasca et al., 2006; Sekine, 2006)) could in theory be used to extract relations represented by commas. However, the types of patterns used in web-scale lexical approaches currently constrain discovered patterns to relatively short spans of text, so will most likely fail on structures whose arguments cover large spans (for example, appositional clauses containing relative clauses). Relation extraction approaches such as (Roth and Yih, 2004; Roth and Yih, 2007; Hirano et al., 2007; Culotta and Sorenson, 2004; Zelenko et al., 2003) focus on relations between Named Entities; such approaches miss the more general apposition and list relations we recognize in this work, as the arguments in these relations are not confined to Named Entities. Paraphrase Acquisition work such as that by (Lin and Pantel, 2001; Pantel and Pennacchiotti, 2006; Szpektor et al., 2004) is not constrained to named entities, and by using dependency trees, avoids the locality problems of lexical methods. However, these approaches have so far achieved limited accuracy, and are therefore hard to use to augment existing NLP systems. 4 Corpus Annotation For our corpus, we selected 1,000 sentences containing at least one comma from the Penn Treebank (Marcus et al., 1993) WSJ section 00, and manually annotated them with comma information3. This annotated corpus served as both training and test datasets (using cross-validation). By studying a number of sentences from WSJ (not among the 1,000 selected), we identified four significant types of relations expressed through commas: SUBSTITUTE, ATTRIBUTE, LOCATION, and LIST. Each of these types can in principle be expressed using more than a single comma. We define the notion 3The guidelines and annotations are available at http:// L2R.cs.uiuc.edu/˜cogcomp/data.php. 1032 of a comma structure as a set of one or more commas that all relate to the same relation in the sentence. SUBSTITUTE indicates an IS-A relation. An example is ‘John Smith, a Renaissance artist, was famous’. By removing the relation expressed by the commas, we can derive three sentences: ‘John Smith is a Renaissance artist’, ‘John Smith was famous’, and ‘a Renaissance artist was famous’. Note that in theory, the third relation will not be valid: one example is ‘The brothers, all honest men, testified at the trial’, which does not entail ‘all honest men testified at the trial’. However, we encountered no examples of this kind in the corpus, and leave this refinement to future work. ATTRIBUTE indicates a relation where one argument describes an attribute of the other. For example, from ‘John, who loved chocolate, ate with gusto’, we can derive ‘John loved chocolate’ and ‘John ate with gusto’. LOCATION indicates a LOCATED-IN relation. For example, from ‘Chicago, Illinois saw some heavy snow today’ we can derive ‘Chicago is located in Illinois’ and ‘Chicago saw some heavy snow today’. LIST indicates that some predicate or property is applied to multiple entities. In our annotation, the list does not generate explicit relations; instead, the boundaries of the units comprising the list are marked so that they can be treated as a single unit, and are considered to be related by the single relation ‘GROUP’. 
For example, the derivation of ‘John, James and Kelly all left last week’ is written as ‘[John, James, and Kelly] [all left last week]’. Any commas not fitting one of the descriptions above are designated as OTHER. This does not indicate that the comma signifies no relations, only that it does not signify a relation of interest in this work (future work will address relations currently subsumed by this category). Analysis of 120 OTHER commas show that approximately half signify clause boundaries, which may occur when sentence constituents are reordered for emphasis, but may also encode implicit temporal, conditional, and other relation types (for example, ‘Opening the drawer, he found the gun.’). The remainder comprises mainly coordination structures (for example, ‘Although he won, he was sad’) and discourse markers indicating inter-sentence relations (such as ‘However, he soon cheered up.’). While we plan to develop an annoRel. Type Avg. Agreement # of Commas # of Rel.s SUBSTITUTE 0.808 243 729 ATTRIBUTE 0.687 193 386 LOCATION 0.929 71 140 LIST 0.803 230 230 OTHER 0.949 909 0 Combined 0.869 1646 1485 Table 1: Average inter-annotator agreement for identifying relations. tation scheme for such relations, this is beyond the scope of the present work. Four annotators annotated the same 10% of the WSJ sentences in order to evaluate inter-annotator agreement. The remaining sentences were divided among the four annotators. The resulting corpus was checked by two judges and the annotation corrected where appropriate; if the two judges disagreed, a third judge was consulted and consensus reached. Our annotators were asked to identify comma structures, and for each structure to write its relation type, its arguments, and all possible simplified version(s) of the original sentence in which the relation implied by the comma has been removed. Arguments must be contiguous units of the sentence and will be referred to as chunks hereafter. Agreement statistics and the number of commas and relations of each type are shown in Table 4. The Accuracy closely approximates Kappa score in this case, since the baseline probability of chance agreement is close to zero. 5 A Sentence Tranformation Rule Learner (ASTRL) In this section, we describe a new machine learning system that learns Sentence Transformation Rules (STRs) for comma resolution. We first define the hypothesis space (i.e., STRs) and two operations – substitution and introduction. We then define the feature space, motivating the use of Syntactic Parse annotation to learn STRs. Finally, we describe the ASTRL algorithm. 5.1 Sentence Transformation Rules A Sentence Transformation Rule (STR) takes a parse tree as input and generates new sentences. We formalize an STR as the pair l →r, where l is a tree fragment that can consist of non-terminals, POS tags and lexical items. r is a set {ri}, each element of which is a template that consists of the non1033 terminals of l and, possibly, some new tokens. This template is used to generate a new sentence, called a relation. The process of applying an STR l →r to a parse tree T of a sentence s begins with finding a match for l in T. A match is said to be found if l is a subtree of T. If matched, the non-terminals of each ri are instantiated with the terminals that they cover in T. Instantiation is followed by generation of the output relations in one of two ways: introduction or substitution, which is specified by the corresponding ri. 
If an ri is marked as an introductory one, then the relation is the terminal sequence obtained by replacing the non-terminals in ri with their instantiations. For substitution, firstly, the non-terminals of the ri are replaced by their instantiations. The instantiated ri replaces all the terminals in s that are covered by the l-match. The notions of introduction and substitution were motivated by ideas introduced in (BarHaim et al., 2007). Figure 1 shows an example of an STR and Figure 2 shows the application of this STR to a sentence. In the first relation, NP1 and NP2 are instantiated with the corresponding terminals in the parse tree. In the second and third relations, the terminals of NP1 and NP2 replace the terminals covered by NPp. LHS: NPp NP1 , NP2 , RHS: 1. NP1 be NP2 (introduction) 2. NP1 (substitution) 3. NP2 (substitution) Figure 1: Example of a Sentence Transformation Rule. If the LHS matches a part of a given parse tree, then the RHS will generate three relations. 5.2 The Feature Space In Section 2, we discussed the example where there could be an ambiguity between a list and an apposition structure in the fragment two former Soviet republics, Russia and China. In addition, simple surface examination of the sentence could also identify the noun phrases ‘Shanghai Co-operation Organization (SCO)’, ‘the fledgling association that binds S NPp NP1 John Smith , NP2 a renaissance artist , V P was famous RELATIONS: 1 [John Smith]/NP1 be [a renaissance artist]/NP2 2 [John Smith] /NP1 [was famous] 3 [a renaissance artist]/NP2 [was famous] Figure 2: Example of application of the STR in Figure 1. In the first relation, an introduction, we use the verb ‘be’, without dealing with its inflections. NP1 and NP2 are both substitutions, each replacing NPp to generate the last two relations. two former Soviet Republics’, ‘Russia’ and ‘China’ as the four members of a list. To resolve such ambiguities, we need a nested representation of the sentence. This motivates the use of syntactic parse trees as a logical choice of feature space. (Note, however, that semantic and pragmatic ambiguities might still remain.) 5.3 Algorithm Overview In our corpus annotation, the relations and their argument boundaries (chunks) are explicitly marked. For each training example, our learning algorithm first finds the smallest valid STR – the STR with the smallest LHS in terms of depth. Then it refines the LHS by specializing it using statistics taken from the entire data set. 5.4 Generating the Smallest Valid STR To transform an example into the smallest valid STR, we utilize the augmented parse tree of the sentence. For each chunk in the sentence, we find the lowest node in the parse tree that covers the chunk and does not cover other chunks (even partially). It may, however, cover words that do not belong to any chunk. We refer to such a node as a chunk root. We then find the lowest node that covers all the chunk roots, referring to it as the pattern root. The initial LHS consists of the subtree of the parse tree rooted at the pattern root and whose leaf nodes are all either chunk roots or nodes that do not belong to any chunk. All the nodes are labeled with the corresponding labels in the aug1034 mented parse tree. For example, if we consider the parse tree and relations shown in Figure 2, then doing the above procedure gives us the initial LHS as S (NPp(NP1, NP2, ) V P). The three relations gives us the RHS with three elements ‘NP1 be NP2’, ‘NP1 V P’ and ‘NP1 V P’, all three being introduction. 
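To make the mechanics of introduction and substitution concrete, the following sketch applies the RHS templates of the STR in Figures 1 and 2, assuming the LHS match has already been found in the parse tree. The match is represented simply as the token span covered by the LHS and a binding from each RHS non-terminal to the tokens it covers; that representation, and the function names, are illustrative assumptions rather than details from the paper.

```python
# Illustrative application of an STR's RHS templates once the LHS pattern
# has been matched against the parse tree (matching itself is omitted).

def apply_rhs(sentence_tokens, lhs_span, bindings, template, kind):
    """bindings: non-terminal label -> tokens it covers in the match.
    template: list of items, each either a non-terminal label or a literal.
    kind: "introduction" or "substitution"."""
    filled = []
    for item in template:
        filled.extend(bindings.get(item, [item]))  # literals pass through
    if kind == "introduction":
        return " ".join(filled)
    # substitution: the instantiated template replaces the tokens covered
    # by the LHS match inside the original sentence
    start, end = lhs_span
    return " ".join(sentence_tokens[:start] + filled + sentence_tokens[end:])

tokens = "John Smith , a renaissance artist , was famous".split()
bindings = {"NP1": ["John", "Smith"], "NP2": ["a", "renaissance", "artist"]}
lhs_span = (0, 7)  # the tokens covered by NPp in Figure 2

print(apply_rhs(tokens, lhs_span, bindings, ["NP1", "be", "NP2"], "introduction"))
print(apply_rhs(tokens, lhs_span, bindings, ["NP1"], "substitution"))
print(apply_rhs(tokens, lhs_span, bindings, ["NP2"], "substitution"))
# -> "John Smith be a renaissance artist", "John Smith was famous",
#    "a renaissance artist was famous", matching the relations in Figure 2.
```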
This initial LHS need not be the smallest one that explains the example. So, we proceed by finding the lowest node in the initial LHS such that the subtree of the LHS at that node can form a new STR that covers the example using both introduction and substitution. In our example, the initial LHS has a subtree, NPp(NP1, NP2, ) that can cover all the relations with the RHS consisting of 'NP1 be NP2', NP1 and NP2. The first RHS is an introduction, while the second and the third are both substitutions. Since no subtree of this LHS can generate all three relations even with substitution, this is the required STR. The final step ensures that we have the smallest valid STR at this stage.
5.5 Statistical Refinement
The STR generated using the procedure outlined above explains the relations generated by a single example. In addition to covering the relations generated by the example, we wish to ensure that it does not cover erroneous relations by matching any of the other comma types in the annotated data.
Algorithm 1 ASTRL: A Sentence Transformation Rule Learning.
1: for all t: Comma type do
2:   Initialize STRList[t] = ∅
3:   p = Set of annotated examples of type t
4:   n = Annotated examples of all other types
5:   for all x ∈ p do
6:     r = Smallest Valid STR that covers x
7:     Get fringe of r.LHS using the parse tree
8:     S = Score(r, p, n)
9:     Sprev = −∞
10:    while S ≠ Sprev do
11:      if adding some fringe node to r.LHS causes a significant change in score then
12:        Set r = New rule that includes that fringe node
13:        Sprev = S
14:        S = Score(r, p, n)
15:        Recompute new fringe nodes
16:      end if
17:    end while
18:    Add r to STRList[t]
19:    Remove all examples from p that are covered by r
20:  end for
21: end for
For this purpose, we specialize the LHS so that it covers as few examples from the other comma types as possible, while covering as many examples from the current comma type as possible. Given the most general STR, we generate a set of additional, more detailed, candidate rules. Each of these is obtained from the original rule by adding a single node to the tree pattern in the rule's LHS, and updating the rule's RHS accordingly. We then score each of the candidates (including the original rule). If there is a clear winner, we continue with it using the same procedure (i.e., specialize it). If there isn't a clear winner, we stop and use the current winner. After finishing with a rule (line 18), we remove from the set of positive examples of its comma type all examples that are covered by it (line 19). To generate the additional candidate rules that we add, we define the fringe of a rule as the siblings and children of the nodes in its LHS in the original parse tree. Each fringe node defines an additional candidate rule, whose LHS is obtained by adding the fringe node to the rule's LHS tree. We refer to the set of these candidate rules, plus the original one, as the rule's fringe rules. We define the score of an STR as
Score(Rule, p, n) = R_p / |p| − R_n / |n|
where p and n are the set of positive and negative examples for this comma type, and R_p and R_n are the number of positive and negative examples that are covered by the STR. For each example, all examples annotated with the same comma type are positive while all examples of all other comma types are negative. The score is used to select the winner among the fringe rules. The complete algorithm we have used is listed in Algorithm 1. For convenience, the algorithm's main loop is given in terms of comma types, although this is not strictly necessary.
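A rough sketch of the scoring function and of the greedy specialization step of Algorithm 1 is given below. Tree-pattern matching (covers) and fringe-rule generation (fringe_rules) are abstracted away as callables, and the notion of a "significant change" in score is reduced to a fixed margin; these are simplifying assumptions, not details taken from the paper.

```python
# Sketch of ASTRL's rule scoring and greedy specialization; covers(rule, x)
# and fringe_rules(rule) are assumed to be supplied by the caller.

def score(rule, pos, neg, covers):
    """Score(Rule, p, n) = R_p/|p| - R_n/|n| (pos and neg assumed non-empty)."""
    r_p = sum(1 for x in pos if covers(rule, x))
    r_n = sum(1 for x in neg if covers(rule, x))
    return r_p / len(pos) - r_n / len(neg)

def specialize(rule, pos, neg, covers, fringe_rules, margin=0.0):
    """Lines 7-17 of Algorithm 1: grow the LHS by fringe nodes while some
    fringe rule improves the score by more than `margin`."""
    current = rule
    current_score = score(current, pos, neg, covers)
    while True:
        candidates = [(score(r, pos, neg, covers), r) for r in fringe_rules(current)]
        if not candidates:
            return current
        best_score, best = max(candidates, key=lambda c: c[0])
        if best_score <= current_score + margin:  # no clear winner: stop
            return current
        current, current_score = best, best_score
```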
The stopping criterion in line 11 checks whether any fringe rule has a significantly better score than the rule it was derived from, and exits the specialization loop if there is none. Since we start with the smallest STR, we only need to add nodes to it to refine it and never have to delete any nodes from the tree. Also note that the algorithm is essentially a greedy algorithm that performs a single pass over the examples; other, more 1035 complex, search strategies could also be used. 6 Evaluation 6.1 Experimental Setup To evaluate ASTRL, we used the WSJ derived corpus. We experimented with three scenarios; in two of them we trained using the gold standard trees and then tested on gold standard parse trees (GoldGold), and text annotated using a state-of-the-art statistical parser (Charniak and Johnson, 2005) (GoldCharniak), respectively. In the third, we trained and tested on the Charniak Parser (Charniak-Charniak). In gold standard parse trees the syntactic categories are annotated with functional tags. Since current statistical parsers do not annotate sentences with such tags, we augment the syntactic trees with the output of a Named Entity tagger. For the Named Entity information, we used a publicly available NE Recognizer capable of recognizing a range of categories including Person, Location and Organization. On the CoNLL-03 shared task, its f-score is about 90%4. We evaluate our system from different points of view, as described below. For all the evaluation methods, we performed five-fold cross validation and report the average precision, recall and f-scores. 6.2 Relation Extraction Performance Firstly, we present the evaluation of the performance of ASTRL from the point of view of relation extraction. After learning the STRs for the different comma types using the gold standard parses, we generated relations by applying the STRs on the test set once. Table 2 shows the precision, recall and f-score of the relations, without accounting for the comma type of the STR that was used to generate them. This metric, called the Relation metric in further discussion, is the most relevant one from the point of view of the TE task. Since a list does not generate any relations in our annotation scheme, we use the commas to identify the list elements. Treating each list in a sentence as a single relation, we score the list with the fraction of its correctly identified elements. In addition to the Gold-Gold and Gold-Charniak 4A web demo of the NER is at http://L2R.cs.uiuc. edu/˜cogcomp/demos.php. settings described above, for this metric, we also present the results of the Charniak-Charniak setting, where both the train and test sets were annotated with the output of the Charniak parser. The improvement in recall in this setting over the Gold-Charniak case indicates that the parser makes systematic errors with respect to the phenomena considered. Setting P R F Gold-Gold 86.1 75.4 80.2 Gold-Charniak 77.3 60.1 68.1 Charniak-Charniak 77.2 64.8 70.4 Table 2: ASTRL performance (precision, recall and fscore) for relation extraction. The comma types were used only to learn the rules. During evaluation, only the relations were scored. 6.3 Comma Resolution Performance We present a detailed analysis of the performance of the algorithm for comma resolution. Since this paper is the first one that deals with the task, we could not compare our results to previous work. Also, there is no clear baseline to use. 
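One possible realization of the Relation metric used for Table 2 above, including the fractional credit given to lists, is sketched below. The paper does not spell out the exact bookkeeping, so the matching scheme here (each predicted relation is credited by its best match against the gold relations) is only an assumed approximation.

```python
# Approximate sketch of precision/recall/F-score over extracted relations,
# with partial credit for lists; credit(pred, gold) should return 1.0 for an
# exact relation match and, for a list, the fraction of correctly identified
# elements (it is left abstract here).

def relation_prf(predicted, gold, credit):
    credits = [max((credit(p, g) for g in gold), default=0.0) for p in predicted]
    precision = sum(credits) / len(predicted) if predicted else 0.0
    recall = sum(credits) / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```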
We tried a variant of the most frequent baseline common in other disambiguation tasks, in which we labeled all commas as OTHER (the most frequent type) except when there are list indicators like and, or and but in adjacent chunks (which are obtained using a shallow parser), in which case the commas are labeled LIST. This gives an average precision 0.85 and an average recall of 0.36 for identifying the comma type. However, this baseline does not help in identifying relations. We use the following approach to evaluate the comma type resolution and relation extraction performance – a relation extracted by the system is considered correct only if both the relation and the type of the comma structure that generated it are correctly identified. We call this metric the Relation-Type metric. Another way of measuring the performance of comma resolution is to measure the correctness of the relations per comma type. In both cases, lists are scored as in the Relation metric. The performance of our system with respect to these two metrics are presented in Table 3. In this table, we also compare the performance of the STRs learned by ASTRL with the smallest valid STRs without further specialization (i.e., using just the procedure outlined in Section 5.4). 1036 Type Gold-Gold Setting Gold-Charniak Setting Relation-Type metric Smallest Valid STRs ASTRL Smallest Valid STRs ASTRL P R F P R F P R F P R F Total 66.2 76.1 70.7 81.8 73.9 77.6 61.0 58.4 59.5 72.2 59.5 65.1 Relations Metric, Per Comma Type ATTRIBUTE 40.4 68.2 50.4 70.6 59.4 64.1 35.5 39.7 36.2 56.6 37.7 44.9 SUBSTITUTE 80.0 84.3 81.9 87.9 84.8 86.1 75.8 72.9 74.3 78.0 76.1 76.9 LIST 70.9 58.1 63.5 76.2 57.8 65.5 58.7 53.4 55.6 65.2 53.3 58.5 LOCATION 93.8 86.4 89.1 93.8 86.4 89.1 70.3 37.2 47.2 70.3 37.2 47.2 Table 3: Performance of STRs learned by ASTRL and the smallest valid STRs in identifying comma types and generating relations. There is an important difference between the Relation metric (Table 2) and the Relation-type metric (top part of Table 3) that depends on the semantic interpretation of the comma types. For example, consider the sentence ‘John Smith, 59, went home.’ If the system labels the commas in this as both ATTRIBUTE and SUBSTITUTE, then, both will generate the relation ‘John Smith is 59.’ According to the Relation metric, there is no difference between them. However, there is a semantic difference between the two sentences – the ATTRIBUTE relation says that being 59 is an attribute of John Smith while the SUBSTITUTE relation says that John Smith is the number 59. This difference is accounted for by the Relation-Type metric. From this standpoint, we can see that the specialization step performed in the full ASTRL algorithm greatly helps in disambiguating between the ATTRIBUTE and SUBSTITUTE types and consequently, the Relation-Type metric shows an error reduction of 23.5% and 13.8% in the Gold-Gold and GoldCharniak settings respectively. In the Gold-Gold scenario the performance of ASTRL is much better than in the Gold-Charniak scenario. This reflects the non-perfect performance of the parser in annotating these sentences (parser F-score of 90%). Another key evaluation question is the performance of the method in identification of the OTHER category. A comma is judged to be as OTHER if no STR in the system applies to it. The performance of ASTRL in this aspect is presented in Table 4. The categorization of this category is important if we wish to further classify the OTHER commas into finer categories. 
Setting P R F Gold-Gold 78.9 92.8 85.2 Gold-Charniak 72.5 92.2 81.2 Table 4: ASTRL performance (precision, recall and fscore) for OTHER identification. 7 Conclusions We defined the task of comma resolution, and developed a novel machine learning algorithm that learns Sentence Transformation Rules to perform this task. We experimented with both gold standard and parser annotated sentences, and established a performance level that seems good for a task of this complexity, and which will provide a useful measure of future systems developed for this task. When given automatically parsed sentences, performance degrades but is still much higher than random, in both scenarios. We designed a comma annotation scheme, where each comma unit is assigned one of four types and an inference rule mapping the patterns of the unit with the entailed relations. We created annotated datasets which will be made available over the web to facilitate further research. Future work will investigate four main directions: (i) studying the effects of inclusion of our approach on the performance of Textual Entailment systems; (ii) using features other than those derivable from syntactic parse and named entity annotation of the input sentence; (iii) recognizing a wider range of implicit relations, represented by commas and in other ways; (iv) adaptation to other domains. Acknowledgement The UIUC authors were supported by NSF grant ITR IIS-0428472, DARPA funding under the Bootstrap Learning Program and a grant from Boeing. 1037 References M. Banko, M. Cafarella, M. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the web. In Proc. of IJCAI, pages 2670–2676. R. Bar-Haim, I. Dagan, I. Greental, and E. Shnarch. 2007. Semantic inference at the lexical-syntactic level. In Proc. of AAAI, pages 871–876. M. Bayraktar, B. Say, and V. Akman. 1998. An analysis of english punctuation: The special case of comma. International Journal of Corpus Linguistics, 3(1):33– 57. E. Charniak and M. Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proc. of the Annual Meeting of the ACL, pages 173–180. A. Culotta and J. Sorenson. 2004. Dependency tree kernels for relation extraction. In Proc. of the Annual Meeting of the ACL, pages 423–429. I. Dagan, O. Glickman, and B. Magnini, editors. 2006. The PASCAL Recognising Textual Entailment Challenge., volume 3944. Springer-Verlag, Berlin. D. Davidov and A. Rappoport. 2008. Unsupervised discovery of generic relationships using pattern clusters and its evaluation by automatically generated sat analogy questions. In Proc. of the Annual Meeting of the ACL. D. Davidov, A. Rappoport, and M. Koppel. 2007. Fully unsupervised discovery of concept-specific relationships by web mining. In Proc. of the Annual Meeting of the ACL, pages 232–239. R. de Salvo Braz, R. Girju, V. Punyakanok, D. Roth, and M. Sammons. 2005. An inference model for semantic entailment in natural language. In Proc. of AAAI, pages 1678–1679. T. Hirano, Y. Matsuo, and G. Kikui. 2007. Detecting semantic relations between named entities in text using contextual features. In Proc. of the Annual Meeting of the ACL, pages 157–160. D. Lin and P. Pantel. 2001. DIRT: discovery of inference rules from text. In Proc. of ACM SIGKDD Conference on Knowledge Discovery and Data Mining 2001, pages 323–328. M. P. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. G. Nunberg. 1990. 
CSLI Lecture Notes 18: The Linguistics of Punctuation. CSLI Publications, Stanford, CA. P. Pantel and M. Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proc. of the Annual Meeting of the ACL, pages 113–120. M. Pasca, D. Lin, J. Bigham, A. Lifchits, and A. Jain. 2006. Names and similarities on the web: Fact extraction in the fast lane. In Proc. of the Annual Meeting of the ACL, pages 809–816. D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Hwee Tou Ng and Ellen Riloff, editors, Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL), pages 1–8. Association for Computational Linguistics. D. Roth and W. Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. B. Say and V. Akman. 1997. Current approaches to punctuation in computational linguistics. Computers and the Humanities, 30(6):457–469. S. Sekine. 2006. On-demand information extraction. In Proc. of the Annual Meeting of the ACL, pages 731– 738. I. Szpektor, H. Tanev, I. Dagan, and B. Coppola. 2004. Scaling web-based of entailment relations. In Proc. of EMNLP, pages 49–56. S. van Delden and F. Gomez. 2002. Combining finite state automata and a greedy learning algorithm to determine the syntactic roles of commas. In Proc. of ICTAI, pages 293–300. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083–1106. 1038
2008
117
Proceedings of ACL-08: HLT, pages 1039–1047, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Finding Contradictions in Text Marie-Catherine de Marneffe, Linguistics Department Stanford University Stanford, CA 94305 [email protected] Anna N. Rafferty and Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305 {rafferty,manning}@stanford.edu Abstract Detecting conflicting statements is a foundational text understanding task with applications in information analysis. We propose an appropriate definition of contradiction for NLP tasks and develop available corpora, from which we construct a typology of contradictions. We demonstrate that a system for contradiction needs to make more fine-grained distinctions than the common systems for entailment. In particular, we argue for the centrality of event coreference and therefore incorporate such a component based on topicality. We present the first detailed breakdown of performance on this task. Detecting some types of contradiction requires deeper inferential paths than our system is capable of, but we achieve good performance on types arising from negation and antonymy. 1 Introduction In this paper, we seek to understand the ways contradictions occur across texts and describe a system for automatically detecting such constructions. As a foundational task in text understanding (Condoravdi et al., 2003), contradiction detection has many possible applications. Consider applying a contradiction detection system to political candidate debates: by drawing attention to topics in which candidates have conflicting positions, the system could enable voters to make more informed choices between candidates and sift through the amount of available information. Contradiction detection could also be applied to intelligence reports, demonstrating which information may need further verification. In bioinformatics where protein-protein interaction is widely studied, automatically finding conflicting facts about such interactions would be beneficial. Here, we shed light on the complex picture of contradiction in text. We provide a definition of contradiction suitable for NLP tasks, as well as a collection of contradiction corpora. Analyzing these data, we find contradiction is a rare phenomenon that may be created in different ways; we propose a typology of contradiction classes and tabulate their frequencies. Contradictions arise from relatively obvious features such as antonymy, negation, or numeric mismatches. They also arise from complex differences in the structure of assertions, discrepancies based on world-knowledge, and lexical contrasts. (1) Police specializing in explosives defused the rockets. Some 100 people were working inside the plant. (2) 100 people were injured. This pair is contradictory: defused rockets cannot go off, and thus cannot injure anyone. Detecting contradictions appears to be a harder task than detecting entailments. Here, it is relatively easy to identify the lack of entailment: the first sentence involves no injuries, so the second is unlikely to be entailed. Most entailment systems function as weak proof theory (Hickl et al., 2006; MacCartney et al., 2006; Zanzotto et al., 2007), but contradictions require deeper inferences and model building. 
While mismatching information between sentences is often a good cue of non-entailment (Vanderwende et al., 2006), it is not sufficient for contradiction detection which requires more precise comprehension of the consequences of sentences. Assessing event coreference is also essential: for texts to contradict, they must 1039 refer to the same event. The importance of event coreference was recognized in the MUC information extraction tasks in which it was key to identify scenarios related to the same event (Humphreys et al., 1997). Recent work in text understanding has not focused on this issue, but it must be tackled in a successful contradiction system. Our system includes event coreference, and we present the first detailed examination of contradiction detection performance, on the basis of our typology. 2 Related work Little work has been done on contradiction detection. The PASCAL Recognizing Textual Entailment (RTE) Challenges (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007) focused on textual inference in any domain. Condoravdi et al. (2003) first recognized the importance of handling entailment and contradiction for text understanding, but they rely on a strict logical definition of these phenomena and do not report empirical results. To our knowledge, Harabagiu et al. (2006) provide the first empirical results for contradiction detection, but they focus on specific kinds of contradiction: those featuring negation and those formed by paraphrases. They constructed two corpora for evaluating their system. One was created by overtly negating each entailment in the RTE2 data, producing a balanced dataset (LCC negation). To avoid overtraining, negative markers were also added to each nonentailment, ensuring that they did not create contradictions. The other was produced by paraphrasing the hypothesis sentences from LCC negation, removing the negation (LCC paraphrase): A hunger strike was not attempted →A hunger strike was called off. They achieved very good performance: accuracies of 75.63% on LCC negation and 62.55% on LCC paraphrase. Yet, contradictions are not limited to these constructions; to be practically useful, any system must provide broader coverage. 3 Contradictions 3.1 What is a contradiction? One standard is to adopt a strict logical definition of contradiction: sentences A and B are contradictory if there is no possible world in which A and B are both true. However, for contradiction detection to be useful, a looser definition that more closely matches human intuitions is necessary; contradiction occurs when two sentences are extremely unlikely to be true simultaneously. Pairs such as Sally sold a boat to John and John sold a boat to Sally are tagged as contradictory even though it could be that each sold a boat to the other. This definition captures intuitions of incompatiblity, and perfectly fits applications that seek to highlight discrepancies in descriptions of the same event. Examples of contradiction are given in table 1. For texts to be contradictory, they must involve the same event. Two phenomena must be considered in this determination: implied coreference and embedded texts. Given limited context, whether two entities are coreferent may be probable rather than certain. To match human intuitions, compatible noun phrases between sentences are assumed to be coreferent in the absence of clear countervailing evidence. 
In the following example, it is not necessary that the woman in the first and second sentences is the same, but one would likely assume it is if the two sentences appeared together: (1) Passions surrounding Germany’s final match turned violent when a woman stabbed her partner because she didn’t want to watch the game. (2) A woman passionately wanted to watch the game. We also mark as contradictions pairs reporting contradictory statements. The following sentences refer to the same event (de Menezes in a subway station), and display incompatible views of this event: (1) Eyewitnesses said de Menezes had jumped over the turnstile at Stockwell subway station. (2) The documents leaked to ITV News suggest that Menezes walked casually into the subway station. This example contains an “embedded contradiction.” Contrary to Zaenen et al. (2005), we argue that recognizing embedded contradictions is important for the application of a contradiction detection system: if John thinks that he is incompetent, and his boss believes that John is not being given a chance, one would like to detect that the targeted information in the two sentences is contradictory, even though the two sentences can be true simultaneously. 3.2 Typology of contradictions Contradictions may arise from a number of different constructions, some overt and others that are com1040 ID Type Text Hypothesis 1 Antonym Capital punishment is a catalyst for more crime. Capital punishment is a deterrent to crime. 2 Negation A closely divided Supreme Court said that juries and not judges must impose a death sentence. The Supreme Court decided that only judges can impose the death sentence. 3 Numeric The tragedy of the explosion in Qana that killed more than 50 civilians has presented Israel with a dilemma. An investigation into the strike in Qana found 28 confirmed dead thus far. 4 Factive Prime Minister John Howard says he will not be swayed by a warning that Australia faces more terrorism attacks unless it withdraws its troops from Iraq. Australia withdraws from Iraq. 5 Factive The bombers had not managed to enter the embassy. The bombers entered the embassy. 6 Structure Jacques Santer succeeded Jacques Delors as president of the European Commission in 1995. Delors succeeded Santer in the presidency of the European Commission. 7 Structure The Channel Tunnel stretches from England to France. It is the second-longest rail tunnel in the world, the longest being a tunnel in Japan. The Channel Tunnel connects France and Japan. 8 Lexical The Canadian parliament’s Ethics Commission said former immigration minister, Judy Sgro, did nothing wrong and her staff had put her into a conflict of interest. The Canadian parliament’s Ethics Commission accuses Judy Sgro. 9 Lexical In the election, Bush called for U.S. troops to be withdrawn from the peacekeeping mission in the Balkans. He cites such missions as an example of how America must “stay the course.” 10 WK Microsoft Israel, one of the first Microsoft branches outside the USA, was founded in 1989. Microsoft was established in 1989. Table 1: Examples of contradiction types. plex even for humans to detect. Analyzing contradiction corpora (see section 3.3), we find two primary categories of contradiction: (1) those occurring via antonymy, negation, and date/number mismatch, which are relatively simple to detect, and (2) contradictions arising from the use of factive or modal words, structural and subtle lexical contrasts, as well as world knowledge (WK). 
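The split between the two categories has direct consequences for system design: as the following paragraphs explain, category (1) rests largely on closed word lists available in existing lexical resources, while category (2) calls for genuine sentence-level modeling. Purely as an illustration of how shallow a category (1) cue can be (this is not the authors' feature code, and it assumes NLTK with its WordNet data installed), antonym pairs can be harvested directly from WordNet:

```python
# Illustrative only: harvesting the closed-class antonymy cue behind many
# category (1) contradictions. Assumes NLTK and its WordNet corpus are installed.
from nltk.corpus import wordnet as wn

def wordnet_antonyms(word):
    """Direct WordNet antonyms of `word`, across all of its senses."""
    antonyms = set()
    for synset in wn.synsets(word):
        for lemma in synset.lemmas():
            antonyms.update(ant.name() for ant in lemma.antonyms())
    return antonyms

def antonym_cue(text_tokens, hypothesis_tokens):
    """True if some text word has a WordNet antonym in the hypothesis.
    A real detector also needs alignment and polarity checks (see section 5)."""
    hyp = set(hypothesis_tokens)
    return any(hyp & wordnet_antonyms(tok) for tok in text_tokens)

# With standard WordNet data this pair fires on guilty/innocent.
print(antonym_cue("the court found him guilty".split(),
                  "the court found him innocent".split()))
```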
We consider contradictions in category (1) ‘easy’ because they can often be automatically detected without full sentence comprehension. For example, if words in the two passages are antonyms and the sentences are reasonably similar, especially in polarity, a contradiction occurs. Additionally, little external information is needed to gain broad coverage of antonymy, negation, and numeric mismatch contradictions; each involves only a closed set of words or data that can be obtained using existing resources and techniques (e.g., WordNet (Fellbaum, 1998), VerbOcean (Chklovski and Pantel, 2004)). However, contradictions in category (2) are more difficult to detect automatically because they require precise models of sentence meaning. For instance, to find the contradiction in example 8 (table 1), it is necessary to learn that X said Y did nothing wrong and X accuses Y are incompatible. Presently, there exist methods for learning oppositional terms (Marcu and Echihabi, 2002) and paraphrase learning has been thoroughly studied, but successfully extending these techniques to learn incompatible phrases poses difficulties because of the data distribution. Example 9 provides an even more difficult instance of contradiction created by a lexical discrepancy. Structural issues also create contradictions (examples 6 and 7). Lexical complexities and variations in the function of arguments across verbs can make recognizing these contradictions complicated. Even when similar verbs are used and argument differences exist, structural differences may indicate non-entailment or contradiction, and distinguishing the two automatically is problematic. Consider contradiction 7 in table 1 and the following non-contradiction: (1) The CFAP purchases food stamps from the government and distributes them to eligible recipients. (2) A government purchases food. 1041 Data # contradictions # total pairs RTE1 dev1 48 287 RTE1 dev2 55 280 RTE1 test 149 800 RTE2 dev 111 800 RTE3 dev 80 800 RTE3 test 72 800 Table 2: Number of contradictions in the RTE datasets. In both cases, the first sentence discusses one entity (CFAP, The Channel Tunnel) with a relationship (purchase, stretch) to other entities. The second sentence posits a similar relationship that includes one of the entities involved in the original relationship as well as an entity that was not involved. However, different outcomes result because a tunnel connects only two unique locations whereas more than one entity may purchase food. These frequent interactions between world-knowledge and structure make it hard to ensure that any particular instance of structural mismatch is a contradiction. 3.3 Contradiction corpora Following the guidelines above, we annotated the RTE datasets for contradiction. These datasets contain pairs consisting of a short text and a onesentence hypothesis. Table 2 gives the number of contradictions in each dataset. The RTE datasets are balanced between entailments and non-entailments, and even in these datasets targeting inference, there are few contradictions. Using our guidelines, RTE3 test was annotated by NIST as part of the RTE3 Pilot task in which systems made a 3-way decision as to whether pairs of sentences were entailed, contradictory, or neither (Voorhees, 2008).1 Our annotations and those of NIST were performed on the original RTE datasets, contrary to Harabagiu et al. (2006). Because their corpora are constructed using negation and paraphrase, they are unlikely to cover all types of contradictions in section 3.2. 
We might hypothesize that rewriting explicit negations commonly occurs via the substitution of antonyms. Imagine, e.g.: H: Bill has finished his math. 1Information about this task as well as data can be found at http://nlp.stanford.edu/RTE3-pilot/. Type RTE sets ‘Real’ corpus 1 Antonym 15.0 9.2 Negation 8.8 17.6 Numeric 8.8 29.0 2 Factive/Modal 5.0 6.9 Structure 16.3 3.1 Lexical 18.8 21.4 WK 27.5 13.0 Table 3: Percentages of contradiction types in the RTE3 dev dataset and the real contradiction corpus. Neg-H: Bill hasn’t finished his math. Para-Neg-H: Bill is still working on his math. The rewriting in both the negated and the paraphrased corpora is likely to leave one in the space of ‘easy’ contradictions and addresses fewer than 30% of contradictions (table 3). We contacted the LCC authors to obtain their datasets, but they were unable to make them available to us. Thus, we simulated the LCC negation corpus, adding negative markers to the RTE2 test data (Neg test), and to a development set (Neg dev) constructed by randomly sampling 50 pairs of entailments and 50 pairs of non-entailments from the RTE2 development set. Since the RTE datasets were constructed for textual inference, these corpora do not reflect ‘real-life’ contradictions. We therefore collected contradictions ‘in the wild.’ The resulting corpus contains 131 contradictory pairs: 19 from newswire, mainly looking at related articles in Google News, 51 from Wikipedia, 10 from the Lexis Nexis database, and 51 from the data prepared by LDC for the distillation task of the DARPA GALE program. Despite the randomness of the collection, we argue that this corpus best reflects naturally occurring contradictions.2 Table 3 gives the distribution of contradiction types for RTE3 dev and the real contradiction corpus. Globally, we see that contradictions in category (2) occur frequently and dominate the RTE development set. In the real contradiction corpus, there is a much higher rate of the negation, numeric and lexical contradictions. This supports the intuition that in the real world, contradictions primarily occur for two reasons: information is updated as knowledge 2Our corpora—the simulation of the LLC negation corpus, the RTE datasets and the real contradictions—are available at http://nlp.stanford.edu/projects/contradiction. 1042 of an event is acquired over time (e.g., a rising death toll) or various parties have divergent views of an event (e.g., example 9 in table 1). 4 System overview Our system is based on the stage architecture of the Stanford RTE system (MacCartney et al., 2006), but adds a stage for event coreference decision. 4.1 Linguistic analysis The first stage computes linguistic representations containing information about the semantic content of the passages. The text and hypothesis are converted to typed dependency graphs produced by the Stanford parser (Klein and Manning, 2003; de Marneffe et al., 2006). To improve the dependency graph as a pseudo-semantic representation, collocations in WordNet and named entities are collapsed, causing entities and multiword relations to become single nodes. 4.2 Alignment between graphs The second stage provides an alignment between text and hypothesis graphs, consisting of a mapping from each node in the hypothesis to a unique node in the text or to null. The scoring measure uses node similarity (irrespective of polarity) and structural information based on the dependency graphs. 
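To make the alignment stage concrete, the sketch below shows one way such a mapping and its score could be represented. This is a hypothetical illustration rather than the Stanford RTE implementation, and it fixes arbitrary weights where the actual system learns them, as described next.

```python
# Hypothetical sketch of a hypothesis-to-text alignment: every hypothesis node maps
# to a text node or to None, and the score mixes node similarity with a crude
# structural term. The real system learns the combination weights (MIRA, below).
from typing import Dict, Optional, Tuple

def alignment_score(alignment: Dict[str, Optional[str]],
                    node_sim: Dict[Tuple[str, str], float],
                    unaligned_penalty: float = 1.0,
                    structure_weight: float = 0.5) -> float:
    score = 0.0
    for hyp_node, text_node in alignment.items():
        if text_node is None:
            score -= unaligned_penalty
        else:
            score += node_sim.get((hyp_node, text_node), 0.0)
    aligned = sum(1 for t in alignment.values() if t is not None)
    return score + structure_weight * aligned  # stand-in for dependency-structure information

sims = {("rockets", "rockets"): 1.0, ("injured", "defused"): 0.1}
align = {"rockets": "rockets", "injured": "defused", "people": None}
print(round(alignment_score(align, sims), 2))
```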
Similarity measures and structural information are combined via weights learned using the passiveaggressive online learning algorithm MIRA (Crammer and Singer, 2001). Alignment weights were learned using manually annotated RTE development sets (see Chambers et al., 2007). 4.3 Filtering non-coreferent events Contradiction features are extracted based on mismatches between the text and hypothesis. Therefore, we must first remove pairs of sentences which do not describe the same event, and thus cannot be contradictory to one another. In the following example, it is necessary to recognize that Pluto’s moon is not the same as the moon Titan; otherwise conflicting diameters result in labeling the pair a contradiction. T: Pluto’s moon, which is only about 25 miles in diameter, was photographed 13 years ago. H: The moon Titan has a diameter of 5100 kms. This issue does not arise for textual entailment: elements in the hypothesis not supported by the text lead to non-entailment, regardless of whether the same event is described. For contradiction, however, it is critical to filter unrelated sentences to avoid finding false evidence of contradiction when there is contrasting information about different events. Given the structure of RTE data, in which the hypotheses are shorter and simpler than the texts, one straightforward strategy for detecting coreferent events is to check whether the root of the hypothesis graph is aligned in the text graph. However, some RTE hypotheses are testing systems’ abilities to detect relations between entities (e.g., John of IBM ... →John works for IBM). Thus, we do not filter verb roots that are indicative of such relations. As shown in table 4, this strategy improves results on RTE data. For real world data, however, the assumption of directionality made in this strategy is unfounded, and we cannot assume that one sentence will be short and the other more complex. Assuming two sentences of comparable complexity, we hypothesize that modeling topicality could be used to assess whether the sentences describe the same event. There is a continuum of topicality from the start to the end of a sentence (Firbas, 1971). We thus originally defined the topicality of an NP by nw where n is the nth NP in the sentence. Additionally, we accounted for multiple clauses by weighting each clause equally; in example 4 in table 1, Australia receives the same weight as Prime Minister because each begins a clause. However, this weighting was not supported empirically, and we thus use a simpler, unweighted model. The topicality score of a sentence is calculated as a normalized score across all aligned NPs.3 The text and hypothesis are topically related if either sentence score is above a tuned threshold. Modeling topicality provides an additional improvement in precision (table 4). While filtering provides improvements in performance, some examples of non-coreferent events are still not filtered, such as: T: Also Friday, five Iraqi soldiers were killed and nine 3Since dates can often be viewed as scene setting rather than what the sentence is about, we ignore these in the model. However, ignoring or including dates in the model creates no significant differences in performance on RTE data. 1043 Strategy Precision Recall No filter 55.10 32.93 Root 61.36 32.93 Root + topic 61.90 31.71 Table 4: Precision and recall for contradiction detection on RTE3 dev using different filtering strategies. wounded in a bombing, targeting their convoy near Beiji, 150 miles north of Baghdad. 
H: Three Iraqi soldiers also died Saturday when their convoy was attacked by gunmen near Adhaim. It seems that the real world frequency of events needs to be taken into account. In this case, attacks in Iraq are unfortunately frequent enough to assert that it is unlikely that the two sentences present mismatching information (i.e., different location) about the same event. But compare the following example: T: President Kennedy was assassinated in Texas. H: Kennedy’s murder occurred in Washington. The two sentences refer to one unique event, and the location mismatch renders them contradictory. 4.4 Extraction of contradiction features In the final stage, we extract contradiction features on which we apply logistic regression to classify the pair as contradictory or not. The feature weights are hand-set, guided by linguistic intuition. 5 Features for contradiction detection In this section, we define each of the feature sets used to capture salient patterns of contradiction. Polarity features. Polarity difference between the text and hypothesis is often a good indicator of contradiction, provided there is a good alignment (see example 2 in table 1). The polarity features capture the presence (or absence) of linguistic markers of negative polarity contexts. These markers are scoped such that words are considered negated if they have a negation dependency in the graph or are an explicit linguistic marker of negation (e.g., simple negation (not), downward-monotone quantifiers (no, few), or restricting prepositions). If one word is negated and the other is not, we may have a polarity difference. This difference is confirmed by checking that the words are not antonyms and that they lack unaligned prepositions or other context that suggests they do not refer to the same thing. In some cases, negations are propagated onto the governor, which allows one to see that no bullet penetrated and a bullet did not penetrate have the same polarity. Number, date and time features. Numeric mismatches can indicate contradiction (example 3 in table 1). The numeric features recognize (mis-)matches between numbers, dates, and times. We normalize date and time expressions, and represent numbers as ranges. This includes expression matching (e.g., over 100 and 200 is not a mismatch). Aligned numbers are marked as mismatches when they are incompatible and surrounding words match well, indicating the numbers refer to the same entity. Antonymy features. Aligned antonyms are a very good cue for contradiction. Our list of antonyms and contrasting words comes from WordNet, from which we extract words with direct antonymy links and expand the list by adding words from the same synset as the antonyms. We also use oppositional verbs from VerbOcean. We check whether an aligned pair of words appears in the list, as well as checking for common antonym prefixes (e.g., anti, un). The polarity of the context is used to determine if the antonyms create a contradiction. Structural features. These features aim to determine whether the syntactic structures of the text and hypothesis create contradictory statements. For example, we compare the subjects and objects for each aligned verb. If the subject in the text overlaps with the object in the hypothesis, we find evidence for a contradiction. Consider example 6 in table 1. In the text, the subject of succeed is Jacques Santer while in the hypothesis, Santer is the object of succeed, suggesting that the two sentences are incompatible. Factivity features. 
The context in which a verb phrase is embedded may give rise to contradiction, as in example 5 (table 1). Negation influences some factivity patterns: Bill forgot to take his wallet contradicts Bill took his wallet while Bill did not forget to take his wallet does not contradict Bill took his wallet. For each text/hypothesis pair, we check the (grand)parent of the text word aligned to the hypothesis verb, and generate a feature based on its factiv1044 ity class. Factivity classes are formed by clustering our expansion of the PARC lists of factive, implicative and non-factive verbs (Nairn et al., 2006) according to how they create contradiction. Modality features. Simple patterns of modal reasoning are captured by mapping the text and hypothesis to one of six modalities ((not )possible, (not )actual, (not )necessary), according to the presence of predefined modality markers such as can or maybe. A feature is produced if the text/hypothesis modality pair gives rise to a contradiction. For instance, the following pair will be mapped to the contradiction judgment (possible, not possible): T: The trial court may allow the prevailing party reasonable attorney fees as part of costs. H: The prevailing party may not recover attorney fees. Relational features. A large proportion of the RTE data is derived from information extraction tasks where the hypothesis captures a relation between elements in the text. Using Semgrex, a pattern matching language for dependency graphs, we find such relations and ensure that the arguments between the text and the hypothesis match. In the following example, we detect that Fernandez works for FEMA, and that because of the negation, a contradiction arises. T: Fernandez, of FEMA, was on scene when Martin arrived at a FEMA base camp. H: Fernandez doesn’t work for FEMA. Relational features provide accurate information but are difficult to extend for broad coverage. 6 Results Our contradiction detection system was developed on all datasets listed in the first part of table 5. As test sets, we used RTE1 test, the independently annotated RTE3 test, and Neg test. We focused on attaining high precision. In a real world setting, it is likely that the contradiction rate is extremely low; rather than overwhelming true positives with false positives, rendering the system impractical, we mark contradictions conservatively. We found reasonable inter-annotator agreement between NIST and our post-hoc annotation of RTE3 test (κ = 0.81), showing that, even with limited context, humans tend to Precision Recall Accuracy RTE1 dev1 70.37 40.43 – RTE1 dev2 72.41 38.18 – RTE2 dev 64.00 28.83 – RTE3 dev 61.90 31.71 – Neg dev 74.07 78.43 75.49 Neg test 62.97 62.50 62.74 LCC negation – – 75.63 RTE1 test 42.22 26.21 – RTE3 test 22.95 19.44 – Avg. RTE3 test 10.72 11.69 – Table 5: Precision and recall figures for contradiction detection. Accuracy is given for balanced datasets only. ‘LCC negation’ refers to performance of Harabagiu et al. (2006); ‘Avg. RTE3 test’ refers to mean performance of the 12 submissions to the RTE3 Pilot. agree on contradictions.4 The results on the test sets show that performance drops on new data, highlighting the difficulty in generalizing from a small corpus of positive contradiction examples, as well as underlining the complexity of building a broad coverage system. This drop in accuracy on the test sets is greater than that of many RTE systems, suggesting that generalizing for contradiction is more difficult than for entailment. 
Particularly when addressing contradictions that require lexical and world knowledge, we are only able to add coverage in a piecemeal fashion, resulting in improved performance on the development sets but only small gains for the test sets. Thus, as shown in table 6, we achieve 13.3% recall on lexical contradictions in RTE3 dev but are unable to identify any such contradictions in RTE3 test. Additionally, we found that the precision of category (2) features was less than that of category (1) features. Structural features, for example, caused us to tag 36 non-contradictions as contradictions in RTE3 test, over 75% of the precision errors. Despite these issues, we achieve much higher precision and recall than the average submission to the RTE3 Pilot task on detecting contradictions, as shown in the last two lines of table 5. 4This stands in contrast with the low inter-annotator agreement reported by Sanchez-Graillet and Poesio (2007) for contradictions in protein-protein interactions. The only hypothesis we have to explain this contrast is the difficulty of scientific material. 1045 Type RTE3 dev RTE3 test 1 Antonym 25.0 (3/12) 42.9 (3/7) Negation 71.4 (5/7) 60.0 (3/5) Numeric 71.4 (5/7) 28.6 (2/7) 2 Factive/Modal 25.0 (1/4) 10.0 (1/10) Structure 46.2 (6/13) 21.1 (4/19) Lexical 13.3 (2/15) 0.0 (0/12) WK 18.2 (4/22) 8.3 (1/12) Table 6: Recall by contradiction type. 7 Error analysis and discussion One significant issue in contradiction detection is lack of feature generalization. This problem is especially apparent for items in category (2) requiring lexical and world knowledge, which proved to be the most difficult contradictions to detect on a broad scale. While we are able to find certain specific relationships in the development sets, these features attained only limited coverage. Many contradictions in this category require multiple inferences and remain beyond our capabilities: T: The Auburn High School Athletic Hall of Fame recently introduced its Class of 2005 which includes 10 members. H: The Auburn High School Athletic Hall of Fame has ten members. Of the types of contradictions in category (2), we are best at addressing those formed via structural differences and factive/modal constructions as shown in table 6. For instance, we detect examples 5 and 6 in table 1. However, creating features with sufficient precision is an issue for these types of contradictions. Intuitively, two sentences that have aligned verbs with the same subject and different objects (or vice versa) are contradictory. This indeed indicates a contradiction 55% of the time on our development sets, but this is not high enough precision given the rarity of contradictions. Another type of contradiction where precision falters is numeric mismatch. We obtain high recall for this type (table 6), as it is relatively simple to determine if two numbers are compatible, but high precision is difficult to achieve due to differences in what numbers may mean. Consider: T: Nike Inc. said that its profit grew 32 percent, as the company posted broad gains in sales and orders. H: Nike said orders for footwear totaled $4.9 billion, including a 12 percent increase in U.S. orders. Our system detects a mismatch between 32 percent and 12 percent, ignoring the fact that one refers to profit and the other to orders. Accounting for context requires extensive text comprehension; it is not enough to simply look at whether the two numbers are headed by similar words (grew and increase). 
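For reference, the inter-annotator agreement reported earlier in this section (κ = 0.81) is Cohen's kappa; a minimal computation over a pair of annotations looks like the following. The labels below are invented for illustration and are not the NIST or post-hoc RTE3 judgments.

```python
# Cohen's kappa for two annotations of the same items. The label lists here are
# invented toy data; the reported 0.81 comes from the actual RTE3 test annotations.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[l] * counts_b[l]
                   for l in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["contradiction", "other", "other", "contradiction", "other", "other"]
b = ["contradiction", "other", "other", "other", "other", "other"]
print(round(cohens_kappa(a, b), 2))   # 0.57 on this toy example
```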
This emphasizes the fact that mismatching information is not sufficient to indicate contradiction. As demonstrated by our 63% accuracy on Neg test, we are reasonably good at detecting negation and correctly ascertaining whether it is a symptom of contradiction. Similarly, we handle single word antonymy with high precision (78.9%). Nevertheless, Harabagiu et al.’s performance demonstrates that further improvement on these types is possible; indeed, they use more sophisticated techniques to extract oppositional terms and detect polarity differences. Thus, detecting category (1) contradictions is feasible with current systems. While these contradictions are only a third of those in the RTE datasets, detecting such contradictions accurately would solve half of the problems found in the real corpus. This suggests that we may be able to gain sufficient traction on contradiction detection for real world applications. Even so, category (2) contradictions must be targeted to detect many of the most interesting examples and to solve the entire problem of contradiction detection. Some types of these contradictions, such as lexical and world knowledge, are currently beyond our grasp, but we have demonstrated that progress may be made on the structure and factive/modal types. Despite being rare, contradiction is foundational in text comprehension. Our detailed investigation demonstrates which aspects of it can be resolved and where further research must be directed. Acknowledgments This paper is based on work funded in part by the Defense Advanced Research Projects Agency through IBM and by the Disruptive Technology Office (DTO) Phase III Program for Advanced Question Answering for Intelligence (AQUAINT) through Broad Agency Announcement (BAA) N61339-06-R-0034. 1046 References Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, Venice, Italy. Nathanael Chambers, Daniel Cer, Trond Grenager, David Hall, Chloe Kiddon, Bill MacCartney, MarieCatherine de Marneffe, Daniel Ramage, Eric Yeh, and Christopher D. Manning. 2007. Learning alignments and leveraging natural logic. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Timothy Chklovski and Patrick Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb relations. In Proceedings of EMNLP-04. Cleo Condoravdi, Dick Crouch, Valeria de Pavia, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. Workshop on Text Meaning (2003 May 31). Koby Crammer and Yoram Singer. 2001. Ultraconservative online algorithms for multiclass problems. In Proceedings of COLT-2001. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Quinonero-Candela et al., editor, MLCW 2005, LNAI Volume 3944, pages 177–190. SpringerVerlag. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC-06). Christiane Fellbaum. 1998. WordNet: an electronic lexical database. MIT Press. Jan Firbas. 1971. On the concept of communicative dynamism in the theory of functional sentence perspective. Brno Studies in English, 7:23–47. 
Danilo Giampiccolo, Ido Dagan, Bernardo Magnini, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACLPASCAL Workshop on Textual Entailment and Paraphrasing. Sanda Harabagiu, Andrew Hickl, and Finley Lacatusu. 2006. Negation, contrast, and contradiction in text processing. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06). Andrew Hickl, John Williams, Jeremy Bensley, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recognizing textual entailment with LCC’s GROUNDHOG system. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment. Kevin Humphreys, Robert Gaizauskas, and Saliha Azzam. 1997. Event coreference for information extraction. In Proceedings of the Workshop on Operational Factors in Pratical, Robust Anaphora Resolution for Unrestricted Texts, 35th ACL meeting. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association of Computational Linguistics. Bill MacCartney, Trond Grenager, Marie-Catherine de Marneffe, Daniel Cer, and Christopher D. Manning. 2006. Learning to recognize features of valid textual entailments. In Proceedings of the North American Association of Computational Linguistics (NAACL06). Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Rowan Nairn, Cleo Condoravdi, and Lauri Karttunen. 2006. Computing relative polarity for textual inference. In Proceedings of ICoS-5. Olivia Sanchez-Graillet and Massimo Poesio. 2007. Discovering contradiction protein-protein interactions in text. In Proceedings of BioNLP 2007: Biological, translational, and clinical language processing. Lucy Vanderwende, Arul Menezes, and Rion Snow. 2006. Microsoft research at rte-2: Syntactic contributions in the entailment task: an implementation. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment. Ellen Voorhees. 2008. Contradictions and justifications: Extensions to the textual entailment task. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. Annie Zaenen, Lauri Karttunen, and Richard S. Crouch. 2005. Local textual inference: can it be defined or circumscribed? In ACL 2005 Workshop on Empirical Modeling of Semantic Equivalence and Entailment. Fabio Massimo Zanzotto, Marco Pennacchiotti, and Alessandro Moschitti. 2007. Shallow semantics in fast textual entailment rule learners. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. 1047
Proceedings of ACL-08: HLT, pages 1048–1056, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs Zornitsa Kozareva DLSI, University of Alicante Campus de San Vicente Alicante, Spain 03080 [email protected] Ellen Riloff School of Computing University of Utah Salt Lake City, UT 84112 [email protected] Eduard Hovy USC Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 [email protected] Abstract We present a novel approach to weakly supervised semantic class learning from the web, using a single powerful hyponym pattern combined with graph structures, which capture two properties associated with pattern-based extractions: popularity and productivity. Intuitively, a candidate is popular if it was discovered many times by other instances in the hyponym pattern. A candidate is productive if it frequently leads to the discovery of other instances. Together, these two measures capture not only frequency of occurrence, but also cross-checking that the candidate occurs both near the class name and near other class members. We developed two algorithms that begin with just a class name and one seed instance and then automatically generate a ranked list of new class instances. We conducted experiments on four semantic classes and consistently achieved high accuracies. 1 Introduction Knowing the semantic classes of words (e.g., “trout” is a kind of FISH) can be extremely valuable for many natural language processing tasks. Although some semantic dictionaries do exist (e.g., WordNet (Miller, 1990)), they are rarely complete, especially for large open classes (e.g., classes of people and objects) and rapidly changing categories (e.g., computer technology). (Roark and Charniak, 1998) reported that 3 of every 5 terms generated by their semantic lexicon learner were not present in WordNet. Automatic semantic lexicon acquisition could be used to enhance existing resources such as WordNet, or to produce semantic lexicons for specialized categories or domains. A variety of methods have been developed for automatic semantic class identification, under the rubrics of lexical acquisition, hyponym acquisition, semantic lexicon induction, semantic class learning, and web-based information extraction. Many of these approaches employ surface-level patterns to identify words and their associated semantic classes. However, such patterns tend to overgenerate (i.e., deliver incorrect results) and hence require additional filtering mechanisms. To overcome this problem, we employed one single powerful doubly-anchored hyponym pattern to query the web and extract semantic class instances: CLASS NAME such as CLASS MEMBER and *. We hypothesized that a doubly-anchored pattern, which includes both the class name and a class member, would achieve high accuracy because of its specificity. To address concerns about coverage, we embedded the search in a bootstrapping process. This method produced many correct instances, but despite the highly restrictive nature of the pattern, still produced many incorrect instances. This result led us to explore new ways to improve the accuracy of hyponym patterns without requiring additional training resources. The main contribution of this work is a novel method for combining hyponym patterns with graph structures that capture two properties associated with pattern extraction: popularity and productivity. 
Intuitively, a candidate word (or phrase) is popular if it was discovered many times by other words (or 1048 phrases) in a hyponym pattern. A candidate word is productive if it frequently leads to the discovery of other words. Together, these two measures capture not only frequency of occurrence, but also crosschecking that the word occurs both near the class name and near other class members. We present two algorithms that use hyponym pattern linkage graphs (HPLGs) to represent popularity and productivity information. The first method uses a dynamically constructed HPLG to assess the popularity of each candidate and steer the bootstrapping process. This approach produces an efficient bootstrapping process that performs reasonably well, but it cannot take advantage of productivity information because of the dynamic nature of the process. The second method is a two-step procedure that begins with an exhaustive pattern search that acquires popularity and productivity information about candidate instances. The candidates are then ranked based on properties of the HPLG. We conducted experiments with four semantic classes, achieving high accuracies and outperforming the results reported by others who have worked on the same classes. 2 Related Work A substantial amount of research has been done in the area of semantic class learning, under a variety of different names and with a variety of different goals. Given the great deal of similar work in information extraction and ontology learning, we focus here only on techniques for weakly supervised or unsupervised semantic class (i.e., supertype-based) learning, since that is most related to the work in this paper. Fully unsupervised semantic clustering (e.g., (Lin, 1998; Lin and Pantel, 2002; Davidov and Rappoport, 2006)) has the disadvantage that it may or may not produce the types and granularities of semantic classes desired by a user. Another related line of work is automated ontology construction, which aims to create lexical hierarchies based on semantic classes (e.g., (Caraballo, 1999; Cimiano and Volker, 2005; Mann, 2002)), and learning semantic relations such as meronymy (Berland and Charniak, 1999; Girju et al., 2003). Our research focuses on semantic lexicon induction, which aims to generate lists of words that belong to a given semantic class (e.g., lists of FISH or VEHICLE words). Weakly supervised learning methods for semantic lexicon generation have utilized co-occurrence statistics (Riloff and Shepherd, 1997; Roark and Charniak, 1998), syntactic information (Tanev and Magnini, 2006; Pantel and Ravichandran, 2004; Phillips and Riloff, 2002), lexico-syntactic contextual patterns (e.g., “resides in <location>” or “moved to <location>”) (Riloff and Jones, 1999; Thelen and Riloff, 2002), and local and global contexts (Fleischman and Hovy, 2002). These methods have been evaluated only on fixed corpora1, although (Pantel et al., 2004) demonstrated how to scale up their algorithms for the web. Several techniques for semantic class induction have also been developed specifically for learning from the web. (Pas¸ca, 2004) uses Hearst’s patterns (Hearst, 1992) to learn semantic class instances and class groups by acquiring contexts around the pattern. Pasca also developed a second technique (Pas¸ca, 2007b) that creates context vectors for a group of seed instances by searching web query logs, and uses them to learn similar instances. 
The work most closely related to ours is Hearst’s early work on hyponym learning (Hearst, 1992) and more recent work that has followed up on her idea. Hearst’s system exploited patterns that explicitly identify a hyponym relation between a semantic class and a word (e.g., “such authors as Shakespeare”). We will refer to these as hyponym patterns. Pasca’s previously mentioned system (Pas¸ca, 2004) applies hyponym patterns to the web and acquires contexts around them. The KnowItAll system (Etzioni et al., 2005) also uses hyponym patterns to extract class instances from the web and then evaluates them further by computing mutual information scores based on web queries. The work by (Widdows and Dorow, 2002) on lexical acquisition is similar to ours because they also use graph structures to learn semantic classes. However, their graph is based entirely on syntactic relations between words, while our graph captures the ability of instances to find each other in a hyponym pattern based on web querying, without any part-ofspeech tagging or parsing. 1Meta-bootstrapping (Riloff and Jones, 1999) was evaluated on web pages, but used a precompiled corpus of downloaded web pages. 1049 3 Semantic Class Learning with Hyponym Pattern Linkage Graphs 3.1 A Doubly-Anchored Hyponym Pattern Our work was motivated by early research on hyponym learning (Hearst, 1992), which applied patterns to a corpus to associate words with semantic classes. Hearst’s system exploited patterns that explicitly link a class name with a class member, such as “X and other Ys” and “Ys such as X”. Relying on surface-level patterns, however, is risky because incorrect items are frequently extracted due to polysemy, idiomatic expressions, parsing errors, etc. Our work began with the simple idea of using an extremely specific pattern to extract semantic class members with high accuracy. Our expectation was that a very specific pattern would virtually eliminate the most common types of false hits that are caused by phenomena such as polysemy and idiomatic expressions. A concern, however, was that an extremely specific pattern would suffer from sparse data and not extract many new instances. By using the web as a corpus, we hoped that the pattern could extract at least a few instances for virtually any class, and then we could gain additional traction by bootstrapping these instances. All of the work presented in this paper uses just one doubly-anchored pattern to identify candidate instances for a semantic class: <class name> such as <class member> and * This pattern has two variables: the name of the semantic class to be learned (class name) and a member of the semantic class (class member). The asterisk (*) indicates the location of the extracted words. We describe this pattern as being doubly-anchored because it is instantiated with both the name of the semantic class as well as a class member. For example, the pattern “CARS such as FORD and *” will extract automobiles, and the pattern “PRESIDENTS such as FORD and *” will extract presidents. The doubly-anchored nature of the pattern serves two purposes. First, it increases the likelihood of finding a true list construction for the class. Our system does not use part-of-speech tagging or parsing, so the pattern itself is the only guide for finding an appropriate linguistic context. 
Second, the doubly-anchored pattern virtually Members = {Seed}; P0= “Class such as Seed and *”; P = {P0}; iter = 0; While ((iter < Max Iters) and (P ̸= {})) iter++; For each Pi ∈P Snippets = web query(Pi); Candidates = extract words(Snippets,Pi); Pnew = {}; For each Candidatek ∈Candidates If (Candidatek /∈Members); Members = Members ∪{Candidatek}; Pk= “Class such as Candidatek and *”; Pnew = Pnew ∪{ Pk }; P = Pnew; Figure 1: Reckless Bootstrapping eliminates ambiguity because the class name and class member mutually disambiguate each other. For example, the word FORD could refer to an automobile or a person, but in the pattern “CARS such as FORD and *” it will almost certainly refer to an automobile. Similarly, the class “PRESIDENT” could refer to country presidents or corporate presidents, and “BUSH” could refer to a plant or a person. But in the pattern “PRESIDENTS such as BUSH”, both words will surely refer to country presidents. Another advantage of the doubly-anchored pattern is that an ambiguous or underspecified class name will be constrained by the presence of the class member. For example, to generate a list of company presidents, someone might naively define the class name as PRESIDENTS. A singly-anchored pattern (e.g., “PRESIDENTS such as *”) might generate lists of other types of presidents (e.g., country presidents, university presidents, etc.). Because the doubly-anchored pattern also requires a class member (e.g., “PRESIDENTS such as BILL GATES and *”), it is likely to generate only the desired types of instances. 3.2 Reckless Bootstrapping To evaluate the performance of the doubly-anchored pattern, we began by using the pattern to search the web and embedded this process in a simple bootstrapping loop, which is presented in Figure 1. As input, the user must provide the name of the desired 1050 semantic class (Class) and a seed example (Seed), which are used to instantiate the pattern. On the first iteration, the pattern is given to Google as a web query, and new class members are extracted from the retrieved text snippets. We wanted the system to be as language-independent as possible, so we refrained from using any taggers or parsing tools. As a result, instances are extracted using only word boundaries and orthographic information. For proper name classes, we extract all capitalized words that immediately follow the pattern. For common noun classes, we extract just one word, if it is not capitalized. Examples are shown below, with the extracted items underlined: countries such as China and Sri Lanka are ... fishes such as trout and bass can ... One limitation is that our system cannot learn multi-word instances of common noun categories, or proper names that include uncapitalized words (e.g., “United States of America”). These limitations could be easily overcome by incorporating a noun phrase (NP) chunker and extracting NPs. Each new class member is then used as a seed instance in the bootstrapping loop. We implemented this process as breadth-first search, where each “ply” of the search process is the result of bootstrapping the class members learned during the previous iteration as seed instances for the next one. During each iteration, we issue a new web query and add the newly extracted class members to the queue for the next cycle. We run this bootstrapping process for a fixed number of iterations (search ply), or until no new class members are produced. We will refer to this process as reckless bootstrapping because there are no checks of any kind. 
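The pseudocode in Figure 1 translates directly into a breadth-first loop; the sketch below is a runnable paraphrase in which the web search is stubbed out. Any search API returning snippets could stand behind web_query, and the extraction step mirrors the orthographic heuristics described above rather than reproducing the authors' exact code.

```python
# Runnable paraphrase of Figure 1 (reckless bootstrapping). web_query is a stub:
# in the paper the instantiated pattern was issued as a Google query and the
# candidates were read off the returned text snippets.
import re

def web_query(query):
    return []   # plug in any search API that returns text snippets for `query`

def extract_candidates(snippets, pattern_prefix, proper_names=True):
    """Extract the term right after the pattern, using only orthography:
    a run of capitalized words for proper-name classes, one lowercase word otherwise.
    Matching here is case-sensitive, unlike a real snippet-cleanup pipeline."""
    capture = r"((?:[A-Z][\w-]*\s?)+)" if proper_names else r"([a-z][\w-]*)"
    regex = re.compile(re.escape(pattern_prefix) + r"\s+" + capture)
    found = set()
    for snippet in snippets:
        m = regex.search(snippet)
        if m:
            found.add(m.group(1).strip())
    return found

def reckless_bootstrap(class_name, seed, max_iters=4, proper_names=True):
    members, frontier = {seed}, [seed]
    for _ in range(max_iters):
        next_frontier = []
        for member in frontier:
            prefix = f"{class_name} such as {member} and"
            snippets = web_query(f'"{prefix} *"')
            for cand in extract_candidates(snippets, prefix, proper_names):
                if cand not in members:        # no filtering beyond novelty
                    members.add(cand)
                    next_frontier.append(cand)
        if not next_frontier:                  # stop early when nothing new is found
            break
        frontier = next_frontier
    return members

print(reckless_bootstrap("countries", "China"))   # returns {'China'} with the stub
```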
Every term extracted by the pattern is assumed to be a class member. 3.2.1 Results Table 1 shows the results for 4 iterations of reckless bootstrapping for four semantic categories: U.S. states, countries, singers, and fish. The first two categories are relatively small, closed sets (our gold standard contains 50 U.S. states and 194 countries). The singers and fish categories are much larger, open sets (see Section 4 for details). Table 1 reveals that the doubly-anchored pattern achieves high accuracy during the first iteration, but Iter. countries states singers fish 1 .80 .79 .91 .76 2 .57 .21 .87 .64 3 .21 .18 .86 .54 4 .16 – .83 .54 Table 1: Reckless Bootstrapping Accuracies quality deteriorates rapidly as bootstrapping progresses. Figure 2 shows the recall and precision curves for countries and states. High precision is achieved only with low levels of recall for countries. Our initial hypothesis was that such a specific pattern would be able to maintain high precision because non-class members would be unlikely to cooccur with the pattern. But we were surprised to find that many incorrect entries were generated for reasons such as broken expressions like “Merce -dez”, misidentified list constructions (e.g., “In countries such as China U.S. Policy is failing...”), and incomplete proper names due to insufficient length of the retrieved text snippet. Incorporating a noun phrase chunker would eliminate some of these cases, but far from all of them. We concluded that even such a restrictive pattern is not sufficient for semantic class learning on its own. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision Recall Country/State Country State Figure 2: Recall/precision for reckless bootstrapping In the next section, we present a new approach that creates a Hyponym Pattern Linkage Graph to steer bootstrapping and improve accuracy. 3.3 Using Dynamic Graphs to Steer Bootstrapping Intuitively, we expect true class members to occur frequently in pattern contexts with other class mem1051 bers. To operationalize this intuition, we create a hyponym pattern linkage graph, which represents the frequencies with which candidate instances generate each other in the pattern contexts. We define a hyponym pattern linkage graph (HPLG) as a G = (V, E), where each vertex v ∈V is a candidate instance and each edge (u, v) ∈E means that instance v was generated by instance u. The weight w of an edge is the frequency with which u generated v. For example, consider the following sentence, where the pattern is italicized and the extracted instance is underlined: Countries such as China and Laos have been... In the HPLG, an edge e = (China, Laos) would be created because the pattern anchored by China extracted Laos as a new candidate instance. If this pattern extracted Laos from 15 different snippets, then the edge’s weight would be 15. The in-degree of a node represents its popularity, i.e., the number of instance occurrences that generated it. The graph is constructed dynamically as bootstrapping progresses. Initially, the seed is the only trusted class member and the only vertex in the graph. The bootstrapping process begins by instantiating the doubly-anchored pattern with the seed class member, issuing a web query to generate new candidate instances, and adding these new instances to the graph. A score is then assigned to every node in the graph, using one of several different metrics defined below. 
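Before turning to those metrics, the graph bookkeeping itself is lightweight; the following minimal sketch (not the authors' code) records weighted edges and exposes the popularity and productivity counts discussed above.

```python
# Minimal HPLG bookkeeping: a weighted edge u -> v is recorded every time the
# pattern anchored by u extracts v; weighted in-degree gives popularity and
# weighted out-degree gives productivity.
from collections import defaultdict

class HPLG:
    def __init__(self):
        self.weight = defaultdict(int)      # (u, v) -> number of snippets
        self.out_edges = defaultdict(set)   # u -> instances u extracted
        self.in_edges = defaultdict(set)    # v -> instances that extracted v

    def add_extraction(self, anchor, extracted, count=1):
        self.weight[(anchor, extracted)] += count
        self.out_edges[anchor].add(extracted)
        self.in_edges[extracted].add(anchor)

    def popularity(self, v):                # weighted in-degree
        return sum(self.weight[(u, v)] for u in self.in_edges[v])

    def productivity(self, u):              # weighted out-degree
        return sum(self.weight[(u, v)] for v in self.out_edges[u])

g = HPLG()
g.add_extraction("China", "Laos", count=15)   # the China -> Laos example above
g.add_extraction("Laos", "Vietnam", count=3)
print(g.popularity("Laos"), g.productivity("Laos"))   # 15 3
```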
The highest-scoring unexplored node is then added to the set of trusted class members, and used as the seed for the next bootstrapping iteration. We experimented with three scoring functions for selecting nodes. The In-Degree (inD) score for vertex v is the sum of the weights of all incoming edges (u, v), where u is a trusted class member. Intuitively, this captures the popularity of v among instances that have already been identified as good instances. The Best Edge (BE) score for vertex v is the maximum edge weight among the incoming edges (u, v), where u is a trusted class member. The Key Player Problem (KPP) measure is used in social network analysis (Borgatti and Everett, 2006) to identify nodes whose removal would result in a residual network of minimum cohesion. A node receives a high value if it is highly connected and relatively close to most other nodes in the graph. The KPP score for vertex v is computed as: KPP(v) = X u∈V 1 d(u, v) |V |−1 where d(u, v) is the shortest path between two vertices, where u is a trusted node. For tie-breaking, the distances are multiplied by the weight of the edge. Note that all of these measures rely only on incoming edges because a node does not acquire outgoing edges until it has already been selected as a trusted class member and used to acquire new instances. In the next section, we describe a two-step process for creating graphs that can take advantage of both incoming and outgoing edges. 3.4 Re-Ranking with Precompiled Graphs One way to try to confirm (or disconfirm) whether a candidate instance is a true class member is to see whether it can produce new candidate instances. If we instantiate our pattern with the candidate (i.e., “CLASS NAME such as CANDIDATE and *”) and successfully extract many new instances, then this is evidence that the candidate frequently occurs with the CLASS NAME in list constructions. We will refer to the ability of a candidate to generate new instances as its productivity. The previous bootstrapping algorithm uses a dynamically constructed graph that is constantly evolving as new nodes are selected and explored. Each node is scored based only on the set of instances that have been generated and identified as “trusted” at that point in the bootstrapping process. To use productivity information, we must adopt a different procedure because we need to know not only who generated each candidate, but also the complete set of instances that the candidate itself can generate. We adopted a two-step process that can use both popularity and productivity information in a hyponym pattern linkage graph to assess the quality of candidate instances. First, we perform reckless bootstrapping for a class name and seed until no new instances are generated. Second, we assign a score to each node in the graph using a scoring function that takes into account both the in-degree (popularity) and out-degree (productivity) of each node. We experimented with four different scoring functions, some of which were motivated by work on word 1052 sense disambiguation to identify the most “important” node in a graph containing its possible senses (Navigli and Lapata, 2007). The Out-degree (outD) score for vertex v is the weighted sum of v’s outgoing edges, normalized by the number of other nodes in the graph. outD(v) = X ∀(v,p)∈E w(v, p) |V |−1 This measure captures only productivity, while the next three measures consider both productivity and popularity. 
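As a concrete reference, the scores introduced so far (In-Degree, Best Edge, KPP, and Out-degree) can all be computed from a weighted adjacency dictionary. The sketch below is not the authors' code, and it simplifies KPP by using unweighted hop counts in place of the weight-based tie-breaking.

```python
# Sketch of the scores defined above over graph[u][v] = extraction count.
# KPP here uses plain hop-count shortest paths, ignoring the weight tie-breaking.
from collections import deque

def in_degree(graph, v, trusted):           # sum of incoming weights from trusted nodes
    return sum(edges[v] for u, edges in graph.items() if u in trusted and v in edges)

def best_edge(graph, v, trusted):           # strongest single incoming edge from a trusted node
    weights = [edges[v] for u, edges in graph.items() if u in trusted and v in edges]
    return max(weights, default=0)

def out_degree(graph, v, n_vertices):       # weighted out-degree, normalized
    return sum(graph.get(v, {}).values()) / (n_vertices - 1)

def kpp(graph, v, trusted, n_vertices):
    total = 0.0
    for u in trusted:                        # breadth-first distance from each trusted node
        dist, queue = {u: 0}, deque([u])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, {}):
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        if v in dist and dist[v] > 0:
            total += 1.0 / dist[v]
    return total / (n_vertices - 1)

graph = {"China": {"Laos": 15, "Vietnam": 4}, "Laos": {"Vietnam": 3}}
trusted, n = {"China", "Laos"}, 3
print(in_degree(graph, "Vietnam", trusted),          # 4 + 3 = 7
      best_edge(graph, "Vietnam", trusted),          # 4
      out_degree(graph, "Laos", n),                  # 3 / 2 = 1.5
      round(kpp(graph, "Vietnam", trusted, n), 2))   # (1/1 + 1/1) / 2 = 1.0
```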
The Total-degree (totD) score for vertex v is the weighted sum of both incoming and outgoing edges, normalized by the number of other nodes in the graph. The Betweenness (BT) score (Freeman, 1979) considers a vertex to be important if it occurs on many shortest paths between other vertices. BT(v) = X s,t∈V :s̸=v̸=t σst(v) σst where σst is the number of shortest paths from s to t, and σst(v) is the number of shortest paths from s to t that pass through vertex v. PageRank (Page et al., 1998) establishes the relative importance of a vertex v through an iterative Markov chain model. The PageRank (PR) score of a vertex v is determined on the basis of the nodes it is connected to. PR(v) = (1−α) |V | + α X u,v∈E PR(u) outdegree(u) α is a damping factor that we set to 0.85. We discarded all instances that produced zero productivity links, meaning that they did not generate any other candidates when used in web queries. 4 Experimental evaluation 4.1 Data We evaluated our algorithms on four semantic categories: U.S. states, countries, singers, and fish. The states and countries categories are relatively small, closed sets: our gold standards consist of 50 U.S. states and 194 countries (based on a list found on Wikipedia). The singers and fish categories are much larger, open classes. As our gold standard for fish, we used a list of common fish names found on Wikipedia.2 All the singer names generated by our 2We also counted as correct plural versions of items found on the list. The total size of our fish list is 1102. States Popularity Prd Pop&Prd N BE KPP inD outD totD BT PR 25 1.0 1.0 1.0 1.0 1.0 .88 .88 50 .96 .98 .98 1.0 1.0 .86 .82 64 .77 .78 .77 .78 .78 .77 .67 Countries Popularity Prd Pop&Prd N BE KPP inD outD totD BT PR 50 .98 .97 .98 1.0 1.0 .98 .97 100 .96 .97 .94 1.0 .99 .97 .95 150 .90 .92 .91 1.0 .95 .94 .92 200 .83 .81 .83 .90 .87 .82 .80 300 .60 .59 .61 .61 .62 .56 .60 323 .57 .55 .57 .57 .58 .52 .57 Singers Popularity Prd Pop&Prd N BE KPP inD outD totD BT PR 10 .92 .96 .92 1.0 1.0 1.0 1.0 25 .89 .90 .91 1.0 1.0 1.0 .99 50 .92 .85 .92 .97 .98 .95 .97 75 .89 .83 .91 .96 .95 .93 .95 100 .86 .81 .89 .96 .93 .94 .94 150 .86 .79 .88 .95 .92 .93 .87 180 .86 .80 .87 .91 .91 .91 .88 Fish Popularity Prd Pop&Prd N BE KPP inD outD totD BT PR 10 .90 .90 .90 1.0 1.0 .90 .70 25 .80 .88 .76 1.0 .96 .96 .72 50 .82 .80 .78 1.0 .94 .88 .66 75 .72 .69 .72 .93 .87 .79 .64 100 .63 .68 .66 .84 .80 .74 .62 116 .60 .65 .66 .80 .78 .71 .59 Table 2: Accuracies for each semantic class algorithms were manually reviewed for correctness. We evaluated performance in terms of accuracy (the percentage of instances that were correct).3 4.2 Performance Table 2 shows the accuracy results of the two algorithms that use hyponym pattern linkage graphs. We display results for the top-ranked N candidates, for all instances that have a productivity value > zero.4 The Popularity columns show results for the 3We never generated duplicates so the instances are distinct. 4Obviously, this cutoff is not available to the popularitybased bootstrapping algorithm, but here we are just comparing the top N results for both algorithms. 1053 bootstrapping algorithm described in Section 3.3, using three different scoring functions. The results for the ranking algorithm described in Section 3.4 are shown in the Productivity (Prd) and Popularity&Productivity (Pop&Prd) columns. For the states, countries, and singers categories, we randomly selected 5 different initial seeds and then averaged the results. 
For the fish category we ran each algorithm using just the seed “salmon”. The popularity-based metrics produced good accuracies on the states, countries, and singers categories under all 3 scoring functions. For fish, KPP performed better than the others. The Out-degree (outD) scoring function, which uses only Productivity information, obtained the best results across all 4 categories. OutD achieved 100% accuracy for the first 50 states and fish, 100% accuracy for the top 150 countries, and 97% accuracy for the top 50 singers. The three scoring metrics that use both popularity and productivity also performed well, but productivity information by itself seems to perform better in some cases.

It can be difficult to compare the results of different semantic class learners because there is no standard set of benchmark categories, so researchers report results for different classes. For the state and country categories, however, we can compare our results with those of other web-based semantic class learners such as Paşca (Paşca, 2007a) and the KnowItAll system (Etzioni et al., 2005). For the U.S. states category, our system achieved 100% recall and 100% precision for the first 50 items generated, and KnowItAll performed similarly, achieving 98% recall with 100% precision. Paşca did not evaluate his system on states. For the countries category, our system achieved 100% precision for the first 150 generated instances (77% recall). (Paşca, 2007a) reports results of 100% precision for the first 25 instances generated, and 82% precision for the first 150 instances generated. The KnowItAll system (Etzioni et al., 2005) achieved 97% precision with 58% recall, and 79% precision with 87% recall.⁵ To the best of our knowledge, other researchers have not reported results for the singer and fish categories.

⁵ (Etzioni et al., 2005) do not report exactly how many countries were in their gold standard.

[Figure 3: Learning curve for Placido Domingo. Accuracy against the number of iterations for the outD and inD scoring functions, with the zero-productivity cutoff t marked.]

Figure 3 shows the learning curve for both algorithms using their best scoring functions on the singer category with Placido Domingo as the initial seed. In total, 400 candidate words were generated. The Out-degree scoring function ranked the candidates well. Figure 3 also includes a vertical line indicating where the candidate list was cut (at 180 instances) based on the zero productivity cutoff. One observation is that the rankings do a good job of identifying borderline cases, which typically are ranked just below most correct instances but just above the obviously bad entries. For example, for states, the 50 U.S. states are ranked first, followed by 14 more entries (in order): Russia, Ukraine, Uzbekistan, Azerbaijan, Moldova, Tajikistan, Armenia, Chicago, Boston, Atlanta, Detroit, Philadelphia, Tampa, Moldavia. The first 7 entries are all former states of the Soviet Union. In retrospect, we realized that we should have searched for “U.S. states” instead of just “states”. This example illustrates the power of the doubly-anchored hyponym pattern to correctly identify our intended semantic class by disambiguating our class name based on the seed class member. The algorithms also seem to be robust with respect to initial seed choice. For the states, countries, and singers categories, we ran experiments with 5 different initial seeds, which were randomly selected.
The 5 country seeds represented a diverse set of nations, some of which are rarely mentioned in the news: Brazil, France, Guinea-Bissau, Uganda, 1054 and Zimbabwe. All of these seeds obtained ≥92% recall with ≥90% precision. 4.3 Error Analysis We examined the incorrect instances produced by our algorithms and found that most of them fell into five categories. Type 1 errors were caused by incorrect proper name extraction. For example, in the sentence “states such as Georgia and English speaking countries like Canada...”, “English” was extracted as a state. These errors resulted from complex noun phrases and conjunctions, as well as unusual syntactic constructions. An NP chunker might prevent some of these cases, but we suspect that many of them would have been misparsed regardless. Type 2 errors were caused by instances that formerly belonged to the semantic class (e.g., SerbiaMontenegro and Czechoslovakia are no longer countries). In this error type, we also include borderline cases that could arguably belong to the semantic class (e.g., Wales as a country). Type 3 errors were spelling variants (e.g., Kyrgystan vs. Kyrgyzhstan) and name variants (e.g., Beyonce vs. Beyonce Knowles). Officially, every entity has one official spelling and one complete name, but in practice there are often variations that may occur nearly as frequently as the official name. For example, it is most common to refer to the singer Beyonce by just her first name. Type 4 errors were caused by sentences that were just flat out wrong in their factual assertions. For example, some sentences referred to “North America” as a country. Type 5 errors were caused by broken expressions found in the retrieved snippets (e.g. Michi -gan). These errors may be fixable by cleaning up the web pages or applying heuristics to prevent or recognize partial words. It is worth noting that incorrect instances of Types 2 and 3 may not be problematic to encounter in a dictionary or ontology. Name variants and former class members may in fact be useful to have. 5 Conclusions Combining hyponym patterns with pattern linkage graphs is an effective way to produce a highly accurate semantic class learner that requires truly minimal supervision: just the class name and one class member as a seed. Our results consistently produced high accuracy and for the states and countries categories produced very high recall. The singers and fish categories, which are much larger open classes, also achieved high accuracy and generated many instances, but the resulting lists are far from complete. Even on the web, the doublyanchored hyponym pattern eventually ran out of steam and could not produce more instances. However, all of our experiments were conducted using just a single hyponym pattern. Other researchers have successfully used sets of hyponym patterns (e.g., (Hearst, 1992; Etzioni et al., 2005; Pas¸ca, 2004)), and multiple patterns could be used with our algorithms as well. Incorporating additional hyponym patterns will almost certainly improve coverage, and could potentially improve the quality of the graphs as well. Our popularity-based algorithm was very effective and is practical to use. Our best-performing algorithm, however, was the 2-step process that begins with an exhaustive search (reckless bootstrapping) and then ranks the candidates using the Outdegree scoring function, which represents productivity. The first step is expensive, however, because it exhaustively applies the pattern to the web until no more extractions are found. 
In our evaluation, we ran this process on a single PC and it usually finished overnight, and we were able to learn a substantial number of new class instances. If more hyponym patterns are used, then this could get considerably more expensive, but the process could be easily parallelized to perform queries across a cluster of machines. With access to a cluster of ordinary PCs, this technique could be used to automatically create extremely large, high-quality semantic lexicons, for virtually any categories, without external training resources. Acknowledgments This research was supported in part by the Department of Homeland Security under ONR Grants N00014-07-1-014 and N0014-07-1-0152, the European Union Sixth Framework project QALLME FP6 IST-033860, and the Spanish Ministry of Science and Technology TEXT-MESS TIN2006-15265-C0601. 1055 References M. Berland and E. Charniak. 1999. Finding Parts in Very Large Corpora. In Proc. of the 37th Annual Meeting of the Association for Computational Linguistics. S. Borgatti and M. Everett. 2006. A graph-theoretic perspective on centrality. Social Networks, 28(4). S. Caraballo. 1999. Automatic Acquisition of a Hypernym-Labeled Noun Hierarchy from Text. In Proc. of the 37th Annual Meeting of the Association for Computational Linguistics, pages 120–126. P. Cimiano and J. Volker. 2005. Towards large-scale, open-domain and ontology-based named entity classification. In Proc. of Recent Advances in Natural Language Processing, pages 166–172. D. Davidov and A. Rappoport. 2006. Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words. In Proc. of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91–134, June. M.B. Fleischman and E.H. Hovy. 2002. Fine grained classification of named entities. In Proc. of the 19th International Conference on Computational Linguistics, pages 1–7. C. Freeman. 1979. Centrality in social networks: Conceptual clarification. Social Networks, 1:215–239. R. Girju, A. Badulescu, and D. Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proc. of Conference of HLT / North American Chapter of the Association for Computational Linguistics. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of the 14th conference on Computational linguistics, pages 539–545. D. Lin and P. Pantel. 2002. Concept discovery from text. In Proc. of the 19th International Conference on Computational linguistics, pages 1–7. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proc. of the 17th international conference on Computational linguistics, pages 768–774. G. Mann. 2002. Fine-grained proper noun ontologies for question answering. In Proc. of the 19th International Conference on Computational Linguistics, pages 1–7. G. Miller. 1990. Wordnet: An On-line Lexical Database. International Journal of Lexicography, 3(4). R. Navigli and M. Lapata. 2007. Graph connectivity measures for unsupervised word sense disambiguation. In Proc. of the 20th International Joint Conference on Artificial Intelligence, pages 1683–1688. M. Pas¸ca. 2004. Acquisition of categorized named entities for web search. In Proc. 
of the Thirteenth ACM International Conference on Information and Knowledge Management, pages 137–145. M. Pas¸ca. 2007a. Organizing and searching the world wide web of facts – step two: harnessing the wisdom of the crowds. In Proc. of the 16th International Conference on World Wide Web, pages 101–110. M. Pas¸ca. 2007b. Weakly-supervised discovery of named entities using web search queries. In Proc. of the sixteenth ACM conference on Conference on information and knowledge management, pages 683–690. L. Page, S. Brin, R. Motwani, and T. Winograd. 1998. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project. P. Pantel and D. Ravichandran. 2004. Automatically labeling semantic classes. In Proc. of Conference of HLT / North American Chapter of the Association for Computational Linguistics, pages 321–328. P. Pantel, D. Ravichandran, and E. Hovy. 2004. Towards terascale knowledge acquisition. In Proc. of the 20th international conference on Computational Linguistics, page 771. W. Phillips and E. Riloff. 2002. Exploiting Strong Syntactic Heuristics and Co-Training to Learn Semantic Lexicons. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing. E. Riloff and R. Jones. 1999. Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping. In Proc. of the Sixteenth National Conference on Artificial Intelligence. E. Riloff and J. Shepherd. 1997. A Corpus-Based Approach for Building Semantic Lexicons. In Proc. of the Second Conference on Empirical Methods in Natural Language Processing, pages 117–124. B. Roark and E. Charniak. 1998. Noun-phrase Cooccurrence Statistics for Semi-automatic Semantic Lexicon Construction. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics, pages 1110–1116. H. Tanev and B. Magnini. 2006. Weakly supervised approaches for ontology population. In Proc. of 11st Conference of the European Chapter of the Association for Computational Linguistics. M. Thelen and E. Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing. D. Widdows and B. Dorow. 2002. A graph model for unsupervised lexical acquisition. In Proc. of the 19th International Conference on Computational Linguistics, pages 1–7. 1056
2008
119
Proceedings of ACL-08: HLT, pages 97–105, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Bayesian Learning of Non-compositional Phrases with Synchronous Parsing Hao Zhang Computer Science Department University of Rochester Rochester, NY 14627 [email protected] Chris Quirk Microsoft Research One Microsoft Way Redmond, WA 98052 USA [email protected] Robert C. Moore Microsoft Research One Microsoft Way Redmond, WA 98052 USA [email protected] Daniel Gildea Computer Science Department University of Rochester Rochester, NY 14627 [email protected] Abstract We combine the strengths of Bayesian modeling and synchronous grammar in unsupervised learning of basic translation phrase pairs. The structured space of a synchronous grammar is a natural fit for phrase pair probability estimation, though the search space can be prohibitively large. Therefore we explore efficient algorithms for pruning this space that lead to empirically effective results. Incorporating a sparse prior using Variational Bayes, biases the models toward generalizable, parsimonious parameter sets, leading to significant improvements in word alignment. This preference for sparse solutions together with effective pruning methods forms a phrase alignment regimen that produces better end-to-end translations than standard word alignment approaches. 1 Introduction Most state-of-the-art statistical machine translation systems are based on large phrase tables extracted from parallel text using word-level alignments. These word-level alignments are most often obtained using Expectation Maximization on the conditional generative models of Brown et al. (1993) and Vogel et al. (1996). As these word-level alignment models restrict the word alignment complexity by requiring each target word to align to zero or one source words, results are improved by aligning both source-to-target as well as target-to-source, then heuristically combining these alignments. Finally, the set of phrases consistent with the word alignments are extracted from every sentence pair; these form the basis of the decoding process. While this approach has been very successful, poor wordlevel alignments are nonetheless a common source of error in machine translation systems. A natural solution to several of these issues is unite the word-level and phrase-level models into one learning procedure. Ideally, such a procedure would remedy the deficiencies of word-level alignment models, including the strong restrictions on the form of the alignment, and the strong independence assumption between words. Furthermore it would obviate the need for heuristic combination of word alignments. A unified procedure may also improve the identification of non-compositional phrasal translations, and the attachment decisions for unaligned words. In this direction, Expectation Maximization at the phrase level was proposed by Marcu and Wong (2002), who, however, experienced two major difficulties: computational complexity and controlling overfitting. Computational complexity arises from the exponentially large number of decompositions of a sentence pair into phrase pairs; overfitting is a problem because as EM attempts to maximize the likelihood of its training data, it prefers to directly explain a sentence pair with a single phrase pair. In this paper, we attempt to address these two issues in order to apply EM above the word level. 
97 We attack computational complexity by adopting the polynomial-time Inversion Transduction Grammar framework, and by only learning small noncompositional phrases. We address the tendency of EM to overfit by using Bayesian methods, where sparse priors assign greater mass to parameter vectors with fewer non-zero values therefore favoring shorter, more frequent phrases. We test our model by extracting longer phrases from our model’s alignments using traditional phrase extraction, and find that a phrase table based on our system improves MT results over a phrase table extracted from traditional word-level alignments. 2 Phrasal Inversion Transduction Grammar We use a phrasal extension of Inversion Transduction Grammar (Wu, 1997) as the generative framework. Our ITG has two nonterminals: X and C, where X represents compositional phrase pairs that can have recursive structures and C is the preterminal over terminal phrase pairs. There are three rules with X on the left-hand side: X → [X X], X → ⟨X X⟩, X → C. The first two rules are the straight rule and inverted rule respectively. They split the left-hand side constituent which represents a phrase pair into two smaller phrase pairs on the right-hand side and order them according to one of the two possible permutations. The rewriting process continues until the third rule is invoked. C is our unique pre-terminal for generating terminal multi-word pairs: C → e/f. We parameterize our probabilistic model in the manner of a PCFG: we associate a multinomial distribution with each nonterminal, where each outcome in this distribution corresponds to an expansion of that nonterminal. Specifically, we place one multinomial distribution θX over the three expansions of the nonterminal X, and another multinomial distribution θC over the expansions of C. Thus, the parameters in our model can be listed as θX = (P⟨⟩, P[], PC), where P⟨⟩is for the inverted rule, P[] for the straight rule, PC for the third rule, satisfying P⟨⟩+P[]+PC = 1, and θC = (P(e/f), P(e′/f′), . . . ), where P e/f P(e/f) = 1 is a multinomial distribution over phrase pairs. This is our model in a nutshell. We can train this model using a two-dimensional extension of the inside-outside algorithm on bilingual data, assuming every phrase pair that can appear as a leaf in a parse tree of the grammar a valid candidate. However, it is easy to show that the maximum likelihood training will lead to the saturated solution where PC = 1 — each sentence pair is generated by a single phrase spanning the whole sentence. From the computational point of view, the full EM algorithm runs in O(n6) where n is the average length of the two input sentences, which is too slow in practice. The key is to control the number of parameters, and therefore the size of the set of candidate phrases. We deal with this problem in two directions. First we change the objective function by incorporating a prior over the phrasal parameters. This has the effect of preferring parameter vectors in θC with fewer non-zero values. Our second approach was to constrain the search space using simpler alignment models, which has the further benefit of significantly speeding up training. First we train a lower level word alignment model, then we place hard constraints on the phrasal alignment space using confident word links from this simpler model. 
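To make the parameterization just described concrete, here is a toy sketch (ours, not the authors' implementation) of the two multinomials θX and θC and of scoring one fully specified derivation; all names and numbers are illustrative, and a real system would work in log space over pruned charts.

```python
class PhrasalITG:
    """Toy container for the two multinomials of the phrasal ITG:
    theta_X over the three expansions of X (straight, inverted, terminal),
    theta_C over terminal phrase pairs e/f."""

    def __init__(self, theta_X, theta_C):
        self.theta_X = theta_X   # keys: 'straight', 'inverted', 'terminal'
        self.theta_C = theta_C   # keys: (english_phrase, foreign_phrase)

    def tree_prob(self, tree):
        """Probability of a fully specified derivation.  A tree is either
        ('terminal', (e, f)), which stands for X -> C -> e/f, or
        (order, left_subtree, right_subtree) with order in
        {'straight', 'inverted'}."""
        label = tree[0]
        if label == 'terminal':
            return self.theta_X['terminal'] * self.theta_C[tree[1]]
        return (self.theta_X[label]
                * self.tree_prob(tree[1])
                * self.tree_prob(tree[2]))

# Made-up numbers purely for illustration:
model = PhrasalITG(
    theta_X={'straight': 0.2, 'inverted': 0.1, 'terminal': 0.7},
    theta_C={('the house', 'das Haus'): 0.6, ('is green', 'ist grün'): 0.4},
)
tree = ('straight',
        ('terminal', ('the house', 'das Haus')),
        ('terminal', ('is green', 'ist grün')))
print(model.tree_prob(tree))   # 0.2 * (0.7*0.6) * (0.7*0.4) = 0.02352
```

In training, these parameters are re-estimated from expected rule counts computed by the bilingual inside-outside algorithm; both the sparse prior and the search-space constraints described above operate on exactly these two distributions.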
Combining the two approaches, we have a staged training procedure going from the simplest unconstrained word based model to a constrained Bayesian word-level ITG model, and finally proceeding to a constrained Bayesian phrasal model. 3 Variational Bayes for ITG Goldwater and Griffiths (2007) and Johnson (2007) show that modifying an HMM to include a sparse prior over its parameters and using Bayesian estimation leads to improved accuracy for unsupervised part-of-speech tagging. In this section, we describe a Bayesian estimator for ITG: we select parameters that optimize the probability of the data given a prior. The traditional estimation method for word 98 alignment models is the EM algorithm (Brown et al., 1993) which iteratively updates parameters to maximize the likelihood of the data. The drawback of maximum likelihood is obvious for phrase-based models. If we do not put any constraint on the distribution of phrases, EM overfits the data by memorizing every sentence pair. A sparse prior over a multinomial distribution such as the distribution of phrase pairs may bias the estimator toward skewed distributions that generalize better. In the context of phrasal models, this means learning the more representative phrases in the space of all possible phrases. The Dirichlet distribution, which is parameterized by a vector of real values often interpreted as pseudo-counts, is a natural choice for the prior, for two main reasons. First, the Dirichlet is conjugate to the multinomial distribution, meaning that if we select a Dirichlet prior and a multinomial likelihood function, the posterior distribution will again be a Dirichlet. This makes parameter estimation quite simple. Second, Dirichlet distributions with small, non-zero parameters place more probability mass on multinomials on the edges or faces of the probability simplex, distributions with fewer non-zero parameters. Starting from the model from Section 2, we propose the following Bayesian extension, where A ∼Dir(B) means the random variable A is distributed according to a Dirichlet with parameter B: θX | αX ∼Dir(αX), θC | αC ∼Dir(αC), [X X] ⟨X X⟩ C X ∼Multi(θX), e/f | C ∼Multi(θC). The parameters αX and αC control the sparsity of the two distributions in our model. One is the distribution of the three possible branching choices. The other is the distribution of the phrase pairs. αC is crucial, since the multinomial it is controlling has a high dimension. By adjusting αC to a very small number, we hope to place more posterior mass on parsimonious solutions with fewer but more confident and general phrase pairs. Having defined the Bayesian model, it remains to decide the inference procedure. We chose Variational Bayes, for its procedural similarity to EM and ease of implementation. Another potential option would be Gibbs sampling (or some other sampling technique). However, in experiments in unsupervised POS tag learning using HMM structured models, Johnson (2007) shows that VB is more effective than Gibbs sampling in approaching distributions that agree with the Zipf’s law, which is prominent in natural languages. Kurihara and Sato (2006) describe VB for PCFGs, showing the only need is to change the M step of the EM algorithm. As in the case of maximum likelihood estimation, Bayesian estimation for ITGs is very similar to PCFGs, which follows due to the strong isomorphism between the two models. 
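The ITG-specific update is spelled out next; as a generic sketch of the recipe (our own code, using SciPy's digamma), the only change from an EM M-step for a multinomial is that each expected count c is pushed through exp(ψ(c + α)) before normalization.

```python
import numpy as np
from scipy.special import digamma

def em_m_step(expected_counts):
    """Standard EM: relative-frequency re-estimate of a multinomial."""
    c = np.asarray(expected_counts, dtype=float)
    return c / c.sum()

def vb_m_step(expected_counts, alpha):
    """Variational Bayes with a symmetric Dirichlet(alpha) prior:
    replace each count c_k by exp(psi(c_k + alpha)) and normalize by
    exp(psi(sum_k c_k + K * alpha)), where K is the number of outcomes.
    The result need not sum exactly to one; that is intended."""
    c = np.asarray(expected_counts, dtype=float)
    K = len(c)
    weights = np.exp(digamma(c + alpha))
    return weights / np.exp(digamma(c.sum() + K * alpha))
```

With a very small α, fractional counts well below one are pushed toward zero, which is the anti-smoothing behaviour discussed below.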
Specific to our ITG case, the M step becomes:

\tilde{P}^{(l+1)}_{[\,]} = \frac{\exp(\psi(E(X \to [X\,X]) + \alpha_X))}{\exp(\psi(E(X) + s\alpha_X))},

\tilde{P}^{(l+1)}_{\langle\rangle} = \frac{\exp(\psi(E(X \to \langle X\,X \rangle) + \alpha_X))}{\exp(\psi(E(X) + s\alpha_X))},

\tilde{P}^{(l+1)}_{C} = \frac{\exp(\psi(E(X \to C) + \alpha_X))}{\exp(\psi(E(X) + s\alpha_X))},

\tilde{P}^{(l+1)}(e/f) = \frac{\exp(\psi(E(e/f) + \alpha_C))}{\exp(\psi(E(C) + m\alpha_C))},

where ψ is the digamma function (Beal, 2003), s = 3 is the number of right-hand-sides for X, and m is the number of observed phrase pairs in the data. The sole difference between EM and VB with a sparse prior α is that the raw fractional counts c are replaced by exp(ψ(c + α)), an operation that resembles smoothing. As pointed out by Johnson (2007), in effect this expression adds to c a small value that asymptotically approaches α − 0.5 as c approaches ∞, and 0 as c approaches 0. For small values of α the net effect is the opposite of typical smoothing, since it tends to redistribute probability mass away from unlikely events onto more likely ones.

4 Bitext Pruning Strategy

ITG is slow mainly because it considers every pair of spans in two sentences as a possible chart element. In reality, the set of useful chart elements is much smaller than the possible O(n^4), where n is the average sentence length. Pruning the span pairs (bitext cells) that can participate in a tree (either as terminals or non-terminals) serves not only to speed up ITG parsing, but also to provide a kind of initialization hint to the training procedures, encouraging them to focus on promising regions of the alignment space.

Given a bitext cell defined by the four boundary indices (i, j, l, m) as shown in Figure 1a, we prune based on a figure of merit V(i, j, l, m) approximating the utility of that cell in a full ITG parse.

[Figure 1: (a) shows the original tic-tac-toe score for a bitext cell (i, j, l, m). (b) demonstrates the finite state representation using the machine in (c), assuming a fixed source span (i, j).]

The figure of merit considers the Model 1 scores of not only the words inside a given cell, but also all the words not included in the source and target spans, as in Moore (2003) and Vogel (2005). Like Zhang and Gildea (2005), it is used to prune bitext cells rather than score phrases. The total score is the product of the Model 1 probabilities for each column; “inside” columns in the range [l, m] are scored according to the sum (or maximum) of Model 1 probabilities for [i, j], and “outside” columns use the sum (or maximum) of all probabilities not in the range [i, j]. Our pruning differs from Zhang and Gildea (2005) in two major ways. First, we perform pruning using both directions of the IBM Model 1 scores; instead of a single figure of merit V, we have two: V_F and V_B. Only those spans that pass the pruning threshold in both directions are kept. Second, we allow whole spans to be pruned. The figure of merit for a span is V_F(i, j) = max_{l,m} V_F(i, j, l, m). Only spans that are within some threshold of the unrestricted Model 1 scores V_F and V_B are kept:

V_F(i, j) / V_F ≥ τ_s and V_B(l, m) / V_B ≥ τ_s.

Amongst those spans retained by this first threshold, we keep only those bitext cells satisfying both

V_F(i, j, l, m) / V_F(i, j) ≥ τ_b and V_B(i, j, l, m) / V_B(l, m) ≥ τ_b.

4.1 Fast Tic-tac-toe Pruning

The tic-tac-toe pruning algorithm (Zhang and Gildea, 2005) uses dynamic programming to compute the product of inside and outside scores for all cells in O(n^4) time. However, even this can be slow for large values of n. Therefore we describe an improved algorithm with best-case n^3 performance.
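Before turning to the faster algorithm, here is a small sketch (ours) of the quantity being thresholded: the forward figure of merit for a single bitext cell, with inside target columns scored against the source span [i, j] and outside columns against the rest of the source sentence. The dictionary t_given_s[a][b], standing in for the Model 1 probability P(t_a | s_b), and the 1-based indexing are assumptions of the example.

```python
def column_score(a, src_positions, t_given_s, p_null):
    """Model 1 score of target position a: P(t_a|NULL) plus the sum of
    P(t_a|s_b) over the given source positions (the 'sum' variant)."""
    return p_null[a] + sum(t_given_s[a][b] for b in src_positions)

def cell_figure_of_merit(i, j, l, m, n_src, n_tgt, t_given_s, p_null):
    """V_F(i, j, l, m): product over all target columns a = 1..n_tgt.
    Columns inside [l, m] are scored against the source span [i, j];
    all other columns are scored against the source words outside [i, j]."""
    inside_src = list(range(i, j + 1))
    outside_src = [b for b in range(1, n_src + 1) if b < i or b > j]
    score = 1.0
    for a in range(1, n_tgt + 1):
        src = inside_src if l <= a <= m else outside_src
        score *= column_score(a, src, t_given_s, p_null)
    return score
```

The backward score V_B is obtained the same way with the two languages swapped, and a span or cell survives pruning only if both ratios above clear τ_s and τ_b. Evaluating this directly for every (i, j, l, m) is the O(n^4) computation; the improved algorithm below computes the same scores without explicitly scoring every cell in the best case.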
Although the worst case performance is also O(n4), in practice it is significantly faster. To begin, let us restrict our attention to the forward direction for a fixed source span (i, j). Pruning bitext spans and cells requires VF (i, j), the score of the best bitext cell within a given span, as well as all cells within a given threshold of that best score. For a fixed i and j, we need to search over the starting and ending points l and m of the inside region. Note that there is an isomorphism between the set of spans and a simple finite state machine: any span (l, m) can be represented by a sequence of l OUTSIDE columns, followed by m−l+1 INSIDE columns, followed by n −m + 1 OUTSIDE columns. This simple machine has the restricted form described in Figure 1c: it has three states, L, M, and R; each transition generates either an OUTSIDE column O or an INSIDE column I. The cost of generating an OUTSIDE at position a is O(a) = P(ta|NULL) + P b̸∈[i,j] P(ta|sb); likewise the cost of generating an INSIDE column is I(a) = P(ta|NULL) + P b∈[i,j] P(ta|sb), with 100 O(0) = O(n + 1) = 1 and I(0) = I(n + 1) = 0. Directly computing O and I would take time O(n2) for each source span, leading to an overall runtime of O(n4). Luckily there are faster ways to find the inside and outside scores. First we can precompute following arrays in O(n2) time and space: pre[0, l] := P(tl|NULL) pre[i, l] := pre[i −1, l] + P(tl|si) suf[n + 1, l] := 0 suf[i, l] := suf[i + 1, l] + P(tl|si) Then for any (i, j), O(a) = P(ta|NULL) + P b̸∈[i,j] P(ta|sb) = pre[i −1, a] + suf[j + 1, a]. I(a) can be incrementally updated as the source span varies: when i = j, I(a) = P(ta|NULL) + P(ta|si). As j is incremented, we add P(ta|sj) to I(a). Thus we have linear time updates for O and I. We can then find the best scoring sequence using the familiar Viterbi algorithm. Let δ[a, σ] be the cost of the best scoring sequence ending at in state σ at time a: δ[0, σ] := 1 if σ = L; 0 otherwise δ[a, L] := δ[a −1, L] · O(a) δ[a, M] := max σ∈L,M{δ[a −1, σ]} · I(a) δ[a, R] := max σ∈M,R{δ[a −1, σ]} · O(a) Then VF (i, j) = δ[n + 1, R], using the isomorphism between state sequences and spans. This linear time algorithm allows us to compute span pruning in O(n3) time. The same algorithm may be performed using the backward figure of merit after transposing rows and columns. Having cast the problem in terms of finite state automata, we can use finite state algorithms for pruning. For instance, fixing a source span we can enumerate the target spans in decreasing order by score (Soong and Huang, 1991), stopping once we encounter the first span below threshold. In practice the overhead of maintaining the priority queue outweighs any benefit, as seen in Figure 2. An alternate approach that avoids this overhead is to enumerate spans by position. Note that δ[m, R] · Qn a=m+1 O(a) is within threshold iff there is a span with right boundary m′ < m within threshold. Furthermore if δ[m, M] · Qn a=m+1 O(a) is 0 100 200 300 400 500 600 700 800 900 10 20 30 40 50 Pruning time (thousands of seconds) Average sentence length Baseline k-best Fast Figure 2: Speed comparison of the O(n4) tic-tac-toe pruning algorithm, the A* top-x algorithm, and the fast tic-tac-toe pruning. All produce the same set of bitext cells, those within threshold of the best bitext cell. within threshold, then m is the right boundary within threshold. Using these facts, we can gradually sweep the right boundary m from n toward 1 until the first condition fails to hold. 
For each value where the second condition holds, we pause to search for the set of left boundaries within threshold. Likewise for the left edge, δ[l, M] · Qm a=l+1 I(a) · Qn a=m+1 O(a) is within threshold iff there is some l′ < l identifying a span (l′, m) within threshold. Finally if V (i, j, l, m) = δ[l −1, L] · Qm a=l I(a) · Qn a=m+1 O(a) is within threshold, then (i, j, l, m) is a bitext cell within threshold. For right edges that are known to be within threshold, we can sweep the left edges leftward until the first condition no longer holds, keeping only those spans for which the second condition holds. The filtering algorithm behaves extremely well. Although the worst case runtime is still O(n4), the best case has improved to n3; empirically it seems to significantly reduce the amount of time spent exploring spans. Figure 2 compares the speed of the fast tic-tac-toe algorithm against the algorithm in Zhang and Gildea (2005). 101 Figure 3: Example output from the ITG using non-compositional phrases. (a) is the Viterbi alignment from the wordbased ITG. The shaded regions indicate phrasal alignments that are allowed by the non-compositional constraint; all other phrasal alignments will not be considered. (b) is the Viterbi alignment from the phrasal ITG, with the multi-word alignments highlighted. 5 Bootstrapping Phrasal ITG from Word-based ITG This section introduces a technique that bootstraps candidate phrase pairs for phrase-based ITG from word-based ITG Viterbi alignments. The wordbased ITG uses the same expansions for the nonterminal X, but the expansions of C are limited to generate only 1-1, 1-0, and 0-1 alignments: C → e/f, C → e/ǫ, C → ǫ/f where ǫ indicates that no word was generated. Broadly speaking, the goal of this section is the same as the previous section, namely, to limit the set of phrase pairs that needs to be considered in the training process. The tic-tac-toe pruning relies on IBM model 1 for scoring a given aligned area. In this part, we use word-based ITG alignments as anchor points in the alignment space to pin down the potential phrases. The scope of iterative phrasal ITG training, therefore, is limited to determining the boundaries of the phrases anchored on the given one-toone word alignments. The heuristic method is based on the NonCompositional Constraint of Cherry and Lin (2007). Cherry and Lin (2007) use GIZA++ intersections which have high precision as anchor points in the bitext space to constraint ITG phrases. We use ITG Viterbi alignments instead. The benefit is two-fold. First of all, we do not have to run a GIZA++ aligner. Second, we do not need to worry about non-ITG word alignments, such as the (2, 4, 1, 3) permutation patterns. GIZA++ does not limit the set of permutations allowed during translation, so it can produce permutations that are not reachable using an ITG. Formally, given a word-based ITG alignment, the bootstrapping algorithm finds all the phrase pairs according to the definition of Och and Ney (2004) and Chiang (2005) with the additional constraint that each phrase pair contains at most one word link. Mathematically, let e(i, j) count the number of word links that are emitted from the substring ei...j, and f(l, m) count the number of word links emitted from the substring fl...m. The non-compositional phrase pairs satisfy e(i, j) = f(l, m) ≤1. Figure 3 (a) shows all possible non-compositional phrases given the Viterbi word alignment of the example sentence pair. 
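As a small illustration (ours) of that constraint, the following sketch enumerates candidate phrase pairs that are consistent with a set of one-to-one word links and emit at most one link, i.e. e(i, j) = f(l, m) ≤ 1. The link representation, the 1-based indexing, and the length bound max_len are assumptions of the example.

```python
def e_count(links, i, j):
    """e(i, j): number of word links emitted from source words i..j."""
    return sum(1 for s, t in links if i <= s <= j)

def f_count(links, l, m):
    """f(l, m): number of word links emitted from target words l..m."""
    return sum(1 for s, t in links if l <= t <= m)

def consistent(links, i, j, l, m):
    """No word link may cross the box: every link is either entirely inside
    [i, j] x [l, m] or entirely outside it."""
    return all((i <= s <= j) == (l <= t <= m) for s, t in links)

def non_compositional_pairs(links, n_src, n_tgt, max_len=3):
    """All phrase pairs (i, j, l, m) consistent with the word alignment
    that contain at most one word link: e(i, j) = f(l, m) <= 1."""
    pairs = []
    for i in range(1, n_src + 1):
        for j in range(i, min(i + max_len - 1, n_src) + 1):
            for l in range(1, n_tgt + 1):
                for m in range(l, min(l + max_len - 1, n_tgt) + 1):
                    if (consistent(links, i, j, l, m)
                            and e_count(links, i, j) == f_count(links, l, m) <= 1):
                        pairs.append((i, j, l, m))
    return pairs
```

These boxes are the candidate terminal phrase pairs for the phrasal ITG; longer phrases are assembled afterwards by the usual phrase-extraction procedure.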
6 Summary of the Pipeline We summarize the pipeline of our system, demonstrating the interactions between the three main contributions of this paper: Variational Bayes, tic-tactoe pruning, and word-to-phrase bootstrapping. We 102 start from sentence-aligned bilingual data and run IBM Model 1 in both directions to obtain two translation tables. Then we use the efficient bidirectional tic-tac-toe pruning to prune the bitext space within each of the sentence pairs; ITG parsing will be carried out on only this this sparse set of bitext cells. The first stage of training is word-based ITG, using the standard iterative training procedure, except VB replaces EM to focus on a sparse prior. After several training iterations, we obtain the Viterbi alignments on the training data according to the final model. Now we transition into the second stage – the phrasal training. Before the training starts, we apply the non-compositional constraints over the pruned bitext space to further constrain the space of phrase pairs. Finally, we run phrasal ITG iterative training using VB for a certain number of iterations. In the end, a Viterbi pass for the phrasal ITG is executed to produce the non-compositional phrasal alignments. From this alignment, phrase pairs are extracted in the usual manner, and a phrase-based translation system is trained. 7 Experiments The training data was a subset of 175K sentence pairs from the NIST Chinese-English training data, automatically selected to maximize character-level overlap with the source side of the test data. We put a length limit of 35 on both sides, producing a training set of 141K sentence pairs. 500 Chinese-English pairs from this set were manually aligned and used as a gold standard. 7.1 Word Alignment Evaluation First, using evaluations of alignment quality, we demonstrate the effectiveness of VB over EM, and explore the effect of the prior. Figure 4 examines the difference between EM and VB with varying sparse priors for the word-based model of ITG on the 500 sentence pairs, both after 10 iterations of training. Using EM, because of overfitting, AER drops first and increases again as the number of iterations varies from 1 to 10. The lowest AER using EM is achieved after the second iteration, which is .40. At iteration 10, AER for EM increases to .42. On the other hand, using VB, AER decreases monotonically over the 10 iterations and 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.55 0.6 1e-009 1e-006 0.001 1 AER Prior value VB EM Figure 4: AER drops as αC approaches zero; a more sparse solution leads to better results. stabilizes at iteration 10. When αC is 1e −9, VB gets AER close to .35 at iteration 10. As we increase the bias toward sparsity, the AER decreases, following a long slow plateau. Although the magnitude of improvement is not large, the trend is encouraging. These experiments also indicate that a very sparse prior is needed for machine translation tasks. Unlike Johnson (2007), who found optimal performance when α was approximately 10−4, we observed monotonic increases in performance as α dropped. The dimensionality of this MT problem is significantly larger than that of the sequence problem, though, therefore it may take a stronger push from the prior to achieve the desired result. 7.2 End-to-end Evaluation Given an unlimited amount of time, we would tune the prior to maximize end-to-end performance, using an objective function such as BLEU. Unfortunately these experiments are very slow. 
Since we observed monotonic increases in alignment performance with smaller values of αC, we simply fixed the prior at a very small value (10−100) for all translation experiments. We do compare VB against EM in terms of final BLEU scores in the translation experiments to ensure that this sparse prior has a sig103 nificant impact on the output. We also trained a baseline model with GIZA++ (Och and Ney, 2003) following a regimen of 5 iterations of Model 1, 5 iterations of HMM, and 5 iterations of Model 4. We computed Chinese-toEnglish and English-to-Chinese word translation tables using five iterations of Model 1. These values were used to perform tic-tac-toe pruning with τb = 1 × 10−3 and τs = 1 × 10−6. Over the pruned charts, we ran 10 iterations of word-based ITG using EM or VB. The charts were then pruned further by applying the non-compositional constraint from the Viterbi alignment links of that model. Finally we ran 10 iterations of phrase-based ITG over the residual charts, using EM or VB, and extracted the Viterbi alignments. For translation, we used the standard phrasal decoding approach, based on a re-implementation of the Pharaoh system (Koehn, 2004). The output of the word alignment systems (GIZA++ or ITG) were fed to a standard phrase extraction procedure that extracted all phrases of length up to 7 and estimated the conditional probabilities of source given target and target given source using relative frequencies. Thus our phrasal ITG learns only the minimal non-compositional phrases; the standard phrase-extraction algorithm learns larger combinations of these minimal units. In addition the phrases were annotated with lexical weights using the IBM Model 1 tables. The decoder also used a trigram language model trained on the target side of the training data, as well as word count, phrase count, and distortion penalty features. Minimum Error Rate training (Och, 2003) over BLEU was used to optimize the weights for each of these models over the development test data. We used the NIST 2002 evaluation datasets for tuning and evaluation; the 10-reference development set was used for minimum error rate training, and the 4-reference test set was used for evaluation. We trained several phrasal translation systems, varying only the word alignment (or phrasal alignment) method. Table 1 compares the four systems: the GIZA++ baseline, the ITG word-based model, the ITG multiword model using EM training, and the ITG multiword model using VB training. ITG-mwm-VB is our best model. We see an improvement of nearly Development Test GIZA++ 37.46 28.24 ITG-word 35.47 26.55 ITG-mwm (VB) 39.21 29.02 ITG-mwm (EM) 39.15 28.47 Table 1: Translation results on Chinese-English, using the subset of training data (141K sentence pairs) that have length limit 35 on both sides. (No length limit in translation. ) 2 points dev set and nearly 1 point of improvement on the test set. We also observe the consistent superiority of VB over EM. The gain is especially large on the test data set, indicating VB is less prone to overfitting. 8 Conclusion We have presented an improved and more efficient method of estimating phrase pairs directly. By both changing the objective function to include a bias toward sparser models and improving the pruning techniques and efficiency, we achieve significant gains on test data with practical speed. In addition, these gains were shown without resorting to external models, such as GIZA++. We have shown that VB is both practical and effective for use in MT models. 
However, our best system does not apply VB to a single probability model, as we found an appreciable benefit from bootstrapping each model from simpler models, much as the IBM word alignment models are usually trained in succession. We find that VB alone is not sufficient to counteract the tendency of EM to prefer analyses with smaller trees using fewer rules and longer phrases. Both the tic-tac-toe pruning and the non-compositional constraint address this problem by reducing the space of possible phrase pairs. On top of these hard constraints, the sparse prior of VB helps make the model less prone to overfitting to infrequent phrase pairs, and thus improves the quality of the phrase pairs the model learns. Acknowledgments This work was done while the first author was at Microsoft Research; thanks to Xiaodong He, Mark Johnson, and Kristina Toutanova. The last author was supported by NSF IIS-0546554. 104 References Matthew Beal. 2003. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311, June. Colin Cherry and Dekang Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation, pages 17–24, Rochester, New York, April. Association for Computational Linguistics. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263–270, Ann Arbor, Michigan, USA. Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 744–751, Prague, Czech Republic, June. Association for Computational Linguistics. Mark Johnson. 2007. Why doesn’t EM find good HMM POS-taggers? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296–305. Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proceedings of the 6th Conference of the Association for Machine Translation in the Americas (AMTA), pages 115–124, Washington, USA, September. Kenichi Kurihara and Taisuke Sato. 2006. Variational bayesian grammar induction for natural language. In International Colloquium on Grammatical Inference, pages 84–96, Tokyo, Japan. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP). Robert C. Moore. 2003. Learning translations of namedentity phrases from parallel corpora. In Proceedings of EACL, Budapest, Hungary. Franz Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51, March. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449, December. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167, Sapporo, Japan. Frank Soong and Eng Huang. 1991. 
A tree-trellis based fast search for finding the n best sentence hypotheses in continuous speech recognition. In Proceedings of ICASSP 1991. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of COLING, pages 836–841, Copenhagen, Denmark. Stephan Vogel. 2005. PESA: Phrase pair extraction as sentence splitting. In MT Summit X, Phuket, Thailand. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403, September. Hao Zhang and Daniel Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In Proceedings of ACL.
2008
12
Proceedings of ACL-08: HLT, pages 106–113, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Applying a Grammar-based Language Model to a Simplified Broadcast-News Transcription Task Tobias Kaufmann Speech Processing Group ETH Z¨urich Z¨urich, Switzerland [email protected] Beat Pfister Speech Processing Group ETH Z¨urich Z¨urich, Switzerland [email protected] Abstract We propose a language model based on a precise, linguistically motivated grammar (a hand-crafted Head-driven Phrase Structure Grammar) and a statistical model estimating the probability of a parse tree. The language model is applied by means of an N-best rescoring step, which allows to directly measure the performance gains relative to the baseline system without rescoring. To demonstrate that our approach is feasible and beneficial for non-trivial broad-domain speech recognition tasks, we applied it to a simplified German broadcast-news transcription task. We report a significant reduction in word error rate compared to a state-of-the-art baseline system. 1 Introduction It has repeatedly been pointed out that N-grams model natural language only superficially: an Nthorder Markov chain is a very crude model of the complex dependencies between words in an utterance. More accurate statistical models of natural language have mainly been developed in the field of statistical parsing, e.g. Collins (2003), Charniak (2000) and Ratnaparkhi (1999). Other linguistically inspired language models like Chelba and Jelinek (2000) and Roark (2001) have been applied to continuous speech recognition. These models have in common that they explicitly or implicitly use a context-free grammar induced from a treebank, with the exception of Chelba and Jelinek (2000). The probability of a rule expansion or parser operation is conditioned on various contextual information and the derivation history. An important reason for the success of these models is the fact that they are lexicalized: the probability distributions are also conditioned on the actual words occuring in the utterance, and not only on their parts of speech. Most statistical parsers achieve a high robustness with respect to out-of-grammar sentences by allowing for arbitrary derivations and rule expansions. On the other hand, they are not suited to reliably decide on the grammaticality of a given phrase, as they do not accurately model the linguistic constraints inherent in natural language. We take a completely different position. In the first place, we want our language model to reliably distinguish between grammatical and ungrammatical phrases. To this end, we have developed a precise, linguistically motivated grammar. To distinguish between common and uncommon phrases, we use a statistical model that estimates the probability of a phrase based on the syntactic dependencies established by the parser. We achieve some degree of robustness by letting the grammar accept arbitrary sequences of words and phrases. To keep the grammar restrictive, such sequences are penalized by the statistical model. Accurate hand-crafted grammars have been applied to speech recognition before, e.g. Kiefer et al. (2000) and van Noord et al. (1999). However, they primarily served as a basis for a speech understanding component and were applied to narrowdomain tasks such as appointment scheduling or public transport information. We are mainly concerned with speech recognition performance on broad-domain recognition tasks. Beutler et al. (2005) pursued a similar approach. 
106 However, their grammar-based language model did not make use of a probabilistic component, and it was applied to a rather simple recognition task (dictation texts for pupils read and recorded under good acoustic conditions, no out-of-vocabulary words). Besides proposing an improved language model, this paper presents experimental results for a much more difficult and realistic task and compares them to the performance of a state-of-the-art baseline system. In the following Section, we will first describe our grammar-based language model. Next, we will turn to the linguistic components of the model, namely the grammar, the lexicon and the parser. We will point out some of the challenges arising from the broad-domain speech recognition application and propose ways to deal with them. Finally, we will describe our experiments on broadcast news data and discuss the results. 2 Language Model 2.1 The General Approach Speech recognizers choose the word sequence ˆW which maximizes the posterior probability P(W|O), where O is the acoustic observation. This is achieved by optimizing ˆW = argmax W P(O|W) · P(W)λ · ip|W| (1) The language model weight λ and the word insertion penalty ip lead to a better performance in practice, but they have no theoretical justification. Our grammar-based language model is incorporated into the above expression as an additional probability Pgram(W), weighted by a parameter µ: ˆW = argmax W P(O|W)·P(W)λ·Pgram(W)µ·ip|W| (2) Pgram(W) is defined as the probability of the most likely parse tree of a word sequence W: Pgram(W) = max T∈parses(W) P(T) (3) To determine Pgram(W) is an expensive operation as it involves parsing. For this reason, we pursue an N-best rescoring approach. We first produce the N best hypotheses according to the criterion in equation (1). From these hypotheses we then choose the final recognition result according to equation (2). 2.2 The Probability of a Parse Tree The parse trees produced by our parser are binarybranching and rather deep. In order to compute the probability of a parse tree, it is transformed to a flat dependency tree similar to the syntax graph representation used in the TIGER treebank Brants et al (2002). An inner node of such a dependency tree represents a constituent or phrase. Typically, it directly connects to a leaf node representing the most important word of the phrase, the head child. The other children represent phrases or words directly depending on the head child. To give an example, the immediate children of a sentence node are the finite verb (the head child), the adverbials, the subject and the all other (verbal and non-verbal) complements. This flat structure has the advantage that the information which is most relevant for the head child is represented within the locality of an inner node. Assuming statistical independence between the internal structures of the inner nodes ni, we can factor P(T) much like it is done for probabilistic contextfree grammars: P(T) ≈ Y ni P( childtags(ni) | tag(ni) ) (4) In the above equation, tag(ni) is simply the label assigned to the tree node ni, and childtags(ni) denotes the tags assigned to the child nodes of ni. Our statistical model for German sentences distinguishes between eight different tags. Three tags are used for different types of noun phrases: pronominal NPs, non-pronominal NPs and prenominal genitives. Prenominal genitives were given a dedicated tag because they are much more restricted than ordinary NPs. 
Another two tags were used to distinguish between clauses with sentence-initial finite verbs (main clauses) and clauses with sentence-final finite verbs (subordinate clauses). Finally, there are specific tags for infinitive verb phrases, adjective phrases and prepositional phrases. P was modeled by means of a dedicated probability distribution for each conditioning tag. The probability of the internal structure of a sentence was modeled as the trigram probability of the corresponding tag sequence (the sequence of the sentence node’s child tags). The probability of an adjective phrase was decomposed into the probability 107 of the adjective type (participle or non-participle and attributive, adverbial or predicative) and the probability of its length in words given the adjective type. This allows the model to directly penalize long adjective phrases, which are very rare. The model for noun phrases is based on the joint probability of the head type (either noun, adjective or proper name), the presence of a determiner and the presence of preand postnominal modifiers. The probabilities of various other events are conditioned on those four variables, namely the number of prepositional phrases, relative clauses and adjectives, as well as the presence of appositions and prenominal or postnominal genitives. The resulting probability distributions were trained on the German TIGER treebank which consists of about 50000 sentences of newspaper text. 2.3 Robustness Issues A major problem of grammar-based approaches to language modeling is how to deal with out-ofgrammar utterances. Obviously, the utterance to be recognized may be ungrammatical, or it could be grammatical but not covered by the given grammar. But even if the utterance is both grammatical and covered by the grammar, the correct word sequence may not be among the N best hypotheses due to out-of-vocabulary words or bad acoustic conditions. In all these cases, the best hypothesis available is likely to be out-of-grammar, but the language model should nevertheless prefer it to competing hypotheses. To make things worse, it is not unlikely that some of the competing hypotheses are grammatical. It is therefore important that our language model is robust with respect to out-of-grammar sentences. In particular this means that it should provide a reasonable parse tree for any possible word sequence W. However, our approach is to use an accurate, linguistically motivated grammar, and it is undesirable to weaken the constraints encoded in the grammar. Instead, we allow the parser to attach any sequence of words or correct phrases to the root node, where each attachment is penalized by the probabilistic model P(T). This can be thought of as adding two probabilistic context-free rules: S −→S′ S with probability q S −→S′ with probability 1−q In order to guarantee that all possible word sequences are parseable, S′ can produce both saturated phrases and arbitrary words. To include such a productive set of rules into the grammar would lead to serious efficiency problems. For this reason, these rules were actually implemented as a dynamic programming pass: after the parser has identified all correct phrases, the most probable sequence of phrases or words is computed. 2.4 Model Parameters Besides the distributions required to specify P(T), our language model has three parameters: the language model weight µ, the attachment probability q and the number of hypotheses N. The parameters µ and q are considered to be task-dependent. 
For instance, if the utterances are well-covered by the grammar and the acoustic conditions are good, it can be expected that µ is relatively large and that q is relatively small. The choice of N is restricted by the available computing power. For our experiments, we chose N = 100. The influence of N on the word error rate is discussed in the results section.

3 Linguistic Resources

3.1 Particularities of the Recognizer Output

The linguistic resources presented in this Section are partly influenced by the form of the recognizer output. In particular, the speech recognizer does not always transcribe numbers, compounds and acronyms as single words. For instance, the word "einundzwanzig" (twenty-one) is transcribed as "ein und zwanzig", "Kriegspläne" (war plans) as "Kriegs Pläne" and "BMW" as "B. M. W." These transcription variants are considered to be correct by our evaluation scheme. Therefore, the grammar should accept them as well.

3.2 Grammar and Parser

We used the Head-driven Phrase Structure Grammar (HPSG, see Pollard and Sag (1994)) formalism to develop a precise large-coverage grammar for German. HPSG is an unrestricted grammar (Chomsky type 0) which is based on a context-free skeleton and the unification of complex feature structures. There are several variants of HPSG which mainly differ in the formal tools they provide for stating linguistic constraints. Our particular variant requires that constituents (phrases) be continuous, but it provides a mechanism for dealing with discontinuities as present e.g. in the German main clause, see Kaufmann and Pfister (2007).
The various subgrammars (expressions of date and time, written numbers, noun-noun compounds and acronyms) amount to a total of 43 rules. The grammar allows the derivation of “intermediate products” which cannot be regarded as complete phrases. We consider complete phrases to be sentences, subordinate clauses, relative and interrogative clauses, noun phrases, prepositional phrases, adjective phrases and expressions of date and time. 3.3 Lexicon The lexicon was created manually based on a list of more than 5000 words appearing in the N-best lists of our experiment. As the domain of our recognition task is very broad, we attempted to include any possible reading of a given word. Our main source of dictionary information was Duden (1999). Each word was annotated with precise morphological and syntactic information. For example, the roughly 2700 verbs were annotated with over 7000 valency frames. We distinguish 86 basic valency frames, for most of which the complement types can be further specified. A major difficulty was the acquisition of multiword lexemes. Slightly deviating from the common notion, we use the following definition: A syntactic unit consisting of two or more words is a multiword lexeme, if the grammar cannot derive it from its parts. English examples are idioms like “by and large” and phrasal verbs such as “to call sth off”. Such multi-word lexemes have to be entered into the lexicon, but they cannot directly be identified in the word list. Therefore, they have to be extracted from supplementary resources. For our work, we used a newspaper text corpus of 230M words (Frankfurter Rundschau and Neue Z¨urcher Zeitung). This corpus included only articles which are dated before the first broadcast news show used in the experiment. In the next few paragraphs we will discuss some types of multiword lexemes and our methods of extracting them. There is a large and very productive class of German prefix verbs whose prefixes can appear separated from the verb, similar to English phrasal verbs. For example, the prefix of the verb “untergehen” (to sink) is separated in “das Schiff geht unter” (the ship sinks) and attached in “weil das Schiff untergeht” (because the ship sinks). The set of possible valency frames of a prefix verb has to be looked up in a dictionary as it cannot be derived systematically from its parts. Exploiting the fact that prefixes are attached to their verb under certain circumstances, we extracted a list of prefix verbs from the above newspaper text corpus. As the number of prefix verbs is 109 very large, a candidate prefix verb was included into the lexicon only if there is a recognizer hypothesis in which both parts are present. Note that this procedure does not amount to optimizing on test data: when parsing a hypothesis, the parser chart contains only those multiword lexemes for which all parts are present in the hypothesis. Other multi-word lexemes are fixed word clusters of various types. For instance, some prepositional phrases appearing in support verb constructions lack an otherwise mandatory determiner, e.g. “unter Beschuss” (under fire). Many multi-word lexemes are adverbials, e.g. “nach wie vor” (still), “auf die Dauer” (in the long run). To extract such word clusters we used suffix arrays proposed in Yamamoto and Church (2001) and the pointwise mutual information measure, see Church and Hanks (1990). Again, it is feasible to consider only those clusters appearing in some recognizer hypothesis. 
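The word-cluster extraction can be illustrated with a small pointwise-mutual-information scorer over adjacent word pairs. This is only a sketch of the Church and Hanks (1990) measure: the suffix-array handling of longer clusters, the restriction to pairs occurring in recognizer hypotheses, and the min_count cutoff below are simplifications or assumptions.

```python
import math
from collections import Counter

def pmi_clusters(tokens, min_count=5):
    """Score adjacent word pairs with pointwise mutual information.
    Simplified sketch: longer clusters (via suffix arrays) and the filtering
    against recognizer hypotheses described in the paper are omitted."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = []
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        # PMI = log [ P(w1 w2) / (P(w1) * P(w2)) ]
        pmi = math.log((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scored.append((pmi, c, w1 + " " + w2))
    return sorted(scored, reverse=True)

# toy usage: the fixed cluster "nach wie vor" surfaces via its high-PMI pairs
text = ("das ist nach wie vor unklar und nach wie vor offen "
        "und das ist gut und das ist klar").split()
for pmi, count, pair in pmi_clusters(text, min_count=2)[:3]:
    print(f"{pair:12s} count={count}  pmi={pmi:.2f}")
```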
The list of candidate clusters was reduced using different filter heuristics and finally checked manually. For our task, split compounds are to be considered as multi-word lexemes as well. As our grammar only models noun-noun compounds, other compounds such as “unionsgef¨uhrt” (led by the union) have to be entered into the lexicon. We applied the decompounding algorithm proposed in AddaDecker (2003) to our corpus to extract such compounds. The resulting candidate list was again filtered manually. We observed that many proper nouns (e.g. personal names and geographic names) are identical to some noun, adjective or verb form. For example, about 40% of the nouns in our lexicon share inflected forms with personal names. Proper nouns considerably contribute to ambiguity, as most of them do not require a determiner. Therefore, a proper noun which is a homograph of an open-class word was entered only if it is “relevant” for our task. The “relevant” proper nouns were extracted automatically from our text corpus. We used small databases of unambiguous given names and forms of address to spot personal names in significant bigrams. Relevant geographic names were extracted by considering capitalized words which significantly often follow certain local prepositions. The final lexicon contains about 2700 verbs (including 1900 verbs with separable prefixes), 3500 nouns, 450 adjectives, 570 closed-class words and 220 multiword lexemes. All lexicon entries amount to a total of 137500 full forms. Noun-noun compounds are not included in these numbers, as they are handled in a morphological analysis component. 4 Experiments 4.1 Experimental Setup The experiment was designed to measure how much a given speech recognition system can benefit from our grammar-based language model. To this end, we used a baseline speech recognition system which provided the N best hypotheses of an utterance along with their respective scores. The grammarbased language model was then applied to the N best hypotheses as described in Section 2.1, yielding a new best hypothesis. For a given test set we could then compare the word error rate of the baseline system with that of the extended system employing the grammar-based language model. 4.2 Data and Preprocessing Our experiments are based on word lattice output from the LIMSI German broadcast news transcription system (McTait and Adda-Decker, 2003), which employs 4-gram backoff language models. From the experiment reported in McTait and AddaDecker (2003), we used the first three broadcast news shows1 which corresponds to a signal length of roughly 50 minutes. Rather than applying our model to the original broadcast-news transcription task, we used the above data to create an artificial recognition task with manageable complexity. Our primary aim was to design a task which allows us to investigate the properties of our grammar-based approach and to compare its performance with that of a competitive baseline system. As a first simplification, we assumed perfect sentence segmentation. We manually split the original word lattices at the sentence boundaries and merged them where a sentence crossed a lattice boundary. This resulted in a set of 636 lattices (sentences). Second, we classified the sentences with respect to content type and removed those classes with an excep1The 8 o’clock broadcasts of the “Tagesschau” from the 14th of April, 21st of April and 7th of Mai 2002. 110 tionally high baseline word error rate. 
These classes are interviews (a word error rate of 36.1%), sports reports (28.4%) and press conferences (25.7%). The baseline word error rate of the remaining 447 lattices (sentences) is 11.8%. From each of these 447 lattices, the 100 best hypotheses were extracted. We next compiled a list containing all words present in the recognizer hypotheses. These words were entered into the lexicon as described in Section 3.3. Finally, all extracted recognizer hypotheses were parsed. Only 25 of the 44000 hypotheses2 caused an early termination of the parser due to the imposed memory limits. However, the inversion of ambiguity packing (see Section 3.2) turned out to be a bottleneck. As P(T) does not directly apply to parse trees, all possible readings have to be unpacked. For 24 of the 447 lattices, some of the N best hypotheses contained phrases with more than 1000 readings. For these lattices the grammar-based language model was simply switched off in the experiment, as no parse trees were produced for efficiency reasons. To assess the difficulty of our task, we inspected the reference transcriptions, the word lattices and the N-best lists for the 447 selected utterances. We found that for only 59% of the utterances the correct transcription is among the 100-best hypotheses. The first-best hypothesis is completely correct for 34% of the utterances. The out-of-vocabulary rate (estimated from the number of reference transcription words which do not appear in any of the lattices) is 1.7%. The first-best word error rate is 11.79%, and the 100-best oracle word error rate is 4.8%. We further attempted to judge the grammaticality of the reference transcriptions. We considered only 1% of the sentences to be clearly ungrammatical. 19% of the remaining sentences were found to contain general grammatical constructions which are not handled by our grammar. Some of these constructions (most notably ellipses, which are omnipresent in broadcast-news reports) are notoriously difficult as they would dramatically increase ambiguity when implemented in a grammar. About 45% of the reference sentences were correctly analyzed by the grammar. 2Some of the word lattices contain less than 100 different hypotheses. 4.3 Training and Testing The parameter N, the maximum number of hypotheses to be considered, was set to 100 (the effect of choosing different values of N will be discussed in section 4.4). The remaining parameters µ and q were trained using the leave-one-out crossvalidation method: each of the 447 utterances served as the single test item once, whereas the remaining 446 utterances were used for training. As the error landscape is complex and discrete, we could not use gradient-based optimization methods. Instead, we chose µ and q from 500 equidistant points within the intervals [0, 20] and [0, 0.25], respectively. The word error rate was evaluated for each possible pair of parameter values. The evaluation scheme was taken from McTait and Adda-Decker (2003). It ignores capitalization, and written numbers, compounds and acronyms need not be written as single words. 4.4 Results As shown in Table 1, the grammar-based language model reduced the word error rate by 9.2% relative over the baseline system. This improvement is statistically significant on a level of < 0.1% for both the Matched Pairs Sentence-Segment Word Error test (MAPSSWE) and McNemar’s test (Gillick and Cox, 1989). If the parameters are optimized on all 447 sentences (i.e. on the test data), the word error rate is reduced by 10.7% relative. 
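The rescoring and parameter training just described can be made concrete with a short sketch. The combined score is assumed to be the baseline recognizer score plus µ times the grammar log-probability (the exact combination is defined in Section 2.1), and errors_of is a hypothetical helper that rescores an utterance with given (µ, q) and counts its word errors; the code favours clarity over speed.

```python
def rescore(hypotheses, mu):
    """Pick the best hypothesis from an N-best list, assuming the combined
    score is  baseline_logscore + mu * grammar_logprob  (field names and the
    exact form of the combination are assumptions; see Section 2.1)."""
    return max(hypotheses, key=lambda h: h["baseline_logscore"] + mu * h["grammar_logprob"])

def loo_word_error_rate(utterances, errors_of, n_points=500):
    """Leave-one-out grid search over mu in [0, 20] and q in [0, 0.25] with
    n_points equidistant values each, as in Section 4.3.  errors_of(utt, mu, q)
    is a hypothetical helper that parses with attachment probability q,
    rescores the utterance's N-best list (e.g. via rescore above) and returns
    (word_errors, reference_length).  Written for clarity, not speed."""
    mus = [20.0 * i / (n_points - 1) for i in range(n_points)]
    qs = [0.25 * i / (n_points - 1) for i in range(n_points)]
    total_err = total_len = 0
    for held_out in range(len(utterances)):
        train = [u for i, u in enumerate(utterances) if i != held_out]
        # choose the (mu, q) pair minimising the error count on the training part ...
        best_mu, best_q = min(
            ((mu, q) for mu in mus for q in qs),
            key=lambda p: sum(errors_of(u, p[0], p[1])[0] for u in train),
        )
        # ... and score the held-out utterance with those parameters
        errors, length = errors_of(utterances[held_out], best_mu, best_q)
        total_err += errors
        total_len += length
    return total_err / total_len

# toy check with a fake error counter: any mu above 10 removes the error
fake = lambda utt, mu, q: ((0 if mu > 10 else 1), 1)
print(loo_word_error_rate(["u1", "u2", "u3"], fake, n_points=5))   # -> 0.0
```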
For comparison, we redefined the probabilistic model as P(T) = (1 − q) q^(k-1), where k is the number of phrases attached to the root node. This reduced model only considers the grammaticality of a phrase, completely ignoring the probability of its internal structure. It achieved a relative word error reduction of 5.9%, which is statistically significant at a level of < 0.1% for both tests. The improvement of the full model compared to the reduced model is weakly significant at a level of 2.6% for the MAPSSWE test. For both models, the optimal value of q was 0.001 for almost all training runs. The language model weight µ of the reduced model was about 60% smaller than the respective value for the full model, which confirms that the full model provides more reliable information.

Table 1: The impact of the grammar-based language model on the word error rate. For comparison, the results for alternative experiments are shown. In the experiment “grammar, cheating”, the parameters were optimized on test data.
experiment               word error rate
baseline                 11.79%
grammar, no statistics   11.09% (-5.9% rel.)
grammar                  10.70% (-9.2% rel.)
grammar, cheating        10.60% (-10.7% rel.)
100-best oracle           4.80%

Figure 1 shows the effect of varying N (the maximum number of hypotheses) on the word error rate both for leave-one-out training and for optimizing the parameters on test data. The similar shapes of the two curves suggest that the observed variations are partly due to the problem structure. In fact, if N is increased and new hypotheses with a high value of Pgram(W) appear, the benefit of the grammar-based language model can increase (if the hypotheses are predominantly good with respect to word error rate) or decrease (if they are bad). This horizon effect tends to be reduced with increasing N (with the exception of 89 ≤ N ≤ 93) because hypotheses with high ranks need a much higher Pgram(W) in order to compensate for their lower value of P(O|W) · P(W)^λ. For small N, the parameter estimation is more severely affected by the rather accidental horizon effects and therefore is prone to overfitting.

[Figure 1: The word error rate as a function of the maximum number of best hypotheses N. Axes: N (0–100) vs. relative change in WER; curves for leave-one-out training and for parameters optimized on test data.]

5 Conclusions and Outlook
We have presented a language model based on a precise, linguistically motivated grammar, and we have successfully applied it to a difficult broad-domain task. It is a well-known fact that natural language is highly ambiguous: a correct and seemingly unambiguous sentence may have an enormous number of readings. A related – and for our approach even more relevant – phenomenon is that many weird-looking and seemingly incorrect word sequences are in fact grammatical. This obviously reduces the benefit of pure grammaticality information. A solution is to use additional information to assess how “natural” a reading of a word sequence is. We have done a first step in this direction by estimating the probability of a parse tree. However, our model only looks at the structure of a parse tree and does not take the actual words into account. As N-grams and statistical parsers demonstrate, word information can be very valuable. It would therefore be interesting to investigate ways of introducing word information into our grammar-based model.

Acknowledgements
This work was supported by the Swiss National Science Foundation.
We cordially thank Jean-Luc Gauvain of LIMSI for providing us with word lattices from their German broadcast news transcription system. 112 References M. Adda-Decker. 2003. A corpus-based decompounding algorithm for German lexical modeling in LVCSR. In Proceedings of Eurospeech, pages 257–260, Geneva, Switzerland. R. Beutler, T. Kaufmann, and B. Pfister. 2005. Integrating a non-probabilistic grammar into large vocabulary continuous speech recognition. In Proceedings of the IEEE ASRU 2005 Workshop, pages 104–109, San Juan (Puerto Rico). S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, Sozopol, Bulgaria. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the NAACL, pages 132–139, San Francisco, USA. C. Chelba and F. Jelinek. 2000. Structured language modeling. Computer Speech & Language, 14(4):283– 332. K. W. Church and P. Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. M. Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. B. Crysmann. 2003. On the efficient implementation of German verb placement in HPSG. In Proceedings of RANLP. B. Crysmann. 2005. Relative clause extraposition in German: An efficient and portable implementation. Research on Language and Computation, 3(1):61–82. Duden. 1999. – Das große W¨orterbuch der deutschen Sprache in zehn B¨anden. Dudenverlag, dritte Auflage. L. Gillick and S. Cox. 1989. Some statistical issues in the comparison of speech recognition algorithms. In Proceedings of the ICASSP, pages 532–535. T. Kaufmann and B. Pfister. 2007. Applying licenser rules to a grammar with continuous constituents. In Stefan M¨uller, editor, The Proceedings of the 14th International Conference on Head-Driven Phrase Structure Grammar, pages 150–162, Stanford, USA. CSLI Publications. B. Kiefer, H.-U. Krieger, and M.-J. Nederhof. 2000. Efficient and robust parsing of word hypotheses graphs. In Wolfgang Wahlster, editor, Verbmobil. Foundations of Speech-to-Speech Translation, pages 280–295. Springer, Berlin, Germany, artificial intelligence edition. K. McTait and M. Adda-Decker. 2003. The 300k LIMSI German broadcast news transcription system. In Proceedings of Eurospeech, Geneva, Switzerland. S. M¨uller. 1999. Deutsche Syntax deklarativ. HeadDriven Phrase Structure Grammar f¨ur das Deutsche. Number 394 in Linguistische Arbeiten. Max Niemeyer Verlag, T¨ubingen. S. M¨uller. 2007. Head-Driven Phrase Structure Grammar: Eine Einf¨uhrung. Stauffenburg Einf¨uhrungen, Nr. 17. Stauffenburg Verlag, T¨ubingen. G. Van Noord, G. Bouma, R. Koeling, and M.-J. Nederhof. 1999. Robust grammatical analysis for spoken dialogue systems. Natural Language Engineering, 5(1):45–93. C. J. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago. A. Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34(1-3):151–175. B. Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. M. Yamamoto and K. W. Church. 2001. Using suffix arrays to compute term frequency and document frequency for all substrings in a corpus. Computational Linguistics, 27(1):1–30. 113
2008
13
Proceedings of ACL-08: HLT, pages 114–120, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Automatic Editing in a Back-End Speech-to-Text System Maximilian Bisani Paul Vozila Olivier Divay Jeff Adams Nuance Communications One Wayside Road Burlington, MA 01803, U.S.A. {maximilian.bisani,paul.vozila,olivier.divay,jeff.adams}@nuance.com Abstract Written documents created through dictation differ significantly from a true verbatim transcript of the recorded speech. This poses an obstacle in automatic dictation systems as speech recognition output needs to undergo a fair amount of editing in order to turn it into a document that complies with the customary standards. We present an approach that attempts to perform this edit from recognized words to final document automatically by learning the appropriate transformations from example documents. This addresses a number of problems in an integrated way, which have so far been studied independently, in particular automatic punctuation, text segmentation, error correction and disfluency repair. We study two different learning methods, one based on rule induction and one based on a probabilistic sequence model. Quantitative evaluation shows that the probabilistic method performs more accurately. 1 Introduction Large vocabulary speech recognition today achieves a level of accuracy that makes it useful in the production of written documents. Especially in the medical and legal domains large volumes of text are traditionally produced by means of dictation. Here document creation is typically a “back-end” process. The author dictates all necessary information into a telephone handset or a portable recording device and is not concerned with the actual production of the document any further. A transcriptionist will then listen to the recorded dictation and produce a wellformed document using a word processor. The goal of introducing speech recognition in this process is to create a draft document automatically, so that the transcriptionist only has to verify the accuracy of the document and to fix occasional recognition errors. We observe that users try to spend as little time as possible dictating. They usually focus only on the content and rely on the transcriptionist to compose a readable, syntactically correct, stylistically acceptable and formally compliant document. For this reason there is a considerable discrepancy between the final document and what the speaker has said literally. In particular in medical reports we see differences of the following kinds: • Punctuation marks are typically not verbalized. • No instructions on the formatting of the report are dictated. Section headings are not identified as such. • Frequently section headings are only implied. (“vitals are” →“PHYSICAL EXAMINATION: VITAL SIGNS:”) • Enumerated lists. Typically speakers use phrases like “number one ...next number ...”, which need to be turned into “1. ...2. ...” • The dictation usually begins with a preamble (e.g. “This is doctor Xyz ...”) which does not appear in the report. Similarly there are typical phrases at the end of the dictation which should not be transcribed (e.g. “End of dictation. Thank you.”) 114 • There are specific standards regarding the use of medical terminology. Transcriptionists frequently expand dictated abbreviations (e.g. “CVA” →“cerebrovascular accident”) or otherwise use equivalent terms (e.g. “nonicteric sclerae” →“no scleral icterus”). • The dictation typically has a more narrative style (e.g. 
“She has no allergies.”, “I examined him”). In contrast, the report is normally more impersonal and structured (e.g. “ALLERGIES: None.”, “he was examined”). • For the sake of brevity, speakers frequently omit function words. (“patient” →“the patient”, “denies fever pain” →“he denies any fever or pain”) • As the dictation is spontaneous, disfluencies are quite frequent, in particular false starts, corrections and repetitions. (e.g. “22-year-old female, sorry, male 22-year-old male” →“22year-old male”) • Instruction to the transcriptionist and so-called normal reports, pre-defined text templates invoked by a short phrase like “This is a normal chest x-ray.” • In addition to the above, speech recognition output has the usual share of recognition errors some of which may occur systematically. These phenomena pose a problem that goes beyond the speech recognition task which has traditionally focused on correctly identifying speech utterances. Even with a perfectly accurate verbatim transcript of the user’s utterances, the transcriptionist would need to perform a significant amount of editing to obtain a document conforming to the customary standards. We need to look for what the user wants rather than what he says. Natural language processing research has addressed a number of these issues as individual problems: automatic punctuation (Liu et al., 2005), text segmentation (Beeferman et al., 1999; Matusov et al., 2003) disfluency repair (Heeman et al., 1996) and error correction (Ringger and Allen, 1996; Strzalkowski and Brandow, 1997; Peters and Drexel, 2004). The method we present in the following attempts to address all this by a unified transformation model. The goal is simply stated as transforming the recognition output into a text document. We will first describe the general framework of learning transformations from example documents. In the following two sections we will discuss a ruleinduction-based and a probabilistic transformation method respectively. Finally we present experimental results in the context of medical transcription and conclude with an assessment of both methods. 2 Text transformation In dictation and transcription management systems corresponding pairs of recognition output and edited and corrected documents are readily available. The idea of transformation modeling, outlined in figure 1, is to learn to emulate the transcriptionist. To this end we first process archived dictations with the speech recognizer to create approximate verbatim transcriptions. For each document this yields the spoken or source word sequence S = s1 . . . sM, which is supposed to be a word-by-word transcription of the user’s utterances, but which may actually contain recognition errors. The corresponding final reports are cleaned (removal of page headers etc.), tagged (identification of section headings and enumerated lists) and tokenized, yielding the text or target token sequence T = t1...tN for each document. Generally, the token sequence corresponds to the spoken form. (E.g. “25mg” is tokenized as “twenty five milligrams”.) Tokens can be ordinary words or special symbols representing line breaks, section headings, etc. Specifically, we represent each section heading by a single indivisible token, even if the section name consists of multiple words. Enumerations are represented by special tokens, too. Different techniques can be applied to learn and execute the actual transformation from S to T. Two options are discussed in the following. 
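As a concrete illustration of the tokenization convention mentioned above (written forms are expanded into their spoken form), a toy sketch follows; a real tokenizer would also need a full number grammar plus the special tokens for section headings, enumerations and line breaks, and the formatting step described next is its inverse.

```python
import re

UNITS = {"mg": "milligrams", "ml": "milliliters"}          # illustrative subset
TENS = {"2": "twenty", "3": "thirty", "4": "forty"}
ONES = {"1": "one", "2": "two", "3": "three", "4": "four", "5": "five"}

def spoken_number(digits):
    """Spell out a one- or two-digit number (enough for the '25mg' example)."""
    if len(digits) == 2 and digits[0] in TENS:
        return TENS[digits[0]] + ("" if digits[1] == "0" else " " + ONES[digits[1]])
    return ONES.get(digits, digits)

def tokenize(text):
    """Toy sketch of the tokenization step: rewrite written forms such as
    '25mg' into spoken-form tokens ('twenty five milligrams').  This is only
    an illustration, not the system's actual tokenizer."""
    def expand(match):
        return spoken_number(match.group(1)) + " " + UNITS[match.group(2)]
    return re.sub(r"(\d+)(mg|ml)\b", expand, text).split()

print(tokenize("Continue prednisone 25mg daily"))
# -> ['Continue', 'prednisone', 'twenty', 'five', 'milligrams', 'daily']
```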
With the transformation model at hand, a draft for a new document is created in three steps. First the speech recognizer processes the audio recording and produces the source word sequence S. Next, the transformation step converts S into the target sequence T. Finally the transformation output T is formatted into a text document. Formatting is the 115 archived dictations recognize  new dictation recognize  store o transcripts @A train / transcript transform  transformation model / targets GF / tokens format  archived documents tokenize O draft document manual correction  final document @A store O Figure 1: Illustration of how text transformation is integrated into a speech-to-text system. inverse of tokenization and includes conversion of number words to digits, rendition of paragraphs and section headings, etc. Before we turn to concrete transformation techniques, we can make two general statements about this problem. Firstly, in the absence of observations to the contrary, it is reasonable to leave words unchanged. So, a priori the mapping should be the identity. Secondly, the transformation is mostly monotonous. Out-of-order sections do occur but are the exception rather than the rule. 3 Transformation based learning Following Strzalkowski and Brandow (1997) and Peters and Drexel (2004) we have implemented a transformation-based learning (TBL) algorithm (Brill, 1995). This method iteratively improves the match (as measured by token error rate) of a collection of corresponding source and target token sequences by positing and applying a sequence of substitution rules. In each iteration the source and target tokens are aligned using a minimum edit distance criterion. We refer to maximal contiguous subsequences of non-matching tokens as error regions. These consist of paired sequences of source and target tokens, where either sequence may be empty. Each error region serves as a candidate substitution rule. Additionally we consider refinements of these rules with varying amounts of contiguous context tokens on either side. Deviating from Peters and Drexel (2004), in the special case of an empty target sequence, i.e. a deletion rule, we consider deleting all (non-empty) contiguous subsequences of the source sequence as well. For each candidate rule we accumulate two counts: the number of exactly matching error regions and the number of false alarms, i.e. when its left-hand-side matches a sequence of already correct tokens. Rules are ranked by the difference in these counts scaled by the number of errors corrected by a single rule application, which is the length of the corresponding error region. This is an approximation to the total number of errors corrected by a rule, ignoring rule interactions and non-local changes in the minimum edit distance alignment. A subset of the topranked non-overlapping rules satisfying frequency and minimum impact constraints are selected and the source sequences are updated by applying the selected rules. Again deviating from Peters and Drexel (2004), we consider two rules as overlapping if the left-hand-side of one is a contiguous subsequence of the other. This procedure is iterated until no additional rules can be selected. The initial rule set is populated by a small sequence of hand-crafted rules (e.g. “impression colon” →“IMPRESSION:”). A user-independent baseline rule set is generated by applying the algorithm to data from a collection of users. 
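To make the rule-harvesting and ranking steps concrete, here is a compact sketch in Python. It uses difflib's edit-distance alignment to find error regions; the context refinements, the extra deletion sub-rules, the non-overlap filtering and the iteration loop are omitted, and the false-alarm count below is a crude positional approximation of the alignment-based count described above, so this is an illustration rather than the authors' implementation.

```python
from difflib import SequenceMatcher
from collections import Counter

def candidate_rules(source, target):
    """Align recognizer output with the final report and turn every maximal
    non-matching region into a candidate substitution rule."""
    rules = Counter()
    ops = SequenceMatcher(None, source, target, autojunk=False).get_opcodes()
    for tag, s1, s2, t1, t2 in ops:
        if tag != "equal":
            rules[(tuple(source[s1:s2]), tuple(target[t1:t2]))] += 1
    return rules

def score_rules(rules, documents):
    """Rank rules by (corrections - false_alarms) * region length,
    approximating the criterion described in Section 3.
    documents: list of (source_tokens, target_tokens) pairs."""
    scored = []
    for (lhs, rhs), corrections in rules.items():
        false_alarms = sum(
            1
            for src, tgt in documents
            for i in range(len(src) - len(lhs) + 1)
            if tuple(src[i:i + len(lhs)]) == lhs
            and tuple(tgt[i:i + len(lhs)]) == lhs   # crude proxy for "already correct"
        )
        gain = (corrections - false_alarms) * max(len(lhs), len(rhs))
        scored.append((gain, lhs, rhs))
    return sorted(scored, reverse=True)

# toy usage on one dictated/final pair
src = "impression colon patient stable end of dictation".split()
tgt = "IMPRESSION: the patient stable".split()
for gain, lhs, rhs in score_rules(candidate_rules(src, tgt), [(src, tgt)]):
    print(gain, lhs, "->", rhs)
```

Applied repeatedly, with the sources updated after each batch of selected rules, this yields the rule sequence that defines the TBL transformation.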
We construct speaker-dependent models by initializing the algorithm with the speakerindependent rule set and applying it to data from the given user. 4 Probabilistic model The canonical approach to text transformation following statistical decision theory is to maximize the text document posterior probability given the spoken document. T ∗= argmax T p(T|S) (1) Obviously, the global model p(T|S) must be constructed from smaller scale observations on the cor116 respondence between source and target words. We use a 1-to-n alignment scheme. This means each source word is assigned to a sequence of zero, one or more target words. We denote the target words assigned to source word si as τi. Each replacement τi is a possibly empty sequence of target words. A source word together with its replacement sequence will be called a segment. We constrain the set of possible transformations by selecting a relatively small set of allowable replacements A(s) to each source word. This means we require τi ∈A(si). We use the usual m-gram approximation to model the joint probability of a transformation: p(S, T) = M Y i=1 p(si, τi|si−m+1, τi−m+1, . . . si−1, τi−1) (2) The work of Ringger and Allen (1996) is similar in spirit to this method, but uses a factored sourcechannel model. Note that the decision rule (1) is over whole documents. Therefore we processes complete documents at a time without prior segmentation into sentences. To estimate this model we first align all training documents. That is, for each document, the target word sequence is segmented into M segments T = τ1⌣. . . ⌣τM. The criterion for this alignment is to maximize the likelihood of a segment unigram model. The alignment is performed by an expectation maximization algorithm. Subsequent to the alignment step, m-gram probabilities are estimated by standard language modeling techniques. We create speaker-specific models by linearly interpolating an m-gram model based on data from the user with a speaker-independent background m-gram model trained on data pooled from a collection of users. To select the allowable replacements for each source word we count how often each particular target sequence is aligned to it in the training data. A source target pair is selected if it occurs twice or more times. Source words that were not observed in training are immutable, i.e. the word itself is its only allowable replacement A(s) = {(s)}. As an example suppose “patient” was deleted 10 times, left unchanged 105 times, replaced by “the patient” 113 times and once replaced by “she”. The word patient would then have three allowables: A(patient) = {(), (patient), (the, patient)}.) The decision rule (1) minimizes the document error rate. A more appropriate loss function is the number of source words that are replaced incorrectly. Therefore we use the following minimum word risk (MWR) decision strategy, which minimizes source word loss. T ∗= (argmax τ1∈A(si) p(τ1|S))⌣. . . ⌣( argmax τM∈A(sM) p(τM|S)) (3) This means for each source sequence position we choose the replacement that has the highest posterior probability p(τi|S) given the entire source sequence. To compute the posterior probabilities, first a graph is created representing alternatives “around” the most probable transform using beam search. Then the forward-backward algorithm is applied to compute edge posterior probabilities. Finally edge posterior probabilities for each source position are accumulated. 
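The selection of allowable replacements can be sketched directly from the "patient" example above. The 1-to-n alignment (EM over a segment unigram model) is assumed to have been computed already; build_allowables and its min_count argument are names introduced here, and always keeping the identity mapping is a safe default added for illustration (the paper guarantees it only for unseen words).

```python
from collections import Counter, defaultdict

def build_allowables(aligned_pairs, min_count=2):
    """Sketch of how the replacement inventory A(s) is selected: count, over
    the aligned training documents, how often each source word s maps to each
    target sequence, and keep pairs seen at least min_count times.
    aligned_pairs yields (source_word, tuple_of_target_words) items."""
    counts = defaultdict(Counter)
    for s, tau in aligned_pairs:
        counts[s][tuple(tau)] += 1
    allowables = {}
    for s, c in counts.items():
        kept = {tau for tau, n in c.items() if n >= min_count}
        kept.add((s,))   # identity mapping kept as a safe default (an addition here)
        allowables[s] = kept
    return allowables

# toy usage reproducing the "patient" example from the text
data = ([("patient", ())] * 10
        + [("patient", ("patient",))] * 105
        + [("patient", ("the", "patient"))] * 113
        + [("patient", ("she",))] * 1)
print(sorted(build_allowables(data)["patient"]))
# -> [(), ('patient',), ('the', 'patient')]
```

Unseen source words would additionally be made immutable, as described above.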
5 Experimental evaluation
The methods presented were evaluated on a set of real-life medical reports dictated by 51 doctors. For each doctor we use 30 reports as a test set. Transformation models are trained on a disjoint set of reports that predated the evaluation reports. The typical document length is between one hundred and one thousand words. All dictations were recorded via telephone. The speech recognizer works with acoustic models that are specifically adapted for each user, not using the test data, of course. It is hard to quote the verbatim word error rate of the recognizer, because this would require a careful and time-consuming manual transcription of the test set. The recognition output is auto-punctuated by a method similar in spirit to the one proposed by Liu et al. (2005) before being passed to the transformation model. This was done because we considered the auto-punctuation output as the status quo ante to which transformation modeling was to be compared. Neither of the two transformation methods actually relies on having auto-punctuated input. The auto-punctuation step only inserts periods and commas and the document is not explicitly segmented into sentences. (The transformation step always applies to entire documents and the interpretation of a period as a sentence boundary is left to the human reader of the document.) For each doctor a background transformation model was constructed using 100 reports from each of the other users. This is referred to as the speaker-independent (SI) model. In the case of the probabilistic model, all models were 3-gram models. User-specific models were created by augmenting the SI model with 25, 50 or 100 reports. One report from the test set is shown as an example in the appendix.

Table 1: Experimental evaluation of different text transformation techniques with different amounts of user-specific data. Precision, recall, deletion, insertion and error rate values are given in percent and represent the average of 51 users, where the results for each user are the ratios of sums over 30 reports.
                               sections             punctuation          all tokens
method                 docs   precision  recall    precision  recall    deletions  insertions  errors
none (only auto-punct)          0.00      0.00      66.68      71.21     11.32      27.48       45.32
TBL                     SI     69.18     44.43      73.90      67.22     11.41      17.73       34.99
3-gram                  SI     65.19     44.41      73.79      62.26     18.15      12.27       36.09
TBL                     25     75.38     53.39      75.59      69.11     10.97      15.97       32.62
3-gram                  25     80.90     59.37      78.88      69.81     11.50      12.09       28.87
TBL                     50     76.67     56.18      76.11      69.81     10.81      15.53       31.92
3-gram                  50     81.10     62.69      79.39      70.94     11.31      11.46       27.76
TBL                     100    77.92     58.03      76.41      70.52     10.67      15.19       31.29
3-gram                  100    81.69     64.36      79.35      71.38     11.48      10.82       27.12
3-gram without MWR      100    81.39     64.23      79.01      71.52     11.55      10.92       27.29

5.1 Evaluation metric
The output of the text transformation is aligned with the corresponding tokenized report using a minimum edit cost criterion. Alignments between section headings and non-section headings are not permitted. Likewise no alignment of punctuation and non-punctuation tokens is allowed. Using the alignment we compute precision and recall for section headings and punctuation marks as well as the overall token error rate. It should be noted that the error rate derived in this way is not comparable to word error rates usually reported in speech recognition research. All missing or erroneous section headings, punctuation marks and line breaks are counted as errors. As pointed out in the introduction, the reference texts do not represent a literal transcript of the dictation.
Furthermore the data were not cleaned manually. There are, for example, instances of letter heads or page numbers that were not correctly removed when the text was extracted from the word processor’s file format. The example report shown in the appendix features some of the typical differences between the produced draft and the final report that may or may not be judged as errors. (For example, the date of the report was not given in the dictation, the section names “laboratory data” and “laboratory evaluation” are presumably equivalent and whether “stable” is preceded by a hyphen or a period in the last section might not be important.) Nevertheless, the numbers reported do permit a quantitative comparison between different methods. 5.2 Results Results are stated in table 1. In the baseline setup no transformation is applied to the auto-punctuated recognition output. Since many parts of the source data do not need to be altered, this constitutes the reference point for assessing the benefit of transformation modeling. For obvious reasons precision and recall of section headings are zero. A high rate of insertion errors is observed which can largely be attributed to preambles. Both transformation methods reduce the discrepancy between the draft document and the final corrected document significantly. With 100 training documents per user the mean token error rate is reduced by up to 40% relative by the probabilistic model. When user specific data is used, the probabilistic approach performs consistently better than TBL on all accounts. In particular it always has much lower insertion rates reflecting its supe118 rior ability to remove utterances that are not typically part of the report. On the other hand the probabilistic model suffers from a slightly higher deletion rate due to being overzealous in this regard. In speaker independent mode, however, the deletion rate is excessively high and leads to inferior overall performance. Interestingly the precision of the automatic punctuation is increased by the transformation step, without compromising on recall, at least when enough user specific training data is available. The minimum word risk criterion (3) yields slightly better results than the simpler document risk criterion (1). 6 Conclusions Automatic text transformation brings speech recognition output much closer to the end result desired by the user of a back-end dictation system. It automatically punctuates, sections and rephrases the document and thereby greatly enhances transcriptionist productivity. The holistic approach followed here is simpler and more comprehensive than a cascade of more specialized methods. Whether or not the holistic approach is also more accurate is not an easy question to answer. Clearly the outcome would depend on the specifics of the specialized methods one would compare to, as well as the complexity of the integrated transformation model one applies. The simple models studied in this work admittedly have little provisions for targeting specific transformation problems. For example the typical length of a section is not taken into account. However, this is not a limitation of the general approach. We have observed that a simple probabilistic sequence model performs consistently better than the transformationbased learning approach. Even though neither of both methods is novel, we deem this an important finding since none of the previous publications we know of in this domain allow this conclusion. 
While the present experiments have used a separate autopunctuation step, future work will aim to eliminate it by integrating the punctuation features into the transformation step. In the future we plan to integrate additional knowledge sources into our statistical method in order to more specifically address each of the various phenomena encountered in spontaneous dictation. References Beeferman, Doug, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3):177 – 210. Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543 – 565. Heeman, Peter A., Kyung-ho Loken-Kim, and James F. Allen. 1996. Combining the detection and correction of speech repairs. In Proc. Int. Conf. Spoken Language Processing (ICSLP), pages 362 – 365. Philadelphia, PA, USA. Liu, Yang, Andreas Stolcke, Elizabeth Shriberg, and Mary Harper. 2005. Using conditional random fields for sentence boundary detection in speech. In Proc. Annual Meeting of the ACL, pages 451 – 458. Ann Arbor, MI, USA. Matusov, Evgeny, Jochen Peters, Carsten Meyer, and Hermann Ney. 2003. Topic segmentation using markov models on section level. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 471 – 476. IEEE, St. Thomas, U.S. Virgin Islands. Peters, Jochen and Christina Drexel. 2004. Transformation-based error correction for speechto-text systems. In Proc. Int. Conf. Spoken Language Processing (ICSLP), pages 1449 – 1452. Jeju Island, Korea. Ringger, Eric K. and James F. Allen. 1996. A fertility channel model for post-correction of continuous speech recognition. In Proc. Int. Conf. Spoken Language Processing (ICSLP), pages 897 – 900. Philadelphia, PA, USA. Strzalkowski, Tomek and Ronald Brandow. 1997. A natural language correction model for continuous speech recognition. In Proc. 5th Workshop on Very Large Corpora (WVVLC-5):, pages 168 – 177. Beijing-Hong Kong. 119 Appendix A. Example of a medical report Recognition output. Vertical space was added to facilitate visual comparison. doctors name dictating a progress note on first name last name patient without complaints has been ambulating without problems no chest pain chest pressure still has some shortness of breath but overall has improved significantly vital signs are stable she is afebrile lungs show decreased breath sounds at the bases with bilateral rales and rhonchi heart is regular rate and rhythm two over six crescendo decrescendo murmur at the right sternal border abdomen soft nontender nondistended extremities show one plus pedal edema bilaterally neurological exam is nonfocal white count of five point seven H. and H. 
eleven point six and thirty five point five platelet count of one fifty five sodium one thirty seven potassium three point nine chloride one hundred carbon dioxide thirty nine calcium eight point seven glucose ninety one BUN and creatinine thirty seven and one point one impression number one COPD exacerbation continue breathing treatments number two asthma exacerbation continue oral prednisone number three bronchitis continue Levaquin number four hypertension stable number five uncontrolled diabetes mellitus improved number six gastroesophageal reflux disease stable number seven congestive heart failure stable new paragraph patient is in stable condition and will be discharged to name nursing home and will be monitored closely on an outpatient basis progress note Automatically generated draft (speech recognition output after transformation and formatting) Progress note SUBJECTIVE: The patient is without complaints. Has been ambulating without problems. No chest pain, chest pressure, still has some shortness of breath, but overall has improved significantly. PHYSICAL EXAMINATION: VITAL SIGNS: Stable. She is afebrile. LUNGS: Show decreased breath sounds at the bases with bilateral rales and rhonchi. HEART: Regular rate and rhythm 2/6 crescendo decrescendo murmur at the right sternal border. ABDOMEN: Soft, nontender, nondistended. EXTREMITIES: Show 1+ pedal edema bilaterally. NEUROLOGICAL: Nonfocal. LABORATORY DATA: White count of 5.7, hemoglobin and hematocrit 11.6 and 35.5, platelet count of 155, sodium 137, potassium 3.9, chloride 100, CO2 39, calcium 8.7, glucose 91, BUN and creatinine 37 and 1.1. IMPRESSION: 1. Chronic obstructive pulmonary disease exacerbation. Continue breathing treatments. 2. Asthma exacerbation. Continue oral prednisone. 3. Bronchitis. Continue Levaquin. 4. Hypertension. Stable. 5. Uncontrolled diabetes mellitus. Improved. 6. Gastroesophageal reflux disease, stable. 7. Congestive heart failure. Stable. PLAN: The patient is in stable condition and will be discharged to name nursing home and will be monitored closely on an outpatient basis. Final report produced by a human transcriptionist without reference to the automatic draft. Progress Note DATE: July 26, 2005. HISTORY OF PRESENT ILLNESS: The patient has no complaints. She is ambulating without problems. No chest pain or chest pressure. She still has some shortness of breath, but overall has improved significantly. PHYSICAL EXAMINATION: VITAL SIGNS: Stable. She’s afebrile. LUNGS: Decreased breath sounds at the bases with bilateral rales and rhonchi. HEART: Regular rate and rhythm. 2/6 crescendo, decrescendo murmur at the right sternal border. ABDOMEN: Soft, nontender and nondistended. EXTREMITIES: 1+ pedal edema bilaterally. NEUROLOGICAL EXAMINATION: Nonfocal. LABORATORY EVALUATION: White count 5.7, H&H 11.6 and 35.5, platelet count of 155, sodium 137, potassium 3.9, chloride 100, co2 39, calcium 8.7, glucose 91, BUN and creatinine 37 and 1.1. IMPRESSION: 1. Chronic obstructive pulmonary disease exacerbation. Continue breathing treatments. 2. Asthma exacerbation. Continue oral prednisone. 3. Bronchitis. Continue Levaquin. 4. Hypertension-stable. 5. Uncontrolled diabetes mellitus-improved. 6. Gastroesophageal reflux disease-stable. 7. Congestive heart failure-stable. The patient is in stable condition and will be discharged to name Nursing Home, and will be monitored on an outpatient basis. 120
2008
14
Proceedings of ACL-08: HLT, pages 121–129, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Grounded Language Modeling for Automatic Speech Recognition of Sports Video Michael Fleischman Massachusetts Institute of Technology Media Laboratory [email protected] Deb Roy Massachusetts Institute of Technology Media Laboratory [email protected] Abstract Grounded language models represent the relationship between words and the non-linguistic context in which they are said. This paper describes how they are learned from large corpora of unlabeled video, and are applied to the task of automatic speech recognition of sports video. Results show that grounded language models improve perplexity and word error rate over text based language models, and further, support video information retrieval better than human generated speech transcriptions. 1 Introduction Recognizing speech in broadcast video is a necessary precursor to many multimodal applications such as video search and summarization (Snoek and Worring, 2005;). Although performance is often reasonable in controlled environments (such as studio news rooms), automatic speech recognition (ASR) systems have significant difficulty in noisier settings (such as those found in live sports broadcasts) (Wactlar et al., 1996). While many researches have examined how to compensate for such noise using acoustic techniques, few have attempted to leverage information in the visual stream to improve speech recognition performance (for an exception see Murkherjee and Roy, 2003). In many types of video, however, visual context can provide valuable clues as to what has been said. For example, in video of Major League Baseball games, the likelihood of the phrase “home run” increases dramatically when a home run has actually been hit. This paper describes a method for incorporating such visual information in an ASR system for sports video. The method is based on the use of grounded language models to represent the relationship between words and the nonlinguistic context to which they refer (Fleischman and Roy, 2007). Grounded language models are based on research from cognitive science on grounded models of meaning. (for a review see Roy, 2005, and Roy and Reiter, 2005). In such models, the meaning of a word is defined by its relationship to representations of the language users’ environment. Thus, for a robot operating in a laboratory setting, words for colors and shapes may be grounded in the outputs of its computer vision system (Roy & Pentland, 2002); while for a simulated agent operating in a virtual world, words for actions and events may be mapped to representations of the agent’s plans or goals (Fleischman & Roy, 2005). This paper extends previous work on grounded models of meaning by learning a grounded language model from naturalistic data collected from broadcast video of Major League Baseball games. A large corpus of unlabeled sports videos is collected and paired with closed captioning transcriptions of the announcers’ speech. 1 This corpus is used to train the grounded language model, which like traditional language models encode the prior probability of words for an ASR system. Unlike traditional language models, however, grounded language models represent the probability of a word conditioned not only on the previous word(s), but also on features of the non-linguistic context in which the word was uttered. Our approach to learning grounded language models operates in two phases. 
In the first phase, events that occur in the video are represented using hierarchical temporal pattern automatically mined 1 Closed captioning refers to human transcriptions of speech embedded in the video stream primarily for the hearing impaired. Closed captioning is reasonably accurate (although not perfect) and available on some, but not all, video broadcasts. 121 Figure 1. Representing events in video. a) Events are represented by first abstracting the raw video into visual context, camera motion, and audio context features. b) Temporal data mining is then used to discover hierarchical temporal patterns in the parallel streams of features. c) Temporal patterns found significant in each iteration are stored in a codebook that is used to represent high level events in video. from low level features. In the second phase, a conditional probability distribution is estimated that describes the probability that a word was uttered given such event representations. In the following sections we describe these two aspects of our approach and evaluate the performance of our grounded language model on a speech recognition task using video highlights from Major League Baseball games. Results indicate improved performance using three metrics: perplexity, word error rate, and precision on an information retrieval task. 2 Representing Events in Sports Video Recent work in video surveillance has demonstrated the benefit of representing complex events as temporal relations between lower level subevents (Hongen et al., 2004). Thus, to represent events in the sports domain, we would ideally first represent the basic sub events that occur in sports video (e.g., hitting, throwing, catching, running, etc.) and then build up complex events (such as home run) as a set of temporal relations between these basic events. Unfortunately, due to the limitations of computer vision techniques, reliably identifying such basic events in video is not feasible. However, sports video does have characteristics that can be exploited to effectively represent complex events. Like much broadcast video, sports video is highly produced, exploiting many different camera angles and a human director who selects which camera is most appropriate given what is happening on the field. The styles that different directors employ are extremely consistent within a sport and make up a “language of film” which the machine can take advantage of in order to represent the events taking place in the video. Thus, even though it is not easy to automatically identify a player hitting a ball in video, it is easy to detect features that correlate with hitting, e.g., when a scene focusing on the pitching mound immediately jumps to one zooming in on the field (see Figure 1). Although these correlations are not perfect, experiments have shown that baseball events can be classified using such features (Fleischman et al., 2007). We exploit the language of film to represent events in sports video in two phases. First, low level features that correlate with basic events in sports are extracted from the video stream. Then, temporal data mining is used to find patterns within this low level event stream. 2.1 Feature Extraction We extract three types of features: visual context features, camera motion features, and audio context features. 122 Visual Context Features Visual context features encode general properties of the visual scene in a video segment. 
Supervised classifiers are trained to identify these features, which are relatively simple to classify in comparison to high level events (like home runs) that require more training data and achieve lower accuracy. The first step in classifying visual context features is to segment the video into shots (or scenes) based on changes in the visual scene due to editing (e.g. jumping from a close up to a wide shot of the field). Shot detection and segmentation is a well studied problem; in this work we use the method of Tardini et al. (2005). After the video is segmented into shots, individual frames (called key frames) are selected and represented as a vector of low level features that describe the key frame’s color distribution, entropy, etc. (see Fleischman and Roy, 2007 for the full list of low level features used). The WEKA machine learning package is used to train a boosted decision tree to classify these frames into one of three categories: pitching-scene, field-scene, other (Witten and Frank, 2005). Those shots whose key frames are classified as field-scenes are then subcategorized (using boosted decision trees) into one of the following categories: infield, outfield, wall, base, running, and misc. Performance of these classification tasks is approximately 96% and 90% accuracy respectively. Camera Motion Features In addition to visual context features, we also examine the camera motion that occurs within a video. Unlike visual context features, which provide information about the global situation that is being observed, camera motion features represent more precise information about the actions occurring in a video. The intuition here is that the camera is a stand in for a viewer’s focus of attention. As actions occur in a video, the camera moves to follow it; this camera motion thus mirrors the actions themselves, providing informative features for event representation. Like shot boundary detection, detecting the motion of the camera in a video (i.e., the amount it pans left to right, tilts up and down, and zooms in and out) is a well-studied problem. We use the system of Bouthemy et al. (1999) which computes the camera motion using the parameters of a twodimensional affine model to fit every pair of sequential frames in a video. A 15 state 1st order Hidden Markov Model, implemented with the Graphical Modeling Toolkit,2 then converts the output of the Bouthemy system into a stream of clustered characteristic camera motions (e.g. state 12 clusters together motions of zooming in fast while panning slightly left). Audio Context The audio stream of a video can also provide useful information for representing non-linguistic context. We use boosted decision trees to classify audio into segments of speech, excited_speech, cheering, and music. Classification operates on a sequence of overlapping 30 ms frames extracted from the audio stream. For each frame, a feature vector is computed using, MFCCs (often used in speaker identification and speech detection tasks), as well as energy, the number of zero crossings, spectral entropy, and relative power between different frequency bands. The classifier is applied to each frame, producing a sequence of class labels. These labels are then smoothed using a dynamic programming cost minimization algorithm (similar to those used in Hidden Markov Models). Performance of this system achieves between 78% and 94% accuracy. 
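The smoothing of the frame-level audio labels lends itself to a short dynamic-programming sketch. The paper only states that a cost-minimization algorithm similar to those used in HMMs is applied, so the particular cost function and the switch_cost value below are assumptions; the per-frame scores would come from the boosted decision-tree classifier described above.

```python
def smooth_labels(frame_scores, switch_cost=2.0):
    """Find the label sequence minimising the total per-frame score (lower is
    better, e.g. negative log-probabilities) plus a penalty for every label
    change.  An illustrative stand-in for the smoothing step, not the
    system's actual cost function."""
    labels = sorted(frame_scores[0])
    best = {lab: frame_scores[0][lab] for lab in labels}
    backpointers = []
    for scores in frame_scores[1:]:
        new_best, pointers = {}, {}
        for lab in labels:
            # either stay with the previous label or pay switch_cost to change
            prev = min(labels, key=lambda p: best[p] + (0.0 if p == lab else switch_cost))
            new_best[lab] = best[prev] + (0.0 if prev == lab else switch_cost) + scores[lab]
            pointers[lab] = prev
        best = new_best
        backpointers.append(pointers)
    # backtrace the minimising label sequence
    lab = min(best, key=best.get)
    seq = [lab]
    for pointers in reversed(backpointers):
        lab = pointers[lab]
        seq.append(lab)
    return list(reversed(seq))

# toy usage: a single noisy "cheering" frame inside speech gets smoothed away
frames = [{"speech": 0.1, "cheering": 2.0},
          {"speech": 1.1, "cheering": 0.9},
          {"speech": 0.1, "cheering": 2.0}]
print(smooth_labels(frames))   # -> ['speech', 'speech', 'speech']
```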
2.2 Temporal Pattern Mining Given a set of low level features that correlate with the basic events in sports, we can now focus on building up representations of complex events. Unlike previous work (Hongen et al., 2005) in which representations of the temporal relations between low level events are built up by hand, we employ temporal data mining techniques to automatically discover such relations from a large corpus of unannotated video. As described above, ideal basic events (such as hitting and catching) cannot be identified easily in sports video. By finding temporal patterns between audio, visual and camera motion features, however, we can produce representations that are highly correlated with sports events. Importantly, such temporal patterns are not strictly sequential, but rather, are composed of features that can occur 2 http://ssli.ee.washington.edu/~bilmes/gmtk/ 123 in complex and varied temporal relations to each other. To find such patterns automatically, we follow previous work in video content classification in which temporal data mining techniques are used to discover event patterns within streams of lower level features. The algorithm we use is fully unsupervised and proceeds by examining the relations that occur between features in multiple streams within a moving time window. Any two features that occur within this window must be in one of seven temporal relations with each other (e.g. before, during, etc.) (Allen, 1984). The algorithm keeps track of how often each of these relations is observed, and after the entire video corpus is analyzed, uses chi-square analyses to determine which relations are significant. The algorithm iterates through the data, and relations between individual features that are found significant in one iteration (e.g. [OVERLAP, field-scene, cheer]), are themselves treated as individual features in the next. This allows the system to build up higher-order nested relations in each iteration (e.g. [BEFORE, [OVERLAP, field-scene, cheer], field scene]]). The temporal patterns found significant in this way make up a codebook which can then be used as a basis for representing a video. The term codebook is often used in image analysis to describe a set of features (stored in the codebook) that are used to encode raw data (images or video). Such codebooks are used to represent raw video using features that are more easily processed by the computer. Our framework follows a similar approach in which raw video is encoded (using a codebook of temporal patterns) as follows. First, the raw video is abstracted into the visual context, camera motion, and audio context feature streams (as described in Section 2.1). These feature streams are then scanned, looking for any temporal patterns (and nested sub-patterns) that match those found in the codebook. For each pattern, the duration for which it occurs in the feature streams is treated as the value of an element in the vector representation for that video. Thus, a video is represented as an n length vector, where n is the total number of temporal patterns in the codebook. The value of each element of this vector is the duration for which the pattern associated with that element was observed in the video. So, if a pattern was not observed in a video at all, it would have a value of 0, while if it was observed for the entire length of the video, it would have a value equal to the number of frames present in that video. 
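One iteration of the relation-counting step can be sketched as follows. This is a deliberately schematic simplification: the Allen relations are coarsened, the significance check compares observed counts against a rough independence-based expectation rather than the full chi-square analysis described above, and the nesting of significant patterns into higher-order features on later iterations is omitted.

```python
from collections import Counter
from itertools import combinations

def allen_relation(a, b):
    """Coarse Allen-style relation between two (start, end) intervals."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e2 < s1:
        return "after"
    if s1 == s2 and e1 == e2:
        return "equal"
    if s2 >= s1 and e2 <= e1:
        return "contains"
    if s1 >= s2 and e1 <= e2:
        return "during"
    return "overlap"

def mine_patterns(intervals, window=120, threshold=3.84):
    """Count the temporal relation holding between every pair of labelled
    feature intervals that fall within a moving window, then keep relations
    whose count deviates from a rough independence-based expectation by more
    than a chi-square-style cutoff (3.84 ~ p < 0.05, 1 d.o.f.).  The crude
    test is only a placeholder for the chi-square analysis in the paper.
    intervals: list of (label, start, end) tuples for one video."""
    rel_counts, label_counts, n_pairs = Counter(), Counter(), 0
    for (l1, s1, e1), (l2, s2, e2) in combinations(intervals, 2):
        if min(e1, e2) < max(s1, s2) - window:
            continue                      # the two features are too far apart
        rel_counts[(allen_relation((s1, e1), (s2, e2)), l1, l2)] += 1
        label_counts[l1] += 1
        label_counts[l2] += 1
        n_pairs += 1
    significant = []
    for (rel, l1, l2), observed in rel_counts.items():
        expected = max(label_counts[l1] * label_counts[l2] / (2.0 * n_pairs), 1e-9)
        if (observed - expected) ** 2 / expected > threshold:
            significant.append((rel, l1, l2, observed))
    return significant

# toy usage: pitching scenes that are regularly followed by cheering
pitches = [("pitch-scene", t, t + 10) for t in range(0, 500, 50)]
cheers = [("cheer", t + 12, t + 20) for t in range(0, 500, 50)]
print(mine_patterns(pitches + cheers))
```

Patterns surviving the test would then be added to the codebook and treated as features in the next iteration, exactly as described above.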
Given this method for representing the nonlinguistic context of a video, we can now examine how to model the relationship between such context and the words used to describe it.

3 Linguistic Mapping

Modeling the relationship between words and nonlinguistic context assumes that the speech uttered in a video refers consistently (although not exclusively) to the events being represented by the temporal pattern features. We model this relationship, much like traditional language models, using conditional probability distributions. Unlike traditional language models, however, our grounded language models condition the probability of a word not only on the word(s) uttered before it, but also on the temporal pattern features that describe the non-linguistic context in which it was uttered.

We estimate these conditional distributions using a framework similar to that used for training acoustic models in ASR and translation models in Machine Translation (MT). We generate a training corpus of utterances paired with representations of the non-linguistic context in which they were uttered. The first step in generating this corpus is to generate the low level features described in Section 2.1 for each video in our training set. We then segment each video into a set of independent events based on the visual context features we have extracted. We follow previous work in sports video processing (Gong et al., 2004) and define an event in a baseball video as any sequence of shots starting with a pitching-scene and continuing for four subsequent shots. This definition follows from the fact that the vast majority of events in baseball start with a pitch and do not last longer than four shots. For each of these events in our corpus, a temporal pattern feature vector is generated as described in section 2.2. These events are then paired with all the words from the closed captioning transcription that occur during each event (plus or minus 10 seconds). Because these transcriptions are not necessarily time-synched with the audio, we use the method described in Hauptmann and Witbrock (1998) to align the closed captioning to the announcers' speech.

Previous work has examined applying models often used in MT to the paired corpus described above (Fleischman and Roy, 2006). Recent work in automatic image annotation (Barnard et al., 2003; Blei and Jordan, 2003) and natural language processing (Steyvers et al., 2004), however, has demonstrated the advantages of using hierarchical Bayesian models for related tasks. In this work we follow closely the Author-Topic (AT) model (Steyvers et al., 2004), which is a generalization of Latent Dirichlet Allocation (LDA) (Blei et al., 2003).3 LDA is a technique that was developed to model the distribution of topics discussed in a large corpus of documents. The model assumes that every document is made up of a mixture of topics, and that each word in a document is generated from a probability distribution associated with one of those topics. The AT model generalizes LDA, saying that the mixture of topics is not dependent on the document itself, but rather on the authors who wrote it. According to this model, for each word (or phrase) in a document, an author is chosen uniformly from the set of the authors of the document. Then, a topic is chosen from a distribution of topics associated with that particular author. Finally, the word is generated from the distribution associated with that chosen topic.
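The generative story just described can be summarized in a few lines. This is a schematic sketch only: the distributions theta and phi stand in for the topic mixtures and topic-word distributions that the model learns, and the function names are invented for illustration.

```python
import random

def sample(dist):
    """Draw one outcome from a dict mapping outcome -> probability."""
    outcomes = list(dist)
    return random.choices(outcomes, weights=[dist[o] for o in outcomes])[0]

def generate_words(authors, theta, phi, n_words):
    """Schematic AT generative process.
    theta[a][z] = p(topic z | author a); phi[z][w] = p(word w | topic z)."""
    words = []
    for _ in range(n_words):
        a = random.choice(authors)      # an author is chosen uniformly
        z = sample(theta[a])            # a topic is chosen from that author's mixture
        words.append(sample(phi[z]))    # the word is generated from the chosen topic
    return words
```

In the adaptation that follows, the "authors" of an event are its temporal patterns, and the uniform choice is replaced by a duration-weighted one.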
We can express the probability of the words in a document (W) given its authors (A) as:

p(W | A) = ∏_{m ∈ W} (1/|A|) ∑_{x ∈ A} ∑_{z ∈ T} p(m | z) p(z | x)    (1)

where T is the set of latent topics that are induced given a large set of training data.

We use the AT model to estimate our grounded language model by making an analogy between documents and events in video. In our framework, the words in a document correspond to the words in the closed captioning transcript associated with an event. The authors of a document correspond to the temporal patterns representing the non-linguistic context of that event. We modify the AT model slightly, such that, instead of selecting from a uniform distribution (as is done with authors of documents), we select patterns from a multinomial distribution based upon the duration of the pattern. The intuition here is that patterns that occur for a longer duration are more salient and thus, should be given greater weight in the generative process. We can now rewrite (1) to give the probability of words during an event (W) given the vector of observed temporal patterns (P) as:

p(W | P) = ∏_{m ∈ W} ∑_{x ∈ P} ∑_{z ∈ T} p(m | z) p(z | x) p(x)    (2)

3 In the discussion that follows, we describe a method for estimating unigram grounded language models. Estimating bigram and trigram models can be done by processing on word pairs or triples, and performing normalization on the resulting conditional distributions.

In the experiments described below we follow Steyvers et al. (2004) and train our AT model using Gibbs sampling, a Markov Chain Monte Carlo technique for obtaining parameter estimates. We run the sampler on a single chain for 200 iterations. We set the number of topics to 15, and normalize the pattern durations first by individual pattern across all events, and then for all patterns within an event. The resulting parameter estimates are smoothed using a simple add N smoothing technique, where N=1 for the word by topic counts and N=.01 for the pattern by topic counts.

4 Evaluation

In order to evaluate our grounded language modeling approach, a parallel data set of 99 Major League Baseball games with corresponding closed captioning transcripts was recorded from live television. These games represent data totaling approximately 275 hours and 20,000 distinct events from 25 teams in 23 stadiums, broadcast on five different television stations. From this set, six games were held out for testing (15 hours, 1200 events, nine teams, four stations). From this test set, baseball highlights (i.e., events which terminate with the player either out or safe) were hand annotated for use in evaluation, and manually transcribed in order to get clean text transcriptions for gold standard comparisons. Of the 1200 events in the test set, 237 were highlights with a total word count of 12,626 (vocabulary of 1800 words).

The remaining 93 unlabeled games are used to train unigram, bigram, and trigram grounded language models. Only unigrams, bigrams, and trigrams that are not proper names, appear greater than three times, and are not composed only of stop words were used. These grounded language models are then combined in a backoff strategy with traditional unigram, bigram, and trigram language models generated from a combination of the closed captioning transcripts of all training games and data from the switchboard corpus (see below). This backoff is necessary to account for the words not included in the grounded language model itself (i.e.
stop words, proper names, low frequency words). The traditional text-only language models (which are also used below as baseline comparisons) are generated with the SRI language modeling toolkit (Stolcke, 2002) using Chen and Goodman's modified Kneser-Ney discounting and interpolation (Chen and Goodman, 1998). The backoff strategy we employ here is very simple: if the ngram appears in the GLM then it is used, otherwise the traditional LM is used. In future work we will examine more complex backoff strategies (Hsu, in review).

We evaluate our grounded language modeling approach using 3 metrics: perplexity, word error rate, and precision on an information retrieval task.

4.1 Perplexity

Perplexity is an information theoretic measure of how well a model predicts a held out test set. We use perplexity to compare our grounded language model to two baseline language models: a language model generated from the switchboard corpus, a commonly used corpus of spontaneous speech in the telephony domain (3.65M words; 27k vocab); and a language model that interpolates (with equal weight given to both) between the switchboard model and a language model trained only on the baseball-domain closed captioning (1.65M words; 17k vocab). The results of calculating perplexity on the test set highlights for these three models are presented in Table 1 (lower is better).

Not surprisingly, the switchboard language model performs far worse than both the interpolated text baseline and the grounded language model. This is due to the large discrepancy between both the style and vocabulary of language about sports compared to the domain of telephony sampled by the switchboard corpus. Of more interest is the decrease in perplexity seen when using the grounded language model compared to the interpolated model. Note that these two language models are generated using the same speech transcriptions, i.e. the closed captioning from the training games and the switchboard corpus. However, whereas the baseline model remains the same for each of the 237 test highlights, the grounded language model generates different word distributions for each highlight depending on the event features extracted from the highlight video.

       Switchboard   Interpolated (Switch+CC)   Grounded
ppl    1404          145.27                     83.88

Table 1. Perplexity measures for three different language models on a held out test set of baseball highlights (12,626 words). We compare the grounded language model to two text based language models: one trained on the switchboard corpus alone; and interpolated with one trained on closed captioning transcriptions of baseball video.

4.2 Word Accuracy and Error Rate

Word error rate (WER) is a normalized measure of the number of word insertions, substitutions, and deletions required to transform the output transcription of an ASR system to a human generated gold standard transcription of the same utterance. Word accuracy is simply the number of words in the gold standard that the system correctly recognized. Unlike perplexity, which only evaluates the performance of language models, examining word accuracy and error rate requires running an entire ASR system, i.e. both the language and acoustic models.
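Before turning to the acoustic models, the backoff rule described above can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the two models are assumed to be simple mappings from n-gram tuples to log-probabilities, with the grounded model re-estimated per event.

```python
def backoff_logprob(ngram, glm, text_lm):
    """Use the grounded language model (GLM) when it covers the n-gram,
    otherwise fall back to the traditional text-only language model."""
    if ngram in glm:
        return glm[ngram]
    return text_lm[ngram]   # assumed to cover everything the GLM omits

def score_utterance(words, glm, text_lm, order=3):
    """Sum backed-off n-gram log-probabilities over an utterance."""
    return sum(
        backoff_logprob(tuple(words[max(0, i - order + 1): i + 1]), glm, text_lm)
        for i in range(len(words))
    )
```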
We use the Sphinx system to train baseball specific acoustic models using parallel acoustic/text data automatically mined from our training set. Following Jang and Hauptmann (1999), we use an off the shelf acoustic model (the hub4 model) to generate an extremely noisy speech transcript of each game in our training set, and use dynamic programming to align these noisy outputs to the closed captioning stream for those same games. Given these two transcriptions, we then generate a paired acoustic/text corpus by sampling the audio at the time codes where the ASR transcription matches the closed captioning transcription. For example, if the ASR output contains the term sequence "… and farther home run for David forty says…" and the closed captioning contains the sequence "…another home run for David Ortiz…," the matched phrase "home run for David" is assumed a correct transcription for the audio at the time codes given by the ASR system. Only looking at sequences of three words or more, we extract approximately 18 hours of clean paired data from our 275 hour training corpus. A continuous acoustic model with 8 gaussians and 6000 tied states is trained on this data using the Sphinx speech recognizer.4

Figure 3 shows the WERs and accuracy for three ASR systems run using the Sphinx decoder with the acoustic model described above and either the grounded language model or the two baseline models described in section 4.1. Note that performance for all of these systems is very poor due to limited acoustic data and the large amount of background crowd noise present in sports video (and particularly in sports highlights). Even with this noise, however, results indicate that the word accuracy and error rates when using the grounded language model are significantly better than both the switchboard model (absolute WER reduction of 13%; absolute accuracy increase of 15.2%) and the switchboard interpolated with the baseball specific text based language model (absolute WER reduction of 3.7%; absolute accuracy increase of 5.9%).

[Figure 3: two bar charts; approximate values recovered from the charts: word error rate — switchboard 89.6%, interpolated 80.3%, grounded 76.6%; word accuracy — switchboard 15.1%, interpolated 25.4%, grounded 31.3%.]

Figure 3. Word accuracy and error rates for ASR systems using a grounded language model, a text based language model trained on the switchboard corpus, and the switchboard model interpolated with a text based model trained on baseball closed captions.

4 http://cmusphinx.sourceforge.net/html/cmusphinx.php

Drawing conclusions about the usefulness of grounded language models using word accuracy or error rate alone is difficult. As they are defined, these measures penalize a system that mistakes "a" for "uh" as much as one that mistakes "run" for "rum." When using ASR to support multimedia applications (such as search), though, such substitutions are not of equal importance. Further, while visual information may be useful for distinguishing the latter error, it is unlikely to assist with the former. Thus, in the next section we examine an extrinsic evaluation in which grounded language models are judged not directly on their effect on word accuracy or error rate, but based on their ability to support video information retrieval.

4.3 Precision of Information Retrieval

One of the most commonly used applications of ASR for video is to support information retrieval (IR). Such video IR systems often use speech transcriptions to index segments of video in much the same way that words are used to index text documents (Wactlar et al., 1996).
For example, in the domain of baseball, if a video IR system were issued the query "home run," it would typically return a set of video clips by searching its database for events in which someone uttered the phrase "home run." Because such systems rely on ASR output to search video, the performance of a video IR system gives an indirect evaluation of the ASR's quality. Further, unlike the case with word accuracy or error rate, such evaluations highlight a system's ability to recognize the more relevant content words without being distracted by the more common stop words.

Our metric for evaluation is the precision with which baseball highlights are returned in a video IR system. We examine three systems: one that uses ASR with the grounded language model, a baseline system that uses ASR with the text only interpolated language model, and finally a system that uses human produced closed caption transcriptions to index events. For each system, all 1200 events from the test set (not just the highlights) are indexed. Queries are generated artificially using a method similar to Berger and Lafferty (1999) and used in Fleischman and Roy (2007). First, each highlight is labeled with the event's type (e.g. fly ball), the event's location (e.g. left field) and the event's result (e.g. double play): 13 labels total. Log likelihood ratios are then used to find the phrases (unigram, trigram, and bigram) most indicative of each label (e.g. "fly ball" for category fly ball). For each label, the three most indicative phrases are issued as queries to the system, which ranks its results using the language modeling approach of Ponte and Croft (1998). Precision is measured on how many of the top five returned events are of the correct category.

Figure 4 shows the precision of the video IR systems based on ASR with the grounded language model, ASR with the text-only interpolated language model, and closed captioning transcriptions. As with our previous evaluations, the IR results show that the system using ASR with the grounded language model performed better than the one using ASR with the text-only language model (5.1% absolute improvement). More notably, though, Figure 4 shows that the system using the grounded language model performed better than the system using the hand generated closed captioning transcriptions (4.6% absolute improvement). Although this is somewhat counterintuitive given that hand transcriptions are typically considered gold standards, these results follow from a limitation of using text-based methods to index video. Unlike the case with text documents, the occurrence of a query term in a video is often not enough to assume the video's relevance to that query. For example, when searching through video of baseball games, returning all clips in which the phrase "home run" occurs results primarily in video of events where a home run does not actually occur. This follows from the fact that in sports, as in life, people often talk not about what is currently happening, but rather, they talk about what did, might, or will happen in the future. By taking into account non-linguistic context during speech recognition, the grounded language model system indirectly circumvents some of these false positive results. This follows from the fact that an effect of using the grounded language model is that when an announcer utters a phrase (e.g., "fly ball"), the system is more likely to recognize that phrase correctly if the event it refers to is actually occurring (e.g. if someone actually hit a fly ball). Because the grounded language model system is biased to recognize phrases that describe what is currently happening, it returns fewer false positives and gets higher precision.
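A minimal sketch of the precision-at-five measurement used here, under assumed data structures: each query is tied to one of the 13 labels, the IR system returns a ranked list of event ids, and the gold labels of events are known. The ranking model itself (Ponte and Croft, 1998) is not reproduced.

```python
def precision_at_5(queries, run_query, gold_labels):
    """queries: list of (query_phrase, label) pairs; run_query returns a ranked list of
    event ids; gold_labels maps event id -> set of labels annotated for that event."""
    scores = []
    for phrase, label in queries:
        top5 = run_query(phrase)[:5]
        hits = sum(1 for event_id in top5 if label in gold_labels.get(event_id, set()))
        scores.append(hits / 5.0)
    return sum(scores) / len(scores)
```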
[Figure 4: bar chart of the precision of the top five results for the ASR-LM, CC, and ASR-GLM systems; the vertical axis runs from 0.26 to 0.35.]

Figure 4. Precision of top five results of a video IR system based on speech transcriptions. Three different transcriptions are compared: ASR-LM uses ASR with a text-only interpolated language model (trained on baseball closed captioning and the switchboard corpus); ASR-GLM uses ASR with a grounded language model; CC uses human generated closed captioning transcriptions (i.e., no ASR).

5 Conclusions

We have described a method for improving speech recognition in video. The method uses grounded language modeling, an extension of traditional language modeling in which the probability of a word is conditioned not only on the previous word(s) but also on the non-linguistic context in which the word is uttered. Context is represented using hierarchical temporal patterns of low level features which are mined automatically from a large unlabeled video corpus. Hierarchical Bayesian models are then used to map these representations to words. Initial results show grounded language models improve performance on measures of perplexity, word accuracy and error rate, and precision on an information retrieval task.

In future work, we will examine the ability of grounded language models to improve performance for other natural language tasks that exploit text based language models, such as Machine Translation. Also, we are examining extending this approach to other sports domains such as American football. In theory, however, our approach is applicable to any domain in which there is discussion of the here-and-now (e.g., cooking shows, etc.). In future work, we will examine the strengths and limitations of grounded language modeling in these domains.

References

Allen, J.F. (1984). A General Model of Action and Time. Artificial Intelligence. 23(2). Barnard, K, Duygulu, P, de Freitas, N, Forsyth, D, Blei, D, and Jordan, M. (2003), Matching Words and Pictures, Journal of Machine Learning Research, Vol 3. Berger, A. and Lafferty, J. (1999). Information Retrieval as Statistical Translation. In Proceedings of SIGIR-99. Blei, D. and Jordan, M. (2003). Modeling annotated data. Proceedings of the 26th International Conference on Research and Development in Information Retrieval, ACM Press, 127–134. Blei, D., Ng, A., and Jordan, M. (2003). "Latent Dirichlet allocation." Journal of Machine Learning Research 3:993–1022. Bouthemy, P., Gelgon, M., Ganansia, F. (1999). A unified approach to shot change detection and camera motion characterization. IEEE Trans. on Circuits and Systems for Video Technology, 9(7). Chen, S. F. and Goodman, J., (1998). An Empirical Study of Smoothing Techniques for Language Modeling, Tech. Report TR-10-98, Computer Science Group, Harvard U., Cambridge, MA. Fleischman M, Roy, D. (2007). Situated Models of Meaning for Sports Video Retrieval. HLT/NAACL. Rochester, NY. Fleischman, M. and Roy, D. (2007). Unsupervised Content-Based Indexing of Sports Video Retrieval. 9th ACM Workshop on Multimedia Information Retrieval (MIR). Augsburg, Germany. Fleischman, M. B. and Roy, D. (2005) Why Verbs are Harder to Learn than Nouns: Initial Insights from a Computational Model of Intention Recognition in Situated Word Learning. 27th Annual Meeting of the Cognitive Science Society, Stresa, Italy.
Fleischman, M., DeCamp, P. Roy, D. (2006). Mining Temporal Patterns of Movement for Video Content Classification. ACM Workshop on Multimedia Information Retrieval. Fleischman, M., Roy, B., and Roy, D. (2007). Temporal Feature Induction for Sports Highlight Classification. In Proceedings of ACM Multimedia. Augsburg, Germany. Gong, Y., Han, M., Hua, W., Xu, W. (2004). Maximum entropy model-based baseball highlight detection and classification. Computer Vision and Image Understanding. 96(2). Hauptmann, A. , Witbrock, M., (1998) Story Segmentation and Detection of Commercials in Broadcast News Video, Advances in Digital Libraries. Hongen, S., Nevatia, R. Bremond, F. (2004). Video-based event recognition: activity representation and probabilistic recognition methods. Computer Vision and Image Understanding. 96(2). Hsu , Bo-June (Paul). (in review). Generalized Linear Interpolation of Language Models. Jang, P., Hauptmann, A. (1999). Learning to Recognize Speech by Watching Television. IEEE Intelligent Systems Magazine, 14(5), pp. 51-58. Mukherjee, N. and Roy, D.. (2003). A Visual ContextAware Multimodal System for Spoken Language Processing. Proc. Eurospeech, 4 pages. Ponte, J.M., and Croft, W.B. (1998). A Language Modeling Approach to Information Retrieval. In Proc. of SIGIR’98. Roy, D. (2005). . Grounding Words in Perception and Action: Insights from Computational Models. TICS. Roy, D. and Pentland, A. (2002). Learning Words from Sights and Sounds: A Computational Model. Cognitive Science, 26(1). Roy. D. and Reiter, E. (2005). . Connecting Language to the World. Artificial Intelligence, 167(1-2), 1-12. Snoek, C.G.M. and Worring, M.. (2005). Multimodal video indexing: A review of the state-of-the-art. Multimedia Tools and Applications, 25(1):5-35. Steyvers, M., Smyth, P., Rosen-Zvi, M., & Griffiths, T. (2004). Probabilistic Author-Topic Models for Information Discovery. The Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Seattle, Washington. Stolcke, A., (2002). SRILM - An Extensible Language Modeling Toolkit, in Proc. Intl. Conf. Spoken Language Processing, Denver, Colorado. Tardini, G. Grana C., Marchi, R., Cucchiara, R., (2005). Shot Detection and Motion Analysis for Automatic MPEG-7 Annotation of Sports Videos. In 13th International Conference on Image Analysis and Processing. Wactlar, H., Witbrock, M., Hauptmann, A., (1996 ). Informedia: News-on-Demand Experiments in Speech Recognition. ARPA Speech Recognition Workshop, Arden House, Harriman, NY. Witten, I. and Frank, E. (2005). Data Mining: Practical machine learning tools and techniques. 2nd Edition, Morgan Kaufmann. San Francisco, CA. 129
2008
15
Proceedings of ACL-08: HLT, pages 130–138, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Lexicalized phonotactic word segmentation Margaret M. Fleck Department of Computer Science University of Illinois Urbana, IL 61801, USA [email protected] Abstract This paper presents a new unsupervised algorithm (WordEnds) for inferring word boundaries from transcribed adult conversations. Phone ngrams before and after observed pauses are used to bootstrap a simple discriminative model of boundary marking. This fast algorithm delivers high performance even on morphologically complex words in English and Arabic, and promising results on accurate phonetic transcriptions with extensive pronunciation variation. Expanding training data beyond the traditional miniature datasets pushes performance numbers well above those previously reported. This suggests that WordEnds is a viable model of child language acquisition and might be useful in speech understanding. 1 Introduction Words are essential to most models of language and speech understanding. Word boundaries define the places at which speakers can fluently pause, and limit the application of most phonological rules. Words are a key constituent in structural analyses: the output of morphological rules and the constituents in syntactic parsing. Most speech recognizers are word-based. And, words are entrenched in the writing systems of many languages. Therefore, it is generally accepted that children learning their first language must learn how to segment speech into a sequence of words. Similar, but more limited, learning occurs when adults hear speech containing unfamiliar words. These words must be accurately delimited, so that they can be added to the lexicon and nearby familiar words recognized correctly. Current speech recognizers typically misinterpret such speech. This paper will consider algorithms which segment phonetically transcribed speech into words. For example, Figure 1 shows a transcribed phrase from the Buckeye corpus (Pitt et al., 2005; Pitt et al., 2007) and the automatically segmented output. Like almost all previous researchers, I use humantranscribed input to work around the limitations of current speech recognizers. In most available datasets, words are transcribed using standard dictionary pronunciations (henceforth “dictionary transcriptions”). These transcriptions are approximately phonemic and, more importantly, assign a constant form to each word. I will also use one dataset with accurate phonetic transcriptions, including natural variation in the pronunciation of words. Handling this variation is an important step towards eventually using phone lattices or features produced by real speech recognizers. This paper will focus on segmentation of speech between adults. This is the primary input for speech recognizers. Moreover, understanding such speech is the end goal of child language acquisition. Models tested only on simplified child-directed speech are incomplete without an algorithm for upgrading the understander to handle normal adult speech. 2 The task in more detail This paper uses a simple model of the segmentation task, which matches prior work and the available datasets. Possible enhancements to the model are discussed at the end. 
130 "all the kids in there # are people that have kids # or that are having kids" IN REAL: ohlThikidsinner # ahrpiyp@lThA?HAvkids # ohrThADurHAviynqkids DICT: ahlThiykidzinTher # ahrpiyp@lThAtHAvkidz # owrThAtahrHAvinqkidz OUT REAL: ohl Thi kids inner # ahr piyp@l ThA? HAv kids # ohr ThADur HAviynq kids DICT: ahl Thiy kidz in Ther # ahr piyp@l ThAt HAv kidz # owr ThAt ahr HAvinq kidz Figure 1: Part of Buckeye corpus dialog 2101a, in accurate phonetic transcription (REAL) and dictionary pronunciations (DICT). Both use modified arpabet, with # marking pauses. Notice the two distinct pronunciations of “that” in the accurate transcription. Automatically inserted word boundaries are shown at bottom. 2.1 The input data This paper considers only languages with an established tradition of words, e.g. not Chinese. I assume that the authors of each corpus have given us reasonable phonetic transcriptions and word boundaries. The datasets are informal conversations in which debatable word segmentations are rare. The transcribed data is represented as a sequence of phones, with neither prosodic/stress information nor feature representations for the phones. These phone sequences are presented to segmentation algorithms as strings of ASCII characters. Large phonesets may be represented using capital letters and punctuation or, more readably, using multicharacter phone symbols. Well-designed (e.g. easily decodable) multi-character codes do not affect the algorithms or evaluation metrics in this paper. Testing often also uses orthographic datasets. Finally, the transcriptions are divided into “phrases” at pauses in the speech signal (silences, breaths, etc). These pause phrases are not necessarily syntactic or prosodic constituents. Disfluencies in conversational speech create pauses where you might not expect them, e.g. immediately following the definite article (Clark and Wasow, 1998; Fox Tree and Clark, 1997). Therefore, I have chosen corpora in which pauses have been marked carefully. 2.2 Affixes and syllables A theory of word segmentation must explain how affixes differ from free-standing function words. For example, we must explain why English speakers consider “the” to be a word, but “-ing” to be an affix, although neither occurs by itself in fluent prepared English. We must also explain why the Arabic determiner “Al-” is not a word, though its syntactic and semantic role seems similar to English “the”. Viewed another way, we must show how to estimate the average word length. Conversational English has short words (about 3 phones), because most grammatical morphemes are free-standing. Languages with many affixes have longer words, e.g. my Arabic data averages 5.6 phones per word. Pauses are vital for deciding what is an affix. Attempts to segment transcriptions without pauses, e.g. (Christiansen et al., 1998), have worked poorly. Claims that humans can extract words without pauses seem to be based on psychological experiments such as (Saffran, 2001; Jusczyk and Aslin, 1995) which conflate words and morphemes. Even then, explicit boundaries seem to improve performance (Seidl and Johnson, 2006). Another significant part of this task is finding syllable boundaries. For English, many phone strings have multiple possible syllabifications. Because words average only 1.26 syllables, segmenting presyllabified input has a very high baseline: 100% precision and 80% recall of boundary positions. 
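Since the discussion above turns on quantities like average word length and words per pause phrase, here is a small sketch of how such corpus statistics can be computed. It assumes each phrase is given as a list of words whose phones are written with 1-character symbols (multi-character codes would need to be decoded first), and it is not part of the segmentation algorithm itself.

```python
from collections import Counter

def corpus_stats(phrases):
    """phrases: list of phrases, each a list of words (each word a string of phone symbols),
    e.g. [["ahl", "Thiy", "kidz", "in", "Ther"], ...]."""
    words = [w for phrase in phrases for w in phrase]
    types = Counter(words)
    return {
        "phones/word": sum(len(w) for w in words) / len(words),
        "words/phrase": len(words) / len(phrases),
        "hapax %": 100.0 * sum(1 for n in types.values() if n == 1) / len(types),
    }
```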
2.3 Algorithm testing

Unsupervised algorithms are presented with the transcription, divided only at phrase boundaries. Their task is to infer the phrase-internal word boundaries. The primary worry in testing is that development may have biased the algorithm towards a particular language, speaking style, and/or corpus size. Addressing this requires showing that different corpora can be handled with a common set of parameter settings. Therefore a test/training split within one corpus serves little purpose and is not standard.

Supervised algorithms are given training data with all word boundaries marked, and must infer word boundaries in a separate test set. Simple supervised algorithms perform extremely well (Cairns et al., 1997; Teahan et al., 2000), but don't address our main goal: learning how to segment. Notice that phrase boundaries are not randomly selected word boundaries. Syntactic and communicative constraints make pauses more likely at certain positions than others. Therefore, the "supervised" algorithms for this task train on a representative set of word boundaries whereas "unsupervised" algorithms train on a biased set of word boundaries. Moreover, supplying all the word boundaries for even a small amount of data effectively tells the supervised algorithms the average word length, a parameter which is otherwise not easy to estimate.

Standard evaluation metrics include the precision, recall and F-score1 of the phrase-internal boundaries (BP, BR, BF), of the extracted word tokens (WP, WR, WF), and of the resulting lexicon of word types (LP, LR, LF). Outputs don't look good until BF is at least 90%.

1 F = 2PR/(P + R), where P is the precision and R is the recall.

3 Previous work

Learning to segment words is an old problem, with extensive prior work surveyed in (Batchelder, 2002; Brent and Cartwright, 1996; Cairns et al., 1997; Goldwater, 2006; Hockema, 2006; Rytting, 2007). There are two major approaches. Phonotactic methods model which phone sequences are likely within words and which occur primarily across or adjacent to word boundaries. Language modelling methods build word ngram models, like those used in speech recognition. Statistical criteria define the "best" model fitting the input data. In both cases, details are complex and variable.

3.1 Phonotactic Methods

Supervised phonotactic methods date back at least to (Lamel and Zue, 1984), see also (Harrington et al., 1989). Statistics of phone trigrams provide sufficient information to segment adult conversational speech (dictionary transcriptions with simulated phonology) with about 90% precision and 93% recall (Cairns et al., 1997), see also (Hockema, 2006). Teahan et al.'s compression-based model (2000) achieves BF over 99% on orthographic English. Segmentation by adults is sensitive to phonotactic constraints (McQueen, 1998; Weber, 2000).

To build unsupervised algorithms, Brent and Cartwright suggested (1996) inferring phonotactic constraints from phone sequences observed at phrase boundaries. However, experimental results are poor. Early results using neural nets by Cairns et al. (1997) and Christiansen et al (1998) are discouraging. Rytting (2007) seems to have the best result: 61.0% boundary recall with 60.3% precision2 on 26K words of modern Greek data, average word length 4.4 phones. This algorithm used mutual information plus phrase-final 2-phone sequences. He obtained similar results (Rytting, 2004) using phrase-final 3-phone sequences.
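The boundary scores defined in Section 2.3 (BP, BR, BF) can be computed with a few lines of code; the word and lexicon scores are analogous, computed over word tokens and word types. This is an illustrative sketch under the assumption that each segmented phrase is given as a list of words, with phrase-final boundaries excluded as in Section 2.3.

```python
def boundary_positions(words):
    """Phrase-internal boundary offsets (in characters) for one segmented phrase."""
    positions, offset = set(), 0
    for w in words[:-1]:
        offset += len(w)
        positions.add(offset)
    return positions

def boundary_prf(gold_phrases, predicted_phrases):
    """BP, BR, BF over a corpus of phrases; phrase boundaries themselves are not counted."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_phrases, predicted_phrases):
        g, p = boundary_positions(gold), boundary_positions(pred)
        tp += len(g & p)
        fp += len(p - g)
        fn += len(g - p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```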
Word segmentation experiments by Christiansen and Allen (1997) and Harrington et al. (1989). simulated the effects of pronunciation variation and/or recognizer error. Rytting (2007) uses actual speech recognizer output. These experiments broke useful new ground, but poor algorithm performance (BF ≤50% even on dictionary transcriptions) makes it hard to draw conclusions from their results. 3.2 Language modelling methods So far, language modelling methods have been more effective. Brent (1999) and Venkataraman (2001) present incremental splitting algorithms with BF about 82% 3 on the Bernstein-Ratner (BR87) corpus of infant-directed English with disfluencies and interjections removed (Bernstein Ratner, 1987; Brent, 1999). Batchelder (2002) achieved almost identical results using a clustering algorithm. The most recent algorithm (Goldwater, 2006) achieves a BF of 85.8% using a Dirichlet Process bigram model, estimated using a Gibbs sampling algorithm.4 Language modelling methods incorporate a bias towards re-using hypothesized words. This suggests they should systematically segment morphologically complex words, so as to exploit the structure they share with other words. Goldwater, the only author to address this issue explicitly, reports that her algorithm breaks off common affixes (e.g. “ing”, “s”). Batchelder reports a noticable drop in performance on Japanese data, which might relate to its more complex words (average 4.1 phones). 2These numbers have been adjusted so as not to include boundaries between phrases. 3Numbers are from Goldwater’s (2006) replication. 4Goldwater numbers are from the December 2007 version of her code, with its suggested parameter values: α0 = 3000, α1 = 300, p# = 0.2. 132 4 The new approach Previous algorithms have modelled either whole words or very short (e.g. 2-3) phone sequences. The new approach proposed in this paper, “lexicalized phonotactics,” models extended sequences of phones at the starts and ends of word sequences. This allows a new algorithm, called WordEnds, to successfully mark word boundaries with a simple local classifier. 4.1 The idea This method models sequences of phones that start or end at a word boundary. When words are long, such a sequence may cover only part of the word e.g. a group of suffixes or a suffix plus the end of the stem. A sequence may also include parts of multiple short words, capturing some simple bits of syntax. These longer sequences capture not only purely phonotactic constraints, but also information about the inventory of lexical items. This improves handling of complex, messy inputs. (Cf. Ando and Lee’s (2000) kanji segmenter.) On the other hand, modelling only partial words helps the segmenter handle long, infrequent words. Long words are typically created by productive morphology and, thus, often start and end just like other words. Only 32% of words in Switchboard occur both before and after pauses, but many of the other 68% have similar-looking beginnings or endings. Given an inter-character position in a phrase, its right and left contexts are the character sequences to its right and left. By convention, phrases input to WordEnds are padded with a single blank at each end. So the middle position of the phrase “afunjoke” has right context “joke⊔” and left context “⊔afun.” Since this is a word boundary, the right context looks like the start of a real word sequence, and the left context looks like the end of one. 
This is not true for the immediately previous position, which has right context "njoke⊔" and left context "⊔afu." Boundaries will be marked where the right and left contexts look like what we have observed at the starts and ends of phrases.

4.2 Statistical model

To formalize this, consider a fixed inter-character position in a phrase. It may be a word boundary (b) or not (¬b). Let r and l be its right and left contexts. The input data will (see Section 4.3) give us P(b|r) and P(b|l). Deciding whether to mark a boundary at this position requires estimating P(b|r, l).

To express P(b|r, l) in terms of P(b|l) and P(b|r), I will assume that r and l are conditionally independent given b. This corresponds roughly to a unigram language model. Let P(b) be the probability of a boundary at a random inter-character position. I will assume that the average word length, and therefore P(b), is not absurdly small or large.

P(b|r, l) is P(r, l|b)P(b) / P(r, l). Conditional independence implies that this is P(r|b)P(l|b)P(b) / P(r, l), which is P(r)P(b|r)P(l)P(b|l) / (P(b)P(r, l)). This is P(b|r)P(b|l) / (Q P(b)), where Q = P(r, l) / (P(r)P(l)). Q is typically not 1, because a right and left context often co-occur simply because they both tend to occur at boundaries.

To estimate Q, write P(r, l) as P(r, l, b) + P(r, l, ¬b). Then P(r, l, b) is P(r)P(b|r)P(l)P(b|l) / P(b). If we assume that r and l are also conditionally independent given ¬b, then a similar equation holds for P(r, l, ¬b). So

Q = P(b|r)P(b|l) / P(b) + P(¬b|r)P(¬b|l) / P(¬b)

Contexts that occur primarily inside words (e.g. not at a syllable boundary) often restrict the adjacent context, violating conditional independence given ¬b. However, in these cases, P(b|r) and/or P(b|l) will be very low, so P(b|r, l) will be very low. So (correctly) no boundary will be marked. Thus, we can compute P(b|r, l) from P(b|r), P(b|l), and P(b). A boundary is marked if P(b|r, l) ≥ 0.5.

4.3 Estimating context probabilities

Estimation of P(b|r) and P(b|l) uses a simple ngram backoff algorithm. The details will be shown for P(b|l). P(b|r) is similar. Suppose for the moment that word boundaries are marked. The left context l might be very long and unusual. So we will estimate its statistics using a shorter lefthand neighborhood l′. P(b|l) is then estimated as the number of times l′ occurs before a boundary, divided by the total number of times l′ occurs in the corpus. The suffix l′ is chosen to be the longest suffix of l which occurs at least 10 times in the corpus, i.e. often enough for a reliable estimate in the presence of noise.5 l′ may cross word boundaries and, if our position is near a pause, may contain the blank at the lefthand end of the phrase. The length of l′ is limited to Nmax characters to reduce overfitting.

5 A single character is used if no suffix occurs 10 times.
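A minimal sketch of the decision rule from Section 4.2, together with the suffix-backoff estimate of P(b|l) from this section. The count tables and parameter names are assumptions for illustration; WordEnds uses Nmax = 5 in the bootstrapping pass and Nmax = 4 afterwards.

```python
def p_boundary_given_context(p_b_r, p_b_l, p_b):
    """P(b | r, l) from Section 4.2, assuming r and l are conditionally independent given b."""
    q = (p_b_r * p_b_l) / p_b + ((1 - p_b_r) * (1 - p_b_l)) / (1 - p_b)
    return (p_b_r * p_b_l) / (q * p_b)

def estimate_p_b_left(left_context, count_before_boundary, count_total, n_max=4, min_count=10):
    """Back off to the longest suffix l' of the left context seen at least min_count times."""
    for k in range(min(n_max, len(left_context)), 1, -1):
        suffix = left_context[-k:]
        if count_total.get(suffix, 0) >= min_count:
            return count_before_boundary.get(suffix, 0) / count_total[suffix]
    suffix = left_context[-1:]                       # fall back to a single character
    return count_before_boundary.get(suffix, 0) / max(count_total.get(suffix, 0), 1)

# A boundary is marked at a position when p_boundary_given_context(...) >= 0.5.
```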
Unfortunately, our input data has boundaries only at pauses (#). So applying this method to the raw input data produces estimates of P(#|r) and P(#|l). Because phrase boundaries are not a representative selection of word boundaries, P(#|r) and P(#|l) are not good estimates of P(b|r) and P(b|l). Moreover, initially, we don’t know P(b). Therefore, WordEnds bootstraps the estimation using a binary model of the relationship between word and phrase boundaries. To a first approximation, an ngram occurs at the end of a phrase if and only if it can occur at the end of a word. Since the magnitude of P(#, l) isn’t helpful, we simply check whether it is zero and, accordingly, set P(b|l) to either zero or a constant, very high value. In fact, real data contains phrase endings corrupted by disfluencies, foreign words, etc. So WordEnds actually sets P(b|l) high only if P(#|l) is above a threshold (currently 0.003) chosen to reflect the expected amount of corruption. In the equations from Section 4.2, if either P(b|r) or P(b|l) is zero, then P(b|r, l) is zero. If both values are very high, then Q is P(b|r)P(b|l) P(b) + ϵ, with ϵ very small. So P(b|r, l) is close to 1. So, in the bootstrapping phase, the test for marking a boundary is independent of P(b) and reduces to testing whether P(#|r) and P(#|l) are both over threshold. So, WordEnds estimates P(#|r) and P(#|l) from the input data, then uses this bootstrapping 5A single character is used if no suffix occurs 10 times. method (Nmax = 5) 6 to infer preliminary word boundaries. The preliminary boundaries are used to estimate P(b) and to re-estimate P(b|r) and P(b|l), using Nmax = 4. Final boundaries are then marked. 5 Mini-morph In a full understanding system, output of the word segmenter would be passed to morphological and local syntactic processing. Because the segmenter is myopic, certain errors in its output would be easier to fix with the wider perspective available to this later processing. Because standard models of morphological learning don’t address the interaction with word segmentation, WordEnds does a simple version of this repair process using a placeholder algorithm called Mini-morph. Mini-morph fixes two types of defects in the segmentation. Short fragments are created when two nearby boundaries represent alternative reasonable segmentations rather than parts of a common segmentation. For example, “treestake” has potential boundaries both before and after the s. This issue was noted by Harrington et al. (1988) who used a list of known very short words to detect these cases. See also (Cairns et al., 1997). Also, surrounding words sometimes mislead WordEnds into undersegmenting a phone sequence which has an “obvious” analysis using well-established component words. Mini-morph classifies each word in the segmentation as a fragment, a word that is reliable enough to use in subdividing other words, or unknown status. 6Values for Nmax were chosen empirically. They could be adjusted for differences in entropy rate, but this is very similar across the datasets in this paper. 134 Because it has only a feeble model of morphology, Mini-morph has been designed to be cautious: most words are classified as unknown. To classify a word, we compare its frequency w as a word in the segmentation to the frequencies p and s with which it occurs as a prefix and suffix of words in the segmentation (including itself). The word’s fragment ratio f is 2w p+s. 
Values of f are typically over 0.8 for freely occurring words, under 0.1 for fragments and strongly-attached affixes, and intermediate for clitics, some affixes, and words with restricted usage. However, most words haven't been seen enough times for f to be reliable. So a word is classified as a fragment if p + s ≥ 1000 and f ≤ 0.2. It is classified as a reliable word if p + s ≥ 50 and f ≥ 0.5.

To revise the input segmentation of the corpus, Mini-morph merges each fragment with an adjacent word if the newly-created merged word occurred at least 10 times in the input segmentation. When mergers with both adjacent words are possible, the algorithm alternates which to prefer. Each word is then subdivided into a sequence of reliable words, when possible. Because words are typically short and reliable words rare, a simple recursive algorithm is used, biased towards using shorter words.7 WordEnds calls Mini-morph twice, once to revise the preliminary segmentation produced by the bootstrapping phase and a second time to revise the final segmentation.

7 Subdivision is done only once for each word type.

6 Test corpora

WordEnds was tested on a diverse set of seven corpora, summarized in Table 1. Notice that the Arabic dataset has much longer words than those used by previous authors. Subsets were extracted from the larger corpora, to control for training set size. Goldwater's algorithm, the best performing of previous methods, was also tested on the small versions.8

8 It is too slow to run on the larger ones.

The first three corpora all use dictionary transcriptions with 1-character phone symbols. The Bernstein-Ratner (BR87) corpus was described above (Section 3.2). The Arabic corpus was created by removing punctuation and word boundaries from the Buckwalter version of the LDC's transcripts of Gulf Arabic Conversational Telephone Speech (Appen, 2006). Filled pauses and foreign words were kept as is. Word fragments were kept, but the telltale hyphens were removed. The Spanish corpus was produced in a similar way from the Callhome Spanish dataset (Wheatley, 1996), removing all accents. Orthographic forms were used for words without pronunciations (e.g. foreign, fragments).

The other two English dictionary transcriptions were produced in a similar way from the Buckeye corpus (Pitt et al., 2005; Pitt et al., 2007) and Mississippi State's corrected version of the LDC's Switchboard transcripts (Godfrey and Holliman, 1994; Deshmukh et al., 1998). These use a "readable phonetic" version of arpabet. Each phone is represented with a 1–2 character code, chosen to look like English orthography and to ensure that character sequences decode uniquely into phone sequences. Buckeye does not provide dictionary pronunciations for word fragments, so these were transcribed as "X". Switchboard was also transcribed using standard English orthography.

The Buckeye corpus also provides an accurate phonetic transcription of its data, showing allophonic variation (e.g. glottal stop, dental/nasal flaps), segment deletions, quality shifts/uncertainty, and nasalization. Some words are "massively" reduced (Johnson, 2003), going well beyond standard phonological rules. We represented its 64 phones using codes with 1–3 characters.

7 Test results

Table 2 presents test results for the small corpora. The numbers for the four English dictionary and orthographic transcriptions are very similar. This confirms the finding of Batchelder (2002) that variations in transcription method have only minor impacts on segmenter performance.
Performance seems to be largely determined by structural and lexical properties (e.g. word length, pause frequency). For the English dictionary datasets, the primary overall evaluation numbers (BF and WF) for the two algorithms differ less than the variation created by tweaking parameters or re-running Goldwater's (randomized) algorithm. Both degrade similarly on the phonetic version of Buckeye. The most visible overall difference is speed. WordEnds processes each small dataset in around 30-40 seconds. Goldwater requires around 2000 times as long: 14.5-32 hours, depending on the dataset.

However, WordEnds keeps affixes on words whereas Goldwater's algorithm removes them. This creates a systematic difference in the balance between boundary recall and precision. It also causes Goldwater's LF values to drop dramatically between the child-directed BR87 corpus and the adult-directed speech. For the same reason, WordEnds maintains good performance on the Arabic dataset, but Goldwater's performance (especially LF) is much worse. It is quite likely that Goldwater's algorithm is finding morphemes rather than words.

Datasets around 30K words are traditional for this task. However, a child learner has access to much more data, e.g. Weijer (1999) measured 1890 words per hour spoken near an infant. WordEnds performs much better when more data is available (Table 3). Numbers for even the harder datasets (Buckeye phonetic, Spanish) are starting to look promising. The Spanish results show that data with infrequent pauses can be handled in two very different ways: aggressive model-based segmentation (Goldwater) or feeding more data to a more cautious segmenter (WordEnds).

The two calls to Mini-morph sometimes make almost no difference, e.g. on the Arabic data. But it can make large improvements, e.g. BF +6.9%, WF +10.5%, LF +5.8% on the BR corpus. Table 3 shows details for the medium datasets. Its contribution seems to diminish as the datasets get bigger, e.g. improvements of BF +4.7%, WF +9.3%, LF +3.7% on the small dictionary Switchboard corpus but only BF +1.3%, WF +3.3%, LF +3.4% on the large one.

                              WordEnds                        Goldwater
corpus        transcription   BP    BR    BF    WF    LF      BP    BR    BF    WF    LF
BR87          dictionary      94.6  73.7  82.9  70.7  36.6    89.2  82.7  85.8  72.5  56.2
Switchboard   dictionary      91.3  80.5  85.5  72.0  37.4    73.9  93.5  82.6  65.8  27.8
Switchboard   orthographic    90.0  75.5  82.1  66.3  33.7    73.1  92.4  81.6  63.6  28.4
Buckeye       dictionary      89.7  82.2  85.8  72.3  37.4    74.6  94.8  83.5  68.1  26.7
Buckeye       phonetic        71.0  64.1  67.4  44.1  28.6    49.6  95.0  65.1  35.4  12.8
Arab          dictionary      88.1  68.5  77.1  56.6  40.4    47.5  97.4  63.8  32.6   9.5
Spanish       dictionary      89.3  48.5  62.9  38.7  16.6    69.2  92.8  79.3  57.9  17.0

Table 2: Results for WordEnds and Goldwater on the small test corpora. See Section 2.3 for definitions of metrics.

                              medium w/out morph     medium                 large
corpus        transcription   BF    WF    LF         BF    WF    LF         BF    WF    LF
Switchboard   dictionary      90.4  78.8  39.4       93.0  84.8  44.2       94.7  88.1  44.3
Switchboard   orthographic    89.6  77.4  37.3       91.6  81.8  41.1       94.1  87.0  41.1
Buckeye       dictionary      91.2  80.3  41.5       93.7  86.1  47.8       –     –     –
Buckeye       phonetic        72.1  48.4  27.1       75.0  54.2  28.2       –     –     –
Arab          dictionary      85.7  69.1  49.5       86.4  70.6  50.0       –     –     –
Spanish       dictionary      75.1  52.2  19.7       76.3  55.0  20.2       –     –     –

Table 3: Results for WordEnds on the medium and large datasets, also on the medium dataset without Mini-morph. See Table 1 for dataset sizes.

8 Some specifics of performance

Examining specific mistakes confirms that WordEnds does not systematically remove affixes on English dictionary data.
On the large Switchboard corpus, "-ed" is never removed from its stem and "-ing" is removed only 16 times. The Mini-morph postprocessor misclassifies, and thus segments off, some affixes that are homophonous with free-standing words, such as "-en"/"in" and "-es"/"is". A smarter model of morphology and local syntax could probably avoid this.

There is a visible difference between English "the" and the Arabic determiner "Al-". The English determiner is almost always segmented off. From the medium-sized Switchboard corpus, only 434 lexical items are posited with "the" attached to a following word. Arabic "Al" is sometimes attached and sometimes segmented off. In the medium Arabic dataset, the correct and computed lexicons contain similar numbers of words starting with Al (4873 and 4608), but there is only partial overlap (2797 words). Some of this disagreement involves foreign language nouns, which the markup in the original corpus separates from the determiner.9

9 The author does not read Arabic and, thus, is not in a position to explain why the annotators did this.

Mistakes on twenty specific items account for 24% of the errors on the large Switchboard corpus. The first two items, accounting for over 11% of the mistakes, involve splitting "uhhuh" and "umhum". Most of the rest involve merging common collocations (e.g. "a lot") or splitting common compounds that have a transparent analysis (e.g. "something").

9 Discussion and conclusions

Performance of WordEnds is much stronger than previously reported results, including good results on Arabic and promising results on accurate phonetic transcriptions. This is partly due to good algorithm design and partly due to using more training data. This sets a much higher standard for models of child language acquisition and also suggests that it is not crazy to speculate about inserting such an algorithm into the speech recognition pipeline.

Performance would probably be improved by better models of morphology and/or phonology. An ngram model of morpheme sequences (e.g. like Goldwater uses) might avoid some of the mistakes mentioned in Section 8. Feature-based or gestural phonology (Browman and Goldstein, 1992) might help model segmental variation. Finite-state models (Belz, 2000) might be more compact. Prosody, stress, and other sub-phonemic cues might disambiguate some problem situations (Hockema, 2006; Rytting, 2007; Salverda et al., 2003).

However, it is not obvious which of these approaches will actually improve performance. Additional phonetic features may not be easy to detect reliably, e.g. marking lexical stress in the presence of contrastive stress and utterance-final lengthening. The actual phonology of fast speech may not be quite what we expect, e.g. performance on the phonetic version of Buckeye was slightly improved by merging nasal flap with n, and dental flap with d and glottal stop. The sets of word initial and final segments may not form natural phonological classes, because they are partly determined by morphological and lexical constraints (Rytting, 2007). Moreover, the strong performance from the basic segmental model makes it hard to rule out the possibility that high performance could be achieved, even on data with phonetic variation, by throwing enough training data at a simple segmental algorithm.

Finally, the role of child-directed speech needs to be examined more carefully. Child-directed speech displays helpful features such as shorter phrases and fewer reductions (Bernstein Ratner, 1996; van de Weijer, 1999).
These features may make segmentation easier to learn, but the strong results presented here for adult-directed speech make it trickier to argue that this help is necessary for learning. Moreover, it is not clear how learning to segment child-directed speech might make it easier to learn to segment speech directed at adults or older children. It's possible that learning child-directed speech makes it easier to learn the basic principles of phonology, semantics, or higher-level linguistic structure. This might somehow feed back into learning segmentation. However, it's also possible that its only raison d'être is social: enabling earlier communication between children and adults.

Acknowledgments

Many thanks to the UIUC prosody group, Mitch Marcus, Cindy Fisher, and Sharon Goldwater.

References

Rie Kubota Ando and Lillian Lee. 2000. Mostly-Unsupervised Statistical Segmentation of Japanese. Proc ANLP-NAACL 2000:241–248. Appen Pty Ltd. 2006. Gulf Arabic Conversational Telephone Speech, Transcripts. Linguistic Data Consortium, Philadelphia. Eleanor Olds Batchelder. 2002. Bootstrapping the lexicon: A computational model of infant speech segmentation. Cognition 83, pp. 167–206.
Language Learning and Development, 2(2):119-146. Keith Johnson 2003. Massive reduction in conversational American English. Proc. of the Workshop on Spontaneous Speech: Data and Analysis. Peter W. Jusczyk and Richard N. Aslin. 1995. Infants’ Detection of the Sound Patterns of Words in Fluent Speech. Cognitive Psychology 29(1)1–23. Lori F. Lamel and Victor W. Zue. 1984. Properties of Consonant Sequences within Words and Across Word Boundaries. Proc. ICASSP 1984:42.3.1–42.3.4. James M. McQueen. 1998. Segmentation of Continuous Speech Using Phonotactics. Journal of Memory and Language 39:21–46. Mark Pitt, Keith Johnson, Elizabeth Hume, Scott Kiesling, and William Raymond. 2005. The Buckeye Corpus of Conversational Speech: Labeling Conventions and a Test of Transcriber Reliability. Speech Communication, 45, 90-95. M. A. Pitt, L. Dilley, K. Johnson, S. Kiesling, W. Raymond., E. Hume, and E. Fosler-Lussier. 2007. Buckeye Corpus of Conversational Speech (2nd release) Department of Psychology, Ohio State University, Columbus, OH C. Anton Rytting 2004. Greek Word Segmentation using Minimal Information. HLT-NAACL 2004, pp. 78–85. C. Anton Rytting 2007. Preserving Subsegmental Variation in Modelling Word Segmentation. Ph.D. thesis, Ohio State, Columbus OH. J. R. Saffran. 2001 Words in a sea of sounds: The output of statistical learning. Cognition 81:149-169. Anne Pier Salverda, Delphine Dahan, and James M. McQueen. 2003. The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Cognition 90:51–89. Amanda Seidl and Elizabeth K. Johnson. 2006. Infant Word Segmentation Revisited: Edge Alignment Facilitates Target Extraction. Developmental Science 9(6):565–573. W. J. Teahan, Y. Wen, R. McNab, I. H. Witten 2000 A compression-based algorithm for Chinese word segmentation. Computational Linguistics 26/3, pp. 375– 393. Anand Venkataraman. 2001. A Statistical Model for Word Discovery in Transcribed Speech. Computational Linguistics, 27(3):351–372. A. Weber. 2000 Phonotactic and acoustic cues for word segmentation. Proc. 6th Intern. Conf. on Spoken Language Processing, Vol. 3: 782-785. pp Joost van de Weijer 1999. Language Input for Word Discovery. Ph.D. thesis, Katholieke Universiteit Nijmegen. Barbara Wheatley. 1996. CALLHOME Spanish Transcripts. Linguistic Data Consortium, Philadelphia. 138
Proceedings of ACL-08: HLT, pages 139–147, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Re-examination of Query Expansion Using Lexical Resources Hui Fang Department of Computer Science and Engineering The Ohio State University Columbus, OH, 43210 [email protected] Abstract Query expansion is an effective technique to improve the performance of information retrieval systems. Although hand-crafted lexical resources, such as WordNet, could provide more reliable related terms, previous studies showed that query expansion using only WordNet leads to very limited performance improvement. One of the main challenges is how to assign appropriate weights to expanded terms. In this paper, we re-examine this problem using recently proposed axiomatic approaches and find that, with appropriate term weighting strategy, we are able to exploit the information from lexical resources to significantly improve the retrieval performance. Our empirical results on six TREC collections show that query expansion using only hand-crafted lexical resources leads to significant performance improvement. The performance can be further improved if the proposed method is combined with query expansion using co-occurrence-based resources. 1 Introduction Most information retrieval models (Salton et al., 1975; Fuhr, 1992; Ponte and Croft, 1998; Fang and Zhai, 2005) compute relevance scores based on matching of terms in queries and documents. Since various terms can be used to describe a same concept, it is unlikely for a user to use a query term that is exactly the same term as used in relevant documents. Clearly, such vocabulary gaps make the retrieval performance non-optimal. Query expansion (Voorhees, 1994; Mandala et al., 1999a; Fang and Zhai, 2006; Qiu and Frei, 1993; Bai et al., 2005; Cao et al., 2005) is a commonly used strategy to bridge the vocabulary gaps by expanding original queries with related terms. Expanded terms are often selected from either co-occurrence-based thesauri (Qiu and Frei, 1993; Bai et al., 2005; Jing and Croft, 1994; Peat and Willett, 1991; Smeaton and van Rijsbergen, 1983; Fang and Zhai, 2006) or handcrafted thesauri (Voorhees, 1994; Liu et al., 2004) or both (Cao et al., 2005; Mandala et al., 1999b). Intuitively, compared with co-occurrence-based thesauri, hand-crafted thesauri, such as WordNet, could provide more reliable terms for query expansion. However, previous studies failed to show any significant gain in retrieval performance when queries are expanded with terms selected from WordNet (Voorhees, 1994; Stairmand, 1997). Although some researchers have shown that combining terms from both types of resources is effective, the benefit of query expansion using only manually created lexical resources remains unclear. The main challenge is how to assign appropriate weights to the expanded terms. In this paper, we re-examine the problem of query expansion using lexical resources with the recently proposed axiomatic approaches (Fang and Zhai, 2006). The major advantage of axiomatic approaches in query expansion is to provide guidance on how to weight related terms based on a given term similarity function. In our previous study, a cooccurrence-based term similarity function was proposed and studied. 
In this paper, we study several term similarity functions that exploit various information from two lexical resources, i.e., WordNet 139 and dependency-thesaurus constructed by Lin (Lin, 1998), and then incorporate these similarity functions into the axiomatic retrieval framework. We conduct empirical experiments over several TREC standard collections to systematically evaluate the effectiveness of query expansion based on these similarity functions. Experiment results show that all the similarity functions improve the retrieval performance, although the performance improvement varies for different functions. We find that the most effective way to utilize the information from WordNet is to compute the term similarity based on the overlap of synset definitions. Using this similarity function in query expansion can significantly improve the retrieval performance. According to the retrieval performance, the proposed similarity function is significantly better than simple mutual information based similarity function, while it is comparable to the function proposed in (Fang and Zhai, 2006). Furthermore, we show that the retrieval performance can be further improved if the proposed similarity function is combined with the similarity function derived from co-occurrence-based resources. The main contribution of this paper is to reexamine the problem of query expansion using lexical resources with a new approach. Unlike previous studies, we are able to show that query expansion using only manually created lexical resources can significantly improve the retrieval performance. The rest of the paper is organized as follows. We discuss the related work in Section 2, and briefly review the studies of query expansion using axiomatic approaches in Section 3. We then present our study of using lexical resources, such as WordNet, for query expansion in Section 4, and discuss experiment results in Section 5. Finally, we conclude in Section 6. 2 Related Work Although the use of WordNet in query expansion has been studied by various researchers, the improvement of retrieval performance is often limited. Voorhees (Voorhees, 1994) expanded queries using a combination of synonyms, hypernyms and hyponyms manually selected from WordNet, and achieved limited improvement (i.e., around −2% to +2%) on short verbose queries. Stairmand (Stairmand, 1997) used WordNet for query expansion, but they concluded that the improvement was restricted by the coverage of the WordNet and no empirical results were reported. More recent studies focused on combining the information from both co-occurrence-based and handcrafted thesauri. Mandala et. al. (Mandala et al., 1999a; Mandala et al., 1999b) studied the problem in vector space model, and Cao et. al. (Cao et al., 2005) focused on extending language models. Although they were able to improve the performance, it remains unclear whether using only information from hand-crafted thesauri would help to improve the retrieval performance. Another way to improve retrieval performance using WordNet is to disambiguate word senses. Voorhees (Voorhees, 1993) showed that using WordNet for word sense disambiguation degrade the retrieval performance. Liu et. al. (Liu et al., 2004) used WordNet for both sense disambiugation and query expansion and achieved reasonable performance improvement. However, the computational cost is high and the benefit of query expansion using only WordNet is unclear. Ruch et. al. 
(Ruch et al., 2006) studied the problem in the domain of biology literature and proposed an argumentative feedback approach, where expanded terms are selected from only sentences classified into one of four disjunct argumentative categories. The goal of this paper is to study whether query expansion using only manually created lexical resources could lead to the performance improvement. The main contribution of our work is to show query expansion using only hand-crafted lexical resources is effective in the recently proposed axiomatic framework, which has not been shown in the previous studies.

3 Query Expansion in Axiomatic Retrieval Model

Axiomatic approaches have recently been proposed and studied to develop retrieval functions (Fang and Zhai, 2005; Fang and Zhai, 2006). The main idea is to search for a retrieval function that satisfies all the desirable retrieval constraints, i.e., axioms. The underlying assumption is that a retrieval function satisfying all the constraints would perform well empirically. Unlike other retrieval models, axiomatic retrieval models directly model the relevance with term level retrieval constraints. In (Fang and Zhai, 2005), several axiomatic retrieval functions have been derived based on a set of basic formalized retrieval constraints and an inductive definition of the retrieval function space. The derived retrieval functions are shown to perform as well as the existing retrieval functions with less parameter sensitivity. One of the components in the inductive definition is the primitive weighting function, which assigns the retrieval score to a single term document {d} for a single term query {q} based on

S(\{q\}, \{d\}) = \begin{cases} \omega(q) & q = d \\ 0 & q \neq d \end{cases}   (1)

where ω(q) is a term weighting function of q. A limitation of the primitive weighting function described in Equation 1 is that it can not bridge vocabulary gaps between documents and queries. To overcome this limitation, in (Fang and Zhai, 2006), we proposed a set of semantic term matching constraints and modified the previously derived axiomatic functions to make them satisfy these additional constraints. In particular, the primitive weighting function is generalized as S({q}, {d}) = ω(q) × f(s(q, d)), where s(q, d) is a semantic similarity function between two terms q and d, and f is a monotonically increasing function defined as

f(s(q, d)) = \begin{cases} 1 & q = d \\ \frac{s(q,d)}{s(q,q)} \times \beta & q \neq d \end{cases}   (2)

where β is a parameter that regulates the weighting of the original query terms and the semantically similar terms. We have shown that the proposed generalization can be implemented as a query expansion method. Specifically, the expanded terms are selected based on a term similarity function s and the weight of an expanded term t is determined by its term similarity with a query term q, i.e., s(q, t), as well as the weight of the query term, i.e., ω(q). Note that the weight of an expanded term t is ω(t) in traditional query expansion methods. In our previous study (Fang and Zhai, 2006), term similarity function s is derived based on the mutual information of terms over collections that are constructed under the guidance of a set of term semantic similarity constraints. The focus of this paper is to study and compare several term similarity functions exploiting the information from lexical resources, and evaluate their effectiveness in the axiomatic retrieval models.
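To make the role of Equation (2) concrete, the following sketch computes query-expansion weights from an arbitrary term similarity function. The helper functions, the default β, and the cut-off of five expansion terms per query term are illustrative assumptions, not the tuned values or the actual Lemur/Indri integration used in the experiments.

```python
def expansion_weights(query_terms, omega, sim, candidates, beta=0.6, k=5):
    """Weight expansion terms following Equation (2): an expanded term t of a
    query term q receives omega(q) * s(q, t) / s(q, q) * beta, while the
    original term q keeps its own weight omega(q).

    omega      -- term weighting function (e.g. an idf-like weight)
    sim        -- term similarity function s(., .)
    candidates -- dict mapping a query term to its candidate expansion terms
    """
    weights = {}
    for q in query_terms:
        weights[q] = max(weights.get(q, 0.0), omega(q))       # original query term
        self_sim = sim(q, q) or 1.0                           # guard against zero self-similarity
        scored = [(t, sim(q, t)) for t in candidates.get(q, []) if t != q]
        for t, s in sorted(scored, key=lambda x: x[1], reverse=True)[:k]:
            if s > 0:
                w = omega(q) * s / self_sim * beta            # Equation (2)
                weights[t] = max(weights.get(t, 0.0), w)
    return weights
```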
4 Term Similarity based on Lexical Resources

In this section, we discuss a set of term similarity functions that exploit the information stored in two lexical resources: WordNet (Miller, 1990) and the dependency-based thesaurus (Lin, 1998). The most commonly used lexical resource is WordNet (Miller, 1990), which is a hand-crafted lexical system developed at Princeton University. Words are organized into four taxonomies based on different parts of speech. Every node in WordNet is a synset, i.e., a set of synonyms. The definition of a synset, which is referred to as a gloss, is also provided. For a query term, all the synsets in which the term appears can be returned, along with the definitions of the synsets. We now discuss six possible term similarity functions based on the information provided by WordNet. Since the definition provides valuable information about the semantic meaning of a term, we can use the definitions of the terms to measure their semantic similarity. The more common words the definitions of two terms have, the more similar these terms are (Banerjee and Pedersen, 2005). Thus, we can compute the term semantic similarity based on synset definitions in the following way:

s_{def}(t_1, t_2) = \frac{|D(t_1) \cap D(t_2)|}{|D(t_1) \cup D(t_2)|}

where D(t) is the concatenation of the definitions for all the synsets containing term t and |D| is the number of words of the set D. Within a taxonomy, synsets are organized by their lexical relations. Thus, given a term, related terms can be found in the synsets related to the synsets containing the term. In this paper, we consider the following five word relations.
• Synonym (Syn): X and Y are synonyms if they are interchangeable in some context.
• Hypernym (Hyper): Y is a hypernym of X if X is a (kind of) Y.
• Hyponym (Hypo): X is a hyponym of Y if X is a (kind of) Y.
• Holonym (Holo): Y is a holonym of X if X is a part of Y.
• Meronym (Mero): X is a meronym of Y if X is a part of Y.
Since these relations are binary, we define the term similarity functions based on these relations in the following way:

s_R(t_1, t_2) = \begin{cases} \alpha_R & t_1 \in T_R(t_2) \\ 0 & t_1 \notin T_R(t_2) \end{cases}

where R ∈ {syn, hyper, hypo, holo, mero}, T_R(t) is the set of words that are related to term t based on the relation R, and the α_R are non-zero parameters that control the similarity between terms based on different relations. However, since the similarity values for all term pairs are the same, the values of these parameters can be ignored when we use Equation 2 in query expansion. Another lexical resource we study in the paper is the dependency-based thesaurus provided by Lin (Lin, 1998), available at http://www.cs.ualberta.ca/~lindek/downloads.htm. The thesaurus provides term similarities that are automatically computed based on dependency relationships extracted from a parsed corpus. We define a similarity function that can utilize this thesaurus as follows:

s_{Lin}(t_1, t_2) = \begin{cases} L(t_1, t_2) & (t_1, t_2) \in TP_{Lin} \\ 0 & (t_1, t_2) \notin TP_{Lin} \end{cases}

where L(t_1, t_2) is the similarity of the terms stored in the dependency-based thesaurus and TP_{Lin} is the set of all term pairs stored in the thesaurus. The similarity of two terms is assigned zero if the term pair cannot be found in the thesaurus. Since all the similarity functions discussed above capture different perspectives of term relations, we propose a simple strategy to combine them: the similarity of a term pair is the highest similarity value assigned to it by any of the above functions, which is shown as follows.
scombined(t1, t2) = maxR∈Rset(sR(t1, t2)), where Rset = {def, syn, hyper, hypo, holo, mero, Lin}. In summary, we have discussed eight possible similarity functions that exploit the information from the lexical resources. We then incorporate these similarity functions into the axiomatic retrieval models based on Equation 2, and perform query expansion based on the procedure described in Section 3. The empirical results are reported in Section 5. 5 Experiments In this section, we experimentally evaluate the effectiveness of query expansion with the term similarity functions discussed in Section 4 in the axiomatic framework. Experiment results show that the similarity function based on synset definitions is most effective. By incorporating this similarity function into the axiomatic retrieval models, we show that query expansion using the information from only WordNet can lead to significant improvement of retrieval performance, which has not been shown in the previous studies (Voorhees, 1994; Stairmand, 1997). 5.1 Experiment Design We conduct three sets of experiments. First, we compare the effectiveness of term similarity functions discussed in Section 4 in the context of query expansion. Second, we compare the best one with the term similarity functions derived from co-occurrence-based resources. Finally, we study whether the combination of term similarity functions from different resources can further improve the performance. All experiments are conducted over six TREC collections: ap88-89, doe, fr88-89, wt2g, trec7 and trec8. Table 1 shows some statistics of the collections, including the description, the collection size, 142 Table 1: Statistics of Test Collections Collection Description Size # Voc. # Doc. #query ap88-89 news articles 491MB 361K 165K 150 doe technical reports 184MB 163K 226K 35 fr88-89 government documents 469MB 204K 204K 42 trec7 ad hoc data 2GB 908K 528K 50 trec8 ad hoc data 2GB 908K 528K 50 wt2g web collections 2GB 1968K 247K 50 the vocabulary size, the number of documents and the number of queries. The preprocessing only involves stemming with Porter’s stemmer. We use WordNet 3.0 2, Lemur Toolkit 3 and TrecWN library 4 in experiments. The results are evaluated with both MAP (mean average precision) and gMAP (geometric mean average precision) (Voorhees, 2005), which emphasizes the performance of difficulty queries. There is one parameter β in the query expansion method presented in Section 3. We tune the value of β and report the best performance. The parameter sensitivity is similar to the observations described in (Fang and Zhai, 2006) and will not be discussed in this paper. In all the result tables, ‡ and † indicate that the performance difference is statistically significant according to Wilcoxon signed rank test at the level of 0.05 and 0.1 respectively. We now explain the notations of different methods. BL is the baseline method without query expansion. In this paper, we use the best performing function derived in axiomatic retrieval models, i.e, F2-EXP in (Fang and Zhai, 2005) with a fixed parameter value (b = 0.5). QEX is the query expansion method with term similarity function sX, where X could be Def., Syn., Hyper., Hypo., Mero., Holo., Lin and Combined. Furthermore, we examine the query expansion method using co-occurrence-based resources. In particular, we evaluate the retrieval performance using the following two similarity functions: sMIBL and sMIImp. Both functions are based on the mutual information of terms in a set of documents. 
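For concreteness, here is a minimal sketch of the definition-overlap similarity sdef and the max-combined similarity of Section 4. It assumes NLTK's WordNet interface as a stand-in for the WordNet 3.0 setup used in the experiments, so tokenization and other details are illustrative only.

```python
from nltk.corpus import wordnet as wn

def gloss_words(term):
    """All words appearing in the definitions (glosses) of the synsets of a term."""
    words = set()
    for synset in wn.synsets(term):
        words.update(synset.definition().lower().split())
    return words

def sim_def(t1, t2):
    """Definition-overlap similarity: |D(t1) & D(t2)| / |D(t1) | D(t2)|."""
    d1, d2 = gloss_words(t1), gloss_words(t2)
    if not d1 or not d2:
        return 0.0
    return len(d1 & d2) / len(d1 | d2)

def sim_combined(t1, t2, relation_sims=(), lin_sim=None):
    """Take the maximum over s_def, the relation-based similarities and s_Lin."""
    scores = [sim_def(t1, t2)]
    scores.extend(f(t1, t2) for f in relation_sims)   # binary relation-based functions
    if lin_sim is not None:
        scores.append(lin_sim(t1, t2))                # lookup in Lin's thesaurus
    return max(scores)
```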
sMIBL uses the collection itself to compute the mutual information, while sMIImp uses the working sets constructed based on several constraints (Fang and Zhai, 2006). (The resources mentioned above are available at http://wordnet.princeton.edu/ for WordNet, http://www.lemurproject.org/ for the Lemur Toolkit, and http://l2r.cs.uiuc.edu/~cogcomp/software.php for the TrecWN library.) The mutual information of two terms t1 and t2 in collection C is computed as follows (van Rijsbergen, 1979):

I(X_{t_1}, X_{t_2}) = \sum_{X_{t_1}, X_{t_2}} p(X_{t_1}, X_{t_2}) \log \frac{p(X_{t_1}, X_{t_2})}{p(X_{t_1})\, p(X_{t_2})}

where X_{t_i} is a binary random variable corresponding to the presence/absence of term t_i in each document of collection C.

5.2 Effectiveness of Lexical Resources

We first compare the retrieval performance of query expansion with different similarity functions using short keyword (i.e., title-only) queries, because query expansion techniques are often more effective for shorter queries (Voorhees, 1994; Fang and Zhai, 2006). The results are presented in Table 2. It is clear that query expansion with these functions can improve the retrieval performance, although the performance gains achieved by different functions vary a lot. In particular, we make the following observations. First, the similarity function based on synset definitions is the most effective one. QEdef significantly improves the retrieval performance for all the data sets. For example, on trec7, it improves the performance from 0.186 to 0.216. As far as we know, none of the previous studies showed such significant performance improvement by using only WordNet as a query expansion resource. Second, the similarity functions based on term relations are less effective compared with the definition-based similarity function. We think that the worse performance is related to the following two reasons: (1) The similarity functions based on relations are binary, which is not a good way to model term similarities.
(2) The relations are limited by the part 143 Table 2: Performance of query expansion using lexical resources (short keyword queries) trec7 trec8 wt2g MAP gMAP MAP gMAP MAP gMAP BL 0.186 0.083 0.250 0.147 0.282 0.188 QEdef 0.216‡ 0.105‡ 0.266‡ 0.164‡ 0.301‡ 0.210‡ (+16%) (+27%) (+6.4%) (+12%) (+6.7%) (+12%) QEsyn 0.194 0.085‡ 0.252† 0.150† 0.287‡ 0.194‡ (+4.3%) (+2.4%) (+0.8%) (+2.0%) (+1.8%) (+3.2%) QEhyper 0.186 0.086 0.250 0.152 0.286† 0.192† (0%) (+3.6%) (0%) (+3.4%) (+1.4%) (+2.1%) QEhypo 0.186† 0.085‡ 0.250 0.147 0.282† 0.190 (0%) (+2.4%) (0%) (0%) (0%) (+1.1%) QEmero 0.187‡ 0.084‡ 0.250 0.147 0.282 0.189 (+0.5%) (+1.2%) (0%) (0%) (0%) (+0.5%) QEholo 0.191‡ 0.085‡ 0.250 0.147 0.282 0.188 (+2.7%) (+2.4%) (0%) (0%) (0%) (0%) QELin 0.193‡ 0.092‡ 0.256‡ 0.156‡ 0.290‡ 0.200‡ (+3.7%) (+11%) (+2.4%) (+6.1%) (+2.8%) (+6.4%) QECombined 0.214‡ 0.104‡ 0.267‡ 0.165‡ 0.300‡ 0.208‡ (+15%) (+25%) (+6.8%) (+12%) (+6.4%) (+10.5%) ap88-89 doe fr88-89 MAP gMAP MAP gMAP MAP gMAP BL 0.220 0.074 0.174 0.069 0.222 0.062 QEdef 0.254‡ 0.088‡ 0.181‡ 0.075‡ 0.225‡ 0.067‡ (+15%) (+19%) (+4%) (+10%) (+1.4%) (+8.1%) QEsyn 0.222‡ 0.077‡ 0.174 0.074 0.222 0.065 (+0.9%) (+4.1%) (0%) (+7.3%) (0%) (+4.8%) QEhyper 0.222‡ 0.074 0.175 0.070 0.222 0.062 (+0.9%) (0%) (+0.5%) (+1.5%) (0%) (0%) QEhypo 0.222‡ 0.076‡ 0.176† 0.073† 0.222 0.062 (+0.9%) (+2.7%) (+1.1%) (+5.8%) (0%) (0%) QEmero 0.221 0.074† 0.174† 0.070† 0.222 0.062 (+0.45%) (0%) (0%) (+1.5%) (0%) (0%) QEholo 0.221 0.076 0.177† 0.073 0.222 0.062 (+0.45%) (+2.7%) (+1.7%) (+5.8%) (0%) (0%) QELin 0.245‡ 0.082‡ 0.178 0.073 0.222 0.067† (+11%) (+11%) (+2.3%) (+5.8%) (0%) (+8.1%) QECombined 0.254‡ 0.085‡ 0.179† 0.074† 0.223† 0.065 (+15%) (+12%) (+2.9%) (+7.3%) (+0.5%) (+4.3%) 144 Table 3: Performance comparison of hand-crafted and co-occurrence-based thesauri (short keyword queries) Data MAP gMAP QEdef QEMIBL QEMIImp QEdef QEMIBL QEMIImp ap88-89 0.254 0.233‡ 0.265‡ 0.088 0.081‡ 0.089‡ doe 0.181 0.175† 0.183 0.075 0.071† 0.078 fr88-89 0.225 0.222‡ 0.227† 0.067 0.063 0.071‡ trec7 0.216 0.195‡ 0.236‡ 0.105 0.089‡ 0.097 trec8 0.266 0.250‡ 0.278 0.164 0.148‡ 0.172 wt2g 0.301 0.311 0.320‡ 0.210 0.218 0.219‡ of speech of the terms, because two terms in WordNet are related only when they have the same part of speech tags. However, definition-based similarity function does not have such a limitation. Third, the similarity function based on Lin’s thesaurus is more effective than those based on term relations from the WordNet, while it is less effective compared with the definition-based similarity function, which might be caused by its smaller coverage. Finally, combining different WordNet-based similarity functions does not help, which may indicate that the expanded terms selected by different functions are overlapped. 5.3 Comparison with Co-occurrence-based Resources As shown in Table 2, the similarity function based on synset definitions, i.e., sdef, is most effective. We now compare the retrieval performance of using this similarity function with that of using the mutual information based functions, i.e., sMIBL and sMIImp. The experiments are conducted over two types of queries, i.e. short keyword (keyword title) and short verbose (one sentence description) queries. The results for short keyword queries are shown in Table 3. The retrieval performance of query expansion based on sdef is significantly better than that based on sMIBL on almost all the data sets, while it is slightly worse than that based on sMIImp on some data sets. 
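For reference, the document-level mutual information underlying sMIBL and sMIImp can be computed from binary presence counts; a small sketch with a hypothetical input format is given below (the paper does not specify how zero cells are handled, so they are simply skipped here).

```python
import math

def mutual_information(t1, t2, doc_term_sets):
    """I(X_t1, X_t2) over a collection, where X_t is the binary presence/absence
    of term t in a document.  doc_term_sets is an iterable of per-document term sets."""
    n = n1 = n2 = n12 = 0
    for terms in doc_term_sets:
        n += 1
        in1, in2 = t1 in terms, t2 in terms
        n1 += in1
        n2 += in2
        n12 += in1 and in2
    joint = {(0, 0): n - n1 - n2 + n12, (0, 1): n2 - n12,
             (1, 0): n1 - n12, (1, 1): n12}
    marg1 = {0: n - n1, 1: n1}
    marg2 = {0: n - n2, 1: n2}
    mi = 0.0
    for (x1, x2), count in joint.items():
        if count == 0:                      # treat 0 * log 0 as 0
            continue
        p_joint = count / n
        p1, p2 = marg1[x1] / n, marg2[x2] / n
        mi += p_joint * math.log(p_joint / (p1 * p2))
    return mi
```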
We can make the similar observation from the results for short verbose queries as shown in Table 4. One advantage of sdef over sMIImp is the computational cost, because sdef can be computed offline in advance while sMIImp has to be computed online from query-dependent working sets which takes much more time. The low computational cost and high retrieval performance make sdef more attractive in the real world applications. 5.4 Additive Effect Since both types of similarity functions are able to improve retrieval performance, we now study whether combining them could lead to better performance. Table 5 shows the retrieval performance of combining both types of similarity functions for short keyword queries. The results for short verbose queries are similar. Clearly, combining the similarity functions from different resources could further improve the performance. 6 Conclusions Query expansion is an effective technique in information retrieval to improve the retrieval performance, because it often can bridge the vocabulary gaps between queries and documents. Intuitively, hand-crafted thesaurus could provide reliable related terms, which would help improve the performance. However, none of the previous studies is able to show significant performance improvement through query expansion using information only from manually created lexical resources. In this paper, we re-examine the problem of query expansion using lexical resources in recently proposed axiomatic framework and find that we are able to significantly improve retrieval performance through query expansion using only hand-crafted lexical resources. In particular, we first study a few term similarity functions exploiting the information from two lexical resources: WordNet and dependency-based thesaurus created by Lin. We then incorporate the similarity functions with the query expansion method in the axiomatic retrieval 145 Table 4: Performance Comparison (MAP, short verbose queries) Data BL QEdef QEMIBL QEMIImp ap88-89 0.181 0.220‡ (21.5%) 0.205‡ (13.3%) 0.230‡ (27.1%) doe 0.109 0.121‡ (11%) 0.119 (9.17%) 0.117 (7.34%) fr88-89 0.146 0.164‡ (12.3%) 0.162‡ (11%) 0.164‡ (12.3%) trec7 0.184 0.209‡ (13.6%) 0.196 (6.52%) 0.224‡(21.7%) trec8 0.234 0.238‡(1.71%) 0.235 (0.4%) 0.243† (3.85%) wt2g 0.266 0.276 (3.76%) 0.276† (3.76%) 0.282‡ (6.02%) Table 5: Additive Effect (MAP, short keyword queries) ap88-89 doe fr88-89 trec7 trec8 wt2g QEMIBL 0.233 0.175 0.222 0.195 0.250 0.311 QEdef+MIBL 0.257‡ 0.183‡ 0.225‡ 0.217‡ 0.267‡ 0.320‡ QEMIImp 0.265 0.183 0.227 0.236 0.278 0.320 QEdef+MIImp 0.269‡ 0.187 0.232‡ 0.237‡ 0.280† 0.322† models. Systematical experiments have been conducted over six standard TREC collections and show promising results. All the proposed similarity functions improve the retrieval performance, although the degree of improvement varies for different similarity functions. Among all the functions, the one based on synset definition is most effective and is able to significantly and consistently improve retrieval performance for all the data sets. This similarity function is also compared with some similarity functions using mutual information. Furthermore, experiment results show that combining similarity functions from different resources could further improve the performance. Unlike previous studies, we are able to show that query expansion using only manually created thesauri can lead to significant performance improvement. 
The main reason is that the axiomatic approach provides guidance on how to appropriately assign weights to expanded terms. There are many interesting future research directions based on this work. First, we will study the same problem in some specialized domain, such as biology literature, to see whether the proposed approach could be generalized to the new domain. Second, the fact that using axiomatic approaches to incorporate linguistic information can improve retrieval performance is encouraging. We plan to extend the axiomatic approach to incorporate more linguistic information, such as phrases and word senses, into retrieval models to further improve the performance. Acknowledgments We thank ChengXiang Zhai, Dan Roth, Rodrigo de Salvo Braz for valuable discussions. We also thank three anonymous reviewers for their useful comments. References J. Bai, D. Song, P. Bruza, J. Nie, and G. Cao. 2005. Query expansion using term relationships in language models for information retrieval. In Fourteenth International Conference on Information and Knowledge Management (CIKM 2005). S. Banerjee and T. Pedersen. 2005. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the 18th International Joint Conference on Artificial Intelligence. G. Cao, J. Nie, and J. Bai. 2005. Integrating word relationships into language models. In Proceedings of the 2005 ACM SIGIR Conference on Research and Development in Information Retrieval. H. Fang and C. Zhai. 2005. An exploration of axiomatic approaches to information retrieval. In Proceedings of the 2005 ACM SIGIR Conference on Research and Development in Information Retrieval. H. Fang and C. Zhai. 2006. Semantic term matching in axiomatic approaches to information retrieval. In Proceedings of the 2006 ACM SIGIR Conference on Research and Development in Information Retrieval. 146 N. Fuhr. 1992. Probabilistic models in information retrieval. The Computer Journal, 35(3):243–255. Y. Jing and W. Bruce Croft. 1994. An association thesaurus for information retreival. In Proceedings of RIAO. D. Lin. 1998. An information-theoretic definition of similarity. In Proceedings of International Conference on Machine Learning (ICML). S. Liu, F. Liu, C. Yu, and W. Meng. 2004. An effective approach to document retrieval via utilizing wordnet and recognizing phrases. In Proceedings of the 2004 ACM SIGIR Conference on Research and Development in Information Retrieval. R. Mandala, T. Tokunaga, and H. Tanaka. 1999a. Ad hoc retrieval experiments using wornet and automatically constructed theasuri. In Proceedings of the seventh Text REtrieval Conference (TREC7). R. Mandala, T. Tokunaga, and H. Tanaka. 1999b. Combining multiple evidence from different types of thesaurus for query expansion. In Proceedings of the 1999 ACM SIGIR Conference on Research and Development in Information Retrieval. G. Miller. 1990. Wordnet: An on-line lexical database. International Journal of Lexicography, 3(4). H. J. Peat and P. Willett. 1991. The limitations of term co-occurence data for query expansion in document retrieval systems. Journal of the american society for information science, 42(5):378–383. J. Ponte and W. B. Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the ACM SIGIR’98, pages 275–281. Y. Qiu and H.P. Frei. 1993. Concept based query expansion. In Proceedings of the 1993 ACM SIGIR Conference on Research and Development in Information Retrieval. P. Ruch, I. Tbahriti, J. Gobeill, and A. R. Aronson. 2006. 
Argumentative feedback: A linguistically-motivated term expansion for information retrieval. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 675–682. G. Salton, C. S. Yang, and C. T. Yu. 1975. A theory of term importance in automatic text analysis. Journal of the American Society for Information Science, 26(1):33–44, Jan-Feb. A. F. Smeaton and C. J. van Rijsbergen. 1983. The retrieval effects of query expansion on a feedback document retrieval system. The Computer Journal, 26(3):239–246. M. A. Stairmand. 1997. Textual context analysis for information retrieval. In Proceedings of the 1997 ACM SIGIR Conference on Research and Development in Information Retrieval. C. J. van Rijsbergen. 1979. Information Retrieval. Butterworths. E. M. Voorhees. 1993. Using wordnet to disambiguate word sense for text retrieval. In Proceedings of the 1993 ACM SIGIR Conference on Research and Development in Information Retrieval. E. M. Voorhees. 1994. Query expansion using lexicalsemantic relations. In Proceedings of the 1994 ACM SIGIR Conference on Research and Development in Information Retrieval. E. M. Voorhees. 2005. Overview of the trec 2005 robust retrieval track. In Notebook of the Thirteenth Text REtrieval Conference (TREC2005). 147
Proceedings of ACL-08: HLT, pages 148–155, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Selecting Query Term Alterations for Web Search by Exploiting Query Contexts Guihong Cao Stephen Robertson Jian-Yun Nie Dept. of Computer Science and Operations Research Microsoft Research at Cambridge Dept. of Computer Science and Operations Research University of Montreal, Canada Cambridge, UK University of Montreal, Canada [email protected] [email protected] [email protected] Abstract Query expansion by word alterations (alternative forms of a word) is often used in Web search to replace word stemming. This allows users to specify particular word forms in a query. However, if many alterations are added, query traffic will be greatly increased. In this paper, we propose methods to select only a few useful word alterations for query expansion. The selection is made according to the appropriateness of the alteration to the query context (using a bigram language model), or according to its expected impact on the retrieval effectiveness (using a regression model). Our experiments on two TREC collections will show that both methods only select a few expansion terms, but the retrieval effectiveness can be improved significantly. 1 Introduction Word stemming is a basic NLP technique used in most of Information Retrieval (IR) systems. It transforms words into their root forms so as to increase the chance to match similar words/terms that are morphological variants. For example, with stemming, “controlling” can match “controlled” because both have the same root “control”. Most stemmers, such as the Porter stemmer (Porter, 1980) and Krovetz stemmer (Krovetz, 1993), deal with stemming by stripping word suffixes according to a set of morphological rules. Rule-based approaches are intuitive and easy to implement. However, while in general, most words can be stemmed correctly; there is often erroneous stemming that unifies unrelated words. For instance, “jobs” is stemmed to “job” in both “find jobs in Apple” and “Steve Jobs at Apple”. This is particularly problematic in Web search, where users often use special or new words in their queries. A standard stemmer such as Porter’s will wrongly stem them. To better determine stemming rules, Xu and Croft (1998) propose a selective stemming method based on corpus analysis. They refine the Porter stemmer by means of word clustering: words are first clustered according to their co-occurrences in the text collection. Only word variants belonging to the same cluster will be conflated. Despite this improvement, the basic idea of word stemming is to transform words in both documents and queries to a standard form. Once this is done, there is no means for users to require a specific word form in a query – the word form will be automatically transformed, otherwise, it will not match documents. This approach does not seem to be appropriate for Web search, where users often specify particular word forms in their queries. An example of this is a quoted query such as “Steve Jobs”, or “US Policy”. If documents are stemmed, many pages about job offerings or US police may be returned (“policy” conflates with “police” in Porter stemmer). Another drawback of stemming is that it usually enhances recall, but may hurt precision (Kraaij and Pohlmann, 1996). However, general Web search is basically a precision-oriented task. One alternative approach to word stemming is to do query expansion at query time. 
The original query terms are expanded by their related forms having the same root. All expansions can be combined by the Boolean operator “OR”. For example, 148 the query “controlling acid rain” can be expanded to “(control OR controlling OR controller OR controlled OR controls) (acid OR acidic OR acidify) (rain OR raining OR rained OR rains)”. We will call each such expansion term an alteration to the original query term. Once a set of possible alterations is determined, the simplest approach to perform expansion is to add all possible alterations. We call this approach Naive Expansion. One can easily show that stemming at indexing time is equivalent to Naive Expansion at retrieval time. This approach has been adopted by most commercial search engines (Peng et al., 2007). However, the expansion approaches proposed previously can have several serious problems: First, they usually do not consider expansion ambiguity – each query term is usually expanded independently. However, some expansion terms may not be appropriate. The case of “Steve Jobs” is one such example, for which the word “job” can be proposed as an expansion term. Second, as each query term may have several alterations, the naïve approach using all the alterations will create a very long query. As a consequence, query traffic (the time required for the evaluation of a query) is greatly increased. Query traffic is a critical problem, as each search engine serves millions of users at the same time. It is important to limit the query traffic as much as possible. In practice, we can observe that some word alterations are irrelevant and undesirable (as in the “Steve Jobs” case), and some other alterations have little impact on the retrieval effectiveness (for example, if we expand a word by a rarely used word form). In this study, we will address these two problems. Our goal is to select only appropriate word alterations to be used in query expansion. This is done for two purposes: On the one hand, we want to limit query traffic as much as possible when query expansion is performed. On the other hand, we also want to remove irrelevant expansion terms so that fewer irrelevant documents will be retrieved, thereby improve the retrieval effectiveness. To deal with the two problems we mentioned above, we will propose two methods to select alterations. In the first method, we make use of the query context to select only the alterations that fit the query. The query context is modeled by a bigram language model. To reduce query traffic, we select only one alteration for each query term, which is the most coherent with the bigram model. We call this model Bigram Expansion. Despite the fact that this method adds far fewer expansion terms than the naïve expansion, our experiments will show that we can achieve comparable or even better retrieval effectiveness. Both the Naive Expansion and the Bigram Expansion determine word alterations solely according to general knowledge about the language (bigram model or morphological rules), and no consideration about the possible effect of the expansion term is made. In practice, some alterations will have virtually no impact on retrieval effectiveness. They can be ignored. Therefore, in our second method, we will try to predict whether an alteration will have some positive impact on retrieval effectiveness. Only the alterations with positive impact will be retained. In this paper, we will use a regression model to predict the impact on retrieval effectiveness. 
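As a point of reference before the two proposed methods are detailed, a toy sketch of the naive expansion described above might look like the following; the alteration lists are illustrative, not the ones produced by the corpus analysis of Section 3.

```python
def naive_expansion(query, alterations):
    """Expand every query term with all of its alterations, OR-ed together."""
    groups = []
    for term in query.split():
        variants = [term] + alterations.get(term, [])
        groups.append("(" + " OR ".join(variants) + ")")
    return " ".join(groups)

alterations = {                      # illustrative alteration lists
    "controlling": ["control", "controlled", "controller", "controls"],
    "acid": ["acidic", "acidify"],
    "rain": ["rains", "rained", "raining"],
}
print(naive_expansion("controlling acid rain", alterations))
# (controlling OR control OR controlled OR controller OR controls)
#   (acid OR acidic OR acidify) (rain OR rains OR rained OR raining)
```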
Compared to the bigram expansion method, the regression method results in even fewer alterations, but experiments show that the retrieval effectiveness is even better. Experiments will be conducted on two TREC collections, Gov2 data for Web Track and TREC6&7&8 for ad-hoc retrieval. The results show that the two methods we propose both outperform the original queries significantly with less than two alterations per query on average. Compared to the Naive Expansion method, the two methods can perform at least equally well, while query traffic is dramatically reduced. In the following section, we provide a brief review of related work. Section 3 shows how to generate alteration candidates using a similar approach to Xu and Croft’s corpus analysis (1998). In section 4 and 5, we describe the Bigram Expansion method and Regression method respectively. Section 6 presents some experiments on TREC benchmarks to evaluate our methods. Section 7 concludes this paper and suggests some avenues for future work. 2 Related Work Many stemmers have been implemented and used as standard processing in IR. Among them, the Porter stemmer (Porter, 1980) is the most widely used. It strips term suffixes step-by-step according to a set of morphological rules. However, the Porter stemmer sometimes wrongly transforms a term into an unrelated root. For example, it will unify 149 “news” and “new”, “execute” and “executive”. On the other hand, it may miss some conflations, such as “mice” and “mouse”, “europe” and “european”. Krovetz (1993) developed another stemmer, which uses a machine-readable dictionary, to improve the Porter stemmer. It avoids some of the Porter stemmer’s wrong stripping, but does not produce consistent improvement in IR experiments. Both stemmers use generic rules for English to strip each word in isolation. In practice, the required stemming may vary from one text collection to another. Therefore, attempts have been made to use corpus analysis to improve existing rule-based stemmers. Xu and Croft (1998) create equivalence clusters of words which are morphologically similar and occur in similar contexts. As we stated earlier, the stemming-based IR approaches are not well suited to Web search. Query expansion has been used as an alternative (Peng et al. 2007). To limit the number of expansion terms, and thus the query traffic, Peng et al. only use alterations for some of the query words: They segment each query into phrases and only the head word in each phrase is expanded. The assumptions are: 1)Queries issued in Web search often consist of noun phrases. 2) Only the head word in the noun phrase varies in form and needs to be expanded. However, both assumptions may be questionable. Their experiments did not show that the two assumptions hold. Stemming is related to query expansion or query reformulation (Jones et al., 2006; Anick, 2003; Xu and Croft, 1996), although the latter is not limited to word variants. If the expansion terms used are those that are variant forms of a word, then query expansion can produce the same effect as word stemming. However, if we add all possible word alterations, query expansion/reformulation will run the risk of adding many unrelated terms to the original query, which may result in both heavy traffic and topic drift. Therefore, we need a way to select the most appropriate expansion terms. In (Peng et al. 2007), a bigram language model is used to determine the alteration of the head word that best fits the query. 
In this paper, one of the proposed methods will also use a bigram language model of the query to determine the appropriate alteration candidates. However, in our approach, alterations are not limited to head words. In addition, we will also propose a supervised learning method to predict if an alteration will have a positive impact on retrieval effectiveness. To our knowledge, no previous method uses the same approach. In the following sections, we will describe our approach, which consists of two steps: the generation of alteration candidates, and the selection of appropriate alterations for a query. The first step is query-independent using corpus analysis, while the second step is query-dependent. The selected word alterations will be OR-ed with the original query words. 3 Generating Alteration Candidates Our method to generate alteration candidates can be described as follows. First, we do word clustering using a Porter stemmer. All words in the vocabulary sharing the same root form are grouped together. Then we do corpus analysis to filter out the words which are clustered incorrectly, according to word distributional similarity, following (Xu and Croft, 1998; Lin 1998). The rationale behind this is that words sharing the same meaning tend to occur in the same contexts. The context of each word in the vocabulary is represented by a vector containing the frequencies of the context words which co-occur with the word within a predefined window in a training corpus. The window size is set empirically at 3 words and the training corpus is about 1/10 of the GOV2 corpus (see section 5 for details about the collection). Similarity is measured by the cosine distance between two vectors. For each word, we select at most 5 similar words as alteration candidates. In the next sections, we will further consider ways to select appropriate alterations according to the query. 4 Bigram Expansion Model for Alteration Selection In this section, we try to select the most suitable alterations according to the query context. The query context is modeled by a bigram language model as in (Peng et al. 2007). Given a query described by a sequence of words, we consider each of the query word as representing a concept ci. In addition to the given word form, ci can also be expressed by other alternative forms. However, the appropriate alterations do not only depend on the original word of ci, but also on other query words or their alterations. 150 Figure 1: Considering all Combinations to Calculate the Plausibility of Alterations Accordingly, a confidence weight is determined for each alteration candidate. For example, in the query “Steve Jobs at Apple”, the alteration “job” of “jobs” should have a low confidence; while in the query “finding jobs in Apple”, it should have a high confidence. One way to measure the confidence of an alteration is the plausibility of its appearing in the query. Since each concept may be expressed by several alterations, we consider all the alterations of context concepts when calculating the plausibility of a given word. Suppose we have the query “controlling acid rain”. The second concept has two alterations - “acidify” and “acidic”. For each of the alterations, our method will consider all the combinations with other words, as illustrated in figure 1, where each combination is shown as a path. More precisely, for a query of n words (or their corresponding concepts), let ei,j∈ci, j=1,2,…,|ci| be the alterations of concept ci. 
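Before the plausibility computation of this section is developed, here is a minimal sketch of the candidate-generation step of Section 3: words are grouped by Porter stem and then filtered by the cosine similarity of their context vectors. Building the context vectors from the corpus is assumed to happen elsewhere; the data structures are illustrative.

```python
import math
from collections import defaultdict
from nltk.stem import PorterStemmer

def alteration_candidates(vocabulary, context_vectors, max_alterations=5):
    """Group words by Porter stem, keep the most distributionally similar ones.

    context_vectors -- word -> {context word: frequency}, built from a corpus
    """
    stemmer = PorterStemmer()
    clusters = defaultdict(list)
    for word in vocabulary:
        clusters[stemmer.stem(word)].append(word)

    def cosine(u, v):
        dot = sum(u[k] * v.get(k, 0) for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    candidates = {}
    for word in vocabulary:
        others = [w for w in clusters[stemmer.stem(word)] if w != word]
        scored = [(w, cosine(context_vectors.get(word, {}),
                             context_vectors.get(w, {}))) for w in others]
        scored.sort(key=lambda x: x[1], reverse=True)
        candidates[word] = [w for w, _ in scored[:max_alterations]]
    return candidates
```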
Then we have: ∑ ∑ ∑ ∑ ∑ = = = = − = + − − + + = | | 1 , , , ,2 ,1 | | 1 ,1 | | 1 ,2 | | 1 ,1 | | 1 ,1 ) ,..., ,..., , ( ... ... ... ) ( 2 1 1 1 2 2 1 1 1 1 n n n i i i i i c j n j n j i j j c j c j c j i c j i ij e e e e P e P (1) In equation 1, n i j n j i j j e e e e , , ,2 ,1 ,..., ,..., , 2 1 is a path passing through ei,j. For simplicity, we abbreviate it as e1e2…ei…en. In this work, we used bigram language model to calculate the probability of each path. Then we have: ∏= − = n k k k n i e e P e P e e e e P 2 1 1 2 1 ) | ( ) ( ) ,..., ,..., , ( (2) P(ek|ek-1) is estimated with a back-off bigram language model (Goodman, 2001). In the experiments with TREC6&7&8, we train the model with all text collections; while in the experiments with Gov2 data, we only used about 1/10 of the GOV2 data to train the bigram model because the whole Gov2 collection is too large. Directly calculating P(eij) by summing the probabilities of all paths passing through eij is an NP problem (Rabiner, 1989), and is intractable if the query is long. Therefore, we use the forwardbackward algorithm (Bishop, 2006) to calculate P(eij) in a more efficient way. After calculating P(eij) for each ci, we select one alteration which has the highest probability. We limit the number of additional alterations to 1 in order to limit query traffic. Our experiments will show that this is often sufficient. 5 Regression Model for Alteration Selection None of the previous selection methods considers how well an alteration would perform in retrieval. The Bigram Expansion model assumes that the query replaced with better alterations should have a higher likelihood. This approach belongs to the family of unsupervised learning. In this section, we introduce a method belonging to supervised learning family. This method develops a regression model from a set of training data, and it is capable of predicting the expected change in performance when the original query is augmented by this alteration. The performance change is measured by the difference in the Mean Average Precision (MAP) between the augmented and the original query. The training instances are defined by the original query string, an original query term under consideration and one alteration to the query term. A set of features will be used, which will be defined later in this section. 5.1 Linear Regression Model The goal of the regression model is to predict the performance change when a query term is augmented with an alteration. There are several regression models, ranging from the simplest linear regression model to non-linear alternatives, such as a neural network (Duda et al., 2001), a Regression SVM (Bishop, 2006). For simplicity, we use linear regression model here. We denote an instance in the feature space as X, and the weights of features are denoted as W. Then the linear regression model is defined as: f(X)=WTX (3) where WT is the transpose of W. However, we will have a technical problem if we set the target value to the performance change directly: The range of controlling control controlled controller acidify acidic rain rains raining 151 values of f(X) is ) , ( +∞ −∞ , while the range of performance change is [-1,1]. The two value ranges do not match. This inconsistency may result in severe problems when the scales of feature values vary dramatically (Duda et al., 2001). To solve this problem, we do a simple transformation on the performance change. 
Let the change be y ∈ [−1, 1]; then the transformed performance change is:

\varphi(y) = \log \frac{1 + y + \gamma}{1 - y + \gamma}, \quad y \in [-1, 1]   (4)

where γ is a very small positive real number (set to 1e-37 in the experiments), which acts as a smoothing factor. The value of φ(y) can be an arbitrary real number. φ(y) is a monotonic function defined on the range [−1, 1]. Moreover, the fixed point of φ(y) is 0, i.e., φ(y) = y when y = 0. This property is nice; it means that the expansion brings positive improvement if and only if f(X) > 0, which makes it easy to determine which alteration is better. We train the regression model by minimizing the mean square error. Suppose there are training instances X_1, X_2, ..., X_m, and the corresponding performance changes are y_i, i = 1, 2, ..., m. We calculate the mean square error with the following equation:

err(W) = \sum_{i=1}^{m} (W^T X_i - \varphi(y_i))^2   (5)

Then the optimal weight is defined as:

W^* = \arg\min_W err(W) = \arg\min_W \sum_{i=1}^{m} (W^T X_i - \varphi(y_i))^2   (6)

Because err(W) is a convex function of W, it has a global minimum and obtains its minimum when the gradient is zero (Bazaraa et al., 2006). Then we have:

\frac{\partial\, err(W^*)}{\partial W} = \sum_{i=1}^{m} X_i (W^{*T} X_i - \varphi(y_i)) = 0

So,

W^{*T} \sum_{i=1}^{m} X_i X_i^T = \sum_{i=1}^{m} \varphi(y_i) X_i^T

In fact, \sum_{i=1}^{m} X_i X_i^T is a square matrix, which we denote as XX^T. Then we have:

W^* = (XX^T)^{-1} \sum_{i=1}^{m} \varphi(y_i) X_i   (7)

The matrix XX^T is an l × l square matrix, where l is the number of features. In our experiments, we only use three features. Therefore the optimal weights can be calculated efficiently even when we have a large number of training instances.

5.2 Constructing Training Data

As a supervised learning method, the regression model is trained with a set of training data. We illustrate here the procedure to generate training instances with an example. Given a query "controlling acid rain", we first obtain the MAP of the original query. Then we augment the query with an alteration to one original term (one term at a time). We retain the MAP of the augmented query and compare it with the original query to obtain the performance change. For this query, we expand "controlling" by "control" and get an augmented query "(controlling OR control) acid rain". We can obtain the difference between the MAP of the augmented query and that of the original query. By doing this, we can generate a series of training instances consisting of the original query string, the original query term under consideration, its alteration and the performance change, for example: <controlling acid rain, controlling, control, 0.05>. Note that we use MAP to measure performance, but we could well use other metrics such as NDCG (Peng et al., 2007) or P@N (precision at top-N documents).

5.3 Features Used for Regression Model

Three features are used. The first feature reflects to what degree an alteration is coherent with the other terms. For example, for the query "controlling acid rain", the coherence of the alteration "acidic" is measured by the logarithm of its co-occurrence with the other query terms within a predefined window (90 words) in the corpus. That is: log(count(controlling…acidic…rain | window) + 0.5), where "…" means there may be some words between two query terms. Word order is ignored. The second feature is an extension to point-wise mutual information (Rijsbergen, 1979), defined as follows:
\log \frac{P(controlling \ldots acidic \ldots rain \mid window)}{P(controlling)\, P(acidic)\, P(rain)}

where P(controlling…acidic…rain | window) is the co-occurrence probability of the trigram containing acidic within a predefined window (50 words), and P(controlling), P(acidic) and P(rain) are the probabilities of the three words in the collection. The three words are defined as: the term under consideration, the first term to the left of that term, and the first term to the right. If a query contains fewer than 3 terms or the term under consideration is the beginning/ending term in the query, we set the probability of the missing term/terms to be 1. Therefore, it becomes point-wise mutual information when the query contains only two terms. In fact, this feature is supplemental to the first feature. When the query is very long, the first feature always obtains a value of log(0.5), so it does not have any discriminative ability. On the other hand, the second feature helps because it can capture some co-occurrence information no matter how long the query is. The last feature is the bias, whose value is always set to be 1.0. The regression model is trained in a leave-one-out cross-validation manner on three collections; each of them is used in turn as a test collection while the two others are used for training. For each incoming query, the regression model predicts the expected performance change when one alteration is used. For each query term, we only select the alteration with the largest positive performance change. If none of its alterations produces a positive performance change, we do not expand the query term. This selection is therefore more restrictive than the Bigram Expansion model. Nevertheless, our experiments show that it improves retrieval effectiveness further.

6 Experiments

6.1 Experiment Settings

In this section, our aim is to evaluate the two context-sensitive word alteration selection methods. The ideal evaluation corpus would be composed of Web data. Unfortunately, such data are not publicly available and the results also could not be compared with other published results. Therefore, we use two TREC collections. The first one is the ad-hoc retrieval test collection used for TREC6&7&8. This collection is relatively small and homogeneous. The second one is the Gov2 data. It is obtained by crawling the entire .gov domain and has been used for three TREC Terabyte tracks (TREC2004-2006). Table 1 shows some statistics of the two collections.

Table 1: Overview of Test Collections
Name         Description                         Size (GB)   #Doc         Query
TREC6&7&8    TREC disk4&5, newspapers            1.7         500,447      301-450
Gov2         2004 crawl of entire .gov domain    427         25,205,179   701-850

For each collection, we use 150 queries. Since the Regression model needs some data for training, we divided the queries into three parts, each containing 50 queries. We then use leave-one-out cross-validation. The evaluation metrics shown below are the average values over the three-fold cross-validation. Because Web queries are usually very short, we use only the title field of each query. To correspond to Web search practice, both documents and queries are not stemmed. We do not filter the stop words either. Two main metrics are used: the Mean Average Precision (MAP) for the top 1000 documents to measure retrieval effectiveness, and the number of terms in the query to reflect query traffic. In addition, we also provide precision for the top 30 documents (P@30) to show the impact on top ranked documents.
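Before turning to the results, here is a compact sketch of Section 5 as a whole: fitting the weights in closed form as in Equation (7) and applying the per-term selection rule. It uses NumPy and hypothetical feature and alteration helpers; it is an illustration under those assumptions, not the exact pipeline used in the experiments.

```python
import numpy as np

def phi(y, gamma=1e-37):
    """Transformed performance change, Equation (4)."""
    return np.log((1.0 + y + gamma) / (1.0 - y + gamma))

def fit_weights(X, y):
    """Closed-form solution of Equation (7): W* = (X X^T)^{-1} sum_i phi(y_i) X_i.

    X -- m x l matrix of feature vectors (one row per training instance)
    y -- length-m vector of performance changes in [-1, 1]
    """
    XXt = X.T @ X                      # l x l matrix, i.e. the sum of outer products
    rhs = X.T @ phi(y)                 # sum_i phi(y_i) X_i
    return np.linalg.solve(XXt, rhs)

def select_alterations(query_terms, alterations, features, W):
    """Keep, for each query term, only the alteration with the largest positive
    predicted (transformed) performance change; otherwise do not expand the term."""
    selected = {}
    for q in query_terms:
        best, best_score = None, 0.0
        for t in alterations.get(q, []):
            score = float(features(q, t) @ W)
            if score > best_score:
                best, best_score = t, score
        if best is not None:
            selected[q] = best
    return selected
```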
We also conducted t-tests to determine whether the improvements are statistically significant. The Indri 2.5 search engine (Strohman et al., 2004) is used as our basic retrieval system. It provides a rich query language allowing disjunctive combinations of words in queries.

6.2 Experimental Results

The first baseline method we compare with only uses the original query, and is named Original. In addition to this, we also compare with the following methods:
• Naïve Exp: The naïve expansion model expands each query term with all terms in the vocabulary sharing the same root with it. This model is equivalent to the traditional stemming method.
• UMASS: This is the result reported in (Metzler et al., 2006) using Porter stemming for both document and query terms. This reflects a state-of-the-art result using Porter stemming.
• Similarity: We select the alterations (at most 5) with the highest similarity to the original term. This is the method described in Section 3.
The two methods we propose in this paper are the following:
• Bigram Exp: the alteration is chosen by the Bigram Expansion model.
• Regression: the alteration is chosen by the Regression model.

Table 2: Results of Query 701-750 over Gov2 Data
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.4701 | 158   | 0.2440 | ----
UMASS      | ----   | ----  | 0.2666 | 9.26
Naïve Exp  | 0.4714 | 1345  | 0.2653 | 8.73
Similarity | 0.4900 | 303   | 0.2689 | 10.20*
Bigram Exp | 0.5007 | 303   | 0.2751 | 12.75**
Regression | 0.5054 | 237   | 0.2773 | 13.65**

Table 3: Results of Query 751-800 over Gov2 Data
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.4907 | 158   | 0.2738 | ----
UMASS      | ----   | ----  | 0.3251 | 18.73
Naïve Exp  | 0.5213 | 1167  | 0.3224 | 17.75**
Similarity | 0.5140 | 290   | 0.3043 | 11.14**
Bigram Exp | 0.5153 | 290   | 0.3107 | 13.47**
Regression | 0.5140 | 256   | 0.3144 | 14.82**

Table 4: Results of Query 801-850 over Gov2 Data
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.4710 | 154   | 0.2887 | ----
UMASS      | ----   | ----  | 0.2996 | 3.78
Naïve Exp  | 0.4633 | 1225  | 0.2999 | 3.87
Similarity | 0.4710 | 288   | 0.2976 | 3.08
Bigram Exp | 0.4730 | 288   | 0.3137 | 8.66**
Regression | 0.4748 | 237   | 0.3118 | 8.00*

Table 5: Results of Query 301-350 over TREC6&7&8
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.2673 | 137   | 0.1669 | ----
Naïve Exp  | 0.3053 | 783   | 0.2146 | 28.57**
Similarity | 0.3007 | 255   | 0.2020 | 21.03**
Bigram Exp | 0.3033 | 255   | 0.2091 | 25.28**
Regression | 0.3113 | 224   | 0.2161 | 29.48**

Table 6: Results of Query 351-400 over TREC6&7&8
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.2820 | 126   | 0.1639 | ----
Naïve Exp  | 0.2787 | 736   | 0.1665 | 1.59
Similarity | 0.2867 | 244   | 0.1650 | 0.67
Bigram Exp | 0.2800 | 244   | 0.1641 | 0.12
Regression | 0.2867 | 214   | 0.1664 | 1.53

Table 7: Results of Query 401-450 over TREC6&7&8
Model      | P@30   | #term | MAP    | Imp.
Original   | 0.2833 | 124   | 0.1759 | ----
Naïve Exp  | 0.3167 | 685   | 0.2138 | 21.55**
Similarity | 0.3080 | 240   | 0.2066 | 17.45**
Bigram Exp | 0.3133 | 240   | 0.2080 | 18.25**
Regression | 0.3220 | 187   | 0.2144 | 21.88**

Tables 2, 3, and 4 show the results on the Gov2 data, while Tables 5, 6, and 7 show the results on the TREC6&7&8 collection. In the tables, the * mark indicates that the improvement over the Original model is statistically significant with p-value < 0.05, and ** means p-value < 0.01. From the tables, we see that both word stemming (UMASS) and expansion with word alterations can improve MAP for all six tasks. In most cases (except in Tables 4 and 6), they also improve the precision of top-ranked documents. This shows the usefulness of word stemming or word alteration expansion for IR. We can make several additional observations: 1) Stemming vs. Expansion. UMASS uses document and query stemming while Naïve Exp uses expansion by word alteration.
We stated that both approaches are equivalent. The equivalence is confirmed by our experiment results: for all Gov2 collections, these approaches perform equivalently. 2). The Similarity model performs very well. Compared with the Naïve Expansion model, it produces quite similar retrieval effectiveness, while the query traffic is dramatically reduced. This approach is similar to the work of Xu and Croft (1998), and can be considered as another state-ofthe-art result. 3). In comparison, the Bigram Expansion model performs better than the Similarity model. This shows that it is useful to consider query context in selecting word alterations. 4). The Regression model performs the best of all the models. Compared with the Original query, it adds fewer than 2 alterations for each query on average (since each group has 50 queries); nevertheless we obtained improvements on all the six collections. Moreover, the improvements on five collections are statistically significant. It also performs slightly better than the Similarity and Bigram Expansion methods, but with fewer alterations. This shows that the supervised learning approach, if used in the correct way, is superior to an unsupervised approach. Another advantage over the two other models is that the Regression model can reduce the number of alterations further. Because the Regression model selects alterations according to their expected improvement, the improvement of the alterations to one query term can be compared with that of the alterations to other query terms. Therefore, we can select at most one optimal alteration for the whole query. However, with the Similarity or Bigram Expansion models, the selection value, either similarity or query likelihood, cannot be 154 compared across the query terms. As a consequence, more alterations need to be selected, leading to heavier query traffic. 7 Conclusion Traditional IR approaches stem terms in both documents and queries. This approach is appropriate for general purpose IR, but is ill-suited for the specific retrieval needs in Web search such as quoted queries or queries with a specific word form that should not be stemmed. The current practice in Web search is not to stem words in index, but rather to perform a form of expansion using word alteration. However, a naïve expansion will result in many alterations and this will increase the query traffic. This paper has proposed two alternative methods to select precise alterations by considering the query context. We seek to produce similar or better improvements in retrieval effectiveness, while limiting the query traffic. In the first method proposed – the Bigram Expansion model, query context is modeled by a bigram language model. For each query term, the selected alteration is the one which maximizes the query likelihood. In the second method - Regression model, we fit a regression model to calculate the expected improvement when the original query is expanded by an alteration. Only the alteration that is expected to yield the largest improvement to retrieval effectiveness is added. The proposed methods were evaluated on two TREC benchmarks: the ad-hoc retrieval test collection for TREC6&7&8 and the Gov2 data. Our experimental results show that both proposed methods perform significantly better than the original queries. Compared with traditional word stemming or the naïve expansion approach, our methods can not only improve retrieval effectiveness, but also greatly reduce the query traffic. 
This work shows that query expansion with word alterations is a reasonable alternative to word stemming. It is possible to limit the query traffic by a query-dependent selection of word alterations. Our work shows that both unsupervised and supervised learning can be used to perform alteration selection. Our methods can be further improved in several aspects. For example, we could integrate other features in the regression model, and use other nonlinear regression models, such as Bayesian regression models (e.g. Gaussian Process regression) (Rasmussen and Williams, 2006). The additional advantage of these models is that we can not only obtain the expected improvement in retrieval effectiveness for an alteration, but also the probability of obtaining an improvement (i.e. the robustness of the alteration). Finally, it would be interesting to test the approaches using real Web data. References Anick, P. (2003) Using Terminological Feedback for Web Search Refinement: a Log-based Study. In SIGIR, pp. 88-95. Bazaraa, M., Sherali, H., and Shett, C. (2006). Nonlinear Programming, Theory and Algorithms. John Wiley & Sons Inc. Bishop, C. (2006). Pattern Recognition and Machine Learning. Springer. Duda, R., Hart, P., and Stork, D. (2001). Pattern Classification, John Wiley & Sons, Inc. Goodman, J. (2001). A Bit of Progress in Language Modeling. Technical report. Jones, R., Rey, B., Madani, O., and Greiner, W. (2006). Generating Query Substitutions. In WWW2006, pp. 387-396 Kraaij, W. and Pohlmann, R. (1996) Viewing Stemming as Recall Enhancement. Proc. SIGIR, pp. 40-48. Krovetz, R. (1993). Viewing Morphology as an Inference Process. Proc. ACM SIGIR, pp. 191-202. Lin, D. (1998). Automatic Retrieval and Clustering of Similar Words. In COLING-ACL, pp. 768-774. Metzler, D., Strohman, T. and Croft, B. (2006). Indri TREC Notebook 2006: Lessons learned from Three Terabyte Tracks. In the Proceedings of TREC 2006. Peng, F., Ahmed, N., Li, X., and Lu, Y. (2007). Context Sensitive Stemming for Web Search. Proc. ACM SIGIR, pp. 639-636 . Porter, M. (1980) An Algorithm for Suffix Stripping. Program, 14(3): 130-137. Rabiner, L. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Proceedings of IEEE Vol. 77(2), pp. 257-286. Rijsbergen, V. (1979). Information Retrieval. Butterworths, second version. Strohman, T., Metzler, D. and Turtle, H., and Croft, B. (2004). Indri: A Language Model-based Search Engine for Complex Queries. In Proceedings of the International conference on Intelligence Analysis. Xu, J. and Croft, B. (1996). Query Expansion Using Local and Global Document Analysis. Proc. ACM SIGIR, pp. 4-11. Xu, J. and Croft, B. (1998). Corpus-based Stemming Using Co-occurrence of Word Variants. ACM TOIS, 16(1): 61-81. 155
2008
18
Proceedings of ACL-08: HLT, pages 156–164, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Searching Questions by Identifying Question Topic and Question Focus Huizhong Duan1, Yunbo Cao1,2, Chin-Yew Lin2 and Yong Yu1 1Shanghai Jiao Tong University, Shanghai, China, 200240 {summer, yyu}@apex.sjtu.edu.cn 2Microsoft Research Asia, Beijing, China, 100080 {yunbo.cao, cyl}@microsoft.com Abstract This paper is concerned with the problem of question search. In question search, given a question as query, we are to return questions semantically equivalent or close to the queried question. In this paper, we propose to conduct question search by identifying question topic and question focus. More specifically, we first summarize questions in a data structure consisting of question topic and question focus. Then we model question topic and question focus in a language modeling framework for search. We also propose to use the MDLbased tree cut model for identifying question topic and question focus automatically. Experimental results indicate that our approach of identifying question topic and question focus for search significantly outperforms the baseline methods such as Vector Space Model (VSM) and Language Model for Information Retrieval (LMIR). 1 Introduction Over the past few years, online services have been building up very large archives of questions and their answers, for example, traditional FAQ services and emerging community-based Q&A services (e.g., Yahoo! Answers1, Live QnA2, and Baidu Zhidao3). To make use of the large archives of questions and their answers, it is critical to have functionality facilitating users to search previous answers. Typically, such functionality is achieved by first retrieving questions expected to have the same answers as a queried question and then returning the related answers to users. For example, given question Q1 in Table 1, question Q2 can be re 1 http://answers.yahoo.com 2 http://qna.live.com 3 http://zhidao.baidu.com turned and its answer will then be used to answer Q1 because the answer of Q2 is expected to partially satisfy the queried question Q1. This is what we called question search. In question search, returned questions are semantically equivalent or close to the queried question. Query: Q1: Any cool clubs in Berlin or Hamburg? Expected: Q2: What are the best/most fun clubs in Berlin? Not Expected: Q3: Any nice hotels in Berlin or Hamburg? Q4: How long does it take to Hamburg from Berlin? Q5: Cheap hotels in Berlin? Table 1. An Example on Question Search Many methods have been investigated for tackling the problem of question search. For example, Jeon et al. have compared the uses of four different retrieval methods, i.e. vector space model, Okapi, language model, and translation-based model, within the setting of question search (Jeon et al., 2005b). However, all the existing methods treat questions just as plain texts (without considering question structure). For example, obviously, Q2 can be considered semantically closer to Q1 than Q3-Q5 although all questions (Q2-Q5) are related to Q1. The existing methods are not able to tell the difference between question Q2 and questions Q3, Q4, and Q5 in terms of their relevance to question Q1. We will clarify this in the following. In this paper, we propose to conduct question search by identifying question topic and question focus. 
The question topic usually represents the major context/constraint of a question (e.g., Berlin, Hamburg), which characterizes users' interests. In contrast, the question focus (e.g., cool club, cheap hotel) presents certain aspects (or descriptive features) of the question topic. For the aim of retrieving semantically equivalent (or close) questions, we need to ensure that returned questions are related to the queried question with respect to both question topic and question focus. For example, in Table 1, Q2 preserves certain useful information of Q1 in the aspects of both question topic (Berlin) and question focus (fun club), although it loses some useful information in question topic (Hamburg). In contrast, questions Q3-Q5 are not related to Q1 in question focus (although being related in question topic, e.g. Hamburg, Berlin), which makes them unsuitable as results of question search. We also propose to use the MDL-based (Minimum Description Length) tree cut model for automatically identifying question topic and question focus. Given a question as query, a structure called question tree is constructed over the question collection including the queried question and all the related questions, and then the MDL principle is applied to find a cut of the question tree specifying the question topic and the question focus of each question. In summary, we summarize questions in a data structure consisting of question topic and question focus. On the basis of this, we then propose to model question topic and question focus in a language modeling framework for search. To the best of our knowledge, none of the existing studies addressed question search by modeling both question topic and question focus. We empirically conduct question search with questions about 'travel' and 'computers & internet'. Both kinds of questions are from Yahoo! Answers. Experimental results show that our approach can significantly improve traditional methods (e.g. VSM, LMIR) in retrieving relevant questions. The rest of the paper is organized as follows. In Section 2, we present our approach to question search, which is based on identifying question topic and question focus. In Section 3, we empirically verify the effectiveness of our approach to question search. In Section 4, we employ a translation-based retrieval framework to extend our approach and fix the issue called the 'lexical chasm'. Section 5 surveys the related work. Section 6 concludes the paper by summarizing our work and discussing future directions.

2 Our Approach to Question Search

Our approach to question search consists of two steps: (a) summarize questions in a data structure consisting of question topic and question focus; (b) model question topic and question focus in a language modeling framework for search. In step (a), we employ the MDL-based (Minimum Description Length) tree cut model for automatically identifying question topic and question focus. Thus, this section will begin with a brief review of the MDL-based tree cut model and then follow that with an explanation of steps (a) and (b).

2.1 The MDL-based tree cut model

Formally, a tree cut model M (Li and Abe, 1998) can be represented by a pair consisting of a tree cut Γ and a probability parameter vector θ of the same length, that is,

M = (\Gamma, \theta)    (1)

where Γ and θ are

\Gamma = [C_1, C_2, \ldots, C_k], \quad \theta = [p(C_1), p(C_2), \ldots, p(C_k)]    (2)

where C_1, C_2, ..., C_k are classes determined by a cut in the tree and \sum_{i=1}^{k} p(C_i) = 1.
A 'cut' in a tree is any set of nodes in the tree that defines a partition of all the nodes, viewing each node as representing the set of its child nodes as well as itself. For example, the cut indicated by the dashed line in Figure 1 corresponds to three classes: [n_0, n_11], [n_13, n_24], and [n_12, n_21, n_22, n_23].

[Figure 1. An Example of the Tree Cut Model]

A straightforward way of determining a cut of a tree is to collapse nodes of low frequency into their parent nodes. However, this method is rather heuristic, as it relies on a manually tuned frequency threshold. In our work, we instead use a theoretically well-motivated method based on the MDL principle. MDL is a principle of data compression and statistical estimation from information theory (Rissanen, 1978). Given a sample S and a tree cut Γ, we employ MLE to estimate the parameters of the corresponding tree cut model M̂ = (Γ, θ̂), where θ̂ denotes the estimated parameters. According to the MDL principle, the description length (Li and Abe, 1998) L(M̂, S) of the tree cut model M̂ and the sample S is the sum of the model description length L(Γ), the parameter description length L(θ̂|Γ), and the data description length L(S|Γ, θ̂), i.e.

L(\hat{M}, S) = L(\Gamma) + L(\hat{\theta} \mid \Gamma) + L(S \mid \Gamma, \hat{\theta})    (3)

The model description length L(Γ) is a subjective quantity which depends on the coding scheme employed. Here, we simply assume that each tree cut model is equally likely a priori. The parameter description length L(θ̂|Γ) is calculated as

L(\hat{\theta} \mid \Gamma) = \frac{k}{2} \times \log |S|    (4)

where |S| denotes the sample size and k denotes the number of free parameters in the tree cut model, i.e. k equals the number of nodes in Γ minus one. The data description length L(S|Γ, θ̂) is calculated as

L(S \mid \Gamma, \hat{\theta}) = -\sum_{n \in S} \log \hat{p}(n)    (5)

where

\hat{p}(n) = \frac{1}{|C|} \times \frac{f(C)}{|S|}    (6)

where C is the class that n belongs to and f(C) denotes the total frequency of instances in class C in the sample S. With the description length defined as in (3), we wish to select the tree cut model with the minimum description length and output it as the result. Note that the model description length L(Γ) can be ignored because it is the same for all tree cut models. The MDL-based tree cut model was originally introduced for handling the problem of generalizing case frames using a thesaurus (Li and Abe, 1998). To the best of our knowledge, no existing work utilizes it for question search. This may be partially because of the unavailability of resources (e.g., a thesaurus) that can be used for embodying questions in a tree structure. In Section 2.2, we will introduce a tree structure called the question tree for representing questions.

2.2 Identifying question topic and question focus

In principle, it is possible to identify the question topic and question focus of a question by only parsing the question itself (for example, utilizing a syntactic parser). However, such a method requires accurate parsing results, which cannot be obtained from the noisy data from online services. Instead, we propose using the MDL-based tree cut model, which identifies question topics and question foci for a set of questions together. More specifically, the method consists of two phases: 1) Constructing a question tree: represent the queried question and all the related questions in a tree structure called a question tree; 2) Determining a tree cut: apply the MDL principle to the question tree, which yields the cut specifying question topic and question focus.
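Before turning to the construction of the question tree, the following is a minimal sketch of the description-length computation in equations (3)-(6); it is not the authors' code. It assumes candidate cuts have already been enumerated (the paper searches cuts of the question tree), represents a cut as a list of node classes, and uses base-2 logarithms, which the paper does not specify.

```python
import math

def description_length(cut, node_freq):
    """Description length of a tree cut, equations (4)-(6); L(Gamma) is ignored.

    cut:       a list of classes, each class being the list of tree nodes it covers.
    node_freq: dict mapping each node to its observed frequency in the sample S.
    """
    sample_size = sum(node_freq.values())
    k = len(cut) - 1                                   # number of free parameters
    param_len = (k / 2.0) * math.log(sample_size, 2)   # L(theta | Gamma), eq. (4)

    data_len = 0.0                                     # L(S | Gamma, theta), eq. (5)
    for cls in cut:
        f_c = sum(node_freq.get(n, 0) for n in cls)    # total frequency of the class
        if f_c == 0:
            continue                                   # no observed instances, no cost
        p_n = (1.0 / len(cls)) * (f_c / sample_size)   # eq. (6): uniform within the class
        data_len += -f_c * math.log(p_n, 2)            # each instance costs -log p(n)
    return param_len + data_len

def best_cut(candidate_cuts, node_freq):
    """Select the cut with minimum description length among the candidates."""
    return min(candidate_cuts, key=lambda cut: description_length(cut, node_freq))
```

Because L(Γ) is assumed constant across models, only the parameter and data description lengths need to be compared when choosing among cuts.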
2.2.1 Constructing a question tree

In the following, with a series of definitions, we describe how a question tree is constructed from a collection of questions. Let us begin with the representation of a question. A straightforward method is to represent a question as a bag of words (possibly ignoring stop words). However, this method cannot discern 'the hotels in Paris' from 'the Paris hotel'. Thus, we turn to linguistic units carrying more semantic information. Specifically, we make use of two kinds of units: BaseNP (Base Noun Phrase) and WH-ngram. A BaseNP is defined as a simple and non-recursive noun phrase (Cao and Li, 2002). A WH-ngram is an n-gram beginning with a WH-word. The WH-words that we consider include 'when', 'what', 'where', 'which', and 'how'. We refer to these two kinds of units as 'topic terms'. With topic terms, we represent a question as a topic chain and a set of questions as a question tree.

Definition 1 (Topic Profile) The topic profile θ_t of a topic term t in a categorized question collection is a probability distribution over categories {p(c|t)}_{c∈C}, where C is a set of categories.

p(c \mid t) = \frac{count(c, t)}{\sum_{c \in C} count(c, t)}    (7)

where count(c, t) is the frequency of the topic term t within category c. Clearly, we have \sum_{c \in C} p(c \mid t) = 1. By 'categorized questions', we refer to questions that are organized in a taxonomy tree. For example, at Yahoo! Answers, the question "How do I install my wireless router" is categorized as "Computers & Internet → Computer Networking". We can also find categorized questions at other online services, such as FAQ sites.

Definition 2 (Specificity) The specificity s(t) of a topic term t is the inverse of the entropy of the topic profile θ_t. More specifically,

s(t) = \frac{1}{-\sum_{c \in C} p(c \mid t) \log p(c \mid t) + \varepsilon}    (8)

where ε is a smoothing parameter used to cope with topic terms whose entropy is 0. In our experiments, the value of ε was set to 0.001. We use the term specificity to denote how specific a topic term is in characterizing the information needs of users who post questions. A topic term of high specificity (e.g., Hamburg, Berlin) usually specifies the question topic corresponding to the main context of a question, because it tends to occur only in a few categories. A topic term of low specificity is usually used to represent the question focus (e.g., cool club, where to see), which is relatively volatile and might occur in many categories.

Definition 3 (Topic Chain) A topic chain q^c of a question q is a sequence of ordered topic terms t_1 → t_2 → ⋯ → t_m such that 1) t_i is included in q, 1 ≤ i ≤ m; 2) s(t_k) > s(t_l), 1 ≤ k < l ≤ m.

For example, the topic chain of "any cool clubs in Berlin or Hamburg?" is "Hamburg → Berlin → cool club", because the specificities for 'Hamburg', 'Berlin', and 'cool club' are 0.99, 0.62, and 0.36, respectively.

Definition 4 (Question Tree) A question tree of a question set Q = {q_i}_{i=1}^N is a prefix tree built over the topic chains Q^c = {q_i^c}_{i=1}^N of the question set Q.

Clearly, if a question set contains only one question, its question tree will be exactly the same as the topic chain of that question. Note that the root node of a question tree is associated with the empty string, as the definition of a prefix tree requires (Fredkin, 1960).
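A minimal sketch of Definitions 1-3 follows; it is not the authors' implementation. It assumes topic terms have already been extracted (the BaseNP/WH-ngram chunking is not shown), uses natural logarithms (the paper leaves the base unspecified), and the example specificity values are simply those quoted in the text.

```python
import math

EPSILON = 0.001  # smoothing parameter from equation (8)

def topic_profile(category_counts):
    """Equation (7): distribution over categories for one topic term.
    category_counts: dict category -> count(c, t)."""
    total = sum(category_counts.values())
    return {c: n / total for c, n in category_counts.items() if n > 0}

def specificity(category_counts):
    """Equation (8): inverse entropy of the topic profile, smoothed by epsilon."""
    profile = topic_profile(category_counts)
    entropy = -sum(p * math.log(p) for p in profile.values())
    return 1.0 / (entropy + EPSILON)

def topic_chain(topic_terms, spec):
    """Definition 3: order a question's topic terms by decreasing specificity.
    spec: dict topic term -> specificity value."""
    return sorted(topic_terms, key=lambda t: spec[t], reverse=True)

# Illustrative values matching the example in the text.
spec = {"Hamburg": 0.99, "Berlin": 0.62, "cool club": 0.36}
chain = topic_chain(["cool club", "Berlin", "Hamburg"], spec)
# chain == ["Hamburg", "Berlin", "cool club"]
```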
Figure 2. An Example of a Question Tree

Given the topic chains with respect to the questions in Table 1 as follows,
• Q1: Hamburg → Berlin → cool club
• Q2: Berlin → fun club
• Q3: Hamburg → Berlin → nice hotel
• Q4: Hamburg → Berlin → how long does it take
• Q5: Berlin → cheap hotel
we can have the question tree presented in Figure 2.

2.2.2 Determining the tree cut

According to the definition of a topic chain, the topic terms in a topic chain of a question are ordered by their specificity values. Thus, a cut of a topic chain naturally separates the topic terms of low specificity (representing question focus) from the topic terms of high specificity (representing question topic). Given a topic chain of a question consisting of m topic terms, there exist (m−1) possible cuts. The question is: which cut is the best? We propose using the MDL-based tree cut model for the search of the best cut in a topic chain. Instead of dealing with each topic chain individually, the proposed method handles a set of questions together. Specifically, given a queried question, we construct a question tree consisting of both the queried question and the related questions, and then apply the MDL principle to select the best cut of the question tree. For example, in Figure 2, we hope to get the cut indicated by the dashed line. The topic terms on the left of the dashed line represent the question topic and those on the right of the dashed line represent the question focus. Note that the tree cut yields a cut for each individual topic chain (each path) within the question tree accordingly. A cut of a topic chain q^c of a question q separates the topic chain in two parts: HEAD and TAIL. HEAD (denoted as H(q^c)) is the subsequence of the original topic chain q^c before the cut. TAIL (denoted as T(q^c)) is the subsequence of q^c after the cut. Thus, q^c = H(q^c) → T(q^c). For instance, given the tree cut specified in Figure 2, for the topic chain of Q1 "Hamburg → Berlin → cool club", the HEAD and TAIL are "Hamburg → Berlin" and "cool club" respectively.

2.3 Modeling question topic and question focus for search

We employ the framework of language modeling (for information retrieval) to develop our approach to question search. In the language modeling approach to information retrieval, the relevance of a targeted question q̃ to a queried question q is given by the probability p(q|q̃) of generating the queried question q from the language model formed by the targeted question q̃. The targeted question q̃ is from a collection C of questions. Following the framework, we propose a mixture model for modeling question structure (namely, question topic and question focus) within the process of searching questions:

p(q \mid \tilde{q}) = \lambda \cdot p(H(q) \mid H(\tilde{q})) + (1 - \lambda) \cdot p(T(q) \mid T(\tilde{q}))    (9)

In the mixture model, it is assumed that the process of generating question topics and the process of generating question foci are independent from each other. In traditional language modeling, a single multinomial model p(t|q̃) over terms is estimated for each targeted question q̃. In our case, two multinomial models p(t|H(q̃)) and p(t|T(q̃)) need to be estimated for each targeted question q̃.
If unigram document language models are used, equation (9) can then be re-written as

p(q \mid \tilde{q}) = \lambda \cdot \prod_{t \in H(q)} p(t \mid H(\tilde{q}))^{count(q,t)} + (1 - \lambda) \cdot \prod_{t \in T(q)} p(t \mid T(\tilde{q}))^{count(q,t)}    (10)

where count(q, t) is the frequency of t within q. To avoid zero probabilities and estimate more accurate language models, the HEAD and TAIL of questions are smoothed using the background collection,

p(t \mid H(\tilde{q})) = \alpha \cdot \hat{p}(t \mid H(\tilde{q})) + (1 - \alpha) \cdot \hat{p}(t \mid C)    (11)

p(t \mid T(\tilde{q})) = \beta \cdot \hat{p}(t \mid T(\tilde{q})) + (1 - \beta) \cdot \hat{p}(t \mid C)    (12)

where p̂(t|H(q̃)), p̂(t|T(q̃)), and p̂(t|C) are the MLE estimators with respect to the HEAD of q̃, the TAIL of q̃, and the collection C.

3 Experimental Results

We have conducted experiments to verify the effectiveness of our approach to question search. Particularly, we have investigated the use of identifying question topic and question focus for search.

3.1 Dataset and evaluation measures

We made use of the questions obtained from Yahoo! Answers for the evaluation. More specifically, we utilized the resolved questions under two of the top-level categories at Yahoo! Answers, namely 'travel' and 'computers & internet'. The questions include 314,616 items from the 'travel' category and 210,785 items from the 'computers & internet' category. Each resolved question consists of three fields: 'title', 'description', and 'answers'. For search we use only the 'title' field. It is assumed that the titles of the questions already provide enough semantic information for understanding users' information needs. We developed two test sets, one for the category 'travel' denoted as 'TRL-TST', and the other for 'computers & internet' denoted as 'CI-TST'. In order to create the test sets, we randomly selected 200 questions for each category. To obtain the ground-truth of question search, we employed the Vector Space Model (VSM) (Salton et al., 1975) to retrieve the top 20 results and obtained manual judgments. The top 20 results do not include the queried question itself. Given a returned result by VSM, an assessor is asked to label it with 'relevant' or 'irrelevant'. If a returned result is considered semantically equivalent (or close) to the queried question, the assessor will label it as 'relevant'; otherwise, the assessor will label it as 'irrelevant'. Two assessors were involved in the manual judgments. Each of them was asked to label 100 questions from 'TRL-TST' and 100 from 'CI-TST'. In the process of manually judging questions, the assessors were presented only the titles of the questions (for both the queried questions and the returned questions). Table 2 provides the statistics on the final test set.

Table 2: Statistics on the Test Data
        | # Queries | # Returned | # Relevant
TRL-TST | 200       | 4,000      | 256
CI-TST  | 200       | 4,000      | 510

We utilized two baseline methods for demonstrating the effectiveness of our approach, the VSM and the LMIR (language modeling method for information retrieval) (Ponte and Croft, 1998). We made use of three measures for evaluating the results of question search methods. They are MAP, R-precision, and MRR.

3.2 Searching questions about 'travel'

In the experiments, we made use of the questions about 'travel' to test the performance of our approach to question search. More specifically, we used the 200 queries in the test set 'TRL-TST' to search for 'relevant' questions from the 314,616 questions categorized as 'travel'. Note that only the questions occurring in the test set can be evaluated. We made use of the taxonomy of questions provided at Yahoo!
Answers for the calculation of specificity of topic terms. The taxonomy is organized in a tree structure. In the following experiments, we only utilized as the categories of questions the leaf nodes of the taxonomy tree (regarding 'travel'), which include 355 categories. We randomly divided the test queries into five even subsets and conducted 5-fold cross-validation experiments. In each trial, we tuned the parameters λ, α, and β in equations (10)-(12) with four of the five subsets and then applied them to the remaining subset. The experimental results reported below are those averaged over the five trials.

Table 3: Searching Questions about 'Travel'
Methods  | MAP   | R-Precision | MRR
VSM      | 0.198 | 0.138       | 0.228
LMIR     | 0.203 | 0.154       | 0.248
LMIR-CUT | 0.236 | 0.192       | 0.279

In Table 3, our approach, denoted by LMIR-CUT, is implemented exactly as equation (10). Neither VSM nor LMIR uses the data structure composed of question topic and question focus. From Table 3, we see that our approach outperforms the baseline approaches VSM and LMIR in terms of all the measures. We conducted a significance test (t-test) on the improvements of our approach over VSM and LMIR. The result indicates that the improvements are statistically significant (p-value < 0.05) in terms of all the evaluation measures.

Figure 3. Balancing between Question Topic and Question Focus

In equation (9), we use the parameter λ to balance the contribution of question topic and the contribution of question focus. Figure 3 illustrates how influential the value of λ is on the performance of question search in terms of MRR. The result was obtained with the 200 queries directly, instead of 5-fold cross-validation. From Figure 3, we see that our approach performs best when λ is around 0.7. That is, our approach tends to emphasize question topic more than question focus. We also examined the correctness of the question topics and question foci of the 200 queried questions. The question topics and question foci were obtained with the MDL-based tree cut model automatically. In the result, 69 questions have incorrect question topics or question foci. Further analysis shows that the errors come from two categories: (a) 59 questions have only the HEAD parts (that is, none of the topic terms fall within the TAIL part), and (b) 10 have incorrect orders of topic terms because the specificities of the topic terms were estimated inaccurately. For questions having only the HEAD parts, our approach (equation (9)) reduces to the traditional language modeling approach. Thus, even when the errors of category (a) occur, our approach still works no worse than the traditional language modeling approach. This also explains why our approach performs best when λ is around 0.7: the error category (a) pushes our model to emphasize question topic more.

Table 4: Search Results for "How cold does it get in winters in Alaska?"
VSM:
1. How cold does it usually get in Charlotte, NC during winters?
2. How long and cold are the winters in Rochester, NY?
3. How cold is it in Alaska?
LMIR:
1. How cold is it in Alaska?
2. How cold does it get really in Toronto in the winter?
3. How cold does the Mojave Desert get in the winter?
LMIR-CUT:
1. How cold is it in Alaska?
2. How cold is Alaska in March and outdoor activities?
3. How cold does it get in Nova Scotia in the winter?

Table 4 provides the TOP-3 search results which are given by VSM, LMIR, and LMIR-CUT (our approach) respectively. The questions in bold are labeled as 'relevant' in the evaluation set.
The queried question seeks the 'weather' information about 'Alaska'. Both VSM and LMIR rank certain 'irrelevant' questions higher than 'relevant' questions. The 'irrelevant' questions are not about 'Alaska' although they are about 'weather'. The reason is that neither VSM nor LMIR is aware that the query consists of the two aspects 'weather' (how cold, winter) and 'Alaska'. In contrast, our approach assures that both aspects are matched. Note that the HEAD part of the topic chain of the queried question given by our approach is "Alaska" and the TAIL part is "winter → how cold".

3.3 Searching questions about 'computers & internet'

In the experiments, we made use of the questions about 'computers & internet' to test the performance of our proposed approach to question search. More specifically, we used the 200 queries in the test set 'CI-TST' to search for 'relevant' questions from the 210,785 questions categorized as 'computers & internet'. For the calculation of specificity of topic terms, we utilized as the categories of questions the leaf nodes of the taxonomy tree regarding 'computers & internet', which include 23 categories. We conducted 5-fold cross-validation for the parameter tuning. The experimental results reported in Table 5 are averaged over the five trials.

Table 5: Searching Questions about 'Computers & Internet'
Methods  | MAP   | R-Precision | MRR
VSM      | 0.236 | 0.175       | 0.289
LMIR     | 0.248 | 0.191       | 0.304
LMIR-CUT | 0.279 | 0.230       | 0.341

Again, we see that our approach outperforms the baseline approaches VSM and LMIR in terms of all the measures. We conducted a significance test (t-test) on the improvements of our approach over VSM and LMIR. The result indicates that the improvements are statistically significant (p-value < 0.05) in terms of all the evaluation measures. We also conducted an experiment similar to that in Figure 3. Figure 4 provides the result. The trend is consistent with that in Figure 3. We also examined the correctness of the (automatically identified) question topics and question foci of the 200 queried questions. In the result, 65 questions have incorrect question topics or question foci. Among them, 47 fall into error category (a) and 18 into error category (b). The distribution of errors is also similar to that in Section 3.2, which also justifies the trend presented in Figure 4.

Figure 4. Balancing between Question Topic and Question Focus

4 Using Translation Probability

In the setting of question search, besides the topic that we address in the previous sections, another research topic is to fix the lexical chasm between questions. Sometimes, two questions that have the same meaning use very different wording. For example, the questions "where to stay in Hamburg?" and "the best hotel in Hamburg?" have almost the same meaning but are lexically different in question focus (where to stay vs. best hotel). This is the so-called 'lexical chasm'. Jeon and Croft (2007) proposed a mixture model for fixing the lexical chasm between questions. The model is a combination of the language modeling approach (for information retrieval) and the translation-based approach (for information retrieval). Our idea of modeling question structure for search can naturally extend to Jeon et al.'s model.
More specifically, by using translation probabilities, we can rewrite equations (11) and (12) as follows:

p(t \mid H(\tilde{q})) = \alpha_1 \cdot \hat{p}(t \mid H(\tilde{q})) + \alpha_2 \cdot \sum_{t' \in H(\tilde{q})} Tr(t \mid t') \cdot \hat{p}(t' \mid H(\tilde{q})) + (1 - \alpha_1 - \alpha_2) \cdot \hat{p}(t \mid C)    (13)

p(t \mid T(\tilde{q})) = \beta_1 \cdot \hat{p}(t \mid T(\tilde{q})) + \beta_2 \cdot \sum_{t' \in T(\tilde{q})} Tr(t \mid t') \cdot \hat{p}(t' \mid T(\tilde{q})) + (1 - \beta_1 - \beta_2) \cdot \hat{p}(t \mid C)    (14)

where Tr(t|t') denotes the probability that topic term t is the translation of t'. In our experiments, to estimate the probability Tr(t|t'), we used the collections of question titles and question descriptions as the parallel corpus and the IBM model 1 (Brown et al., 1993) as the alignment model. Usually, users reiterate or paraphrase their questions (already described in question titles) in question descriptions. We utilized the new model elaborated by equations (13) and (14) for searching questions about 'travel' and 'computers & internet'. The new model is denoted as 'SMT-CUT'. Table 6 provides the evaluation results. The evaluation was conducted with exactly the same setting as in Section 3. From Table 6, we see that the performance of our approach can be further boosted by using translation probability.

Table 6: Using Translation Probability
Data    | Methods  | MAP   | R-Precision | MRR
TRL-TST | LMIR-CUT | 0.236 | 0.192       | 0.279
TRL-TST | SMT-CUT  | 0.266 | 0.225       | 0.308
CI-TST  | LMIR-CUT | 0.279 | 0.230       | 0.341
CI-TST  | SMT-CUT  | 0.282 | 0.236       | 0.337

5 Related Work

The major focus of previous research efforts on question search is to tackle the lexical chasm problem between questions. Research on question search was first conducted using FAQ data. FAQ Finder (Burke et al., 1997) heuristically combines statistical similarities and semantic similarities between questions to rank FAQs. Conventional vector space models are used to calculate the statistical similarity, and WordNet (Fellbaum, 1998) is used to estimate the semantic similarity. Sneiders (2002) proposed template-based FAQ retrieval systems. Lai et al. (2002) proposed an approach to automatically mine FAQs from the Web. Jijkoun and Rijke (2005) used supervised learning methods to extend heuristic extraction of Q/A pairs from FAQ pages, and treated Q/A pair retrieval as a fielded search task. Harabagiu et al. (2005) used a Question Answer Database (known as QUAB) to support interactive question answering. They compared seven different similarity metrics for selecting related questions from QUAB and found that the concept-based metric performed best. Recently, the research on question search has been further extended to community-based Q&A data. For example, Jeon et al. (Jeon et al., 2005a; Jeon et al., 2005b) compared four different retrieval methods, i.e. vector space model, Okapi, language model (LM), and translation-based model, for automatically fixing the lexical chasm between questions in question search. They found that the translation-based model performed best. However, all the existing methods treat questions just as plain texts (without considering question structure). In this paper, we proposed to conduct question search by identifying question topic and question focus. To the best of our knowledge, none of the existing studies addressed question search by modeling both question topic and question focus. Question answering (e.g., Pasca and Harabagiu, 2001; Echihabi and Marcu, 2003; Voorhees, 2004; Metzler and Croft, 2005) relates to question search. Question answering automatically extracts short answers for a relatively limited class of question types from document collections.
In contrast to that, question search retrieves answers for an unlimited range of questions by focusing on finding semantically similar questions in an archive. 6 Conclusions and Future Work In this paper, we have proposed an approach to question search which models question topic and question focus in a language modeling framework. The contribution of this paper can be summarized in 4-fold: (1) A data structure consisting of question topic and question focus was proposed for summarizing questions; (2) The MDL-based tree cut model was employed to identify question topic and question focus automatically; (3) A new form of language modeling using question topic and question focus was developed for question search; (4) Extensive experiments have been conducted to evaluate the proposed approach using a large collection of real questions obtained from Yahoo! Answers. Though we only utilize data from communitybased question answering service in our experiments, we could also use categorized questions from forum sites and FAQ sites. Thus, as future work, we will try to investigate the use of the proposed approach for other kinds of web services. Acknowledgement We would like to thank Xinying Song, Shasha Li, and Shilin Ding for their efforts on developing the evaluation data. We would also like to thank Stephan H. Stiller for his proof-reading of the paper. 163 References A. Echihabi and D. Marcu. 2003. A Noisy-Channel Approach to Question Answering. In Proc. of ACL’03. C. Fellbaum. 1998. WordNet: An electronic lexical database. MIT Press. D. Metzler and W. B. Croft. 2005. Analysis of statistical question classification for fact-based questions. Information Retrieval, 8(3), pages 481-504. E. Fredkin. 1960. Trie memory. Communications of the ACM, D. 3(9):490-499. E. M. Voorhees. 2004. Overview of the TREC 2004 question answering track. In Proc. of TREC’04. E. Sneiders. 2002. Automated question answering using question templates that cover the conceptual model of the database. In Proc. of the 6th International Conference on Applications of Natural Language to Information Systems, pages 235-239. G. Salton, A. Wong, and C. S. Yang 1975. A vector space model for automatic indexing. Communications of the ACM, vol. 18, nr. 11, pages 613-620. H. Li and N. Abe. 1998. Generalizing case frames using a thesaurus and the MDL principle. Computational Linguistics, 24(2), pages 217-244. J. Jeon and W.B. Croft. 2007. Learning translationbased language models using Q&A archives. Technical report, University of Massachusetts. J. Jeon, W. B. Croft, and J. Lee. 2005a. Finding semantically similar questions based on their answers. In Proc. of SIGIR’05. J. Jeon, W. B. Croft, and J. Lee. 2005b. Finding similar questions in large question and answer archives. In Proc. of CIKM ‘05, pages 84-90. J. Rissanen. 1978. Modeling by shortest data description. Automatica, vol. 14, pages. 465-471 J.M. Ponte, W.B. Croft. 1998. A language modeling approach to information retrieval. In Proc. of SIGIR’98. M. A. Pasca and S. M. Harabagiu. 2001. High performance question/answering. In Proc. of SIGIR’01, pages 366-374. P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311. R. D. Burke, K. J. Hammond, V. A. Kulyukin, S. L. Lytinen, N. Tomuro, and S. Schoenberg. 1997. Question answering from frequently asked question files: Experiences with the FAQ finder system. Technical report, University of Chicago. S. 
Harabagiu, A. Hickl, J. Lehmann and D. Moldovan. 2005. Experiments with Interactive Question Answering. In Proc. of ACL'05. V. Jijkoun, M. D. Rijke. 2005. Retrieving Answers from Frequently Asked Questions Pages on the Web. In Proc. of CIKM'05. Y. Cao and H. Li. 2002. Base noun phrase translation using web data and the EM algorithm. In Proc. of COLING'02. Y.-S. Lai, K.-A. Fung, and C.-H. Wu. 2002. FAQ mining via list detection. In Proc. of the Workshop on Multilingual Summarization and Question Answering, 2002.
2008
19
Proceedings of ACL-08: HLT, pages 10–18, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Distributional Identification of Non-Referential Pronouns Shane Bergsma Department of Computing Science University of Alberta Edmonton, Alberta Canada, T6G 2E8 [email protected] Dekang Lin Google, Inc. 1600 Amphitheatre Parkway Mountain View California, 94301 [email protected] Randy Goebel Department of Computing Science University of Alberta Edmonton, Alberta Canada, T6G 2E8 [email protected] Abstract We present an automatic approach to determining whether a pronoun in text refers to a preceding noun phrase or is instead nonreferential. We extract the surrounding textual context of the pronoun and gather, from a large corpus, the distribution of words that occur within that context. We learn to reliably classify these distributions as representing either referential or non-referential pronoun instances. Despite its simplicity, experimental results on classifying the English pronoun it show the system achieves the highest performance yet attained on this important task. 1 Introduction The goal of coreference resolution is to determine which noun phrases in a document refer to the same real-world entity. As part of this task, coreference resolution systems must decide which pronouns refer to preceding noun phrases (called antecedents) and which do not. In particular, a long-standing challenge has been to correctly classify instances of the English pronoun it. Consider the sentences: (1) You can make it in advance. (2) You can make it in Hollywood. In sentence (1), it is an anaphoric pronoun referring to some previous noun phrase, like “the sauce” or “an appointment.” In sentence (2), it is part of the idiomatic expression “make it” meaning “succeed.” A coreference resolution system should find an antecedent for the first it but not the second. Pronouns that do not refer to preceding noun phrases are called non-anaphoric or non-referential pronouns. The word it is one of the most frequent words in the English language, accounting for about 1% of tokens in text and over a quarter of all third-person pronouns.1 Usually between a quarter and a half of it instances are non-referential (e.g. Section 4, Table 3). As with other pronouns, the preceding discourse can affect it’s interpretation. For example, sentence (2) can be interpreted as referential if the preceding sentence is “You want to make a movie?” We show, however, that we can reliably classify a pronoun as being referential or non-referential based solely on the local context surrounding the pronoun. We do this by turning the context into patterns and enumerating all the words that can take the place of it in these patterns. For sentence (1), we can extract the context pattern “make * in advance” and for sentence (2) “make * in Hollywood,” where “*” is a wildcard that can be filled by any token. Nonreferential distributions tend to have the word it filling the wildcard position. Referential distributions occur with many other noun phrase fillers. For example, in our n-gram collection (Section 3.4), “make it in advance” and “make them in advance” occur roughly the same number of times (442 vs. 449), indicating a referential pattern. In contrast, “make it in Hollywood” occurs 3421 times while “make them in Hollywood” does not occur at all. These simple counts strongly indicate whether another noun can replace the pronoun. 
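As a toy illustration of this counting intuition (and not the full classifier described later in the paper), one can compare the n-gram count of a context pattern filled with it against the same pattern filled with them. The Python sketch below simply hard-codes the counts quoted above; the ratio and smoothing are assumptions for illustration only.

```python
# Counts quoted in the text: "make it in advance" vs. "make them in advance"
# occur about equally often, while "make it in Hollywood" has no "them" counterpart.
counts = {
    ("make", "it", "in", "advance"): 442,
    ("make", "them", "in", "advance"): 449,
    ("make", "it", "in", "Hollywood"): 3421,
    ("make", "them", "in", "Hollywood"): 0,
}

def it_vs_them_ratio(pattern_with_it, ngram_counts):
    """Crude indicator of how strongly this context prefers 'it' over 'them'."""
    them_version = tuple("them" if tok == "it" else tok for tok in pattern_with_it)
    it_count = ngram_counts.get(pattern_with_it, 0)
    them_count = ngram_counts.get(them_version, 0)
    return (it_count + 1) / (them_count + 1)  # add-one smoothing avoids division by zero

print(it_vs_them_ratio(("make", "it", "in", "advance"), counts))    # near 1: likely referential
print(it_vs_them_ratio(("make", "it", "in", "Hollywood"), counts))  # large: likely non-referential
```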
Thus we can computationally distinguish between a) pronouns that refer to nouns, and b) all other instances: including those that have no antecedent, like sentence (2), 1e.g. http://ucrel.lancs.ac.uk/bncfreq/flists.html 10 and those that refer to sentences, clauses, or implied topics of discourse. Beyond the practical value of this distinction, Section 3 provides some theoretical justification for our binary classification. Section 3 also shows how to automatically extract and collect counts for context patterns, and how to combine the information using a machine learned classifier. Section 4 describes our data for learning and evaluation, It-Bank: a set of over three thousand labelled instances of the pronoun it from a variety of text sources. Section 4 also explains our comparison approaches and experimental methodology. Section 5 presents our results, including an interesting comparison of our system to human classification given equivalent segments of context. 2 Related Work The difficulty of non-referential pronouns has been acknowledged since the beginning of computational resolution of anaphora. Hobbs (1978) notes his algorithm does not handle pronominal references to sentences nor cases where it occurs in time or weather expressions. Hirst (1981, page 17) emphasizes the importance of detecting non-referential pronouns, “lest precious hours be lost in bootless searches for textual referents.” M¨uller (2006) summarizes the evolution of computational approaches to nonreferential it detection. In particular, note the pioneering work of Paice and Husk (1987), the inclusion of non-referential it detection in a full anaphora resolution system by Lappin and Leass (1994), and the machine learning approach of Evans (2001). There has recently been renewed interest in non-referential pronouns, driven by three primary sources. First of all, research in coreference resolution has shown the benefits of modules for general noun anaphoricity determination (Ng and Cardie, 2002; Denis and Baldridge, 2007). Unfortunately, these studies handle pronouns inadequately; judging from the decision trees and performance figures, Ng and Cardie (2002)’s system treats all pronouns as anaphoric by default. Secondly, while most pronoun resolution evaluations simply exclude non-referential pronouns, recent unsupervised approaches (Cherry and Bergsma, 2005; Haghighi and Klein, 2007) must deal with all pronouns in unrestricted text, and therefore need robust modules to automatically handle non-referential instances. Finally, reference resolution has moved beyond written text into in spoken dialog. Here, non-referential pronouns are pervasive. Eckert and Strube (2000) report that in the Switchboard corpus, only 45% of demonstratives and third-person pronouns have a noun phrase antecedent. Handling the common nonreferential instances is thus especially vital. One issue with systems for non-referential detection is the amount of language-specific knowledge that must be encoded. Consider a system that jointly performs anaphora resolution and word alignment in parallel corpora for machine translation. For this task, we need to identify non-referential anaphora in multiple languages. It is not always clear to what extent the features and modules developed for English systems apply to other languages. 
For example, the detector of Lappin and Leass (1994) labels a pronoun as non-referential if it matches one of several syntactic patterns, including: “It is Cogv-ed that Sentence,” where Cogv is a “cognitive verb” such as recommend, think, believe, know, anticipate, etc. Porting this approach to a new language would require not only access to a syntactic parser and a list of cognitive verbs in that language, but the development of new patterns to catch non-referential pronoun uses that do not exist in English. Moreover, writing a set of rules to capture this phenomenon is likely to miss many less-common uses. Alternatively, recent machine-learning approaches leverage a more general representation of a pronoun instance. For example, M¨uller (2006) has a feature for “distance to next complementizer (that, if, whether)” and features for the tokens and part-of-speech tags of the context words. Unfortunately, there is still a lot of implicit and explicit English-specific knowledge needed to develop these features, including, for example, lists of “seem” verbs such as appear, look, mean, happen. Similarly, the machine-learned system of Boyd et al. (2005) uses a set of “idiom patterns” like “on the face of it” that trigger binary features if detected in the pronoun context. Although machine learned systems can flexibly balance the various indicators and contra-indicators of non-referentiality, a particular feature is only useful if it is relevant to an example in limited labelled training data. Our approach avoids hand-crafting a set of spe11 cific indicator features; we simply use the distribution of the pronoun’s context. Our method is thus related to previous work based on Harris (1985)’s distributional hypothesis.2 It has been used to determine both word and syntactic path similarity (Hindle, 1990; Lin, 1998a; Lin and Pantel, 2001). Our work is part of a trend of extracting other important information from statistical distributions. Dagan and Itai (1990) use the distribution of a pronoun’s context to determine which candidate antecedents can fit the context. Bergsma and Lin (2006) determine the likelihood of coreference along the syntactic path connecting a pronoun to a possible antecedent, by looking at the distribution of the path in text. These approaches, like ours, are ways to inject sophisticated “world knowledge” into anaphora resolution. 3 Methodology 3.1 Definition Our approach distinguishes contexts where pronouns cannot be replaced by a preceding noun phrase (non-noun-referential) from those where nouns can occur (noun-referential). Although coreference evaluations, such as the MUC (1997) tasks, also make this distinction, it is not necessarily used by all researchers. Evans (2001), for example, distinguishes between “clause anaphoric” and “pleonastic” as in the following two instances: (3) The paper reported that it had snowed. It was obvious. (clause anaphoric) (4) It was obvious that it had snowed. (pleonastic) The word It in sentence (3) is considered referential, while the word It in sentence (4) is considered non-referential.3 From our perspective, this interpretation is somewhat arbitrary. One could also say that the It in both cases refers to the clause “that it had snowed.” Indeed, annotation experiments using very fine-grained categories show low annotation reliability (M¨uller, 2006). On the other hand, there is no debate over the importance nor the definition of distinguishing pronouns that refer to nouns from those that do not. 
We adopt this distinction for our work, and show it has good inter-annotator reliability (Section 4.1). (Footnote 2: Words occurring in similar contexts have similar meanings. Footnote 3: The it in "it had snowed" is, of course, non-referential.) We henceforth refer to non-noun-referential simply as non-referential, and thus consider the word It in both sentences (3) and (4) as non-referential. Non-referential pronouns are widespread in natural language. The es in the German "Wie geht es Ihnen" and the il in the French "S'il vous plaît" are both non-referential. In pro-drop languages that may omit subject pronouns, there remains the question of whether an omitted pronoun is referential (Zhao and Ng, 2007). Although we focus on the English pronoun it, our approach should differentiate any words that have both a structural and a referential role in language, e.g. words like this, there and that (Müller, 2007). We believe a distributional approach could also help in related tasks like identifying the generic use of you (Gupta et al., 2007).

3.2 Context Distribution

Our method extracts the context surrounding a pronoun and determines which other words can take the place of the pronoun in the context. The extracted segments of context are called context patterns. The words that take the place of the pronoun are called pattern fillers. We gather pattern fillers from a large collection of n-gram frequencies. The maximum size of a context pattern depends on the size of the n-grams available in the data. In our n-gram collection (Section 3.4), the lengths of the n-grams range from unigrams to 5-grams, so our maximum pattern size is five. For a particular pronoun in text, there are five possible 5-grams that span the pronoun. For example, in the following instance of it:

... said here Thursday that it is unnecessary to continue ...

We can extract the following 5-gram patterns:

said here Thursday that *
here Thursday that * is
Thursday that * is unnecessary
that * is unnecessary to
* is unnecessary to continue

Similarly, we extract the four 4-gram patterns. Shorter n-grams were not found to improve performance on development data and hence are not extracted. We only use context within the current sentence (including the beginning-of-sentence and end-of-sentence tokens), so if a pronoun occurs near a sentence boundary, some patterns may be missing.

Table 1: Pattern filler types
Pattern Filler Type        | String
#1: 3rd-person pron. sing. | it/its
#2: 3rd-person pron. plur. | they/them/their
#3: any other pronoun      | he/him/his, I/me/my, etc.
#4: infrequent word token  | ⟨UNK⟩
#5: any other token        | *

We take a few steps to improve generality. We change the patterns to lower-case, convert sequences of digits to the # symbol, and run the Porter stemmer (Porter, 1980). (Footnote 4: Adapted from the Bow-toolkit (McCallum, 1996). Our method also works without the stemmer; we simply truncate the words in the pattern at a given maximum length (see Section 5.1). With simple truncation, all the pattern processing can be easily applied to other languages.) To generalize rare names, we convert capitalized words longer than five characters to a special NE tag. We also added a few simple rules to stem the irregular verbs be, have, do, and said, and convert the common contractions 'nt, 's, 'm, 're, 've, 'd, and 'll to their most likely stem. We do the same processing to our n-gram corpus. We then find all n-grams matching our patterns, allowing any token to match the wildcard in place of it. Also, other pronouns in the pattern are allowed to match a corresponding pronoun in an n-gram, regardless of differences in inflection and class.
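A minimal sketch of the pattern extraction just described is given below; it is not the authors' code. The normalization steps that follow in the text (lower-casing, digit mapping, stemming, NE tagging) are omitted, and the sentence-boundary tokens are an assumed convention.

```python
def context_patterns(tokens, pron_index, sizes=(5, 4)):
    """Extract the n-gram context patterns that span a pronoun.

    tokens:     the sentence as a list of tokens (with boundary markers added).
    pron_index: position of the pronoun (e.g. "it") in `tokens`.
    Returns a list of patterns, each a tuple of tokens with "*" at the pronoun slot.
    """
    patterns = []
    for n in sizes:
        # n possible n-grams span the pronoun: it can occupy any of the n slots
        for start in range(pron_index - n + 1, pron_index + 1):
            end = start + n
            if start < 0 or end > len(tokens):
                continue  # pattern would cross the sentence boundary; it is missing
            pattern = ["*" if i == pron_index else tokens[i] for i in range(start, end)]
            patterns.append(tuple(pattern))
    return patterns

sent = "<S> said here Thursday that it is unnecessary to continue </S>".split()
for p in context_patterns(sent, sent.index("it")):
    print(" ".join(p))   # reproduces the five 5-gram and four 4-gram patterns above
```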
However, determining part-of-speech in a large n-gram corpus is not simple, nor would it easily extend to other languages. Instead, we gather counts for five different classes of words that fill the wildcard position, easily determined by string match (Table 1). The third-person plural they (#2) reliably occurs in patterns where referential it also resides. The occurrence of any other pronoun (#3) guarantees that at the very least the pattern filler is a noun. A match with the infrequent word token ⟨UNK⟩(#4) (explained in Section 3.4) will likely be a noun because nouns account for a large proportion of rare words in a corpus. Gathering any other token (#5) also mostly finds nouns; inserting another part-of-speech usually 4Adapted from the Bow-toolkit (McCallum, 1996). Our method also works without the stemmer; we simply truncate the words in the pattern at a given maximum length (see Section 5.1). With simple truncation, all the pattern processing can be easily applied to other languages. Pattern Filler Counts #1 #2 #3 #5 sai here NE that * 84 0 291 3985 here NE that * be 0 0 0 93 NE that * be unnecessari 0 0 0 0 that * be unnecessari to 16726 56 0 228 * be unnecessari to continu 258 0 0 0 Table 2: 5-gram context patterns and pattern-filler counts for the Section 3.2 example. results in an unlikely, ungrammatical pattern. Table 2 gives the stemmed context patterns for our running example. It also gives the n-gram counts of pattern fillers matching the first four filler types (there were no matches of the ⟨UNK⟩type, #4). 3.3 Feature Vector Representation There are many possible ways to use the above counts. Intuitively, our method should identify as non-referential those instances that have a high proportion of fillers of type #1 (i.e., the word it), while labelling as referential those with high counts for other types of fillers. We would also like to leverage the possibility that some of the patterns may be more predictive than others, depending on where the wildcard lies in the pattern. For example, in Table 2, the cases where the it-position is near the beginning of the pattern best reflect the non-referential nature of this instance. We can achieve these aims by ordering the counts in a feature vector, and using a labelled set of training examples to learn a classifier that optimally weights the counts. For classification, we define non-referential as positive and referential as negative. Our feature representation very much resembles Table 2. For each of the five 5-gram patterns, ordered by the position of the wildcard, we have features for the logarithm of counts for filler types #1, #2, ... #5. Similarly, for each of the four 4-gram patterns, we provide the log-counts corresponding to types #1, #2, ... #5 as well. Before taking the logarithm, we smooth the counts by adding a fixed number to all observed values. We also provide, for each pattern, a feature that indicates if the pattern is not available because the it-position would cause the pattern to span beyond the current sentence. There are twenty-five 5-gram, twenty 4-gram, and nine indicator features in total. 13 Our classifier should learn positive weights on the type #1 counts and negative weights on the other types, with higher absolute weights on the more predictive filler types and pattern positions. Note that leaving the pattern counts unnormalized automatically allows patterns with higher counts to contribute more to the prediction of their associated instances. 
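A minimal sketch of how the counts can be arranged into this representation: the 25 + 20 log-counts ordered by wildcard position and filler type, followed by the 9 indicators for patterns that are unavailable at a sentence boundary. Emitting zeros for missing patterns is our own convention, and the smoothing constant of 40 anticipates the setting reported in Section 4.2.

import math

FILLER_TYPES = 5   # filler types #1-#5 from Table 1
SMOOTHING = 40     # fixed constant added before the log (value from Section 4.2)

def feature_vector(counts_5gram, counts_4gram):
    # counts_5gram has 5 entries and counts_4gram has 4, ordered by the
    # wildcard position; each entry is either None (pattern unavailable) or
    # a list of 5 counts, one per filler type.
    log_counts, indicators = [], []
    for entry in list(counts_5gram) + list(counts_4gram):
        if entry is None:
            log_counts.extend([0.0] * FILLER_TYPES)  # our choice for missing patterns
            indicators.append(1.0)
        else:
            log_counts.extend(math.log(c + SMOOTHING) for c in entry)
            indicators.append(0.0)
    return log_counts + indicators

# First row of Table 2 (the <UNK> type #4 had no matches); other patterns missing:
vec = feature_vector([[84, 0, 291, 0, 3985], None, None, None, None], [None] * 4)
print(len(vec))  # 45 count features + 9 indicators = 54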
3.4 N-Gram Data We now describe the collection of n-grams and their counts used in our implementation. We use, to our knowledge, the largest publicly available collection: the Google Web 1T 5-gram Corpus Version 1.1.5 This collection was generated from approximately 1 trillion tokens of online text. In this data, tokens appearing less than 200 times have been mapped to the ⟨UNK⟩symbol. Also, only n-grams appearing more than 40 times are included. For languages where such an extensive n-gram resource is not available, the n-gram counts could also be taken from the pagecounts returned by an Internet search engine. 4 Evaluation 4.1 Labelled It Data We need labelled data for training and evaluation of our system. This data indicates, for every occurrence of the pronoun it, whether it refers to a preceding noun phrase or not. Standard coreference resolution data sets annotate all noun phrases that have an antecedent noun phrase in the text. Therefore, we can extract labelled instances of it from these sets. We do this for the dry-run and formal sets from MUC-7 (1997), and merge them into a single data set. Of course, full coreference-annotated data is a precious resource, with the pronoun it making up only a small portion of the marked-up noun phrases. We thus created annotated data specifically for the pronoun it. We annotated 1020 instances in a collection of Science News articles (from 1995-2000), downloaded from the Science News website. We also annotated 709 instances in the WSJ portion of the DARPA TIPSTER Project (Harman, 1992), and 279 instances in the English portion of the Europarl Corpus (Koehn, 2005). A single annotator (A1) labelled all three data sets, while two additional annotators not connected 5Available from the LDC as LDC2006T13 Data Set Number of It % Non-Referential Europarl 279 50.9 Sci-News 1020 32.6 WSJ 709 25.1 MUC 129 31.8 Train 1069 33.2 Test 1067 31.7 Test-200 200 30.0 Table 3: Data sets used in experiments. with the project (A2 and A3) were asked to separately re-annotate a portion of each, so that interannotator agreement could be calculated. A1 and A2 agreed on 96% of annotation decisions, while A1-A3, and A2-A3, agreed on 91% and 93% of decisions, respectively. The Kappa statistic (Jurafsky and Martin, 2000, page 315), with P(E) computed from the confusion matrices, was a high 0.90 for A1A2, and 0.79 and 0.81 for the other pairs, around the 0.80 considered to be good reliability. These are, perhaps surprisingly, the only known it-annotationagreement statistics available for written text. They contrast favourably with the low agreement seen on categorizing it in spoken dialog (M¨uller, 2006). We make all the annotations available in It-Bank, an online repository for annotated it-instances.6 It-Bank also allows other researchers to distribute their it annotations. Often, the full text of articles containing annotations cannot be shared because of copyright. However, sharing just the sentences containing the word it, randomly-ordered, is permissible under fair-use guidelines. The original annotators retain their copyright on the annotations. We use our annotated data in two ways. First of all, we perform cross-validation experiments on each of the data sets individually, to help gauge the difficulty of resolution on particular domains and volumes of training data. Secondly, we randomly distribute all instances into two main sets, a training set and a test set. We also construct a smaller test set, Test-200, containing only the first 200 instances in the Test set. 
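For reference, the agreement statistic used above can be computed directly from a two-annotator confusion matrix, with P(E) taken from the marginals; the example matrix below is invented for illustration.

def cohens_kappa(confusion):
    # confusion[i][j] counts items labelled i by one annotator and j by the other.
    n = float(sum(sum(row) for row in confusion))
    p_observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion)))
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical referential / non-referential confusion matrix for two annotators:
print(round(cohens_kappa([[130, 6], [5, 59]]), 2))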
We use Test-200 for human experiments and error analysis (Section 5.2). Table 3 summarizes all the sets used in the experiments. 6www.cs.ualberta.ca/˜bergsma/ItBank/. It-Bank also contains an additional 1,077 examples used as development data. 14 4.2 Comparison Approaches We represent feature vectors exactly as described in Section 3.3. We smooth by adding 40 to all counts, equal to the minimum count in the n-gram data. For classification, we use a maximum entropy model (Berger et al., 1996), from the logistic regression package in Weka (Witten and Frank, 2005), with all default parameter settings. Results with our distributional approach are labelled as DISTRIB. Note that our maximum entropy classifier actually produces a probability of non-referentiality, which is thresholded at 50% to make a classification. As a baseline, we implemented the non-referential it detector of Lappin and Leass (1994), labelled as LL in the results. This is a syntactic detector, a point missed by Evans (2001) in his criticism: the patterns are robust to intervening words and modifiers (e.g. “it was never thought by the committee that...”) provided the sentence is parsed correctly.7 We automatically parse sentences with Minipar, a broad-coverage dependency parser (Lin, 1998b). We also use a separate, extended version of the LL detector, implemented for large-scale nonreferential detection by Cherry and Bergsma (2005). This system, also for Minipar, additionally detects instances of it labelled with Minipar’s pleonastic category Subj. It uses Minipar’s named-entity recognition to identify time expressions, such as “it was midnight,” and provides a number of other patterns to match common non-referential it uses, such as in expressions like “darn it,” “don’t overdo it,” etc. This extended detector is labelled as MINIPL (for Minipar pleonasticity) in our results. Finally, we tested a system that combines the above three approaches. We simply add the LL and MINIPL decisions as binary features in the DISTRIB system. This system is called COMBO in our results. 4.3 Evaluation Criteria We follow M¨uller (2006)’s evaluation criteria. Precision (P) is the proportion of instances that we label as non-referential that are indeed non-referential. Recall (R) is the proportion of true non-referentials that we detect, and is thus a measure of the coverage 7Our approach, on the other hand, would seem to be susceptible to such intervening material, if it pushes indicative context tokens out of the 5-token window. System P R F Acc LL 93.4 21.0 34.3 74.5 MINIPL 66.4 49.7 56.9 76.1 DISTRIB 81.4 71.0 75.8 85.7 COMBO 81.3 73.4 77.1 86.2 Table 4: Train/Test-split performance (%). of the system. F-Score (F) is the geometric average of precision and recall; it is the most common nonreferential detection metric. Accuracy (Acc) is the percentage of instances labelled correctly. 5 Results 5.1 System Comparison Table 4 gives precision, recall, F-score, and accuracy on the Train/Test split. Note that while the LL system has high detection precision, it has very low recall, sharply reducing F-score. The MINIPL approach sacrifices some precision for much higher recall, but again has fairly low F-score. To our knowledge, our COMBO system, with an F-Score of 77.1%, achieves the highest performance of any non-referential system yet implemented. Even more importantly, DISTRIB, which requires only minimal linguistic processing and no encoding of specific indicator patterns, achieves 75.8% F-Score. 
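The figures in Table 4 follow from the definitions above; a minimal sketch over parallel lists of gold and predicted labels, with non-referential as the positive class. Note that F is computed here as the conventional harmonic mean of precision and recall.

def evaluate(gold, pred):
    # gold/pred are parallel lists of 0/1 labels, 1 = non-referential.
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(1 for g, p in zip(gold, pred) if g == p) / len(gold)
    return precision, recall, f_score, accuracy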
The difference between COMBO and DISTRIB is not statistically significant, while both are significantly better than the rule-based approaches.8 This provides strong motivation for a “light-weight” approach to non-referential it detection – one that does not require parsing or hand-crafted rules and – is easily ported to new languages and text domains. Since applying an English stemmer to the context words (Section 3.2) reduces the portability of the distributional technique, we investigated the use of more portable pattern abstraction. Figure 1 compares the use of the stemmer to simply truncating the words in the patterns at a certain maximum length. Using no truncation (Unaltered) drops the F-Score by 4.3%, while truncating the patterns to a length of four only drops the F-Score by 1.4%, a difference which is not statistically significant. Simple truncation may be a good option for other languages where stemmers are not readily available. The optimum 8All significance testing uses McNemar’s test, p<0.05 15 68 70 72 74 76 78 80 1 2 3 4 5 6 7 8 9 10 F-Score Truncated word length Stemmed patterns Truncated patterns Unaltered patterns Figure 1: Effect of pattern-word truncation on nonreferential it detection (COMBO system, Train/Test split). System Europl. Sci-News WSJ MUC LL 44.0 39.3 21.5 13.3 MINIPL 70.3 61.8 22.0 50.7 DISTRIB 79.7 77.2 69.5 68.2 COMBO 76.2 78.7 68.1 65.9 COMBO4 83.6 76.5 67.1 74.7 Table 5: 10-fold cross validation F-Score (%). truncation size will likely depend on the length of the base forms of words in that language. For realworld application of our approach, truncation also reduces the table sizes (and thus storage and lookup costs) of any pre-compiled it-pattern database. Table 5 compares the 10-fold cross-validation Fscore of our systems on the four data sets. The performance of COMBO on Europarl and MUC is affected by the small number of instances in these sets (Section 4, Table 3). We can reduce data fragmentation by removing features. For example, if we only use the length-4 patterns in COMBO (labelled as COMBO4), performance increases dramatically on Europarl and MUC, while dipping slightly for the larger Sci-News and WSJ sets. Furthermore, selecting just the three most useful filler type counts as features (#1,#2,#5), boosts F-Score on Europarl to 86.5%, 10% above the full COMBO system. 5.2 Analysis and Discussion In light of these strong results, it is worth considering where further gains in performance might yet be found. One key question is to what extent a limited context restricts identification performance. We first tested the importance of the pattern length by System P R F Acc DISTRIB 80.0 73.3 76.5 86.5 COMBO 80.7 76.7 78.6 87.5 Human-1 92.7 63.3 75.2 87.5 Human-2 84.0 70.0 76.4 87.0 Human-3 72.2 86.7 78.8 86.0 Table 6: Evaluation on Test-200 (%). using only the length-4 counts in the DISTRIB system (Train/Test split). Surprisingly, the drop in FScore was only one percent, to 74.8%. Using only the length-5 counts drops F-Score to 71.4%. Neither are statistically significant; however there seems to be diminishing returns from longer context patterns. Another way to view the limited context is to ask, given the amount of context we have, are we making optimum use of it? We answer this by seeing how well humans can do with the same information. As explained in Section 3.2, our system uses 5-gram context patterns that together span from four-to-theleft to four-to-the-right of the pronoun. 
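The window in question can be recovered with a few lines of Python; the handling of sentence boundaries (simply taking whatever context is available) is our own choice.

def nine_token_window(tokens, it_index, k=4):
    # Up to k tokens on each side of the pronoun: the span covered jointly by
    # the 5-gram patterns, and the context shown to the human judges below.
    left = tokens[max(0, it_index - k):it_index]
    right = tokens[it_index + 1:it_index + 1 + k]
    return left + [tokens[it_index]] + right

sent = "said here Thursday that it is unnecessary to continue".split()
print(" ".join(nine_token_window(sent, sent.index("it"))))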
We thus provide these same nine-token windows to our human subjects, and ask them to decide whether the pronouns refer to previous noun phrases or not, based on these contexts. Subjects first performed a dryrun experiment on separate development data. They were shown their errors and sources of confusion were clarified. They then made the judgments unassisted on the final Test-200 data. Three humans performed the experiment. Their results show a range of preferences for precision versus recall, with both F-Score and Accuracy on average below the performance of COMBO (Table 6). Foremost, these results show that our distributional approach is already getting good leverage from the limited context information, around that achieved by our best human. It is instructive to inspect the twenty-five Test-200 instances that the COMBO system classified incorrectly, given human performance on this same set. Seventeen of the twenty-five COMBO errors were also made by one or more human subjects, suggesting system errors are also mostly due to limited context. For example, one of these errors was for the context: “it takes an astounding amount...” Here, the non-referential nature of the instance is not apparent without the infinitive clause that ends the sentence: “... of time to compare very long DNA sequences 16 with each other.” Six of the eight errors unique to the COMBO system were cases where the system falsely said the pronoun was non-referential. Four of these could have referred to entire sentences or clauses rather than nouns. These confusing cases, for both humans and our system, result from our definition of a referential pronoun: pronouns with verbal or clause antecedents are considered non-referential (Section 3.1). If an antecedent verb or clause is replaced by a nominalization (Smith researched... to Smith’s research), a referring pronoun, in the same context, becomes referential. When we inspect the probabilities produced by the maximum entropy classifier (Section 4.2), we see only a weak bias for the non-referential class on these examples, reflecting our classifier’s uncertainty. It would likely be possible to improve accuracy on these cases by encoding the presence or absence of preceding nominalizations as a feature of our classifier. Another false non-referential decision is for the phrase “... machine he had installed it on.” The it is actually referential, but the extracted patterns (e.g. “he had install * on”) are nevertheless usually filled with it.9 Again, it might be possible to fix such examples by leveraging the preceding discourse. Notably, the first noun-phrase before the context is the word “software.” There is strong compatibility between the pronoun-parent “install” and the candidate antecedent “software.” In a full coreference resolution system, when the anaphora resolution module has a strong preference to link it to an antecedent (which it should when the pronoun is indeed referential), we can override a weak non-referential probability. Non-referential it detection should not be a pre-processing step, but rather part of a globallyoptimal configuration, as was done for general noun phrase anaphoricity by Denis and Baldridge (2007). The suitability of this kind of approach to correcting some of our system’s errors is especially obvious when we inspect the probabilities of the maximum entropy model’s output decisions on the Test-200 set. Where the maximum entropy classifier makes mistakes, it does so with less confidence than when it classifies correct examples. 
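The confidence gap just described can be measured directly from the classifier's output probabilities; a sketch over parallel arrays (names ours), where p_nonref is the predicted probability of the non-referential class and the 50% threshold from Section 4.2 gives the classification.

import numpy as np

def mean_confidence_by_correctness(p_nonref, gold):
    # Average predicted probability of the chosen class, split by whether the
    # thresholded classification was correct.
    p_nonref = np.asarray(p_nonref, dtype=float)
    gold = np.asarray(gold)
    pred = (p_nonref >= 0.5).astype(int)
    chosen = np.where(pred == 1, p_nonref, 1.0 - p_nonref)
    correct = pred == gold
    return chosen[correct].mean(), chosen[~correct].mean()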
The average predicted 9This example also suggests using filler counts for the word “the” as a feature when it is the last word in the pattern. probability of the incorrect classifications is 76.0% while the average probability of the correct classifications is 90.3%. Many incorrect decisions are ready to switch sides; our next step will be to use features of the preceding discourse and the candidate antecedents to help give them a push. 6 Conclusion We have presented an approach to detecting nonreferential pronouns in text based on the distribution of the pronoun’s context. The approach is simple to implement, attains state-of-the-art results, and should be easily ported to other languages. Our technique demonstrates how large volumes of data can be used to gather world knowledge for natural language processing. A consequence of this research was the creation of It-Bank, a collection of thousands of labelled examples of the pronoun it, which will benefit other coreference resolution researchers. Error analysis reveals that our system is getting good leverage out of the pronoun context, achieving results comparable to human performance given equivalent information. To boost performance further, we will need to incorporate information from preceding discourse. Future research will also test the distributional classification of other ambiguous pronouns, like this, you, there, and that. Another avenue of study will look at the interaction between coreference resolution and machine translation. For example, if a single form in English (e.g. that) is separated into different meanings in another language (e.g., Spanish demonstrative ese, nominal reference ´ese, abstract or statement reference eso, and complementizer que), then aligned examples provide automatically-disambiguated English data. We could extract context patterns and collect statistics from these examples like in our current approach. In general, jointly optimizing translation and coreference is an exciting and largely unexplored research area, now partly enabled by our portable nonreferential detection methodology. Acknowledgments We thank Kristin Musselman and Christopher Pinchak for assistance preparing the data, and we thank Google Inc. for sharing their 5-gram corpus. We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Alberta Ingenuity Fund, and the Alberta Informatics Circle of Research Excellence. 17 References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Shane Bergsma and Dekang Lin. 2006. Bootstrapping path-based pronoun resolution. In COLINGACL, pages 33–40. Adrianne Boyd, Whitney Gegg-Harrison, and Donna Byron. 2005. Identifying non-referential it: a machine learning approach incorporating linguistically motivated patterns. In ACL Workshop on Feature Engineering for Machine Learning in NLP, pages 40–47. Colin Cherry and Shane Bergsma. 2005. An expectation maximization approach to pronoun resolution. In CoNLL, pages 88–95. Ido Dagan and Alan Itai. 1990. Automatic processing of large corpora for the resolution of anaphora references. In COLING, volume 3, pages 330–332. Pascal Denis and Jason Baldridge. 2007. Joint determination of anaphoricity and coreference using integer programming. In NAACL-HLT, pages 236–243. Miriam Eckert and Michael Strube. 2000. Dialogue acts, synchronizing units, and anaphora resolution. 
Journal of Semantics, 17(1):51–89. Richard Evans. 2001. Applying machine learning toward an automatic classification of it. Literary and Linguistic Computing, 16(1):45–57. Surabhi Gupta, Matthew Purver, and Dan Jurafsky. 2007. Disambiguating between generic and referential “you” in dialog. In ACL Demo and Poster Sessions, pages 105–108. Aria Haghighi and Dan Klein. 2007. Unsupervised coreference resolution in a nonparametric Bayesian model. In ACL, pages 848–855. Donna Harman. 1992. The DARPA TIPSTER project. ACM SIGIR Forum, 26(2):26–28. Zellig Harris. 1985. Distributional structure. In J.J. Katz, editor, The Philosophy of Linguistics, pages 26– 47. Oxford University Press, New York. Donald Hindle. 1990. Noun classification from predicate-argument structures. In ACL, pages 268– 275. Graeme Hirst. 1981. Anaphora in Natural Language Understanding: A Survey. Springer Verlag. Jerry Hobbs. 1978. Resolving pronoun references. Lingua, 44(311):339–352. Daniel Jurafsky and James H. Martin. 2000. Speech and language processing. Prentice Hall. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit X, pages 79–86. Shalom Lappin and Herbert J. Leass. 1994. An algorithm for pronominal anaphora resolution. Computational Linguistics, 20(4):535–561. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998a. Automatic retrieval and clustering of similar words. In COLING-ACL, pages 768–773. Dekang Lin. 1998b. Dependency-based evaluation of MINIPAR. In LREC Workshop on the Evaluation of Parsing Systems. Andrew Kachites McCallum. 1996. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/˜mccallum/bow. MUC-7. 1997. Coreference task definition (v3.0, 13 Jul 97). In Proceedings of the Seventh Message Understanding Conference (MUC-7). Christoph M¨uller. 2006. Automatic detection of nonreferential It in spoken multi-party dialog. In EACL, pages 49–56. Christoph M¨uller. 2007. Resolving It, This, and That in unrestricted multi-party dialog. In ACL, pages 816– 823. Vincent Ng and Claire Cardie. 2002. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In COLING, pages 730–736. Chris D. Paice and Gareth D. Husk. 1987. Towards the automatic recognition of anaphoric features in English text: the impersonal pronoun “it”. Computer Speech and Language, 2:109–132. Martin F. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann, second edition. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In EMNLP, pages 541–550. 18
Proceedings of ACL-08: HLT, pages 165–173, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Trainable Generation of Big-Five Personality Styles through Data-driven Parameter Estimation Franc¸ois Mairesse Cambridge University Engineering Department Trumpington Street Cambridge, CB2 1PZ, United Kingdom [email protected] Marilyn Walker Department of Computer Science University of Sheffield Sheffield, S1 4DP, United Kingdom [email protected] Abstract Previous work on statistical language generation has primarily focused on grammaticality and naturalness, scoring generation possibilities according to a language model or user feedback. More recent work has investigated data-driven techniques for controlling linguistic style without overgeneration, by reproducing variation dimensions extracted from corpora. Another line of work has produced handcrafted rule-based systems to control specific stylistic dimensions, such as politeness and personality. This paper describes a novel approach that automatically learns to produce recognisable variation along a meaningful stylistic dimension— personality—without the computational cost incurred by overgeneration techniques. We present the first evaluation of a data-driven generation method that projects multiple personality traits simultaneously and on a continuous scale. We compare our performance to a rule-based generator in the same domain. 1 Introduction Over the last 20 years, statistical language models (SLMs) have been used successfully in many tasks in natural language processing, and the data available for modeling has steadily grown (Lapata and Keller, 2005). Langkilde and Knight (1998) first applied SLMs to statistical natural language generation (SNLG), showing that high quality paraphrases can be generated from an underspecified representation of meaning, by first applying a very underconstrained, rule-based overgeneration phase, whose outputs are then ranked by an SLM scoring phase. Since then, research in SNLG has explored a range of models for both dialogue and text generation. One line of work has primarily focused on grammaticality and naturalness, scoring the overgeneration phase with a SLM, and evaluating against a gold-standard corpus, using string or tree-match metrics (Langkilde-Geary, 2002; Bangalore and Rambow, 2000; Chambers and Allen, 2004; Belz, 2005; Isard et al., 2006). Another thread investigates SNLG scoring models trained using higher-level linguistic features to replicate human judgments of utterance quality (Rambow et al., 2001; Nakatsu and White, 2006; Stent and Guo, 2005). The error of these scoring models approaches the gold-standard human ranking with a relatively small training set. A third SNLG approach eliminates the overgeneration phase (Paiva and Evans, 2005). It applies factor analysis to a corpus exhibiting stylistic variation, and then learns which generation parameters to manipulate to correlate with factor measurements. The generator was shown to reproduce intended factor levels across several factors, thus modelling the stylistic variation as measured in the original corpus. Our goal is a generation technique that can target multiple stylistic effects simultaneously and over a continuous scale, controlling stylistic dimensions that are commonly understood and thus meaningful to users and application developers. Our intended applications are output utterances for intelligent training or intervention systems, video game characters, or virtual environment avatars. 
In previous work, we presented PERSONAGE, a psychologically-informed rule-based generator based on the Big Five personality model, and we showed that PERSONAGE can project extreme personality on the extraversion scale, i.e. both introverted and extraverted personality types (Mairesse and Walker, 2007). We used the Big Five model to develop PERSONAGE for several reasons. First, the Big Five has been shown in psychology to ex165 Trait High Low Extraversion warm, assertive, sociable, excitement seeking, active, spontaneous, optimistic, talkative shy, quiet, reserved, passive, solitary, moody Emotional stability calm, even-tempered, reliable, peaceful, confident neurotic, anxious, depressed, self-conscious Agreeableness trustworthy, considerate, friendly, generous, helpful unfriendly, selfish, suspicious, uncooperative, malicious Conscientiousness competent, disciplined, dutiful, achievement striving disorganised, impulsive, unreliable, forgetful Openness to experience creative, intellectual, curious, cultured, complex narrow-minded, conservative, ignorant, simple Table 1: Example adjectives associated with extreme values of the Big Five trait scales. plain much of the variation in human perceptions of personality differences. Second, we believe that the adjectives used to develop the Big Five model provide an intuitive, meaningful definition of linguistic style. Table 1 shows some of the trait adjectives associated with the extremes of each Big Five trait. Third, there are many studies linking personality to linguistic variables (Pennebaker and King, 1999; Mehl et al., 2006, inter alia). See (Mairesse and Walker, 2007) for more detail. In this paper, we further test the utility of basing stylistic variation on the Big Five personality model. The Big Five traits are represented by scalar values that range from 1 to 7, with values normally distributed among humans. While our previous work targeted extreme values of individual traits, here we show that we can target multiple personality traits simultaneously and over the continuous scales of the Big Five model. Section 2 describes a novel parameter-estimation method that automatically learns to produce recognisable variation for all Big Five traits, without overgeneration, implemented in a new SNLG called PERSONAGE-PE. We show that PERSONAGE-PE generates targets for multiple personality dimensions, using linear and non-linear parameter estimation models to predict generation parameters directly from the scalar targets. Section 3.2 shows that humans accurately perceive the intended variation, and Section 3.3 compares PERSONAGE-PE (trained) with PERSONAGE (rule-based; Mairesse and Walker, 2007). We delay a detailed discussion of related work to Section 4, where we summarize and discuss future work. 2 Parameter Estimation Models The data-driven parameter estimation method consists of a development phase and a generation phase (Section 3). The development phase: 1. Uses a base generator to produce multiple utterances by randomly varying its parameters; 2. Collects human judgments rating the personality of each utterance; 3. Trains statistical models to predict the parameters from the personality judgments; 7.00 6.00 5.00 4.00 3.00 2.00 1.00 Agreeableness rating 30 20 10 0 Frequency Figure 1: Distribution of average agreeableness ratings from the 2 expert judges for 160 random utterances. 4. Selects the best model for each parameter via crossvalidation. 
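The four steps above can be wired together as in the sketch below, which handles continuous parameters only and substitutes scikit-learn regressors for the Weka learners used later (Section 2.3): a decision-tree regressor stands in for the M5' model tree, and the default R-squared cross-validation score stands in for the correlation-based selection of Section 2.4. base_generator, judge_big_five and the other names are illustrative placeholders, not the system's actual interfaces.

import random
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

def develop_parameter_models(base_generator, content_inputs, parameter_names,
                             judge_big_five, n_samples=160):
    # Step 1: generate utterances with randomly sampled parameter settings,
    # keeping the generation decisions that were actually realised.
    samples = []
    for _ in range(n_samples):
        setting = {name: random.random() for name in parameter_names}
        utterance, decisions = base_generator(random.choice(content_inputs), setting)
        samples.append((utterance, decisions))

    # Step 2: collect Big Five ratings (five scores on a 1-7 scale) per utterance.
    ratings = np.array([judge_big_five(utterance) for utterance, _ in samples])

    # Steps 3-4: for each parameter, train candidate regressors mapping the
    # ratings to the realised decision and keep the best cross-validated one.
    models = {}
    for name in parameter_names:
        targets = np.array([decisions[name] for _, decisions in samples])
        candidates = [LinearRegression(), DecisionTreeRegressor(), SVR(kernel="linear")]
        best = max(candidates,
                   key=lambda m: cross_val_score(m, ratings, targets, cv=10).mean())
        models[name] = clone(best).fit(ratings, targets)
    return models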
2.1 Base Generator We make minimal assumptions about the input to the generator to favor domain independence. The input is a speech act, a potential content pool that can be used to achieve that speech act, and five scalar personality parameters (1. . .7), specifying values for the continuous scalar dimensions of each trait in the Big Five model. See Table 1. This requires a base generator that generates multiple outputs expressing the same input content by varying linguistic parameters related to the Big Five traits. We start with the PERSONAGE generator (Mairesse and Walker, 2007), which generates recommendations and comparisons of restaurants. We extend PERSONAGE with new parameters for a total of 67 parameters in PERSONAGE-PE. See Table 2. These parameters are derived from psychological studies identifying linguistic markers of the Big Five traits (Pennebaker and King, 1999; Mehl et al., 2006, inter alia). As PERSONAGE’s input parameters are domain-independent, most parameters range continuously between 0 and 1, while pragmatic marker insertion parameters are binary, except for the SUBJECT IMPLICITNESS, STUTTERING and PRONOMI166 Parameters Description Content parameters: VERBOSITY Control the number of propositions in the utterance RESTATEMENTS Paraphrase an existing proposition, e.g. ‘Chanpen Thai has great service, it has fantastic waiters’ REPETITIONS Repeat an existing proposition CONTENT POLARITY Control the polarity of the propositions expressed, i.e. referring to negative or positive attributes REPETITIONS POLARITY Control the polarity of the restated propositions CONCESSIONS Emphasise one attribute over another, e.g. ‘even if Chanpen Thai has great food, it has bad service’ CONCESSIONS POLARITY Determine whether positive or negative attributes are emphasised POLARISATION Control whether the expressed polarity is neutral or extreme POSITIVE CONTENT FIRST Determine whether positive propositions—including the claim—are uttered first Syntactic template selection parameters: SELF-REFERENCES Control the number of first person pronouns CLAIM COMPLEXITY Control the syntactic complexity (syntactic embedding) CLAIM POLARITY Control the connotation of the claim, i.e. whether positive or negative affect is expressed Aggregation operations: PERIOD Leave two propositions in their own sentences, e.g. ‘Chanpen Thai has great service. It has nice decor.’ RELATIVE CLAUSE Aggregate propositions with a relative clause, e.g. ‘Chanpen Thai, which has great service, has nice decor’ WITH CUE WORD Aggregate propositions using with, e.g. ‘Chanpen Thai has great service, with nice decor’ CONJUNCTION Join two propositions using a conjunction, or a comma if more than two propositions MERGE Merge the subject and verb of two propositions, e.g. ‘Chanpen Thai has great service and nice decor’ ALSO CUE WORD Join two propositions using also, e.g. ’Chanpen Thai has great service, also it has nice decor’ CONTRAST - CUE WORD Contrast two propositions using while, but, however, on the other hand, e.g. ’While Chanpen Thai has great service, it has bad decor’, ’Chanpen Thai has great service, but it has bad decor’ JUSTIFY - CUE WORD Justify a proposition using because, since, so, e.g. ’Chanpen Thai is the best, because it has great service’ CONCEDE - CUE WORD Concede a proposition using although, even if, but/though, e.g. ‘Although Chanpen Thai has great service, it has bad decor’, ‘Chanpen Thai has great service, but it has bad decor though’ MERGE WITH COMMA Restate a proposition by repeating only the object, e.g. 
’Chanpen Thai has great service, nice waiters’ CONJ. WITH ELLIPSIS Restate a proposition after replacing its object by an ellipsis, e.g. ’Chanpen Thai has . . . , it has great service’ Pragmatic markers: SUBJECT IMPLICITNESS Make the restaurant implicit by moving the attribute to the subject, e.g. ‘the service is great’ NEGATION Negate a verb by replacing its modifier by its antonym, e.g. ‘Chanpen Thai doesn’t have bad service’ SOFTENER HEDGES Insert syntactic elements (sort of, kind of, somewhat, quite, around, rather, I think that, it seems that, it seems to me that) to mitigate the strength of a proposition, e.g. ‘Chanpen Thai has kind of great service’ or ‘It seems to me that Chanpen Thai has rather great service’ EMPHASIZER HEDGES Insert syntactic elements (really, basically, actually, just) to strengthen a proposition, e.g. ‘Chanpen Thai has really great service’ or ‘Basically, Chanpen Thai just has great service’ ACKNOWLEDGMENTS Insert an initial back-channel (yeah, right, ok, I see, oh, well), e.g. ‘Well, Chanpen Thai has great service’ FILLED PAUSES Insert syntactic elements expressing hesitancy (like, I mean, err, mmhm, you know), e.g. ‘I mean, Chanpen Thai has great service, you know’ or ‘Err... Chanpen Thai has, like, great service’ EXCLAMATION Insert an exclamation mark, e.g. ‘Chanpen Thai has great service!’ EXPLETIVES Insert a swear word, e.g. ‘the service is damn great’ NEAR-EXPLETIVES Insert a near-swear word, e.g. ‘the service is darn great’ COMPETENCE MITIGATION Express the speaker’s negative appraisal of the hearer’s request, e.g. ‘everybody knows that . . . ’ TAG QUESTION Insert a tag question, e.g. ‘the service is great, isn’t it?’ STUTTERING Duplicate the first letters of a restaurant’s name, e.g. ‘Ch-ch-anpen Thai is the best’ CONFIRMATION Begin the utterance with a confirmation of the restaurant’s name, e.g. ‘did you say Chanpen Thai?’ INITIAL REJECTION Begin the utterance with a mild rejection, e.g. ‘I’m not sure’ IN-GROUP MARKER Refer to the hearer as a member of the same social group, e.g. pal, mate and buddy PRONOMINALIZATION Replace occurrences of the restaurant’s name by pronouns Lexical choice parameters: LEXICAL FREQUENCY Control the average frequency of use of each content word, according to BNC frequency counts WORD LENGTH Control the average number of letters of each content word VERB STRENGTH Control the strength of the selected verbs, e.g. ‘I would suggest’ vs. ‘I would recommend’ Table 2: The 67 generation parameters whose target values are learned. Aggregation cue words, hedges, acknowledgments and filled pauses are learned individually (as separate parameters), e.g. kind of is modeled differently than somewhat in the SOFTENER HEDGES category. Parameters are detailed in previous work (Mairesse and Walker, 2007). NALIZATION parameters. 2.2 Random Sample Generation and Expert Judgments We generate a sample of 160 random utterances by varying the parameters in Table 2 with a uniform distribution. This sample is intended to provide enough training material for estimating all 67 parameters for each personality dimension. Following Mairesse and Walker (2007), two expert judges (not the authors) familiar with the Big Five adjectives (Table 1) evaluate the personality of each utterance using the Ten-Item Personality Inventory (TIPI; Gosling et al., 2003), and also judge the utterance’s naturalness. Thus 11 judgments were made for each utterance for a total of 1760 judgments. 
The TIPI outputs a rating on a scale from 1 (low) to 7 (high) for each Big Five trait. The expert judgments are approximately nor167 mally distributed; Figure 1 shows the distribution for agreeableness. 2.3 Statistical Model Training Training data is created for each generation parameter—i.e. the output variable—to train statistical models predicting the optimal parameter value from the target personality scores. The models are thus based on the simplifying assumption that the generation parameters are independent. Any personality trait whose correlation with a generation decision is below 0.1 is removed from the training data. This has the effect of removing parameters that do not correlate strongly with any trait, which are set to a constant default value at generation time. Since the input parameter values may not be satisfiable depending on the input content, the actual generation decisions made for each utterance are recorded. For example, the CONCESSIONS decision value is the actual number of concessions produced in the utterance. To ensure that the models’ output can control the generator, the generation decision values are normalized to match the input range (0. . .1) of PERSONAGE-PE. Thus the dataset consists of 160 utterances and the corresponding generation decisions, each associated with 5 personality ratings averaged over both judges. Parameter estimation models are trained to predict either continuous (e.g. VERBOSITY) or binary (e.g. EXCLAMATION) generation decisions. We compare various learning algorithms using the Weka toolkit (with default values unless specified; Witten and Frank, 2005). Continuous parameters are modeled with a linear regression model (LR), an M5’ model tree (M5), and a model based on support vector machines with a linear kernel (SVM). As regression models can extrapolate beyond the [0, 1] interval, the output parameter values are truncated if needed—at generation time—before being sent to the base generator. Binary parameters are modeled using classifiers that predict whether the parameter is enabled or disabled. We test a Naive Bayes classifier (NB), a j48 decision tree (J48), a nearest-neighbor classifier using one neighbor (NN), a Java implementation of the RIPPER rule-based learner (JRIP), the AdaBoost boosting algorithm (ADA), and a support vector machines classifier with a linear kernel (SVM). Figures 2, 3 and 4 show the models learned for the EXCLAMATION (binary), STUTTERING (continuous), and CONTENT POLARITY (continuous) parameters in Table 2. The models predict generation parameters from input personality scores; note that Condition Class Weight -----------------if extraversion > 6.42 then 1 else 0 1.81 if extraversion > 4.42 then 1 else 0 0.38 if extraversion <= 6.58 then 1 else 0 0.22 if extraversion > 4.71 then 1 else 0 0.28 if agreeableness > 5.13 then 1 else 0 0.42 if extraversion <= 6.58 then 1 else 0 0.14 if extraversion > 4.79 then 1 else 0 0.19 if extraversion <= 6.58 then 1 else 0 0.17 Figure 2: AdaBoost model predicting the EXCLAMATION parameter. Given input trait values, the model outputs the class yielding the largest sum of weights for the rules returning that class. Class 0 = disabled, class 1 = enabled. (normalized) Content polarity = 0.054 - 0.102 * (normalized) emotional stability + 0.970 * (normalized) agreeableness - 0.110 * (normalized) conscientiousness + 0.013 * (normalized) openness to experience Figure 3: SVM model with a linear kernel predicting the CONTENT POLARITY parameter. 
sometimes the best performing model is non-linear. Given input trait values, the AdaBoost model in Figure 2 outputs the class yielding the largest sum of weights for the rules returning that class. For example, the first rule of the EXCLAMATION model shows that an extraversion score above 6.42 out of 7 would increase the weight of the enabled class by 1.81. The fifth rule indicates that a target agreeableness above 5.13 would further increase the weight by .42. The STUTTERING model tree in Figure 4 lets us calculate that a low emotional stability (1.0) together with a neutral conscientiousness and openness to experience (4.0) yield a parameter value of .62 (see LM2), whereas a neutral emotional stability decreases the value down to .17. Figure 4 also shows how personality traits that do not affect the parameter are removed, i.e. emotional stability, conscientiousness and openness to experience are the traits that affect stuttering. The linear model in Figure 3 shows that agreeableness has a strong effect on the CONTENT POLARITY parameter (.97 weight), but emotional stability, conscientiousness and openness to experience also have an effect. 2.4 Model Selection The final step of the development phase identifies the best performing model(s) for each generation parameter via cross-validation. For continuous pa168 ≤3.875 > 3.875 Conscientiousness Emotional stability ≤4.375 > 4.375 Stuttering = -0.0136 * emotional stability + 0.0098 * conscientiousness + 0.0063 * openness to experience + 0.0126 Stuttering = -0.1531 * emotional stability + 0.004 * conscientiousness + 0.1122 * openness to experience + 0.3129 Stuttering = -0.0142 * emotional stability + 0.004 * conscientiousness + 0.0076 * openness to experience + 0.0576 Figure 4: M5’ model tree predicting the STUTTERING parameter. Continuous parameters LR M5 SVM Content parameters: VERBOSITY 0.24 0.26 0.21 RESTATEMENTS 0.14 0.14 0.04 REPETITIONS 0.13 0.13 0.08 CONTENT POLARITY 0.46 0.46 0.47 REPETITIONS POLARITY 0.02 0.15 0.06 CONCESSIONS 0.23 0.23 0.12 CONCESSIONS POLARITY -0.01 0.16 0.07 POLARISATION 0.20 0.21 0.20 Syntactic template selection: CLAIM COMPLEXITY 0.10 0.33 0.26 CLAIM POLARITY 0.04 0.04 0.05 Aggregation operations: INFER - WITH CUE WORD 0.03 0.03 0.01 INFER - ALSO CUE WORD 0.10 0.10 0.06 JUSTIFY - SINCE CUE WORD 0.03 0.07 0.05 JUSTIFY - SO CUE WORD 0.07 0.07 0.04 JUSTIFY - PERIOD 0.36 0.35 0.21 CONTRAST - PERIOD 0.27 0.26 0.26 RESTATE - MERGE WITH COMMA 0.18 0.18 0.09 CONCEDE - ALTHOUGH CUE WORD 0.08 0.08 0.05 CONCEDE - EVEN IF CUE WORD 0.05 0.05 0.03 Pragmatic markers: SUBJECT IMPLICITNESS 0.13 0.13 0.04 STUTTERING INSERTION 0.16 0.23 0.17 PRONOMINALIZATION 0.22 0.20 0.17 Lexical choice parameters: LEXICAL FREQUENCY 0.21 0.21 0.19 WORD LENGTH 0.18 0.18 0.15 Table 3: Pearson’s correlation between parameter model predictions and continuous parameter values, for different regression models. Parameters that do not correlate with any trait are omitted. Aggregation operations are associated with a rhetorical relation (e.g. INFER). Results are averaged over a 10-fold cross-validation. rameters, Table 3 evaluates modeling accuracy by comparing the correlations between the model’s predictions and the actual parameter values in the test folds. Table 4 reports results for binary parameter classifiers, by comparing the F-measures of the enabled class. Best performing models are identified in bold; parameters that do not correlate with any trait or that produce a poor modeling accuracy are omitted. 
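Concretely, the cross-validation-based selection can score each candidate with Pearson's correlation for continuous parameters and with the F-measure of the enabled class for binary ones, as reported in Tables 3 and 4. In the sketch below, scikit-learn and scipy stand in for Weka, and pooling the predictions across folds is a simplification of the per-fold averaging used in the paper.

from scipy.stats import pearsonr
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

def continuous_score(model, X, y, folds=10):
    # Pearson's r between cross-validated predictions and true parameter values.
    preds = cross_val_predict(model, X, y, cv=folds)
    return pearsonr(y, preds)[0]

def binary_score(model, X, y, folds=10):
    # F-measure of the 'enabled' class for a binary generation parameter.
    preds = cross_val_predict(model, X, y, cv=folds)
    return f1_score(y, preds, pos_label=1)

def select_best(candidates, X, y, scorer):
    # Keep the candidate with the highest cross-validated score.
    return max(candidates, key=lambda m: scorer(m, X, y))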
The CONTENT POLARITY parameter is modeled Binary parameters NB J48 NN ADA SVM Pragmatic markers: SOFTENER HEDGES kind of 0.00 0.00 0.16 0.11 0.10 rather 0.00 0.00 0.02 0.01 0.01 quite 0.14 0.08 0.09 0.07 0.06 EMPHASIZER HEDGES basically 0.00 0.00 0.02 0.01 0.01 ACKNOWLEDGMENTS yeah 0.00 0.00 0.04 0.03 0.03 ok 0.13 0.07 0.06 0.05 0.05 FILLED PAUSES err 0.32 0.20 0.24 0.22 0.19 EXCLAMATION 0.23 0.34 0.36 0.38 0.34 EXPLETIVES 0.27 0.18 0.24 0.17 0.15 IN-GROUP MARKER 0.40 0.31 0.31 0.24 0.21 TAG QUESTION 0.32 0.21 0.21 0.15 0.13 CONFIRMATION 0.00 0.00 0.07 0.04 0.04 Table 4: F-measure of the enabled class for classification models of binary parameters. Parameters that do not correlate with any trait are omitted. Results are averaged over a 10-fold cross-validation. JRIP models are not shown as they never perform best. the most accurately, with the SVM model in Figure 3 producing a correlation of .47 with the true parameter values. Models of the PERIOD aggregation operation also perform well, with a linear regression model yielding a correlation of .36 when realizing a justification, and .27 when contrasting two propositions. CLAIM COMPLEXITY and VERBOSITY are also modeled successfully, with correlations of .33 and .26 using a model tree. The model tree controlling the STUTTERING parameter illustrated in Figure 4 produces a correlation of .23. For binary parameters, Table 4 shows that the Naive Bayes classifier is generally the most accurate, with F-measures of .40 for the IN-GROUP MARKER parameter, and .32 for both the insertion of filled pauses (err) and tag questions. The AdaBoost algorithm best predicts the EXCLAMATION parameter, with an F-measure of .38 for the model in Figure 2. 169 # Traits End Rating Nat Output utterance 1.a Extraversion high 4.42 4.79 Radio Perfecto’s price is 25 dollars but Les Routiers provides adequate food. I imagine they’re alright! Agreeableness high 4.94 1.b Emotional stability high 5.35 5.04 Let’s see, Les Routiers and Radio Perfecto... You would probably appreciate them. Radio Perfecto is in the East Village with kind of acceptable food. Les Routiers is located in Manhattan. Its price is 41 dollars. Conscientiousness high 5.21 2.a Extraversion low 3.65 3.21 Err... you would probably appreciate Trattoria Rustica, wouldn’t you? It’s in Manhattan, also it’s an italian restaurant. It offers poor ambience, also it’s quite costly. Agreeableness low 4.02 2.b Emotional stability low 4.13 4.50 Trattoria Rustica isn’t as bad as the others. Err... even if it’s costly, it offers kind of adequate food, alright? It’s an italian place. Openness to low 3.85 experience Table 5: Example outputs controlled by the parameter estimation models for a comparison (#1) and a recommendation (#2), with the average judges’ ratings (Rating) and naturalness (Nat). Ratings are on a scale from 1 to 7, with 1 = very low (e.g. neurotic or introvert) and 7 = very high on the dimension (e.g. emotionally stable or extraverted). 3 Evaluation Experiment The generation phase of our parameter estimation SNLG method consists of the following steps: 1. Use the best performing models to predict parameter values from the desired personality scores; 2. Generate the output utterance using the predicted parameter values. We then evaluate the output utterances using naive human judges to rate their perceived personality and naturalness. 3.1 Evaluation Method Given the best performing model for each generation parameter, we generate 5 utterances for each of 5 recommendation and 5 comparison speech acts. 
Each utterance targets an extreme value for two traits (either 1 or 7 out of 7) and neutral values for the remaining three traits (4 out of 7). The goal is for each utterance to project multiple traits on a continuous scale. To generate a range of alternatives, a Gaussian noise with a standard deviation of 10% of the full scale is added to each target value. Subjects were 24 native English speakers (12 male and 12 female graduate students from a range of disciplines from both the U.K. and the U.S.). Subjects evaluate the naturalness and personality of each utterance using the TIPI (Gosling et al., 2003). To limit the experiment’s duration, only the two traits with extreme target values are evaluated for each utterance. Subjects thus answered 5 questions for 50 utterances, two from the TIPI for each extreme trait and one about naturalness (250 judgments in total per subject). Subjects were not told that the utterances were intended to manifest extreme trait values. Table 5 shows several sample outputs and the mean personality ratings from the human judges. For example, utterance 1.a projects a high extraversion through the insertion of an exclamation mark based on the model in Figure 2, whereas utterance 2.a conveys introversion by beginning with the filled pause err. The same utterance also projects a low agreeableness by focusing on negative propositions, through a low CONTENT POLARITY parameter value as per the model in Figure 3. This evaluation addresses a number of open questions discussed below. Q1: Is the personality projected by models trained on ratings from a few expert judges recognised by a larger sample of naive judges? (Section 3.2) Q2: Can a combination of multiple traits within a single utterance be detected by naive judges? (Section 3.2) Q3: How does PERSONAGE-PE compare to PERSONAGE, a psychologically-informed rule-based generator for projecting extreme personality? (Section 3.3) Q4: Does the parameter estimation SNLG method produce natural utterances? (Section 3.4) 3.2 Parameter Estimation Evaluation Table 6 shows that extraversion is the dimension modeled most accurately by the parameter estimation models, producing a .45 correlation with the subjects’ ratings (p < .01). Emotional stability, agreeableness, and openness to experience ratings also correlate strongly with the target scores, with correlations of .39, .36 and .17 respectively (p < .01). Additionally, Table 6 shows that the magnitude of the correlation increases when considering the perception of a hypothetical average subject, i.e. smoothing individual variation by averaging the ratings over all 24 judges, producing a correlation ravg up to .80 for extraversion. These correlations are unexpectedly high; in corpus analyses, significant correlations as low as .05 to .10 are typically observed between personality and linguistic markers (Pennebaker and King, 1999; Mehl et al., 2006). Conscientiousness is the only dimension whose ratings do not correlate with the target scores. The 170 comparison with rule-based results in Section 3.3 suggests that this is not because conscientiousness cannot be exhibited in our domain or manifested in a single utterance, so perhaps this arises from differing perceptions of conscientiousness between the expert and naive judges. 
Trait r ravg e Extraversion .45 • .80 • 1.89 Emotional stability .39 • .64 • 2.14 Agreeableness .36 • .68 • 2.38 Conscientiousness -.01 -.02 2.79 Openness to experience .17 • .41 • 2.51 • statistically significant correlation p < .05, • p = .07 (two-tailed) Table 6: Pearson’s correlation coefficient r and mean absolute error e between the target personality scores and the 480 judges’ ratings (20 ratings per trait for 24 judges); ravg is the correlation between the personality scores and the average judges’ ratings. Table 6 shows that the mean absolute error varies between 1.89 and 2.79 on a scale from 1 to 7. Such large errors result from the decision to ask judges to answer just the TIPI questions for the two traits that were the extreme targets (See Section 3.1), because the judges tend to use the whole scale, with approximately normally distributed ratings. This means that although the judges make distinctions leading to high correlations, they do so on a compressed scale. This explains the large correlations despite the magnitude of the absolute error. Table 7 shows results evaluating whether utterances targeting the extremes of a trait are perceived differently. The ratings differ significantly for all traits but conscientiousness (p ≤.001). Thus parameter estimation models can be used in applications that only require discrete binary variation. Trait Low High Extraversion 3.69 5.06 • Emotional stability 3.75 4.75 • Agreeableness 3.42 4.33 • Conscientiousness 4.16 4.15 Openness to experience 3.71 4.06 • • statistically significant difference p ≤.001 (two-tailed) Table 7: Average personality ratings for the utterances generated with the low and high target values for each trait on a scale from 1 to 7. It is important to emphasize that generation parameters were predicted based on 5 target personality values. Thus, the results show that individual traits are perceived even when utterances project other traits as well, confirming that the Big Five theory models independent dimensions and thus provides a useful and meaningful framework for modeling variation in language. Additionally, although we do not directly evaluate the perception of midrange values of personality target scores, the results suggest that mid-range personality is modeled correctly because the neutral target scores do not affect the perception of extreme traits. 3.3 Comparison with Rule-Based Generation PERSONAGE is a rule-based personality generator based on handcrafted parameter settings derived from psychological studies. Mairesse and Walker (2007) show that this approach generates utterances that are perceptibly different along the extraversion dimension. Table 8 compares the mean ratings of the utterances generated by PERSONAGE-PE with ratings of 20 utterances generated by PERSONAGE for each extreme of each Big Five scale (40 for extraversion, resulting in 240 handcrafted utterances in total). Table 8 shows that the handcrafted parameter settings project a significantly more extreme personality for 6 traits out of 10. However, the learned parameter models for neuroticism, disagreeableness, unconscientiousness and openness to experience do not perform significantly worse than the handcrafted generator. These findings are promising as we discuss further in Section 4. 
Method Rule-based Learned parameters Trait Low High Low High Extraversion 2.96 5.98 3.69 ◦ 5.05 ◦ Emotional stability 3.29 5.96 3.75 4.75 ◦ Agreeableness 3.41 5.66 3.42 4.33 ◦ Conscientiousness 3.71 5.53 4.16 4.15 ◦ Openness to experience 2.89 4.21 3.71 ◦ 4.06 •,◦significant increase or decrease of the variation range over the average rule-based ratings (p < .05, two-tailed) Table 8: Pair-wise comparison between the ratings of the utterances generated using PERSONAGE-PE with extreme target values (Learned Parameters), and the ratings for utterances generated with Mairesse and Walker’s rulebased PERSONAGE generator, (Rule-based). Ratings are averaged over all judges. 3.4 Naturalness Evaluation The naive judges also evaluated the naturalness of the outputs of our trained models. Table 9 shows that the average naturalness is 3.98 out of 7, which is significantly lower (p < .05) than the naturalness of handcrafted and randomly generated utterances reported by Mairesse and Walker (2007). It is possible that the differences arise from judgments of utterances targeting multiple traits, or that the naive 171 judges are more critical. Trait Rule-based Random Learned All 4.59 4.38 3.98 Table 9: Average naturalness ratings for utterances generated using (1) PERSONAGE, the rule-based generator, (2) the random utterances (expert judges) and (3) the outputs of PERSONAGE-PE using the parameter estimation models (Learned, naive judges). The means differ significantly at the p < .05 level (two-tailed independent sample t-test). 4 Conclusion We present a new method for generating linguistic variation projecting multiple personality traits continuously, by combining and extending previous research in statistical natural language generation (Paiva and Evans, 2005; Rambow et al., 2001; Isard et al., 2006; Mairesse and Walker, 2007). While handcrafted rule-based approaches are limited to variation along a small number of discrete points (Hovy, 1988; Walker et al., 1997; Lester et al., 1997; Power et al., 2003; Cassell and Bickmore, 2003; Piwek, 2003; Mairesse and Walker, 2007; Rehm and Andr´e, in press), we learn models that predict parameter values for any arbitrary value on the variation dimension scales. Additionally, our data-driven approach can be applied to any dimension that is meaningful to human judges, and it provides an elegant way to project multiple dimensions simultaneously, by including the relevant dimensions as features of the parameter models’ training data. Isard et al. (2006) and Mairesse and Walker (2007) also propose a personality generation method, in which a data-driven personality model selects the best utterance from a large candidate set. Isard et al.’s technique has not been evaluated, while Mairesse and Walker’s overgenerate and score approach is inefficient. Paiva and Evans’ technique does not overgenerate (2005), but it requires a search for the optimal generation decisions according to the learned models. Our approach does not require any search or overgeneration, as parameter estimation models predict the generation decisions directly from the target variation dimensions. This technique is therefore beneficial for real-time generation. Moreover the variation dimensions of Paiva and Evans’ data-driven technique are extracted from a corpus: there is thus no guarantee that they can be easily interpreted by humans, and that they generalise to other corpora. 
Previous work has shown that modeling the relation between personality and language is far from trivial (Pennebaker and King, 1999; Argamon et al., 2005; Oberlander and Nowson, 2006; Mairesse et al., 2007), suggesting that the control of personality is a harder problem than the control of data-driven variation dimensions. We present the first human perceptual evaluation of a data-driven stylistic variation method. In terms of our research questions in Section 3.1, we show that models trained on expert judges to project multiple traits in a single utterance generate utterances whose personality is recognized by naive judges. There is only one other similar evaluation of an SNLG (Rambow et al., 2001). Our models perform only slightly worse than a handcrafted rule-based generator in the same domain. These findings are promising as (1) parameter estimation models are able to target any combination of traits over the full range of the Big Five scales; (2) they do not benefit from psychological knowledge, i.e. they are trained on randomly generated utterances. This work also has several limitations that should be addressed in future work. Even though the parameters of PERSONAGE-PE were suggested by psychological studies (Mairesse and Walker, 2007), some of them are not modeled successfully by our approach, and thus omitted from Tables 3 and 4. This could be due to the relatively small development dataset size (160 utterances to optimize 67 parameters), or to the implementation of some parameters. The strong parameter-independence assumption could also be responsible, but we are not aware of any state of the art implementation for learning multiple dependent variables, and this approach could further aggravate data sparsity issues. In addition, it is unclear why PERSONAGE performs better for projecting extreme personality and produces more natural utterances, and why PERSONAGE-PE fails to project conscientiousness correctly. It might be possible to improve the parameter estimation models with a larger sample of random utterances at development time, or with additional extreme data generated using the rule-based approach. Such hybrid models are likely to perform better for extreme target scores, as they are trained on more uniformly distributed ratings (e.g. compared to the normal distribution in Figure 1). In addition, we have only shown that personality can be expressed by information presentation speech-acts in the restaurant domain; future work should assess the extent to which the parameters derived from psychological findings are culture, domain, and speech act dependent. 172 References S. Argamon, S. Dhawle, M. Koppel, and J. Pennebaker. Lexical predictors of personality type. In Proceedings of the Joint Annual Meeting of the Interface and the Classification Society of North America, 2005. S. Bangalore and O. Rambow. Exploiting a probabilistic hierarchical model for generation. In Proceedings of the 18th International Conference on Computational Linguistics (COLING), pages 42–48, 2000. A. Belz. Corpus-driven generation of weather forecasts. In Proceedings of the 3rd Corpus Linguistics Conference, 2005. J. Cassell and T. Bickmore. Negotiated collusion: Modeling social language and its relationship effects in intelligent agents. User Modeling and User-Adapted Interaction, 13:89–132, 2003. N. Chambers and J. Allen. Stochastic language generation in a dialogue system: Toward a domain independent generator. In Proceedings 5th SIGdial Workshop on Discourse and Dialogue, 2004. S. D. Gosling, P. J. 
Rentfrow, and W. B. Swann. A very brief measure of the big five personality domains. Journal of Research in Personality, 37:504–528, 2003. E. Hovy. Generating Natural Language under Pragmatic Constraints. Lawrence Erlbaum Associates, 1988. A. Isard, C. Brockmann, and J. Oberlander. Individuality and alignment in generated dialogues. In Proceedings of the 4th International Natural Language Generation Conference (INLG), pages 22–29, 2006. I. Langkilde and K. Knight. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL), pages 704–710, 1998. I. Langkilde-Geary. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proceedings of the 1st International Conference on Natural Language Generation, 2002. M. Lapata and F. Keller. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2:1–31, 2005. J. Lester, S. Converse, S. Kahler, S. Barlow, B. Stone, and R. Bhogal. The persona effect: affective impact of animated pedagogical agents. Proceedings of the SIGCHI conference on Human factors in computing systems, pages 359–366, 1997. F. Mairesse and M. A. Walker. PERSONAGE: Personality generation for dialogue. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 496–503, 2007. F. Mairesse, M. A. Walker, M. R. Mehl, and R. K. Moore. Using linguistic cues for the automatic recognition of personality in conversation and text. Journal of Artificial Intelligence Research (JAIR), 30:457–500, 2007. M. R. Mehl, S. D. Gosling, and J. W. Pennebaker. Personality in its natural habitat: Manifestations and implicit folk theories of personality in daily life. Journal of Personality and Social Psychology, 90:862–877, 2006. C. Nakatsu and M. White. Learning to say it well: Reranking realizations by predicted synthesis quality. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1113–1120, 2006. J. Oberlander and S. Nowson. Whose thumb is it anyway? classifying author personality from weblog text. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), 2006. D. S. Paiva and R. Evans. Empirically-based control of natural language generation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 58–65, 2005. J. W. Pennebaker and L. A. King. Linguistic styles: Language use as an individual difference. Journal of Personality and Social Psychology, 77:1296–1312, 1999. P. Piwek. A flexible pragmatics-driven language generator for animated agents. In Proceedings of Annual Meeting of the European Chapter of the Association for Computational Linguistics (EACL), 2003. R. Power, D. Scott, and N. Bouayad-Agha. Generating texts with style. In Proceedings of the 4th International Conference on Intelligent Text Processing and Computational Linguistics, 2003. O. Rambow, M. Rogati, and M. A. Walker. Evaluating a trainable sentence planner for a spoken dialogue travel system. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL), 2001. M. Rehm and E. Andr´e. From annotated multimodal corpora to simulated human-like behaviors. In I. Wachsmuth and G. Knoblich, editors, Modeling Communication with Robots and Virtual Humans. Springer, Berlin, Heidelberg, in press. A. Stent and H. Guo. 
A new data-driven approach for multimedia presentation generation. In Proc. EuroIMSA, 2005. M. A. Walker, J. E. Cahn, and S. J. Whittaker. Improvising linguistic style: Social and affective bases for agent personality. In Proceedings of the 1st Conference on Autonomous Agents, pages 96–105, 1997. I. H. Witten and E. Frank. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann, San Francisco, CA, 2005. 173
2008
20
Proceedings of ACL-08: HLT, pages 174–182, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Correcting Misuse of Verb Forms John Lee and Stephanie Seneff Spoken Language Systems MIT Computer Science and Artificial Intelligence Laboratory Cambridge, MA 02139, USA {jsylee,seneff}@csail.mit.edu Abstract This paper proposes a method to correct English verb form errors made by non-native speakers. A basic approach is template matching on parse trees. The proposed method improves on this approach in two ways. To improve recall, irregularities in parse trees caused by verb form errors are taken into account; to improve precision, n-gram counts are utilized to filter proposed corrections. Evaluation on non-native corpora, representing two genres and mother tongues, shows promising results. 1 Introduction In order to describe the nuances of an action, a verb may be associated with various concepts such as tense, aspect, voice, mood, person and number. In some languages, such as Chinese, the verb itself is not inflected, and these concepts are expressed via other words in the sentence. In highly inflected languages, such as Turkish, many of these concepts are encoded in the inflection of the verb. In between these extremes, English uses a combination of inflections (see Table 1) and “helping words”, or auxiliaries, to form complex verb phrases. It should come as no surprise, then, that the misuse of verb forms is a common error category for some non-native speakers of English. For example, in the Japanese Learners of English corpus (Izumi et al., 2003), errors related to verbs are among the most frequent categories. Table 2 shows some sentences with these errors. Form Example base (bare) speak base (infinitive) to speak third person singular speaks past spoke -ing participle speaking -ed participle spoken Table 1: Five forms of inflections of English verbs (Quirk et al., 1985), illustrated with the verb “speak”. The base form is also used to construct the infinitive with “to”. An exception is the verb “to be”, which has more forms. A system that automatically detects and corrects misused verb forms would be both an educational and practical tool for students of English. It may also potentially improve the performance of machine translation and natural language generation systems, especially when the source and target languages employ very different verb systems. Research on automatic grammar correction has been conducted on a number of different parts-ofspeech, such as articles (Knight and Chander, 1994) and prepositions (Chodorow et al., 2007). Errors in verb forms have been covered as part of larger systems such as (Heidorn, 2000), but we believe that their specific research challenges warrant more detailed examination. We build on the basic approach of templatematching on parse trees in two ways. To improve recall, irregularities in parse trees caused by verb form errors are considered; to improve precision, n-gram counts are utilized to filter proposed corrections. We start with a discussion on the scope of our 174 task in the next section. We then analyze the specific research issues in §3 and survey previous work in §4. A description of our data follows. Finally, we present experimental results and conclude. 2 Background An English verb can be inflected in five forms (see Table 1). Our goal is to correct confusions among these five forms, as well as the infinitive. 
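A minimal representation of the correction target space in Table 1 (the five inflected forms plus the to-infinitive) might look like the following; the form table covers only the example verb and is not a general morphological analyzer.

```python
# The six verb-form labels targeted for correction (Table 1 plus the
# to-infinitive), illustrated for "speak". A real system would back this
# table with a morphological lexicon; this is only a toy inventory.
FORMS = {
    "speak": {
        "base": "speak",
        "infinitive": "to speak",
        "3sg": "speaks",
        "past": "spoke",
        "ing_participle": "speaking",
        "ed_participle": "spoken",
    }
}

def candidate_corrections(lemma, observed_form):
    """All alternative surface forms a correction could substitute for the
    observed one."""
    forms = FORMS[lemma]
    return sorted({f for f in forms.values() if f != observed_form})

print(candidate_corrections("speak", "speak"))
```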
These confusions can be viewed as symptoms of one of two main underlying categories of errors; roughly speaking, one category is semantic in nature, and the other, syntactic. 2.1 Semantic Errors The first type of error is concerned with inappropriate choices of tense, aspect, voice, or mood. These may be considered errors in semantics. In the sentence below, the verb “live” is expressed in the simple present tense, rather than the perfect progressive: He *lives there since June. (1) Either “has been living” or “had been living” may be the valid correction, depending on the context. If there is no temporal expression, correction of tense and aspect would be even more challenging. Similarly, correcting voice and mood often requires real-world knowledge. Suppose one wants to say “I am prepared for the exam”, but writes “I am preparing for the exam”. Semantic analysis of the context would be required to correct this kind of error, which will not be tackled in this paper1. 1If the input is “I am *prepare for the exam”, however, we will attempt to choose between the two possibilities. Example Usage I take a bath and *reading books. FINITE I can’t *skiing well , but ... BASEmd Why did this *happened? BASEdo But I haven’t *decide where to go. EDperf I don’t want *have a baby. INFverb I have to save my money for *ski. INGprep My son was very *satisfy with ... EDpass I am always *talk to my father. INGprog Table 2: Sentences with verb form errors. The intended usages, shown on the right column, are defined in Table 3. 2.2 Syntactic Errors The second type of error is the misuse of verb forms. Even if the intended tense, aspect, voice and mood are correct, the verb phrase may still be constructed erroneously. This type of error may be further subdivided as follows: Subject-Verb Agreement The verb is not correctly inflected in number and person with respect to the subject. A common error is the confusion between the base form and the third person singular form, e.g., He *have been living there since June. (2) Auxiliary Agreement In addition to the modal auxiliaries, other auxiliaries must be used when specifying the perfective or progressive aspect, or the passive voice. Their use results in a complex verb phrase, i.e., one that consists of two or more verb constituents. Mistakes arise when the main verb does not “agree” with the auxiliary. In the sentence below, the present perfect progressive tense (“has been living”) is intended, but the main verb “live” is mistakenly left in the base form: He has been *live there since June. (3) In general, the auxiliaries can serve as a hint to the intended verb form, even as the auxiliaries “has been” in the above case suggest that the progressive aspect was intended. Complementation A nonfinite clause can serve as complementation to a verb or to a preposition. In the former case, the verb form in the clause is typically an infinitive or an -ing participle; in the latter, it is usually an -ing participle. Here is an example of a wrong choice of verb form in complementation to a verb: He wants *live there. (4) In this sentence, “live”, in its base form, should be modified to its infinitive form as a complementation to the verb “wants”. This paper focuses on correcting the above three error types: subject-verb agreement, auxiliary agreement, and complementation. Table 3 gives a complete list of verb form usages which will be covered. 175 Form Usage Description Example Base Form as BASEmd After modals He may call. May he call? 
Bare Infinitive BASEdo “Do”-support/-periphrasis; He did not call. Did he call? emphatic positive I did call. Base or 3rd person FINITE Simple present or past tense He calls. Base Form as INFverb Verb complementation He wants her to call. to-Infinitive -ing INGprog Progressive aspect He was calling. Was he calling? participle INGverb Verb complementation He hated calling. INGprep Prepositional complementation The device is designed for calling -ed EDperf Perfect aspect He has called. Has he called? participle EDpass Passive voice He was called. Was he called? Table 3: Usage of various verb forms. In the examples, the italized verbs are the “targets” for correction. In complementations, the main verbs or prepositions are bolded; in all other cases, the auxiliaries are bolded. 3 Research Issues One strategy for correcting verb form errors is to identify the intended syntactic relationships between the verb in question and its neighbors. For subjectverb agreement, the subject of the verb is obviously crucial (e.g., “he” in (2)); the auxiliary is relevant for resolving auxiliary agreement (e.g., “has been” in (3)); determining the verb that receives the complementation is necessary for detecting any complementation errors (e.g., “wants” in (4)). Once these items are identified, most verb form errors may be corrected in a rather straightforward manner. The success of this strategy, then, hinges on accurate identification of these items, for example, from parse trees. Ambiguities will need to be resolved, leading to two research issues (§3.2 and §3.3). 3.1 Ambiguities The three so-called primary verbs, “have”, “do” and “be”, can serve as either main or auxiliary verbs. The verb “be” can be utilized as a main verb, but also as an auxiliary in the progressive aspect (INGprog in Table 3) or the passive voice (EDpass). The three examples below illustrate these possibilities: This is work not play. (main verb) My father is working in the lab. (INGprog) A solution is worked out. (EDpass) These different roles clearly affect the forms required for the verbs (if any) that follow. Disambiguation among these roles is usually straightforward because of the different verb forms (e.g., “working” vs. “worked”). If the verb forms are incorrect, disambiguation is made more difficult: This is work not play. My father is *work in the lab. A solution is *work out. Similar ambiguities are introduced by the other primary verbs2. The verb “have” can function as an auxiliary in the perfect aspect (EDperf) as well as a main verb. The versatile “do” can serve as “do”support or add emphasis (BASEdo), or simply act as a main verb. 3.2 Automatic Parsing The ambiguities discussed above may be expected to cause degradation in automatic parsing performance. In other words, sentences containing verb form errors are more likely to yield an “incorrect” parse tree, sometimes with significant differences. For example, the sentence “My father is *work in the laboratory” is parsed (Collins, 1997) as: (S (NP My father) (VP is (NP work)) (PP in the laboratory)) 2The abbreviations ’s (is or has) and ’d (would or had) compound the ambiguities. 176 The progressive form “working” is substituted with its bare form, which happens to be also a noun. The parser, not unreasonably, identifies “work” as a noun. Correcting the verb form error in this sentence, then, necessitates considering the noun that is apparently a copular complementation. Anecdotal observations like this suggest that one cannot use parser output naively3. 
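The kind of irregular tree shown above can be matched mechanically. The following sketch uses NLTK's Tree class to flag the "be + bare NP" configuration produced for "My father is *work in the laboratory" and to propose the noun head as the verb to restore; it illustrates a single pattern rather than the full catalog developed in Section 6, and the "+ing" morphology shown is only for display.

```python
# Detect one "disturbed" configuration: a VP whose head is a form of "be"
# followed by a determinerless NP, as produced for "My father is *work ...".
# The NP head is then a candidate for correction to an -ing participle.
from nltk import Tree

BE_FORMS = {"is", "are", "was", "were", "be", "been", "being", "am"}

def be_plus_bare_np(tree):
    """Yield (be_form, noun) pairs for VPs of the shape (VP be (NP noun))."""
    for vp in tree.subtrees(lambda t: t.label() == "VP"):
        kids = list(vp)
        if (len(kids) >= 2 and isinstance(kids[0], str)
                and kids[0].lower() in BE_FORMS
                and isinstance(kids[1], Tree) and kids[1].label() == "NP"
                and len(kids[1]) == 1 and isinstance(kids[1][0], str)):
            yield kids[0], kids[1][0]

parse = Tree.fromstring(
    "(S (NP My father) (VP is (NP work)) (PP in the laboratory))")
for be, noun in be_plus_bare_np(parse):
    print(f"suspect: '{be} {noun}' -> propose '{be} {noun}ing'")
```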
We will show that some of the irregularities caused by verb form errors are consistent and can be taken into account. One goal of this paper is to recognize irregularities in parse trees caused by verb form errors, in order to increase recall. 3.3 Overgeneralization One potential consequence of allowing for irregularities in parse tree patterns is overgeneralization. For example, to allow for the “parse error” in §3.2 and to retrieve the word “work”, every determinerless noun would potentially be turned into an -ing participle. This would clearly result in many invalid corrections. We propose using n-gram counts as a filter to counter this kind of overgeneralization. A second goal is to show that n-gram counts can effectively serve as a filter, in order to increase precision. 4 Previous Research This section discusses previous research on processing verb form errors, and contrasts verb form errors with those of the other parts-of-speech. 4.1 Verb Forms Detection and correction of grammatical errors, including verb forms, have been explored in various applications. Hand-crafted error production rules (or “mal-rules”), augmenting a context-free grammar, are designed for a writing tutor aimed at deaf students (Michaud et al., 2000). Similar strategies with parse trees are pursued in (Bender et al., 2004), and error templates are utilized in (Heidorn, 2000) for a word processor. Carefully hand-crafted rules, when used alone, tend to yield high precision; they 3According to a study on parsing ungrammatical sentences (Foster, 2007), subject-verb and determiner-noun agreement errors can lower the F-score of a state-of-the-art probabilistic parser by 1.4%, and context-sensitive spelling errors (not verbs specifically), by 6%. may, however, be less equipped to detect verb form errors within a perfectly grammatical sentence, such as the example given in §3.2. An approach combining a hand-crafted contextfree grammar and stochastic probabilities is pursued in (Lee and Seneff, 2006), but it is designed for a restricted domain only. A maximum entropy model, using lexical and POS features, is trained in (Izumi et al., 2003) to recognize a variety of errors. It achieves 55% precision and 23% recall overall, on evaluation data that partially overlap with those of the present paper. Unfortunately, results on verb form errors are not reported separately, and comparison with our approach is therefore impossible. 4.2 Other Parts-of-speech Automatic error detection has been performed on other parts-of-speech, e.g., articles (Knight and Chander, 1994) and prepositions (Chodorow et al., 2007). The research issues with these parts-ofspeech, however, are quite distinct. Relative to verb forms, errors in these categories do not “disturb” the parse tree as much. The process of feature extraction is thus relatively simple. 5 Data 5.1 Development Data To investigate irregularities in parse tree patterns (see §3.2), we utilized the AQUAINT Corpus of English News Text. After parsing the corpus (Collins, 1997), we artificially introduced verb form errors into these sentences, and observed the resulting “disturbances” to the parse trees. For disambiguation with n-grams (see §3.3), we made use of the WEB 1T 5-GRAM corpus. Prepared by Google Inc., it contains English n-grams, up to 5-grams, with their observed frequency counts from a large number of web pages. 5.2 Evaluation Data Two corpora were used for evaluation. They were selected to represent two different genres, and two different mother tongues. 
JLE (Japanese Learners of English corpus) This corpus is based on interviews for the Standard Speaking Test, an English-language proficiency test conducted in Japan (Izumi et al., 177 Input Hypothesized Correction None Valid Invalid w/ errors false neg true pos inv pos w/o errors true neg false pos Table 4: Possible outcomes of a hypothesized correction. 2003). For 167 of the transcribed interviews, totalling 15,637 sentences4, grammatical errors were annotated and their corrections provided. By retaining the verb form errors5, but correcting all other error types, we generated a test set in which 477 sentences (3.1%) contain subjectverb agreement errors, and 238 (1.5%) contain auxiliary agreement and complementation errors. HKUST This corpus6 of short essays was collected from students, all native Chinese speakers, at the Hong Kong University of Science and Technology. It contains a total of 2556 sentences. They tend to be longer and have more complex structures than their counterparts in the JLE. Corrections are not provided; however, part-of-speech tags are given for the original words, and for the intended (but unwritten) corrections. Implications on our evaluation procedure are discussed in §5.4. 5.3 Evaluation Metric For each verb in the input sentence, a change in verb form may be hypothesized. There are five possible outcomes for this hypothesis, as enumerated in Table 4. To penalize “false alarms”, a strict definition is used for false positives — even when the hypothesized correction yields a good sentence, it is still considered a false positive so long as the original sentence is acceptable. It can sometimes be difficult to determine which words should be considered verbs, as they are not 4Obtained by segmenting (Reynar and Ratnaparkhi, 1997) the interviewee turns, and discarding sentences with only one word. The HKUST corpus was processed likewise. 5Specifically, those tagged with the “v fml”, “v fin” (covering auxiliary agreement and complementation) and “v agr” (subject-verb agreement) types; those with semantic errors (see §2.1), i.e. “v tns” (tense), are excluded. 6Provided by Prof. John Milton, personal communication. clearly demarcated in our evaluation corpora. We will thus apply the outcomes in Table 4 at the sentence level; that is, the output sentence is considered a true positive only if the original sentence contains errors, and only if valid corrections are offered for all errors. The following statistics are computed: Accuracy The proportion of sentences which, after being treated by the system, have correct verb forms. That is, (true neg + true pos) divided by the total number of sentences. Recall Out of all sentences with verb form errors, the percentage whose errors have been successfully corrected by the system. That is, true pos divided by (true pos+false neg +inv pos). Detection Precision This is the first of two types of precision to be reported, and is defined as follows: Out of all sentences for which the system has hypothesized corrections, the percentage that actually contain errors, without regard to the validity of the corrections. That is, (true pos + inv pos) divided by (true pos + inv pos + false pos). Correction Precision This is the more stringent type of precision. In addition to successfully determining that a correction is needed, the system must offer a valid correction. Formally, it is true pos divided by (true pos + false pos + inv pos). 5.4 Evaluation Procedure For the JLE corpus, all figures above will be reported. 
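For concreteness, the figures defined in Section 5.3 follow directly from sentence-level counts of the Table 4 outcomes; the sketch below uses made-up counts purely to demonstrate the formulas.

```python
# Sentence-level evaluation metrics from the outcome counts of Table 4.
# The example counts are arbitrary and only demonstrate the formulas.
def metrics(true_pos, inv_pos, false_pos, false_neg, true_neg):
    total = true_pos + inv_pos + false_pos + false_neg + true_neg
    accuracy = (true_pos + true_neg) / total
    recall = true_pos / (true_pos + false_neg + inv_pos)
    detection_precision = (true_pos + inv_pos) / (true_pos + inv_pos + false_pos)
    correction_precision = true_pos / (true_pos + false_pos + inv_pos)
    return accuracy, recall, detection_precision, correction_precision

acc, rec, det_p, cor_p = metrics(true_pos=100, inv_pos=20, false_pos=25,
                                 false_neg=115, true_neg=15000)
print(f"accuracy={acc:.4f} recall={rec:.4f} "
      f"detection P={det_p:.4f} correction P={cor_p:.4f}")
```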
The HKUST corpus, however, will not be evaluated on subject-verb agreement, since a sizable number of these errors are induced by other changes in the sentence7. Furthermore, the HKUST corpus will require manual evaluation, since the corrections are not annotated. Two native speakers of English were given the edited sentences, as well as the original input. For each pair, they were asked to select one of four statements: one of the two is better, or both are equally correct, or both are equally incorrect. The 7e.g., the subject of the verb needs to be changed from singular to plural. 178 Expected Tree {⟨usage⟩,...} Tree disturbed by substitution [⟨crr⟩→⟨err⟩] {INGprog,EDpass} A dog is [sleeping→sleep]. I’m [living→live] in XXX city. VP be VP crr/{VBG,VBN} VP be NP err/NN VP be ADJP err/JJ {INGverb,INFverb} I like [skiing→ski] very much; She likes to [go→going] around VP */V SG VP crr/{VBG,TO} ... VP */V NP err/NN VP */V PP to/TO SG VP err/VBG INGprep I lived in France for [studying→study] French language. PP */IN SG VP crr/VBG ... PP */IN NP err/NN Table 5: Effects of incorrect verb forms on parse trees. The left column shows trees normally expected for the indicated usages (see Table 3). The right column shows the resulting trees when the correct verb form ⟨crr⟩is replaced by ⟨err⟩. Detailed comments are provided in §6.1. correction precision is thus the proportion of pairs where the edited sentence is deemed better. Accuracy and recall cannot be computed, since it was impossible to distinguish syntactic errors from semantic ones (see §2). 5.5 Baselines Since the vast majority of verbs are in their correct forms, the majority baseline is to propose no correction. Although trivial, it is a surprisingly strong baseline, achieving more than 98% for auxiliary agreement and complementation in JLE, and just shy of 97% for subject-verb agreement. For auxiliary agreement and complementation, the verb-only baseline is also reported. It attempts corrections only when the word in question is actually tagged as a verb. That is, it ignores the spurious noun- and adjectival phrases in the parse tree discussed in §3.2, and relies only on the output of the part-of-speech tagger. 6 Experiments Corresponding to the issues discussed in §3.2 and §3.3, our experiment consists of two main steps. 6.1 Derivation of Tree Patterns Based on (Quirk et al., 1985), we observed tree patterns for a set of verb form usages, as summarized in Table 3. Using these patterns, we introduced verb form errors into AQUAINT, then re-parsed the corpus (Collins, 1997), and compiled the changes in the “disturbed” trees into a catalog. 179 N-gram Example be {INGprog, The dog is sleeping. EDpass} ∗ The door is open. verb {INGverb, I need to do this. INFverb} ∗ I need beef for the curry. verb1 *ing enjoy reading and and {INGverb, going to pachinko INFverb} go shopping and have dinner prep for studying French language {INGprep} ∗ a class for sign language have I have rented a video {EDperf} * I have lunch in Ginza Table 6: The n-grams used for filtering, with examples of sentences which they are intended to differentiate. The hypothesized usages (shown in the curly brackets) as well as the original verb form, are considered. For example, the first sentence is originally “The dog is *sleep.” The three trigrams “is sleeping .”, “is slept .” and “is sleep .” are compared; the first trigram has the highest count, and the correction “sleeping” is therefore applied. A portion of this catalog8 is shown in Table 5. 
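Each catalog pattern licenses candidate corrections, which are then filtered by n-gram counts as proposed in Section 3.3 and detailed in Section 6.2 below. A schematic version of the filter is sketched here; web1t_count is a hypothetical lookup with toy values standing in for queries against the WEB 1T 5-GRAM counts, mirroring the "is sleeping / is slept / is sleep" example of Table 6.

```python
# Schematic n-gram filter: a proposed verb-form correction is accepted only if
# its n-gram is observed more often than the original's. `web1t_count` is a
# placeholder for a lookup into the WEB 1T 5-GRAM counts, with toy values.
TOY_COUNTS = {
    ("is", "sleeping", "."): 100000,
    ("is", "slept", "."): 2000,
    ("is", "sleep", "."): 1500,
}

def web1t_count(ngram):
    """Hypothetical count lookup; a real system would query the corpus files."""
    return TOY_COUNTS.get(ngram, 0)

def best_form(context_left, candidates, original, context_right):
    """Return the candidate with the highest n-gram count, but only if it beats
    the original form's count; otherwise keep the original."""
    def count(form):
        return web1t_count((context_left, form, context_right))
    best = max(candidates, key=count)
    return best if count(best) > count(original) else original

# "The dog is *sleep ." -> compare "is sleeping .", "is slept .", "is sleep ."
print(best_form("is", ["sleeping", "slept"], "sleep", "."))   # -> sleeping
```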
Comments on {INGprog,EDpass} can be found in §3.2. Two cases are shown for {INGverb,INFverb}. In the first case, an -ing participle in verb complementation is reduced to its base form, resulting in a noun phrase. In the second, an infinitive is constructed with the -ing participle rather than the base form, causing “to” to be misconstrued as a preposition. Finally, in INGprep, an -ing participle in preposition complementation is reduced to its base form, and is subsumed in a noun phrase. 6.2 Disambiguation with N-grams The tree patterns derived from the previous step may be considered as the “necessary” conditions for proposing a change in verb forms. They are not “sufficient”, however, since they tend to be overly general. Indiscriminate application of these patterns on AQUAINT would result in false positives for 46.4% of the sentences. For those categories with a high rate of false positives (all except BASEmd, BASEdo and FINITE), we utilized n-grams as filters, allowing a correction only when its n-gram count in the WEB 1T 5-GRAM 8Due to space constraints, only those trees with significant changes above the leaf level are shown. Hyp. False Hypothesized False Usage Pos. Usage Pos. BASEmd 16.2% {INGverb,INFverb} 33.9% BASEdo 0.9% {INGprog,EDpass} 21.0% FINITE 12.8% INGprep 13.7% EDperf 1.4% Table 7: The distribution of false positives in AQUAINT. The total number of false positives is 994, represents less than 1% of the 100,000 sentences drawn from the corpus. corpus is greater than that of the original. The filtering step reduced false positives from 46.4% to less than 1%. Table 6 shows the n-grams, and Table 7 provides a breakdown of false positives in AQUAINT after n-gram filtering. 6.3 Results for Subject-Verb Agreement In JLE, the accuracy of subject-verb agreement error correction is 98.93%. Compared to the majority baseline of 96.95%, the improvement is statistically significant9. Recall is 80.92%; detection precision is 83.93%, and correction precision is 81.61%. Most mistakes are caused by misidentified subjects. Some wh-questions prove to be especially difficult, perhaps due to their relative infrequency in newswire texts, on which the parser is trained. One example is the question “How much extra time does the local train *takes?”. The word “does” is not recognized as a “do”-support, and so the verb “take” was mistakenly turned into a third person form to agree with “train”. 6.4 Results for Auxiliary Agreement & Complementation Table 8 summarizes the results for auxiliary agreement and complementation, and Table 2 shows some examples of real sentences corrected by the system. Our proposed method yields 98.94% accuracy. It is a statistically significant improvement over the majority baseline (98.47%), although not significant over the verb-only baseline10 (98.85%), perhaps a reflection of the small number of test sentences with verb form errors. The Kappa statistic for the man9p < 0.005 according to McNemar’s test. 10With p = 1∗10−10 and p = 0.038, respectively, according to McNemar’s test 180 Corpus Method Accuracy Precision Precision Recall (correction) (detection) JLE verb-only 98.85% 71.43% 84.75% 31.51% all 98.94% 68.00% 80.67% 42.86% HKUST all not available 71.71% not available Table 8: Results on the JLE and HKUST corpora for auxiliary agreement and complementation. The majority baseline accuracy is 98.47% for JLE. The verb-only baseline accuracy is 98.85%, as indicated on the second row. “All” denotes the complete proposed method. See §6.4 for detailed comments. 
Usage JLE HKUST Count (Prec.) Count (Prec.) BASEmd 13 (92.3%) 25 (80.0%) BASEdo 5 (100%) 0 FINITE 9 (55.6%) 0 EDperf 11 (90.9%) 3 (66.7%) {INGprog,EDpass} 54 (58.6%) 30 (70.0%) {INGverb,INFverb} 45 (60.0%) 16 (59.4%) INGprep 10 (60.0%) 2 (100%) Table 9: Correction precision of individual correction patterns (see Table 5) on the JLE and HKUST corpus. ual evaluation of HKUST is 0.76, corresponding to “substantial agreement” between the two evaluators (Landis and Koch, 1977). The correction precisions for the JLE and HKUST corpora are comparable. Our analysis will focus on {INGprog,EDpass} and {INGverb,INFverb}, two categories with relatively numerous correction attempts and low precisions, as shown in Table 9. For {INGprog,EDpass}, many invalid corrections are due to wrong predictions of voice, which involve semantic choices (see §2.1). For example, the sentence “... the main duty is study well” is edited to “... the main duty is studied well”, a grammatical sentence but semantically unlikely. For {INGverb,INFverb}, a substantial portion of the false positives are valid, but unnecessary, corrections. For example, there is no need to turn “I like cooking” into “I like to cook”, as the original is perfectly acceptable. Some kind of confidence measure on the n-gram counts might be appropriate for reducing such false alarms. Characteristics of speech transcripts pose some further problems. First, colloquial expressions, such as the word “like”, can be tricky to process. In the question “Can you like give me the money back”, “like” is misconstrued to be the main verb, and “give” is turned into an infinitive, resulting in “Can you like *to give me the money back”. Second, there are quite a few incomplete sentences that lack subjects for the verbs. No correction is attempted on them. Also left uncorrected are misused forms in nonfinite clauses that describe a noun. These are typically base forms that should be replaced with -ing participles, as in “The girl *wear a purple skiwear is a student of this ski school”. Efforts to detect this kind of error had resulted in a large number of false alarms. Recall is further affected by cases where a verb is separated from its auxiliary or main verb by many words, often with conjunctions and other verbs in between. One example is the sentence “I used to climb up the orange trees and *catching insects”. The word “catching” should be an infinitive complementing “used”, but is placed within a noun phrase together with “trees” and “insects”. 7 Conclusion We have presented a method for correcting verb form errors. We investigated the ways in which verb form errors affect parse trees. When allowed for, these unusual tree patterns can expand correction coverage, but also tend to result in overgeneration of hypothesized corrections. N-grams have been shown to be an effective filter for this problem. 8 Acknowledgments We thank Prof. John Milton for the HKUST corpus, Tom Lee and Ken Schutte for their assistance with the evaluation, and the anonymous reviewers for their helpful feedback. 181 References E. Bender, D. Flickinger, S. Oepen, A. Walsh, and T. Baldwin. 2004. Arboretum: Using a Precision Grammar for Grammar Checking in CALL. Proc. InSTIL/ICALL Symposium on Computer Assisted Learning. M. Chodorow, J. R. Tetreault, and N.-R. Han. 2007. Detection of Grammatical Errors Involving Prepositions. In Proc. ACL-SIGSEM Workshop on Prepositions. Prague, Czech Republic. M. Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. Proc. ACL. J. Foster. 
2007. Treebanks Gone Bad: Generating a Treebank of Ungrammatical English. In Proc. IJCAI Workshop on Analytics for Noisy Unstructured Data. Hyderabad, India. G. Heidorn. 2000. Intelligent Writing Assistance. Handbook of Natural Language Processing. Robert Dale, Hermann Moisi and Harold Somers (ed.). Marcel Dekker, Inc. E. Izumi, K. Uchimoto, T. Saiga, T. Supnithi, and H. Isahara. 2003. Automatic Error Detection in the Japanese Learner’s English Spoken Data. In Companion Volume to Proc. ACL. Sapporo, Japan. K. Knight and I. Chander. 1994. Automated Postediting of Documents. In Proc. AAAI. Seattle, WA. J. R. Landis and G. G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. Biometrics 33(1):159–174. L. Michaud, K. McCoy and C. Pennington. 2000. An Intelligent Tutoring System for Deaf Learners of Written English. Proc. 4th International ACM Conference on Assistive Technologies. J. Lee and S. Seneff. 2006. Automatic Grammar Correction for Second-Language Learners. In Proc. Interspeech. Pittsburgh, PA. J. C. Reynar and A. Ratnaparkhi. 1997. A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proc. 5th Conference on Applied Natural Language Processing. Washington, D.C. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, New York. 182
2008
21
Proceedings of ACL-08: HLT, pages 183–191, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Hypertagging: Supertagging for Surface Realization with CCG Dominic Espinosa and Michael White and Dennis Mehay Department of Linguistics The Ohio State University Columbus, OH, USA {espinosa,mwhite,mehay}@ling.osu.edu Abstract In lexicalized grammatical formalisms, it is possible to separate lexical category assignment from the combinatory processes that make use of such categories, such as parsing and realization. We adapt techniques from supertagging — a relatively recent technique that performs complex lexical tagging before full parsing (Bangalore and Joshi, 1999; Clark, 2002) — for chart realization in OpenCCG, an open-source NLP toolkit for CCG. We call this approach hypertagging, as it operates at a level “above” the syntax, tagging semantic representations with syntactic lexical categories. Our results demonstrate that a hypertagger-informed chart realizer can achieve substantial improvements in realization speed (being approximately twice as fast) with superior realization quality. 1 Introduction In lexicalized grammatical formalisms such as Lexicalized Tree Adjoining Grammar (Schabes et al., 1988, LTAG), Combinatory Categorial Grammar (Steedman, 2000, CCG) and Head-Driven PhraseStructure Grammar (Pollard and Sag, 1994, HPSG), it is possible to separate lexical category assignment — the assignment of informative syntactic categories to linguistic objects such as words or lexical predicates — from the combinatory processes that make use of such categories — such as parsing and surface realization. One way of performing lexical assignment is simply to hypothesize all possible lexical categories and then search for the best combination thereof, as in the CCG parser in (Hockenmaier, 2003) or the chart realizer in (Carroll and Oepen, 2005). A relatively recent technique for lexical category assignment is supertagging (Bangalore and Joshi, 1999), a preprocessing step to parsing that assigns likely categories based on word and part-ofspeech (POS) contextual information. Supertagging was dubbed “almost parsing” by these authors, because an oracle supertagger left relatively little work for their parser, while speeding up parse times considerably. Supertagging has been more recently extended to a multitagging paradigm in CCG (Clark, 2002; Curran et al., 2006), leading to extremely efficient parsing with state-of-the-art dependency recovery (Clark and Curran, 2007). We have adapted this multitagging approach to lexical category assignment for realization using the CCG-based natural language toolkit OpenCCG.1 Instead of basing category assignment on linear word and POS context, however, we predict lexical categories based on contexts within a directed graph structure representing the logical form (LF) of a proposition to be realized. Assigned categories are instantiated in OpenCCG’s chart realizer where, together with a treebank-derived syntactic grammar (Hockenmaier and Steedman, 2007) and a factored language model (Bilmes and Kirchhoff, 2003), they constrain the English word-strings that are chosen to express the LF. We have dubbed this approach hypertagging, as it operates at a level “above” the syntax, moving from semantic representations to syntactic categories. We evaluate this hypertagger in two ways: first, 1http://openccg.sourceforge.net. 
183 we evaluate it as a tagger, where the hypertagger achieves high single-best (93.6%) and multitagging labelling accuracies (95.8–99.4% with category per lexical predication ratios ranging from 1.1 to 3.9).2 Second, we compare a hypertagger-augmented version of OpenCCG’s chart realizer with the preexisting chart realizer (White et al., 2007) that simply instantiates the chart with all possible CCG categories (subject to frequency cutoffs) for each input LF predicate. The hypertagger-seeded realizer runs approximately twice as fast as the pre-existing OpenCCG realizer and finds a larger number of complete realizations, resorting less to chart fragment assembly in order to produce an output within a 15 second per-sentence time limit. Moreover, the overall BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007) scores, as well as numbers of exact string matches (as measured against to the original sentences in the CCGbank) are higher for the hypertagger-seeded realizer than for the preexisting realizer. This paper is structured as follows: Section 2 provides background on chart realization in OpenCCG using a corpus-derived grammar. Section 3 describes our hypertagging approach and how it is integrated into the realizer. Section 4 describes our results, followed by related work in Section 5 and our conclusions in Section 6. 2 Background 2.1 Surface Realization with OpenCCG The OpenCCG surface realizer is based on Steedman’s (2000) version of CCG elaborated with Baldridge and Kruijff’s multi-modal extensions for lexically specified derivation control (Baldridge, 2002; Baldridge and Kruijff, 2003) and hybrid logic dependency semantics (Baldridge and Kruijff, 2002). OpenCCG implements a symbolic-statistical chart realization algorithm (Kay, 1996; Carroll et al., 1999; White, 2006b) combining (1) a theoretically grounded approach to syntax and semantic composition with (2) factored language models (Bilmes and Kirchhoff, 2003) for making choices among the options left open by the grammar. In OpenCCG, the search for complete realizations 2Note that the multitagger is “correct” if the correct tag is anywhere in the multitag set. he h2 a a1 he h3 <Det> <Arg0> <Arg1> <TENSE>pres <NUM>sg <Arg0> w1 want.01 m1 <Arg1> <GenRel> <Arg1> <TENSE>pres p1 point h1 have.03 make.03 Figure 1: Semantic dependency graph from the CCGbank for He has a point he wants to make [...] makes use of n-gram language models over words represented as vectors of factors, including surface form, part of speech, supertag and semantic class. The search proceeds in one of two modes, anytime or two-stage (packing/unpacking). In the anytime mode, a best-first search is performed with a configurable time limit: the scores assigned by the ngram model determine the order of the edges on the agenda, and thus have an impact on realization speed. In the two-stage mode, a packed forest of all possible realizations is created in the first stage; in the second stage, the packed representation is unpacked in bottom-up fashion, with scores assigned to the edge for each sign as it is unpacked, much as in (Langkilde, 2000). Edges are grouped into equivalence classes when they have the same syntactic category and cover the same parts of the input logical form. Pruning takes place within equivalence classes of edges. Additionally, to realize a wide range of paraphrases, OpenCCG implements an algorithm for efficiently generating from disjunctive logical forms (White, 2006a). 
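The chart bookkeeping just described can be pictured as follows: edges that share a syntactic category and cover the same elementary predications form one equivalence class, and pruning keeps only the best-scoring edges within each class. The Edge record and the scores below are simplified stand-ins for OpenCCG's internal data structures.

```python
# Simplified illustration of equivalence-class pruning in a packed chart:
# edges with the same category and the same semantic coverage (a set of
# covered input predications) compete, and only the top-k per class survive.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Edge:                      # stand-in for an OpenCCG chart edge
    words: str
    category: str
    coverage: frozenset          # indices of covered elementary predications
    score: float                 # n-gram model score

def prune(edges, k=3):
    classes = defaultdict(list)
    for e in edges:
        classes[(e.category, e.coverage)].append(e)
    kept = []
    for group in classes.values():
        kept.extend(sorted(group, key=lambda e: e.score, reverse=True)[:k])
    return kept

edges = [
    Edge("he wants to make a point", "s[dcl]", frozenset({0, 1, 2, 3}), -12.3),
    Edge("he wants to make point",   "s[dcl]", frozenset({0, 1, 2, 3}), -15.1),
    Edge("a point",                  "np",     frozenset({3}),          -4.2),
]
print([e.words for e in prune(edges, k=1)])
```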
To illustrate the input to OpenCCG, consider the semantic dependency graph in Figure 1, which is taken from section 00 of a Propbank-enhanced version of the CCGbank (Boxwell and White, 2008). In the graph, each node has a lexical predication (e.g. make.03) and a set of semantic features (e.g. ⟨NUM⟩sg); nodes are connected via dependency relations (e.g. ⟨ARG0⟩). Internally, such 184 graphs are represented using Hybrid Logic Dependency Semantics (HLDS), a dependency-based approach to representing linguistic meaning developed by Baldridge and Kruijff (2002). In HLDS, hybrid logic (Blackburn, 2000) terms are used to describe dependency graphs. These graphs have been suggested as representations for discourse structure, and have their own underlying semantics (White, 2006b). To more robustly support broad coverage surface realization, OpenCCG has recently been enhanced to greedily assemble fragments in the event that the realizer fails to find a complete realization. The fragment assembly algorithm begins with the edge for the best partial realization, i.e. the one that covers the most elementary predications in the input logical form, with ties broken according to the n-gram score. (Larger fragments are preferred under the assumption that they are more likely to be grammatical.) Next, the chart and agenda are greedily searched for the best edge whose semantic coverage is disjoint from those selected so far; this process repeats until no further edges can be added to the set of selected fragments. In the final step, these fragments are concatenated, again in a greedy fashion, this time according to the n-gram score of the concatenated edges: starting with the original best edge, the fragment whose concatenation on the left or right side yields the highest score is chosen as the one to concatenate next, until all the fragments have been concatenated into a single output. 2.2 Realization from an Enhanced CCGbank White et al. (2007) describe an ongoing effort to engineer a grammar from the CCGbank (Hockenmaier and Steedman, 2007) — a corpus of CCG derivations derived from the Penn Treebank — suitable for realization with OpenCCG. This process involves converting the corpus to reflect more precise analyses, where feasible, and adding semantic representations to the lexical categories. In the first step, the derivations in the CCGbank are revised to reflect the desired syntactic derivations. Changes to the derivations are necessary to reflect the lexicalized treatment of coordination and punctuation assumed by the multi-modal version of CCG that is implemented in OpenCCG. Further changes are necessary to support semantic dependencies rather than surface syntactic ones; in particular, the features and unification constraints in the categories related to semantically empty function words such complementizers, infinitival-to, expletive subjects, and case-marking prepositions are adjusted to reflect their purely syntactic status. In the second step, a grammar is extracted from the converted CCGbank and augmented with logical forms. Categories and unary type changing rules (corresponding to zero morphemes) are sorted by frequency and extracted if they meet the specified frequency thresholds. A separate transformation then uses around two dozen generalized templates to add logical forms to the categories, in a fashion reminiscent of (Bos, 2005). The effect of this transformation is illustrated below. 
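The greedy fragment assembly described in Section 2.1 above can be sketched as follows; the coverage sets, edge scores and the toy concatenation scorer are simplified placeholders for the realizer's actual n-gram scoring.

```python
# Schematic greedy fragment assembly: start from the partial realization with
# the largest semantic coverage (ties broken by score), repeatedly add the best
# edge whose coverage is disjoint from what is covered so far, then greedily
# concatenate the chosen fragments by (placeholder) n-gram score.
def assemble(edges, ngram_score):
    # edges: list of (words, coverage_set, score)
    pool = sorted(edges, key=lambda e: (len(e[1]), e[2]), reverse=True)
    selected = [pool[0]]
    covered = set(pool[0][1])
    for words, cov, score in pool[1:]:
        if cov.isdisjoint(covered):            # only semantically disjoint edges
            selected.append((words, cov, score))
            covered |= cov
    # greedy concatenation: attach the fragment that scores best on either side
    result = selected[0][0]
    remaining = [w for w, _, _ in selected[1:]]
    while remaining:
        options = [(ngram_score(f"{frag} {result}"), f"{frag} {result}", frag)
                   for frag in remaining]
        options += [(ngram_score(f"{result} {frag}"), f"{result} {frag}", frag)
                    for frag in remaining]
        _, result, used = max(options, key=lambda o: o[0])
        remaining.remove(used)
    return result

# toy scorer that rewards the bigram "wants to"; the real system uses the
# factored language model over words, POS tags and supertags
print(assemble([("he wants", {0, 1}, -3.0), ("to make a point", {2, 3}, -5.0)],
               ngram_score=lambda s: s.count("wants to")))
```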
Example (1) shows how numbered semantic roles, taken from PropBank (Palmer et al., 2005) when available, are added to the category of an active voice, past tense transitive verb, where *pred* is a placeholder for the lexical predicate; examples (2) and (3) show how more specific relations are introduced in the category for determiners and the category for the possessive ’s, respectively. (1) s1:dcl\np2/np3 =⇒ s1:dcl,x1\np2:x2/np3:x3 : @x1(*pred* ∧ ⟨TENSE⟩pres ∧⟨ARG0⟩x2 ∧⟨ARG1⟩x3) (2) np1/n1 =⇒ np1:x1/n1:x1 : @x1(⟨DET⟩(d ∧*pred*)) (3) np1/n1\np2 =⇒ np1:x1/n1:x1\np2:x2 : @x1(⟨GENOWN⟩x2) After logical form insertion, the extracted and augmented grammar is loaded and used to parse the sentences in the CCGbank according to the goldstandard derivation. If the derivation can be successfully followed, the parse yields a logical form which is saved along with the corpus sentence in order to later test the realizer. The algorithm for following corpus derivations attempts to continue processing if it encounters a blocked derivation due to sentenceinternal punctuation. While punctuation has been partially reanalyzed to use lexical categories, many problem cases remain due to the CCGbank’s reliance on punctuation-specific binary rules that are not supported in OpenCCG. 185 Currently, the algorithm succeeds in creating logical forms for 97.7% of the sentences in the development section (Sect. 00) of the converted CCGbank, and 96.1% of the sentences in the test section (Sect. 23). Of these, 76.6% of the development logical forms are semantic dependency graphs with a single root, while 76.7% of the test logical forms have a single root. The remaining cases, with multiple roots, are missing one or more dependencies required to form a fully connected graph. These missing dependencies usually reflect inadequacies in the current logical form templates. 2.3 Factored Language Models Following White et al. (2007), we use factored trigram models over words, part-of-speech tags and supertags to score partial and complete realizations. The language models were created using the SRILM toolkit (Stolcke, 2002) on the standard training sections (2–21) of the CCGbank, with sentenceinitial words (other than proper names) uncapitalized. While these models are considerably smaller than the ones used in (Langkilde-Geary, 2002; Velldal and Oepen, 2005), the training data does have the advantage of being in the same domain and genre (using larger n-gram models remains for future investigation). The models employ interpolated Kneser-Ney smoothing with the default frequency cutoffs. The best performing model interpolates a word trigram model with a trigram model that chains a POS model with a supertag model, where the POS model conditions on the previous two POS tags, and the supertag model conditions on the previous two POS tags as well as the current one. Note that the use of supertags in the factored language model to score possible realizations is distinct from the prediction of supertags for lexical category assignment: the former takes the words in the local context into account (as in supertagging for parsing), while the latter takes features of the logical form into account. It is this latter process which we call hypertagging, and to which we now turn. 
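A toy version of the interpolated scoring scheme just described: a word-trigram probability is combined with a model that chains P(POS | previous two POS) and P(supertag | previous two POS, current POS). The probability tables, interpolation weight and floor value are invented placeholders; the real models are trained with SRILM on sections 2–21.

```python
# Toy factored trigram scorer: interpolate a word-trigram model with a model
# that chains a POS trigram model and a supertag-given-POS model.
# All probability tables and the weight are invented for illustration.
WORD_TRIGRAM = {("so", "he", "wants"): 0.01}
POS_TRIGRAM = {("RB", "PRP", "VBZ"): 0.08}
SUPERTAG_GIVEN_POS = {(("RB", "PRP"), "VBZ", "(s[dcl]\\np)/(s[adj]\\np)"): 0.5}

def factored_score(words, pos, stags, lam=0.5, floor=1e-6):
    """Interpolated probability of the third token given the previous two."""
    p_word = WORD_TRIGRAM.get(tuple(words), floor)
    p_pos = POS_TRIGRAM.get(tuple(pos), floor)
    p_stag = SUPERTAG_GIVEN_POS.get((tuple(pos[:2]), pos[2], stags[2]), floor)
    return lam * p_word + (1 - lam) * (p_pos * p_stag)

print(factored_score(["so", "he", "wants"],
                     ["RB", "PRP", "VBZ"],
                     [None, None, "(s[dcl]\\np)/(s[adj]\\np)"]))
```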
3 The Approach 3.1 Lexical Smoothing and Search Errors In White et al.’s (2007) initial investigation of scaling up OpenCCG for broad coverage realization, test set grammar complete oracle / best dev (00) dev 49.1% / 47.8% train 37.5% / 22.6% Table 1: Percentage of complete realizations using an oracle n-gram model versus the best performing factored language model. all categories observed more often than a threshold frequency were instantiated for lexical predicates; for unseen words, a simple smoothing strategy based on the part of speech was employed, assigning the most frequent categories for the POS. This approach turned out to suffer from a large number of search errors, where the realizer failed to find a complete realization before timing out even in cases where the grammar supported one. To confirm that search errors had become a significant issue, White et al. compared the percentage of complete realizations (versus fragmentary ones) with their top scoring model against an oracle model that uses a simplified BLEU score based on the target string, which is useful for regression testing as it guides the best-first search to the reference sentence. The comparison involved both a medium-sized (non-blind) grammar derived from the development section and a large grammar derived from the training sections (the latter with slightly higher thresholds). As shown in Table 1, with the large grammar derived from the training sections, many fewer complete realizations are found (before timing out) using the factored language model than are possible, as indicated by the results of using the oracle model. By contrast, the difference is small with the medium-sized grammar derived from the development section. This result is not surprising when one considers that a large number of common words are observed to have many possible categories. In the next section, we show that a supertagger for CCG realization, or hypertagger, can reduce the problem of search errors by focusing the search space on the most likely lexical categories. 3.2 Maximum Entropy Hypertagging As supertagging for parsing involves studying a given input word and its local context, the concep186 tual equivalent for a lexical predicate in the LF is to study a given node and its local graph structure. Our implementation makes use of three general types of features: lexicalized features, which are simply the names of the parent and child elementary predication nodes, graph structural features, such as the total number of edges emanating from a node, the number of argument and non-argument dependents, and the names of the relations of the dependent nodes to the parent node, and syntactico-semantic attributes of nodes, such as the tense and number. For example, in the HLDS graph shown in Figure 1, the node representing want has two dependents, and the relational type of make with respect to want is ARG1. Clark (2002) notes in his parsing experiments that the POS tags of the surrounding words are highly informative. As discussed below, a significant gain in hypertagging accuracy resulted from including features sensitive to the POS tags of a node’s parent, the node itself, and all of its arguments and modifiers. Predicting these tags requires the use of a separate POS tagger, which operates in a manner similar to the hypertagger itself, though exploiting a slightly different set of features (e.g., including features corresponding to the four-character prefixes and suffixes of rare logical predication names). 
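A sketch of feature extraction over the local graph structure, using the want.01 node of Figure 1 as the running example. The dictionary-based graph encoding and the feature-name templates are illustrative simplifications, not OpenCCG's actual representation.

```python
# Extract hypertagger features for one elementary predication from a toy
# dependency-graph encoding of Figure 1 (simplified; relation and attribute
# names follow the paper, the dict layout is only for illustration).
GRAPH = {
    "w1": {"pred": "want.01", "attrs": {"tense": "pres"},
           "deps": [("Arg0", "h2"), ("Arg1", "m1")], "parent": None},
    "h2": {"pred": "he", "attrs": {}, "deps": [], "parent": ("Arg0", "w1")},
    "m1": {"pred": "make.03", "attrs": {},
           "deps": [("Arg1", "p1")], "parent": ("Arg1", "w1")},
    "p1": {"pred": "point", "attrs": {"num": "sg"},
           "deps": [], "parent": ("Arg1", "m1")},
}

def node_features(node_id, graph):
    node = graph[node_id]
    feats = {f"pred={node['pred']}": 1.0,                      # lexicalized
             f"num_deps={len(node['deps'])}": 1.0}             # graph structural
    for rel, child in node["deps"]:
        feats[f"dep_rel={rel}"] = 1.0
        feats[f"dep_pred={graph[child]['pred']}"] = 1.0        # lexicalized
    if node["parent"]:
        rel, parent = node["parent"]
        feats[f"parent_rel={rel}"] = 1.0
        feats[f"parent_pred={graph[parent]['pred']}"] = 1.0
    for attr, val in node["attrs"].items():                    # syntactico-semantic
        feats[f"{attr}={val}"] = 1.0
    return feats

print(sorted(node_features("w1", GRAPH)))
```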
Following the (word) supertagging experiments of (Curran et al., 2006) we assigned potentially multiple POS tags to each elementary predication. The POS tags assigned are all those that are some factor β of the highest ranked tag,3 giving an average of 1.1 POS tags per elementary predication. The values of the corresponding feature functions are the POS tag probabilities according to the POS tagger. At this ambiguity level, the POS tagger is correct ≈92% of the time. Features for the hypertagger were extracted from semantic dependency graphs extracted from sections 2 through 21 of the CCGbank. In total, 37,168 dependency graphs were derived from the corpus, yielding 468,628 feature parameters. The resulting contextual features and goldstandard supertag for each predication were then used to train a maximum entropy classifier model. 3I.e., all tags t whose probabilities p(t) ≥β · p∗, where p∗ is the highest ranked tag’s probability. Maximum entropy models describe a set of probability distributions of the form: p(o | x) = 1 Z(x) · exp  n X i=1 λifi(o, x)  where o is an outcome, x is a context, the fi are feature functions, the λi are the respective weights of the feature functions, and Z(x) is a normalizing sum over all competing outcomes. More concretely, given an elementary predication labeled want (as in Figure 1), a feature function over this node could be: f(o, x) = ( 1, if o is (s[dcl]\np)/(s[adj]\np) and number of LF dependents(x) = 2 0, otherwise. We used Zhang Le’s maximum entropy toolkit4 for training the hypertagging model, which uses an implementation of Limited-memory BFGS, an approximate quasi-Newton optimization method from the numerical optimization literature (Liu and Nocedal, 1989). Using L-BFGS allowed us to include continuous feature function values where appropriate (e.g., the probabilities of automatically-assigned POS tags). We trained each hypertagging model to 275 iterations and our POS tagging model to 400 iterations. We used no feature frequency cut-offs, but rather employed Gaussian priors with global variances of 100 and 75, respectively, for the hypertagging and POS tagging models. 3.3 Iterative β-Best Realization During realization, the hypertagger serves to probabilistically filter the categories assigned to an elementary predication, as well as to propose categories for rare or unseen predicates. Given a predication, the tagger returns a β-best list of supertags in order of decreasing probability. Increasing the number of categories returned clearly increases the likelihood that the most-correct supertag is among them, but at a corresponding cost in chart size. Accordingly, the hypertagger begins with a highly restrictive value for β, and backs off to progressively less-restrictive values if no complete realization could be found using the set of supertags returned. The search is restarted 4http://homepages.inf.ed.ac.uk/s0450736/ maxent toolkit.html. 187 Table 2: Hypertagger accuracy on Sections 00 and 23. Results (in percentages) are for per-logical-predication (PR) and per-whole-graph (GRPH) tagging accurcies. Difference between best-only and baselines (b.l.) is significant (p < 2 · 10−16) by McNemar’s χ2 test. Sect00 Sect23 β Tags Pred PR GRPH PR GRPH b.l. 1 1 68.7 1.8 68.7 2.3 b.l. 
2 2 84.3 9.9 84.4 10.9 1.0 1 93.6 40.4 93.6 38.2 0.16 1.1 95.8 55.7 96.2 56.8 0.05 1.2 96.6 63.8 97.3 66.0 0.0058 1.5 97.9 74.8 98.3 76.9 1.75e-3 1.8 98.4 78.9 98.7 81.8 6.25e-4 2.2 98.7 82.5 99.0 84.3 1.25e-4 3.2 99.0 85.7 99.3 88.5 5.8e-5 3.9 99.1 87.2 99.4 89.9 from scratch with the next β value, though in principle the same chart could be expanded. The iterative, β-best search for a complete realization uses the realizer’s packing mode, which can more quickly determine whether a complete realization is possible. If the halfway point of the overall time limit is reached with no complete realization, the search switches to best-first mode, ultimately assembling fragments if no complete realization can be found during the remaining time. 4 Results and Discussion Several experiments were performed in training and applying the hypertagger. Three different models were created using 1) non-lexicalized features only, 2) all features excluding POS tags, 3) all, 3) all features except syntactico-semantic attributes such as tense and number and 4) all features available. Models trained on these feature subsets were tested against one another on Section 00, and then the best performing model was run on both Section 00 and 23. 4.1 Feature Ablation Testing The the whole feature set was found in feature ablation testing on the development set to outperform all other feature subsets significantly (p < 2.2 · 10−16). These results listed in Table 3. As we can see, taking Table 3: Hypertagger feature ablation testing results on Section 00. The full feature set outperforms all others significantly (p < 2.2 · 10−16). Results for per-predication (PR) and per-whole-graph (GRPH) tagging percentage accuracies are listed. (Key: no-POS=no POS features; no-attr=no syntactico-semantic attributes such as tense and number; non-lex=non-lexicalized features only (no predication names). FEATURESET PR GRPH full 93.6 40.37 no-POS 91.3 29.5 no-attr 91.8 31.2 non-lex 91.5 28.7 away any one class of features leads to drop in perpredication tagging accuracy of at least 1.8% and a drop per-whole-graph accuracy of at least 9.2%. As expected from previous work in supertagging (for parsing), POS features resulted in a large improvement in overall accuracy (1.8%). Although the POS tagger by itself is only 92% accurate (as a multitagger of 1.1 POS word average ambiguity) — well below the state-of-the-art for the tagging of words — its predictions are still quite valuable to the hypertagger. 4.2 Best Model Hypertagger Accuracy The results for the full feature set on Sections 00 and 23 are outlined in Table 2. Included in this table are accuracy data for a baseline dummy tagger which simply assigns the most-frequently-seen tag(s) for a given predication and backs off to the overall most frequent tag(s) when confronted with an unseen predication. The development set (00) was used to tune the β parameter to obtain reasonable hypertag ambiguity levels; the model was not otherwise tuned to it. The hypertagger achieves high per-predication and whole-graph accuracies even at small ambiguity levels. 4.3 Realizer Performance Tables 4 and 5 show how the hypertagger improves realization performance on the development and test sections of the CCGbank. 
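Concretely, the iterative beta-best strategy of Section 3.3 amounts to something like the sketch below. The helpers tag_distribution, build_packed_chart and best_first_realize are hypothetical stand-ins for the hypertagger, the realizer's packing mode and its fragment-assembling best-first mode, and the beta schedule simply reuses values from Table 2 for illustration.

def beta_best_tags(tag_probs, beta):
    """Keep every supertag whose probability is within a factor beta of the
    highest-ranked tag, i.e. all t with p(t) >= beta * p_max."""
    p_max = max(tag_probs.values())
    return {t: p for t, p in tag_probs.items() if p >= beta * p_max}

def realize_with_backoff(lf, betas=(0.16, 0.05, 0.0058, 1.75e-3, 6.25e-4, 1.25e-4)):
    """Iterative beta-best realization: start with a restrictive beta and back
    off to progressively less restrictive values until a complete realization
    is found, restarting the chart from scratch at each step."""
    for beta in betas:
        # Assign a beta-best list of categories to every elementary predication.
        lexicon = {ep: beta_best_tags(tag_distribution(ep), beta)
                   for ep in lf.predications()}
        chart = build_packed_chart(lf, lexicon)
        if chart.has_complete_realization():
            return chart.best_realization()
    # No complete realization at any beta (or the time limit is near): switch
    # to best-first mode, ultimately assembling fragments if necessary.
    return best_first_realize(lf)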
As Table 4 indicates, using the hypertagger in an iterative beta-best fashion more than doubles the number of grammatically complete realizations found within the time 188 Table 5: Realization quality metrics exact match, BLEU and METEOR, on complete realizations only and overall, with and without hypertagger, on Sections 00 and 23. SecHyperComplete Overall tion tagger BLEU METEOR Exact BLEU METEOR 00 with 0.8137 0.9153 15.3% 0.6567 0.8494 w/o 0.6864 0.8585 11.3% 0.5902 0.8209 23 with 0.8149 0.9162 16.0% 0.6701 0.8557 w/o 0.6910 0.8606 12.3% 0.6022 0.8273 Table 4: Percentage of grammatically complete realizations, runtimes for complete realizations and overall runtimes, with and without hypertagger, on Sections 00 and 23. SecHyperPercent Complete Overall tion tagger Complete Time Time 00 with 47.4% 1.2s 4.5s w/o 22.6% 8.7s 9.5s 23 with 48.5% 1.2s 4.4s w/o 23.5% 8.9s 9.6s limit; on the development set, this improvement elimates more than the number of known search errors (cf. Table 1). Additionally, by reducing the search space, the hypertagger cuts overall realization times by more than half, and in the cases where complete realizations are found, realization times are reduced by a factor of four, down to 1.2 seconds per sentence on a desktop Linux PC. Table 5 shows that increasing the number of complete realizations also yields improved BLEU and METEOR scores, as well as more exact matches. In particular, the hypertagger makes possible a more than 6-point improvement in the overall BLEU score on both the development and test sections, and a more than 12-point improvement on the sentences with complete realizations. As the effort to engineer a grammar suitable for realization from the CCGbank proceeds in parallel to our work on hypertagging, we expect the hypertagger-seeded realizer to continue to improve, since a more complete and precise extracted grammar should enable more complete realizations to be found, and richer semantic representations should simplify the hypertagging task. Even with the current incomplete set of semantic templates, the hypertagger brings realizer performance roughly up to state-of-the-art levels, as our overall test set BLEU score (0.6701) slightly exceeds that of Cahill and van Genabith (2006), though at a coverage of 96% instead of 98%. We caution, however, that it remains unclear how meaningful it is to directly compare these scores when the realizer inputs vary considerably in their specificity, as Langkilde-Geary’s (2002) experiments dramatically illustrate. 5 Related Work Our approach follows Langkilde-Geary (2002) and Callaway (2003) in aiming to leverage the Penn Treebank to develop a broad-coverage surface realizer for English. However, while these earlier, generation-only approaches made use of converters for transforming the outputs of Treebank parsers to inputs for realization, our approach instead employs a shared bidirectional grammar, so that the input to realization is guaranteed to be the same logical form constructed by the parser. In this regard, our approach is more similar to the ones pursued more recently by Carroll, Oepen and Velldal (2005; 2005; 2006), Nakanishi et al. (2005) and Cahill and van Genabith (2006) with HPSG and LFG grammars. While we consider our approach to be the first to employ a supertagger for realization, or hypertagger, the approach is clearly reminiscent of the LTAG tree models of Srinivas and Rambow (2000). 
The main difference between the approaches is that ours consists of a multitagging step followed by the bottomup construction of a realization chart, while theirs involves the top-down selection of the single most likely supertag for each node that is grammatically 189 compatible with the parent node, with the probability conditioned only on the child nodes. Note that although their approach does involve a subsequent lattice construction step, it requires making non-standard assumptions about the TAG; in contrast, ours follows the chart realization tradition of working with the same operations of grammatical combination as in parsing, including a well-defined notion of semantic composition. Additionally, as our tagger employs maximum entropy modeling, it is able to take into account a greater variety of contextual features, including those derived from parent nodes. In comparison to other recent chart realization approaches, Nakanishi et al.’s is similar to ours in that it employs an iterative beam search, dynamically changing the beam size in order to cope with the large search space. However, their log-linear selection models have been adapted from ones used in parsing, and do not condition choices based on features of the input semantics to the same extent. In particular, while they employ a baseline maximum likelihood model that conditions the probability of a lexical entry upon its predicate argument structure (PAS) — that is, the set of elementary predications introduced by the lexical item — this probability does not take into account other elements of the local context, including parents and modifiers, and their lexical predicates. Similarly, Cahill and van Genabith condition the probability of their lexical rules on the set of feature-value pairs linked to the RHS of the rule, but do not take into account any additional context. Since their probabilistic models involve independence assumptions like those in a PCFG, and since they do not employ n-grams for scoring alternative realizations, their approach only keeps the single most likely edge in an equivalence class, rather than packing them into a forest. Carroll, Oepen and Velldal’s approach is like Nakanishi et al.’s in that they adapt log-linear parsing models to the realization task; however, they employ manually written grammars on much smaller corpora, and perhaps for this reason they have not faced the need to employ an iterative beam search. 6 Conclusion We have introduced a novel type of supertagger, which we have dubbed a hypertagger, that assigns CCG category labels to elementary predications in a structured semantic representation with high accuracy at several levels of tagging ambiguity in a fashion reminiscent of (Bangalore and Rambow, 2000). To our knowledge, we are the first to report tagging results in the semantic-to-syntactic direction. We have also shown that, by integrating this hypertagger with a broad-coverage CCG chart realizer, considerably faster realization times are possible (approximately twice as fast as compared with a realizer that performs simple lexical look-ups) with higher BLEU, METEOR and exact string match scores. Moreover, the hypertagger-augmented realizer finds more than twice the number of complete realizations, and further analysis revealed that the realization quality (as per modified BLEU and METEOR) is higher in the cases when the realizer finds a complete realization. 
This suggests that further improvements to the hypertagger will lead to more complete realizations, hence more high-quality realizations. Finally, further efforts to engineer a grammar suitable for realization from the CCGbank should provide richer feature sets, which, as our feature ablation study suggests, are useful for boosting hypertagging performance, hence for finding better and more complete realizations. Acknowledgements The authors thank the anonymous reviewers, Chris Brew, Detmar Meurers and Eric Fosler-Lussier for helpful comments and discussion. References Jason Baldridge and Geert-Jan Kruijff. 2002. Coupling CCG and Hybrid Logic Dependency Semantics. In Proc. ACL-02. Jason Baldridge and Geert-Jan Kruijff. 2003. MultiModal Combinatory Categorial Grammar. In Proc. ACL-03. Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, School of Informatics, University of Edinburgh. Srinivas Bangalore and Aravind K. Joshi. 1999. Su190 pertagging: An Approach to Almost Parsing. Computational Linguistics, 25(2):237–265. Srinivas Bangalore and Owen Rambow. 2000. Exploiting a probabilistic hierarchical model for generation. In Proce. COLING-00. Jeff Bilmes and Katrin Kirchhoff. 2003. Factored language models and general parallelized backoff. In Proc. HLT-03. Patrick Blackburn. 2000. Representation, reasoning, and relational structures: a hybrid logic manifesto. Logic Journal of the IGPL, 8(3):339–625. Johan Bos. 2005. Towards wide-coverage semantic interpretation. In Proc. IWCS-6. Stephen Boxwell and Michael White. 2008. Projecting Propbank roles onto the CCGbank. In Proc. LREC-08. To appear. Aoife Cahill and Josef van Genabith. 2006. Robust PCFG-based generation using automatically acquired LFG approximations. In Proc. COLING-ACL ’06. Charles Callaway. 2003. Evaluating coverage for large symbolic NLG grammars. In Proc. IJCAI-03. John Carroll and Stefan Oepen. 2005. High efficiency realization for a wide-coverage unification grammar. In Proc. IJCNLP-05. John Carroll, Ann Copestake, Dan Flickinger, and Victor Pozna´nski. 1999. An efficient chart generator for (semi-) lexicalist grammars. In Proc. ENLG-99. Stephen Clark and James Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4). Stephen Clark. 2002. Supertagging for combinatory categorial grammar. In Proceedings of the 6th International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+6), pages 19–24, Venice, Italy. James R. Curran, Stephen Clark, and David Vadas. 2006. Multi-tagging for lexicalized-grammar parsing. In Proceedings of the Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL-06), pages 697–704, Sydney, Australia. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh, Edinburgh, Scotland. Martin Kay. 1996. Chart generation. In Proc. ACL-96. Irene Langkilde-Geary. 2002. An empirical verification of coverage and correctness for a general-purpose sentence generator. In Proc. INLG-02. Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proc. NAACL-00. Alon Lavie and Abhaya Agarwal. 2007. 
METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of Workshop on Statistical Machine Translation at the 45th Annual Meeting of the Association of Computational Linguistics (ACL-2007), Prague. D C Liu and Jorge Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3). Hiroko Nakanishi, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic methods for disambiguation of an HPSG-based chart generator. In Proc. IWPT-05. Martha Palmer, Dan Gildea, and Paul Kingsbury. 2005. The proposition bank: A corpus annotated with semantic roles. Computational Linguistics, 31(1). Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA. Carl J Pollard and Ivan A Sag. 1994. Head-Driven Phrase Structure Grammar. University Of Chicago Press. Yves Schabes, Anne Abeill´e, and Aravind K. Joshi. 1988. Parsing strategies with ’lexicalized’ grammars: Application to tree adjoining grammars. In Proceedings of the 12th International Conference on Computational Linguistics (COLING-88), Budapest. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, Massachusetts, USA. Andreas Stolcke. 2002. SRILM — An extensible language modeling toolkit. In Proc. ICSLP-02. Erik Velldal and Stephan Oepen. 2005. Maximum entropy models for realization ranking. In Proc. MT Summit X. Erik Velldal and Stephan Oepen. 2006. Statistical ranking in tactical generation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, July. Michael White, Rajakrishnan Rajkumar, and Scott Martin. 2007. Towards broad coverage surface realization with CCG. In Proc. of the Workshop on Using Corpora for NLG: Language Generation and Machine Translation (UCNLG+MT). Michael White. 2006a. CCG chart realization from disjunctive inputs. In Proceedings, INLG 2006. Michael White. 2006b. Efficient realization of coordinate structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75. 191
2008
22
Proceedings of ACL-08: HLT, pages 192–199, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Forest-Based Translation Haitao Mi† Liang Huang‡ Qun Liu† †Key Lab. of Intelligent Information Processing ‡Department of Computer & Information Science Institute of Computing Technology University of Pennsylvania Chinese Academy of Sciences Levine Hall, 3330 Walnut Street P.O. Box 2704, Beijing 100190, China Philadelphia, PA 19104, USA {htmi,liuqun}@ict.ac.cn [email protected] Abstract Among syntax-based translation models, the tree-based approach, which takes as input a parse tree of the source sentence, is a promising direction being faster and simpler than its string-based counterpart. However, current tree-based systems suffer from a major drawback: they only use the 1-best parse to direct the translation, which potentially introduces translation mistakes due to parsing errors. We propose a forest-based approach that translates a packed forest of exponentially many parses, which encodes many more alternatives than standard n-best lists. Large-scale experiments show an absolute improvement of 1.7 BLEU points over the 1-best baseline. This result is also 0.8 points higher than decoding with 30-best parses, and takes even less time. 1 Introduction Syntax-based machine translation has witnessed promising improvements in recent years. Depending on the type of input, these efforts can be divided into two broad categories: the string-based systems whose input is a string to be simultaneously parsed and translated by a synchronous grammar (Wu, 1997; Chiang, 2005; Galley et al., 2006), and the tree-based systems whose input is already a parse tree to be directly converted into a target tree or string (Lin, 2004; Ding and Palmer, 2005; Quirk et al., 2005; Liu et al., 2006; Huang et al., 2006). Compared with their string-based counterparts, treebased systems offer some attractive features: they are much faster in decoding (linear time vs. cubic time, see (Huang et al., 2006)), do not require a binary-branching grammar as in string-based models (Zhang et al., 2006), and can have separate grammars for parsing and translation, say, a context-free grammar for the former and a tree substitution grammar for the latter (Huang et al., 2006). However, despite these advantages, current tree-based systems suffer from a major drawback: they only use the 1best parse tree to direct the translation, which potentially introduces translation mistakes due to parsing errors (Quirk and Corston-Oliver, 2006). This situation becomes worse with resource-poor source languages without enough Treebank data to train a high-accuracy parser. One obvious solution to this problem is to take as input k-best parses, instead of a single tree. This kbest list postpones some disambiguation to the decoder, which may recover from parsing errors by getting a better translation from a non 1-best parse. However, a k-best list, with its limited scope, often has too few variations and too many redundancies; for example, a 50-best list typically encodes a combination of 5 or 6 binary ambiguities (since 25 < 50 < 26), and many subtrees are repeated across different parses (Huang, 2008). It is thus inefficient either to decode separately with each of these very similar trees. Longer sentences will also aggravate this situation as the number of parses grows exponentially with the sentence length. 
We instead propose a new approach, forest-based translation (Section 3), where the decoder translates a packed forest of exponentially many parses,1 1There has been some confusion in the MT literature regarding the term forest: the word “forest” in “forest-to-string rules” 192 VP PP P yˇu x1:NPB VPB VV jˇux´ıng AS le x2:NPB →held x2 with x1 Figure 1: An example translation rule (r3 in Fig. 2). which compactly encodes many more alternatives than k-best parses. This scheme can be seen as a compromise between the string-based and treebased methods, while combining the advantages of both: decoding is still fast, yet does not commit to a single parse. Large-scale experiments (Section 4) show an improvement of 1.7 BLEU points over the 1-best baseline, which is also 0.8 points higher than decoding with 30-best trees, and takes even less time thanks to the sharing of common subtrees. 2 Tree-based systems Current tree-based systems perform translation in two separate steps: parsing and decoding. A parser first parses the source language input into a 1-best tree T, and the decoder then searches for the best derivation (a sequence of translation steps) d∗that converts source tree T into a target-language string among all possible derivations D: d∗= arg max d∈D P(d|T). (1) We will now proceed with a running example translating from Chinese to English: (2) À B`ush´ı Bush Æ yˇu with/and ™™ Sh¯al´ong Sharon1 >L jˇux´ıng hold † le pass.  hu`ıt´an talk2 “Bush held a talk2 with Sharon1” Figure 2 shows how this process works. The Chinese sentence (a) is first parsed into tree (b), which will be converted into an English string in 5 steps. First, at the root node, we apply rule r1 preserving top-level word-order between English and Chinese, (r1) IP(x1:NPB x2:VP) →x1 x2 (Liu et al., 2007) was a misnomer which actually refers to a set of several unrelated subtrees over disjoint spans, and should not be confused with the standard concept of packed forest. (a) B`ush´ı [yˇu Sh¯al´ong ]1 [jˇux´ıng le hu`ıt´an ]2 ⇓1-best parser (b) IP NPB NR B`ush´ı VP PP P yˇu NPB NR Sh¯al´ong VPB VV jˇux´ıng AS le NPB NN hu`ıt´an r1 ⇓ (c) NPB NR B`ush´ı VP PP P yˇu NPB NR Sh¯al´ong VPB VV jˇux´ıng AS le NPB NN hu`ıt´an r2 ⇓ r3 ⇓ (d) Bush held NPB NN hu`ıt´an with NPB NR Sh¯al´ong r4 ⇓ r5 ⇓ (e) Bush [held a talk]2 [with Sharon]1 Figure 2: An example derivation of tree-to-string translation. Shaded regions denote parts of the tree that is pattern-matched with the rule being applied. which results in two unfinished subtrees in (c). Then rule r2 grabs the B`ush´ı subtree and transliterate it (r2) NPB(NR(B`ush´ı)) →Bush. Similarly, rule r3 shown in Figure 1 is applied to the VP subtree, which swaps the two NPBs, yielding the situation in (d). This rule is particularly interesting since it has multiple levels on the source side, which has more expressive power than synchronous context-free grammars where rules are flat. 193 More formally, a (tree-to-string) translation rule (Huang et al., 2006) is a tuple ⟨t, s, φ⟩, where t is the source-side tree, whose internal nodes are labeled by nonterminal symbols in N, and whose frontier nodes are labeled by source-side terminals in Σ or variables from a set X = {x1, x2, . . .}; s ∈(X ∪∆)∗is the target-side string where ∆is the target language terminal set; and φ is a mapping from X to nonterminals in N. Each variable xi ∈X occurs exactly once in t and exactly once in s. We denote R to be the translation rule set. A similar formalism appears in another form in (Liu et al., 2006). 
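As a concrete illustration of the <t, s, phi> formalism and of how target strings are assembled from sub-translations, the sketch below uses a deliberately simplified rule representation in Python; the flat string encoding of the source-side tree and the apply_rule helper are illustrative assumptions, not the system's actual data structures.

from dataclasses import dataclass

@dataclass
class Rule:
    """A tree-to-string rule <t, s, phi> in simplified form."""
    source_tree: str    # e.g. "VP(PP(P(yu) x1:NPB) VPB(VV(juxing) AS(le) x2:NPB))"
    target: tuple       # e.g. ("held", "x2", "with", "x1")
    var_nonterms: dict  # phi, e.g. {"x1": "NPB", "x2": "NPB"}

def apply_rule(rule, subtranslations):
    """Substitute the sub-translations of the matched variable nodes for the
    variables in the rule's target string."""
    out = []
    for token in rule.target:
        if token in rule.var_nonterms:
            out.extend(subtranslations[token])
        else:
            out.append(token)
    return out

# Applying r3 with x1 -> ["Sharon"] and x2 -> ["a", "talk"] yields
# ["held", "a", "talk", "with", "Sharon"], as in step (e) of Figure 2.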
These rules are in the reverse direction of the original string-to-tree transducer rules defined by Galley et al. (2004). Finally, from step (d) we apply rules r4 and r5 (r4) NPB(NN(hu`ıt´an)) →a talk (r5) NPB(NR(Sh¯al´ong)) →Sharon which perform phrasal translations for the two remaining subtrees, respectively, and get the Chinese translation in (e). 3 Forest-based translation We now extend the tree-based idea from the previous section to the case of forest-based translation. Again, there are two steps, parsing and decoding. In the former, a (modified) parser will parse the input sentence and output a packed forest (Section 3.1) rather than just the 1-best tree. Such a forest is usually huge in size, so we use the forest pruning algorithm (Section 3.4) to reduce it to a reasonable size. The pruned parse forest will then be used to direct the translation. In the decoding step, we first convert the parse forest into a translation forest using the translation rule set, by similar techniques of pattern-matching from tree-based decoding (Section 3.2). Then the decoder searches for the best derivation on the translation forest and outputs the target string (Section 3.3). 3.1 Parse Forest Informally, a packed parse forest, or forest in short, is a compact representation of all the derivations (i.e., parse trees) for a given sentence under a context-free grammar (Billot and Lang, 1989). For example, consider the Chinese sentence in Example (2) above, which has (at least) two readings depending on the part-of-speech of the word yˇu, which can be either a preposition (P “with”) or a conjunction (CC “and”). The parse tree for the preposition case is shown in Figure 2(b) as the 1-best parse, while for the conjunction case, the two proper nouns (B`ush´ı and Sh¯al´ong) are combined to form a coordinated NP NPB0,1 CC1,2 NPB2,3 NP0,3 (*) which functions as the subject of the sentence. In this case the Chinese sentence is translated into (3) “ [Bush and Sharon] held a talk”. Shown in Figure 3(a), these two parse trees can be represented as a single forest by sharing common subtrees such as NPB0,1 and VPB3,6. Such a forest has a structure of a hypergraph (Klein and Manning, 2001; Huang and Chiang, 2005), where items like NP0,3 are called nodes, and deductive steps like (*) correspond to hyperedges. More formally, a forest is a pair ⟨V, E⟩, where V is the set of nodes, and E the set of hyperedges. For a given sentence w1:l = w1 . . . wl, each node v ∈V is in the form of Xi,j, which denotes the recognition of nonterminal X spanning the substring from positions i through j (that is, wi+1 . . . wj). Each hyperedge e ∈E is a pair ⟨tails(e), head(e)⟩, where head(e) ∈V is the consequent node in the deductive step, and tails(e) ∈V ∗is the list of antecedent nodes. For example, the hyperedge for deduction (*) is notated: ⟨(NPB0,1, CC1,2, NPB2,3), NP0,3⟩. There is also a distinguished root node TOP in each forest, denoting the goal item in parsing, which is simply S0,l where S is the start symbol and l is the sentence length. 3.2 Translation Forest Given a parse forest and a translation rule set R, we can generate a translation forest which has a similar hypergraph structure. 
Basically, just as the depthfirst traversal procedure in tree-based decoding (Figure 2), we visit in top-down order each node v in the 194 (a) IP0,6 NP0,3 NPB0,1 NR0,1 B`ush´ı CC1,2 yˇu VP1,6 PP1,3 P1,2 NPB2,3 NR2,3 Sh¯al´ong VPB3,6 VV3,4 jˇux´ıng AS4,5 le NPB5,6 NN5,6 hu`ıt´an ⇓translation rule set R (b) IP0,6 NP0,3 NPB0,1 CC1,2 VP1,6 PP1,3 P1,2 NPB2,3 VPB3,6 VV3,4 AS4,5 NPB5,6 e5 e2 e6 e4 e3 e1 (c) translation hyperedge translation rule e1 r1 IP(x1:NPB x2:VP) →x1 x2 e2 r6 IP(x1:NP x2:VPB) →x1 x2 e3 r3 VP(PP(P(yˇu) x1:NPB) VPB(VV(jˇux´ıng) AS(le) x2:NPB)) →held x2 with x1 e4 r7 VP(PP(P(yˇu) x1:NPB) x2:VPB) →x2 with x1 e5 r8 NP(x1:NPB CC(yˇu) x2:NPB) →x1 and x2 e6 r9 VPB(VV(jˇux´ıng) AS(le) x1:NPB) →held x1 Figure 3: (a) the parse forest of the example sentence; solid hyperedges denote the 1-best parse in Figure 2(b) while dashed hyperedges denote the alternative parse due to Deduction (*). (b) the corresponding translation forest after applying the translation rules (lexical rules not shown); the derivation shown in bold solid lines (e1 and e3) corresponds to the derivation in Figure 2; the one shown in dashed lines (e2, e5, and e6) uses the alternative parse and corresponds to the translation in Example (3). (c) the correspondence between translation hyperedges and translation rules. parse forest, and try to pattern-match each translation rule r against the local sub-forest under node v. For example, in Figure 3(a), at node VP1,6, two rules r3 and r7 both matches the local subforest, and will thus generate two translation hyperedges e3 and e4 (see Figure 3(b-c)). More formally, we define a function match(r, v) which attempts to pattern-match rule r at node v in the parse forest, and in case of success, returns a list of descendent nodes of v that are matched to the variables in r, or returns an empty list if the match fails. Note that this procedure is recursive and may 195 Pseudocode 1 The conversion algorithm. 1: Input: parse forest Hp and rule set R 2: Output: translation forest Ht 3: for each node v ∈Vp in top-down order do 4: for each translation rule r ∈R do 5: vars ←match(r, v) ⊲variables 6: if vars is not empty then 7: e ←⟨vars, v, s(r)⟩ 8: add translation hyperedge e to Ht involve multiple parse hyperedges. For example, match(r3, VP1,6) = (NPB2,3, NPB5,6), which covers three parse hyperedges, while nodes in gray do not pattern-match any rule (although they are involved in the matching of other nodes, where they match interior nodes of the source-side tree fragments in a rule). We can thus construct a translation hyperedge from match(r, v) to v for each node v and rule r. In addition, we also need to keep track of the target string s(r) specified by rule r, which includes target-language terminals and variables. For example, s(r3) = “held x2 with x1”. The subtranslations of the matched variable nodes will be substituted for the variables in s(r) to get a complete translation for node v. So a translation hyperedge e is a triple ⟨tails(e), head(e), s⟩where s is the target string from the rule, for example, e3 = ⟨(NPB2,3, NPB5,6), VP1,6, “held x2 with x1”⟩. This procedure is summarized in Pseudocode 1. 3.3 Decoding Algorithms The decoder performs two tasks on the translation forest: 1-best search with integrated language model (LM), and k-best search with LM to be used in minimum error rate training. Both tasks can be done efficiently by forest-based algorithms based on k-best parsing (Huang and Chiang, 2005). 
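The conversion procedure of Pseudocode 1, which produces this translation forest, can be rendered as the following minimal sketch; nodes_topdown and match are assumed interfaces (with match returning None on failure), not the authors' code.

def convert_to_translation_forest(parse_forest, rules, match):
    """Convert a parse forest into a translation forest (cf. Pseudocode 1).

    Assumed interfaces: parse_forest.nodes_topdown() yields nodes v in
    top-down order; rules is the translation rule set R; match(r, v) returns
    the list of descendant nodes bound to r's variables, or None if the
    pattern match against the local sub-forest under v fails.
    """
    translation_hyperedges = []
    for v in parse_forest.nodes_topdown():
        for r in rules:
            bound_vars = match(r, v)
            if bound_vars is not None:
                # A translation hyperedge <tails(e), head(e), s>, where s is
                # the rule's target string with variables still to be filled
                # by the sub-translations of the tail nodes.
                translation_hyperedges.append((tuple(bound_vars), v, r.target))
    return translation_hyperedges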
For 1-best search, we use the cube pruning technique (Chiang, 2007; Huang and Chiang, 2007) which approximately intersects the translation forest with the LM. Basically, cube pruning works bottom up in a forest, keeping at most k +LM items at each node, and uses the best-first expansion idea from the Algorithm 2 of Huang and Chiang (2005) to speed up the computation. An +LM item of node v has the form (va⋆b), where a and b are the target-language boundary words. For example, (VP held ⋆Sharon 1,6 ) is an +LM item with its translation starting with “held” and ending with “Sharon”. This scheme can be easily extended to work with a general n-gram by storing n −1 words at both ends (Chiang, 2007). For k-best search after getting 1-best derivation, we use the lazy Algorithm 3 of Huang and Chiang (2005) that works backwards from the root node, incrementally computing the second, third, through the kth best alternatives. However, this time we work on a finer-grained forest, called translation+LM forest, resulting from the intersection of the translation forest and the LM, with its nodes being the +LM items during cube pruning. Although this new forest is prohibitively large, Algorithm 3 is very efficient with minimal overhead on top of 1-best. 3.4 Forest Pruning Algorithm We use the pruning algorithm of (Jonathan Graehl, p.c.; Huang, 2008) that is very similar to the method based on marginal probability (Charniak and Johnson, 2005), except that it prunes hyperedges as well as nodes. Basically, we use an Inside-Outside algorithm to compute the Viterbi inside cost β(v) and the Viterbi outside cost α(v) for each node v, and then compute the merit αβ(e) for each hyperedge: αβ(e) = α(head(e)) + X ui∈tails(e) β(ui) (4) Intuitively, this merit is the cost of the best derivation that traverses e, and the difference δ(e) = αβ(e) − β(TOP) can be seen as the distance away from the globally best derivation. We prune away a hyperedge e if δ(e) > p for a threshold p. Nodes with all incoming hyperedges pruned are also pruned. 4 Experiments We can extend the simple model in Equation 1 to a log-linear one (Liu et al., 2006; Huang et al., 2006): d∗= arg max d∈D P(d | T)λ0 · eλ1|d| · Plm(s)λ2 · eλ3|s| (5) where T is the 1-best parse, eλ1|d| is the penalty term on the number of rules in a derivation, Plm(s) is the language model and eλ3|s| is the length penalty term 196 on target translation. The derivation probability conditioned on 1-best tree, P(d | T), should now be replaced by P(d | Hp) where Hp is the parse forest, which decomposes into the product of probabilities of translation rules r ∈d: P(d | Hp) = Y r∈d P(r) (6) where each P(r) is the product of five probabilities: P(r) = P(t | s)λ4 · Plex(t | s)λ5· P(s | t)λ6 · Plex(s | t)λ7 · P(t | Hp) λ8. (7) Here t and s are the source-side tree and targetside string of rule r, respectively, P(t | s) and P(s | t) are the two translation probabilities, and Plex(·) are the lexical probabilities. The only extra term in forest-based decoding is P(t | Hp) denoting the source side parsing probability of the current translation rule r in the parse forest, which is the product of probabilities of each parse hyperedge ep covered in the pattern-match of t against Hp (which can be recorded at conversion time): P(t | Hp) = Y ep∈Hp, ep covered by t P(ep). (8) 4.1 Data preparation Our experiments are on Chinese-to-English translation, and we use the Chinese parser of Xiong et al. (2005) to parse the source side of the bitext. 
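Since the parse forests used in these experiments are first reduced with the pruning algorithm of Section 3.4, a minimal sketch of that step is given here; the forest interface (viterbi_inside, viterbi_outside, hyperedges, root) is a hypothetical convenience, and costs are treated as negative log probabilities.

def prune_forest(forest, p):
    """Inside-outside forest pruning: drop every hyperedge whose best
    derivation is more than p worse than the globally best derivation."""
    beta = forest.viterbi_inside()    # Viterbi inside cost beta(v) per node
    alpha = forest.viterbi_outside()  # Viterbi outside cost alpha(v) per node
    best = beta[forest.root]          # beta(TOP), cost of the best derivation

    kept = []
    for e in forest.hyperedges:
        # Merit of e: cost of the best derivation traversing e.
        merit = alpha[e.head] + sum(beta[u] for u in e.tails)
        if merit - best <= p:         # delta(e) = merit - beta(TOP)
            kept.append(e)

    # Nodes with all incoming hyperedges pruned are dropped as well; a fuller
    # implementation would iterate this until no further nodes are removed.
    kept_nodes = {forest.root} | {u for e in kept for u in list(e.tails) + [e.head]}
    return kept_nodes, kept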
Following Huang (2008), we modify the parser to output a packed forest for each sentence. Our training corpus consists of 31,011 sentence pairs with 0.8M Chinese words and 0.9M English words. We first word-align them by GIZA++ refined by “diagand” from Koehn et al. (2003), and apply the tree-to-string rule extraction algorithm (Galley et al., 2006; Liu et al., 2006), which resulted in 346K translation rules. Note that our rule extraction is still done on 1-best parses, while decoding is on k-best parses or packed forests. We also use the SRI Language Modeling Toolkit (Stolcke, 2002) to train a trigram language model with Kneser-Ney smoothing on the English side of the bitext. We use the 2002 NIST MT Evaluation test set as our development set (878 sentences) and the 2005 0.230 0.232 0.234 0.236 0.238 0.240 0.242 0.244 0.246 0.248 0.250 0 5 10 15 20 25 30 35 BLEU score average decoding time (secs/sentence) 1-best p=5 p=12 k=10 k=30 k=100 k-best trees forests decoding Figure 4: Comparison of decoding on forests with decoding on k-best trees. NIST MT Evaluation test set as our test set (1082 sentences), with on average 28.28 and 26.31 words per sentence, respectively. We evaluate the translation quality using the case-sensitive BLEU-4 metric (Papineni et al., 2002). We use the standard minimum error-rate training (Och, 2003) to tune the feature weights to maximize the system’s BLEU score on the dev set. On dev and test sets, we prune the Chinese parse forests by the forest pruning algorithm in Section 3.4 with a threshold of p = 12, and then convert them into translation forests using the algorithm in Section 3.2. To increase the coverage of the rule set, we also introduce a default translation hyperedge for each parse hyperedge by monotonically translating each tail node, so that we can always at least get a complete translation in the end. 4.2 Results The BLEU score of the baseline 1-best decoding is 0.2325, which is consistent with the result of 0.2302 in (Liu et al., 2007) on the same training, development and test sets, and with the same rule extraction procedure. The corresponding BLEU score of Pharaoh (Koehn, 2004) is 0.2182 on this dataset. Figure 4 compares forest decoding with decoding on k-best trees in terms of speed and quality. Using more than one parse tree apparently improves the BLEU score, but at the cost of much slower decoding, since each of the top-k trees has to be decoded individually although they share many common subtrees. Forest decoding, by contrast, is much faster 197 0 5 10 15 20 25 0 10 20 30 40 50 60 70 80 90 100 Percentage of sentences (%) i (rank of the parse tree picked by the decoder) forest decoding 30-best trees Figure 5: Percentage of the i-th best parse tree being picked in decoding. 32% of the distribution for forest decoding is beyond top-100 and is not shown on this plot. and produces consistently better BLEU scores. With pruning threshold p = 12, it achieved a BLEU score of 0.2485, which is an absolute improvement of 1.6% points over the 1-best baseline, and is statistically significant using the sign-test of Collins et al. (2005) (p < 0.01). We also investigate the question of how often the ith-best parse tree is picked to direct the translation (i = 1, 2, . . .), in both k-best and forest decoding schemes. A packed forest can be roughly viewed as a (virtual) ∞-best list, and we can thus ask how often is a parse beyond top-k used by a forest, which relates to the fundamental limitation of k-best lists. 
Figure 5 shows that, the 1-best parse is still preferred 25% of the time among 30-best trees, and 23% of the time by the forest decoder. These ratios decrease dramatically as i increases, but the forest curve has a much longer tail in large i. Indeed, 40% of the trees preferred by a forest is beyond top-30, 32% is beyond top-100, and even 20% beyond top-1000. This confirms the fact that we need exponentially large kbest lists with the explosion of alternatives, whereas a forest can encode these information compactly. 4.3 Scaling to large data We also conduct experiments on a larger dataset, which contains 2.2M training sentence pairs. Besides the trigram language model trained on the English side of these bitext, we also use another trigram model trained on the first 1/3 of the Xinhua portion of Gigaword corpus. The two LMs have disapproach \ ruleset TR TR+BP 1-best tree 0.2666 0.2939 30-best trees 0.2755 0.3084 forest (p = 12) 0.2839 0.3149 Table 1: BLEU score results from training on large data. tinct weights tuned by minimum error rate training. The dev and test sets remain the same as above. Furthermore, we also make use of bilingual phrases to improve the coverage of the ruleset. Following Liu et al. (2006), we prepare a phrase-table from a phrase-extractor, e.g. Pharaoh, and at decoding time, for each node, we construct on-the-fly flat translation rules from phrases that match the sourceside span of the node. These phrases are called syntactic phrases which are consistent with syntactic constituents (Chiang, 2005), and have been shown to be helpful in tree-based systems (Galley et al., 2006; Liu et al., 2006). The final results are shown in Table 1, where TR denotes translation rule only, and TR+BP denotes the inclusion of bilingual phrases. The BLEU score of forest decoder with TR is 0.2839, which is a 1.7% points improvement over the 1-best baseline, and this difference is statistically significant (p < 0.01). Using bilingual phrases further improves the BLEU score by 3.1% points, which is 2.1% points higher than the respective 1-best baseline. We suspect this larger improvement is due to the alternative constituents in the forest, which activates many syntactic phrases suppressed by the 1-best parse. 5 Conclusion and future work We have presented a novel forest-based translation approach which uses a packed forest rather than the 1-best parse tree (or k-best parse trees) to direct the translation. Forest provides a compact data-structure for efficient handling of exponentially many tree structures, and is shown to be a promising direction with state-of-the-art translation results and reasonable decoding speed. This work can thus be viewed as a compromise between string-based and tree-based paradigms, with a good trade-off between speed and accuarcy. For future work, we would like to use packed forests not only in decoding, but also for translation rule extraction during training. 198 Acknowledgement Part of this work was done while L. H. was visiting CAS/ICT. The authors were supported by National Natural Science Foundation of China, Contracts 60736014 and 60573188, and 863 State Key Project No. 2006AA010108 (H. M and Q. L.), and by NSF ITR EIA-0205456 (L. H.). We would also like to thank Chris Quirk for inspirations, Yang Liu for help with rule extraction, Mark Johnson for posing the question of virtual ∞-best list, and the anonymous reviewers for suggestions. References Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in ambiguous parsing. 
In Proceedings of ACL ’89, pages 143–151. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine-grained n-best parsing and discriminative reranking. In Proceedings of the 43rd ACL. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL, pages 263–270, Ann Arbor, Michigan, June. David Chiang. 2007. Hierarchical phrase-based translation. Comput. Linguist., 33(2):201–228. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531–540, Ann Arbor, Michigan, June. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proceedings of ACL, pages 541– 548, Ann Arbor, Michigan, June. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In HLTNAACL, pages 273–280, Boston, MA. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING-ACL, pages 961–968, Sydney, Australia, July. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of Ninth International Workshop on Parsing Technologies (IWPT-2005), Vancouver, Canada. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of ACL, pages 144–151, Prague, Czech Republic, June. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of AMTA, Boston, MA, August. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL, Columbus, OH. Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT2001), 17-19 October 2001, Beijing, China. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLT-NAACL, Edmonton, AB, Canada. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Proceedings of AMTA, pages 115–124. Dekang Lin. 2004. A path-based transfer model for machine translation. In Proceedings of the 20th COLING, Barcelona, Spain. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of COLING-ACL, pages 609– 616, Sydney, Australia, July. Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Proceedings of ACL, pages 704–711, Prague, Czech Republic, June. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318, Philadephia, USA, July. Chris Quirk and Simon Corston-Oliver. 2006. The impact of parse quality on syntactically-informed statistical machine translation. In Proceedings of EMNLP. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of ACL, pages 271–279, Ann Arbor, Michigan, June. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. 
In Proceedings of ICSLP, volume 30, pages 901–904. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404. Deyi Xiong, Shuanglong Li, Qun Liu, and Shouxun Lin. 2005. Parsing the Penn Chinese Treebank with semantic knowledge. In Proceedings of IJCNLP 2005, pages 70–81, Jeju Island, South Korea. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of HLT-NAACL, New York, NY. 199
2008
23
Proceedings of ACL-08: HLT, pages 200–208, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Discriminative Latent Variable Model for Statistical Machine Translation Phil Blunsom, Trevor Cohn and Miles Osborne School of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh, EH8 9LW, UK {pblunsom,tcohn,miles}@inf.ed.ac.uk Abstract Large-scale discriminative machine translation promises to further the state-of-the-art, but has failed to deliver convincing gains over current heuristic frequency count systems. We argue that a principle reason for this failure is not dealing with multiple, equivalent translations. We present a translation model which models derivations as a latent variable, in both training and decoding, and is fully discriminative and globally optimised. Results show that accounting for multiple derivations does indeed improve performance. Additionally, we show that regularisation is essential for maximum conditional likelihood models in order to avoid degenerate solutions. 1 Introduction Statistical machine translation (SMT) has seen a resurgence in popularity in recent years, with progress being driven by a move to phrase-based and syntax-inspired approaches. Progress within these approaches however has been less dramatic. We believe this is because these frequency count based1 models cannot easily incorporate non-independent and overlapping features, which are extremely useful in describing the translation process. Discriminative models of translation can include such features without making assumptions of independence or explicitly modelling their interdependence. However, while discriminative models promise much, they have not been shown to deliver significant gains 1We class approaches using minimum error rate training (Och, 2003) frequency count based as these systems re-scale a handful of generative features estimated from frequency counts and do not support large sets of non-independent features. over their simpler cousins. We argue that this is due to a number of inherent problems that discriminative models for SMT must address, in particular the problems of spurious ambiguity and degenerate solutions. These occur when there are many ways to translate a source sentence to the same target sentence by applying a sequence of steps (a derivation) of either phrase translations or synchronous grammar rules, depending on the type of system. Existing discriminative models require a reference derivation to optimise against, however no parallel corpora annotated for derivations exist. Ideally, a model would account for this ambiguity by marginalising out the derivations, thus predicting the best translation rather than the best derivation. However, doing so exactly is NP-complete. For this reason, to our knowledge, all discriminative models proposed to date either side-step the problem by choosing simple model and feature structures, such that spurious ambiguity is lessened or removed entirely (Ittycheriah and Roukos, 2007; Watanabe et al., 2007), or else ignore the problem and treat derivations as translations (Liang et al., 2006; Tillmann and Zhang, 2007). In this paper we directly address the problem of spurious ambiguity in discriminative models. We use a synchronous context free grammar (SCFG) translation system (Chiang, 2007), a model which has yielded state-of-the-art results on many translation tasks. We present two main contributions. 
First, we develop a log-linear model of translation which is globally trained on a significant number of parallel sentences. This model maximises the conditional likelihood of the data, p(e|f), where e and f are the English and foreign sentences, respectively. Our estimation method is theoretically sound, avoiding the biases of the heuristic relative frequency estimates 200 G G G G G G G G G G G sentence length derivations 5 7 9 11 13 15 1e+03 1e+05 1e+08 Figure 1. Exponential relationship between sentence length and the average number of derivations (on a log scale) for each reference sentence in our training corpus. (Koehn et al., 2003). Second, within this framework, we model the derivation, d, as a latent variable, p(e, d|f), which is marginalised out in training and decoding. We show empirically that this treatment results in significant improvements over a maximum-derivation model. The paper is structured as follows. In Section 2 we list the challenges that discriminative SMT must face above and beyond the current systems. We situate our work, and previous work, on discriminative systems in this context. We present our model in Section 3, including our means of training and decoding. Section 4 reports our experimental setup and results, and finally we conclude in Section 5. 2 Challenges for Discriminative SMT Discriminative models allow for the use of expressive features, in the order of thousands or millions, which can reference arbitrary aspects of the source sentence. Given most successful SMT models have a highly lexicalised grammar (or grammar equivalent), these features can be used to smuggle in linguistic information, such as syntax and document context. With this undoubted advantage come four major challenges when compared to standard frequency count SMT models: 1. There is no one reference derivation. Often there are thousands of ways of translating a source sentence into the reference translation. Figure 1 illustrates the exponential relationship between sentence length and the number of derivations. Training is difficult without a clear target, and predicting only one derivation at test time is fraught with danger. 2. Parallel translation data is often very noisy, with such problems as non-literal translations, poor sentence- and word-alignments. A model which exactly translates the training data will inevitably perform poorly on held-out data. This problem of over-fitting is exacerbated in discriminative models with large, expressive, feature sets. Regularisation is essential for models with more than a handful of features. 3. Learning with a large feature set requires many training examples and typically many iterations of a solver during training. While current models focus solely on efficient decoding, discriminative models must also allow for efficient training. Past work on discriminative SMT only address some of these problems. To our knowledge no systems directly address Problem 1, instead choosing to ignore the problem by using one or a small handful of reference derivations in an n-best list (Liang et al., 2006; Watanabe et al., 2007), or else making local independence assumptions which side-step the issue (Ittycheriah and Roukos, 2007; Tillmann and Zhang, 2007; Wellington et al., 2006). These systems all include regularisation, thereby addressing Problem 2. An interesting counterpoint is the work of DeNero et al. (2006), who show that their unregularised model finds degenerate solutions. 
Some of these discriminative systems have been trained on large training sets (Problem 3); these systems are the local models, for which training is much simpler. Both the global models (Liang et al., 2006; Watanabe et al., 2007) use fairly small training sets, and there is no evidence that their techniques will scale to larger data sets. Our model addresses all three of the above problems within a global model, without resorting to nbest lists or local independence assumptions. Furthermore, our model explicitly accounts for spurious ambiguity without altering the model structure or arbitrarily selecting one derivation. Instead we model the translation distribution with a latent variable for the derivation, which we marginalise out in training and decoding. 201 the hat le chapeau red the hat le chapeau red Figure 2. The dropping of an adjective in this example means that there is no one segmentation that we could choose that would allow a system to learn le →the and chapeau →hat. ⟨S⟩→⟨S 1 X 2, S 1 X 2⟩ ⟨S⟩→⟨X 1, X 1⟩ ⟨X⟩→⟨ne X 1 pas, does not X 1⟩ ⟨X⟩→⟨va, go⟩ ⟨X⟩→⟨il, he⟩ Figure 3. A simple SCFG, with non-terminal symbols S and X, which performs the transduction: il ne vas pas ⇒ he does not go This itself provides robustness to noisy data, in addition to the explicit regularisation from a prior over the model parameters. For example, in many cases there is no one perfect derivation, but rather many imperfect ones which each include some good translation fragments. The model can learn from many of these derivations and thereby learn from all these translation fragments. This situation is illustrated in Figure 2 where the non-translated adjective red means neither segmentation is ‘correct’, although both together present positive evidence for the two lexical translations. We present efficient methods for training and prediction, demonstrating their scaling properties by training on more than a hundred thousand training sentences. Finally, we stress that our main findings are general ones. These results could – and should – be applied to other models, discriminative and generative, phrase- and syntax-based, to further progress the state-of-the-art in machine translation. 3 Discriminative Synchronous Transduction A synchronous context free grammar (SCFG) consists of paired CFG rules with co-indexed nonterminals (Lewis II and Stearns, 1968). By assigning the source and target languages to the respective sides of a SCFG it is possible to describe translation as the process of parsing the source sentence using a CFG, while generating the target translation from the other (Chiang, 2007). All the models we present use the grammar extraction technique described in Chiang (2007), and are bench-marked against our own implementation of this hierarchical model (Hiero). Figure 3 shows a simple instance of a hierarchical grammar with two non-terminals. Note that our approach is general and could be used with other synchronous grammar transducers (e.g., Galley et al. (2006)). 3.1 A global log-linear model Our log-linear translation model defines a conditional probability distribution over the target translations of a given source sentence. A particular sequence of SCFG rule applications which produces a translation from a source sentence is referred to as a derivation, and each translation may be produced by many different derivations. As the training data only provides source and target sentences, the derivations are modelled as a latent variable. 
The conditional probability of a derivation, d, for a target translation, e, conditioned on the source, f, is given by: pΛ(d, e|f) = exp P k λkHk(d, e, f) ZΛ(f) (1) where Hk(d, e, f) = X r∈d hk(f, r) (2) Here k ranges over the model’s features, and Λ = {λk} are the model parameters (weights for their corresponding features). The feature functions Hk are predefined real-valued functions over the source and target sentences, and can include overlapping and non-independent features of the data. The features must decompose with the derivation, as shown in (2). The features can reference the entire source sentence coupled with each rule, r, in a derivation. The distribution is globally normalised by the partition function, ZΛ(f), which sums out the numerator in (1) for every derivation (and therefore every translation) of f: ZΛ(f) = X e X d∈∆(e,f) exp X k λkHk(d, e, f) Given (1), the conditional probability of a target translation given the source is the sum over all of its derivations: pΛ(e|f) = X d∈∆(e,f) pΛ(d, e|f) (3) 202 where ∆(e, f) is the set of all derivations of the target sentence e from the source f. Most prior work in SMT, both generative and discriminative, has approximated the sum over derivations by choosing a single ‘best’ derivation using a Viterbi or beam search algorithm. In this work we show that it is both tractable and desirable to directly account for derivational ambiguity. Our findings echo those observed for latent variable log-linear models successfully used in monolingual parsing (Clark and Curran, 2007; Petrov et al., 2007). These models marginalise over derivations leading to a dependency structure and splits of non-terminal categories in a PCFG, respectively. 3.2 Training The parameters of our model are estimated from our training sample using a maximum a posteriori (MAP) estimator. This maximises the likelihood of the parallel training sentences, D = {(e, f)}, penalised using a prior, i.e., ΛMAP = arg maxΛ pΛ(D)p(Λ). We use a zero-mean Gaussian prior with the probability density function p0(λk) ∝exp −λ2 k/2σ2 .2 This results in the following log-likelihood objective and corresponding gradient: L = X (e,f)∈D log pΛ(e|f) + X k log p0(λk) (4) ∂L ∂λk = EpΛ(d|e,f)[hk] −EpΛ(e|f)[hk] −λk σ2 (5) In order to train the model, we maximise equation (4) using L-BFGS (Malouf, 2002; Sha and Pereira, 2003). This method has been demonstrated to be effective for (non-convex) log-linear models with latent variables (Clark and Curran, 2004; Petrov et al., 2007). Each L-BFGS iteration requires the objective value and its gradient with respect to the model parameters. These are calculated using inside-outside inference over the feature forest defined by the SCFG parse chart of f yielding the partition function, ZΛ(f), required for the log-likelihood, and the marginals, required for its derivatives. Efficiently calculating the objective and its gradient requires two separate packed charts, each representing a derivation forest. The first one is the full chart over the space of possible derivations given the 2In general, any conjugate prior could be used instead of a simple Gaussian. source sentence. The inside-outside algorithm over this chart gives the marginal probabilities for each chart cell, from which we can find the feature expectations. The second chart contains the space of derivations which produce the reference translation from the source. The derivations in this chart are a subset of those in the full derivation chart. 
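In terms of these two packed charts, the regularised objective (4) and its gradient (5) can be assembled as in the sketch below; the chart constructors and their log_partition and expected_features accessors are assumed interfaces standing in for the inside-outside computations described next, not the actual implementation.

import numpy as np

def objective_and_gradient(training_data, lam, sigma2, full_chart, reference_chart):
    """Penalised log-likelihood (4) and its gradient (5) for the latent-variable
    model.  full_chart(f, lam) builds the chart over all derivations of f, and
    reference_chart(e, f, lam) the chart over derivations yielding e; both are
    assumed to expose .log_partition (log of the summed inside scores of the
    spanning cell) and .expected_features() (feature expectations under the
    chart's derivation distribution)."""
    loglik = -np.dot(lam, lam) / (2.0 * sigma2)   # zero-mean Gaussian prior
    grad = -lam / sigma2

    for e, f in training_data:
        full = full_chart(f, lam)
        ref = reference_chart(e, f, lam)
        # log p(e|f) = log sum over derivations in Delta(e,f) minus log Z(f)
        loglik += ref.log_partition - full.log_partition
        # E_{p(d|e,f)}[h_k] - E_{p(e|f)}[h_k]
        grad = grad + ref.expected_features() - full.expected_features()

    return loglik, grad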
Again, we use the inside-outside algorithm to find the ‘reference’ feature expectations from this chart. These expectations are analogous to the empirical observation of maximum entropy classifiers. Given these two charts we can calculate the loglikelihood of the reference translation as the insidescore from the sentence spanning cell of the reference chart, normalised by the inside-score of the spanning cell from the full chart. The gradient is calculated as the difference of the feature expectations of the two charts. Clark and Curran (2004) provides a more complete discussion of parsing with a loglinear model and latent variables. The full derivation chart is produced using a CYK parser in the same manner as Chiang (2005), and has complexity O(|e|3). We produce the reference chart by synchronously parsing the source and reference sentences using a variant of CYK algorithm over two dimensions, with a time complexity of O(|e|3|f|3). This is an instance of the ITG alignment algorithm (Wu, 1997). This step requires the reference translation for each training instance to be contained in the model’s hypothesis space. Achieving full coverage implies inducing a grammar which generates all observed source-target pairs, which is difficult in practise. Instead we discard the unreachable portion of the training sample (24% in our experiments). The proportion of discarded sentences is a function of the grammar used. Extraction heuristics other than the method used herein (Chiang, 2007) could allow complete coverage (e.g., Galley et al. (2004)). 3.3 Decoding Accounting for all derivations of a given translation should benefit not only training, but also decoding. Unfortunately marginalising over derivations in decoding is NP-complete. The standard solution is to approximate the maximum probability translation using a single derivation (Koehn et al., 2003). Here we approximate the sum over derivations directly using a beam search in which we produce a beam of high probability translation sub-strings for each cell in the parse chart. This algorithm is sim203 X[1,2] on X[2,3] the X[3,4] table X[1,3] on the X[2,4] the table X[1,3] on the table X[3,4] chart X[2,4] the chart X[1,3] on the chart s 1 sur 2 la 3 table 4 Figure 4. Hypergraph representation of max translation decoding. Each chart cell must store the entire target string generated. ilar to the methods for decoding with a SCFG intersected with an n-gram language model, which require language model contexts to be stored in each chart cell. However, while Chiang (2005) stores an abbreviated context composed of the n −1 target words on the left and right edge of the target substring, here we store the entire target string. Additionally, instead of maximising scores in each beam cell, we sum the inside scores for each derivation that produces a given string for that cell. When the beam search is complete we have a list of translations in the top beam cell spanning the entire source sentence along with their approximated inside derivation scores. Thus we can assign each translation string a probability by normalising its inside score by the sum of the inside scores of all the translations spanning the entire sentence. Figure 4 illustrates the search process for the simple grammar from Table 2. Each graph node represents a hypothesis translation substring covering a sub-span of the source string. The space of translation sub-strings is exponential in each cell’s span, and our algorithm can only sum over a small fraction of the possible strings. 
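The per-cell bookkeeping of this beam search can be sketched as follows; the function names and the exact tie-handling are our own choices, not the decoder's code.

```python
import numpy as np
from collections import defaultdict

def combine_cell(candidates, beam_size=5000):
    """One chart cell of max-translation decoding: merge derivations that
    yield the same target string by summing their inside scores (log domain),
    then keep only the `beam_size` highest-scoring strings.
    `candidates` is assumed to be an iterable of (target_string, log_inside)
    pairs obtained by applying grammar rules to the sub-spans' beams."""
    merged = defaultdict(lambda: float("-inf"))
    for target, log_inside in candidates:
        merged[target] = np.logaddexp(merged[target], log_inside)
    beam = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return beam[:beam_size]

def normalise_top_cell(beam):
    """Turn the summed inside scores of the sentence-spanning cell into
    (approximate) translation probabilities."""
    scores = np.array([s for _, s in beam])
    log_z = np.logaddexp.reduce(scores)
    return [(target, float(np.exp(s - log_z))) for (target, _), s in zip(beam, scores)]
```

Because each cell keeps only a bounded number of strings, the summed scores at the top cell cover just part of each translation's derivations.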
Therefore the resulting probabilities are only estimates. However, as demonstrated in Section 4, this algorithm is considerably more effective than maximum derivation (Viterbi) decoding. 4 Evaluation Our model evaluation was motivated by the following questions: (1) the effect of maximising translations rather than derivations in training and decoding; (2) whether a regularised model performs better than a maximum likelihood model; (3) how the performance of our model compares with a frequency count based hierarchical system; and (4) how translation performance scales with the number of training examples. We performed all of our experiments on the Europarl V2 French-English parallel corpus.3 The training data was created by filtering the full corpus for all the French sentences between five and fifteen words in length, resulting in 170K sentence pairs. These limits were chosen as a compromise between experiment turnaround time and leaving a large enough corpus to obtain indicative results. The development and test data was taken from the 2006 NAACL and 2007 ACL workshops on machine translation, also filtered for sentence length.4 Tuning of the regularisation parameter and MERT training of the benchmark models was performed on dev2006, while the test set was the concatenation of devtest2006, test2006 and test2007, amounting to 315 development and 1164 test sentences. Here we focus on evaluating our model’s basic ability to learn a conditional distribution from simple binary features, directly comparable to those currently employed in frequency count models. As such, our base model includes a single binary identity feature per-rule, equivalent to the p(e|f) parameters defined on each rule in standard models. As previously noted, our model must be able to derive the reference sentence from the source for it to be included in training. For both our discriminative and benchmark (Hiero) we extracted our grammar on the 170K sentence corpus using the approach described in Chiang (2007), resulting in 7.8 million rules. The discriminative model was then trained on the training partition, however only 130K of the sentences were used as the model could not produce a derivation of the reference for the remaining sentences. There were many grammar rules that the discriminative model did not observe in a reference derivation, and thus could not assign their feature a positive weight. While the benchmark model has a 3http://www.statmt.org/europarl/ 4http://www.statmt.org/wmt0{6,7} 204 Decoding Training derivation translation All Derivations 28.71 31.23 Single Derivation 26.70 27.32 ML (σ2 = ∞) 25.57 25.97 Table 1. A comparison on the impact of accounting for all derivations in training and decoding (development set). positive count for every rule (7.8M), the discriminative model only observes 1.7M rules in actual reference derivations. Figure 1 illustrates the massive ambiguity present in the training data, with fifteen word sentences averaging over 70M reference derivations. Performance is evaluated using cased BLEU4 score on the test set. Although there is no direct relationship between BLEU and likelihood, it provides a rough measure for comparing performance. 
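For the base model just described, one binary identity feature per rule, the derivation-level features of equation (2) are simply rule counts; the sketch below (our illustration, with a hypothetical `rule_id` index) makes this concrete.

```python
from collections import defaultdict

def rule_identity_features(derivation_rules, rule_id):
    """H_k(d, e, f) for the base model: one indicator feature per grammar
    rule, summed over the rule tokens used in the derivation (eq. 2).
    `derivation_rules` is the multiset of rules in a derivation and
    `rule_id` maps each extracted rule to its feature index."""
    h = defaultdict(float)
    for rule in derivation_rules:
        h[rule_id[rule]] += 1.0
    return h          # sparse feature vector: feature index -> count
```

A rule that never occurs in any reference derivation only ever contributes to the full-chart expectation, which is why the model cannot assign it a positive weight.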
Derivational ambiguity Table 1 shows the impact of accounting for derivational ambiguity in training and decoding.5 There are two options for training, we could use our latent variable model and optimise the probability of all derivations of the reference translation, or choose a single derivation that yields the reference and optimise its probability alone. The second option raises the difficult question of which one, of the thousands available, we should choose? We use the derivation which contains the most rules. The intuition is that small rules are likely to appear more frequently, and thus generalise better to a test set. In decoding we can search for the maximum probability derivation, which is the standard practice in SMT, or for the maximum probability translation which is what we actually want from our model, i.e. the best translation. The results clearly indicate the value in optimising translations, rather than derivations. Maxtranslation decoding for the model trained on single derivations has only a small positive effect, while for the latent variable model the impact is much larger.6 For example, our max-derivation model trained on the Europarl data translates carte sur la table as on the table card. This error in the reordering of card (which is an acceptable translation of carte) is due to the rule ⟨X⟩→⟨carte X 1, X 1 card⟩being the highest scoring rule for carte. This is reasonable, as 5When not explicitly stated, both here and in subsequent results, the regularisation parameter was set to one, σ2 = 1. 6We also experimented with using max-translation decoding for standard MER trained translation models, finding that it had a small negative impact on BLEU score. G G G G G G G beam width development BLEU (%) 29.0 29.5 30.0 30.5 31.0 31.5 100 1k 10k Figure 5. The effect of the beam width (log-scale) on maxtranslation decoding (development set). carte is a noun, which in the training data, is often observed with a trailing adjective which needs to be reordered when translating into English. In the example there is no adjective, but the simple hierarchical grammar cannot detect this. The max-translation model finds a good translation card on the table. This is due to the many rules that enforce monotone ordering around sur la, ⟨X⟩→⟨X 1 sur, X 1 in⟩ ⟨X⟩→⟨X 1 sur la X 2, X 1 in the X 2⟩etc. The scores of these many monotone rules sum to be greater than the reordering rule, thus allowing the model to use the weight of evidence to settle on the correct ordering. Having established that the search for the best translation is effective, the question remains as to how the beam width over partial translations affects performance. Figure 5 shows the relationship between beam width and development BLEU. Even with a very tight beam of 100, max-translation decoding outperforms maximum-derivation decoding, and performance is increasing even at a width of 10k. In subsequent experiments we use a beam of 5k which provides a good trade-off between performance and speed. Regularisation Table 1 shows that the performance of an unregularised maximum likelihood model lags well behind the regularised maxtranslation model. From this we can conclude that the maximum likelihood model is overfitting the training set. We suggest that is a result of the degenerate solutions of the conditional maximum likelihood estimate, as described in DeNero et al. (2006). 
Here we assert that our regularised maximum a pos205 Grammar Rules ML MAP (σ2 = ∞) (σ2 = 1) ⟨X⟩→⟨carte, map⟩ 1.0 0.5 ⟨X⟩→⟨carte, notice⟩ 0.0 0.5 ⟨X⟩→⟨sur, on⟩ 1.0 1.0 ⟨X⟩→⟨la, the⟩ 1.0 1.0 ⟨X⟩→⟨table, table⟩ 1.0 0.5 ⟨X⟩→⟨table, chart⟩ 0.0 0.5 ⟨X⟩→⟨carte sur, notice on⟩ 1.0 0.5 ⟨X⟩→⟨carte sur, map on⟩ 0.0 0.5 ⟨X⟩→⟨sur la, on the⟩ 1.0 1.0 ⟨X⟩→⟨la table, the table⟩ 0.0 0.5 ⟨X⟩→⟨la table, the chart⟩ 1.0 0.5 Training data: carte sur la table ↔map on the table carte sur la table ↔notice on the chart Table 2. Comparison of the susceptibility to degenerate solutions for a ML and MAP optimised model, using a simple grammar with one parameter per rule and a monotone glue rule: ⟨X⟩→⟨X 1 X 2 , X 1 X 2 ⟩ teriori model avoids such solutions. This is illustrated in Table 2, which shows the conditional probabilities for rules, obtained by locally normalising the rule feature weights for a simple grammar extracted from the ambiguous pair of sentences presented in DeNero et al. (2006). The first column of conditional probabilities corresponds to a maximum likelihood estimate, i.e., without regularisation. As expected, the model finds a degenerate solution in which overlapping rules are exploited in order to minimise the entropy of the rule translation distributions. The second column shows the solution found by our model when regularised by a Gaussian prior with unit variance. Here we see that the model finds the desired solution in which the true ambiguity of the translation rules is preserved. The intuition is that in order to find a degenerate solution, dispreferred rules must be given large negative weights. However the prior penalises large weights, and therefore the best strategy for the regularised model is to evenly distribute probability mass. Translation comparison Having demonstrated that accounting for derivational ambiguity leads to improvements for our discriminative model, we now place the performance of our system in the context of the standard approach to hierarchical translation. To do this we use our own implementation of Hiero (Chiang, 2007), with the same grammar but with the traditional generative feature set trained in a linear model with minimum BLEU training. The feature set includes: a trigram language model (lm) trained System Test (BLEU) Discriminative max-derivation 25.78 Hiero (pd, gr, rc, wc) 26.48 Discriminative max-translation 27.72 Hiero (pd, pr, plex d , plex r , gr, rc, wc) 28.14 Hiero (pd, pr, plex d , plex r , gr, rc, wc, lm) 32.00 Table 3. Test set performance compared with a standard Hiero system on the English side of the unfiltered Europarl corpus; direct and reverse translation scores estimated as relative frequencies (pd, pr); lexical translation scores (plex d , plex r ), a binary flag for the glue rule which allows the model to (dis)favour monotone translation (gr); and rule and target word counts (rc, wc). Table 3 shows the results of our system on the test set. Firstly we show the relative scores of our model against Hiero without using reverse translation or lexical features.7 This allows us to directly study the differences between the two translation models without the added complication of the other features. As well as both modelling the same distribution, when our model is trained with a single parameter per-rule these systems have the same parameter space, differing only in the manner of estimation. 
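The contrast in Table 2 can be reproduced by locally normalising per-rule weights over rules that share a source side; the weights used below are invented solely to illustrate the two regimes and are not values estimated by the model.

```python
import math
from collections import defaultdict

def local_rule_probs(weights):
    """Conditional rule probabilities p(target | source), obtained by
    normalising exp(weight) over rules with the same source side,
    as done to produce the columns of Table 2."""
    by_source = defaultdict(list)
    for (src, tgt), lam in weights.items():
        by_source[src].append((tgt, lam))
    probs = {}
    for src, alternatives in by_source.items():
        z = sum(math.exp(lam) for _, lam in alternatives)
        for tgt, lam in alternatives:
            probs[(src, tgt)] = math.exp(lam) / z
    return probs

# Unregularised training can push one alternative's weight far negative,
# giving essentially the degenerate 1.0 / 0.0 split of the ML column ...
print(local_rule_probs({("carte", "map"): 0.0, ("carte", "notice"): -20.0}))
# ... while the prior keeps both weights small, yielding the even
# 0.5 / 0.5 split of the MAP column.
print(local_rule_probs({("carte", "map"): 0.0, ("carte", "notice"): 0.0}))
```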
Additionally we show the scores achieved by MERT training the full set of features for Hiero, with and without a language model.8 We provide these results for reference. To compare our model directly with these systems we would need to incorporate additional features and a language model, work which we have left for a later date. The relative scores confirm that our model, with its minimalist feature set, achieves comparable performance to the standard feature set without the language model. This is encouraging as our model was trained to optimise likelihood rather than BLEU, yet it is still competitive on that metric. As expected, the language model makes a significant difference to BLEU, however we believe that this effect is orthogonal to the choice of base translation model, thus we would expect a similar gain when integrating a language model into the discriminative system. An informal comparison of the outputs on the development set, presented in Table 4, suggests that the 7Although the most direct comparison for the discriminative model would be with pd model alone, omitting the gr, rc and wc features and MERT training produces poor translations. 8Hiero (pd, pr, plex d , plex r , gr, rc, wc, lm) represents stateof-the-art performance on this training/testing set. 206 S: C’est pourquoi nous souhaitons que l’affaire nous soit renvoy´ee. R: We therefore want the matter re-referred to ourselves. D: That is why we want the that matters we to be referred back. T: That is why we would like the matter to be referred back. H: That is why we wish that the matter we be referred back. S: Par contre, la transposition dans les ´Etats membres reste trop lente. R: But implementation by the Member States has still been too slow. D: However, it is implemented in the Member States is still too slow. T: However, the implementation measures in Member States remains too slow. H: In against, transposition in the Member States remains too slow. S: Aussi, je consid`ere qu’il reste ´enorm´ement `a faire dans ce domaine. R: I therefore consider that there is an incredible amount still to do in this area. D: So I think remains a lot to be done in this field. T: So I think there is still much to be done in this area. H: Therefore, I think it remains a vast amount to do in this area. Table 4. Example output produced by the maxderivation (D), max-translation (T) decoding algorithms and Hiero(pd, pr, plex d , plex r , gr, rc, wc) (H) models, relative to the source (S) and reference (R). translation optimising discriminative model more often produces quite fluent translations, yet not in ways that would lead to an increase in BLEU score.9 This could be considered a side-effect of optimising likelihood rather than BLEU. Scaling In Figure 6 we plot the scaling characteristics of our models. The systems shown in the graph use the full grammar extracted on the 170k sentence corpus. The number of sentences upon which the iterative training algorithm is used to estimate the parameters is varied from 10k to the maximum 130K for which our model can reproduce the reference translation. As expected, the more data used to train the system, the better the performance. However, as the performance is still increasing significantly when all the parseable sentences are used, it is clear that the system’s performance is suffering from the large number (40k) of sentences that are discarded before training. 
5 Discussion and Further Work We have shown that explicitly accounting for competing derivations yields translation improvements. 9Hiero was MERT trained on this set and has a 2% higher BLEU score compared to the discriminative model. G G G G G G training sentences development BLEU (%) 26 27 28 29 30 31 10k 25k 50k 75k 100k 130k Figure 6. Learning curve showing that the model continues to improve as we increase the number of training sentences (development set) Our model avoids the estimation biases associated with heuristic frequency count approaches and uses standard regularisation techniques to avoid degenerate maximum likelihood solutions. Having demonstrated the efficacy of our model with very simple features, the logical next step is to investigate more expressive features. Promising features might include those over source side reordering rules (Wang et al., 2007) or source context features (Carpuat and Wu, 2007). Rule frequency features extracted from large training corpora would help the model to overcome the issue of unreachable reference sentences. Such approaches have been shown to be effective in log-linear wordalignment models where only a small supervised corpus is available (Blunsom and Cohn, 2006). Finally, while in this paper we have focussed on the science of discriminative machine translation, we believe that with suitable engineering this model will advance the state-of-the-art. To do so would require integrating a language model feature into the max-translation decoding algorithm. The use of richer, more linguistic grammars (e.g., Galley et al. (2004)) may also improve the system. Acknowledgements The authors acknowledge the support of the EPSRC (Blunsom & Osborne, grant EP/D074959/1; Cohn, grant GR/T04557/01). 207 References Phil Blunsom and Trevor Cohn. 2006. Discriminative word alignment with conditional random fields. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL-2006), pages 65–72, Sydney, Australia, July. Marine Carpuat and Dekai Wu. 2007. Improving statistical machine translation using word sense disambiguation. In Proc. of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP2007), pages 61–72, Prague, Czech Republic. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of the 43rd Annual Meeting of the ACL (ACL-2005), pages 263– 270, Ann Arbor, Michigan, June. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proc. of the 42nd Annual Meeting of the ACL (ACL-2004), pages 103–110, Barcelona, Spain. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4). John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In Proc. of the HLT-NAACL 2006 Workshop on Statistical Machine Translation, pages 31–38, New York City, June. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. of the 4th International Conference on Human Language Technology Research and 5th Annual Meeting of the NAACL (HLT-NAACL 2004), Boston, USA, May. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. 
Scalable inference and training of context-rich syntactic translation models. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL-2006), pages 961–968, Sydney, Australia, July. Abraham Ittycheriah and Salim Roukos. 2007. Direct translation model 2. In Proc. of the 7th International Conference on Human Language Technology Research and 8th Annual Meeting of the NAACL (HLT-NAACL 2007), pages 57–64, Rochester, USA. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of the 3rd International Conference on Human Language Technology Research and 4th Annual Meeting of the NAACL (HLT-NAACL 2003), pages 81–88, Edmonton, Canada, May. Philip M. Lewis II and Richard E. Stearns. 1968. Syntaxdirected transduction. J. ACM, 15(3):465–488. Percy Liang, Alexandre Bouchard-Cˆot´e, Dan Klein, and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL2006), pages 761–768, Sydney, Australia, July. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In Proc. of the 6th Conference on Natural Language Learning (CoNLL-2002), pages 49–55, Taipei, Taiwan, August. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the 41st Annual Meeting of the ACL (ACL-2003), pages 160–167, Sapporo, Japan. Slav Petrov, Adam Pauls, and Dan Klein. 2007. Discriminative log-linear grammars with latent variables. In Advances in Neural Information Processing Systems 20 (NIPS), Vancouver, Canada. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of the 3rd International Conference on Human Language Technology Research and 4th Annual Meeting of the NAACL (HLT-NAACL 2003), pages 134–141, Edmonton, Canada. Christoph Tillmann and Tong Zhang. 2007. A block bigram prediction model for statistical machine translation. ACM Transactions Speech Language Processing, 4(3):6. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proc. of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP-2007), pages 737–745, Prague, Czech Republic. Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for statistical machine translation. In Proc. of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP-2007), pages 764–773, Prague, Czech Republic. Benjamin Wellington, Joseph Turian, Chris Pike, and I. Dan Melamed. 2006. Scalable purelydiscriminative training for word and tree transducers. In Proc. of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA), Boston, USA. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. 208
2008
24
Proceedings of ACL-08: HLT, pages 209–217, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Efficient Multi-pass Decoding for Synchronous Context Free Grammars Hao Zhang and Daniel Gildea Computer Science Department University of Rochester Rochester, NY 14627 Abstract We take a multi-pass approach to machine translation decoding when using synchronous context-free grammars as the translation model and n-gram language models: the first pass uses a bigram language model, and the resulting parse forest is used in the second pass to guide search with a trigram language model. The trigram pass closes most of the performance gap between a bigram decoder and a much slower trigram decoder, but takes time that is insignificant in comparison to the bigram pass. An additional fast decoding pass maximizing the expected count of correct translation hypotheses increases the BLEU score significantly. 1 Introduction Statistical machine translation systems based on synchronous grammars have recently shown great promise, but one stumbling block to their widespread adoption is that the decoding, or search, problem during translation is more computationally demanding than in phrase-based systems. This complexity arises from the interaction of the tree-based translation model with an n-gram language model. Use of longer n-grams improves translation results, but exacerbates this interaction. In this paper, we present three techniques for attacking this problem in order to obtain fast, high-quality decoders. First, we present a two-pass decoding algorithm, in which the first pass explores states resulting from an integrated bigram language model, and the second pass expands these states into trigram-based states. The general bigram-to-trigram technique is common in speech recognition (Murveit et al., 1993), where lattices from a bigram-based decoder are re-scored with a trigram language model. We examine the question of whether, given the reordering inherent in the machine translation problem, lower order n-grams will provide as valuable a search heuristic as they do for speech recognition. Second, we explore heuristics for agenda-based search, and present a heuristic for our second pass that combines precomputed language model information with information derived from the first pass. With this heuristic, we achieve the same BLEU scores and model cost as a trigram decoder with essentially the same speed as a bigram decoder. Third, given the significant speedup in the agenda-based trigram decoding pass, we can rescore the trigram forest to maximize the expected count of correct synchronous constituents of the model, using the product of inside and outside probabilities. Maximizing the expected count of synchronous constituents approximately maximizes BLEU. We find a significant increase in BLEU in the experiments, with minimal additional time. 2 Language Model Integrated Decoding for SCFG We begin by introducing Synchronous Context Free Grammars and their decoding algorithms when an n-gram language model is integrated into the grammatical search space. A synchronous CFG (SCFG) is a set of contextfree rewriting rules for recursively generating string pairs. Each synchronous rule is a pair of CFG rules 209 with the nonterminals on the right hand side of one CFG rule being one-to-one mapped to the other CFG rule via a permutation π. We adopt the SCFG notation of Satta and Peserico (2005). 
Superscript indices in the right-hand side of grammar rules: X →X(1) 1 ...X(n) n , X(π(1)) π(1) ...X(π(n)) π(n) indicate that the nonterminals with the same index are linked across the two languages, and will eventually be rewritten by the same rule application. Each Xi is a variable which can take the value of any nonterminal in the grammar. In this paper, we focus on binary SCFGs and without loss of generality assume that only the preterminal unary rules can generate terminal string pairs. Thus, we are focusing on Inversion Transduction Grammars (Wu, 1997) which are an important subclass of SCFG. Formally, the rules in our grammar include preterminal unary rules: X →e/f for pairing up words or phrases in the two languages and binary production rules with straight or inverted orders that are responsible for building up upperlevel synchronous structures. They are straight rules written: X →[Y Z] and inverted rules written: X →⟨Y Z⟩. Most practical non-binary SCFGs can be binarized using the synchronous binarization technique by Zhang et al. (2006). The Hiero-style rules of (Chiang, 2005), which are not strictly binary but binary only on nonterminals: X →yu X(1) you X(2); have X(2) with X(1) can be handled similarly through either offline binarization or allowing a fixed maximum number of gap words between the right hand side nonterminals in the decoder. For these reasons, the parsing problems for more realistic synchronous CFGs such as in Chiang (2005) and Galley et al. (2006) are formally equivalent to ITG. Therefore, we believe our focus on ITG for the search efficiency issue is likely to generalize to other SCFG-based methods. Without an n-gram language model, decoding using SCFG is not much different from CFG parsing. At each time a CFG rule is applied on the input string, we apply the synchronized CFG rule for the output language. From a dynamic programming point of view, the DP states are X[i, j], where X ranges over all possible nonterminals and i and j range over 0 to the input string length |w|. Each state stores the best translations obtainable. When we reach the top state S[0, |w|], we can get the best translation for the entire sentence. The algorithm is O(|w|3). However, when we want to integrate an n-gram language model into the search, our goal is searching for the derivation whose total sum of weights of productions and n-gram log probabilities is maximized. Now the adjacent span-parameterized states X[i, k] and X[k, j] can interact with each other by “peeping into” the leading and trailing n −1 words on the output side for each state. Different boundary words differentiate the spanparameterized states. Thus, to preserve the dynamic programming property, we need to refine the states by adding the boundary words into the parameterization. The LM-integrated states are represented as X[i, j, u1,..,n−1, v1,..,n−1]. Since the number of variables involved at each DP step has increased to 3 + 4(n −1), the decoding algorithm is asymptotically O(|w|3+4(n−1)). Although it is possible to use the “hook” trick of Huang et al. (2005) to factorize the DP operations to reduce the complexity to O(|w|3+3(n−1)), when n is greater than 2, the complexity is still prohibitive. 3 Multi-pass LM-Integrated Decoding In this section, we describe a multi-pass progressive decoding technique that gradually augments the LM-integrated states from lower orders to higher orders. 
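Before turning to the multi-pass scheme, the following sketch (ours; `bigram_logp` is a hypothetical language model callable) shows how two bigram-integrated items combine under a straight rule, where the only new language model score is the bigram across the seam.

```python
def combine_straight(y, z, rule_logp, bigram_logp):
    """Straight rule X -> [Y Z] over bigram-integrated items.
    An item is (sym, i, j, left_word, right_word, log_inside): i and j span
    the input, left_word/right_word are the output's boundary words."""
    _, i, k, u, u_right, y_score = y
    _, k2, j, z_left, v, z_score = z
    assert k == k2, "items must cover adjacent input spans"
    # log P(z_left | u_right) is the only n-gram crossing the seam.
    score = y_score + z_score + rule_logp + bigram_logp(u_right, z_left)
    return ("X", i, j, u, v, score)

# For an inverted rule X -> <Y Z>, the target-side order of Y and Z is
# swapped, so the seam bigram is scored between Z's rightmost and Y's
# leftmost output words instead.
```

With a trigram model each item must carry two boundary words per side, which is the source of the O(|w|^(3+4(n-1))) behaviour noted above.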
For instance, a bigram-integrated state [X, i, j, u, v] is said to be a coarse-level state of a trigram-integrate state [X, i, j, u, u′, v′, v], because the latter state refines the previous by specifying more inner words. Progressive search has been used for HMM’s in speech recognition (Murveit et al., 1993). The gen210 eral idea is to use a simple and fast decoding algorithm to constrain the search space of a following more complex and slower technique. More specifically, a bigram decoding pass is executed forward and backward to figure out the probability of each state. Then the states can be pruned based on their global score using the product of inside and outside probabilities. The advanced decoding algorithm will use the constrained space (a lattice in the case of speech recognition) as a grammatical constraint to help it focus on a smaller search space on which more discriminative features are brought in. The same idea has been applied to forests for parsing. Charniak and Johnson (2005) use a PCFG to do a pass of inside-outside parsing to reduce the state space of a subsequent lexicalized n-best parsing algorithm to produce parses that are further re-ranked by a MaxEnt model. We take the same view as in speech recognition that a trigram integrated model is a finer-grained model than bigram model and in general we can do an n −1-gram decoding as a predicative pass for the following n-gram pass. We need to do insideoutside parsing as coarse-to-fine parsers do. However, we use the outside probability or cost information differently. We do not combine the inside and outside costs of a simpler model to prune the space for a more complex model. Instead, for a given finergained state, we combine its true inside cost with the outside cost of its coarse-level counter-part to estimate its worthiness of being explored. The use of the outside cost from a coarser-level as the outside estimate makes our method naturally fall in the framework of A* parsing. Klein and Manning (2003) describe an A* parsing framework for monolingual parsing and admissible outside estimates that are computed using inside/outside parsing algorithm on simplified PCFGs compared to the original PCFG. Zhang and Gildea (2006) describe A* for ITG and develop admissible heuristics for both alignment and decoding. Both have shown the effectiveness of A* in situations where the outside estimate approximates the true cost closely such as when the sentences are short. For decoding long sentences, it is difficult to come up with good admissible (or inadmissible) heuristics. If we can afford a bigram decoding pass, the outside cost from a bigram model is conceivably a very good estimate of the outside cost using a trigram model since a bigram language model and a trigram language model must be strongly correlated. Although we lose the guarantee that the bigram-pass outside estimate is admissible, we expect that it approximates the outside cost very closely, thus very likely to effectively guide the heuristic search. 3.1 Inside-outside Coarse Level Decoding We describe the coarse level decoding pass in this section. The decoding algorithms for the coarse level and the fine level do not necessarily have to be the same. The fine level decoding algorithm is an A* algorithm. The coarse level decoding algorithm can be CKY or A* or other alternatives. 
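In code, this guidance amounts to projecting each fine-level state onto its coarse counterpart and adding that coarse outside cost to the exact inside cost; the sketch below is our illustration (log-domain scores, hypothetical table names `inside_fine` and `outside_coarse`).

```python
def coarse_projection(state):
    """Map a trigram-integrated state (X, i, j, u1, u2, v1, v2) to its
    bigram-level counterpart, which keeps only the outermost boundary words."""
    X, i, j, u1, _u2, _v1, v2 = state
    return (X, i, j, u1, v2)

def exploration_priority(state, inside_fine, outside_coarse):
    """Worthiness of exploring a fine-level state: its true inside cost plus
    the outside cost of its coarse-level counterpart from the bigram pass.
    Coarse states that never reached the goal get -inf (an infinite cost in
    the paper's wording), so they are expanded only once all viable states
    have been exhausted."""
    return inside_fine[state] + outside_coarse.get(coarse_projection(state),
                                                   float("-inf"))
```

Section 3.2 sharpens this estimate further with a best-border term over the state's inner boundary words.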
Conceptually, the algorithm is finding the shortest hyperpath in the hypergraph in which the nodes are states like X[i, j, u1,..,n−1, v1,..,n−1], and the hyperedges are the applications of the synchronous rules to go from right-hand side states to left-hand side states. The root of the hypergraph is a special node S′[0, |w|, ⟨s⟩, ⟨/s⟩] which means the entire input sentence has been translated to a string starting with the beginning-of-sentence symbol and ending at the end-of-sentence symbol. If we imagine a starting node that goes to all possible basic translation pairs, i.e., the instances of the terminal translation rules for the input, we are searching the shortest hyper path from the imaginary bottom node to the root. To help our outside parsing pass, we store the backpointers at each step of exploration. The outside parsing pass, however, starts from the root S′[|w|, ⟨s⟩, ⟨/s⟩] and follows the back-pointers downward to the bottom nodes. The nodes need to be visited in a topological order so that whenever a node is visited, its parents have been visited and its outside cost is over all possible outside parses. The algorithm is described in pseudocode in Algorithm 1. The number of hyperedges to traverse is much fewer than in the inside pass because not every state explored in the bottom up inside pass can finally reach the goal. As for normal outside parsing, the operations are the reverse of inside parsing. We propagate the outside cost of the parent to its children by combining with the inside cost of the other children and the interaction cost, i.e., the language model cost between the focused child and the other children. Since we want to approximate the Viterbi 211 outside cost, it makes sense to maximize over all possible outside costs for a given node, to be consistent with the maximization of the inside pass. For the nodes that have been explored in the bottom up pass but not in the top-down pass, we set their outside cost to be infinity so that their exploration is preferred only when the viable nodes from the first pass have all been explored in the fine pass. 3.2 Heuristics for Fine-grained Decoding In this section, we summarize the heuristics for finer level decoding. The motivation for combining the true inside cost of the fine-grained model and the outside estimate given by the coarse-level parsing is to approximate the true global cost of a fine-grained state as closely as possible. We can make the approximation even closer by incorporating local higherorder outside n-gram information for a state of X[i, j, u1,..,n−1, v1,..,n−1] into account. We call this the best-border estimate. For example, the bestborder estimate for trigram states is: hBB(X, i, j, u1, u2, v1, v2) =  max s∈S(i,j) Plm(u2 | s, u1)  ·  max s∈S(i,j) Plm(s | v1, v2)  where S(i, j) is the set of candidate target language words outside the span of (i, j). hBB is the product of the upper bounds for the two on-the-border n-grams. This heuristic function was one of the admissible heuristics used by Zhang and Gildea (2006). The benefit of including the best-border estimate is to refine the outside estimate with respect to the inner words which refine the bigram states into the trigram states. If we do not take the inner words into consideration when computing the outside cost, all states that map to the same coarse level state would have the same outside cost. 
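Refining the previous sketch, the best-border term can be read off precomputed language model maxima over candidate words outside the span; `lm.logprob` and `outside_words` below are hypothetical stand-ins for those tables.

```python
def best_border(lm, outside_words, u1, u2, v1, v2):
    """h_BB for a trigram-integrated state, in the log domain: the best
    trigram that could be completed on the left border plus the best one on
    the right border.  `outside_words` approximates S(i, j), the candidate
    target words for the material outside the span [i, j]."""
    left  = max(lm.logprob(u2, context=(s, u1)) for s in outside_words)
    right = max(lm.logprob(s, context=(v1, v2)) for s in outside_words)
    return left + right
```

These two maxima supply exactly the inner-word information that the coarse outside estimate cannot distinguish.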
When the simple best-border estimate is combined with the coarse-level outside estimate, it can further boost the search as will be shown in the experiments. To summarize, our recipe for faster decoding is that using β(X[i, j, u1,..,n−1, v1,..,n−1]) + α(X[i, j, u1, vn−1]) + hBB(X, i, j, u1,...,n, v1,...,n) (1) where β is the Viterbi inside cost and α is the Viterbi outside cost, to globally prioritize the n-gram integrated states on the agenda for exploration. 3.3 Alternative Efficient Decoding Algorithms The complexity of n-gram integrated decoding for SCFG has been tackled using other methods. The hook trick of Huang et al. (2005) factorizes the dynamic programming steps and lowers the asymptotic complexity of the n-gram integrated decoding, but has not been implemented in large-scale systems where massive pruning is present. The cube-pruning by Chiang (2007) and the lazy cube-pruning of Huang and Chiang (2007) turn the computation of beam pruning of CYK decoders into a top-k selection problem given two columns of translation hypotheses that need to be combined. The insight for doing the expansion top-down lazily is that there is no need to uniformly explore every cell. The algorithm starts with requesting the first best hypothesis from the root. The request translates into requests for the k-bests of some of its children and grandchildren and so on, because re-ranking at each node is needed to get the top ones. Venugopal et al. (2007) also take a two-pass decoding approach, with the first pass leaving the language model boundary words out of the dynamic programming state, such that only one hypothesis is retained for each span and grammar symbol. 4 Decoding to Maximize BLEU The ultimate goal of efficient decoding to find the translation that has a highest evaluation score using the least time possible. Section 3 talks about utilizing the outside cost of a lower-order model to estimate the outside cost of a higher-order model, boosting the search for the higher-order model. By doing so, we hope the intrinsic metric of our model agrees with the extrinsic metric of evaluation so that fast search for the model is equivalent to efficient decoding. But the mismatch between the two is evident, as we will see in the experiments. In this section, 212 Algorithm 1 OutsideCoarseParsing() for all X[i, j, u, v] in topological order do for all children pairs pointed to by the back-pointers do if X →[Y Z] then  the two children are Y [i, k, u, u′] and Z[k, j, v′, v] α(Y [i, k, u, u′]) = max {α(Y [i, k, u, u′]), α(X[i, j, u, v]) + β(Z[k, j, v′, v]) + rule(X →[Y Z]) + bigram(u′, v′)} α(Z[k, j, v′, v]) = max {α(Z[k, j, v′, v]), α(X[i, j, u, v]) + β(Y [i, k, u, u′]) + rule(X →[Y Z]) + bigram(u′, v′)} end if if X →⟨Y Z⟩then  the two children are Y [i, k, v′, v] and Z[k, j, u, u′] α(Y [i, k, v′, v]) = max {α(Y [i, k, v′, v]), α(X[i, j, u, v]) + β(Z[k, j, u, u′]) + rule(X →⟨Y Z⟩) + bigram(u′, v′)} α(Z[k, j, u, u′]) = max {α(Z[k, j, u, u′]), α(X[i, j, u, v]) + β(Y [i, k, v′, v]) + rule(X →⟨Y Z⟩) + bigram(u′, v′)} end if end for end for we deal with the mismatch by introducing another decoding pass that maximizes the expected count of synchronous constituents in the tree corresponding to the translation returned. BLEU is based on n-gram precision, and since each synchronous constituent in the tree adds a new 4-gram to the translation at the point where its children are concatenated, the additional pass approximately maximizes BLEU. 
Kumar and Byrne (2004) proposed the framework of Minimum Bayesian Risk (MBR) decoding that minimizes the expected loss given a loss function. Their MBR decoding is a reranking pass over an nbest list of translations returned by the decoder. Our algorithm is another dynamic programming decoding pass on the trigram forest, and is similar to the parsing algorithm for maximizing expected labelled recall presented by Goodman (1996). 4.1 Maximizing the expected count of correct synchronous constituents We introduce an algorithm that maximizes the expected count of correct synchronous constituents. Given a synchronous constituent specified by the state [X, i, j, u, u′, v′, v], its probability of being correct in the model is EC([X, i, j, u, u′, v′, v]) = α([X, i, j, u, u′, v′, v]) · β([X, i, j, u, u′, v′, v]), where α is the outside probability and β is the inside probability. We approximate β and α using the Viterbi probabilities. Since decoding from bottom up in the trigram pass already gives us the inside Viterbi scores, we only have to visit the nodes in the reverse order once we reach the root to compute the Viterbi outside scores. The outside-pass Algorithm 1 for bigram decoding can be generalized to the trigram case. We want to maximize over all translations (synchronous trees) T in the forest after the trigram decoding pass according to max T X [X,i,j,u,u′,v′,v]∈T EC([X, i, j, u, u′, v′, v]). The expression can be factorized and computed using dynamic programming on the forest. 5 Experiments We did our decoding experiments on the LDC 2002 MT evaluation data set for translation of Chinese newswire sentences into English. The evaluation data set has 10 human translation references for each sentence. There are a total of 371 Chinese sentences of no more than 20 words in the data set. These sentences are the test set for our different versions of language-model-integrated ITG decoders. We evaluate the translation results by comparing them against the reference translations using the BLEU metric. 213 The word-to-word translation probabilities are from the translation model of IBM Model 4 trained on a 160-million-word English-Chinese parallel corpus using GIZA++. The phrase-to-phrase translation probabilities are trained on 833K parallel sentences. 758K of this was data made available by ISI, and another 75K was FBIS data. The language model is trained on a 30-million-word English corpus. The rule probabilities for ITG are trained using EM on a corpus of 18,773 sentence pairs with a total of 276,113 Chinese words and 315,415 English words. 5.1 Bigram-pass Outside Cost as Trigram-pass Outside Estimate We first fix the beam for the bigram pass, and change the outside heuristics for the trigram pass to show the difference before and after using the first-pass outside cost estimate and the border estimate. We choose the beam size for the CYK bigram pass to be 10 on the log scale. The first row of Table 1 shows the number of explored hyperedges for the bigram pass and its BLEU score. In the rows below, we compare the additional numbers of hyperedges that need to be explored in the trigram pass using different outside heuristics. It takes too long to finish using uniform outside estimate; we have to use a tight beam to control the agenda-based exploration. Using the bigram outside cost estimate makes a huge difference. 
Furthermore, using Equation 1, adding the additional heuristics on the best trigrams that can appear on the borders of the current hypothesis, on average we only need to explore 2700 additional hyperedges per sentence to boost the BLEU score from 21.77 to 23.46. The boost is so significant that overall the dominant part of search time is no longer the second pass but the first bigram pass (inside pass actually) which provides a constrained space and outside heuristics for the second pass. 5.2 Two-pass decoding versus One-pass decoding By varying the beam size for the first pass, we can plot graphs of model scores versus search time and BLEU scores versus search time as shown in Figure 1. We use a very large beam for the second pass due to the reason that the outside estimate for the second pass is discriminative enough to guide the Decoding Method Avg. Hyperedges BLEU Bigram Pass 167K 21.77 Trigram Pass UNI – – BO + 629.7K=796.7K 23.56 BO+BB +2.7K =169.7K 23.46 Trigram One-pass, with Beam 6401K 23.47 Table 1: Speed and BLEU scores for two-pass decoding. UNI stands for the uniform (zero) outside estimate. BO stands for the bigram outside cost estimate. BB stands for the best border estimate, which is added to BO. Decoder Time BLEU Model Score One-pass agenda 4317s 22.25 -208.849 One-pass CYK 3793s 22.89 -207.309 Multi-pass, CYK first agenda second pass 3689s 23.56 -205.344 MEC third pass 3749s 24.07 -203.878 Lazy-cube-pruning 3746s 22.16 -208.575 Table 2: Summary of different trigram decoding strategies, using about the same time (10 seconds per sentence). search. We sum up the total number of seconds for both passes to compare with the baseline systems. On average, less than 5% of time is spent in the second pass. In Figure 1, we have four competing decoders. bitri cyk is our two-pass decoder, using CYK as the first pass decoding algorithm and using agendabased decoding in the second pass which is guided by the first pass. agenda is our trigram-integrated agenda-based decoder. The other two systems are also one-pass. cyk is our trigram-integrated CYK decoder. lazy kbest is our top-down k-best-style decoder.1 Figure 1(left) compares the search efficiencies of the four systems. bitri cyk at the top ranks first. cyk follows it. The curves of lazy kbest and agenda cross 1In our implementation of the lazy-cube-pruning based ITG decoder, we vary the re-ranking buffer size and the the top-k list size which are the two controlling parameters for the search space. But we did not use any LM estimate to achieve early stopping as suggested by Huang and Chiang (2007). Also, we did not have a translation-model-only pruning pass. So the results shown in this paper for the lazy cube pruning method is not of its best performance. 214 and are both below the curves of bitri cyk and cyk. This figure indicates the advantage of the two-pass decoding strategy in producing translations with a high model score in less time. However, model scores do not directly translate into BLEU scores. In Figure 1(right), bitri cyk is better than CYK only in a certain time window when the beam is neither too small nor too large. But the window is actually where we are interested – it ranges from 5 seconds per sentence to 20 seconds per sentence. Table 2 summarizes the performance of the four decoders when the decoding speed is at 10 seconds per sentence. 5.3 Does the hook trick help? We have many choices in implementing the bigram decoding pass. We can do either CYK or agendabased decoding. 
We can also use the dynamic programming hook trick. We are particularly interested in the effect of the hook trick in a large-scale system with aggressive pruning. Figure 2 compares the four possible combinations of the decoding choices for the first pass: bitri cyk, bitri agenda, bitri cyk hook and bitri agenda hook. bitri cyk which simply uses CYK as the first pass decoding algorithm is the best in terms of performance and time trade-off. The hook-based decoders do not show an advantage in our experiments. Only bitri agenda hook gets slightly better than bitri agenda when the beam size increases. So, it is very likely the overhead of building hooks offsets its benefit when we massively prune the hypotheses. 5.4 Maximizing BLEU The bitri cyk decoder spends little time in the agenda-based trigram pass, quickly reaching the goal item starting from the bottom of the chart. In order to maximize BLEU score using the algorithm described in Section 4, we need a sizable trigram forest as a starting point. Therefore, we keep popping off more items from the agenda after the goal is reached. Simply by exploring more (200 times the log beam) after-goal items, we can optimize the Viterbi synchronous parse significantly, shown in Figure 3(left) in terms of model score versus search time. However, the mismatch between model score and BLEU score persists. So, we try our algorithm of maximizing expected count of synchronous constituents on the trigram forest. We find significant improvement in BLEU, as shown in Figure 3 (right) by the curve of bitri cyk epass me cons. bitri cyk epass me cons beats both bitri cyk and cyk in terms of BLEU versus time if using more than 1.5 seconds on average to decode each sentence. At each time point, the difference in BLEU between bitri cyk epass me cons and the highest of bitri cyk and cyk is around .5 points consistently as we vary the beam size for the first pass. We achieve the record-high BLEU score 24.34 using on average 21 seconds per sentence, compared to the next-highest score of 23.92 achieved by cyk using on average 78 seconds per sentence. 6 Conclusion We present a multi-pass method to speed up ngram integrated decoding for SCFG. We use an inside/outside parsing algorithm to get the Viterbi outside cost of bigram integrated states which is used as an outside estimate for trigram integrated states. The coarse-level outside cost plus the simple estimate for border trigrams speeds up the trigram decoding pass hundreds of times compared to using no outside estimate. Maximizing the probability of the synchronous derivation is not equivalent to maximizing BLEU. We use a rescoring decoding pass that maximizes the expected count of synchronous constituents. This technique, together with the progressive search at previous stages, gives a decoder that produces the highest BLEU score we have obtained on the data in a very reasonable amount of time. As future work, new metrics for the final pass may be able to better approximate BLEU. As the bigram decoding pass currently takes the bulk of the decoding time, better heuristics for this phase may speed up the system further. Acknowledgments This work was supported by NSF ITR-0428020 and NSF IIS-0546554. References Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and maxent discriminative reranking. In ACL. 
[Figure 1 (plots): log model score vs. total seconds (left) and BLEU vs. total seconds (right), with curves for bitri_cyk, cyk, agenda, and lazy kbest.]
Figure 1: We compare the two-pass ITG decoder with the one-pass trigram-integrated ITG decoders in terms of both model scores vs. time (left) and BLEU scores vs. time (right). The model score here is the log probability of the decoded parse, summing up both the translation model and the language model. We vary the beam size (for the first pass in the case of two-pass) to search more and more thoroughly.
[Figure 2 (plots): log model score vs. total seconds (left) and BLEU vs. total seconds (right), with curves for bitri_cyk, bitri_cyk_hook, bitri_agenda, and bitri_agenda_hook.]
Figure 2: We use different first-pass decoding algorithms, fixing the second pass to be agenda-based which is guided by the outside cost of the first pass. Left: model score vs. time. Right: BLEU score vs. time.
[Figure 3 (plots): log model score vs. total seconds for bitri_cyk with and without delayed stopping (left), and BLEU vs. total seconds for bitri_cyk_epass_me_cons, bitri_cyk, and cyk (right).]
Figure 3: Left: improving the model score by extended agenda-exploration after the goal is reached in the best-first search. Right: maximizing BLEU by the maximizing expectation pass on the expanded forest.
David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL-05), pages 263–270. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2). Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06), pages 961–968, July. Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the 34th Annual Conference of the Association for Computational Linguistics (ACL-96), pages 177–183. Liang Huang and David Chiang. 2007. Faster algorithms for decoding with integrated language models. In Proceedings of ACL, Prague, June. Liang Huang, Hao Zhang, and Daniel Gildea. 2005. Machine translation as lexicalized parsing with hooks. In International Workshop on Parsing Technologies (IWPT-05), Vancouver, BC. Dan Klein and Christopher D. Manning. 2003. A* parsing: Fast exact Viterbi parse selection. In Proceedings of the 2003 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-03). Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Daniel Marcu, Susan Dumais, and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 169–176, Boston, Massachusetts, USA, May. Association for Computational Linguistics. Hy Murveit, John W. Butzberger, Vassilios V. Digalakis, and Mitchel Weintraub. 1993. Large-vocabulary dictation using SRI's DECIPHER speech recognition system: Progressive-search techniques.
In Proceedings of the IEEE International Conference on Acoustics, Speech, & Signal Processing (IEEE ICASSP-93), volume 2, pages 319–322. IEEE. Giorgio Satta and Enoch Peserico. 2005. Some computational complexity results for synchronous contextfree grammars. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 803–810, Vancouver, Canada, October. Ashish Venugopal, Andreas Zollmann, and Stephan Vogel. 2007. An efficient two-pass approach to synchronous-CFG driven statistical MT. In NAACL07, Rochester, NY, April. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Hao Zhang and Daniel Gildea. 2006. Efficient search for inversion transduction grammar. In 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP), Sydney. Hao Zhang, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of the 2006 Meeting of the North American chapter of the Association for Computational Linguistics (NAACL-06), pages 256–263. 217
2008
25
Proceedings of ACL-08: HLT, pages 218–226, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Regular tree grammars as a formalism for scope underspecification Alexander Koller∗ [email protected] ∗University of Edinburgh Michaela Regneri† § [email protected] † University of Groningen Stefan Thater§ [email protected] § Saarland University Abstract We propose the use of regular tree grammars (RTGs) as a formalism for the underspecified processing of scope ambiguities. By applying standard results on RTGs, we obtain a novel algorithm for eliminating equivalent readings and the first efficient algorithm for computing the best reading of a scope ambiguity. We also show how to derive RTGs from more traditional underspecified descriptions. 1 Introduction Underspecification (Reyle, 1993; Copestake et al., 2005; Bos, 1996; Egg et al., 2001) has become the standard approach to dealing with scope ambiguity in large-scale hand-written grammars (see e.g. Copestake and Flickinger (2000)). The key idea behind underspecification is that the parser avoids computing all scope readings. Instead, it computes a single compact underspecified description for each parse. One can then strengthen the underspecified description to efficiently eliminate subsets of readings that were not intended in the given context (Koller and Niehren, 2000; Koller and Thater, 2006); so when the individual readings are eventually computed, the number of remaining readings is much smaller and much closer to the actual perceived ambiguity of the sentence. In the past few years, a “standard model” of scope underspecification has emerged: A range of formalisms from Underspecified DRT (Reyle, 1993) to dominance graphs (Althaus et al., 2003) have offered mechanisms to specify the “semantic material” of which the semantic representations are built up, plus dominance or outscoping relations between these building blocks. This has been a very successful approach, but recent algorithms for eliminating subsets of readings have pushed the expressive power of these formalisms to their limits; for instance, Koller and Thater (2006) speculate that further improvements over their (incomplete) redundancy elimination algorithm require a more expressive formalism than dominance graphs. On the theoretical side, Ebert (2005) has shown that none of the major underspecification formalisms are expressively complete, i.e. supports the description of an arbitrary subset of readings. Furthermore, the somewhat implicit nature of dominance-based descriptions makes it difficult to systematically associate readings with probabilities or costs and then compute a best reading. In this paper, we address both of these shortcomings by proposing regular tree grammars (RTGs) as a novel underspecification formalism. Regular tree grammars (Comon et al., 2007) are a standard approach for specifying sets of trees in theoretical computer science, and are closely related to regular tree transducers as used e.g. in recent work on statistical MT (Knight and Graehl, 2005) and grammar formalisms (Shieber, 2006). We show that the “dominance charts” proposed by Koller and Thater (2005b) can be naturally seen as regular tree grammars; using their algorithm, classical underspecified descriptions (dominance graphs) can be translated into RTGs that describe the same sets of readings. However, RTGs are trivially expressively complete because every finite tree language is also regular. 
We exploit this increase in expressive power in presenting a novel redundancy elimination algorithm that is simpler and more powerful than the one by Koller and Thater (2006); in our algorithm, redundancy elimination amounts to intersection of regular tree languages. Furthermore, we show how to define a PCFG-style cost model on RTGs and compute best readings of deterministic RTGs efficiently, and illustrate this model on a machine learning based model 218 of scope preferences (Higgins and Sadock, 2003). To our knowledge, this is the first efficient algorithm for computing best readings of a scope ambiguity in the literature. The paper is structured as follows. In Section 2, we will first sketch the existing standard approach to underspecification. We will then define regular tree grammars and show how to see them as an underspecification formalism in Section 3. We will present the new redundancy elimination algorithm, based on language intersection, in Section 4, and show how to equip RTGs with weights and compute best readings in Section 5. We conclude in Section 6. 2 Underspecification The key idea behind scope underspecification is to describe all readings of an ambiguous expression with a single, compact underspecified representation (USR). This simplifies semantics construction, and current algorithms (Koller and Thater, 2005a) support the efficient enumeration of readings from an USR when it is necessary. Furthermore, it is possible to perform certain semantic processing tasks such as eliminating redundant readings (see Section 4) directly on the level of underspecified representations without explicitly enumerating individual readings. Under the “standard model” of scope underspecification, readings are considered as formulas or trees. USRs specify the “semantic material” common to all readings, plus dominance or outscopes relations between these building blocks. In this paper, we consider dominance graphs (Egg et al., 2001; Althaus et al., 2003) as one representative of this class. An example dominance graph is shown on the left of Fig. 1. It represents the five readings of the sentence “a representative of a company saw every sample.” The (directed, labelled) graph consists of seven subtrees, or fragments, plus dominance edges relating nodes of these fragments. Each reading is encoded as one configuration of the dominance graph, which can be obtained by “plugging” the tree fragments into each other, in a way that respects the dominance edges: The source node of each dominance edge must dominate (i.e., be an ancestor of) the target node in each configuration. The trees in Fig. 1a–e are the five configurations of the example graph. An important class of dominance graphs are hypernormally connected dominance graphs, or dominance nets (Niehren and Thater, 2003). The precise definition of dominance nets is not important here, but note that virtually all underspecified descriptions that are produced by current grammars are nets (Flickinger et al., 2005). For the rest of the paper, we restrict ourselves to dominance graphs that are hypernormally connected. 3 Regular tree grammars We will now recall the definition of regular tree grammars and show how they can be used as an underspecification formalism. 3.1 Definition Let Σ be an alphabet, or signature, of tree constructors { f,g,a,...}, each of which is equipped with an arity ar(f) ≥0. 
A finite constructor tree t is a finite tree in which each node is labelled with a symbol of Σ, and the number of children of the node is exactly the arity of this symbol. For instance, the configurations in Fig. 1a-e are finite constructor trees over the signature {ax|2,ay|2,compz|0,...}. Finite constructor trees can be seen as ground terms over Σ that respect the arities. We write T(Σ) for the finite constructor trees over Σ. A regular tree grammar (RTG) is a 4-tuple G = (S,N,Σ,R) consisting of a nonterminal alphabet N, a terminal alphabet Σ, a start symbol S ∈N, and a finite set of production rules R of the form A →β, where A ∈N and β ∈T(Σ ∪N); the nonterminals count as zero-place constructors. Two finite constructor trees t,t′ ∈T(Σ ∪N) stand in the derivation relation, t →G t′, if t′ can be built from t by replacing an occurrence of some nonterminal A by the tree on the right-hand side of some production for A. The language generated by G, L(G), is the set {t ∈T(Σ) | S →∗ G t}, i.e. all terms of terminal symbols that can be derived from the start symbol by a sequence of rule applications. Note that L(G) is a possibly infinite language of finite trees. As usual, we write A →t1 | ... | tn as shorthand for the n production rules A →ti (1 ≤i ≤n). See Comon et al. (2007) for more details. The languages that can be accepted by regular tree grammars are called regular tree languages (RTLs), and regular tree grammars are equivalent to regular 219 everyy sampley seex,y ax repr-ofx,z az compz 1 2 3 4 5 6 7 everyy ax sampley seex,y repr-ofx,z az compz (a) everyy az ax sampley seex,y compz repr-ofx,z (c) everyy az ax sampley seex,y compz repr-ofx,z (d) (b) everyy sampley seex,y ax repr-ofx,z az compz (e) everyy sampley ax repr-ofx,z seex,y az compz Figure 1: A dominance graph (left) and its five configurations. tree automata, which are defined essentially like the well-known regular string automata, except that they assign states to the nodes in a tree rather than the positions in a string. Tree automata are related to tree transducers as used e.g. in statistical machine translation (Knight and Graehl, 2005) exactly like finitestate string automata are related to finite-state string transducers, i.e. they use identical mechanisms to accept rather than transduce languages. Many theoretical results carry over from regular string languages to regular tree languages; for instance, membership of a tree in a RTL can be decided in linear time, RTLs are closed under intersection, union, and complement, and so forth. 3.2 Regular tree grammars in underspecification We can now use regular tree grammars in underspecification by representing the semantic representations as trees and taking an RTG G as an underspecified description of the trees in L(G). For example, the five configurations in Fig. 1 can be represented as the tree language accepted by the following grammar with start symbol S. S → ax(A1,A2) | az(B1,A3) | everyy(B3,A4) A1 → az(B1,B2) A2 → everyy(B3,B4) A3 → ax(B2,A2) | everyy(B3,A5) A4 → ax(A1,B4) | az(B1,A5) A5 → ax(B2,B4) B1 → compz B2 →repr-ofx,z B3 → sampley B4 →seex,y More generally, every finite set of trees can be written as the tree language accepted by a nonrecursive regular tree grammar such as this. This grammar can be much smaller than the set of trees, because nonterminal symbols (which stand for sets of possibly many subtrees) can be used on the righthand sides of multiple rules. 
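As a concrete illustration (our own sketch in Python, not code from the paper; the dictionary encoding and all names are assumptions), the non-recursive grammar above can be stored as a map from nonterminals to their productions and its finite language enumerated by a simple recursion:

from itertools import product

# Each nonterminal maps to a list of rules (terminal label, child nonterminals);
# terminal rules have no children. The symbols follow the example grammar above.
rtg = {
    "S":  [("a_x", ["A1", "A2"]), ("a_z", ["B1", "A3"]), ("every_y", ["B3", "A4"])],
    "A1": [("a_z", ["B1", "B2"])],
    "A2": [("every_y", ["B3", "B4"])],
    "A3": [("a_x", ["B2", "A2"]), ("every_y", ["B3", "A5"])],
    "A4": [("a_x", ["A1", "B4"]), ("a_z", ["B1", "A5"])],
    "A5": [("a_x", ["B2", "B4"])],
    "B1": [("comp_z", [])],   "B2": [("repr_of_xz", [])],
    "B3": [("sample_y", [])], "B4": [("see_xy", [])],
}

def trees(nonterminal):
    """Enumerate the finite tree language of a nonterminal as nested tuples."""
    for label, children in rtg[nonterminal]:
        # Combine every choice of subtree for each child nonterminal.
        for subtrees in product(*(list(trees(child)) for child in children)):
            yield (label, *subtrees)

print(len(list(trees("S"))))   # prints 5: the five configurations of Fig. 1

Because the recursion branches only where the grammar offers alternative productions, each of the five trees is produced exactly once.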
Thus an RTG is a compact representation of a set of trees in the same way that a parse chart is a compact representation of the set of parse trees of a context-free string grammar. Note that each tree can be enumerated from the RTG in linear time. 3.3 From dominance graphs to tree grammars Furthermore, regular tree grammars can be systematically computed from more traditional underspecified descriptions. Koller and Thater (2005b) demonstrate how to compute a dominance chart from a dominance graph D by tabulating how a subgraph can be decomposed into smaller subgraphs by removing what they call a “free fragment”. If D is hypernormally connected, this chart can be read as a regular tree grammar whose nonterminal symbols are subgraphs of the dominance graph, and whose terminal symbols are names of fragments. For the example graph in Fig. 1, it looks as follows. {1,2,3,4,5,6,7} → 1({2,4,5},{3,6,7}) {1,2,3,4,5,6,7} → 2({4},{1,3,5,6,7}) {1,2,3,4,5,6,7} → 3({6},{1,2,4,5,7}) {1,3,5,6,7} → 1({5},{3,6,7}) | 3({6},{1,5,7}) {1,2,4,5,7} → 1({2,4,5},{7}) | 2({4},{1,5,7}) {1,5,7} → 1({5},{7}) {2,4,5} →2({4},{5}) {4} →4 {6} →6 {3,6,7} →3({6},{7}) {5} →5 {7} →7 This grammar accepts, again, five different trees, whose labels are the node names of the dominance graph, for instance 1(2(4,5),3(6,7)). If f : Σ →Σ′ is a relabelling function from one terminal alphabet to another, we can write f(G) for the grammar (S,N,Σ′,R′), where R′ = {A →f(a)(B1,...,Bn) | A →a(B1,...,Bn) ∈R}. Now if we choose f to be the labelling function of D (which maps node names to node labels) and G is the chart of D, then L(f(G)) will be the set of configurations of D. The grammar in Section 3.2 is simply f(G) for the chart above (up to consistent renaming of nonterminals). In the worst case, the dominance chart of a dominance graph with n fragments has O(2n) production rules (Koller and Thater, 2005b), i.e. charts may be exponential in size; but note that this is still an 220 1,0E+00 1,0E+04 1,0E+08 1,0E+12 1,0E+16 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 #fragments #configurations/rules 0 10 20 30 40 50 60 70 80 #sentences #sentences #production rules in chart #configurations Figure 2: Chart sizes in the Rondane corpus. improvement over the n! configurations that these worst-case examples have. In practice, RTGs that are computed by converting the USR computed by a grammar remain compact: Fig. 2 compares the average number of configurations and the average number of RTG production rules for USRs of increasing sizes in the Rondane treebank (see Sect. 4.3); the bars represent the number of sentences for USRs of a certain size. Even for the most ambiguous sentence, which has about 4.5×1012 scope readings, the dominance chart has only about 75 000 rules, and it takes only 15 seconds on a modern consumer PC (Intel Core 2 Duo at 2 GHz) to compute the grammar from the graph. Computing the charts for all 999 MRSnets in the treebank takes about 45 seconds. 4 Expressive completeness and redundancy elimination Because every finite tree language is regular, RTGs constitute an expressively complete underspecification formalism in the sense of Ebert (2005): They can represent arbitrary subsets of the original set of readings. Ebert shows that the classical dominancebased underspecification formalisms, such as MRS, Hole Semantics, and dominance graphs, are all expressively incomplete, which Koller and Thater (2006) speculate might be a practical problem for algorithms that strengthen USRs to remove unwanted readings. 
We will now show how both the expressive completeness and the availability of standard constructions for RTGs can be exploited to get an improved redundancy elimination algorithm. 4.1 Redundancy elimination Redundancy elimination (Vestre, 1991; Chaves, 2003; Koller and Thater, 2006) is the problem of deriving from an USR U another USR U′, such that the readings of U′ are a proper subset of the readings of U, but every reading in U is semantically equivalent to some reading in U′. For instance, the following sentence from the Rondane treebank is analyzed as having six quantifiers and 480 readings by the ERG grammar; these readings fall into just two semantic equivalence classes, characterized by the relative scope of “the lee of” and “a small hillside”. A redundancy elimination would therefore ideally reduce the underspecified description to one that has only two readings (one for each class). (1) We quickly put up the tents in the lee of a small hillside and cook for the first time in the open. (Rondane 892) Koller and Thater (2006) define semantic equivalence in terms of a rewrite system that specifies under what conditions two quantifiers may exchange their positions without changing the meaning of the semantic representation. For example, if we assume the following rewrite system (with just a single rule), the five configurations in Fig. 1a-e fall into three equivalence classes – indicated by the dotted boxes around the names a-e – because two pairs of readings can be rewritten into each other. (2) ax(az(P,Q),R) →az(P,ax(Q,R)) Based on this definition, Koller and Thater (2006) present an algorithm (henceforth, KT06) that deletes rules from a dominance chart and thus removes subsets of readings from the USR. The KT06 algorithm is fast and quite effective in practice. However, it essentially predicts for each production rule of a dominance chart whether each configuration that can be built with this rule is equivalent to a configuration that can be built with some other production for the same subgraph, and is therefore rather complex. 4.2 Redundancy elimination as language intersection We now define a new algorithm for redundancy elimination. It is based on the intersection of regular tree languages, and will be much simpler and more powerful than KT06. Let G = (S,N,Σ,R) be an RTG with a linear order on the terminals Σ; for ease of presentation, we assume Σ ⊆N. Furthermore, let f : Σ →Σ′ be a relabelling function into the signature Σ′ of the rewrite 221 system. For example, G could be the dominance chart of some dominance graph D, and f could be the labelling function of D. We can then define a tree language LF as follows: LF contains all trees over Σ that do not contain a subtree of the form q1(x1,...,xi−1,q2(...),xi+1,...,xk) where q1 > q2 and the rewrite system contains a rule that has f(q1)(X1,...,Xi−1, f(q2)(...),Xi+1,...,Xk) on the left or right hand side. LF is a regular tree language, and can be accepted by a regular tree grammar GF with O(n) nonterminals and O(n2) rules, where n = |Σ′|. A filter grammar for Fig. 1 looks as follows: S →1(S,S) | 2(S,Q1) | 3(S,S) | 4 | ... | 7 Q1 →2(S,Q1) | 3(S,S) | 4 | ... | 7 This grammar accepts all trees over Σ except ones in which a node with label 2 is the parent of a node with label 1, because such trees correspond to configurations in which a node with label az is the parent of a node with label ax, az and ax are permutable, and 2 > 1. In particular, it will accept the configurations (b), (c), and (e) in Fig. 1, but not (a) or (d). 
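To make explicit which trees the filter language LF contains, here is a small sketch (our own Python with assumed names; the actual algorithm never enumerates or tests individual trees, since it works on the grammar level by intersecting with GF) that checks one configuration, given as a nested tuple of node names, against the canonical-order condition:

def in_canonical_order(tree, label_of, permutable):
    """True iff no permutable parent/child pair occurs with node names out of order.

    permutable holds triples (parent_label, child_label, position) read off both
    sides of the rewrite system; rule (2) above contributes ("a_x", "a_z", 0) from
    its left-hand side and ("a_z", "a_x", 1) from its right-hand side.
    """
    name, *children = tree
    for position, child in enumerate(children):
        if (label_of[name], label_of[child[0]], position) in permutable and name > child[0]:
            return False
        if not in_canonical_order(child, label_of, permutable):
            return False
    return True

For the example graph, this check rejects exactly the two configurations in which node 2 is the parent of node 1, i.e. the same trees that the filter grammar above excludes.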
Since regular tree languages are closed under intersection, we can compute a grammar G′ such that L(G′) = L(G)∩LF. This grammar has O(nk) nonterminals and O(n2k) productions, where k is the number of production rules in G, and can be computed in time O(n2k). The relabelled grammar f(G′) accepts all trees in which adjacent occurrences of permutable quantifiers are in a canonical order (sorted from lowest to highest node name). For example, the grammar G′ for the example looks as follows; note that the nonterminal alphabet of G′ is the product of the nonterminal alphabets of G and GF. {1,2,3,4,5,6,7}S →1({2,4,5}S,{3,6,7}S) {1,2,3,4,5,6,7}S →2({4}S,{1,3,5,6,7}Q1) {1,2,3,4,5,6,7}S →3({6}S,{1,2,4,5,7}S) {1,3,5,6,7}Q1 →3({6}S,{1,5,7}S) {1,2,4,5,7}S →1({2,4,5}S,{7}S) {1,2,4,5,7}S →2({4}S,{1,5,7}Q1) {2,4,5}S →2({4}S,{5}Q1) {4}S →4 {3,6,7}S →3({6}S,{7}S) {5}S →5 {1,5,7}S →1({5}S,{7}S) {5}Q1 →5 {6}S →6 {7}S →7 Significantly, the grammar contains no productions for {1,3,5,6,7}Q1 with terminal symbol 1, and no production for {1,5,7}Q1. This reduces the tree language accepted by f(G′) to just the configurations (b), (c), and (e) in Fig. 1, i.e. exactly one representative of every equivalence class. Notice that there are two different nonterminals, {5}Q1 and {5}S, corresponding to the subgraph {5}, so the intersected RTG is not a dominance chart any more. As we will see below, this increased expressivity increases the power of the redundancy elimination algorithm. 4.3 Evaluation The algorithm presented here is not only more transparent than KT06, but also more powerful; for example, it will reduce the graph in Fig. 4 of Koller and Thater (2006) completely, whereas KT06 won’t. To measure the extent to which the new algorithm improves upon KT06, we compare both algorithms on the USRs in the Rondane treebank (version of January 2006). The Rondane treebank is a “Redwoods style” treebank (Oepen et al., 2002) containing MRS-based underspecified representations for sentences from the tourism domain, and is distributed together with the English Resource Grammar (ERG) (Copestake and Flickinger, 2000). The treebank contains 999 MRS-nets, which we translate automatically into dominance graphs and further into RTGs; the median number of scope readings per sentence is 56. For our experiment, we consider all 950 MRS-nets with less than 650 000 configurations. We use a slightly weaker version of the rewrite system that Koller and Thater (2006) used in their evaluation. It turns out that the median number of equivalence classes, computed by pairwise comparison of all configurations, is 8. The median number of configurations that remain after running our algorithm is also 8. By contrast, the median number after running KT06 is 11. For a more fine-grained comparison, Fig. 3 shows the percentage of USRs for which the two algorithms achieve complete reduction, i.e. retain only one reading per equivalence class. In the diagram, we have grouped USRs according to the natural logarithm of their numbers of configurations, and report the percentage of USRs in this group on which the algorithms were complete. The new algorithm dramatically outperforms KT06: In total, it reduces 96% of all USRs completely, whereas KT06 was complete only for 40%. This increase in completeness is partially due to the new algorithm’s ability to use non-chart RTGs: For 28% of the sentences, 222 0% 20% 40% 60% 80% 100% 1 3 5 7 9 11 13 KT06 RTG Figure 3: Percentage of USRs in Rondane for which the algorithms achieve complete reduction. 
it computes RTGs that are not dominance charts. KT06 was only able to reduce 5 of these 263 graphs completely. The algorithm needs 25 seconds to run for the entire corpus (old algorithm: 17 seconds), and it would take 50 (38) more seconds to run on the 49 large USRs that we exclude from the experiment. By contrast, it takes about 7 hours to compute the equivalence classes by pairwise comparison, and it would take an estimated several billion years to compute the equivalence classes of the excluded USRs. In short, the redundancy elimination algorithm presented here achieves nearly complete reduction at a tiny fraction of the runtime, and makes a useful task that was completely infeasible before possible. 4.4 Compactness Finally, let us briefly consider the ramifications of expressive completeness on efficiency. Ebert (2005) proves that no expressively complete underspecification formalism can be compact, i.e. in the worst case, the USR of a set of readings become exponentially large in the number of scope-bearing operators. In the case of RTGs, this worst case is achieved by grammars of the form S →t1 | ... |tn, where t1,...,tn are the trees we want to describe. This grammar is as big as the number of readings, i.e. worst-case exponential in the number n of scope-bearing operators, and essentially amounts to a meta-level disjunction over the readings. Ebert takes the incompatibility between compactness and expressive completeness as a fundamental problem for underspecification. We don’t see things quite as bleakly. Expressions of natural language itself are (extremely underspecified) descriptions of sets of semantic representations, and so Ebert’s argument applies to NL expressions as well. This means that describing a given set of readings may require an exponentially long discourse. Ebert’s definition of compactness may be too harsh: An USR, although exponential-size in the number of quantifiers, may still be polynomial-size in the length of the discourse in the worst case. Nevertheless, the tradeoff between compactness and expressive power is important for the design of underspecification formalisms, and RTGs offer a unique answer. They are expressively complete; but as we have seen in Fig. 2, the RTGs that are derived by semantic construction are compact, and even intersecting them with filter grammars for redundancy elimination only blows up their sizes by a factor of O(n2). As we add more and more information to an RTG to reduce the set of readings, ultimately to those readings that were meant in the actual context of the utterance, the grammar will become less and less compact; but this trend is counterbalanced by the overall reduction in the number of readings. For the USRs in Rondane, the intersected RTGs are, on average, 6% smaller than the original charts. Only 30% are larger than the charts, by a maximal factor of 3.66. Therefore we believe that the theoretical non-compactness should not be a major problem in a well-designed practical system. 5 Computing best configurations A second advantage of using RTGs as an underspecification formalism is that we can apply existing algorithms for computing the best derivations of weighted regular tree grammars to compute best (that is, cheapest or most probable) configurations. This gives us the first efficient algorithm for computing the preferred reading of a scope ambiguity. We define weighted dominance graphs and weighted tree grammars, show how to translate the former into the latter and discuss an example. 
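Purely as a preview of the computation that Sections 5.1 to 5.3 make precise, the following sketch (our own Python; the encoding, the names and the acyclicity assumption are ours, not the paper's) extracts a best derivation from a weighted RTG by memoised recursion. It assumes the grammar is acyclic, which holds for grammars derived from dominance charts; Knight and Graehl (2005) describe an algorithm for the general case.

from functools import lru_cache

# wrtg: nonterminal -> list of (terminal label, child nonterminals, rule weight).
def best_derivation(wrtg, start):
    @lru_cache(maxsize=None)
    def best(nonterminal):
        # Best (weight, tree) derivable from this nonterminal; weights multiply.
        candidates = []
        for label, children, weight in wrtg[nonterminal]:
            tree = [label]
            for child in children:
                child_weight, child_tree = best(child)
                weight *= child_weight
                tree.append(child_tree)
            candidates.append((weight, tuple(tree)))
        return max(candidates, key=lambda candidate: candidate[0])
    return best(start)

On the example weighted grammar given in Section 5.2 below, the best derivation has weight 9 and corresponds to the maximum-weight configuration of Fig. 1(e), matching the discussion there.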
5.1 Weighted dominance graphs A weighted dominance graph D = (V,ET ⊎ED ⊎ WD ⊎WI) is a dominance graph with two new types of edges – soft dominance edges, WD, and soft disjointness edges, WI –, each of which is equipped with a numeric weight. Soft dominance and disjointness edges provide a mechanism for assigning weights to configurations; a soft dominance edge ex223 everyy sampley seex,y ax repr-ofx,z az compz 1 2 3 4 5 6 7 9 8 Figure 4: The graph of Fig. 1 with soft constraints presses a preference that two nodes dominate each other in a configuration, whereas a soft disjointness edge expresses a preference that two nodes are disjoint, i.e. neither dominates the other. We take the hard backbone of D to be the ordinary dominance graph B(D) = (V,ET ⊎ED) obtained by removing all soft edges. The set of configurations of a weighted graph D is the set of configurations of its hard backbone. For each configuration t of D, we define the weight c(t) to be the product of the weights of all soft dominance and disjointness edges that are satisfied in t. We can then ask for configurations of maximal weight. Weighted dominance graphs can be used to encode the standard models of scope preferences (Pafel, 1997; Higgins and Sadock, 2003). For example, Higgins and Sadock (2003) present a machine learning approach for determining pairwise preferences as to whether a quantifier Q1 dominates another quantifier Q2, Q2 dominates Q1, or neither (i.e. they are disjoint). We can represent these numbers as the weights of soft dominance and disjointness edges. An example (with artificial weights) is shown in Fig. 4; we draw the soft dominance edges as curved dotted arrows and the soft disjointness edges as as angled double-headed arrows. Each soft edge is annotated with its weight. The hard backbone of this dominance graph is our example graph from Fig. 1, so it has the same five configurations. The weighted graph assigns a weight of 8 to configuration (a), a weight of 1 to (d), and a weight of 9 to (e); this is also the configuration of maximum weight. 5.2 Weighted tree grammars In order to compute the maximal-weight configuration of a weighted dominance graph, we will first translate it into a weighted regular tree grammar. A weighted regular tree grammar (wRTG) (Graehl and Knight, 2004) is a 5-tuple G = (S,N,Σ,R,c) such that G′ = (S,N,Σ,R) is a regular tree grammar and c : R →R is a function that assigns each production rule a weight. G accepts the same language of trees as G′. It assigns each derivation a cost equal to the product of the costs of the production rules used in this derivation, and it assigns each tree in the language a cost equal to the sum of the costs of its derivations. Thus wRTGs define weights in a way that is extremely similar to PCFGs, except that we don’t require any weights to sum to one. Given a weighted, hypernormally connected dominance graph D, we can extend the chart of B(D) to a wRTG by assigning rule weights as follows: The weight of a rule D0 →i(D1,...,Dn) is the product over the weights of all soft dominance and disjointness edges that are established by this rule. We say that a rule establishes a soft dominance edge from u to v if u = i and v is in one of the subgraphs D1,...,Dn; we say that it establishes a soft disjointness edge between u and v if u and v are in different subgraphs D j and Dk (j ̸= k). It can be shown that the weight this grammar assigns to each derivation is equal to the weight that the original dominance graph assigns to the corresponding configuration. 
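Spelled out as code (again a sketch of ours, with assumed data layout and names), the rule weights of this grammar can be computed directly from the chart and the soft edges:

def weighted_chart(chart, soft_dominance, soft_disjointness):
    """Attach weights to dominance-chart rules.

    chart: subgraph -> list of (root fragment i, child subgraphs); subgraphs are frozensets.
    soft_dominance and soft_disjointness: dicts mapping a node pair (u, v) to its weight.
    """
    weighted = {}
    for subgraph, rules in chart.items():
        weighted[subgraph] = []
        for root, parts in rules:
            weight = 1.0
            # A soft dominance edge u -> v is established if u is the root and v occurs below.
            for (u, v), w in soft_dominance.items():
                if u == root and any(v in part for part in parts):
                    weight *= w
            # A soft disjointness edge is established if u and v land in different subgraphs.
            for (u, v), w in soft_disjointness.items():
                if any(u in parts[j] and v in parts[k]
                       for j in range(len(parts)) for k in range(len(parts)) if j != k):
                    weight *= w
            weighted[subgraph].append((root, parts, weight))
    return weighted

Running a best-derivation extraction over the resulting weighted rules, as sketched before Section 5.1, then yields the preferred configuration.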
If we apply this construction to the example graph in Fig. 4, we obtain the following wRTG: {1,...,7} →ax({2,4,5},{3,6,7}) [9] {1,...,7} →az({4},{1,3,5,6,7}) [1] {1,...,7} →everyy({6},{1,2,4,5,7}) [8] {2,4,5} →az({4},{5}) [1] {3,6,7} →everyy({6},{7}) [1] {1,3,5,6,7} →ax({5},{3,6,7}) [1] {1,3,5,6,7} →everyy({6},{1,5,7}) [8] {1,2,4,5,7} →ax({2,4,5},{7}) [1] {1,2,4,5,7} →az({4},{1,5,7}) [1] {1,5,7} →ax({5},{7}) [1] {4} → compz [1] {5} →repr−o f x,z [1] {6} →sampley [1] {7} →seex,y [1] For example, picking “az” as the root of a configuration (Fig. 1 (c), (d)) of the entire graph has a weight of 1, because this rule establishes no soft edges. On the other hand, choosing “ax” as the root has a weight of 9, because this establishes the soft disjointness edge (and in fact, leads to the derivation of the maximum-weight configuration in Fig. 1 (e)). 5.3 Computing the best configuration The problem of computing the best configuration of a weighted dominance graph – or equivalently, the 224 best derivation of a weighted tree grammar – can now be solved by standard algorithms for wRTGs. For example, Knight and Graehl (2005) present an algorithm to extract the best derivation of a wRTG in time O(t +nlogn) where n is the number of nonterminals and t is the number of rules. In practice, we can extract the best reading of the most ambiguous sentence in the Rondane treebank (4.5 × 1012 readings, 75 000 grammar rules) with random soft edges in about a second. However, notice that this is not the same problem as computing the best tree in the language accepted by a wRTG, as trees may have multiple derivations. The problem of computing the best tree is NPcomplete (Sima’an, 1996). However, if the weighted regular tree automaton corresponding to the wRTG is deterministic, every tree has only one derivation, and thus computing best trees becomes easy again. The tree automata for dominance charts are always deterministic, and the automata for RTGs as in Section 3.2 (whose terminals correspond to the graph’s node labels) are also typically deterministic if the variable names are part of the quantifier node labels. Furthermore, there are algorithms for determinizing weighted tree automata (Borchardt and Vogler, 2003; May and Knight, 2006), which could be applied as preprocessing steps for wRTGs. 6 Conclusion In this paper, we have shown how regular tree grammars can be used as a formalism for scope underspecification, and have exploited the power of this view in a novel, simpler, and more complete algorithm for redundancy elimination and the first efficient algorithm for computing the best reading of a scope ambiguity. In both cases, we have adapted standard algorithms for RTGs, which illustrates the usefulness of using such a well-understood formalism. In the worst case, the RTG for a scope ambiguity is exponential in the number of scope bearers in the sentence; this is a necessary consequence of their expressive completeness. However, those RTGs that are computed by semantic construction and redundancy elimination remain compact. Rather than showing how to do semantic construction for RTGs, we have presented an algorithm that computes RTGs from more standard underspecification formalisms. We see RTGs as an “underspecification assembly language” – they support efficient and useful algorithms, but direct semantic construction may be inconvenient, and RTGs will rather be obtained by “compiling” higher-level underspecified representations such as dominance graphs or MRS. 
This perspective also allows us to establish a connection to approaches to semantic construction which use chart-based packing methods rather than dominance-based underspecification to manage scope ambiguities. For instance, both Combinatory Categorial Grammars (Steedman, 2000) and synchronous grammars (Nesson and Shieber, 2006) represent syntactic and semantic ambiguity as part of the same parse chart. These parse charts can be seen as regular tree grammars that accept the language of parse trees, and conceivably an RTG that describes only the semantic and not the syntactic ambiguity could be automatically extracted. We could thus reconcile these completely separate approaches to semantic construction within the same formal framework, and RTG-based algorithms (e.g., for redundancy elimination) would apply equally to dominance-based and chart-based approaches. Indeed, for one particular grammar formalism it has even been shown that the parse chart contains an isomorphic image of a dominance chart (Koller and Rambow, 2007). Finally, we have only scratched the surface of what can be be done with the computation of best configurations in Section 5. The algorithms generalize easily to weights that are taken from an arbitrary ordered semiring (Golan, 1999; Borchardt and Vogler, 2003) and to computing minimal-weight rather than maximal-weight configurations. It is also useful in applications beyond semantic construction, e.g. in discourse parsing (Regneri et al., 2008). Acknowledgments. We have benefited greatly from fruitful discussions on weighted tree grammars with Kevin Knight and Jonathan Graehl, and on discourse underspecification with Markus Egg. We also thank Christian Ebert, Marco Kuhlmann, Alex Lascarides, and the reviewers for their comments on the paper. Finally, we are deeply grateful to our former colleague Joachim Niehren, who was a great fan of tree automata before we even knew what they are. 225 References E. Althaus, D. Duchier, A. Koller, K. Mehlhorn, J. Niehren, and S. Thiel. 2003. An efficient graph algorithm for dominance constraints. J. Algorithms, 48:194–219. B. Borchardt and H. Vogler. 2003. Determinization of finite state weighted tree automata. Journal of Automata, Languages and Combinatorics, 8(3):417–463. J. Bos. 1996. Predicate logic unplugged. In Proceedings of the Tenth Amsterdam Colloquium, pages 133–143. R. P. Chaves. 2003. Non-redundant scope disambiguation in underspecified semantics. In Proceedings of the 8th ESSLLI Student Session, pages 47–58, Vienna. H. Comon, M. Dauchet, R. Gilleron, C. L¨oding, F. Jacquemard, D. Lugiez, S. Tison, and M. Tommasi. 2007. Tree automata techniques and applications. Available on: http://www.grappa.univ-lille3.fr/tata. A. Copestake and D. Flickinger. 2000. An opensource grammar development environment and broadcoverage English grammar using HPSG. In Conference on Language Resources and Evaluation. A. Copestake, D. Flickinger, C. Pollard, and I. Sag. 2005. Minimal recursion semantics: An introduction. Research on Language and Computation, 3:281–332. C. Ebert. 2005. Formal investigations of underspecified representations. Ph.D. thesis, King’s College, London. M. Egg, A. Koller, and J. Niehren. 2001. The Constraint Language for Lambda Structures. Logic, Language, and Information, 10:457–485. D. Flickinger, A. Koller, and S. Thater. 2005. A new well-formedness criterion for semantics debugging. In Proceedings of the 12th HPSG Conference, Lisbon. J. S. Golan. 1999. Semirings and their applications. Kluwer, Dordrecht. J. 
Graehl and K. Knight. 2004. Training tree transducers. In HLT-NAACL 2004, Boston. D. Higgins and J. Sadock. 2003. A machine learning approach to modeling scope preferences. Computational Linguistics, 29(1). K. Knight and J. Graehl. 2005. An overview of probabilistic tree transducers for natural language processing. In Computational linguistics and intelligent text processing, pages 1–24. Springer. A. Koller and J. Niehren. 2000. On underspecified processing of dynamic semantics. In Proceedings of COLING-2000, Saarbr¨ucken. A. Koller and O. Rambow. 2007. Relating dominance formalisms. In Proceedings of the 12th Conference on Formal Grammar, Dublin. A. Koller and S. Thater. 2005a. Efficient solving and exploration of scope ambiguities. Proceedings of the ACL-05 Demo Session. A. Koller and S. Thater. 2005b. The evolution of dominance constraint solvers. In Proceedings of the ACL05 Workshop on Software. A. Koller and S. Thater. 2006. An improved redundancy elimination algorithm for underspecified descriptions. In Proceedings of COLING/ACL-2006, Sydney. J. May and K. Knight. 2006. A better n-best list: Practical determinization of weighted finite tree automata. In Proceedings of HLT-NAACL. R. Nesson and S. Shieber. 2006. Simpler TAG semantics through synchronization. In Proceedings of the 11th Conference on Formal Grammar. J. Niehren and S. Thater. 2003. Bridging the gap between underspecification formalisms: Minimal recursion semantics as dominance constraints. In Proceedings of ACL 2003. S. Oepen, K. Toutanova, S. Shieber, C. Manning, D. Flickinger, and T. Brants. 2002. The LinGO Redwoods treebank: Motivation and preliminary applications. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 1253–1257. J. Pafel. 1997. Skopus und logische Struktur: Studien zum Quantorenskopus im Deutschen. Habilitationsschrift, Eberhard-Karls-Universit¨at T¨ubingen. M. Regneri, M. Egg, and A. Koller. 2008. Efficient processing of underspecified discourse representations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT) – Short Papers, Columbus, Ohio. U. Reyle. 1993. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10(1). S. Shieber. 2006. Unifying synchronous tree-adjoining grammars and tree transducers via bimorphisms. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), Trento, Italy. K. Sima’an. 1996. Computational complexity of probabilistic disambiguation by means of tree-grammars. In Proceedings of the 16th conference on Computational linguistics, pages 1175–1180, Morristown, NJ, USA. Association for Computational Linguistics. M. Steedman. 2000. The syntactic process. MIT Press. E. Vestre. 1991. An algorithm for generating nonredundant quantifier scopings. In Proc. of EACL, pages 251–256, Berlin. 226
Proceedings of ACL-08: HLT, pages 227–235, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Classification of Semantic Relationships between Nominals Using Pattern Clusters Dmitry Davidov ICNC Hebrew University of Jerusalem [email protected] Ari Rappoport Institute of Computer Science Hebrew University of Jerusalem [email protected] Abstract There are many possible different semantic relationships between nominals. Classification of such relationships is an important and difficult task (for example, the well known noun compound classification task is a special case of this problem). We propose a novel pattern clusters method for nominal relationship (NR) classification. Pattern clusters are discovered in a large corpus independently of any particular training set, in an unsupervised manner. Each of the extracted clusters corresponds to some unspecified semantic relationship. The pattern clusters are then used to construct features for training and classification of specific inter-nominal relationships. Our NR classification evaluation strictly follows the ACL SemEval-07 Task 4 datasets and protocol, obtaining an f-score of 70.6, as opposed to 64.8 of the best previous work that did not use the manually provided WordNet sense disambiguation tags. 1 Introduction Automatic extraction and classification of semantic relationships is a major field of activity, of both practical and theoretical interest. A prominent type of semantic relationships is that holding between nominals1. For example, in noun compounds many different semantic relationships are encoded by the same simple form (Girju et al., 2005): ‘dog food’ denotes food consumed by dogs, while ‘summer morn1Our use of the term ‘nominal’ follows (Girju et al., 2007), and includes simple nouns, noun compounds and multiword expressions serving as nouns. ing’ denotes a morning that happens in the summer. These two relationships are completely different semantically but are similar syntactically, and distinguishing between them could be essential for NLP applications such as question answering and machine translation. Relation classification usually relies on a training set in the form of tagged data. To improve results, some systems utilize additional manually constructed semantic resources such as WordNet (WN) (Beamer et al., 2007). However, in many domains and languages such resources are not available. Furthermore, usage of such resources frequently requires disambiguation and connection of the data to the resource (word sense disambiguation in the case of WordNet). Manual disambiguation is unfeasible in many practical tasks, and an automatic one may introduce errors and greatly degrade performance. It thus makes sense to try to minimize the usage of such resources, and utilize only corpus contexts in which the relevant words appear. A leading method for utilizing context information for classification and extraction of relationships is that of patterns (Hearst, 1992; Pantel and Pennacchiotti, 2006). The standard classification process is to find in an auxiliary corpus a set of patterns in which a given training word pair co-appears, and use pattern-word pair co-appearance statistics as features for machine learning algorithms. In this paper we introduce a novel approach, based on utilizing pattern clusters that are prepared separately and independently of the training set. 
We do not utilize any manually constructed resource or any manual tagging of training data beyond the correct classification, thus making our method applicable to fully automated tasks and less domain and language dependent. Moreover, our pattern clustering algorithm is fully unsupervised. Our method is based on the observation that while each lexical pattern can be highly ambiguous, several patterns in conjunction can reliably define and represent a lexical relationship. Accordingly, we construct pattern clusters from a large generic corpus, each such cluster potentially representing some important generic relationship. This step is done without accessing any training data, anticipating that most meaningful relationships, including those in a given classification problem, will be represented by some of the discovered clusters. We then use the training set to label some of the clusters, and the labeled clusters to assign classes to tested items. One of the advantages of our method is that it can be used not only for classification, but also for further analysis and retrieval of the observed relationships. (In (Davidov and Rappoport, 2008) we focus on the pattern cluster resource type itself, presenting an evaluation of its intrinsic quality based on SAT tests. In the present paper we focus on showing how the resource can be used to improve a known NLP task.) The semantic relationships between the components of noun compounds and between nominals in general are not easy to categorize rigorously. Several different relationship hierarchies have been proposed (Nastase and Szpakowicz, 2003; Moldovan et al., 2004). Some classes, like Container-Contained, Time-Event and Product-Producer, appear in several classification schemes, while classes like Tool-Object are more vaguely defined and are subdivided differently. Recently, SemEval-07 Task 4 (Girju et al., 2007) proposed a benchmark dataset that includes a subset of 7 widely accepted nominal relationship (NR) classes, allowing consistent evaluation of different NR classification algorithms. In the SemEval event, 14 research teams evaluated their algorithms using this benchmark. Some of the teams have used the manually annotated WN labels provided with the dataset, and some have not. We evaluated our algorithm on SemEval-07 Task 4 data, showing superior results over participating algorithms that did not utilize WordNet disambiguation tags. We also show how pattern clusters can be used for a completely unsupervised classification of the test set. Since in this case no training data is used, this allows the automated discovery of a potentially unbiased classification scheme. Section 2 discusses related work, Section 3 outlines the pattern clustering algorithm, Section 4 details three classification methods, and Sections 5 and 6 describe the evaluation protocol and results. 2 Related Work Numerous methods have been devised for classification of semantic relationships, among which those holding between nominals constitute a prominent category. Major differences between these methods include available resources, degree of preprocessing, features used, classification algorithm and the nature of training/test data. 2.1 Available Resources Many relation classification algorithms utilize WordNet. Among the 15 systems presented by the 14 SemEval teams, some utilized the manually provided WordNet tags for the dataset pairs (e.g., (Beamer et al., 2007)). In all cases, usage of WN tags improves the results significantly.
Some other systems that avoided using the labels used WN as a supporting resource for their algorithms (Costello, 2007; Nakov and Hearst, 2007; Kim and Baldwin, 2007). Only three avoided WN altogether (Hendrickx et al., 2007; Bedmar et al., 2007; Aramaki et al., 2006). Other resources used for relationship discovery include Wikipedia (Strube and Ponzetto, 2006), thesauri or synonym sets (Turney, 2005) and domainspecific semantic hierarchies like MeSH (Rosario and Hearst, 2001). While usage of these resources is beneficial in many cases, high quality word sense annotation is not easily available. Besides, lexical resources are not available for many languages, and their coverage is limited even for English when applied to some restricted domains. In this paper we do not use any manually annotated resources apart from the classification training set. 2.2 Degree of Preprocessing Many relationship classification methods utilize some language-dependent preprocessing, like deep or shallow parsing, part of speech tagging and 228 named entity annotation (Pantel et al., 2004). While the obtained features were shown to improve classification performance, they tend to be language dependent and error-prone when working on unusual text domains and are also highly computationally intensive when processing large corpora. To make our approach as language independent and efficient as possible, we avoided using any such preprocessing techniques. 2.3 Classification Features A wide variety of features are used by different algorithms, ranging from simple bag-of-words frequencies to WordNet-based features (Moldovan et al., 2004). Several studies utilize syntactic features. Many other works manually develop a set of heuristic features devised with some specific relationship in mind, like a WordNet-based meronymy feature (Bedmar et al., 2007) or size-of feature (Aramaki et al., 2006). However, the most prominent feature type is based on lexico-syntactic patterns in which the related words co-appear. Since (Hearst, 1992), numerous works have used patterns for discovery and identification of instances of semantic relationships (e.g., (Girju et al., 2006; Snow et al., 2006; Banko et al, 2007)). Rosenfeld and Feldman (2007) discover relationship instances by clustering entities appearing in similar contexts. Strategies were developed for discovery of multiple patterns for some specified lexical relationship (Pantel and Pennacchiotti, 2006) and for unsupervised pattern ranking (Turney, 2006). Davidov et al. (2007) use pattern clusters to define general relationships, but these are specific to a given concept. No study so far has proposed a method to define, discover and represent general relationships present in an arbitrary corpus. In (Davidov and Rappoport, 2008) we present an approach to extract pattern clusters from an untagged corpus. Each such cluster represents some unspecified lexical relationship. In this paper, we use these pattern clusters as the (only) source of machine learning features for a nominal relationship classification problem. Unlike the majority of current studies, we avoid using any other features that require some language-specific information or are devised for specific relationship types. 2.4 Classification Algorithm Various learning algorithms have been used for relation classification. Common choices include variations of SVM (Girju et al., 2004; Nastase et al., 2006), decision trees and memory-based learners. 
Freely available tools like Weka (Witten and Frank, 1999) allow easy experimentation with common learning algorithms (Hendrickx et al., 2007). In this paper we did not focus on a single ML algorithm, letting algorithm selection be automatically based on cross-validation results on the training set, as in (Hendrickx et al., 2007) but using more algorithms and allowing a more flexible parameter choice. 2.5 Training Data As stated above, several categorization schemes for nominals have been proposed. Nastase and Szpakowicz (2003) proposed a two-level hierarchy with 5 (30) classes at the top (bottom) levels3. This hierarchy and a corresponding dataset were used in (Turney, 2005; Turney, 2006) and (Nastase et al., 2006) for evaluation of their algorithms. Moldovan et al. (2004) proposed a different scheme with 35 classes. The most recent dataset has been developed for SemEval 07 Task 4 (Girju et al., 2007). This manually annotated dataset includes a representative rather than exhaustive list of 7 important nominal relationships. We have used this dataset, strictly following the evaluation protocol. This made it possible to meaningfully compare our method to state-ofthe-art methods for relation classification. 3 Pattern Clustering Algorithm Our pattern clustering algorithm is designed for the unsupervised definition and discovery of generic semantic relationships. The algorithm first discovers and clusters patterns in which a single (‘hook’) word participates, and then merges the resulting clusters to form the final structure. In (Davidov and Rappoport, 2008) we describe the algorithm at length, discuss its behavior and parameters in detail, and evaluate its intrinsic quality. To assist readers of the present paper, in this section we provide an overview. Examples of some resulting pattern clusters are given in Section 6. We refer to a pattern 3Actually, there were 50 relationships at the bottom level, but valid nominal instances were found only for 30. 229 contained in our clusters (a pattern type) as a ‘pattern’ and to an occurrence of a pattern in the corpus (a pattern token) as a ‘pattern instance’. The algorithm does not rely on any data from the classification training set, hence we do not need to repeat its execution for different classification problems. To calibrate its parameters, we ran it a few times with varied parameters settings, producing several different configurations of pattern clusters with different degrees of noise, coverage and granularity. We then chose the best configuration for our task automatically without re-running pattern clustering for each specific problem (see Section 5.3). 3.1 Hook Words and Hook Corpora As a first step, we randomly sample a set of hook words, which will be used in order to discover relationships that generally occur in the corpus. To avoid selection of ambiguous words or typos, we do not select words with frequency higher than a parameter FC and lower than a threshold FB. We also limit the total number N of hook words. For each hook word, we now create a hook corpus, the set of the contexts in which the word appears. Each context is a window containing W words or punctuation characters before and after the hook word. 3.2 Pattern Specification To specify patterns, following (Davidov and Rappoport, 2006) we classify words into highfrequency words (HFWs) and content words (CWs). A word whose frequency is more (less) than FH (FC) is considered to be a HFW (CW). 
Our patterns have the general form [Prefix] CW1 [Infix] CW2 [Postfix] where Prefix, Infix and Postfix contain only HFWs. We require Prefix and Postfix to be a single HFW, while Infix can contain any number of HFWs (limiting pattern length by window size). This form may include patterns like ‘such X as Y and’. At this stage, the pattern slots can contain only single words; however, when using the final pattern clusters for nominal relationship classification, slots can contain multiword nominals. 3.3 Discovery of Target Words For each of the hook corpora, we now extract all pattern instances where one CW slot contains the hook word and the other CW slot contains some other (‘target’) word. To avoid the selection of common words as target words, and to avoid targets appearing in pattern instances that are relatively fixed multiword expressions, we sort all target words in a given hook corpus by pointwise mutual information between hook and target, and drop patterns obtained from pattern instances containing the lowest and highest L percent of target words. 3.4 Pattern Clustering We now have for each hook corpus a set of patterns, together with the target words used for their extraction, and we want to cluster pattern types. First, we group in clusters all patterns extracted using the same target word. Second, we merge clusters that share more than S percent of their patterns. Some patterns can appear in more than a single cluster. Finally, we merge pattern clusters from different hook corpora, to avoid clusters specific to a single hook word. During merging, we define and utilize core patterns and unconfirmed patterns, which are weighed differently during cluster labeling (see Section 4.2). We merge clusters from different hook corpora using the following algorithm: 1. Remove all patterns originating from a single hook corpus only. 2. Mark all patterns of all present clusters as unconfirmed. 3. While there exists some cluster C1 from corpus DX containing only unconfirmed patterns: (a) Select a cluster with a minimal number of patterns. (b) For each corpus D different from DX: i. Scan D for clusters C2 that share at least S percent of their patterns, and all of their core patterns, with C1. ii. Add all patterns of C2 to C1, setting all shared patterns as core and all others as unconfirmed. iii. Remove cluster C2. (c) If all of C1’s patterns remain unconfirmed remove C1. 4. If several clusters have the same set of core patterns merge them according to rules (i,ii). At the end of this stage, we have a set of pattern clusters where for each cluster there are two subsets, core patterns and unconfirmed patterns. 230 4 Relationship Classification Up to this stage we did not access the training set in any way and we did not use the fact that the target relations are those holding between nominals. Hence, only a small part of the acquired pattern clusters may be relevant for a given NR classification task, while other clusters can represent completely different relationships (e.g., between verbs). We now use the acquired clusters to learn a model for the given labeled training set and to use this model for classification of the test set. First we describe how we deal with data sparseness. Then we propose a HITS measure used for cluster labeling, and finally we present three different classification methods that utilize pattern clusters. 4.1 Enrichment of Provided Data Our classification algorithm is based on contexts of given nominal pairs. 
Co-appearance of nominal pairs can be very rare (in fact, some word pairs in the Task 4 set co-appear only once in Yahoo web search). Hence we need more contexts where the given nominals or nominals similar to them coappear. This step does not require the training labels (the correct classifications), so we do it for both training and test pairs. We do it in two stages: extracting similar nominals, and obtaining more contexts. 4.1.1 Extracting more words For each nominal pair (w1, w2) in a given sentence S, we use a method similar to (Davidov and Rappoport, 2006) to extract words that have a shared meaning with w1 or w2. We discover such words by scanning our corpora and querying the web for symmetric patterns (obtained automatically from the corpus as in (Davidov and Rappoport, 2006)) that contain w1 or w2. To avoid getting instances of w1,2 with a different meaning, we also require that the second word will appear in the same text paragraph or the same web page. For example, if we are given a pair <loans, students> and we see a sentence ‘... loans and scholarships for students and professionals ...’, we use the symmetric pattern ‘X and Y’ to add the word scholarships to the group of loans and to add the word professionals to the group of students. We do not take words from the sentence ‘In European soccer there are transfers and loans...’ since its context does not contain the word students. In cases where there are only several or zero instances where the two nominals co-appear, we dismiss the latter rule and scan for each nominal separately. Note that ‘loans’ can also be a verb, so usage of a part-of-speech tagger might reduce noise. If the number of instances for a desired nominal is very low, our algorithm trims the first words in these nominal and repeats the search (e.g., <simulation study, voluminous results> becomes <study, results>). This step is the only one specific to English, using the nature of English noun compounds. Our desire in this case is to keep the head words. 4.1.2 Extracting more contexts using the new words To find more instances where nominals similar to w1 and w2 co-appear in HFW patterns, we construct web queries using combinations of each nominal’s group and extract patterns from the search result snapshots (the two line summary provided by search engines for each search result). 4.2 The HITS Measure To use clusters for classification we define a HITS measure similar to that of (Davidov et al., 2007), reflecting the affinity of a given nominal pair to a given cluster. We use the pattern clusters from Section 3 and the additional data collected during the enrichment phase to estimate a HITS value for each cluster and each pair in the training and test sets. For a given nominal pair (w1, w2) and cluster C with n core patterns Pcore and m unconfirmed patterns Punconf, HITS(C, (w1, w2)) = |{p; (w1, w2) appears in p ∈Pcore}| /n+ α × |{p; (w1, w2) appears in p ∈Punconf}| /m. In this formula, ‘appears in’ means that the nominal pair appears in instances of this pattern extracted from the original corpus or retrieved from the web at the previous stage. Thus if some pair appears in most of the patterns of some cluster it receives a high HITS value for this cluster. α (0..1) is a parameter that lets us modify the relative weight of core and unconfirmed patterns. 231 4.3 Classification Using Pattern Clusters We present three ways to use pattern clusters for relationship classification. 
4.3.1 Classification by cluster labeling One way to train a classifier in our case is to attach a single relationship label to each cluster during the training phase, and to assign each unlabeled pair to some labeled cluster during the test phase. We use the following normalized HITS measure to label the involved pattern clusters. Denote by ki the number of training pairs in class i in training set T. Then Label(C) = argmaxi X p∈T,Label(p)=i hits(C, p)/ki Clusters where the above sum is zero remain unlabeled. In the test phase we assign to each test pair p the label of the labeled cluster C that received the highest HITS(C, p) value. If there are several clusters with a highest HITS value, then the algorithm selects a ‘clarifying’ set of patterns – patterns that are different in these best clusters. Then it constructs clarifying web queries that contain the test nominal pair inside the clarifying patterns. The effect is to increment the HITS value of the cluster containing a clarifying pattern if an appropriate pattern instance (including the target nominals) was found on the web. We start with the most frequent clarifying pattern and perform additional queries until no clarifying patterns are left or until some labeled cluster obtains a highest HITS value. If no patterns are left but there are still several winning clusters, we assign to the pair the label of the cluster with the largest number of pattern instances in the corpus. One advantage of this method is that we get as a by-product a set of labeled pattern clusters. Examination of this set can help to distinguish and analyze (by means of patterns) which different relationships actually exist for each class in the training set. Furthermore, labeled pattern clusters can be used for web queries to obtain additional examples of the same relationship. 4.3.2 Classification by cluster HITS values as features In this method we treat the HITS measure for a cluster as a feature for a machine learning classification algorithm. To do this, we construct feature vectors from each training pair, where each feature is the HITS measure corresponding to a single pattern cluster. We prepare test vectors similarly. Once we have feature vectors, we can use a variety of classifiers (we used those in Weka) to construct a model and to evaluate it on the test set. 4.3.3 Unsupervised clustering If we are not given any training set, it is still possible to separate between different relationship types by grouping the feature vectors of Section 4.3.2 into clusters. This can be done by applying k-means or another clustering algorithm to the feature vectors described above. This makes the whole approach completely unsupervised. However, it does not provide any inherent labeling, making an evaluation difficult. 5 Experimental Setup The main problem in a fair evaluation of NR classification is that there is no widely accepted list of possible relationships between nominals. In our evaluation we have selected the setup and data from SemEval-07 Task 4 (Girju et al., 2007). Selecting this type of dataset allowed us to compare to 6 submitted state-of-art systems that evaluated on exactly the same data and to 9 other systems that utilize additional information (WN labels). We have applied our three different classification methods on the given data set. 5.1 SemEval-07 Task 4 Overview Task 4 (Girju et al., 2007) involves classification of relationships between simple nominals other than named entities. 
Seven distinct relationships were chosen: Cause-Effect, Instrument-Agency, ProductProducer, Origin-Entity, Theme-Tool, Part-Whole, and Content-Container. For each relationship, the provided dataset consists of 140 training and 70 test examples. Examples were binary tagged as belonging/not belonging to the tested relationship. The vast majority of negative examples were near-misses, acquired from the web using the same lexico-syntactic patterns as the positives. Examples appear as sentences with the nominal pair tagged. Nouns in this pair were manually labeled with their corresponding WordNet 3 labels and the web queries used to 232 obtain the sentences. The 15 submitted systems were assigned into 4 categories according to whether they use the WordNet and Query tags (some systems were assigned to more than a single category, since they reported experiments in several settings). In our evaluation we do not utilize WordNet or Query tags, hence we compare ourselves with the corresponding group (A), containing 6 systems. 5.2 Corpus and Web Access Our algorithm uses two corpora. We estimate frequencies and perform primary search on a local web corpus containing about 68GB untagged plain text. This corpus was extracted from the web starting from open directory links, comprising English web pages with varied topics and styles (Gabrilovich and Markovitch, 2005). To enrich the set of given word pairs and patterns as described in Section 4.1 and to perform clarifying queries, we utilize the Yahoo API for web queries. For each query, if the desired words/patterns were found in a page link’s snapshot, we do not use the link, otherwise we download the page from the retrieved link and then extract the required data. If only several links were found for a given word pair we perform local crawling to depth 3 in an attempt to discover more instances. 5.3 Parameters and Learning Algorithm Our algorithm utilizes several parameters. Instead of calibrating them manually, we only provided a desired range for each, and the final parameter values were obtained during selection of the bestperforming setup using 10-fold cross-validation on the training set. For each parameter we have estimated its desired range using the (Nastase and Szpakowicz, 2003) set as a development set. Note that this set uses an entirely different relationship classification scheme. We ran the pattern clustering phase on 128 different sets of parameters, obtaining 128 different clustering schemes with varied granularity, noise and coverage. The parameter ranges obtained are: FC (metapattern content word frequency and upper bound for hook word selection): 100−5000 words per million (wpm); FH (meta-pattern HFW): 10 −100 wpm; FB (low word count for hook word filtering): 1−50 wpm; N (number of hook words): 100 −1000; W (window size): 5 or window = sentence; L (target word mutual information filter): 1/3 −1/5; S (cluster overlap filter for cluster merging): 2/3; α (core vs. unconfirmed weight for HITS estimation): 0.1 −0.01; S (commonality for cluster merging): 2/3. As designed, each parameter indeed influences a certain effect. Naturally, the parameters are not mutually independent. Selecting the best configuration in the cross-validation phase makes the algorithm flexible and less dependent on hard-coded parameter values. Selection of learning algorithm and its algorithmspecific parameters were done as follows. 
For each of the 7 classification tasks (one per relationship type), for each of the 128 pattern clustering schemes, we prepared a list of most of the compatible algorithms available in Weka, and we automatically selected the model (a parameter set and an algorithm) which gave the best 10-fold cross-validation results. The winning algorithms were LWL (Atkeson et al., 1997), SMO (Platt, 1999), and K* (Cleary and Trigg, 1995) (there were 7 tasks, and different algorithms could be selected for each task). We then used the obtained model to classify the testing set. This allowed us to avoid fixing parameters that are best for a specific dataset but not for others. Since each dataset has only 140 examples, the computation time of each learning algorithm is negligible. 6 Results The pattern clustering phase results in 90 to 3000 distinct pattern clusters, depending on the parameter setup. Manual sampling of these clusters indeed reveals that many clusters contain patterns specific to some apparent lexical relationship. For example, we have discovered such clusters as: {‘buy Y accessory for X!’, ‘shipping Y for X’, ‘Y is available for X’, ‘Y are available for X’, ‘Y are available for X systems’, ‘Y for X’ } and {‘best X for Y’, ‘X types for Y’, ‘Y with X’, ‘X is required for Y’, ‘X as required for Y’, ‘X for Y’}. Note that some patterns (‘Y for X’) can appear in many clusters. We applied the three classification methods described in Section 4.3 to Task 4 data. For supervised classification we strictly followed the SemEval datasets and rules. For unsupervised classification we did not use any training data. Using the k-means algorithm, we obtained two nearly equal unlabeled 233 Method P R F Acc Unsupervised clustering (4.3.3) 64.5 61.3 62.0 64.5 Cluster Labeling (4.3.1) 65.1 69.0 67.2 68.5 HITS Features (4.3.2) 69.1 70.6 70.6 70.1 Best Task 4 (no WordNet) 66.1 66.7 64.8 66.0 Best Task 4 (with WordNet) 79.7 69.8 72.4 76.3 Table 1: Our SemEval-07 Task 4 results. Relation Type F Acc C Cause-Effect 69.7 71.4 2 Instrument-Agency 76.5 74.2 1 Product-Producer 76.4 83.8 1 Origin-Entity 65.4 62.6 4 Theme-Tool 59.4 58.7 6 Part-Whole 74.3 70.9 1 Content-Container 72.6 69.2 2 Table 2: By-relation Task 4 HITS-based results. C is the number of clusters with positive labels. clusters containing test samples. For evaluation we assigned a negative/positive label to these two clusters according to the best alignment with true labels. Table 1 shows our results, along with the best Task 4 result not using WordNet labels (Costello, 2007). For reference, the best results overall (Beamer et al., 2007) are also shown. The table shows precision (P) recall (R), F-score (F), and Accuracy (Acc) (percentage of correctly classified examples). We can see that while our algorithm is not as good as the best method that utilizes WordNet tags, results are superior to all participants who did not use these tags. We can also see that the unsupervised method results are above the random baseline (50%). In fact, our results (f-score 62.0, accuracy 64.5) are better than the averaged results (58.0, 61.1) of the group that did not utilize WN tags. Table 2 shows the HITS-based classification results (F-score and Accuracy) and the number of positively labeled clusters (C) for each relation. As observed by participants of Task 4, we can see that different sets vary greatly in difficulty. 
However, we also obtain a nice insight as to why this happens – relations like Theme-Tool seem very ambiguous and are mapped to several clusters, while relations like Product-Producer seem to be well-defined by the obtained pattern clusters. The SemEval dataset does not explicitly mark items whose correct classification requires analysis of the context of the whole sentence in which they appear. Since our algorithm does not utilize test sentence contextual information, we do not expect it to show exceptional performance on such items. This is a good topic for future research. Since the SemEval dataset is of a very specific nature, we have also applied our classification framework to the (Nastase and Szpakowicz, 2003) dataset, which contains 600 pairs labeled with 5 main relationship types. We have used the exact evaluation procedure described in (Turney, 2006), achieving a class f-score average of 60.1, as opposed to 54.6 in (Turney, 2005) and 51.2 in (Nastase et al., 2006). This shows that our method produces superior results for rather differing datasets. 7 Conclusion Relationship classification is known to improve many practical tasks, e.g., textual entailment (Tatu and Moldovan, 2005). We have presented a novel framework for relationship classification, based on pattern clusters prepared as a standalone resource independently of the training set. Our method outperforms current state-of-the-art algorithms that do not utilize WordNet tags on Task 4 of SemEval-07. In practical situations, it would not be feasible to provide a large amount of such sense disambiguation tags manually. Our method also shows competitive performance compared to the majority of task participants that do utilize WN tags. Our method can produce labeled pattern clusters, which can be potentially useful for automatic discovery of additional instances for a given relationship. We intend to pursue this promising direction in future work. Acknowledgement. We would like to thank the anonymous reviewers, whose comments have greatly improved the quality of this paper. References Aramaki, E., Imai, T., Miyo, K., and Ohe, K., 2007. UTH: semantic relation classification using physical sizes. ACL SemEval ’07 Workshop. Atkeson, C., Moore, A., and Schaal, S., 1997. Locally weighted learning. Artificial Intelligence Review, 11(1–5): 75–113. 234 Banko, M., Cafarella, M. J., Soderland, S., Broadhead, M., and Etzioni, O., 2007. Open information extraction from the Web. IJCAI ’07. Beamer, B., Bhat, S., Chee, B., Fister, A., Rozovskaya A., and Girju, R., 2007. UIUC: A knowledge-rich approach to identifying semantic relations between nominals. ACL SemEval ’07 Workshop. Bedmar, I. S., Samy, D., and Martinez, J. L., 2007. UC3M: Classification of semantic relations between nominals using sequential minimal optimization. ACL SemEval ’07 Workshop. Cleary, J. G. , Trigg, L. E., 1995. K*: An instance-based learner using and entropic distance measure. ICML ’95. Costello, F. J., 2007. UCD-FC: Deducing semantic relations using WordNet senses that occur frequently in a database of noun-noun compounds. ACL SemEval ’07 Workshop. Davidov, D., Rappoport, A., 2006. Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words. COLING-ACL ’06 Davidov D., Rappoport A. and Koppel M., 2007. Fully unsupervised discovery of concept-specific relationships by Web mining. ACL ’07. Davidov, D., Rappoport, A., 2008. 
Unsupervised discovery of generic relationships using pattern clusters and its evaluation by automatically generated SAT analogy questions. ACL ’08. Gabrilovich, E., Markovitch, S., 2005. Feature generation for text categorization using world knowledge. IJCAI ’05. Girju, R., Giuglea, A., Olteanu, M., Fortu, O., Bolohan, O., and Moldovan, D., 2004. Support vector machines applied to the classification of semantic relations in nominalized noun phrases. HLT/NAACL ’04 Workshop on Computational Lexical Semantics. Girju, R., Moldovan, D., Tatu, M., and Antohe, D., 2005. On the semantics of noun compounds. Computer Speech and Language, 19(4):479-496. Girju, R., Badulescu, A., and Moldovan, D., 2006. Automatic discovery of part-whole relations. Computational Linguistics, 32(1). Girju, R., Hearst, M., Nakov, P., Nastase, V., Szpakowicz, S., Turney, P., and Yuret, D., 2007. Task 04: Classification of semantic relations between nominal at SemEval 2007. 4th Intl. Workshop on Semantic Evaluations (SemEval ’07), in ACL ’07. Hearst, M., 1992. Automatic acquisition of hyponyms from large text corpora. COLING ’92 Hendrickx, I., Morante, R., Sporleder, C., and van den Bosch, A., 2007. Machine learning of semantic relations with shallow features and almost no data. ACL SemEval ’07 Workshop. Kim, S.N., Baldwin, T., 2007. MELB-KB: Nominal classification as noun compound interpretation. ACL SemEval ’07 Workshop. Moldovan, D., Badulescu, A., Tatu, M., Antohe, D., and Girju, R., 2004. Models for the semantic classification of noun phrases. HLT-NAACL ’04 Workshop on Computational Lexical Semantics. Nakov, P., and Hearst, M., 2007. UCB: System description for SemEval Task #4. ACL SemEval ’07 Workshop. Nastase, V., Szpakowicz, S., 2003. Exploring nounmodifier semantic relations. In Fifth Intl. Workshop on Computational Semantics (IWCS-5). Nastase, V., Sayyad-Shirabad, J., Sokolova, M., and Szpakowicz, S., 2006. Learning noun-modifier semantic relations with corpus-based and WordNet-based features. In Proceedings of the 21st National Conference on Artificial Intelligence, Boston, MA. Pantel, P., Ravichandran, D., and Hovy, E., 2004. Towards terascale knowledge acquisition. COLING ’04. Pantel, P., Pennacchiotti, M., 2006. Espresso: leveraging generic patterns for automatically harvesting semantic relations. COLING-ACL ’06. Platt, J., 1999. Fast training of support vector machines using sequential minimal optimization. In Scholkopf, Burges, and Smola, Advances in Kernel Methods – Support Vector Learning, pp. 185–208. MIT Press. Rosario, B., Hearst, M., 2001. Classifying the semantic relations in noun compounds. EMNLP ’01. Rosenfeld, B., Feldman, R., 2007. Clustering for unsupervised relation identification. CIKM ’07. Snow, R., Jurafsky, D., Ng, A.Y., 2006. Semantic taxonomy induction from heterogeneous evidence. COLING-ACL ’06. Strube, M., Ponzetto, S., 2006. WikiRelate! computing semantic relatedness using Wikipedia. AAAI ’06. Tatu, M., Moldovan, D., 2005. A semantic approach to recognizing textual entailment. HLT/EMNLP ’05. Turney, P., 2005. Measuring semantic similarity by latent relational analysis. IJCAI ’05. Turney, P., 2006. Expressing implicit semantic relations without supervision. COLING-ACL ’06. Witten, H., Frank, E., 1999. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufman, San Francisco, CA. 235
Proceedings of ACL-08: HLT, pages 236–244, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Vector-based Models of Semantic Composition Jeff Mitchell and Mirella Lapata School of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK [email protected], [email protected] Abstract This paper proposes a framework for representing the meaning of phrases and sentences in vector space. Central to our approach is vector composition which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models which we evaluate empirically on a sentence similarity task. Experimental results demonstrate that the multiplicative models are superior to the additive alternatives when compared against human judgments. 1 Introduction Vector-based models of word meaning (Lund and Burgess, 1996; Landauer and Dumais, 1997) have become increasingly popular in natural language processing (NLP) and cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar (Harris, 1968). A variety of NLP tasks have made good use of vector-based models. Examples include automatic thesaurus extraction (Grefenstette, 1994), word sense discrimination (Sch¨utze, 1998) and disambiguation (McCarthy et al., 2004), collocation extraction (Schone and Jurafsky, 2001), text segmentation (Choi et al., 2001) , and notably information retrieval (Salton et al., 1975). In cognitive science vector-based models have been successful in simulating semantic priming (Lund and Burgess, 1996; Landauer and Dumais, 1997) and text comprehension (Landauer and Dumais, 1997; Foltz et al., 1998). Moreover, the vector similarities within such semantic spaces have been shown to substantially correlate with human similarity judgments (McDonald, 2000) and word association norms (Denhire and Lemaire, 2004). Despite their widespread use, vector-based models are typically directed at representing words in isolation and methods for constructing representations for phrases or sentences have received little attention in the literature. In fact, the commonest method for combining the vectors is to average them. Vector averaging is unfortunately insensitive to word order, and more generally syntactic structure, giving the same representation to any constructions that happen to share the same vocabulary. This is illustrated in the example below taken from Landauer et al. (1997). Sentences (1-a) and (1-b) contain exactly the same set of words but their meaning is entirely different. (1) a. It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem. b. That day the office manager, who was drinking, hit the problem sales worker with a bottle, but it was not serious. While vector addition has been effective in some applications such as essay grading (Landauer and Dumais, 1997) and coherence assessment (Foltz et al., 1998), there is ample empirical evidence that syntactic relations across and within sentences are crucial for sentence and discourse processing (Neville et al., 1991; West and Stanovich, 1986) and modulate cognitive behavior in sentence priming (Till et al., 1988) and inference tasks (Heit and 236 Rubinstein, 1994). 
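A toy sketch of the order-insensitivity point (the two-dimensional vectors below are invented purely for illustration): because averaging ignores word order, any two sentences built from the same words, such as (1-a) and (1-b), end up with essentially the same representation.

```python
import numpy as np

# Invented toy vectors for a handful of the content words in example (1).
vecs = {
    "office":   np.array([1.0, 0.2]),
    "manager":  np.array([0.6, 0.9]),
    "worker":   np.array([0.5, 0.7]),
    "bottle":   np.array([0.1, 1.2]),
    "drinking": np.array([0.3, 0.4]),
    "problem":  np.array([0.8, 0.1]),
}

sent_a = ["manager", "bottle", "office", "worker", "drinking", "problem"]
sent_b = ["office", "manager", "drinking", "problem", "worker", "bottle"]

avg_a = np.mean([vecs[w] for w in sent_a], axis=0)
avg_b = np.mean([vecs[w] for w in sent_b], axis=0)

print(np.allclose(avg_a, avg_b))  # True: the average is blind to word order
```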
Computational models of semantics which use symbolic logic representations (Montague, 1974) can account naturally for the meaning of phrases or sentences. Central in these models is the notion of compositionality — the meaning of complex expressions is determined by the meanings of their constituent expressions and the rules used to combine them. Here, semantic analysis is guided by syntactic structure, and therefore sentences (1-a) and (1-b) receive distinct representations. The downside of this approach is that differences in meaning are qualitative rather than quantitative, and degrees of similarity cannot be expressed easily. In this paper we examine models of semantic composition that are empirically grounded and can represent similarity relations. We present a general framework for vector-based composition which allows us to consider different classes of models. Specifically, we present both additive and multiplicative models of vector combination and assess their performance on a sentence similarity rating experiment. Our results show that the multiplicative models are superior and correlate significantly with behavioral data. 2 Related Work The problem of vector composition has received some attention in the connectionist literature, particularly in response to criticisms of the ability of connectionist representations to handle complex structures (Fodor and Pylyshyn, 1988). While neural networks can readily represent single distinct objects, in the case of multiple objects there are fundamental difficulties in keeping track of which features are bound to which objects. For the hierarchical structure of natural language this binding problem becomes particularly acute. For example, simplistic approaches to handling sentences such as John loves Mary and Mary loves John typically fail to make valid representations in one of two ways. Either there is a failure to distinguish between these two structures, because the network fails to keep track of the fact that John is subject in one and object in the other, or there is a failure to recognize that both structures involve the same participants, because John as a subject has a distinct representation from John as an object. In contrast, symbolic representations can naturally handle the binding of constituents to their roles, in a systematic manner that avoids both these problems. Smolensky (1990) proposed the use of tensor products as a means of binding one vector to another. The tensor product u ⊗v is a matrix whose components are all the possible products uivj of the components of vectors u and v. A major difficulty with tensor products is their dimensionality which is higher than the dimensionality of the original vectors (precisely, the tensor product has dimensionality m × n). To overcome this problem, other techniques have been proposed in which the binding of two vectors results in a vector which has the same dimensionality as its components. Holographic reduced representations (Plate, 1991) are one implementation of this idea where the tensor product is projected back onto the space of its components. The projection is defined in terms of circular convolution a mathematical function that compresses the tensor product of two vectors. The compression is achieved by summing along the transdiagonal elements of the tensor product. Noisy versions of the original vectors can be recovered by means of circular correlation which is the approximate inverse of circular convolution. 
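A small numerical sketch of these two binding operations (the vectors are made up): the tensor product of two n-dimensional vectors is an n x n matrix, while circular convolution compresses that matrix back to n dimensions by summing its transdiagonal elements.

```python
import numpy as np

def circular_convolution(u, v):
    """p_i = sum_j u_j * v_[(i - j) mod n]: the transdiagonal sums of u (x) v."""
    n = len(u)
    return np.array([sum(u[j] * v[(i - j) % n] for j in range(n))
                     for i in range(n)])

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 1.0, 0.0])

tensor = np.outer(u, v)            # 3 x 3 matrix: dimensionality grows to m * n
p = circular_convolution(u, v)     # compressed back down to 3 dimensions

# The same result via the convolution theorem, using FFTs.
p_fft = np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)).real
print(np.allclose(p, p_fft))       # True
```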
The success of circular correlation crucially depends on the components of the n-dimensional vectors u and v being randomly distributed with mean 0 and variance 1 n. This poses problems for modeling linguistic data which is typically represented by vectors with non-random structure. Vector addition is by far the most common method for representing the meaning of linguistic sequences. For example, assuming that individual words are represented by vectors, we can compute the meaning of a sentence by taking their mean (Foltz et al., 1998; Landauer and Dumais, 1997). Vector addition does not increase the dimensionality of the resulting vector. However, since it is order independent, it cannot capture meaning differences that are modulated by differences in syntactic structure. Kintsch (2001) proposes a variation on the vector addition theme in an attempt to model how the meaning of a predicate (e.g., run) varies depending on the arguments it operates upon (e.g, the horse ran vs. the color ran). The idea is to add not only the vectors representing the predicate and its argument but also the neighbors associated with both of them. The neighbors, Kintsch argues, can ‘strengthen features of the predicate that are appropriate for the argument of the predication’. 237 animal stable village gallop jokey horse 0 6 2 10 4 run 1 8 4 4 0 Figure 1: A hypothetical semantic space for horse and run Unfortunately, comparisons across vector composition models have been few and far between in the literature. The merits of different approaches are illustrated with a few hand picked examples and parameter values and large scale evaluations are uniformly absent (see Frank et al. (2007) for a criticism of Kintsch’s (2001) evaluation standards). Our work proposes a framework for vector composition which allows the derivation of different types of models and licenses two fundamental composition operations, multiplication and addition (and their combination). Under this framework, we introduce novel composition models which we compare empirically against previous work using a rigorous evaluation methodology. 3 Composition Models We formulate semantic composition as a function of two vectors, u and v. We assume that individual words are represented by vectors acquired from a corpus following any of the parametrisations that have been suggested in the literature.1 We briefly note here that a word’s vector typically represents its co-occurrence with neighboring words. The construction of the semantic space depends on the definition of linguistic context (e.g., neighbouring words can be documents or collocations), the number of components used (e.g., the k most frequent words in a corpus), and their values (e.g., as raw co-occurrence frequencies or ratios of probabilities). A hypothetical semantic space is illustrated in Figure 1. Here, the space has only five dimensions, and the matrix cells denote the co-occurrence of the target words (horse and run) with the context words animal, stable, and so on. Let p denote the composition of two vectors u and v, representing a pair of constituents which stand in some syntactic relation R. Let K stand for any additional knowledge or information which is needed to construct the semantics of their composi1A detailed treatment of existing semantic space models is outside the scope of the present paper. We refer the interested reader to Pad´o and Lapata (2007) for a comprehensive overview. tion. 
We define a general class of models for this process of composition as: p = f(u,v,R,K) (1) The expression above allows us to derive models for which p is constructed in a distinct space from u and v, as is the case for tensor products. It also allows us to derive models in which composition makes use of background knowledge K and models in which composition has a dependence, via the argument R, on syntax. To derive specific models from this general framework requires the identification of appropriate constraints to narrow the space of functions being considered. One particularly useful constraint is to hold R fixed by focusing on a single well defined linguistic structure, for example the verb-subject relation. Another simplification concerns K which can be ignored so as to explore what can be achieved in the absence of additional knowledge. This reduces the class of models to: p = f(u,v) (2) However, this still leaves the particular form of the function f unspecified. Now, if we assume that p lies in the same space as u and v, avoiding the issues of dimensionality associated with tensor products, and that f is a linear function, for simplicity, of the cartesian product of u and v, then we generate a class of additive models: p = Au+Bv (3) where A and B are matrices which determine the contributions made by u and v to the product p. In contrast, if we assume that f is a linear function of the tensor product of u and v, then we obtain multiplicative models: p = Cuv (4) where C is a tensor of rank 3, which projects the tensor product of u and v onto the space of p. Further constraints can be introduced to reduce the free parameters in these models. So, if we assume that only the ith components of u and v contribute to the ith component of p, that these components are not dependent on i, and that the function is symmetric with regard to the interchange of u 238 and v, we obtain a simpler instantiation of an additive model: pi = ui +vi (5) Analogously, under the same assumptions, we obtain the following simpler multiplicative model: pi = ui ·vi (6) For example, according to (5), the addition of the two vectors representing horse and run in Figure 1 would yield horse+run = [1 14 6 14 4]. Whereas their product, as given by (6), is horse·run = [0 48 8 40 0]. Although the composition model in (5) is commonly used in the literature, from a linguistic perspective, the model in (6) is more appealing. Simply adding the vectors u and v lumps their contents together rather than allowing the content of one vector to pick out the relevant content of the other. Instead, it could be argued that the contribution of the ith component of u should be scaled according to its relevance to v, and vice versa. In effect, this is what model (6) achieves. As a result of the assumption of symmetry, both these models are ‘bag of words’ models and word order insensitive. Relaxing the assumption of symmetry in the case of the simple additive model produces a model which weighs the contribution of the two components differently: pi = αui +βvi (7) This allows additive models to become more syntax aware, since semantically important constituents can participate more actively in the composition. As an example if we set α to 0.4 and β to 0.6, then horse = [0 2.4 0.8 4 1.6] and run = [0.6 4.8 2.4 2.4 0], and their sum horse+run = [0.6 5.6 3.2 6.4 1.6]. 
An extreme form of this differential in the contribution of constituents is where one of the vectors, say u, contributes nothing at all to the combination: pi = v j (8) Admittedly the model in (8) is impoverished and rather simplistic, however it can serve as a simple baseline against which to compare more sophisticated models. The models considered so far assume that components do not ‘interfere’ with each other, i.e., that only the ith components of u and v contribute to the ith component of p. Another class of models can be derived by relaxing this constraint. To give a concrete example, circular convolution is an instance of the general multiplicative model which breaks this constraint by allowing u j to contribute to pi: pi = ∑ j u j ·vi−j (9) It is also possible to re-introduce the dependence on K into the model of vector composition. For additive models, a natural way to achieve this is to include further vectors into the summation. These vectors are not arbitrary and ideally they must exhibit some relation to the words of the construction under consideration. When modeling predicate-argument structures, Kintsch (2001) proposes including one or more distributional neighbors, n, of the predicate: p = u+v+∑n (10) Note that considerable latitude is allowed in selecting the appropriate neighbors. Kintsch (2001) considers only the m most similar neighbors to the predicate, from which he subsequently selects k, those most similar to its argument. So, if in the composition of horse with run, the chosen neighbor is ride, ride = [2 15 7 9 1], then this produces the representation horse+run+ride = [3 29 13 23 5]. In contrast to the simple additive model, this extended model is sensitive to syntactic structure, since n is chosen from among the neighbors of the predicate, distinguishing it from the argument. Although we have presented multiplicative and additive models separately, there is nothing inherent in our formulation that disallows their combination. The proposal is not merely notational. One potential drawback of multiplicative models is the effect of components with value zero. Since the product of zero with any number is itself zero, the presence of zeroes in either of the vectors leads to information being essentially thrown away. Combining the multiplicative model with an additive model, which does not suffer from this problem, could mitigate this problem: pi = αui +βvi +γuivi (11) where α, β, and γ are weighting constants. 239 4 Evaluation Set-up We evaluated the models presented in Section 3 on a sentence similarity task initially proposed by Kintsch (2001). In his study, Kintsch builds a model of how a verb’s meaning is modified in the context of its subject. He argues that the subjects of ran in The color ran and The horse ran select different senses of ran. This change in the verb’s sense is equated to a shift in its position in semantic space. To quantify this shift, Kintsch proposes measuring similarity relative to other verbs acting as landmarks, for example gallop and dissolve. The idea here is that an appropriate composition model when applied to horse and ran will yield a vector closer to the landmark gallop than dissolve. Conversely, when color is combined with ran, the resulting vector will be closer to dissolve than gallop. Focusing on a single compositional structure, namely intransitive verbs and their subjects, is a good point of departure for studying vector combination. Any adequate model of composition must be able to represent argument-verb meaning. 
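For reference, the concrete composition functions derived above, equations (5)–(7), (10) and (11), can be sketched in a few lines and checked against the hypothetical horse/run space of Figure 1 (the weighting constants below are just the illustrative values used in the running text, not fitted parameters):

```python
import numpy as np

# Hypothetical space of Figure 1; components: animal, stable, village, gallop, jockey.
horse = np.array([0.0, 6.0, 2.0, 10.0, 4.0])
run   = np.array([1.0, 8.0, 4.0, 4.0, 0.0])
ride  = np.array([2.0, 15.0, 7.0, 9.0, 1.0])   # a neighbor of the predicate

def add(u, v):                         # equation (5)
    return u + v

def multiply(u, v):                    # equation (6)
    return u * v

def weighted_add(u, v, a, b):          # equation (7)
    return a * u + b * v

def kintsch(u, v, neighbors):          # equation (10)
    return u + v + np.sum(neighbors, axis=0)

def combined(u, v, a, b, g):           # equation (11)
    return a * u + b * v + g * u * v

print(add(horse, run))                 # [ 1. 14.  6. 14.  4.]
print(multiply(horse, run))            # [ 0. 48.  8. 40.  0.]
print(kintsch(horse, run, [ride]))     # [ 3. 29. 13. 23.  5.]
print(weighted_add(horse, run, 0.4, 0.6))
print(combined(horse, run, 0.4, 0.6, 0.1))   # illustrative weights only
```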
Moreover by using a minimal structure we factor out inessential degrees of freedom and are able to assess the merits of different models on an equal footing. Unfortunately, Kintsch (2001) demonstrates how his own composition algorithm works intuitively on a few hand selected examples but does not provide a comprehensive test set. In order to establish an independent measure of sentence similarity, we assembled a set of experimental materials and elicited similarity ratings from human subjects. In the following we describe our data collection procedure and give details on how our composition models were constructed and evaluated. Materials and Design Our materials consisted of sentences with an an intransitive verb and its subject. We first compiled a list of intransitive verbs from CELEX2. All occurrences of these verbs with a subject noun were next extracted from a RASP parsed (Briscoe and Carroll, 2002) version of the British National Corpus (BNC). Verbs and nouns that were attested less than fifty times in the BNC were removed as they would result in unreliable vectors. Each reference subject-verb tuple (e.g., horse ran) was paired with two landmarks, each a synonym of the verb. The landmarks were chosen so as to represent distinct verb senses, one compatible 2http://www.ru.nl/celex/ with the reference (e.g., horse galloped) and one incompatible (e.g., horse dissolved). Landmarks were taken from WordNet (Fellbaum, 1998). Specifically, they belonged to different synsets and were maximally dissimilar as measured by the Jiang and Conrath (1997) measure.3 Our initial set of candidate materials consisted of 20 verbs, each paired with 10 nouns, and 2 landmarks (400 pairs of sentences in total). These were further pretested to allow the selection of a subset of items showing clear variations in sense as we wanted to have a balanced set of similar and dissimilar sentences. In the pretest, subjects saw a reference sentence containing a subject-verb tuple and its landmarks and were asked to choose which landmark was most similar to the reference or neither. Our items were converted into simple sentences (all in past tense) by adding articles where appropriate. The stimuli were administered to four separate groups; each group saw one set of 100 sentences. The pretest was completed by 53 participants. For each reference verb, the subjects’ responses were entered into a contingency table, whose rows corresponded to nouns and columns to each possible answer (i.e., one of the two landmarks). Each cell recorded the number of times our subjects selected the landmark as compatible with the noun or not. We used Fisher’s exact test to determine which verbs and nouns showed the greatest variation in landmark preference and items with p-values greater than 0.001 were discarded. This yielded a reduced set of experimental items (120 in total) consisting of 15 reference verbs, each with 4 nouns, and 2 landmarks. Procedure and Subjects Participants first saw a set of instructions that explained the sentence similarity task and provided several examples. Then the experimental items were presented; each contained two sentences, one with the reference verb and one with its landmark. Examples of our items are given in Table 1. Here, burn is a high similarity landmark (High) for the reference The fire glowed, whereas beam is a low similarity landmark (Low). 
The opposite is the case for the reference The face 3We assessed a wide range of semantic similarity measures using the WordNet similarity package (Pedersen et al., 2004). Most of them yielded similar results. We selected Jiang and Conrath’s measure since it has been shown to perform consistently well across several cognitive and NLP tasks (Budanitsky and Hirst, 2001). 240 Noun Reference High Low The fire glowed burned beamed The face glowed beamed burned The child strayed roamed digressed The discussion strayed digressed roamed The sales slumped declined slouched The shoulders slumped slouched declined Table 1: Example Stimuli with High and Low similarity landmarks glowed. Sentence pairs were presented serially in random order. Participants were asked to rate how similar the two sentences were on a scale of one to seven. The study was conducted remotely over the Internet using Webexp4, a software package designed for conducting psycholinguistic studies over the web. 49 unpaid volunteers completed the experiment, all native speakers of English. Analysis of Similarity Ratings The reliability of the collected judgments is important for our evaluation experiments; we therefore performed several tests to validate the quality of the ratings. First, we examined whether participants gave high ratings to high similarity sentence pairs and low ratings to low similarity ones. Figure 2 presents a box-and-whisker plot of the distribution of the ratings. As we can see sentences with high similarity landmarks are perceived as more similar to the reference sentence. A Wilcoxon rank sum test confirmed that the difference is statistically significant (p < 0.01). We also measured how well humans agree in their ratings. We employed leave-one-out resampling (Weiss and Kulikowski, 1991), by correlating the data obtained from each participant with the ratings obtained from all other participants. We used Spearman’s ρ, a non parametric correlation coefficient, to avoid making any assumptions about the distribution of the similarity ratings. The average inter-subject agreement5 was ρ = 0.40. We believe that this level of agreement is satisfactory given that naive subjects are asked to provide judgments on fine-grained semantic distinctions (see Table 1). More evidence that this is not an easy task comes from Figure 2 where we observe some overlap in the ratings for High and Low similarity items. 4http://www.webexp.info/ 5Note that Spearman’s rho tends to yield lower coefficients compared to parametric alternatives such as Pearson’s r. High Low 0 1 2 3 4 5 6 7 Figure 2: Distribution of elicited ratings for High and Low similarity items Model Parameters Irrespectively of their form, all composition models discussed here are based on a semantic space for representing the meanings of individual words. The semantic space we used in our experiments was built on a lemmatised version of the BNC. Following previous work (Bullinaria and Levy, 2007), we optimized its parameters on a word-based semantic similarity task. The task involves examining the degree of linear relationship between the human judgments for two individual words and vector-based similarity values. We experimented with a variety of dimensions (ranging from 50 to 500,000), vector component definitions (e.g., pointwise mutual information or log likelihood ratio) and similarity measures (e.g., cosine or confusion probability). 
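As a hedged illustration of this kind of configuration search (helper names, data layout and the use of scipy are assumptions, not the authors' code), the sketch below builds vectors whose components are the ratio of the probability of a context word given the target word to the overall probability of the context word, and scores a configuration by correlating cosine similarities with human word-similarity ratings:

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def build_vector(target_counts, target_total, context_probs):
    """Components are p(context | target) / p(context) for each of the K
    context words (indexed 0..K-1); zero where the pair never co-occurs."""
    vec = np.zeros(len(context_probs))
    for c, n in target_counts.items():      # c: context-word index, n: count
        vec[c] = (n / target_total) / context_probs[c]
    return vec

def score_configuration(vectors, judged_pairs):
    """Correlate model similarities with human ratings for (w1, w2, rating)
    triples, e.g. from a WordSim-style benchmark."""
    sims = [cosine(vectors[w1], vectors[w2]) for w1, w2, _ in judged_pairs]
    gold = [r for _, _, r in judged_pairs]
    rho, _ = spearmanr(sims, gold)
    return rho
```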
We used WordSim353, a benchmark dataset (Finkelstein et al., 2002), consisting of relatedness judgments (on a scale of 0 to 10) for 353 word pairs. We obtained best results with a model using a context window of five words on either side of the target word, the cosine measure, and 2,000 vector components. The latter were the most common context words (excluding a stop list of function words). These components were set to the ratio of the probability of the context word given the target word to the probability of the context word overall. This configuration gave high correlations with the WordSim353 similarity judgments using the cosine measure. In addition, Bullinaria and Levy (2007) found that these parameters perform well on a number of other tasks such as the synonymy task from the Test of English as a Foreign Language (TOEFL). Our composition models have no additional pa241 rameters beyond the semantic space just described, with three exceptions. First, the additive model in (7) weighs differentially the contribution of the two constituents. In our case, these are the subject noun and the intransitive verb. To this end, we optimized the weights on a small held-out set. Specifically, we considered eleven models, varying in their weightings, in steps of 10%, from 100% noun through 50% of both verb and noun to 100% verb. For the best performing model the weight for the verb was 80% and for the noun 20%. Secondly, we optimized the weightings in the combined model (11) with a similar grid search over its three parameters. This yielded a weighted sum consisting of 95% verb, 0% noun and 5% of their multiplicative combination. Finally, Kintsch’s (2001) additive model has two extra parameters. The m neighbors most similar to the predicate, and the k of m neighbors closest to its argument. In our experiments we selected parameters that Kintsch reports as optimal. Specifically, m was set to 20 and m to 1. Evaluation Methodology We evaluated the proposed composition models in two ways. First, we used the models to estimate the cosine similarity between the reference sentence and its landmarks. We expect better models to yield a pattern of similarity scores like those observed in the human ratings (see Figure 2). A more scrupulous evaluation requires directly correlating all the individual participants’ similarity judgments with those of the models.6 We used Spearman’s ρ for our correlation analyses. Again, better models should correlate better with the experimental data. We assume that the inter-subject agreement can serve as an upper bound for comparing the fit of our models against the human judgments. 5 Results Our experiments assessed the performance of seven composition models. These included three additive models, i.e., simple addition (equation (5), Add), weighted addition (equation (7), WeightAdd), and Kintsch’s (2001) model (equation (10), Kintsch), a multiplicative model (equation (6), Multiply), and also a model which combines multiplication with 6We avoided correlating the model predictions with averaged participant judgments as this is inappropriate given the ordinal nature of the scale of these judgments and also leads to a dependence between the number of participants and the magnitude of the correlation coefficient. 
addition (equation (11), Combined). As a baseline we simply estimated the similarity between the reference verb and its landmarks without taking the subject noun into account (equation (8), NonComp).

Model        High   Low    ρ
NonComp      0.27   0.26   0.08**
Add          0.59   0.59   0.04*
WeightAdd    0.35   0.34   0.09**
Kintsch      0.47   0.45   0.09**
Multiply     0.42   0.28   0.17**
Combined     0.38   0.28   0.19**
UpperBound   4.94   3.25   0.40**

Table 2: Model means for High and Low similarity items and correlation coefficients with human judgments (*: p < 0.05, **: p < 0.01)

Table 2 shows the average model ratings for High and Low similarity items. For comparison, we also show the human ratings for these items (UpperBound). Here, we are interested in relative differences, since the two types of ratings correspond to different scales. Model similarities have been estimated using cosine, which ranges from 0 to 1, whereas our subjects rated the sentences on a scale from 1 to 7. The simple additive model fails to distinguish between High and Low similarity items. We observe a similar pattern for the non-compositional baseline model, the weighted additive model and Kintsch (2001). The multiplicative and combined models yield means closer to the human ratings. The differences between High and Low similarity values estimated by these models are statistically significant (p < 0.01 using the Wilcoxon rank sum test). Figure 3 shows the distribution of estimated similarities under the multiplicative model. The results of our correlation analysis are also given in Table 2. As can be seen, all models are significantly correlated with the human ratings. In order to establish which ones fit our data better, we examined whether the correlation coefficients achieved differ significantly using a t-test (Cohen and Cohen, 1983). The lowest correlation (ρ = 0.04) is observed for the simple additive model, which is not significantly different from the non-compositional baseline model. The weighted additive model (ρ = 0.09) is not significantly different from the baseline either, or from Kintsch (2001) (ρ = 0.09). Given that the basis
Previous applications of vector addition to document indexing (Deerwester et al., 1990) or essay grading (Landauer et al., 1997) were more concerned with modeling the gist of a document rather than the meaning of its sentences. Importantly, additive models capture composition by considering all vector components representing the meaning of the verb and its subject, whereas multiplicative models consider a subset, namely non-zero components. The resulting vector is sparser but expresses more succinctly the meaning of the predicate-argument structure, and thus allows semantic similarity to be modelled more accurately. Further research is needed to gain a deeper understanding of vector composition, both in terms of modeling a wider range of structures (e.g., adjectivenoun, noun-noun) and also in terms of exploring the space of models more fully. We anticipate that more substantial correlations can be achieved by implementing more sophisticated models from within the framework outlined here. In particular, the general class of multiplicative models (see equation (4)) appears to be a fruitful area to explore. Future directions include constraining the number of free parameters in linguistically plausible ways and scaling to larger datasets. The applications of the framework discussed here are many and varied both for cognitive science and NLP. We intend to assess the potential of our composition models on context sensitive semantic priming (Till et al., 1988) and inductive inference (Heit and Rubinstein, 1994). NLP tasks that could benefit from composition models include paraphrase identification and context-dependent language modeling (Coccaro and Jurafsky, 1998). References E. Briscoe, J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the 3rd International Conference on Language Resources and Evaluation, 1499–1504, Las Palmas, Canary Islands. A. Budanitsky, G. Hirst. 2001. Semantic distance in WordNet: An experimental, application-oriented evaluation of five measures. In Proceedings of ACL Workshop on WordNet and Other Lexical Resources, Pittsburgh, PA. J. Bullinaria, J. Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39:510–526. F. Choi, P. Wiemer-Hastings, J. Moore. 2001. Latent semantic analysis for text segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 109–117, Pittsburgh, PA. N. Coccaro, D. Jurafsky. 1998. Towards better integration of semantic predictors in statistical language modeling. In Proceedings of the 5th International Conference on Spoken Language Processsing, Sydney, Australia. 243 J. Cohen, P. Cohen. 1983. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum. S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, R. A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407. G. Denhire, B. Lemaire. 2004. A computational model of children’s semantic memory. In Proceedings of the 26th Annual Meeting of the Cognitive Science Society, 297–302, Chicago, IL. C. Fellbaum, ed. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, E. Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. J. Fodor, Z. Pylyshyn. 1988. 
Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3– 71. P. W. Foltz, W. Kintsch, T. K. Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Process, 15:285–307. S. Frank, M. Koppen, L. Noordman, W. Vonk. 2007. World knowledge in computational models of discourse comprehension. Discourse Processes. In press. G. Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers. Z. Harris. 1968. Mathematical Structures of Language. Wiley, New York. E. Heit, J. Rubinstein. 1994. Similarity and property effects in inductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20:411–422. J. J. Jiang, D. W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of International Conference on Research in Computational Linguistics, Taiwan. W. Kintsch. 2001. Predication. Cognitive Science, 25(2):173–202. T. K. Landauer, S. T. Dumais. 1997. A solution to Plato’s problem: the latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2):211–240. T. K. Landauer, D. Laham, B. Rehder, M. E. Schreiner. 1997. How well can passage meaning be derived without using word order: A comparison of latent semantic analysis and humans. In Proceedings of 19th Annual Conference of the Cognitive Science Society, 412–417, Stanford, CA. K. Lund, C. Burgess. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments & Computers, 28:203–208. D. McCarthy, R. Koeling, J. Weeds, J. Carroll. 2004. Finding predominant senses in untagged text. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, 280–287, Barcelona, Spain. S. McDonald. 2000. Environmental Determinants of Lexical Processing Effort. Ph.D. thesis, University of Edinburgh. R. Montague. 1974. English as a formal language. In R. Montague, ed., Formal Philosophy. Yale University Press, New Haven, CT. H. Neville, J. L. Nichol, A. Barss, K. I. Forster, M. F. Garrett. 1991. Syntactically based sentence prosessing classes: evidence form event-related brain potentials. Journal of Congitive Neuroscience, 3:151–165. S. Pad´o, M. Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. T. Pedersen, S. Patwardhan, J. Michelizzi. 2004. WordNet::similarity - measuring the relatedness of concepts. In Proceedings of the 5th Annual Meeting of the North American Chapter of the Association for Computational Linguistics, 38–41, Boston, MA. T. A. Plate. 1991. Holographic reduced representations: Convolution algebra for compositional distributed representations. In Proceedings of the 12th International Joint Conference on Artificial Intelligence, 30– 35, Sydney, Australia. G. Salton, A. Wong, C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. P. Schone, D. Jurafsky. 2001. Is knowledge-free induction of multiword unit dictionary headwords a solved problem? In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 100– 108, Pittsburgh, PA. H. Sch¨utze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97–124. P. Smolensky. 1990. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence, 46:159–216. R. E. Till, E. F. Mross, W. Kintsch. 1988. 
Time course of priming for associate and inference words in discourse context. Memory and Cognition, 16:283–299. S. M. Weiss, C. A. Kulikowski. 1991. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, San Mateo, CA. R. F. West, K. E. Stanovich. 1986. Robust effects of syntactic structure on visual word processing. Journal of Memory and Cognition, 14:104–112. 244
Proceedings of ACL-08: HLT, pages 245–253, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Exploiting Feature Hierarchy for Transfer Learning in Named Entity Recognition Andrew Arnold, Ramesh Nallapati and William W. Cohen Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA, USA {aarnold, nmramesh, wcohen}@cs.cmu.edu Abstract We present a novel hierarchical prior structure for supervised transfer learning in named entity recognition, motivated by the common structure of feature spaces for this task across natural language data sets. The problem of transfer learning, where information gained in one learning task is used to improve performance in another related task, is an important new area of research. In the subproblem of domain adaptation, a model trained over a source domain is generalized to perform well on a related target domain, where the two domains’ data are distributed similarly, but not identically. We introduce the concept of groups of closely-related domains, called genres, and show how inter-genre adaptation is related to domain adaptation. We also examine multitask learning, where two domains may be related, but where the concept to be learned in each case is distinct. We show that our prior conveys useful information across domains, genres and tasks, while remaining robust to spurious signals not related to the target domain and concept. We further show that our model generalizes a class of similar hierarchical priors, smoothed to varying degrees, and lay the groundwork for future exploration in this area. 1 Introduction 1.1 Problem definition Consider the task of named entity recognition (NER). Specifically, you are given a corpus of news articles in which all tokens have been labeled as either belonging to personal name mentions or not. The standard supervised machine learning problem is to learn a classifier over this training data that will successfully label unseen test data drawn from the same distribution as the training data, where “same distribution” could mean anything from having the train and test articles written by the same author to having them written in the same language. Having successfully trained a named entity classifier on this news data, now consider the problem of learning to classify tokens as names in e-mail data. An intuitive solution might be to simply retrain the classifier, de novo, on the e-mail data. Practically, however, large, labeled datasets are often expensive to build and this solution would not scale across a large number of different datasets. Clearly the problems of identifying names in news articles and e-mails are closely related, and learning to do well on one should help your performance on the other. At the same time, however, there are serious differences between the two problems that need to be addressed. For instance, capitalization, which will certainly be a useful feature in the news problem, may prove less informative in the e-mail data since the rules of capitalization are followed less strictly in that domain. These are the problems we address in this paper. In particular, we develop a novel prior for named entity recognition that exploits the hierarchical feature space often found in natural language domains (§1.2) and allows for the transfer of information from labeled datasets in other domains (§1.3). 
§2 introduces the maximum entropy (maxent) and conditional random field (CRF) learning techniques employed, along with specifications for the design and training of our hierarchical prior. Finally, in §3 we present an empirical investigation of our prior’s performance against a number of baselines, demonstrating both its effectiveness and robustness. 1.2 Hierarchical feature trees In many NER problems, features are often constructed as a series of transformations of the input training data, performed in sequence. Thus, if our task is to identify tokens as either being (O)utside or (I)nside person names, and we are given the labeled 245 sample training sentence: O O O O O I Give the book to Professor Caldwell (1) one such useful feature might be: Is the token one slot to the left of the current token Professor? We can represent this symbolically as L.1.Professor where we describe the whole space of useful features of this form as: {direction = (L)eft, (C)urrent, (R)ight}.{distance = 1, 2, 3, ...}.{value = Professor, book, ...}. We can conceptualize this structure as a tree, where each slot in the symbolic name of a feature is a branch and each period between slots represents another level, going from root to leaf as read left to right. Thus a subsection of the entire feature tree for the token Caldwell could be drawn as in Figure 1 (zoomed in on the section of the tree where the L.1.Professor feature resides). direction L C R distance 1 2 ... ... ... value Professor book... ... ... true false ... Figure 1: Graphical representation of a hierarchical feature tree for token Caldwell in example Sentence 1. Representing feature spaces with this kind of tree, besides often coinciding with the explicit language used by common natural language toolkits (Cohen, 2004), has the added benefit of allowing a model to easily back-off, or smooth, to decreasing levels of specificity. For example, the leaf level of the feature tree for our sample Sentence 1 tells us that the word Professor is important, with respect to labeling person names, when located one slot to the left of the current word being classified. This may be useful in the context of an academic corpus, but might be less useful in a medical domain where the word Professor occurs less often. Instead, we might want to learn the related feature L.1.Dr. In fact, it might be useful to generalize across multiple domains the fact that the word immediately preceding the current word is often important with respect LeftToken.* LeftToken.IsWord.* LeftToken.IsWord.IsTitle.* LeftToken.IsWord.IsTitle.equals.* LeftToken.IsWord.IsTitle.equals.mr Table 1: A few examples of the feature hierarchy to the named entity status of the current word. This is easily accomplished by backing up one level from a leaf in the tree structure to its parent, to represent a class of features such as L.1.*. It has been shown empirically that, while the significance of particular features might vary between domains and tasks, certain generalized classes of features retain their importance across domains (Minkov et al., 2005). By backing-off in this way, we can use the feature hierarchy as a prior for transferring beliefs about the significance of entire classes of features across domains and tasks. Some examples illustrating this idea are shown in table 1. 1.3 Transfer learning When only the type of data being examined is allowed to vary (from news articles to e-mails, for example), the problem is called domain adaptation (Daum´e III and Marcu, 2006). 
When the task being learned varies (say, from identifying person names to identifying protein names), the problem is called multi-task learning (Caruana, 1997). Both of these are considered specific types of the overarching transfer learning problem, and both seem to require a way of altering the classifier learned on the first problem (called the source domain, or source task) to fit the specifics of the second problem (called the target domain, or target task). More formally, given an example x and a class label y, the standard statistical classification task is to assign a probability, p(y|x), to x of belonging to class y. In the binary classification case the labels are Y ∈{0, 1}. In the case we examine, each example xi is represented as a vector of binary features (f1(xi), · · · , fF (xi)) where F is the number of features. The data consists of two disjoint subsets: the training set (Xtrain, Ytrain) = {(x1, y1) · · · , (xN, yN)}, available to the model for its training and the test set Xtest = (x1, · · · , xM), upon which we want to use our trained classifier to make predictions. 246 In the paradigm of inductive learning, (Xtrain, Ytrain) are known, while both Xtest and Ytest are completely hidden during training time. In this cases Xtest and Xtrain are both assumed to have been drawn from the same distribution, D. In the setting of transfer learning, however, we would like to apply our trained classifier to examples drawn from a distribution different from the one upon which it was trained. We therefore assume there are two different distributions, Dsource and Dtarget, from which data may be drawn. Given this notation we can then precisely state the transfer learning problem as trying to assign labels Y target test to test data Xtarget test drawn from Dtarget, given training data (Xsource train , Y source train ) drawn from Dsource. In this paper we focus on two subproblems of transfer learning: • domain adaptation, where we assume Y (the set of possible labels) is the same for both Dsource and Dtarget, while Dsource and Dtarget themselves are allowed to vary between domains. • multi-task learning (Ando and Zhang, 2005; Caruana, 1997; Sutton and McCallum, 2005; Zhang et al., 2005) in which the task (and label set) is allowed to vary from source to target. Domain adaptation can be further distinguished by the degree of relatedness between the source and target domains. For example, in this work we group data collected in the same medium (e.g., all annotated e-mails or all annotated news articles) as belonging to the same genre. Although the specific boundary between domain and genre for a particular set of data is often subjective, it is nevertheless a useful distinction to draw. One common way of addressing the transfer learning problem is to use a prior which, in conjunction with a probabilistic model, allows one to specify a priori beliefs about a distribution, thus biasing the results a learning algorithm would have produced had it only been allowed to see the training data (Raina et al., 2006). In the example from §1.1, our belief that capitalization is less strict in e-mails than in news articles could be encoded in a prior that biased the importance of the capitalization feature to be lower for e-mails than news articles. In the next section we address the problem of how to come up with a suitable prior for transfer learning across named entity recognition problems. 
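As a toy illustration (the container and domain names below are our own shorthand, not code from this paper), the two sub-problems differ only in whether the label set is shared between source and target:

from dataclasses import dataclass

@dataclass(frozen=True)
class Domain:
    name: str
    label_set: frozenset   # the set of labels Y annotated in this domain

def transfer_setting(source: Domain, target: Domain) -> str:
    """Name the transfer sub-problem for a source/target pair."""
    if source.label_set == target.label_set:
        # Same concept to learn; only the data distribution changes.
        return "domain adaptation"
    # The task itself, and hence the label set, changes.
    return "multi-task learning"

news = Domain("news articles", frozenset({"person", "O"}))
email = Domain("e-mails", frozenset({"person", "O"}))
bio = Domain("biological abstracts", frozenset({"protein", "O"}))
print(transfer_setting(news, email))   # domain adaptation
print(transfer_setting(news, bio))     # multi-task learning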
2 Models considered 2.1 Basic Conditional Random Fields In this work, we will base our work on Conditional Random Fields (CRF’s) (Lafferty et al., 2001), which are now one of the most preferred sequential models for many natural language processing tasks. The parametric form of the CRF for a sentence of length n is given as follows: pΛ(Y = y|x) = 1 Z(x) exp( n X i=1 F X j=1 fj(x, yi)λj) (2) where Z(x) is the normalization term. CRF learns a model consisting of a set of weights Λ = {λ1...λF } over the features so as to maximize the conditional likelihood of the training data, p(Ytrain|Xtrain), given the model pΛ. 2.2 CRF with Gaussian priors To avoid overfitting the training data, these λ’s are often further constrained by the use of a Gaussian prior (Chen and Rosenfeld, 1999) with diagonal covariance, N(µ, σ2), which tries to maximize: argmax Λ N X k=1  log pΛ(yk|xk)  −β F X j (λj −µj)2 2σ2 j where β > 0 is a parameter controlling the amount of regularization, and N is the number of sentences in the training set. 2.3 Source trained priors One recently proposed method (Chelba and Acero, 2004) for transfer learning in Maximum Entropy models 1 involves modifying the µ’s of this Gaussian prior. First a model of the source domain, Λsource, is learned by training on {Xsource train , Y source train }. Then a model of the target domain is trained over a limited set of labeled target data n Xtarget train , Y target train o , but instead of regularizing this Λtarget to be near zero (i.e. setting µ = 0), Λtarget is instead regularized towards the previously learned source values Λsource (by setting µ = Λsource, while σ2 remains 1) and thus minimizing (Λtarget −Λsource)2. 1Maximum Entropy models are special cases of CRFs that use the I.I.D. assumption. The method under discussion can also be extended to CRF directly. 247 Note that, since this model requires Y target train in order to learn Λtarget, it, in effect, requires two distinct labeled training datasets: one on which to train the prior, and another on which to learn the model’s final weights (which we call tuning), using the previously trained prior for regularization. If we are unable to find a match between features in the training and tuning datasets (for instance, if a word appears in the tuning corpus but not the training), we backoff to a standard N(0, 1) prior for that feature. 3 y x i i (1) (1) (1) M w (1) 1 y x i i ( M y x i i ( M (2) 2) (2) (3) 3) (3) w w (1) w (1) w1 w w w1 w (1) 2 3 4 (2) (2) (2) 2 3 (3) (3) 2 z z z 1 2 Figure 2: Graphical representation of the hierarchical transfer model. 2.4 New model: Hierarchical prior model In this section, we will present a new model that learns simultaneously from multiple domains, by taking advantage of our feature hierarchy. We will assume that there are D domains on which we are learning simultaneously. Let there be Md training data in each domain d. For our experiments with non-identically distributed, independent data, we use conditional random fields (cf. §2.1). However, this model can be extended to any discriminative probabilistic model such as the MaxEnt model. Let Λ(d) = (λ(d) 1 , · · · , λ(d) Fd ) be the parameters of the discriminative model in the domain d where Fd represents the number of features in the domain d. Further, we will also assume that the features of different domains share a common hierarchy represented by a tree T , whose leaf nodes are the features themselves (cf. Figure 1). The model parameters Λ(d), then, form the parameters of the leaves of this hierarchy. 
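Before the hierarchy enters the estimation, it may help to restate §2.2 and §2.3 operationally. The sketch below is our own code, not the authors' implementation: the standard prior of §2.2 uses means of zero, while the source-trained prior of §2.3 copies the source-domain weight for every feature seen in the source data and falls back to N(0, 1) otherwise; the hierarchical models developed in this and the next section replace that all-or-nothing fallback with back-off in the feature tree.

def source_trained_prior(target_features, source_weights):
    """Per-feature (mu, sigma^2) used when tuning on the target domain.

    mu_f is the source-trained weight if feature f was seen in the source
    data, and 0 otherwise; sigma^2 stays at 1 in both cases (cf. Sec. 2.3).
    """
    mu = {f: source_weights.get(f, 0.0) for f in target_features}
    sigma2 = {f: 1.0 for f in target_features}
    return mu, sigma2

def gaussian_penalty(weights, mu, sigma2, beta=1.0):
    """The term beta * sum_f (lambda_f - mu_f)^2 / (2 sigma_f^2) that is
    subtracted from the conditional log-likelihood during training."""
    return beta * sum(
        (weights[f] - mu.get(f, 0.0)) ** 2 / (2.0 * sigma2.get(f, 1.0))
        for f in weights
    )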
Each non-leaf node n ∈non-leaf(T ) of the tree is also associated with a hyper-parameter zn. Note that since the hierarchy is a tree, each node n has only one parent, represented by pa(n). Similarly, we represent the set of children nodes of a node n as ch(n). The entire graphical model for an example consisting of three domains is shown in Figure 2. The conditional likelihood of the entire training data (y, x) = {(y(d) 1 , x(d) 1 ), · · · , (y(d) Md, x(d) Md)}D d=1 is given by: P(y|x, w, z) = ( D Y d=1 Md Y k=1 P(y(d) k |x(d) k , Λ(d)) ) ×    D Y d=1 Fd Y f=1 N(λ(d) f |zpa(f(d)), 1)    ×    Y n∈Tnonleaf N(zn|zpa(n), 1)    (3) where the terms in the first line of eq. (3) represent the likelihood of data in each domain given their corresponding model parameters, the second line represents the likelihood of each model parameter in each domain given the hyper-parameter of its parent in the tree hierarchy of features and the last term goes over the entire tree T except the leaf nodes. Note that in the last term, the hyper-parameters are shared across the domains, so there is no product over d. We perform a MAP estimation for each model parameter as well as the hyper-parameters. Accordingly, the estimates are given as follows: λ(d) f = Md X i=1 ∂ ∂λ(d) f  log P(yd i |x(d) i , Λ(d))  + zpa(f(d)) zn = zpa(n) + P i∈ch(n)(λ|z)i 1 + |ch(n)| (4) where we used the notation (λ|z)i because node i, the child node of n, could be a parameter node or a hyper-parameter node depending on the position of the node n in the hierarchy. Essentially, in this model, the weights of the leaf nodes (model parameters) depend on the log-likelihood as well as the prior weight of its parent. Additionally, the weight 248 of each hyper-parameter node in the tree is computed as the average of all its children nodes and its parent, resulting in a smoothing effect, both up and down the tree. 2.5 An approximate Hierarchical prior model The Hierarchical prior model is a theoretically well founded model for transfer learning through feature heirarchy. However, our preliminary experiments indicated that its performance on real-life data sets is not as good as expected. Although a more thorough investigation needs to be carried out, our analysis indicates that the main reason for this phenomenon is over-smoothing. In other words, by letting the information propagate from the leaf nodes in the hierarchy all the way to the root node, the model loses its ability to discriminate between its features. As a solution to this problem, we propose an approximate version of this model that weds ideas from the exact heirarchical prior model and the Chelba model. As with the Chelba prior method in §2.3, this approximate hierarchical method also requires two distinct data sets, one for training the prior and another for tuning the final weights. Unlike Chelba, we smooth the weights of the priors using the featuretree hierarchy presented in §1.1, like the hierarchical prior model. For smoothing of each feature weight, we chose to back-off in the tree as little as possible until we had a large enough sample of prior data (measured as M, the number of subtrees below the current node) on which to form a reliable estimate of the mean and variance of each feature or class of features. For example, if the tuning data set is as in Sentence 1, but the prior contains no instances of the word Professor, then we would back-off and compute the prior mean and variance on the next higher level in the tree. 
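Returning for a moment to eq. (4): the hyper-parameter update is just an average over a node's parent and children, which is what produces the smoothing in both directions along the tree. In code (an illustrative fragment of ours, not the authors' implementation):

def update_hyperparameter(parent_value, child_values):
    """z_n = ( z_pa(n) + sum_{i in ch(n)} v_i ) / ( 1 + |ch(n)| )."""
    return (parent_value + sum(child_values)) / (1.0 + len(child_values))

# A node whose parent currently has weight 0.2 and whose children (leaf
# parameters or lower hyper-parameters) have weights 0.5, 0.1 and 0.4:
print(update_hyperparameter(0.2, [0.5, 0.1, 0.4]))   # 0.3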
Thus the prior for L.1.Professor would be N(mean(L.1.*), variance(L.1.*)), where mean() and variance() of L.1.* are the sample mean and variance of all the features in the prior dataset that match the pattern L.1.* – or, put another way, all the siblings of L.1.Professor in the feature tree. If fewer than M such siblings exist, we continue backing-off, up the tree, until an ancestor with sufficient descendants is found. A detailed description of the approximate hierarchical algorithm is shown in table 2. Input: Dsource = (Xsource train , Y source train ) Dtarget = (Xtarget train , Y target train ); Feature sets Fsource, Ftarget; Feature Hierarchies Hsource, Htarget Minimum membership size M Train CRF using Dsource to obtain feature weights Λsource For each feature f ∈Ftarget Initialize: node n = f While (n /∈Hsource or |Leaves(Hsource(n))| ≤M) and n ̸= root(Htarget) n ←Pa(Htarget(n)) Compute µf and σf using the sample {λsource i | i ∈Leaves(Hsource(n))} Train Gaussian prior CRF using Dtarget as data and {µf} and {σf} as Gaussian prior parameters. Output:Parameters of the new CRF Λtarget. Table 2: Algorithm for approximate hierarchical prior: Pa(Hsource(n)) is the parent of node n in feature hierarchy Hsource; |Leaves(Hsource(n))| indicates the number of leaf nodes (basic features) under a node n in the hierarchy Hsource. It is important to note that this smoothed tree is an approximation of the exact model presented in §2.4 and thus an important parameter of this method in practice is the degree to which one chooses to smooth up or down the tree. One of the benefits of this model is that the semantics of the hierarchy (how to define a feature, a parent, how and when to back-off and up the tree, etc.) can be specified by the user, in reference to the specific datasets and tasks under consideration. For our experiments, the semantics of the tree are as presented in §1.1. The Chelba method can be thought of as a hierarchical prior in which no smoothing is performed on the tree at all. Only the leaf nodes of the prior’s feature tree are considered, and, if no match can be found between the tuning and prior’s training datasets’ features, a N(0, 1) prior is used instead. However, in the new approximate hierarchical model, even if a certain feature in the tuning dataset does not have an analog in the training dataset, we can always back-off until an appropriate match is found, even to the level of the root. Henceforth, we will use only the approximate hierarchical model in our experiments and discussion. 249 Table 3: Summary of data used in experiments Corpus Genre Task UTexas Bio Protein Yapex Bio Protein MUC6 News Person MUC7 News Person CSPACE E-mail Person 3 Investigation 3.1 Data, domains and tasks For our experiments, we have chosen five different corpora (summarized in Table 3). Although each corpus can be considered its own domain (due to variations in annotation standards, specific task, date of collection, etc), they can also be roughly grouped into three different genres. These are: abstracts from biological journals [UT (Bunescu et al., 2004), Yapex (Franz´en et al., 2002)]; news articles [MUC6 (Fisher et al., 1995), MUC7 (Borthwick et al., 1998)]; and personal e-mails [CSPACE (Kraut et al., 2004)]. 
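Stated procedurally, the back-off of Table 2 in §2.5 amounts to the sketch below. This is our own simplification with hypothetical names: it omits the explicit hierarchy objects Hsource and Htarget, and membership in a node's subtree is approximated by a prefix test on the dotted feature names of §1.2. For each target feature, we climb the tree until at least M source features fall under the current node, then take their sample mean and variance as that feature's Gaussian prior.

from statistics import mean, pvariance

def generalize(feature, level):
    """Back off `level` steps: generalize("L.1.Professor", 1) -> "L.1.*"."""
    parts = feature.split(".")
    keep = len(parts) - level
    return ".".join(parts[:keep] + ["*"]) if keep > 0 else "*"

def approx_hier_prior(feature, source_weights, min_members=5):
    """Return (mu, sigma^2) for one target feature, per Table 2.

    source_weights maps source-domain feature names to trained weights;
    min_members plays the role of M in Table 2.
    """
    for level in range(feature.count(".") + 2):     # level 0 is the feature itself
        pattern = feature if level == 0 else generalize(feature, level)
        prefix = pattern.rstrip("*")
        sample = [w for f, w in source_weights.items() if f.startswith(prefix)]
        if len(sample) >= min_members:
            return mean(sample), pvariance(sample) or 1.0   # zero variance falls back to 1
    return 0.0, 1.0                                  # nothing matched anywhere: N(0, 1)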
Each corpus, depending on its genre, is labeled with one of two name-finding tasks: • protein names in biological abstracts • person names in news articles and e-mails We chose this array of corpora so that we could evaluate our hierarchical prior’s ability to generalize across and incorporate information from a variety of domains, genres and tasks. In each case, each item (abstract, article or e-mail) was tokenized and each token was hand-labeled as either being part of a name (protein or person) or not, respectively. We used a standard natural language toolkit (Cohen, 2004) to compute tens of thousands of binary features on each of these tokens, encoding such information as capitalization patterns and contextual information from surrounding words. This toolkit produces features of the type described in §1.2 and thus was amenable to our hierarchical prior model. In particular, we chose to use the simplest default, out-of-the-box feature generator and purposefully did not use specifically engineered features, dictionaries, or other techniques commonly employed to boost performance on such tasks. The goal of our experiments was to see to what degree named entity recognition problems naturally conformed to hierarchical methods, and not just to achieve the highest performance possible. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0 20 40 60 80 100 F1 Percent of target-domain data used for tuning Intra-genre transfer performance evaluated on MUC6 (a) GAUSS: tuned on MUC6 (b) CAT: tuned on MUC6+7 (c) HIER: MUC6+7 prior, tuned on MUC6 (d) CHELBA: MUC6+7 prior, tuned on MUC6 Figure 3: Adding a relevant HIER prior helps compared to the GAUSS baseline ((c) > (a)), while simply CAT’ing or using CHELBA can hurt ((d) ≈(b) < (a), except with very little data), and never beats HIER ((c) > (b) ≈(d)). 3.2 Experiments & results We evaluated the performance of various transfer learning methods on the data and tasks described in §3.1. Specifically, we compared our approximate hierarchical prior model (HIER), implemented as a CRF, against three baselines: • GAUSS: CRF model tuned on a single domain’s data, using a standard N(0, 1) prior • CAT: CRF model tuned on a concatenation of multiple domains’ data, using a N(0, 1) prior • CHELBA: CRF model tuned on one domain’s data, using a prior trained on a different, related domain’s data (cf. §2.3) We use token-level F1 as our main evaluation measure, combining precision and recall into one metric. 3.2.1 Intra-genre, same-task transfer learning Figure 3 shows the results of an experiment in learning to recognize person names in MUC6 news articles. In this experiment we examined the effect of adding extra data from a different, but related domain from the same genre, namely, MUC7. Line a shows the F1 performance of a CRF model tuned only on the target MUC6 domain (GAUSS) across a range of tuning data sizes. Line b shows the same experiment, but this time the CRF model has been tuned on a dataset comprised of a simple concatenation of the training MUC6 data from (a), along with a different training set from MUC7 (CAT). We can see that adding extra data in this way, though 250 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 20 40 60 80 100 F1 Percent of target-domain data used for tuning Inter-genre transfer performance evaluated on MUC6 (e) HIER: MUC6+7 prior, tuned on MUC6 (f) CAT: tuned on all domains (g) HIER: all domains prior, tuned on MUC6 (h) CHELBA: all domains prior, tuned on MUC6 Figure 4: Transfer aware priors CHELBA and HIER effectively filter irrelevant data. 
Adding more irrelevant data to the priors doesn’t hurt ((e) ≈(g) ≈(h)), while simply CAT’ing it, in this case, is disastrous ((f) << (e). the data is closely related both in domain and task, has actually hurt the performance of our recognizer for training sizes of moderate to large size. This is most likely because, although the MUC6 and MUC7 datasets are closely related, they are still drawn from different distributions and thus cannot be intermingled indiscriminately. Line c shows the same combination of MUC6 and MUC7, only this time the datasets have been combined using the HIER prior. In this case, the performance actually does improve, both with respect to the single-dataset trained baseline (a) and the naively trained double-dataset (b). Finally, line d shows the results of the CHELBA prior. Curiously, though the domains are closely related, it does more poorly than even the non-transfer GAUSS. One possible explanation is that, although much of the vocabulary is shared across domains, the interpretation of the features of these words may differ. Since CHELBA doesn’t model the hierarchy among features like HIER, it is unable to smooth away these discrepancies. In contrast, we see that our HIER prior is able to successfully combine the relevant parts of data across domains while filtering the irrelevant, and possibly detrimental, ones. This experiment was repeated for other sets of intra-genre tasks, and the results are summarized in §3.2.3. 3.2.2 Inter-genre, multi-task transfer learning In Figure 4 we see that the properties of the hierarchical prior hold even when transferring across tasks. Here again we are trying to learn to recognize person names in MUC6 e-mails, but this time, instead of adding only other datasets similarly labeled with person names, we are additionally adding biological corpora (UT & YAPEX), labeled not with person names but with protein names instead, along with the CSPACE e-mail and MUC7 news article corpora. The robustness of our prior prevents a model trained on all five domains (g) from degrading away from the intra-genre, same-task baseline (e), unlike the model trained on concatenated data (f). CHELBA (h) performs similarly well in this case, perhaps because the domains are so different that almost none of the features match between prior and tuning data, and thus CHELBA backs-off to a standard N(0, 1) prior. This robustness in the face of less similarly related data is very important since these types of transfer methods are most useful when one possesses only very little target domain data. In this situation, it is often difficult to accurately estimate performance and so one would like assurance than any transfer method being applied will not have negative effects. 3.2.3 Comparison of HIER prior to baselines Each scatter plot in Figure 5 shows the relative performance of a baseline method against HIER. Each point represents the results of two experiments: the y-coordinate is the F1 score of the baseline method (shown on the y-axis), while the xcoordinate represents the score of the HIER method in the same experiment. Thus, points lying below the y = x line represent experiments for which HIER received a higher F1 value than did the baseline. While all three plots show HIER outperforming each of the three baselines, not surprisingly, the non-transfer GAUSS method suffers the worst, followed by the naive concatenation (CAT) baseline. 
Both methods fail to make any explicit distinction between the source and target domains and thus suffer when the domains differ even slightly from each other. Although the differences are more subtle, the right-most plot of Figure 5 suggests HIER is likewise able to outperform the nonhierarchical CHELBA prior in certain transfer scenarios. CHELBA is able to avoid suffering as much as the other baselines when faced with large difference between domains, but is still unable to capture 251 0 .2 .4 .6 .8 1 0 .2 .4 .6 .8 1 GAUSS (F1) HIER (F1) 0 .2 .4 .6 .8 1 0 .2 .4 .6 .8 1 CAT (F1) HIER (F1) .4 .6 .8 .4 .6 .8 CHELBA (F1) HIER (F1) ˜ y = x MUC6@3% MUC6@6% MUC6@13% MUC6@25% MUC6@50% MUC6@100% CSPACE@3% CSPACE@6% CSPACE@13% CSPACE@25% CSPACE@50% CSPACE@100% Figure 5: Comparative performance of baseline methods (GAUSS, CAT, CHELBA) vs. HIER prior, as trained on nine prior datasets (both pure and concatenated) of various sample sizes, evaluated on MUC6 and CSPACE datasets. Points below the y = x line indicate HIER outperforming baselines. as many dependencies between domains as HIER. 4 Conclusions, related & future work In this work we have introduced hierarchical feature tree priors for use in transfer learning on named entity extraction tasks. We have provided evidence that motivates these models on intuitive, theoretical and empirical grounds, and have gone on to demonstrate their effectiveness in relation to other, competitive transfer methods. Specifically, we have shown that hierarchical priors allow the user enough flexibility to customize their semantics to a specific problem, while providing enough structure to resist unintended negative effects when used inappropriately. Thus hierarchical priors seem a natural, effective and robust choice for transferring learning across NER datasets and tasks. Some of the first formulations of the transfer learning problem were presented over 10 years ago (Thrun, 1996; Baxter, 1997). Other techniques have tried to quantify the generalizability of certain features across domains (Daum´e III and Marcu, 2006; Jiang and Zhai, 2006), or tried to exploit the common structure of related problems (Ben-David et al., 2007; Sch¨olkopf et al., 2005). Most of this prior work deals with supervised transfer learning, and thus requires labeled source domain data, though there are examples of unsupervised (Arnold et al., 2007), semi-supervised (Grandvalet and Bengio, 2005; Blitzer et al., 2006), and transductive approaches (Taskar et al., 2003). Recent work using so-called meta-level priors to transfer information across tasks (Lee et al., 2007), while related, does not take into explicit account the hierarchical structure of these meta-level features often found in NLP tasks. Daum´e allows an extra degree of freedom among the features of his domains, implicitly creating a two-level feature hierarchy with one branch for general features, and another for domain specific ones, but does not extend his hierarchy further (Daum´e III, 2007)). Similarly, work on hierarchical penalization (Szafranski et al., 2007) in two-level trees tries to produce models that rely only on a relatively small number of groups of variable, as structured by the tree, as opposed to transferring knowledge between branches themselves. Our future work is focused on designing an algorithm to optimally choose a smoothing regime for the learned feature trees so as to better exploit the similarities between domains while neutralizing their differences. 
Along these lines, we are working on methods to reduce the amount of labeled target domain data needed to tune the prior-based models, looking forward to semi-supervised and unsupervised transfer methods. 252 References Rie K. Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. In JMLR 6, pages 1817 – 1853. Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2007. A comparative study of methods for transductive transfer learning. In Proceedings of the IEEE International Conference on Data Mining (ICDM) 2007 Workshop on Mining and Management of Biological Data. Jonathan Baxter. 1997. A Bayesian/information theoretic model of learning to learn via multiple task sampling. Machine Learning, 28(1):7–39. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2007. Analysis of representations for domain adaptation. In NIPS 20, Cambridge, MA. MIT Press. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP, Sydney, Australia. A. Borthwick, J. Sterling, E. Agichtein, and R. Grishman. 1998. NYU: Description of the MENE named entity system as used in MUC-7. R. Bunescu, R. Ge, R. Kate, E. Marcotte, R. Mooney, A. Ramani, and Y. Wong. 2004. Comparative experiments on learning information extractors for proteins and their interactions. In Journal of AI in Medicine. Data from ftp://ftp.cs.utexas.edu/pub/mooney/biodata/proteins.tar.gz. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy capitalizer: Little data can help a lot. In Dekang Lin and Dekai Wu, editors, EMNLP 2004, pages 285–292. ACL. S. Chen and R. Rosenfeld. 1999. A gaussian prior for smoothing maximum entropy models. William W. Cohen. 2004. Minorthird: Methods for identifying names and ontological relations in text using heuristics for inducing regularities from data. http://minorthird.sourceforge.net. Hal Daum´e III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. In Journal of Artificial Intelligence Research 26, pages 101–126. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In ACL. David Fisher, Stephen Soderland, Joseph McCarthy, Fangfang Feng, and Wendy Lehnert. 1995. Description of the UMass system as used for MUC-6. Kristofer Franz´en, Gunnar Eriksson, Fredrik Olsson, Lars Asker, Per Lidn, and Joakim C¨oster. 2002. Protein names and how to find them. In International Journal of Medical Informatics. Yves Grandvalet and Yoshua Bengio. 2005. Semisupervised learning by entropy minimization. In CAP, Nice, France. Jing Jiang and ChengXiang Zhai. 2006. Exploiting domain structure for named entity recognition. In Human Language Technology Conference, pages 74 – 81. R. Kraut, S. Fussell, F. Lerch, and J. Espinosa. 2004. Coordination in teams: evidence from a simulated management game. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA. S.-I. Lee, V. Chatalbashev, D. Vickrey, and D. Koller. 2007. Learning a meta-level prior for feature relevance from multiple related tasks. In Proceedings of International Conference on Machine Learning (ICML). Einat Minkov, Richard C. Wang, and William W. Cohen. 2005. 
Extracting personal names from email: Applying named entity recognition to informal text. In HLT/EMNLP. Rajat Raina, Andrew Y. Ng, and Daphne Koller. 2006. Transfer learning by constructing informative priors. In ICML 22. Bernhard Sch¨olkopf, Florian Steinke, and Volker Blanz. 2005. Object correspondence as a machine learning problem. In ICML ’05: Proceedings of the 22nd international conference on Machine learning, pages 776– 783, New York, NY, USA. ACM. Charles Sutton and Andrew McCallum. 2005. Composition of conditional random fields for transfer learning. In HLT/EMLNLP. M. Szafranski, Y. Grandvalet, and P. MorizetMahoudeaux. 2007. Hierarchical penalization. In Advances in Neural Information Processing Systems 20. MIT press. B. Taskar, M.-F. Wong, and D. Koller. 2003. Learning on the test data: Leveraging ‘unseen’ features. In Proc. Twentieth International Conference on Machine Learning (ICML). Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? In NIPS, volume 8, pages 640–646. MIT. J. Zhang, Z. Ghahramani, and Y. Yang. 2005. Learning multiple related tasks using latent independent component analysis. 253
Proceedings of ACL-08: HLT, pages 19–27, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Weakly-Supervised Acquisition of Open-Domain Classes and Class Attributes from Web Documents and Query Logs Marius Pas¸ca Google Inc. Mountain View, California 94043 [email protected] Benjamin Van Durme∗ University of Rochester Rochester, New York 14627 [email protected] Abstract A new approach to large-scale information extraction exploits both Web documents and query logs to acquire thousands of opendomain classes of instances, along with relevant sets of open-domain class attributes at precision levels previously obtained only on small-scale, manually-assembled classes. 1 Introduction Current methods for large-scale information extraction take advantage of unstructured text available from either Web documents (Banko et al., 2007; Snow et al., 2006) or, more recently, logs of Web search queries (Pas¸ca, 2007) to acquire useful knowledge with minimal supervision. Given a manually-specified target attribute (e.g., birth years for people) and starting from as few as 10 seed facts such as (e.g., John Lennon, 1941), as many as a million facts of the same type can be derived from unstructured text within Web documents (Pas¸ca et al., 2006). Similarly, given a manually-specified target class (e.g., Drug) with its instances (e.g., Vicodin and Xanax) and starting from as few as 5 seed attributes (e.g., side effects and maximum dose for Drug), other relevant attributes can be extracted for the same class from query logs (Pas¸ca, 2007). These and other previous methods require the manual specification of the input classes of instances before any knowledge (e.g., facts or attributes) can be acquired for those classes. ∗Contributions made during an internship at Google. The extraction method introduced in this paper mines a collection of Web search queries and a collection of Web documents to acquire open-domain classes in the form of instance sets (e.g., {whales, seals, dolphins, sea lions,...}) associated with class labels (e.g., marine animals), as well as large sets of open-domain attributes for each class (e.g., circulatory system, life cycle, evolution, food chain and scientific name for the class marine animals). In this light, the contributions of this paper are fourfold. First, instead of separately addressing the tasks of collecting unlabeled sets of instances (Lin, 1998), assigning appropriate class labels to a given set of instances (Pantel and Ravichandran, 2004), and identifying relevant attributes for a given set of classes (Pas¸ca, 2007), our integrated method from Section 2 enables the simultaneous extraction of class instances, associated labels and attributes. Second, by exploiting the contents of query logs during the extraction of labeled classes of instances from Web documents, we acquire thousands (4,583, to be exact) of open-domain classes covering a wide range of topics and domains. The accuracy reported in Section 3.2 exceeds 80% for both instance sets and class labels, although the extraction of classes requires a remarkably small amount of supervision, in the form of only a few commonly-used Is-A extraction patterns. Third, we conduct the first study in extracting attributes for thousands of open-domain, automatically-acquired classes, at precision levels over 70% at rank 10, and 67% at rank 20 as described in Section 3.3. The amount of supervision is limited to five seed attributes provided for only one reference class. 
In comparison, the largest previous 19 Knowledge extracted from documents and queries amino acids={phenylalanine, l−cysteine, tryptophan, glutamic acid, lysine, thr, marine animals={whales, seals, dolphins, turtles, sea lions, fishes, penguins, squids, movies={jay and silent bob strike back, romeo must die, we were soldiers, matrix, zoonotic diseases={rabies, west nile virus, leptospirosis, brucellosis, lyme disease, movies: [opening song, cast, characters, actors, film review, movie script, zoonotic diseases: [scientific name, causative agent, mode of transmission, Open−domain labeled classes of instances marine animals: [circulatory system, life cycle, evolution, food chain, eyesight, Open−domain class attributes (2) ornithine, valine, serine, isoleucine, aspartic acid, aspartate, taurine, histidine,...} pacific walrus, aquatic birds, comb jellies, starfish, florida manatees, walruses,...} kill bill, thelma and louise, mad max, field of dreams, ice age, star wars,...} cat scratch fever, foot and mouth disease, venezuelan equine encephalitis,...} amino acids: [titration curve, molecular formula, isoelectric point, density, extinction coefficient, pi, food sources, molecular weight, pka values,...] scientific name, skeleton, digestion, gestation period, reproduction, taxonomy,...] symbolism, special effects, soundboards, history, screenplay, director,...] life cycle, pathology, meaning, prognosis, incubation period, symptoms,...] Query logs Web documents (1) (2) Figure 1: Overview of weakly-supervised extraction of class instances, class labels and class attributes from Web documents and query logs study in attribute extraction reports results on a set of 40 manually-assembled classes, and requires five seed attributes to be provided as input for each class. Fourth, we introduce the first approach to information extraction from a combination of both Web documents and search query logs, to extract opendomain knowledge that is expected to be suitable for later use. In contrast, the textual data sources used in previous studies in large-scale information extraction are either Web documents (Mooney and Bunescu, 2005; Banko et al., 2007) or, recently, query logs (Pas¸ca, 2007), but not both. 2 Extraction from Documents and Queries 2.1 Open-Domain Labeled Classes of Instances Figure 1 provides an overview of how Web documents and queries are used together to acquire opendomain, labeled classes of instances (phase (1) in the figure); and to acquire attributes that capture quantifiable properties of those classes, by mining query logs based on the class instances acquired from the documents, while guiding the extraction based on a few attributes provided as seed examples (phase (2)). As described in Figure 2, the algorithm for deriving labeled sets of class instances starts with the acquisition of candidate pairs {ME} of a class label and an instance, by applying a few extraction patterns to unstructured text within Web documents {D}, while guiding the extraction by the contents of query logs {Q} (Step 1 in Figure 2). This is folInput: set of Is-A extraction patterns {E} . large repository of search queries {Q} . large repository of Web docs {D} . weighting parameters J ∈[0,1] and K∈1..∞ Output: set of pairs of a class label and an instance {<C,I>} Variables: {S} = clusters of distributionally similar phrases . {V} = vectors of contextual matches of queries in text . {ME} = set of pairs of a class label and an instance . {CS} = set of class labels . {X}, {Y} = sets of queries Steps: 01. 
{ME} = Match patterns {E} in docs {D} around {Q} 02. {V} = Match phrases {Q} in docs {D} 03. {S} = Generate clusters of queries based on vectors {V} 04. For each cluster of phrases S in {S} 05. {CS} = ∅ 06. For each query Q of S 07. Insert labels of Q from {ME} into {CS} 08. For each label CS of {CS} 09. {X} = Find queries of S with the label CS in {ME} 10. {Y} = Find clusters of {S} containing some query 10. with the label CS in {ME} 11. If |{X}| > J ×|{S}| 12. If |{Y}| < K 13. For each query X of {X} 14. Insert pair <CS,X> into output pairs {<C,I>} 15. Return pairs {<C,I>} Figure 2: Acquisition of labeled sets of class instances lowed by the generation of unlabeled clusters {S} of distributionally similar queries, by clustering vectors of contextual features collected around the occurrences of queries {Q} within documents {D} (Steps 2 and 3). Finally, the intermediate data {ME} and {S} is merged and filtered into smaller, more accurate labeled sets of instances (Steps 4 through 15). Step 1 in Figure 2 applies lexico-syntactic patterns {E} that aim at extracting Is-A pairs of an instance (e.g., Google) and an associated class label (e.g., Internet search engines) from text. The two patterns, which are inspired by (Hearst, 1992) and have been the de-facto extraction technique in previous work on extracting conceptual hierarchies from text (cf. (Ponzetto and Strube, 2007; Snow et al., 2006)), can be summarized as: ⟨[..] C [such as|including] I [and|,|.]⟩, where I is a potential instance (e.g., Venezuelan equine encephalitis) and C is a potential class label for the instance (e.g., zoonotic diseases), for example in the sentence: “The expansion of the farms increased the spread of zoonotic diseases such as Venezuelan equine encephalitis [..]”. During matching, all string comparisons are caseinsensitive. In order for a pattern to match a sentence, two conditions must be met. First, the class 20 label C from the sentence must be a non-recursive noun phrase whose last component is a plural-form noun (e.g., zoonotic diseases in the above sentence). Second, the instance I from the sentence must also occur as a complete query somewhere in the query logs {Q}, that is, a query containing the instance and nothing else. This heuristic acknowledges the difficulty of pinpointing complex entities within documents (Downey et al., 2007), and embodies the hypothesis that, if an instance is prominent, Web search users will eventually ask about it. In Steps 4 through 14 from Figure 2, each cluster is inspected by scanning all labels attached to one or more queries from the cluster. For each label CS, if a) {ME} indicates that a large number of all queries from the cluster are attached to the label (as controlled by the parameter J in Step 12); and b) those queries are a significant portion of all queries from all clusters attached to the same label in {ME} (as controlled by the parameter K in Step 13), then the label CS and each query with that label are stored in the output pairs {<C,I>} (Steps 13 and 14). The parameters J and K can be used to emphasize precision (higher J and lower K) or recall (lower J and higher K). The resulting pairs of an instance and a class label are arranged into sets of class instances (e.g., {rabies, west nile virus, leptospirosis,...}), each associated with a class label (e.g., zoonotic diseases), and returned in Step 15. 
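Steps 4 through 14 can be read as a per-label filter applied inside each cluster. The sketch below is our own paraphrase of that part of Figure 2 (container names are illustrative), with J and K playing the precision/recall roles described above.

from collections import Counter

def label_clusters(clusters, instance_labels, J=0.01, K=30):
    """Paraphrase of Steps 4-14 in Figure 2.

    clusters:        list of sets of queries (the clusters {S})
    instance_labels: dict mapping a query to the set of class labels
                     attached to it in {ME}
    Returns the output pairs {<C,I>} as a set of (label, instance) tuples.
    """
    # For the test in Step 12: in how many clusters does each label occur?
    clusters_per_label = Counter()
    for cluster in clusters:
        clusters_per_label.update({l for q in cluster for l in instance_labels.get(q, ())})

    pairs = set()
    for cluster in clusters:
        labels_here = {l for q in cluster for l in instance_labels.get(q, ())}
        for label in labels_here:
            matching = [q for q in cluster if label in instance_labels.get(q, ())]
            # Step 11: enough of this cluster carries the label (J), and
            # Step 12: the label is not spread over too many clusters (K).
            if len(matching) > J * len(cluster) and clusters_per_label[label] < K:
                pairs.update((label, q) for q in matching)
    return pairs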
2.2 Open-Domain Class Attributes The labeled classes of instances collected automatically from Web documents are passed as input to phase (2) from Figure 1, which acquires class attributes by mining a collection of Web search queries. The attributes capture properties that are relevant to the class. The extraction of attributes exploits the set of class instances rather than the associated class label, and consists of four stages: 1) identification of a noisy pool of candidate attributes, as remainders of queries that also contain one of the class instances. In the case of the class movies, whose instances include jay and silent bob strike back and kill bill, the query “cast jay and silent bob strike back” produces the candidate attribute cast; 2) construction of internal search-signature vector representations for each candidate attribute, based on queries (e.g., “cast selection for kill bill”) that contain a candidate attribute (cast) and a class instance (kill bill). These vectors consist of counts tied to the frequency with which an attribute occurs with a given “templatized” query. The latter replaces specific attributes and instances from the query with common placeholders, e.g., “X for Y”; 3) construction of a reference internal searchsignature vector representation for a small set of seed attributes provided as input. A reference vector is the normalized sum of the individual vectors corresponding to the seed attributes; 4) ranking of candidate attributes with respect to each class (e.g., movies), by computing similarity scores between their individual vector representations and the reference vector of the seed attributes. The result of the four stages is a ranked list of attributes (e.g., [opening song, cast, characters,...]) for each class (e.g., movies). In a departure from previous work, the instances of each input class are automatically generated as described earlier, rather than manually assembled. Furthermore, the amount of supervision is limited to seed attributes being provided for only one of the classes, whereas (Pas¸ca, 2007) requires seed attributes for each class. To this effect, the extraction includes modifications such that only one reference vector is constructed internally from the seed attributes during the third stage, rather one such vector for each class in (Pas¸ca, 2007); and similarity scores are computed cross-class by comparing vector representations of individual candidate attributes against the only reference vector available during the fourth stage, rather than with respect to the reference vector of each class in (Pas¸ca, 2007). 3 Evaluation 3.1 Textual Data Sources The acquisition of open-domain knowledge, in the form of class instances, labels and attributes, relies on unstructured text available within Web documents maintained by, and search queries submitted to, the Google search engine. The collection of queries is a random sample of fully-anonymized queries in English submitted by Web users in 2006. The sample contains approximately 50 million unique queries. Each query is 21 Found in Count Pct. Examples WordNet? Yes 1931 42.2% baseball players, (original) endangered species Yes 2614 57.0% caribbean countries, (removal) fundamental rights No 38 0.8% agrochemicals, celebs, handhelds, mangas Table 1: Class labels found in WordNet in original form, or found in WordNet after removal of leading words, or not found in WordNet at all accompanied by its frequency of occurrence in the logs. 
The document collection consists of approximately 100 million Web documents in English, as available in a Web repository snapshot from 2006. The textual portion of the documents is cleaned of HTML, tokenized, split into sentences and part-ofspeech tagged using the TnT tagger (Brants, 2000). 3.2 Evaluation of Labeled Classes of Instances Extraction Parameters: The set of instances that can be potentially acquired by the extraction algorithm described in Section 2.1 is heuristically limited to the top five million queries with the highest frequency within the input query logs. In the extracted data, a class label (e.g., search engines) is associated with one or more instances (e.g., google). Similarly, an instance (e.g., google) is associated with one or more class labels (e.g., search engines and internet search engines). The values chosen for the weighting parameters J and K from Section 2.1 are 0.01 and 30 respectively. After discarding classes with fewer than 25 instances, the extracted set of classes consists of 4,583 class labels, each of them associated with 25 to 7,967 instances, with an average of 189 instances per class. Accuracy of Class Labels: Built over many years of manual construction efforts, lexical gold standards such as WordNet (Fellbaum, 1998) provide widecoverage upper ontologies of the English language. Built-in morphological normalization routines make it straightforward to verify whether a class label (e.g., faculty members) exists as a concept in WordNet (e.g., faculty member). When an extracted label (e.g., central nervous system disorders) is not found in WordNet, it is looked up again after iteratively removing its leading words (e.g., nervous system disClass Label={Set of Instances} Parent in C? WordNet american composers={aaron copland, composers Y eric ewazen, george gershwin,...} modern appliances={built-in oven, appliances S ceramic hob, tumble dryer,...} area hospitals={carolinas medical hospitals S center, nyack hospital,...} multiple languages={chuukese, languages N ladino, mandarin, us english,...} Table 2: Correctness judgments for extracted classes whose class labels are found in WordNet only after removal of their leading words (C=Correctness, Y=correct, S=subjectively correct, N=incorrect) orders, system disorders and disorders). As shown in Table 1, less than half of the 4,583 extracted class labels (e.g., baseball players) are found in their original forms in WordNet. The majority of the class labels (2,614 out of 4,583) can be found in WordNet only after removal of one or more leading words (e.g., caribbean countries), which suggests that many of the class labels correspond to finer-grained, automatically-extracted concepts that are not available in the manually-built WordNet. To test whether that is the case, a random sample of 200 class labels, out of the 2,614 labels found to be potentially-useful specific concepts, are manually annotated as correct, subjectively correct or incorrect, as shown in Table 2. A class label is: correct, if it captures a relevant concept although it could not be found in WordNet; subjectively correct, if it is relevant not in general but only in a particular context, either from a subjective viewpoint (e.g., modern appliances), or relative to a particular temporal anchor (e.g., current players), or in connection to a particular geographical area (e.g., area hospitals); or incorrect, if it does not capture any useful concept (e.g., multiple languages). 
The manual analysis of the sample of 200 class labels indicates that 154 (77%) are relevant concepts and 27 (13.5%) are subjectively relevant concepts, for a total of 181 (90.5%) relevant concepts, whereas 19 (9.5%) of the labels are incorrect. It is worth emphasizing the importance of automatically-collected classes judged as relevant and not present in WordNet: caribbean countries, computer manufacturers, entertainment companies, market research firms are arguably very useful and should probably be considered as part of 22 Class Label Size of Instance Sets Class Label Size of Instance Sets M (Manual) E (Extracted) M E M∩E M M (Manual) E (Extracted) M E M∩E M Actor actors 1500 696 23.73 Movie movies 626 2201 30.83 AircraftModel 217 NationalPark parks 59 296 0 Award awards 200 283 13 NbaTeam nba teams 30 66 86.66 BasicFood foods 155 3484 61.93 Newspaper newspapers 599 879 16.02 CarModel car models 368 48 5.16 Painter painters 1011 823 22.45 CartoonChar cartoon 50 144 36 ProgLanguage programming 101 153 26.73 characters languages CellPhoneModel cell phones 204 49 0 Religion religions 128 72 11.71 ChemicalElem chemicals 118 487 1.69 River river systems 167 118 15.56 City cities 589 3642 50.08 SearchEngine search engines 25 133 64 Company companies 738 7036 26.01 SkyBody constellations 97 37 1.03 Country countries 197 677 91.37 Skyscraper 172 Currency currencies 55 128 25.45 SoccerClub football clubs 116 101 22.41 DigitalCamera digital cameras 534 58 0.18 SportEvent sports events 143 73 12.58 Disease diseases 209 3566 65.55 Stadium stadiums 190 92 6.31 Drug drugs 345 1209 44.05 TerroristGroup terrorist groups 74 134 33.78 Empire empires 78 54 6.41 Treaty treaties 202 200 7.42 Flower flowers 59 642 25.42 University universities 501 1127 21.55 Holiday holidays 82 300 48.78 VideoGame video games 450 282 17.33 Hurricane 74 Wine wines 60 270 56.66 Mountain mountains 245 49 7.75 WorldWarBattle battles 127 135 9.44 Total mapped: 37 out of 40 classes 26.89 Table 3: Comparison between manually-assembled instance sets of gold-standard classes (M) and instance sets of automatically-extracted classes (E). Each gold-standard class (M) was manually mapped into an extracted class (E), unless no relevant mapping was found. Ratios ( M∩E M ) are shown as percentages any refinements to hand-built hierarchies, including any future extensions of WordNet. Accuracy of Class Instances: The computation of the precision of the extracted instances (e.g., fifth element and kill bill for the class label movies) relies on manual inspection of all instances associated to a sample of the extracted class labels. Rather than inspecting a random sample of classes, the evaluation validates the results against a reference set of 40 gold-standard classes that were manually assembled as part of previous work (Pas¸ca, 2007). A class from the gold standard consists of a manually-created class label (e.g., AircraftModel) associated with a manually-assembled, and therefore high-precision, set of representative instances of the class. To evaluate the precision of the extracted instances, the manual label of each gold-standard class (e.g., SearchEngine) is mapped into a class label extracted from text (e.g., search engines). As shown in the first two columns of Table 3, the mapping into extracted class labels succeeds for 37 of the 40 goldstandard classes. 28 of the 37 mappings involve linking an abstract class label (e.g., SearchEngine) with the corresponding plural forms among the extracted class labels (e.g., search engines). 
The remaining 9 mappings link a manual class label with either an equivalent extracted class label (e.g., SoccerClub with football clubs), or a strongly-related class label (e.g., NationalPark with parks). No mapping is found for 3 out of the 40 classes, namely AircraftModel, Hurricane and Skyscraper, which are therefore removed from consideration. The sizes of the instance sets available for each class in the gold standard are compared in the third through fifth columns of Table 3. In the table, M stands for manually-assembled instance sets, and E for automatically-extracted instance sets. For example, the gold-standard class SearchEngine contains 25 manually-collected instances, while the parallel class label search engines contains 133 automatically-extracted instances. The fifth column shows the percentage of manually-collected instances (M) that are also extracted automatically (E). In the case of the class SearchEngine, 16 of the 25 manually-collected instances are among the 133 automatically-extracted instances of the same class, 23 Label Value Examples of Attributes vital 1.0 investors: investment strategies okay 0.5 religious leaders: coat of arms wrong 0.0 designers: stephanie Table 4: Labels for assessing attribute correctness which corresponds to a relative coverage of 64% of the manually-collected instance set. Some instances may occur within the manually-collected set but not the automatically-extracted set (e.g., zoominfo and brainbost for the class SearchEngine) or, more frequently, vice-versa (e.g., surfwax, blinkx, entireweb, web wombat, exalead etc.). Overall, the relative coverage of automatically-extracted instance sets with respect to manually-collected instance sets is 26.89%, as an average over the 37 gold-standard classes. More significantly, the size advantage of automatically-extracted instance sets is not the undesirable result of those sets containing many spurious instances. Indeed, the manual inspection of the automatically-extracted instances sets indicates an average accuracy of 79.3% over the 37 gold-standard classes retained in the experiments. To summarize, the method proposed in this paper acquires open-domain classes from unstructured text of arbitrary quality, without a-priori restrictions to specific domains of interest and with virtually no supervision (except for the ubiquitous Is-A extraction patterns), at accuracy levels of around 90% for class labels and 80% for class instances. 3.3 Evaluation of Class Attributes Extraction Parameters: Given a target class specified as a set of instances and a set of five seed attributes for a class (e.g., {quality, speed, number of users, market share, reliability} for SearchEngine), the method described in Section 2.2 extracts ranked lists of class attributes from the input query logs. Internally, the ranking uses Jensen-Shannon (Lee, 1999) to compute similarity scores between internal representations of seed attributes, on one hand, and each of the candidate attributes, on the other hand. Evaluation Procedure: To remove any possible bias towards higher-ranked attributes during the assessment of class attributes, the ranked lists of attributes to be evaluated are sorted alphabetically into a merged list. 
Each attribute of the merged list is 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 Precision Rank Class: Holiday manually assembled instances automatically extracted instances 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 Precision Rank Class: Average-Class manually assembled instances automatically extracted instances 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 Precision Rank Class: Mountain manually assembled instances automatically extracted instances 0 0.2 0.4 0.6 0.8 1 0 10 20 30 40 50 Precision Rank Class: Average-Class manually assembled instances automatically extracted instances Figure 3: Accuracy of attributes extracted based on manually assembled, gold standard (M) vs. automatically extracted (E) instance sets, for a few target classes (leftmost graphs) and as an average over all (37) target classes (rightmost graphs). Seed attributes are provided as input for each target class (top graphs), or for only one target class (bottom graphs) manually assigned a correctness label within its respective class. An attribute is vital if it must be present in an ideal list of attributes of the class; okay if it provides useful but non-essential information; and wrong if it is incorrect. To compute the overall precision score over a ranked list of extracted attributes, the correctness labels are converted to numeric values as shown in Table 4. Precision at some rank N in the list is thus measured as the sum of the assigned values of the first N candidate attributes, divided by N. Accuracy of Class Attributes: Figure 3 plots precision values for ranks 1 through 50 of the lists of attributes extracted through several runs over the 37 gold-standard classes described in the previous section. The runs correspond to different amounts of supervision, specified through a particular choice in the number of seed attributes, and in the source of instances passed as input to the system: • number of input seed attributes: seed attributes are provided either for each of the 37 classes, for a total of 5×37=185 attributes (the graphs at the top of Figure 3); or only for one class (namely, Country), 24 Class Precision Top Ten Extracted Attributes # Class Label={Set of Instances} @5 @10 @15 @20 1 accounting systems={flexcube, 0.70 0.70 0.77 0.70 overview, architecture, interview questions, free myob, oracle financials, downloads, canadian version, passwords, modules, peachtree accounting, sybiz,...} crystal reports, property management, free trial 2 antimicrobials={azithromycin, 1.00 1.00 0.93 0.95 chemical formula, chemical structure, history, chloramphenicol, fusidic acid, invention, inventor, definition, mechanism of quinolones, sulfa drugs,...} action, side-effects, uses, shelf life 5 civilizations={ancient greece, 1.00 1.00 0.93 0.90 social pyramid, climate, geography, flag, chaldeans, etruscans, inca population, social structure, natural resources, indians, roman republic,...} family life, god, goddesses 9 farm animals={angora goats, 1.00 0.80 0.83 0.80 digestive system, evolution, domestication, burros, cattle, cows, donkeys, gestation period, scientific name, adaptations, draft horses, mule, oxen,...} coloring pages, p**, body parts, selective breeding 10 forages={alsike clover, rye grass, 0.90 0.95 0.73 0.57 types, picture, weed control, planting, uses, tall fescue, sericea lespedeza,...} information, herbicide, germination, care, fertilizer Average-Class (25 classes) 0.75 0.70 0.68 0.67 Table 5: Precision of attributes extracted for a sample of 25 classes. Seed attributes are provided for only one class. 
for a total of 5 attributes over all classes (the graphs at the bottom of Figure 3); • source of input instance sets: the instance sets for each class are either manually collected (M from Table 3), or automatically extracted (E from Table 3). The choices correspond to the two curves plotted in each graph in Figure 3. The graphs in Figure 3 show the precision over individual target classes (leftmost graphs), and as an average over all 37 classes (rightmost graphs). As expected, the precision of the extracted attributes as an average over all classes is best when the input instance sets are hand-picked (M), as opposed to automatically extracted (E). However, the loss of precision from M to E is small at all measured ranks. Table 5 offers an alternative view on the quality of the attributes extracted for a random sample of 25 classes out of the larger set of 4,583 classes acquired from text. The 25 classes are passed as input for attribute extraction without modifications. In particular, the instance sets are not manually postfiltered or otherwise changed in any way. To keep the time required to judge the correctness of all extracted attributes within reasonable limits, the evaluation considers only the top 20 (rather than 50) attributes extracted per class. As shown in Table 5, the method proposed in this paper acquires attributes for automatically-extracted, open-domain classes, without a-priori restrictions to specific domains of interest and relying on only five seed attributes specified for only one class, at accuracy levels reaching 70% at rank 10, and 67% at rank 20. 4 Related Work 4.1 Acquisition of Classes of Instances Although some researchers focus on re-organizing or extending classes of instances already available explicitly within manually-built resources such as Wikipedia (Ponzetto and Strube, 2007) or WordNet (Snow et al., 2006) or both (Suchanek et al., 2007), a large body of previous work focuses on compiling sets of instances, not necessarily labeled, from unstructured text. The extraction proceeds either iteratively by starting from a few seed extraction rules (Collins and Singer, 1999), or by mining named entities from comparable news articles (Shinyama and Sekine, 2004) or from multilingual corpora (Klementiev and Roth, 2006). A bootstrapping method (Riloff and Jones, 1999) cautiously grows very small seed sets of five instances of the same class, to fewer than 300 items after 50 consecutive iterations, with a final precision varying between 46% and 76% depending on the type of semantic lexicon. Experimental results from (Feldman and Rosenfeld, 2006) indicate that named entity recognizers can boost the performance of weakly supervised extraction of class instances, but only for a few coarse-grained types such as Person and only if they are simpler to recognize in text (Feldman and Rosenfeld, 2006). 25 In (Cafarella et al., 2005), handcrafted extraction patterns are applied to a collection of 60 million Web documents to extract instances of the classes Company and Country. Based on the manual evaluation of samples of extracted instances, an estimated number of 1,116 instances of Company are extracted at a precision score of 90%. In comparison, the approach of this paper pursues a more aggressive goal, by extracting a larger and more diverse number of labeled classes, whose instances are often more difficult to extract than country names and most company names, at precision scores of almost 80%. 
The task of extracting relevant labels to describe sets of documents, rather than sets of instances, is explored in (Treeratpituk and Callan, 2006). Given pre-existing sets of instances, (Pantel and Ravichandran, 2004) investigates the task of acquiring appropriate class labels to the sets from unstructured text. Various class labels are assigned to a total of 1,432 sets of instances. The accuracy of the class labels is computed over a sample of instances, by manually assessing the correctness of the top five labels returned by the system for each instance. The resulting mean reciprocal rank of 77% gives partial credit to labels of an evaluated instance, even if only the fourth or fifth assigned labels are correct. Our evaluation of the accuracy of class labels is stricter, as it considers only one class label of a given instance at a time, rather than a pool of the best candidate labels. As a pre-requisite to extracting relations among pairs of classes, the method described in (Davidov et al., 2007) extracts class instances from unstructured Web documents, by submitting pairs of instances as queries and analyzing the contents of the top 1,000 documents returned by a Web search engine. For each target class, a small set of instances must be provided manually as seeds. As such, the method can be applied to the task of extracting a large set of open-domain classes only after manually enumerating through the entire set of target classes, and providing seed instances for each. Furthermore, no attempt is made to extract relevant class labels for the sets of instances. Comparatively, the open-domain classes extracted in our paper have an explicit label in addition to the sets of instances, and do not require identifying the range of the target classes in advance, or providing any seed instances as input. The evaluation methodology is also quite different, as the instance sets acquired based on the input seed instances in (Davidov et al., 2007) are only evaluated for three hand-picked classes, with precision scores of 90% for names of countries, 87% for fish species and 68% for instances of constellations. Our evaluation of the accuracy of class instances is again stricter, since the evaluation sample is larger, and includes more varied classes, whose instances are sometimes more difficult to identify in text. 4.2 Acquisition of Class Attributes Previous work on the automatic acquisition of attributes for open-domain classes from text is less general than the extraction method and experiments presented in our paper. Indeed, previous evaluations were restricted to small sets of classes (forty classes in (Pas¸ca, 2007)), whereas our evaluations also consider a random, more diverse sample of open-domain classes. More importantly, by dropping the requirement of manually providing a small set of seed attributes for each target class, and relying on only a few seed attributes specified for one reference class, we harvest class attributes without the need of first determining what the classes should be, what instances they should contain, and from which resources the instances should be collected. 5 Conclusion In a departure from previous approaches to largescale information extraction from unstructured text on the Web, this paper introduces a weaklysupervised extraction framework for mining useful knowledge from a combination of both documents and search query logs. 
In evaluations over labeled classes of instances extracted without a-priori restrictions to specific domains of interest and with very little supervision, the accuracy exceeds 90% for class labels, approaches 80% for class instances, and exceeds 70% (at rank 10) and 67% (at rank 20) for class attributes. Current work aims at expanding the number of instances within each class while retaining similar precision levels; extracting attributes with more consistent precision scores across classes from different domains; and introducing confidence scores in attribute extraction, allowing for the detection of classes for which it is unlikely to extract large numbers of useful attributes from text. 26 References M. Banko, Michael J Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the Web. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670–2676, Hyderabad, India. T. Brants. 2000. TnT - a statistical part of speech tagger. In Proceedings of the 6th Conference on Applied Natural Language Processing (ANLP-00), pages 224–231, Seattle, Washington. M. Cafarella, D. Downey, S. Soderland, and O. Etzioni. 2005. KnowItNow: Fast, scalable information extraction from the Web. In Proceedings of the Human Language Technology Conference (HLT-EMNLP-05), pages 563–570, Vancouver, Canada. M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In Proceedings of the 1999 Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pages 189–196, College Park, Maryland. D. Davidov, A. Rappoport, and M. Koppel. 2007. Fully unsupervised discovery of concept-specific relationships by Web mining. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07), pages 232–239, Prague, Czech Republic. D. Downey, M. Broadhead, and O. Etzioni. 2007. Locating complex named entities in Web text. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2733–2739, Hyderabad, India. R. Feldman and B. Rosenfeld. 2006. Boosting unsupervised relation extraction by using NER. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP-ACL06), pages 473–481, Sydney, Australia. C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database and Some of its Applications. MIT Press. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539–545, Nantes, France. A. Klementiev and D. Roth. 2006. Weakly supervised named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL06), pages 817–824, Sydney, Australia. L. Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th Annual Meeting of the Association of Computational Linguistics (ACL-99), pages 25–32, College Park, Maryland. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-98), pages 768–774, Montreal, Quebec. R. Mooney and R. Bunescu. 2005. Mining knowledge from text using information extraction. 
SIGKDD Explorations, 7(1):3–10. M. Pas¸ ca, D. Lin, J. Bigham, A. Lifchits, and A. Jain. 2006. Organizing and searching the World Wide Web of facts - step one: the one-million fact extraction challenge. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), pages 1400– 1405, Boston, Massachusetts. M. Pas¸ ca. 2007. Organizing and searching the World Wide Web of facts - step two: Harnessing the wisdom of the crowds. In Proceedings of the 16th World Wide Web Conference (WWW-07), pages 101–110, Banff, Canada. P. Pantel and D. Ravichandran. 2004. Automatically labeling semantic classes. In Proceedings of the 2004 Human Language Technology Conference (HLTNAACL-04), pages 321–328, Boston, Massachusetts. S. Ponzetto and M. Strube. 2007. Deriving a large scale taxonomy from Wikipedia. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI07), pages 1440–1447, Vancouver, British Columbia. E. Riloff and R. Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence (AAAI-99), pages 474–479, Orlando, Florida. Y. Shinyama and S. Sekine. 2004. Named entity discovery using comparable news articles. In Proceedings of the 20th International Conference on Computational Linguistics (COLING-04), pages 848–853, Geneva, Switzerland. R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLINGACL-06), pages 801–808, Sydney, Australia. F. Suchanek, G. Kasneci, and G. Weikum. 2007. Yago: a core of semantic knowledge unifying WordNet and Wikipedia. In Proceedings of the 16th World Wide Web Conference (WWW-07), pages 697–706, Banff, Canada. P. Treeratpituk and J. Callan. 2006. Automatically labeling hierarchical clusters. In Proceedings of the 7th Annual Conference on Digital Government Research (DGO-06), pages 167–176, San Diego, California. 27
2008
3
Proceedings of ACL-08: HLT, pages 254–262, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Refining Event Extraction through Cross-document Inference Heng Ji Ralph Grishman Computer Science Department New York University New York, NY 10003, USA (hengji, grishman)@cs.nyu.edu Abstract We apply the hypothesis of “One Sense Per Discourse” (Yarowsky, 1995) to information extraction (IE), and extend the scope of “discourse” from one single document to a cluster of topically-related documents. We employ a similar approach to propagate consistent event arguments across sentences and documents. Combining global evidence from related documents with local decisions, we design a simple scheme to conduct cross-document inference for improving the ACE event extraction task1. Without using any additional labeled data this new approach obtained 7.6% higher F-Measure in trigger labeling and 6% higher F-Measure in argument labeling over a state-of-the-art IE system which extracts events independently for each sentence. 1 Introduction Identifying events of a particular type within individual documents – ‘classical’ information extraction – remains a difficult task. Recognizing the different forms in which an event may be expressed, distinguishing events of different types, and finding the arguments of an event are all challenging tasks. Fortunately, many of these events will be reported multiple times, in different forms, both within the same document and within topically- related documents (i.e. a collection of documents sharing participants in potential events). We can 1 http://www.nist.gov/speech/tests/ace/ take advantage of these alternate descriptions to improve event extraction in the original document, by favoring consistency of interpretation across sentences and documents. Several recent studies involving specific event types have stressed the benefits of going beyond traditional singledocument extraction; in particular, Yangarber (2006) has emphasized this potential in his work on medical information extraction. In this paper we demonstrate that appreciable improvements are possible over the variety of event types in the ACE (Automatic Content Extraction) evaluation through the use of cross-sentence and cross-document evidence. As we shall describe below, we can make use of consistency at several levels: consistency of word sense across different instances of the same word in related documents, and consistency of arguments and roles across different mentions of the same or related events. Such methods allow us to build dynamic background knowledge as required to interpret a document and can compensate for the limited annotated training data which can be provided for each event type. 2 Task and Baseline System 2.1 ACE Event Extraction Task The event extraction task we are addressing is that of the Automatic Content Extraction (ACE) evaluations2. ACE defines the following terminology: 2 In this paper we don’t consider event mention coreference resolution and so don’t distinguish event mentions and events. 
254 entity: an object or a set of objects in one of the semantic categories of interest mention: a reference to an entity (typically, a noun phrase) event trigger: the main word which most clearly expresses an event occurrence event arguments: the mentions that are involved in an event (participants) event mention: a phrase or sentence within which an event is described, including trigger and arguments The 2005 ACE evaluation had 8 types of events, with 33 subtypes; for the purpose of this paper, we will treat these simply as 33 distinct event types. For example, for a sentence: Barry Diller on Wednesday quit as chief of Vivendi Universal Entertainment. the event extractor should detect a “Personnel_End-Position” event mention, with the trigger word, the position, the person who quit the position, the organization, and the time during which the event happened: Trigger Quit Arguments Role = Person Barry Diller Role = Organization Vivendi Universal Entertainment Role = Position Chief Role = Time-within Wednesday Table 1. Event Extraction Example We define the following standards to determine the correctness of an event mention: • A trigger is correctly labeled if its event type and offsets match a reference trigger. • An argument is correctly identified if its event type and offsets match any of the reference argument mentions. • An argument is correctly identified and classified if its event type, offsets, and role match any of the reference argument mentions. 2.2 A Baseline Within-Sentence Event Tagger We use a state-of-the-art English IE system as our baseline (Grishman et al., 2005). This system extracts events independently for each sentence. Its training and test procedures are as follows. The system combines pattern matching with statistical models. For every event mention in the ACE training corpus, patterns are constructed based on the sequences of constituent heads separating the trigger and arguments. In addition, a set of Maximum Entropy based classifiers are trained: • Trigger Labeling: to distinguish event mentions from non-event-mentions, to classify event mentions by type; • Argument Classifier: to distinguish arguments from non-arguments; • Role Classifier: to classify arguments by argument role. • Reportable-Event Classifier: Given a trigger, an event type, and a set of arguments, to determine whether there is a reportable event mention. In the test procedure, each document is scanned for instances of triggers from the training corpus. When an instance is found, the system tries to match the environment of the trigger against the set of patterns associated with that trigger. This pattern-matching process, if successful, will assign some of the mentions in the sentence as arguments of a potential event mention. The argument classifier is applied to the remaining mentions in the sentence; for any argument passing that classifier, the role classifier is used to assign a role to it. Finally, once all arguments have been assigned, the reportable-event classifier is applied to the potential event mention; if the result is successful, this event mention is reported. 3 Motivations In this section we shall present our motivations based on error analysis for the baseline event tagger. 3.1 One Trigger Sense Per Cluster Across a heterogeneous document corpus, a particular verb can sometimes be trigger and sometimes not, and can represent different event types. However, for a collection of topically-related documents, the distribution may be much more convergent. 
We investigate this hypothesis by automatically obtaining 25 related documents for each test text. The statistics of some trigger examples are presented in table 2. 255 Candidate Triggers Event Type Perc./Freq. as trigger in ACE training corpora Perc./Freq. as trigger in test document Perc./Freq. as trigger in test + related documents Correct Event Triggers advance Movement_Transport 31% of 16 50% of 2 88.9% of 27 fire Personnel_End-Position 7% of 81 100% of 2 100% of 10 fire Conflict_Attack 54% of 81 100% of 3 100% of 19 replace Personnel_End-Position 5% of 20 100% of 1 83.3% of 6 form Business_Start-Org 12% of 8 100% of 2 100% of 23 talk Contact_Meet 59% of 74 100% of 4 100% of 26 Incorrect Event Triggers hurt Life_Injure 24% of 33 0% of 2 0% of 7 execution Life_Die 12% of 8 0% of 4 4% of 24 Table 2. Examples: Percentage of a Word as Event Trigger in Different Data Collections As we can see from the table, the likelihood of a candidate word being an event trigger in the test document is closer to its distribution in the collection of related documents than the uniform training corpora. So if we can determine the sense (event type) of a word in the related documents, this will allow us to infer its sense in the test document. In this way related documents can help recover event mentions missed by within-sentence extraction. For example, in a document about “the advance into Baghdad”: Example 1: [Test Sentence] Most US army commanders believe it is critical to pause the breakneck advance towards Baghdad to secure the supply lines and make sure weapons are operable and troops resupplied…. [Sentences from Related Documents] British and US forces report gains in the advance on Baghdad and take control of Umm Qasr, despite a fierce sandstorm which slows another flank. … The baseline event tagger is not able to detect “advance” as a “Movement_Transport” event trigger because there is no pattern “advance towards [Place]” in the ACE training corpora (“advance” by itself is too ambiguous). The training data, however, does include the pattern “advance on [Place]”, which allows the instance of “advance” in the related documents to be successfully identified with high confidence by pattern matching as an event. This provides us much stronger “feedback” confidence in tagging ‘advance’ in the test sentence as a correct trigger. On the other hand, if a word is not tagged as an event trigger in most related documents, then it’s less likely to be correct in the test sentence despite its high local confidence. For example, in a document about “assessment of Russian president Putin”: Example 2: [Test Sentence] But few at the Kremlin forum suggested that Putin's own standing among voters will be hurt by Russia's apparent diplomacy failures. [Sentences from Related Documents] Putin boosted ties with the United States by throwing his support behind its war on terrorism after the Sept. 11 attacks, but the Iraq war has hurt the relationship. … The word “hurt” in the test sentence is mistakenly identified as a “Life_Injure” trigger with high local confidence (because the within-sentence extractor misanalyzes “voters” as the object of “hurt” and so matches the pattern “[Person] be hurt”). Based on the fact that many other instances of “hurt” are not “Life_Injure” triggers in the related documents, we can successfully remove this wrong event mention in the test document. 
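The cluster-level statistic behind Table 2 — how often a candidate word is tagged as a trigger of a given event type across the test document and its related documents — reduces to simple counting over the locally tagged collection. A minimal sketch; the input representation (token, tagged-type-or-None pairs) is an assumption for illustration, not the system's actual data structure:

from collections import Counter

def trigger_type_percentage(word, tagged_docs, etype):
    """Fraction of occurrences of `word` tagged as a trigger of `etype`
    across a collection (e.g., the test doc plus its 25 related docs).
    `tagged_docs`: list of documents, each a list of (token, tagged_etype_or_None)."""
    total = 0
    as_trigger = Counter()
    for doc in tagged_docs:
        for token, tag in doc:
            if token.lower() == word:
                total += 1
                if tag is not None:
                    as_trigger[tag] += 1
    if total == 0:
        return 0.0, 0
    return as_trigger[etype] / total, total

# Tiny illustrative collection in the spirit of Example 1.
docs = [
    [("advance", "Movement_Transport"), ("towards", None), ("Baghdad", None)],
    [("advance", "Movement_Transport"), ("on", None), ("Baghdad", None)],
    [("advance", None), ("notice", None)],
]
pct, freq = trigger_type_percentage("advance", docs, "Movement_Transport")
print(f"'advance' as Movement_Transport trigger: {pct:.1%} of {freq} occurrences")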
3.2 One Argument Role Per Cluster
Inspired by the observation about trigger distribution, we propose a similar hypothesis – one argument role per cluster – for event arguments. In other words, each entity plays the same argument role, or no role, for events with the same type in a collection of related documents. For example,
Example 3:
[Test Sentence] Vivendi earlier this week confirmed months of press speculation that it planned to shed its entertainment assets by the end of the year.
[Sentences from Related Documents] Vivendi has been trying to sell assets to pay off huge debt, estimated at the end of last month at more than $13 billion. Under the reported plans, Blackstone Group would buy Vivendi's theme park division, including Universal Studios Hollywood, Universal Orlando in Florida... …
The above test sentence doesn't include an explicit trigger word to indicate "Vivendi" as a "seller" of a "Transaction_Transfer-Ownership" event mention, but "Vivendi" is correctly identified as "seller" in many other related sentences (by matching patterns "[Seller] sell" and "buy [Seller]'s"). So we can incorporate such additional information to enhance the confidence of "Vivendi" as a "seller" in the test sentence. On the other hand, we can remove spurious arguments with low cross-document frequency and confidence. In the following example,
Example 4:
[Test Sentence] The Davao Medical Center, a regional government hospital, recorded 19 deaths with 50 wounded.
"the Davao Medical Center" is mistakenly tagged as "Place" for a "Life_Die" event mention. But the same annotation for this mention doesn't appear again in the related documents, so we can determine it's a spurious argument.
4 System Approach Overview
Based on the above motivations we propose to incorporate global evidence from a cluster of related documents to refine local decisions. This section gives more details about the baseline within-sentence event tagger, and the information retrieval system we use to obtain related documents. In the next section we shall focus on describing the inference procedure.
4.1 System Pipeline
Figure 1 depicts the general procedure of our approach. EMSet represents a set of event mentions which is gradually updated.
Figure 1. Cross-doc Inference for Event Extraction (flowchart; components: test doc, within-sent event extraction, cross-sent inference, query construction, query, information retrieval over unlabeled corpora, related docs, within-sent event extraction, cross-sent inference, cross-doc inference; event mention sets EMSet_t^0, EMSet_t^1, EMSet_r^0, EMSet_r^1, EMSet_t^2)
4.2 Within-Sentence Event Extraction
For each event mention in a test document t, the baseline Maximum Entropy based classifiers produce three types of confidence values:
• LConf(trigger, etype): The probability of a string trigger indicating an event mention with type etype; if the event mention is produced by pattern matching then assign confidence 1.
• LConf(arg, etype): The probability that a mention arg is an argument of some particular event type etype.
• LConf(arg, etype, role): If arg is an argument with event type etype, the probability of arg having some particular role.
We apply within-sentence event extraction to get an initial set of event mentions EMSet_t^0, and conduct cross-sentence inference (details will be presented in section 5) to get an updated set of event mentions EMSet_t^1.
4.3 Information Retrieval
We then use the INDRI retrieval system (Strohman et al., 2005) to obtain the top N (N=25 in this paper3) related documents.
We construct an INDRI query from the triggers and arguments, each weighted by local confidence and frequency in the test document. For each argument we also add other names coreferential with or bearing some ACE relation to the argument. For each related document r returned by INDRI, we repeat the within-sentence event extraction and cross-sentence inference procedure, and get an expanded event mention set 1 t r EMSet + . Then we apply cross-document inference to 1 t r EMSet + and get the final event mention output 2 t EMSet . 5 Global Inference The central idea of inference is to obtain document-wide and cluster-wide statistics about the frequency with which triggers and arguments are associated with particular types of events, and then use this information to correct event and argument identification and classification. For a set of event mentions we tabulate the following document-wide and cluster-wide confidence-weighted frequencies: • for each trigger string, the frequency with which it appears as the trigger of an event of a particular type; • for each event argument string and the names coreferential with or related to the argument, the frequency of the event type; • for each event argument string and the names coreferential with or related to the argument, the frequency of the event type and role. Besides these frequencies, we also define the following margin metric to compute the confidence of the best (most frequent) event type or role: Margin = (WeightedFrequency (most frequent value) – WeightedFrequency (second most freq value))/ WeightedFrequency (second most freq value) A large margin indicates greater confidence in the most frequent value. We summarize the frequency and confidence metrics in Table 3. Based on these confidence metrics, we designed the inference rules in Table 4. These rules are applied in the order (1) to (9) based on the principle of improving ‘local’ information before global 3 We tested different N ∈ [10, 75] on dev set; and N=25 achieved best gains. propagation. Although the rules may seem complex, they basically serve two functions: • to remove triggers and arguments with low (local or cluster-wide) confidence; • to adjust trigger and argument identification and classification to achieve (document-wide or cluster-wide) consistency. 6 Experimental Results and Analysis In this section we present the results of applying this inference method to improve ACE event extraction. 6.1 Data We used 10 newswire texts from ACE 2005 training corpora (from March to May of 2003) as our development set, and then conduct blind test on a separate set of 40 ACE 2005 newswire texts. For each test text we retrieved 25 related texts from English TDT5 corpus which in total consists of 278,108 texts (from April to September of 2003). 6.2 Confidence Metric Thresholding We select the thresholds (δk with k=1~13) for various confidence metrics by optimizing the Fmeasure score of each rule on the development set, as shown in Figure 2 and 3 as follows. Each curve in Figure 2 and 3 shows the effect on precision and recall of varying the threshold for an individual rule. Figure 2. Trigger Labeling Performance with Confidence Thresholding on Dev Set 258 Figure 3. Argument Labeling Performance with Confidence Thresholding on Dev Set The labeled point on each curve shows the best F-measure that can be obtained on the development set by adjusting the threshold for that rule. 
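The cluster-wide statistics and the margin metric defined in Section 5 can be computed directly from the locally extracted event mentions. A small sketch using confidence-weighted counts and the margin formula above; the event-mention representation is an assumption for illustration, and a real rule such as Rule (5) would then compare XDoc-Trigger-Freq against its threshold δ7:

from collections import defaultdict

def weighted_trigger_freq(event_mentions):
    """XDoc-Trigger-Freq: confidence-weighted frequency of (trigger, etype) over a cluster.
    Each mention is assumed to be a dict like {"trigger": str, "etype": str, "conf": float}."""
    freq = defaultdict(float)
    for em in event_mentions:
        freq[(em["trigger"], em["etype"])] += em["conf"]
    return freq

def margin(freq, trigger):
    """Margin = (best - second best) / second best, over the event types seen for `trigger`."""
    weights = sorted((w for (t, _), w in freq.items() if t == trigger), reverse=True)
    if len(weights) < 2 or weights[1] == 0.0:
        return float("inf")  # only one candidate type: maximally confident
    return (weights[0] - weights[1]) / weights[1]

# Illustrative cluster of mentions for the trigger "fire".
mentions = [
    {"trigger": "fire", "etype": "Conflict_Attack", "conf": 1.0},
    {"trigger": "fire", "etype": "Conflict_Attack", "conf": 0.8},
    {"trigger": "fire", "etype": "Personnel_End-Position", "conf": 0.4},
]
freq = weighted_trigger_freq(mentions)
print(dict(freq))
print("margin('fire') =", round(margin(freq, "fire"), 2))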
The gain obtained by applying successive rules can be seen in the progression of successive points towards higher recall and, for argument labeling, precision4. 6.3 Overall Performance Table 5 shows the overall Precision (P), Recall (R) and F-Measure (F) scores for the blind test set. In addition, we also measured the performance of two human annotators who prepared the ACE 2005 training data on 28 newswire texts (a subset of the blind test set). The final key was produced by review and adjudication of the two annotations. Both cross-sentence and cross-document inferences provided significant improvement over the baseline with local confidence thresholds controlled. We conducted the Wilcoxon Matched-Pairs Signed-Ranks Test on a document basis. The results show that the improvement using crosssentence inference is significant at a 99.9% confidence level for both trigger and argument labeling; adding cross-document inference is significant at a 99.9% confidence level for trigger labeling and 93.4% confidence level for argument labeling. 4 We didn’t show the classification adjusting rules (2), (6) and (8) here because of their relatively small impact on dev set. 6.4 Discussion From table 5 we can see that for trigger labeling our approach dramatically enhanced recall (22.9% improvement) with some loss (7.4%) in precision. This precision loss was much larger than that for the development set (0.3%). This indicates that the trigger propagation thresholds optimized on the development set were too low for the blind test set and thus more spurious triggers got propagated. The improved trigger labeling is better than one human annotator and only 4.7% worse than another. For argument labeling we can see that crosssentence inference improved both identification (3.7% higher F-Measure) and classification (6.1% higher accuracy); and cross-document inference mainly provided further gains (1.9%) in classification. This shows that identification consistency may be achieved within a narrower context while the classification task favors more global background knowledge in order to solve some difficult cases. This matches the situation of human annotation as well: we may decide whether a mention is involved in some particular event or not by reading and analyzing the target sentence itself; but in order to decide the argument’s role we may need to frequently refer to wider discourse in order to infer and confirm our decision. In fact sometimes it requires us to check more similar web pages or even wikipedia databases. This was exactly the intuition of our approach. We should also note that human annotators label arguments based on perfect entity mentions, but our system used the output from the IE system. So the gap was also partially due to worse entity detection. Error analysis on the inference procedure shows that the propagation rules (3), (4), (7) and (9) produced a few extra false alarms. For trigger labeling, most of these errors appear for support verbs such as “take” and “get” which can only represent an event mention together with other verbs or nouns. Some other errors happen on nouns and adjectives. These are difficult tasks even for human annotators. As shown in table 5 the inter-annotator agreement on trigger identification is only about 40%. Besides some obvious overlooked cases (it’s probably difficult for a human to remember 33 different event types during annotation), most difficulties were caused by judging generic verbs, nouns and adjectives. 
259 Performance System/Human Trigger Identification +Classification Argument Identification Argument Classification Accuracy Argument Identification +Classification P R F P R F P R F Within-Sentence IE with Rule (1) (Baseline) 67.6 53.5 59.7 47.8 38.3 42.5 86.0 41.2 32.9 36.6 Cross-sentence Inference 64.3 59.4 61.8 54.6 38.5 45.1 90.2 49.2 34.7 40.7 Cross-sentence+ Cross-doc Inference 60.2 76.4 67.3 55.7 39.5 46.2 92.1 51.3 36.4 42.6 Human Annotator1 59.2 59.4 59.3 60.0 69.4 64.4 85.8 51.6 59.5 55.3 Human Annotator2 69.2 75.0 72.0 62.7 85.4 72.3 86.3 54.1 73.7 62.4 Inter-Annotator Agreement 41.9 38.8 40.3 55.2 46.7 50.6 91.7 50.6 42.9 46.4 Table 5. Overall Performance on Blind Test Set (%) In fact, compared to a statistical tagger trained on the corpus after expert adjudication, a human annotator tends to make more mistakes in trigger classification. For example it’s hard to decide whether “named” represents a “Personnel_Nominate” or “Personnel_Start-Position” event mention; “hacked to death” represents a “Life_Die” or “Conflict_Attack” event mention without following more specific annotation guidelines. 7 Related Work The trigger labeling task described in this paper is in part a task of word sense disambiguation (WSD), so we have used the idea of sense consistency introduced in (Yarowsky, 1995), extending it to operate across related documents. Almost all the current event extraction systems focus on processing single documents and, except for coreference resolution, operate a sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardy et al., 2006). We share the view of using global inference to improve event extraction with some recent research. Yangarber et al. (Yangarber and Jokipii, 2005; Yangarber, 2006; Yangarber et al., 2007) applied cross-document inference to correct local extraction results for disease name, location and start/end time. Mann (2007) encoded specific inference rules to improve extraction of CEO (name, start year, end year) in the MUC management succession task. In addition, Patwardhan and Riloff (2007) also demonstrated that selectively applying event patterns to relevant regions can improve MUC event extraction. We expand the idea to more general event types and use information retrieval techniques to obtain wider background knowledge from related documents. 8 Conclusion and Future Work One of the initial goals for IE was to create a database of relations and events from the entire input corpus, and allow further logical reasoning on the database. The artificial constraint that extraction should be done independently for each document was introduced in part to simplify the task and its evaluation. In this paper we propose a new approach to break down the document boundaries for event extraction. We gather together event extraction results from a set of related documents, and then apply inference and constraints to enhance IE performance. In the short term, the approach provides a platform for many byproducts. For example, we can naturally get an event-driven summary for the collection of related documents; the sentences including high-confidence events can be used as additional training data to bootstrap the event tagger; from related events in different timeframes we can derive entailment rules; the refined consistent events can serve better for other NLP tasks such as template based question-answering. 
The aggregation approach described here can be easily extended to improve relation detection and coreference resolution (two argument mentions referring to the same role of related events are likely to corefer). Ultimately we would like to extend the system to perform essential, although probably lightweight, event prediction. 260 XSent-Trigger-Freq(trigger, etype) The weighted frequency of string trigger appearing as the trigger of an event of type etype across all sentences within a document XDoc-Trigger-Freq (trigger, etype) The weighted frequency of string trigger appearing as the trigger of an event of type etype across all documents in a cluster XDoc-Trigger-BestFreq (trigger) Maximum over all etypes of XDoc-Trigger-Freq (trigger, etype) XDoc-Arg-Freq(arg, etype) The weighted frequency of arg appearing as an argument of an event of type etype across all documents in a cluster XDoc-Role-Freq(arg, etype, role) The weighted frequency of arg appearing as an argument of an event of type etype with role role across all documents in a cluster XDoc-Role-BestFreq(arg) Maximum over all etypes and roles of XDoc-Role-Freq(arg, etype, role) XSent-Trigger-Margin(trigger) The margin value of trigger in XSent-Trigger-Freq XDoc-Trigger-Margin(trigger) The margin value of trigger in XDoc-Trigger-Freq XDoc-Role-Margin(arg) The margin value of arg in XDoc-Role-Freq Table 3. Global Frequency and Confidence Metrics Rule (1): Remove Triggers and Arguments with Low Local Confidence If LConf(trigger, etype) < δ1, then delete the whole event mention EM; If LConf(arg, etype) < δ2 or LConf(arg, etype, role) < δ3, then delete arg. Rule (2): Adjust Trigger Classification to Achieve Document-wide Consistency If XSent-Trigger-Margin(trigger) >δ4, then propagate the most frequent etype to all event mentions with trigger in the document; and correct roles for corresponding arguments. Rule (3): Adjust Trigger Identification to Achieve Document-wide Consistency If LConf(trigger, etype) > δ5, then propagate etype to all unlabeled strings trigger in the document. Rule (4): Adjust Argument Identification to Achieve Document-wide Consistency If LConf(arg, etype) > δ6, then in the document, for each sentence containing an event mention EM with etype, add any unlabeled mention in that sentence with the same head as arg as an argument of EM with role. Rule (5): Remove Triggers and Arguments with Low Cluster-wide Confidence If XDoc-Trigger-Freq (trigger, etype) < δ7, then delete EM; If XDoc-Arg-Freq(arg, etype) < δ8 or XDoc-Role-Freq(arg, etype, role) < δ9, then delete arg. Rule (6): Adjust Trigger Classification to Achieve Cluster-wide Consistency If XDoc-Trigger-Margin(trigger) >δ10, then propagate most frequent etype to all event mentions with trigger in the cluster; and correct roles for corresponding arguments. Rule (7): Adjust Trigger Identification to Achieve Cluster-wide Consistency If XDoc-Trigger-BestFreq (trigger) >δ11, then propagate etype to all unlabeled strings trigger in the cluster, override the results of Rule (3) if conflict. Rule (8): Adjust Argument Classification to Achieve Cluster-wide Consistency If XDoc-Role-Margin(arg) >δ12, then propagate the most frequent etype and role to all arguments with the same head as arg in the entire cluster. 
Rule (9): Adjust Argument Identification to Achieve Cluster-wide Consistency If XDoc-Role-BestFreq(arg) > δ13, then in the cluster, for each sentence containing an event mention EM with etype, add any unlabeled mention in that sentence with the same head as arg as an argument of EM with role. Table 4. Probabilistic Inference Rule Acknowledgments This material is based upon work supported by the Defense Advanced Research Projects Agency under Contract No. HR0011-06-C-0023, and the National Science Foundation under Grant IIS00325657. Any opinions, findings and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the U. S. Government. 261 References David Ahn. 2006. The stages of event extraction. Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events. Sydney, Australia. Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU’s English ACE 2005 System Description. Proc. ACE 2005 Evaluation Workshop. Washington, US. Hilda Hardy, Vika Kanchakouskaya and Tomek Strzalkowski. 2006. Automatic Event Classification Using Surface Text Features. Proc. AAAI06 Workshop on Event Extraction and Synthesis. Boston, Massachusetts. US. Gideon Mann. 2007. Multi-document Relationship Fusion via Constraints on Probabilistic Databases. Proc. HLT/NAACL 2007. Rochester, NY, US. Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. Proc. EMNLP 2007. Prague, Czech Republic. Trevor Strohman, Donald Metzler, Howard Turtle and W. Bruce Croft. 2005. Indri: A Language-model based Search Engine for Complex Queries (extended version). Technical Report IR-407, CIIR, Umass Amherst, US. Roman Yangarber, Clive Best, Peter von Etter, Flavio Fuart, David Horby and Ralf Steinberger. 2007. Combining Information about Epidemic Threats from Multiple Sources. Proc. RANLP 2007 workshop on Multi-source, Multilingual Information Extraction and Summarization. Borovets, Bulgaria. Roman Yangarber. 2006. Verification of Facts across Document Boundaries. Proc. International Workshop on Intelligent Information Access. Helsinki, Finland. Roman Yangarber and Lauri Jokipii. 2005. Redundancy-based Correction of Automatically Extracted Facts. Proc. HLT/EMNLP 2005. Vancouver, Canada. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. Proc. ACL 1995. Cambridge, MA, US. 262
2008
30
Proceedings of ACL-08: HLT, pages 263–271, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Learning Document-Level Semantic Properties from Free-text Annotations S.R.K. Branavan Harr Chen Jacob Eisenstein Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {branavan, harr, jacobe, regina}@csail.mit.edu Abstract This paper demonstrates a new method for leveraging free-text annotations to infer semantic properties of documents. Free-text annotations are becoming increasingly abundant, due to the recent dramatic growth in semistructured, user-generated online content. An example of such content is product reviews, which are often annotated by their authors with pros/cons keyphrases such as “a real bargain” or “good value.” To exploit such noisy annotations, we simultaneously find a hidden paraphrase structure of the keyphrases, a model of the document texts, and the underlying semantic properties that link the two. This allows us to predict properties of unannotated documents. Our approach is implemented as a hierarchical Bayesian model with joint inference, which increases the robustness of the keyphrase clustering and encourages the document model to correlate with semantically meaningful properties. We perform several evaluations of our model, and find that it substantially outperforms alternative approaches. 1 Introduction A central problem in language understanding is transforming raw text into structured representations. Learning-based approaches have dramatically increased the scope and robustness of this type of automatic language processing, but they are typically dependent on large expert-annotated datasets, which are costly to produce. In this paper, we show how novice-generated free-text annotations available online can be leveraged to automatically infer document-level semantic properties. With the rapid increase of online content created by end users, noisy free-text annotations have pros/cons: great nutritional value ... combines it all: an amazing product, quick and friendly service, cleanliness, great nutrition ... pros/cons: a bit pricey, healthy ... is an awesome place to go if you are health conscious. They have some really great low calorie dishes and they publish the calories and fat grams per serving. Figure 1: Excerpts from online restaurant reviews with pros/cons phrase lists. Both reviews discuss healthiness, but use different keyphrases. become widely available (Vickery and WunschVincent, 2007; Sterling, 2005). For example, consider reviews of consumer products and services. Often, such reviews are annotated with keyphrase lists of pros and cons. We would like to use these keyphrase lists as training labels, so that the properties of unannotated reviews can be predicted. Having such a system would facilitate structured access and summarization of this data. However, novicegenerated keyphrase annotations are incomplete descriptions of their corresponding review texts. Furthermore, they lack consistency: the same underlying property may be expressed in many ways, e.g., “healthy” and “great nutritional value” (see Figure 1). To take advantage of such noisy labels, a system must both uncover their hidden clustering into properties, and learn to predict these properties from review text. This paper presents a model that addresses both problems simultaneously. 
We assume that both the document text and the selection of keyphrases are governed by the underlying hidden properties of the document. Each property indexes a language model, thus allowing documents that incorporate the same 263 property to share similar features. In addition, each keyphrase is associated with a property; keyphrases that are associated with the same property should have similar distributional and surface features. We link these two ideas in a joint hierarchical Bayesian model. Keyphrases are clustered based on their distributional and lexical properties, and a hidden topic model is applied to the document text. Crucially, the keyphrase clusters and document topics are linked, and inference is performed jointly. This increases the robustness of the keyphrase clustering, and ensures that the inferred hidden topics are indicative of salient semantic properties. Our model is broadly applicable to many scenarios where documents are annotated in a noisy manner. In this work, we apply our method to a collection of reviews in two categories: restaurants and cell phones. The training data consists of review text and the associated pros/cons lists. We then evaluate the ability of our model to predict review properties when the pros/cons list is hidden. Across a variety of evaluation scenarios, our algorithm consistently outperforms alternative strategies by a wide margin. 2 Related Work Review Analysis Our approach relates to previous work on property extraction from reviews (Popescu et al., 2005; Hu and Liu, 2004; Kim and Hovy, 2006). These methods extract lists of phrases, which are analogous to the keyphrases we use as input to our algorithm. However, our approach is distinguished in two ways: first, we are able to predict keyphrases beyond those that appear verbatim in the text. Second, our approach learns the relationships between keyphrases, allowing us to draw direct comparisons between reviews. Bayesian Topic Modeling One aspect of our model views properties as distributions over words in the document. This approach is inspired by methods in the topic modeling literature, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003), where topics are treated as hidden variables that govern the distribution of words in a text. Our algorithm extends this notion by biasing the induced hidden topics toward a clustering of known keyphrases. Tying these two information sources together enhances the robustness of the hidden topics, thereby increasing the chance that the induced structure corresponds to semantically meaningful properties. Recent work has examined coupling topic models with explicit supervision (Blei and McAuliffe, 2007; Titov and McDonald, 2008). However, such approaches assume that the documents are labeled within a predefined annotation structure, e.g., the properties of food, ambiance, and service for restaurants. In contrast, we address free-text annotations created by end users, without known semantic properties. Rather than requiring a predefined annotation structure, our model infers one from the data. 3 Problem Formulation We formulate our problem as follows. We assume a dataset composed of documents with associated keyphrases. Each document may be marked with multiple keyphrases that express unseen semantic properties. Across the entire collection, several keyphrases may express the same property. The keyphrases are also incomplete — review texts often express properties that are not mentioned in their keyphrases. 
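Concretely, the input assumed by this formulation is nothing more than a collection of records pairing raw review text with an unordered keyphrase list. A minimal sketch of such a record; the field and class names are illustrative, not taken from the paper's implementation:

from dataclasses import dataclass, field
from typing import List

@dataclass
class AnnotatedReview:
    """One document: free text plus its author's pros/cons keyphrases.
    At test time the keyphrase list is unavailable and must be predicted."""
    text: str
    keyphrases: List[str] = field(default_factory=list)

corpus = [
    AnnotatedReview(
        text="Great low calorie dishes, and they publish the calories per serving.",
        keyphrases=["great nutritional value", "healthy"],
    ),
    AnnotatedReview(text="An unannotated test review about friendly service."),
]
print(len(corpus[0].keyphrases), "keyphrases on the training review")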
At training time, our model has access to both text and keyphrases; at test time, the goal is to predict the properties supported by a previously unseen document. We can then use this property list to generate an appropriate set of keyphrases. 4 Model Description Our approach leverages both keyphrase clustering and distributional analysis of the text in a joint, hierarchical Bayesian model. Keyphrases are drawn from a set of clusters; words in the documents are drawn from language models indexed by a set of topics, where the topics correspond to the keyphrase clusters. Crucially, we bias the assignment of hidden topics in the text to be similar to the topics represented by the keyphrases of the document, but we permit some words to be drawn from other topics not represented by the keyphrases. This flexibility in the coupling allows the model to learn effectively in the presence of incomplete keyphrase annotations, while still encouraging the keyphrase clustering to cohere with the topics supported by the text. We train the model on documents annotated with keyphrases. During training, we learn a hidden topic model from the text; each topic is also asso264 ψ – keyphrase cluster model x – keyphrase cluster assignment s – keyphrase similarity values h – document keyphrases η – document keyphrase topics λ – probability of selecting η instead of φ c – selects between η and φ for word topics φ – document topic model z – word topic assignment θ – language models of each topic w – document words ψ ∼Dirichlet(ψ0) xℓ∼Multinomial(ψ) sℓ,ℓ′ ∼ ( Beta(α=) if xℓ= xℓ′ Beta(α̸=) otherwise ηd = [ηd,1 . . . ηd,K]T where ηd,k ∝ ( 1 if xℓ= k for any l ∈hd 0 otherwise λ ∼Beta(λ0) cd,n ∼Bernoulli(λ) φd ∼Dirichlet(φ0) zd,n ∼ ( Multinomial(ηd) if cd,n = 1 Multinomial(φd) otherwise θk ∼Dirichlet(θ0) wd,n ∼Multinomial(θzd,n) Figure 2: The plate diagram for our model. Shaded circles denote observed variables, and squares denote hyper parameters. The dotted arrows indicate that η is constructed deterministically from x and h. ciated with a cluster of keyphrases. At test time, we are presented with documents that do not contain keyphrase annotations. The hidden topic model of the review text is used to determine the properties that a document as a whole supports. For each property, we compute the proportion of the document’s words assigned to it. Properties with proportions above a set threshold (tuned on a development set) are predicted as being supported. 4.1 Keyphrase Clustering One of our goals is to cluster the keyphrases, such that each cluster corresponds to a well-defined property. We represent each distinct keyphrase as a vector of similarity scores computed over the set of observed keyphrases; these scores are represented by s in Figure 2, the plate diagram of our model.1 Modeling the similarity matrix rather than the sur1We assume that similarity scores are conditionally independent given the keyphrase clustering, though the scores are in fact related. Such simplifying assumptions have been previously used with success in NLP (e.g., Toutanova and Johnson, 2007), though a more theoretically sound treatment of the similarity matrix is an area for future research. face forms allows arbitrary comparisons between keyphrases, e.g., permitting the use of both lexical and distributional information. The lexical comparison is based on the cosine similarity between the keyphrase words. The distributional similarity is quantified in terms of the co-occurrence of keyphrases across review texts. 
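The two similarity sources described here — lexical overlap between the keyphrase word sets and distributional co-occurrence across reviews — can be sketched as one possible instantiation of the similarity scores s. The particular combination below (an unweighted average) is an assumption for illustration, not the paper's exact recipe:

import math

def lexical_similarity(kp1, kp2):
    """Cosine similarity between the bag-of-words vectors of two keyphrases."""
    w1, w2 = kp1.lower().split(), kp2.lower().split()
    vocab = set(w1) | set(w2)
    v1 = [w1.count(w) for w in vocab]
    v2 = [w2.count(w) for w in vocab]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return dot / norm if norm else 0.0

def distributional_similarity(kp1, kp2, reviews):
    """Co-occurrence rate of two keyphrases over the reviews that mention either one.
    `reviews` is a list of keyphrase sets, one per annotated review."""
    either = [r for r in reviews if kp1 in r or kp2 in r]
    both = [r for r in either if kp1 in r and kp2 in r]
    return len(both) / len(either) if either else 0.0

reviews = [{"healthy", "great nutritional value"}, {"healthy"}, {"a bit pricey"}]
kp1, kp2 = "great nutritional value", "healthy"
s = 0.5 * lexical_similarity(kp1, kp2) + 0.5 * distributional_similarity(kp1, kp2, reviews)
print(f"similarity({kp1!r}, {kp2!r}) = {s:.2f}")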
Our model is inherently capable of using any arbitrary source of similarity information; for a discussion of similarity metrics, see Lin (1998). 4.2 Document-level Distributional Analysis Our analysis of the document text is based on probabilistic topic models such as LDA (Blei et al., 2003). In the LDA framework, each word is generated from a language model that is indexed by the word’s topic assignment. Thus, rather than identifying a single topic for a document, LDA identifies a distribution over topics. Our word model operates similarly, identifying a topic for each word, written as z in Figure 2. To tie these topics to the keyphrases, we deterministically construct a document-specific topic distribu265 tion from the clusters represented by the document’s keyphrases — this is η in the figure. η assigns equal probability to all topics that are represented in the keyphrases, and a small smoothing probability to other topics. As noted above, properties may be expressed in the text even when no related keyphrase appears. For this reason, we also construct a document-specific topic distribution φ. The auxiliary variable c indicates whether a given word’s topic is drawn from the set of keyphrase clusters, or from this topic distribution. 4.3 Generative Process In this section, we describe the underlying generative process more formally. First we consider the set of all keyphrases observed across the entire corpus, of which there are L. We draw a multinomial distribution ψ over the K keyphrase clusters from a symmetric Dirichlet prior ψ0. Then for the ℓth keyphrase, a cluster assignment xℓis drawn from the multinomial ψ. Finally, the similarity matrix s ∈[0, 1]L×L is constructed. Each entry sℓ,ℓ′ is drawn independently, depending on the cluster assignments xℓand xℓ′. Specifically, sℓ,ℓ′ is drawn from a Beta distribution with parameters α= if xℓ= xℓ′ and α̸= otherwise. The parameters α= linearly bias sℓ,ℓ′ towards one (Beta(α=) ≡ Beta(2, 1)), and the parameters α̸= linearly bias sℓ,ℓ′ towards zero (Beta(α̸=) ≡Beta(1, 2)). Next, the words in each of the D documents are generated. Document d has Nd words; zd,n is the topic for word wd,n. These latent topics are drawn either from the set of clusters represented by the document’s keyphrases, or from the document’s topic model φd. We deterministically construct a document-specific keyphrase topic model ηd, based on the keyphrase cluster assignments x and the observed keyphrases hd. The multinomial ηd assigns equal probability to each topic that is represented by a phrase in hd, and a small probability to other topics. As noted earlier, a document’s text may support properties that are not mentioned in its observed keyphrases. For that reason, we draw a document topic multinomial φd from a symmetric Dirichlet prior φ0. The binary auxiliary variable cd,n determines whether the word’s topic is drawn from the keyphrase model ηd or the document topic model φd. cd,n is drawn from a weighted coin flip, with probability λ; λ is drawn from a Beta distribution with prior λ0. We have zd,n ∼ηd if cd,n = 1, and zd,n ∼φd otherwise. Finally, the word wd,n is drawn from the multinomial θzd,n, where zd,n indexes a topic-specific language model. Each of the K language models θk is drawn from a symmetric Dirichlet prior θ0. 5 Posterior Sampling Ultimately, we need to compute the model’s posterior distribution given the training data. 
Doing so analytically is intractable due to the complexity of the model, but sampling-based techniques can be used to estimate the posterior. We employ Gibbs sampling, previously used in NLP by Finkel et al. (2005) and Goldwater et al. (2006), among others. This technique repeatedly samples from the conditional distributions of each hidden variable, eventually converging on a Markov chain whose stationary distribution is the posterior distribution of the hidden variables in the model (Gelman et al., 2004). We now present sampling equations for each of the hidden variables in Figure 2.
The prior over keyphrase clusters ψ is sampled based on hyperprior ψ0 and keyphrase cluster assignments x. We write p(ψ | ...) to mean the probability conditioned on all the other variables.
p(ψ | ...) ∝ p(ψ | ψ0) p(x | ψ)
= p(ψ | ψ0) ∏_{ℓ=1}^{L} p(xℓ | ψ)
= Dir(ψ; ψ0) ∏_{ℓ=1}^{L} Mul(xℓ; ψ)
= Dir(ψ; ψ′),
where ψ′_i = ψ0 + count(xℓ = i). This update rule is due to the conjugacy of the multinomial to the Dirichlet distribution. The first line follows from Bayes' rule, and the second line from the conditional independence of each keyphrase assignment xℓ from the others, given ψ.
φd and θk are resampled in a similar manner:
p(φd | ...) ∝ Dir(φd; φ′_d),
p(θk | ...) ∝ Dir(θk; θ′_k),
where φ′_{d,i} = φ0 + count(zd,n = i ∧ cd,n = 0) and θ′_{k,i} = θ0 + Σ_d count(wd,n = i ∧ zd,n = k). In building the counts for φ′_{d,i}, we consider only cases in which cd,n = 0, indicating that the topic zd,n is indeed drawn from the document topic model φd. Similarly, when building the counts for θ′_k, we consider only cases in which the word wd,n is drawn from topic k.
To resample λ, we employ the conjugacy of the Beta prior to the Bernoulli observation likelihoods, adding counts of c to the prior λ0:
p(λ | ...) ∝ Beta(λ; λ′), where λ′ = λ0 + [ Σ_d count(cd,n = 1), Σ_d count(cd,n = 0) ].
The keyphrase cluster assignments are represented by x, whose sampling distribution depends on ψ, s, and z, via η. The equation is shown in Figure 3. The first term is the prior on xℓ. The second term encodes the dependence of the similarity matrix s on the cluster assignments; with slight abuse of notation, we write α_{xℓ,xℓ′} to denote α= if xℓ = xℓ′, and α̸= otherwise. The third term is the dependence of the word topics zd,n on the topic distribution ηd. We compute the final result of Figure 3 for each possible setting of xℓ, and then sample from the normalized multinomial.
p(xℓ | ...) ∝ p(xℓ | ψ) p(s | xℓ, x_{−ℓ}, α) p(z | η, ψ, c)
∝ p(xℓ | ψ) [ ∏_{ℓ′≠ℓ} p(sℓ,ℓ′ | xℓ, xℓ′, α) ] [ ∏_{d=1}^{D} ∏_{n: cd,n=1} p(zd,n | ηd) ]
= Mul(xℓ; ψ) [ ∏_{ℓ′≠ℓ} Beta(sℓ,ℓ′; α_{xℓ,xℓ′}) ] [ ∏_{d=1}^{D} ∏_{n: cd,n=1} Mul(zd,n | ηd) ]
Figure 3: The resampling equation for the keyphrase cluster assignments.
The word topics z are sampled according to keyphrase topic distribution ηd, document topic distribution φd, words w, and auxiliary variables c:
p(zd,n | ...) ∝ p(zd,n | φd, ηd, cd,n) p(wd,n | zd,n, θ)
= Mul(zd,n; ηd) Mul(wd,n; θ_{zd,n}) if cd,n = 1,
= Mul(zd,n; φd) Mul(wd,n; θ_{zd,n}) otherwise.
As with xℓ, each zd,n is sampled by computing the conditional likelihood of each possible setting within a constant of proportionality, and then sampling from the normalized multinomial. Finally, we sample each auxiliary variable cd,n, which indicates whether the hidden topic zd,n is drawn from ηd or φd. The conditional probability for cd,n depends on its prior λ and the hidden topic assignments zd,n:
p(cd,n | . . .)
∝ p(cd,n | λ)p(zd,n | ηd, φd, cd,n) = ( Bern(cd,n; λ)Mul(zd,n; ηd) if cd,n = 1, Bern(cd,n; λ)Mul(zd,n; φd) otherwise. We compute the likelihood of cd,n = 0 and cd,n = 1 within a constant of proportionality, and then sample from the normalized Bernoulli distribution. 6 Experimental Setup Data Sets We evaluate our system on reviews from two categories, restaurants and cell phones. These reviews were downloaded from the popular Epinions2 website. Users of this website evaluate products by providing both a textual description of their opinion, as well as concise lists of keyphrases (pros and cons) summarizing the review. The statistics of this dataset are provided in Table 1. For each of the categories, we randomly selected 50%, 15%, and 35% of the documents as training, development, and test sets, respectively. Manual analysis of this data reveals that authors often omit properties mentioned in the text from the list of keyphrases. To obtain a complete gold 2http://www.epinions.com/ 267 Restaurants Cell Phones # of reviews 3883 1112 Avg. review length 916.9 1056.9 Avg. keyphrases / review 3.42 4.91 Table 1: Statistics of the reviews dataset by category. standard, we hand-annotated a subset of the reviews from the restaurant category. The annotation effort focused on eight commonly mentioned properties, such as those underlying the keyphrases “pleasant atmosphere” and “attentive staff.” Two raters annotated 160 reviews, 30 of which were annotated by both. Cohen’s kappa, a measure of interrater agreement ranging from zero to one, was 0.78 for this subset, indicating high agreement (Cohen, 1960). Each review was annotated with 2.56 properties on average. Each manually-annotated property corresponded to an average of 19.1 keyphrases in the restaurant data, and 6.7 keyphrases in the cell phone data. This supports our intuition that a single semantic property may be expressed using a variety of different keyphrases. Training Our model needs to be provided with the number of clusters K. We set K large enough for the model to learn effectively on the development set. For the restaurant data — where the gold standard identified eight semantic properties — we set K to 20, allowing the model to account for keyphrases not included in the eight most common properties. For the cell phones category, we set K to 30. To improve the model’s convergence rate, we perform two initialization steps for the Gibbs sampler. First, sampling is done only on the keyphrase clustering component of the model, ignoring document text. Second, we fix this clustering and sample the remaining model parameters. These two steps are run for 5,000 iterations each. The full joint model is then sampled for 100,000 iterations. Inspection of the parameter estimates confirms model convergence. On a 2GHz dual-core desktop machine, a multi-threaded C++ implementation of model training takes about two hours for each dataset. Inference The final point estimate used for testing is an average (for continuous variables) or a mode (for discrete variables) over the last 1,000 Gibbs sampling iterations. Averaging is a heuristic that is applicable in our case because our sample histograms are unimodal and exhibit low skew. The model usually works equally well using singlesample estimates, but is more prone to estimation noise. As previously mentioned, we convert word topic assignments to document properties by examining the proportion of words supporting each property. 
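The test-time decision rule just described — count the fraction of a document's words assigned to each property's topic and keep the properties whose proportion clears a per-property threshold — is only a few lines of code. A sketch; the threshold values here are placeholders, since in the paper they are tuned on the development set, as noted next:

from collections import Counter

def predict_properties(word_topics, thresholds):
    """word_topics: list of topic (property) ids, one per word of the test document.
    thresholds: dict mapping topic id -> minimum proportion required to predict it."""
    counts = Counter(word_topics)
    total = len(word_topics)
    predicted = []
    for topic, threshold in thresholds.items():
        if total and counts[topic] / total >= threshold:
            predicted.append(topic)
    return predicted

# Illustrative document: topic 3 covers 40% of the words, topic 7 only 10%.
word_topics = [3] * 8 + [7] * 2 + [0] * 10
thresholds = {3: 0.2, 7: 0.15, 0: 0.9}  # placeholder values, not the tuned ones
print(predict_properties(word_topics, thresholds))  # -> [3]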
A threshold for this proportion is set for each property via the development set. Evaluation Our first evaluation examines the accuracy of our model and the baselines by comparing their output against the keyphrases provided by the review authors. More specifically, the model first predicts the properties supported by a given review. We then test whether the original authors’ keyphrases are contained in the clusters associated with these properties. As noted above, the authors’ keyphrases are often incomplete. To perform a noise-free comparison, we based our second evaluation on the manually constructed gold standard for the restaurant category. We took the most commonly observed keyphrase from each of the eight annotated properties, and tested whether they are supported by the model based on the document text. In both types of evaluation, we measure the model’s performance using precision, recall, and Fscore. These are computed in the standard manner, based on the model’s keyphrase predictions compared against the corresponding references. The sign test was used for statistical significance testing (De Groot and Schervish, 2001). Baselines To the best of our knowledge, this task not been previously addressed in the literature. We therefore consider five baselines that allow us to explore the properties of this task and our model. Random: Each keyphrase is supported by a document with probability of one half. This baseline’s results are computed (in expectation) rather than actually run. This method is expected to have a recall of 0.5, because in expectation it will select half of the correct keyphrases. Its precision is the proportion of supported keyphrases in the test set. Phrase in text: A keyphrase is supported by a document if it appears verbatim in the text. Because of this narrow requirement, precision should be high whereas recall will be low. 268 Restaurants Restaurants Cell Phones gold standard annotation free-text annotation free-text annotation Recall Prec. F-score Recall Prec. F-score Recall Prec. F-score Random 0.500 0.300 ∗0.375 0.500 0.500 ∗0.500 0.500 0.489 ∗0.494 Phrase in text 0.048 0.500 ∗0.087 0.078 0.909 ∗0.144 0.171 0.529 ∗0.259 Cluster in text 0.223 0.534 0.314 0.517 0.640 ∗0.572 0.829 0.547 0.659 Phrase classifier 0.028 0.636 ∗0.053 0.068 0.963 ∗0.126 0.029 0.600 ∗0.055 Cluster classifier 0.113 0.622 ⋄0.192 0.255 0.907 ∗0.398 0.210 0.759 0.328 Our model 0.625 0.416 0.500 0.901 0.652 0.757 0.886 0.585 0.705 Our model + gold clusters 0.582 0.398 0.472 0.795 0.627 ∗0.701 0.886 0.520 ⋄0.655 Table 2: Comparison of the property predictions made by our model and the baselines in the two categories as evaluated against the gold and free-text annotations. Results for our model using the fixed, manually-created gold clusterings are also shown. The methods against which our model has significantly better results on the sign test are indicated with a ∗for p <= 0.05, and ⋄for p <= 0.1. Cluster in text: A keyphrase is supported by a document if it or any of its paraphrases appears in the text. Paraphrasing is based on our model’s clustering of the keyphrases. The use of paraphrasing information enhances recall at the potential cost of precision, depending on the quality of the clustering. Phrase classifier: Discriminative classifiers are trained for each keyphrase. Positive examples are documents that are labeled with the keyphrase; all other documents are negative examples. A keyphrase is supported by a document if that keyphrase’s classifier returns positive. 
Cluster classifier: Discriminative classifiers are trained for each cluster of keyphrases, using our model’s clustering. Positive examples are documents that are labeled with any keyphrase from the cluster; all other documents are negative examples. All keyphrases of a cluster are supported by a document if that cluster’s classifier returns positive. Phrase classifier and cluster classifier employ maximum entropy classifiers, trained on the same features as our model, i.e., word counts. The former is high-precision/low-recall, because for any particular keyphrase, its synonymous keyphrases would be considered negative examples. The latter broadens the positive examples, which should improve recall. We used Zhang Le’s MaxEnt toolkit3 to build these classifiers. 3http://homepages.inf.ed.ac.uk/s0450736/ maxent_toolkit.html 7 Results Comparative performance Table 2 presents the results of the evaluation scenarios described above. Our model outperforms every baseline by a wide margin in all evaluations. The absolute performance of the automatic methods indicates the difficulty of the task. For instance, evaluation against gold standard annotations shows that the random baseline outperforms all of the other baselines. We observe similar disappointing results for the non-random baselines against the free-text annotations. The precision and recall characteristics of the baselines match our previously described expectations. The poor performance of the discriminative models seems surprising at first. However, these results can be explained by the degree of noise in the training data, specifically, the aforementioned sparsity of free-text annotations. As previously described, our technique allows document text topics to stochastically derive from either the keyphrases or a background distribution — this allows our model to learn effectively from incomplete annotations. In fact, when we force all text topics to derive from keyphrase clusters in our model, its performance degrades to the level of the classifiers or worse, with an F-score of 0.390 in the restaurant category and 0.171 in the cell phone category. Impact of paraphrasing As previously observed in entailment research (Dagan et al., 2006), paraphrasing information contributes greatly to improved performance on semantic inference. This is 269 Figure 4: Sample keyphrase clusters that our model infers in the cell phone category. confirmed by the dramatic difference in results between the cluster in text and phrase in text baselines. Therefore it is important to quantify the quality of automatically computed paraphrases, such as those illustrated in Figure 4. Restaurants Cell Phones Keyphrase similarity only 0.931 0.759 Joint training 0.966 0.876 Table 3: Rand Index scores of our model’s clusters, using only keyphrase similarity vs. using keyphrases and text jointly. Comparison of cluster quality is against the gold standard. One way to assess clustering quality is to compare it against a “gold standard” clustering, as constructed in Section 6. For this purpose, we use the Rand Index (Rand, 1971), a measure of cluster similarity. This measure varies from zero to one; higher scores are better. Table 3 shows the Rand Indices for our model’s clustering, as well as the clustering obtained by using only keyphrase similarity. These scores confirm that joint inference produces better clusters than using only keyphrases. 
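For reference, the Rand Index used here can be computed in a few lines; the sketch below is our own illustration (not the evaluation script used in the paper) and assumes both clusterings are given as dictionaries mapping each keyphrase to a cluster identifier.

from itertools import combinations

def rand_index(gold, predicted):
    # gold, predicted: dicts mapping each keyphrase to a cluster id
    agree = total = 0
    for a, b in combinations(sorted(gold), 2):
        same_gold = gold[a] == gold[b]
        same_pred = predicted[a] == predicted[b]
        # a pair of keyphrases counts as an agreement when the two clusterings
        # either group it together in both or separate it in both
        if same_gold == same_pred:
            agree += 1
        total += 1
    return agree / total  # between 0 and 1; higher means more similar clusterings
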
Another way of assessing cluster quality is to consider the impact of using the gold standard clustering instead of our model’s clustering. As shown in the last two lines of Table 2, using the gold clustering yields results worse than using the model clustering. This indicates that for the purposes of our task, the model clustering is of sufficient quality. 8 Conclusions and Future Work In this paper, we have shown how free-text annotations provided by novice users can be leveraged as a training set for document-level semantic inference. The resulting hierarchical Bayesian model overcomes the lack of consistency in such annotations by inducing a hidden structure of semantic properties, which correspond both to clusters of keyphrases and hidden topic models in the text. Our system successfully extracts semantic properties of unannotated restaurant and cell phone reviews, empirically validating our approach. Our present model makes strong assumptions about the independence of similarity scores. We believe this could be avoided by modeling the generation of the entire similarity matrix jointly. We have also assumed that the properties themselves are unstructured, but they are in fact related in interesting ways. For example, it would be desirable to model antonyms explicitly, e.g., no restaurant review should be simultaneously labeled as having good and bad food. The correlated topic model (Blei and Lafferty, 2006) is one way to account for relationships between hidden topics; more structured representations, such as hierarchies, may also be considered. Finally, the core idea of using free-text as a source of training labels has wide applicability, and has the potential to enable sophisticated content search and analysis. For example, online blog entries are often tagged with short keyphrases. Our technique could be used to standardize these tags, and assign keyphrases to untagged blogs. The notion of free-text annotations is also very broad — we are currently exploring the applicability of this model to Wikipedia articles, using section titles as keyphrases, to build standard article schemas. Acknowledgments The authors acknowledge the support of the NSF, Quanta Computer, the U.S. Office of Naval Research, and DARPA. Thanks to Michael Collins, Dina Katabi, Kristian Kersting, Terry Koo, Brian Milch, Tahira Naseem, Dan Roy, Benjamin Snyder, Luke Zettlemoyer, and the anonymous reviewers for helpful comments and suggestions. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the NSF. 270 References David M. Blei and John D. Lafferty. 2006. Correlated topic models. In Advances in NIPS, pages 147–154. David M. Blei and Jon McAuliffe. 2007. Supervised topic models. In Advances in NIPS. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. Lecture Notes in Computer Science, 3944:177–190. Morris H. De Groot and Mark J. Schervish. 2001. Probability and Statistics. Addison Wesley. Jenny R. Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the ACL, pages 363–370. Andrew Gelman, John B. 
Carlin, Hal S. Stern, and Donald B. Rubin. 2004. Bayesian Data Analysis. Texts in Statistical Science. Chapman & Hall/CRC, 2nd edition. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proceedings of ACL, pages 673–680. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of SIGKDD, pages 168–177. Soo-Min Kim and Eduard Hovy. 2006. Automatic identification of pro and con reasons in online reviews. In Proceedings of the COLING/ACL, pages 483–490. Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of ICML, pages 296–304. Ana-Maria Popescu, Bao Nguyen, and Oren Etzioni. 2005. OPINE: Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP, pages 339–346. William M. Rand. 1971. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, December. Bruce Sterling. 2005. Order out of chaos: What is the best way to tag, bag, and sort data? Give it to the unorganized masses. http://www.wired.com/wired/archive/13.04/view.html?pg=4. Accessed April 21, 2008. Ivan Titov and Ryan McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of the ACL. Kristina Toutanova and Mark Johnson. 2007. A Bayesian LDA-based model for semi-supervised part-of-speech tagging. In Advances in NIPS. Graham Vickery and Sacha Wunsch-Vincent. 2007. Participative Web and User-Created Content: Web 2.0, Wikis and Social Networking. OECD Publishing.
2008
31
Proceedings of ACL-08: HLT, pages 272–280, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Automatic Image Annotation Using Auxiliary Text Information Yansong Feng and Mirella Lapata School of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK [email protected], [email protected] Abstract The availability of databases of images labeled with keywords is necessary for developing and evaluating image annotation models. Dataset collection is however a costly and time consuming task. In this paper we exploit the vast resource of images available on the web. We create a database of pictures that are naturally embedded into news articles and propose to use their captions as a proxy for annotation keywords. Experimental results show that an image annotation model can be developed on this dataset alone without the overhead of manual annotation. We also demonstrate that the news article associated with the picture can be used to boost image annotation performance. 1 Introduction As the number of image collections is rapidly growing, so does the need to browse and search them. Recent years have witnessed significant progress in developing methods for image retrieval1, many of which are query-based. Given a database of images, each annotated with keywords, the query is used to retrieve relevant pictures under the assumption that the annotations can essentially capture their semantics. One stumbling block to the widespread use of query-based image retrieval systems is obtaining the keywords for the images. Since manual annotation is expensive, time-consuming and practically infeasible for large databases, there has been great in1The approaches are too numerous to list; we refer the interested reader to Datta et al. (2005) for an overview. terest in automating the image annotation process (see references). More formally, given an image I with visual features Vi = {v1,v2,...,vN} and a set of keywords W = {w1,w2,...,wM}, the task consists in finding automatically the keyword subset WI ⊂W, which can appropriately describe the image I. Indeed, several approaches have been proposed to solve this problem under a variety of learning paradigms. These range from supervised classification (Vailaya et al., 2001; Smeulders et al., 2000) to instantiations of the noisy-channel model (Duygulu et al., 2002), to clustering (Barnard et al., 2002), and methods inspired by information retrieval (Lavrenko et al., 2003; Feng et al., 2004). Obviously in order to develop accurate image annotation models, some manually labeled data is required. Previous approaches have been developed and tested almost exclusively on the Corel database. The latter contains 600 CD-ROMs, each containing about 100 images representing the same topic or concept, e.g., people, landscape, male. Each topic is associated with keywords and these are assumed to also describe the images under this topic. As an example consider the pictures in Figure 1 which are classified under the topic male and have the description keywords man, male, people, cloth, and face. Current image annotation methods work well when large amounts of labeled images are available but can run into severe difficulties when the number of images and keywords for a given topic is relatively small. Unfortunately, databases like Corel are few and far between and somewhat idealized. 
Corel contains clusters of many closely related images which in turn share keyword descriptions, thus allowing models to learn image-keyword associations 272 Figure 1: Images from the Corel database, exemplifying the concept male with keyword descriptions man, male, people, cloth, and face. reliably (Tang and Lewis, 2007). It is unlikely that models trained on this database will perform well out-of-domain on other image collections which are more noisy and do not share these characteristics. Furthermore, in order to develop robust image annotation models, it is crucial to have large and diverse datasets both for training and evaluation. In this work, we aim to relieve the data acquisition bottleneck associated with automatic image annotation by taking advantage of resources where images and their annotations co-occur naturally. News articles associated with images and their captions spring readily to mind (e.g., BBC News, Yahoo News). So, rather than laboriously annotating images with their keywords, we simply treat captions as labels. These annotations are admittedly noisy and far from ideal. Captions can be denotative (describing the objects the image depicts) but also connotative (describing sociological, political, or economic attitudes reflected in the image). Importantly, our images are not standalone, they come with news articles whose content is shared with the image. So, by processing the accompanying document, we can effectively learn about the image and reduce the effect of noise due to the approximate nature of the caption labels. To give a simple example, if two words appear both in the caption and the document, it is more likely that the annotation is genuine. In what follows, we present a new database consisting of articles, images, and their captions which we collected from an on-line news source. We then propose an image annotation model which can learn from our noisy annotations and the auxiliary documents. Specifically, we extend and modify Lavrenko’s (2003) continuous relevance model to suit our task. Our experimental results show that this model can successfully scale to our database, without making use of explicit human annotations in any way. We also show that the auxiliary document contains important information for generating more accurate image descriptions. 2 Related Work Automatic image annotation is a popular task in computer vision. The earliest approaches are closely related to image classification (Vailaya et al., 2001; Smeulders et al., 2000), where pictures are assigned a set of simple descriptions such as indoor, outdoor, landscape, people, animal. A binary classifier is trained for each concept, sometimes in a “one vs all” setting. The focus here is mostly on image processing and good feature selection (e.g., colour, texture, contours) rather than the annotation task itself. Recently, much progress has been made on the image annotation task thanks to three factors. The availability of the Corel database, the use of unsupervised methods and new insights from the related fields of natural language processing and information retrieval. The co-occurrence model (Mori et al., 1999) collects co-occurrence counts between words and image features and uses them to predict annotations for new images. Duygulu et al. (2002) improve on this model by treating image regions and keywords as a bi-text and using the EM algorithm to construct an image region-word dictionary. 
Another way of capturing co-occurrence information is to introduce latent variables linking image features with words. Standard latent semantic analysis (LSA) and its probabilistic variant (PLSA) have been applied to this task (Hofmann, 1998). Barnard et al. (2002) propose a hierarchical latent model in order to account for the fact that some words are more general than others. More sophisticated graphical models (Blei and Jordan, 2003) have also been employed including Gaussian Mixture Models (GMM) and Latent Dirichlet Allocation (LDA). Finally, relevance models originally developed for information retrieval, have been successfully applied to image annotation (Lavrenko et al., 2003; Feng et al., 2004). A key idea behind these models is to find the images most similar to the test image and then use their shared keywords for annotation. Our approach differs from previous work in two 273 important respects. Firstly, our ultimate goal is to develop an image annotation model that can cope with real-world images and noisy data sets. To this end we are faced with the challenge of building an appropriate database for testing and training purposes. Our solution is to leverage the vast resource of images available on the web but also the fact that many of these images are implicitly annotated. For example, news articles often contain images whose captions can be thought of as annotations. Secondly, we allow our image annotation model access to knowledge sources other than the image and its keywords. This is relatively straightforward in our case; an image and its accompanying document have shared content, and we can use the latter to glean information about the former. But we hope to illustrate the more general point that auxiliary linguistic information can indeed bring performance improvements on the image annotation task. 3 BBC News Database Our database consists of news images which are abundant. Many on-line news providers supply pictures with news articles, some even classify news into broad topic categories (e.g., business, world, sports, entertainment). Importantly, news images often display several objects and complex scenes and are usually associated with captions describing their contents. The captions are image specific and use a rich vocabulary. This is in marked contrast to the Corel database whose images contain one or two salient objects and a limited vocabulary (typically around 300 words). We downloaded 3,361 news articles from the BBC News website.2 Each article was accompanied with an image and its caption. We thus created a database of image-caption-document tuples. The documents cover a wide range of topics including national and international politics, advanced technology, sports, education, etc. An example of an entry in our database is illustrated in Figure 2. Here, the image caption is Marcin and Florent face intense competition from outside Europe and the accompanying article discusses EU subsidies to farmers. The images are usually 203 pixels wide and 152 pixels high. The average caption length is 5.35 tokens, and the average document length 133.85 tokens. Our 2http://news.bbc.co.uk/ Figure 2: A sample from our BBC News database. Each entry contains an image, a caption for the image, and the accompanying document with its title. captions have a vocabulary of 2,167 words and our documents 6,253. The vocabulary shared between captions and documents is 2,056 words. 
4 Extending the Continuous Relevance Annotation Model Our work is an extension of the continuous relevance annotation model put forward in Lavrenko et al. (2003). Unlike other unsupervised approaches where a set of latent variables is introduced, each defining a joint distribution on the space of keywords and image features, the relevance model captures the joint probability of images and annotated words directly, without requiring an intermediate clustering stage. This model is a good point of departure for our task for several reasons, both theoretical and empirical. Firstly, expectations are computed over every single point in the training set and 274 therefore parameters can be estimated without EM. Indeed, Lavrenko et al. achieve competitive performance with latent variable models. Secondly, the generation of feature vectors is modeled directly, so there is no need for quantization. Thirdly, as we show below the model can be easily extended to incorporate information outside the image and its keywords. In the following we first lay out the assumptions underlying our model. We next describe the continuous relevance model in more detail and present our extensions and modifications. Assumptions Since we are using a nonstandard database, namely images embedded in documents, it is important to clarify what we mean by image annotation, and how the precise nature of our data impacts the task. We thus make the following assumptions: 1. The caption describes the content of the image directly or indirectly. Unlike traditional image annotation where keywords describe salient objects, captions supply more detailed information, not only about objects, and their attributes, but also events. In Figure 2 the caption mentions Marcin and Florent the two individuals shown in the picture but also the fact that they face competition from outside Europe. 2. Since our images are implicitly rather than explicitly labeled, we do not assume that we can annotate all objects present in the image. Instead, we hope to be able to model event-related information such as “what happened”, “who did it”, “when” and “where”. Our annotation task is therefore more semantic in nature than traditionally assumed. 3. The accompanying document describes the content of the image. This is trivially true for news documents where the images conventionally depict events, objects or people mentioned in the article. To validate these assumptions, we performed the following experiment on our BBC News dataset. We randomly selected 240 image-caption pairs and manually assessed whether the caption content words (i.e., nouns, verbs, and adjectives) could describe the image. We found out that the captions express the picture’s content 90% of the time. Furthermore, approximately 88% of the nouns in subject or object position directly denote salient picture objects. We thus conclude that the captions contain useful information about the picture and can be used for annotation purposes. Model Description The continuous relevance image annotation model (Lavrenko et al., 2003) generatively learns the joint probability distribution P(V,W) of words W and image regions V. The key assumption here is that the process of generating images is conditionally independent from the process of generating words. Each annotated image in the training set is treated as a latent variable. 
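To give a concrete picture of how the figures above can be derived from the collection, the sketch below (our own, with whitespace tokenisation as a simplifying assumption) computes the caption and document vocabularies and their overlap from a list of image-caption-document tuples.

def corpus_statistics(entries):
    # entries: list of (image_path, caption, document) tuples from the database
    caption_vocab, document_vocab = set(), set()
    caption_tokens = document_tokens = 0
    for _, caption, document in entries:
        cap_words = caption.lower().split()
        doc_words = document.lower().split()
        caption_tokens += len(cap_words)
        document_tokens += len(doc_words)
        caption_vocab.update(cap_words)
        document_vocab.update(doc_words)
    return {
        "avg_caption_length": caption_tokens / len(entries),
        "avg_document_length": document_tokens / len(entries),
        "caption_vocabulary": len(caption_vocab),
        "document_vocabulary": len(document_vocab),
        "shared_vocabulary": len(caption_vocab & document_vocab),
    }
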
Then for an unknown image I, we estimate: P(VI,WI) = ∑ s∈D P(VI|s)P(WI|s)P(s), (1) where D is the number of images in the training database, VI are visual features of the image regions representing I, WI are the keywords of I, s is a latent variable (i.e., an image-annotation pair), and P(s) the prior probability of s. The latter is drawn from a uniform distribution: P(s) = 1 ND (2) where ND is number of the latent variables in the training database D. When estimating P(VI|s), the probability of image regions and words, Lavrenko et al. (2003) reasonably assume a generative Gaussian kernel distribution for the image regions: (3) P(VI|s) = NVI ∏ r=1 Pg(vr|s) = NVI ∏ r=1 1 nsv nsv ∑ i=1 exp{(vr −vi)TΣ−1(vr −vi)} p 2kπk |Σ| where NVI is the number of regions in image I, vr the feature vector for region r in image I, nsv the number of regions in the image of latent variable s, vi the feature vector for region i in s’s image, k the dimension of the image feature vectors and Σ the feature covariance matrix. According to equation (3), a Gaussian kernel is fit to every feature vector vi corresponding to region i in the image of the latent variable s. Each kernel here is determined by the feature covariance matrix Σ, and for simplicity, Σ is assumed to be a diagonal matrix: Σ = βI, where I is the identity matrix; and β is a scalar modulating the bandwidth of 275 the kernel whose value is optimized on the development set. Lavrenko et al. (2003) estimate the word probabilities P(WI|s) using a multinomial distribution. This is a reasonable assumption in the Corel dataset, where the annotations have similar lengths and the words reflect the salience of objects in the image (the multinomial model tends to favor words that appear multiple times in the annotation). However, in our dataset the annotations have varying lengths, and do not necessarily reflect object salience. We are more interested in modeling the presence or absence of words in the annotation and thus use the multipleBernoulli distribution to generate words (Feng et al., 2004). And rather than relying solely on annotations in the training database, we can also take the accompanying document into account using a weighted combination. The probability of sampling a set of words W given a latent variable s from the underlying multiple Bernoulli distribution that has generated the training set D is: P(W|s) = ∏ w∈W P(w|s) ∏ w/∈W (1−P(w|s)) (4) where P(w|s) denotes the probability of the w’th component of the multiple Bernoulli distribution. Now, in estimating P(w|s) we can include the document as: Pest(w|s) = αPest(w|sa)+(1−α)Pest(w|sd) (5) where α is a smoothing parameter tuned on the development set, sa is the annotation for the latent variable s and sd its corresponding document. Equation (5) smooths the influence of the annotation words and allows to offset the negative effect of the noise inherent in our dataset. Since our images are implicitly annotated, there is no guarantee that the annotations are all appropriate. By taking into account Pest(w|sd), it is possible to annotate an image with a word that appears in the document but is not included in the caption. We use a Bayesian framework for estimating Pest(w|sa). 
Specifically, we assume a beta prior (conjugate to the Bernoulli distribution) for each word: Pest(w|sa) = µ bw,sa +Nw µ+D (6) where µ is a smoothing parameter estimated on the development set, bw,sa is a Boolean variable denoting whether w appears in the annotation sa, and Nw is the number of latent variables that contain w in their annotations. We estimate Pest(w|sd) using maximum likelihood estimation (Ponte and Croft, 1998): Pest(w|sd) = numw,sd numsd (7) where numw,sd denotes the frequency of w in the accompanying document of latent variable s and numsd the number of all tokens in the document. Note that we purposely leave Pest unsmoothed, since it is used as a means of balancing the weight of word frequencies in annotations. So, if a word does not appear in the document, the possibility of selecting it will not be greater than α (see Equation (5)). Unfortunately, including the document in the estimation of Pest(w|s) increases the vocabulary which in turn increases computation time. Given a test image-document pair, we must evaluate P(w|VI) for every w in our vocabulary which is the union of the caption and document words. We reduce the search space, by scoring each document word with its tf ∗idf weight (Salton and McGill, 1983) and adding the n-best candidates to our caption vocabulary. This way the vocabulary is not fixed in advance for all images but changes dynamically depending on the document at hand. Re-ranking the Annotation Hypotheses It is easy to see that the output of our model is a ranked word list. Typically, the k-best words are taken to be the automatic annotations for a test image I (Duygulu et al., 2002; Lavrenko et al., 2003; Jeon and Manmatha, 2004) where k is a small number and the same for all images. So far we have taken account of the auxiliary document rather naively, by considering its vocabulary in the estimation of P(W|s). Crucially, documents are written with one or more topics in mind. The image (and its annotations) are likely to represent these topics, so ideally our model should prefer words that are strong topic indicators. A simple way to implement this idea is by re-ranking our k-best list according to a topic model estimated from the entire document collection. Specifically, we use Latent Dirichlet Allocation (LDA) as our topic model (Blei et al., 2003). LDA 276 represents documents as a mixture of topics and has been previously used to perform document classification (Blei et al., 2003) and ad-hoc information retrieval (Wei and Croft, 2006) with good results. Given a collection of documents and a set of latent variables (i.e., the number of topics), the LDA model estimates the probability of topics per document and the probability of words per topic. The topic mixture is drawn from a conjugate Dirichlet prior that remains the same for all documents. For our re-ranking task, we use the LDA model to infer the m-best topics in the accompanying document. We then select from the output of our model those words that are most likely according to these topics. To give a concrete example, let us assume that for a given image our model has produced five annotations, w1, w2, w3, w4, and w5. However, according to the LDA model neither w2 nor w5 are likely topic indicators. We therefore remove w2 and w5 and substitute them with words further down the ranked list that are topical (e.g., w6 and w7). An advantage of using LDA is that at test time we can perform inference without retraining the topic model. 
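The re-ranking step can be sketched as follows. This is our own reading of the procedure rather than the authors' code; in particular, the topicality test (the min_prob cutoff) and the fallback to the original ranking when too few topical words remain are assumptions we introduce for illustration.

def rerank_annotations(ranked_words, doc_topic_probs, topic_word_probs,
                       k=10, m=3, min_prob=1e-4):
    # ranked_words: annotation candidates ordered by P(w | V_I), best first
    # doc_topic_probs: LDA topic proportions inferred for the accompanying document
    # topic_word_probs: dict topic -> {word: probability} from the trained LDA model
    top_topics = sorted(range(len(doc_topic_probs)),
                        key=lambda t: doc_topic_probs[t], reverse=True)[:m]

    def topical(word):
        # a word is kept if it is reasonably probable under one of the
        # document's m most likely topics (cutoff is an assumption)
        return any(topic_word_probs[t].get(word, 0.0) >= min_prob
                   for t in top_topics)

    reranked = [w for w in ranked_words if topical(w)]
    if len(reranked) < k:
        # fall back to the original order for the remaining slots
        reranked += [w for w in ranked_words if w not in reranked]
    return reranked[:k]
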
5 Experimental Setup In this section we discuss our experimental design for assessing the performance of the model presented above. We give details on our training procedure and parameter estimation, describe our features, and present the baseline methods used for comparison with our approach. Data Our model was trained and tested on the database introduced in Section 3. We used 2,881 image-caption-document tuples for training, 240 tuples for development and 240 for testing. The documents and captions were part-of-speech tagged and lemmatized with Tree Tagger (Schmid, 1994).Words other than nouns, verbs, and adjectives were discarded. Words that were attested less than five times in the training set were also removed to avoid unreliable estimation. In total, our vocabulary consisted of 8,309 words. Model Parameters Images are typically segmented into regions prior to training. We impose a fixed-size rectangular grid on each image rather than attempting segmentation using a general purpose algorithm such as normalized cuts (Shi and Malik, Color average of RGB components, standard deviation average of LUV components, standard deviation average of LAB components, standard deviation Texture output of DCT transformation output of Gabor filtering (4 directions, 3 scales) Shape oriented edge (4 directions) ratio of edge to non-edge Table 2: Set of image features used in our experiments. 2000). Using a grid avoids unnecessary errors from image segmentation algorithms, reduces computation time, and simplifies parameter estimation (Feng et al., 2004). Taking the small size and low resolution of the BBC News images into account, we averagely divide each image into 6 ×5 rectangles and extract features for each region. We use 46 features based on color, texture, and shape. They are summarized in Table 2. The model presented in Section 4 has a few parameters that must be selected empirically on the development set. These include the vocabulary size, which is dependent on the n words with the highest tf ∗idf scores in each document, and the number of topics for the LDA-based re-ranker. We obtained best performance with n set to 100 (no cutoff was applied in cases where the vocabulary was less than 100). We trained an LDA model with 20 topics on our document collection using David Blei’s implementation.3 We used this model to re-rank the output of our annotation model according to the three most likely topics in each document. Baselines We compared our model against three baselines. The first baseline is based on tf ∗idf (Salton and McGill, 1983). We rank the document’s content words (i.e., nouns, verbs, and adjectives) according to their tf ∗idf weight and select the top k to be the final annotations. Our second baseline simply annotates the image with the document’s title. Again we only use content words (the average title length in the training set was 4.0 words). Our third baseline is Lavrenko et al.’s (2003) continuous relevance model. It is trained solely on image-caption 3Available from http://www.cs.princeton.edu/˜blei/ lda-c/index.html. 277 Model Top 10 Top 15 Top 20 Precision Recall F1 Precision Recall F1 Precision Recall F1 tf ∗idf 4.37 7.09 5.41 3.57 8.12 4.86 2.65 8.89 4.00 DocTitle 9.22 7.03 7.20 9.22 7.03 7.20 9.22 7.03 7.20 Lavrenko03 9.05 16.01 11.81 7.73 17.87 10.71 6.55 19.38 9.79 ExtModel 14.72 27.95 19.82 11.62 32.99 17.18 9.72 36.77 15.39 Table 1: Automatic image annotation results on the BBC News database. 
pairs, uses a vocabulary of 2,167 words and the same features as our extended model. Evaluation Our evaluation follows the experimental methodology proposed in Duygulu et al. (2002). We are given an un-annotated image I and are asked to automatically produce suitable annotations for I. Given a set of image regions VI, we use equation (1) to derive the conditional distribution P(w|VI). We consider the k-best words as the annotations for I. We present results using the top 10, 15, and 20 annotation words. We assess our model’s performance using precision/recall and F1. In our task, precision is the percentage of correctly annotated words over all annotations that the system suggested. Recall, is the percentage of correctly annotated words over the number of genuine annotations in the test data. F1 is the harmonic mean of precision and recall. These measures are averaged over the set of test words. 6 Results Our experiments were driven by three questions: (1) Is it possible to create an annotation model from noisy data that has not been explicitly hand labeled for this task? (2) What is the contribution of the auxiliary document? As mentioned earlier, considering the document increases the model’s computational complexity, which can be justified as long as we demonstrate a substantial increase in performance. (3) What is the contribution of the image? Here, we are trying to assess if the image features matter. For instance, we could simply generate annotation words by processing the document alone. Our results are summarized in Table 1. We compare the annotation performance of the model proposed in this paper (ExtModel) with Lavrenko et al.’s (2003) original continuous relevance model (Lavrenko03) and two other simpler models which do not take the image into account (tf ∗idf and DocTitle). First, note that the original relevance model performs best when the annotation output is restricted to 10 words with an F1 of 11.81% (recall is 9.05 and precision 16.01). F1 is marginally worse with 15 output words and decreases by 2% with 20. This model does not take any document-based information into account, it is trained solely on imagecaption pairs. On the Corel test set the same model obtains a precision of 19.0% and a recall of 16.0% with a vocabulary of 260 words. Although these results are not strictly comparable with ours due to the different nature of the training data (in addition, we output 10 annotation words, whereas Lavrenko et al. (2003) output 5), they give some indication of the decrease in performance incurred when using a more challenging dataset. Unlike Corel, our images have greater variety, non-overlapping content and employ a larger vocabulary (2,167 vs. 260 words). When the document is taken into account (see ExtModel in Table 1), F1 improves by 8.01% (recall is 14.72% and precision 27.95%). Increasing the size of the output annotations to 15 or 20 yields better recall, at the expense of precision. Eliminating the LDA reranker from the extended model decreases F1 by 0.62%. Incidentally, LDA can be also used to rerank the output of Lavrenko et al.’s (2003) model. LDA also increases the performance of this model by 0.41%. Finally, considering the document alone, without the image yields inferior performance. This is true for the tf ∗idf model and the model based on the document titles.4 Interestingly, the latter yields precision similar to Lavrenko et al. (2003). This is probably due to the fact that the document’s title is in a sense similar to a caption. 
It often contains words that describe the document’s gist and expectedly 4Reranking the output of these models with LDA slightly decreases performance (approximately by 0.2%). 278 tf ∗idf breastfeed, medical, intelligent, health, child culturalism, faith, Muslim, separateness, ethnic ceasefire, Lebanese, disarm, cabinet, Haaretz DocTitle Breast milk does not boost IQ UK must tackle ethnic tensions Mid-East hope as ceasefire begins Lavrenko03 woman, baby, hospital, new, day, lead, good, England, look, family bomb, city, want, day, fight, child, attack, face, help, government war, carry, city, security, Israeli, attack, minister, force, government, leader ExtModel breastfeed, intelligent, baby, mother, tend, child, study, woman, sibling, advantage aim, Kelly, faith, culturalism, community, Ms, tension, commission, multi, tackle, school Lebanon, Israeli, Lebanese, aeroplane, troop, Hezbollah, Israel, force, ceasefire, grey Caption Breastfed babies tend to be brighter Segregation problems were blamed for 2001’s Bradford riots Thousands of Israeli troops are in Lebanon as the ceasefire begins Figure 3: Examples of annotations generated by our model (ExtModel), the continuous relevance model (Lavrenko03), and the two baselines based on tf ∗idf and the document title (DocTitle). Words in bold face indicate exact matches, underlined words are semantically compatible. The original captions are in the last row. some of these words will be also appropriate for the image. In fact, in our dataset, the title words are a subset of those found in the captions. Examples of the annotations generated by our model are shown in Figure 3. We also include the annotations produced by Lavrenko et. al’s (2003) model and the two baselines. As we can see our model annotates the image with words that are not always included in the caption. Some of these are synonyms of the caption words (e.g., child and intelligent in left image of Figure 3), whereas others express additional information (e.g., mother, woman). Also note that complex scene images remain challenging (see the center image in Figure 3). Such images are better analyzed at a higher resolution and probably require more training examples. 7 Conclusions and Future Work In this paper, we describe a new approach for the collection of image annotation datasets. Specifically, we leverage the vast resource of images available on the Internet while exploiting the fact that many of them are labeled with captions. Our experiments show that it is possible to learn an image annotation model from caption-picture pairs even if these are not explicitly annotated in any way. We also show that the annotation model benefits substantially from additional information, beyond the caption or image. In our case this information is provided by the news documents associated with the pictures. But more generally our results indicate that further linguistic knowledge is needed to improve performance on the image annotation task. For instance, resources like WordNet (Fellbaum, 1998) can be used to expand the annotations by exploiting information about is-a relationships. The uses of the database discussed in this article are many and varied. An interesting future direction concerns the application of the proposed model in a semi-supervised setting where the annotation output is iteratively refined with some manual intervention. 
Another possibility would be to use the document to increase the annotation keywords by identifying synonyms or even sentences that are similar to the image caption. Also note that our analysis of the accompanying document was rather shallow, limited to part of speech tagging. It is reasonable to assume that results would improve with more sophisticated preprocessing (i.e., named entity recognition, parsing, word sense disambiguation). Finally, we also believe that the model proposed here can be usefully employed in an information retrieval setting, where the goal is to find the image most relevant for a given query or document. 279 References K. Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. Blei, and M. Jordan. 2002. Matching words and pictures. Journal of Machine Learning Research, 3:1107–1135. D. Blei and M. Jordan. 2003. Modeling annotated data. In Proceedings of the 26th Annual International ACM SIGIR Conference, pages 127–134, Toronto, ON. D. Blei, A. Ng, and M. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. R. Datta, J. Li, and J. Z. Wang. 2005. Content-based image retrieval – approaches and trends of the new age. In Proceedings of the International Workshop on Multimedia Information Retrieval, pages 253–262, Singapore. P. Duygulu, K. Barnard, J. de Freitas, and D. Forsyth. 2002. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In Proceedings of the 7th European Conference on Computer Vision, pages 97–112, Copenhagen, Danemark. C. Fellbaum, editor. 1998. WordNet: An Electronic Database. MIT Press, Cambridge, MA. S. Feng, V. Lavrenko, and R. Manmatha. 2004. Multiple Bernoulli relevance models for image and video annotation. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, pages 1002–1009, Washington, DC. T. Hofmann. 1998. Learning and representing topic. A hierarchical mixture model for word occurrences in document databases. In Proceedings of the Conference for Automated Learning and Discovery, pages 408–415, Pittsburgh, PA. J. Jeon and R. Manmatha. 2004. Using maximum entropy for automatic image annotation. In Proceedings of the 3rd International Conference on Image and Video Retrieval, pages 24–32, Dublin City, Ireland. V. Lavrenko, R. Manmatha, and J. Jeon. 2003. A model for learning the semantics of pictures. In Proceedings of the 16th Conference on Advances in Neural Information Processing Systems, Vancouver, BC. Y. Mori, H. Takahashi, and R. Oka. 1999. Image-to-word transformation based on dividing and vector quantizing images with words. In Proceedings of the 1st International Workshop on Multimedia Intelligent Storage and Retrieval Management, Orlando, FL. J. M. Ponte and W. Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the 21st Annual International ACM SIGIR Conference, pages 275–281, New York, NY. G. Salton and M.J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill, New York. H. Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK. J. Shi and J. Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905. A. W. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain. 2000. Content-based image retrieval at the end of the early years. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(12):1349–1380. J. Tang and P. H. Lewis. 2007. A study of quality issues for image auto-annotation with the Corel data-set. IEEE Transactions on Circuits and Systems for Video Technology, 17(3):384–389. A. Vailaya, M. Figueiredo, A. Jain, and H. Zhang. 2001. Image classification for content-based indexing. IEEE Transactions on Image Processing, 10:117–130. X. Wei and B. W. Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the 29th Annual International ACM SIGIR Conference, pages 178–185, Seattle, WA.
2008
32
Proceedings of ACL-08: HLT, pages 281–289, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Hedge classification in biomedical texts with a weakly supervised selection of keywords Gy¨orgy Szarvas Research Group on Artificial Intelligence Hungarian Academy of Sciences / University of Szeged HU-6720 Szeged, Hungary [email protected] Abstract Since facts or statements in a hedge or negated context typically appear as false positives, the proper handling of these language phenomena is of great importance in biomedical text mining. In this paper we demonstrate the importance of hedge classification experimentally in two real life scenarios, namely the ICD9-CM coding of radiology reports and gene name Entity Extraction from scientific texts. We analysed the major differences of speculative language in these tasks and developed a maxent-based solution for both the free text and scientific text processing tasks. Based on our results, we draw conclusions on the possible ways of tackling speculative language in biomedical texts. 1 Introduction The highly accurate identification of several regularly occurring language phenomena like the speculative use of language, negation and past tense (temporal resolution) is a prerequisite for the efficient processing of biomedical texts. In various natural language processing tasks, relevant statements appearing in a speculative context are treated as false positives. Hedge detection seeks to perform a kind of semantic filtering of texts, that is it tries to separate factual statements from speculative/uncertain ones. 1.1 Hedging in biomedical NLP To demonstrate the detrimental effects of speculative language on biomedical NLP tasks, we will consider two inherently different sample tasks, namely the ICD-9-CM coding of radiology records and gene information extraction from biomedical scientific texts. The general features of texts used in these tasks differ significantly from each other, but both tasks require the exclusion of uncertain (or speculative) items from processing. 1.1.1 Gene Name and interaction extraction from scientific texts The test set of the hedge classification dataset 1 (Medlock and Briscoe, 2007) has also been annotated for gene names2. Examples of speculative assertions: Thus, the D-mib wing phenotype may result from defective N inductive signaling at the D-V boundary. A similar role of Croquemort has not yet been tested, but seems likely since the crq mutant used in this study (crqKG01679) is lethal in pupae. After an automatic parallelisation of the 2 annotations (sentence matching) we found that a significant part of the gene names mentioned (638 occurences out of a total of 1968) appears in a speculative sentence. This means that approximately 1 in every 3 genes should be excluded from the interaction detection process. These results suggest that a major portion of system false positives could be due to hedging if hedge detection had been neglected by a gene interaction extraction system. 1.1.2 ICD-9-CM coding of radiology records Automating the assignment of ICD-9-CM codes for radiology records was the subject of a shared task 1http://www.cl.cam.ac.uk/∼bwm23/ 2http://www.cl.cam.ac.uk/∼nk304/ 281 challenge organised in Spring 2007. The detailed description of the task, and the challenge itself can be found in (Pestian et al., 2007) and online3. ICD9-CM codes that are assigned to each report after the patient’s clinical treatment are used for the reimbursement process by insurance companies. 
There are official guidelines for coding radiology reports (Moisio, 2006). These guidelines strictly state that an uncertain diagnosis should never be coded, hence identifying reports with a diagnosis in a speculative context is an inevitable step in the development of automated ICD-9-CM coding systems. The following examples illustrate a typical non-speculative context where a given code should be added, and a speculative context where the same code should never be assigned to the report: non-speculative: Subsegmental atelectasis in the left lower lobe, otherwise normal exam. speculative: Findings suggesting viral or reactive airway disease with right lower lobe atelectasis or pneumonia. In an ICD-9 coding system developed for the challenge, the inclusion of a hedge classifier module (a simple keyword-based lookup method with 38 keywords) improved the overall system performance from 79.7% to 89.3%. 1.2 Related work Although a fair amount of literature on hedging in scientific texts has been produced since the 1990s (e.g. (Hyland, 1994)), speculative language from a Natural Language Processing perspective has only been studied in the past few years. This phenomenon, together with others used to express forms of authorial opinion, is often classified under the notion of subjectivity (Wiebe et al., 2004), (Shanahan et al., 2005). Previous studies (Light et al., 2004) showed that the detection of hedging can be solved effectively by looking for specific keywords which imply that the content of a sentence is speculative and constructing simple expert rules that describe the circumstances of where and how a keyword should appear. Another possibility is to treat the problem as a classification task and train a statistical model to discriminate speculative and nonspeculative assertions. This approach requires the availability of labeled instances to train the models 3http://www.computationalmedicine.org/challenge/index.php on. Riloff et al. (Riloff et al., 2003) applied bootstrapping to recognise subjective noun keywords and classify sentences as subjective or objective in newswire texts. Medlock and Briscoe (Medlock and Briscoe, 2007) proposed a weakly supervised setting for hedge classification in scientific texts where the aim is to minimise human supervision needed to obtain an adequate amount of training data. Here we follow (Medlock and Briscoe, 2007) and treat the identification of speculative language as the classification of sentences for either speculative or non-speculative assertions, and extend their methodology in several ways. Thus given labeled sets Sspec and Snspec the task is to train a model that, for each sentence s, is capable of deciding whether a previously unseen s is speculative or not. The contributions of this paper are the following: • The construction of a complex feature selection procedure which successfully reduces the number of keyword candidates without excluding helpful keywords. • We demonstrate that with a very limited amount of expert supervision in finalising the feature representation, it is possible to build accurate hedge classifiers from (semi-) automatically collected training data. • The extension of the feature representation used by previous works with bigrams and trigrams and an evaluation of the benefit of using longer keywords in hedge classification. 
• We annotated a small test corpora of biomedical scientific papers from a different source to demonstrate that hedge keywords are highly task-specific and thus constructing models that generalise well from one task to another is not feasible without a noticeable loss in accuracy. 2 Methods 2.1 Feature space representation Hedge classification can essentially be handled by acquiring task specific keywords that trigger speculative assertions more or less independently of each other. As regards the nature of this task, a vector space model (VSM) is a straightforward and suitable representation for statistical learning. As VSM 282 is inadequate for capturing the (possibly relevant) relations between subsequent tokens, we decided to extend the representation with bi- and trigrams of words. We chose not to add any weighting of features (by frequency or importance) and for the Maximum Entropy Model classifier we included binary data about whether single features occurred in the given context or not. 2.2 Probabilistic training data acquisition To build our classifier models, we used the dataset gathered and made available by (Medlock and Briscoe, 2007). They commenced with the seed set Sspec gathered automatically (all sentences containing suggest or likely – two very good speculative keywords), and Snspec that consisted of randomly selected sentences from which the most probable speculative instances were filtered out by a pattern matching and manual supervision procedure. With these seed sets they then performed the following iterative method to enlarge the initial training sets, adding examples to both classes from an unlabelled pool of sentences called U: 1. Generate seed training data: Sspec and Snspec 2. Initialise: Tspec ←Sspec and Tnspec ←Snspec 3. Iterate: • Train classifier using Tspec and Tnspec • Order U by P(spec) values assigned by the classifier • Tspec ←most probable batch • Tnspec ←least probable batch What makes this iterative method efficient is that, as we said earlier, hedging is expressed via keywords in natural language texts; and often several keywords are present in a single sentence. The seed set Sspec contained either suggest or likely, and due to the fact that other keywords cooccur with these two in many sentences, they appeared in Sspec with reasonable frequency. For example, P(spec|may) = 0.9985 on the seed sets created by (Medlock and Briscoe, 2007). The iterative extension of the training sets for each class further boosted this effect, and skewed the distribution of speculative indicators as sentences containing them were likely to be added to the extended training set for the speculative class, and unlikely to fall into the non-speculative set. We should add here that the very same feature has an inevitable, but very important side effect that is detrimental to the classification accuracy of models trained on a dataset which has been obtained this way. This side effect is that other words (often common words or stopwords) that tend to cooccur with hedge cues will also be subject to the same iterative distortion of their distribution in speculative and non-speculative uses. Perhaps the best example of this is the word it. Being a stopword in our case, and having no relevance at all to speculative assertions, it has a class conditional probability of P(spec|it) = 74.67% on the seed sets. This is due to the use of phrases like it suggests that, it is likely, and so on. 
After the iterative extension of training sets, the class-conditional probability of it dramatically increased, to P(spec|it) = 94.32%. This is a consequence of the frequent co-occurence of it with meaningful hedge cues and the probabilistic model used and happens with many other irrelevant terms (not just stopwords). The automatic elimination of these irrelevant candidates is one of our main goals (to limit the number of candidates for manual consideration and thus to reduce the human effort required to select meaningful hedge cues). This shows that, in addition to the desired effect of introducing further speculative keywords and biasing their distribution towards the speculative class, this iterative process also introduces significant noise into the dataset. This observation led us to the conclusion that in order to build efficient classifiers based on this kind of dataset, we should filter out noise. In the next part we will present our feature selection procedure (evaluated in the Results section) which is capable of underranking irrelevant keywords in the majority of cases. 2.3 Feature (or keyword) selection To handle the inherent noise in the training dataset that originates from its weakly supervised construction, we applied the following feature selection procedure. The main idea behind it is that it is unlikely that more than two keywords are present in the text, which are useful for deciding whether an instance is speculative. Here we performed the following steps: 283 1. We ranked the features x by frequency and their class conditional probability P(spec|x). We then selected those features that had P(spec|x) > 0.94 (this threshold was chosen arbitrarily) and appeared in the training dataset with reasonable frequency (frequency above 10−5). This set constituted the 2407 candidates which we used in the second analysis phase. 2. For trigrams, bigrams and unigrams – processed separately – we calculated a new classconditional probability for each feature x, discarding those observations of x in speculative instances where x was not among the two highest ranked candidate. Negative credit was given for all occurrences in non-speculative contexts. We discarded any feature that became unreliable (i.e. any whose frequency dropped below the threshold or the strict class-conditional probability dropped below 0.94). We did this separately for the uni-, bi- and trigrams to avoid filtering out longer phrases because more frequent, shorter candidates took the credit for all their occurrences. In this step we filtered out 85% of all the keyword candidates and kept 362 uni-, bi-, and trigrams altogether. 3. In the next step we re-evaluated all 362 candidates together and filtered out all phrases that had a shorter and thus more frequent substring of themselves among the features, with a similar class-conditional probability on the speculative class (worse by 2% at most). Here we discarded a further 30% of the candidates and kept 253 uni-, bi-, and trigrams altogether. This efficient way of reranking and selecting potentially relevant features (we managed to discard 89.5% of all the initial candidates automatically) made it easier for us to manually validate the remaining keywords. This allowed us to incorporate supervision into the learning model in the feature representation stage, but keep the weakly supervised modelling (with only 5 minutes of expert supervision required). 
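To make the three filtering steps concrete, a compact sketch is given below. The thresholds (a class-conditional probability of 0.94, the 10^-5 relative-frequency floor, the 2% tolerance for substrings) come from the description above; the function name select_keywords, the sentence-level counting, and the folding of the separate uni-, bi- and trigram passes of step 2 into a single pass are simplifications of ours, not the original code.

```python
def select_keywords(cand_scores, spec_sents, nspec_sents,
                    p_min=0.94, freq_min=1e-5, substr_tol=0.02):
    """cand_scores maps candidate n-grams to an initial ranking score."""
    total = sum(len(s.split()) for s in spec_sents + nspec_sents)

    def stats(f, strict=False):
        pos = 0
        for s in spec_sents:
            if f not in s:
                continue
            if strict:
                present = sorted((g for g in cand_scores if g in s),
                                 key=cand_scores.get, reverse=True)
                if f not in present[:2]:
                    continue   # step 2: only the two top-ranked cues present get credit
            pos += 1
        neg = sum(f in s for s in nspec_sents)   # negative credit for every occurrence
        freq = (pos + neg) / max(total, 1)
        p = pos / (pos + neg) if pos + neg else 0.0
        return p, freq

    # Step 1: keep candidates with P(spec|x) above the threshold and a
    # reasonable relative frequency.
    scored = {f: stats(f) for f in cand_scores}
    step1 = [f for f, (p, fr) in scored.items() if p > p_min and fr > freq_min]

    # Step 2: strict re-estimation with the top-two rule; discard features
    # whose probability or frequency drops below the thresholds.
    rescored = {f: stats(f, strict=True) for f in step1}
    step2 = {f: p for f, (p, fr) in rescored.items() if p > p_min and fr > freq_min}

    # Step 3: drop phrases subsumed by a shorter substring with a similar
    # class-conditional probability (worse by at most 2%).
    return {f: p for f, p in step2.items()
            if not any(g != f and g in f and step2[g] >= p - substr_tol
                       for g in step2)}
```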
2.4 Maximum Entropy Classifier Maximum Entropy Models (Berger et al., 1996) seek to maximise the conditional probability of classes, given certain observations (features). This is performed by weighting features to maximise the likelihood of data and, for each instance, decisions are made based on features present at that point, thus maxent classification is quite suitable for our purposes. As feature weights are mutually estimated, the maxent classifier is capable of taking feature dependence into account. This is useful in cases like the feature it being dependent on others when observed in a speculative context. By downweighting such features, maxent is capable of modelling to a certain extent the special characteristics which arise from the automatic or weakly supervised training data acquisition procedure. We used the OpenNLP maxent package, which is freely available4. 3 Results In this section we will present our results for hedge classification as a standalone task. In experiments we made use of the hedge classification dataset of scientific texts provided by (Medlock and Briscoe, 2007) and used a labeled dataset generated automatically based on false positive predictions of an ICD9-CM coding system. 3.1 Results for hedge classification in biomedical texts As regards the degree of human intervention needed, our classification and feature selection model falls within the category of weakly supervised machine learning. In the following sections we will evaluate our above-mentioned contributions one by one, describing their effects on feature space size (efficiency in feature and noise filtering) and classification accuracy. In order to compare our results with Medlock and Briscoe’s results (Medlock and Briscoe, 2007), we will always give the BEP(spec) that they used – the break-even-point of precision and recall5. We will also present Fβ=1(spec) values 4http://maxent.sourceforge.net/ 5It is the point on the precision-recall curve of spec class where P = R. If an exact P = R cannot be realised due to the equal ranking of many instances, we use the point closest to P = R and set BEP(spec) = (P + R)/2. BEP is an 284 which show how good the models are at recognising speculative assertions. 3.1.1 The effects of automatic feature selection The method we proposed seems especially effective in the sense that we successfully reduced the number of keyword candidates from an initial 2407 words having P(spec|x) > 0.94 to 253, which is a reduction of almost 90%. During the process, very few useful keywords were eliminated and this indicated that our feature selection procedure was capable of distinguishing useful keywords from noise (i.e. keywords having a very high speculative class-conditional probability due to the skewed characteristics of the automatically gathered training dataset). The 2407-keyword model achieved a BEP(spec) os 76.05% and Fβ=1(spec) of 73.61%, while the model after feature selection performed better, achieving a BEP(spec) score of 78.68% and Fβ=1(spec) score of 78.09%. Simplifying the model to predict a spec label each time a keyword was present (by discarding those 29 features that were too weak to predict spec alone) slightly increased both the BEP(spec) and Fβ=1(spec) values to 78.95% and 78.25%. This shows that the Maximum Entropy Model in this situation could not learn any meaningful hypothesis from the cooccurence of individually weak keywords. 
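The classifier setup itself is straightforward to reproduce in outline: binary presence indicators over the selected uni-, bi- and trigrams, fed to a maximum entropy model, with the break-even point of footnote 5 as the headline metric. The sketch below uses scikit-learn's LogisticRegression as a stand-in for the OpenNLP maxent package mentioned above, so it illustrates the setup rather than reproducing it; the function names are ours.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve

def train_hedge_maxent(spec_sents, nspec_sents, keywords):
    # Binary presence features over the selected uni-/bi-/trigram keywords.
    vec = CountVectorizer(vocabulary=keywords, ngram_range=(1, 3), binary=True)
    X = vec.fit_transform(spec_sents + nspec_sents)
    y = [1] * len(spec_sents) + [0] * len(nspec_sents)
    return vec, LogisticRegression(max_iter=1000).fit(X, y)

def bep_spec(y_true, spec_scores):
    # Footnote 5: the point on the spec precision-recall curve where P = R,
    # or the closest such point, reported as (P + R) / 2.
    p, r, _ = precision_recall_curve(y_true, spec_scores)
    i = min(range(len(p)), key=lambda k: abs(p[k] - r[k]))
    return (p[i] + r[i]) / 2.0
```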
3.1.2 Improvements by manual feature selection After a dimension reduction via a strict reranking of features, the resulting number of keyword candidates allowed us to sort the retained phrases manually and discard clearly irrelevant ones. We judged a phrase irrelevant if we could consider no situation in which the phrase could be used to express hedging. Here 63 out of the 253 keywords retained by the automatic selection were found to be potentially relevant in hedge classification. All these features were sufficient for predicting the spec class alone, thus we again found that the learnt model reduced to a single keyword-based decision.6 These 63 keyinteresting metric as it demonstrates how well we can trade-off precision for recall. 6We kept the test set blind during the selection of relevant keywords. This meant that some of them eventually proved to be irrelevant, or even lowered the classification accuracy. Examples of such keywords were will, these data and hypothesis. words yielded a classifier with a BEP(spec) score of 82.02% and Fβ=1(spec) of 80.88%. 3.1.3 Results obtained adding external dictionaries In our final model we added the keywords used in (Light et al., 2004) and those gathered for our ICD9-CM hedge detection module. Here we decided not to check whether these keywords made sense in scientific texts or not, but instead left this task to the maximum entropy classifier, and added only those keywords that were found reliable enough to predict spec label alone by the maxent model trained on the training dataset. These experiments confirmed that hedge cues are indeed task specific – several cues that were reliable in radiology reports proved to be of no use for scientific texts. We managed to increase the number of our features from 63 to 71 using these two external dictionaries. These additional keywords helped us to increase the overall coverage of the model. Our final hedge classifier yielded a BEP(spec) score of 85.29% and Fβ=1(spec) score of 85.08% (89.53% Precision, 81.05% Recall) for the speculative class. This meant an overall classification accuracy of 92.97%. Using this system as a pre-processing module for a hypothetical gene interaction extraction system, we found that our classifier successfully excluded gene names mentioned in a speculative sentence (it removed 81.66% of all speculative mentions) and this filtering was performed with a respectable precision of 93.71% (Fβ=1(spec) = 87.27%). Articles 4 Sentences 1087 Spec sentences 190 Nspec sentences 897 Table 1: Characteristics of the BMC hedge dataset. 3.1.4 Evaluation on scientific texts from a different source Following the annotation standards of Medlock and Briscoe (Medlock and Briscoe, 2007), we manually annotated 4 full articles downloaded from the We assumed that these might suggest a speculative assertion. 285 BMC Bioinformatics website to evaluate our final model on documents from an external source. The chief characteristics of this dataset (which is available at7) is shown in Table 1. Surprisingly, the model learnt on FlyBase articles seemed to generalise to these texts only to a limited extent. Our hedge classifier model yielded a BEP(spec) = 75.88% and Fβ=1(spec) = 74.93% (mainly due to a drop in precision), which is unexpectedly low compared to the previous results. Analysis of errors revealed that some keywords which proved to be very reliable hedge cues in FlyBase articles were also used in non-speculative contexts in the BMC articles. 
Over 50% (24 out of 47) of our false positive predictions were due to the different use of 2 keywords, possible and likely. These keywords were many times used in a mathematical context (referring to probabilities) and thus expressed no speculative meaning, while such uses were not represented in the FlyBase articles (otherwise bigram or trigram features could have captured these non-speculative uses). 3.1.5 The effect of using 2-3 word-long phrases as hedge cues Our experiments demonstrated that it is indeed a good idea to include longer phrases in the vector space model representation of sentences. One third of the features used by our advanced model were either bigrams or trigrams. About half of these were the kind of phrases that had no unigram components of themselves in the feature set, so these could be regarded as meaningful standalone features. Examples of such speculative markers in the fruit fly dataset were: results support, these observations, indicate that, not clear, does not appear, . . . The majority of these phrases were found to be reliable enough for our maximum entropy model to predict a speculative class based on that single feature. Our model using just unigram features achieved a BEP(spec) score of 78.68% and Fβ=1(spec) score of 80.23%, which means that using bigram and trigram hedge cues here significantly improved the performance (the difference in BEP(spec) and Fβ=1(spec) scores were 5.23% and 4.97%, respectively). 7http://www.inf.u-szeged.hu/∼szarvas/homepage/hedge.html 3.2 Results for hedge classification in radiology reports In this section we present results using the abovementioned methods for the automatic detection of speculative assertions in radiology reports. Here we generated training data by an automated procedure. Since hedge cues cause systems to predict false positive labels, our idea here was to train Maximum Entropy Models for the false positive classifications of our ICD-9-CM coding system using the vector space representation of radiology reports. That is, we classified every sentence that contained a medical term (disease or symptom name) and caused the automated ICD-9 coder8 to predict a false positive code was treated as a speculative sentence and all the rest were treated as non-speculative sentences. Here a significant part of the false positive predictions of an ICD-9-CM coding system that did not handle hedging originated from speculative assertions, which led us to expect that we would have the most hedge cues among the top ranked keywords which implied false positive labels. Taking the above points into account, we used the training set of the publicly available ICD-9-CM dataset to build our model and then evaluated each single token by this model to measure their predictivity for a false positive code. Not surprisingly, some of the best hedge cues appeared among the highest ranked features, while some did not (they did not occur frequently enough in the training data to be captured by statistical methods). For this task, we set the initial P(spec|x) threshold for filtering to 0.7 since the dataset was generated by a different process and we expected hedge cues to have lower class-conditional probabilities without the effect of the probabilistic data acquisition method that had been applied for scientific texts. Using all 167 terms as keywords that had P(spec|x) > 0.7 resulted in a hedge classifier with an Fβ=1(spec) score of 64.04% After the feature selection process 54 keywords were retained. 
This 54-keyword maxent classifier got an Fβ=1(spec) score of 79.73%. Plugging this model (without manual filtering) into the ICD-9 coding system as a hedge module, the ICD-9 coder 8Here the ICD-9 coding system did not handle the hedging task. 286 yielded an F measure of 88.64%, which is much better than one without a hedge module (79.7%). Our experiments revealed that in radiology reports, which mainly concentrate on listing the identified diseases and symptoms (facts) and the physician’s impressions (speculative parts), detecting hedge instances can be performed accurately using unigram features. All bi- and trigrams retained by our feature selection process had unigram equivalents that were eliminated due to the noise present in the automatically generated training data. We manually examined all keywords that had a P(spec) > 0.5 given as a standalone instance for our maxent model, and constructed a dictionary of hedge cues from the promising candidates. Here we judged 34 out of 54 candidates to be potentially useful for hedging. Using these 34 keywords we got an Fβ=1(spec) performance of 81.96% due to the improved precision score. Extending the dictionary with the keywords we gathered from the fruit fly dataset increased the Fβ=1(spec) score to 82.07% with only one outdomain keyword accepted by the maxent classifier. Biomedical papers Medical reports BEP (spec) Fβ=1(spec) Fβ=1(spec) Baseline 1 60.00 – 48.99 Baseline 2 76.30 – – All features 76.05 73.61 64.04 Feature selection 78.68 78.09 79.73 Manual feat. sel. 82.02 80.88 81.96 Outer dictionary 85.29 85.08 82.07 Table 2: Summary of results. 4 Conclusions The overall results of our study are summarised in a concise way in Table 2. We list BEP(spec) and Fβ=1(spec) values for the scientific text dataset, and Fβ=1(spec) for the clinical free text dataset. Baseline 1 denotes the substring matching system of Light et al. (Light et al., 2004) and Baseline 2 denotes the system of Medlock and Briscoe (Medlock and Briscoe, 2007). For clinical free texts, Baseline 1 is an out-domain model since the keywords were collected for scientific texts by (Light et al., 2004). The third row corresponds to a model using all keywords P(spec|x) above the threshold and the fourth row a model after automatic noise filtering, while the fifth row shows the performance after the manual filtering of automatically selected keywords. The last row shows the benefit gained by adding reliable keywords from an external hedge keyword dictionary. Our results presented above confirm our hypothesis that speculative language plays an important role in the biomedical domain, and it should be handled in various NLP applications. We experimentally compared the general features of this task in texts from two different domains, namely medical free texts (radiology reports), and scientific articles on the fruit fly from FlyBase. The radiology reports had mainly unambiguous single-term hedge cues. On the other hand, it proved to be useful to consider bi- and trigrams as hedge cues in scientific texts. This, and the fact that many hedge cues were found to be ambiguous (they appeared in both speculative and non-speculative assertions) can be attributed to the literary style of the articles. Next, as the learnt maximum entropy models show, the hedge classification task reduces to a lookup for single keywords or phrases and to the evaluation of the text based on the most relevant cue alone. 
Removing those features that were insufficient to classify an instance as a hedge individually did not produce any difference in the Fβ=1(spec) scores. This latter fact justified a view of ours, namely that during the construction of a statistical hedge detection module for a given application the main issue is to find the task-specific keywords. Our findings based on the two datasets employed show that automatic or weakly supervised data acquisition, combined with automatic and manual feature selection to eliminate the skewed nature of the data obtained, is a good way of building hedge classifier modules with an acceptable performance. The analysis of errors indicate that more complex features like dependency structure and clausal phrase information could only help in allocating the scope of hedge cues detected in a sentence, not the detection of any itself. Our finding that token unigram features are capable of solving the task accurately agrees with the the results of previous works on hedge classification ((Light et al., 2004), (Med287 lock and Briscoe, 2007)), and we argue that 2-3 word-long phrases also play an important role as hedge cues and as non-speculative uses of an otherwise speculative keyword as well (i.e. to resolve an ambiguity). In contrast to the findings of Wiebe et al. ((Wiebe et al., 2004)), who addressed the broader task of subjectivity learning and found that the density of other potentially subjective cues in the context benefits classification accuracy, we observed that the co-occurence of speculative cues in a sentence does not help in classifying a term as speculative or not. Realising that our learnt models never predicted speculative labels based on the presence of two or more individually weak cues and discarding such terms that were not reliable enough to predict a speculative label (using that term alone as a single feature) slightly improved performance, we came to the conclusion that even though speculative keywords tend to cooccur, and two keywords are present in many sentences; hedge cues have a speculative meaning (or not) on their own without the other term having much impact on this. The main issue thus lies in the selection of keywords, for which we proposed a procedure that is capable of reducing the number of candidates to an acceptable level for human evaluation – even in data collected automatically and thus having some undesirable properties. The worse results on biomedical scientific papers from a different source also corroborates our finding that hedge cues can be highly ambiguous. In our experiments two keywords that are practically never used in a non-speculative context in the FlyBase articles we used for training were responsible for 50% of false positives in BMC texts since they were used in a different meaning. In our case, the keywords possible and likely are apparently always used as speculative terms in the FlyBase articles used, while the articles from BMC Bioinformatics frequently used such cliche phrases as all possible combinations or less likely / more likely . . . (referring to probabilities shown in the figures). This shows that the portability of hedge classifiers is limited, and cannot really be done without the examination of the specific features of target texts or a more heterogenous corpus is required for training. The construction of hedge classifiers for each separate target application in a weakly supervised way seems feasible though. 
Collecting bi- and trigrams which cover non-speculative usages of otherwise common hedge cues is a promising solution for addressing the false positives in hedge classifiers and for improving the portability of hedge modules. 4.1 Resolving the scope of hedge keywords In this paper we focused on the recognition of hedge cues in texts. Another important issue would be to determine the scope of hedge cues in order to locate uncertain sentence parts. This can be solved effectively using a parser adapted for biomedical papers. We manually evaluated the parse trees generated by (Miyao and Tsujii, 2005) and came to the conclusion that for each keyword it is possible to define the scope of the keyword using subtrees linked to the keyword in the predicate-argument syntactic structure or by the immediate subsequent phrase (e.g. prepositional phrase). Naturally, parse errors result in (slightly) mislocated scopes but we had the general impression that state-of-the-art parsers could be used efficiently for this issue. On the other hand, this approach requires a human expert to define the scope for each keyword separately using the predicate-argument relations, or to determine keywords that act similarly and their scope can be located with the same rules. Another possibility is simply to define the scope to be each token up to the end of the sentence (and optionally to the previous punctuation mark). The latter solution has been implemented by us and works accurately for clinical free texts. This simple algorithm is similar to NegEx (Chapman et al., 2001) as we use a list of phrases and their context, but we look for punctuation marks to determine the scopes of keywords instead of applying a fixed window size. Acknowledgments This work was supported in part by the NKTH grant of Jedlik ´Anyos R&D Programme 2007 of the Hungarian government (codename TUDORKA7). The author wishes to thank the anonymous reviewers for valuable comments and Veronika Vincze for valuable comments in linguistic issues and for help with the annotation work. 288 References Adam L. Berger, Stephen Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Wendy W. Chapman, Will Bridewell, Paul Hanbury, Gregory F. Cooper, and Bruce G. Buchanan. 2001. A simple algorithm for identifying negated findings and diseases in discharge summaries. Journal of Biomedical Informatics, 5:301–310. Ken Hyland. 1994. Hedging in academic writing and eap textbooks. English for Specific Purposes, 13(3):239– 256. Marc Light, Xin Ying Qiu, and Padmini Srinivasan. 2004. The language of bioscience: Facts, speculations, and statements in between. In Lynette Hirschman and James Pustejovsky, editors, HLTNAACL 2004 Workshop: BioLINK 2004, Linking Biological Literature, Ontologies and Databases, pages 17–24, Boston, Massachusetts, USA, May 6. Association for Computational Linguistics. Ben Medlock and Ted Briscoe. 2007. Weakly supervised learning for hedge classification in scientific literature. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 992–999, Prague, Czech Republic, June. Association for Computational Linguistics. Yusuke Miyao and Jun’ichi Tsujii. 2005. Probabilistic disambiguation models for wide-coverage HPSG parsing. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 83–90, Ann Arbor, Michigan, June. Association for Computational Linguistics. Marie A. Moisio. 
2006. A Guide to Health Insurance Billing. Thomson Delmar Learning. John P. Pestian, Chris Brew, Pawel Matykiewicz, DJ Hovermale, Neil Johnson, K. Bretonnel Cohen, and Wlodzislaw Duch. 2007. A shared task involving multi-label classification of clinical free text. In Biological, translational, and clinical language processing, pages 97–104, Prague, Czech Republic, June. Association for Computational Linguistics. Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the Seventh Computational Natural Language Learning Conference, pages 25–32, Edmonton, Canada, May-June. Association for Computational Linguistics. James G. Shanahan, Yan Qu, and Janyce Wiebe. 2005. Computing Attitude and Affect in Text: Theory and Applications (The Information Retrieval Series). Springer-Verlag New York, Inc., Secaucus, NJ, USA. Janyce Wiebe, Theresa Wilson, Rebecca F. Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics, 30(3):277–308. 289
2008
33
Proceedings of ACL-08: HLT, pages 290–298, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging Alina Andreevskaia Concordia University Montreal, Quebec [email protected] Sabine Bergler Concordia University Montreal, Canada [email protected] Abstract This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on WordNet. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually. 1 Introduction One of the emerging directions in NLP is the development of machine learning methods that perform well not only on the domain on which they were trained, but also on other domains, for which training data is not available or is not sufficient to ensure adequate machine learning. Many applications require reliable processing of heterogeneous corpora, such as the World Wide Web, where the diversity of genres and domains present in the Internet limits the feasibility of in-domain training. In this paper, sentiment annotation is defined as the assignment of positive, negative or neutral sentiment values to texts, sentences, and other linguistic units. Recent experiments assessing system portability across different domains, conducted by Aue and Gamon (2005), demonstrated that sentiment annotation classifiers trained in one domain do not perform well on other domains. A number of methods has been proposed in order to overcome this system portability limitation by using out-of-domain data, unlabelled in-domain corpora or a combination of in-domain and out-of-domain examples (Aue and Gamon, 2005; Bai et al., 2005; Drezde et al., 2007; Tan et al., 2007). In this paper, we present a novel approach to the problem of system portability across different domains by developing a sentiment annotation system that integrates a corpus-based classifier with a lexicon-based system trained on WordNet. By adopting this approach, we sought to develop a system that relies on both general and domainspecific knowledge, as humans do when analyzing a text. The information contained in lexicographical sources, such as WordNet, reflects a lay person’s general knowledge about the world, while domainspecific knowledge can be acquired through classifier training on a small set of in-domain data. The first part of this paper reviews the extant literature on domain adaptation in sentiment analysis and highlights promising directions for research. The second part establishes a baseline for system evaluation by drawing comparisons of system performance across four different domains/genres movie reviews, news, blogs, and product reviews. The final, third part of the paper presents our system, composed of an ensemble of two classifiers – one trained on WordNet glosses and synsets and the other trained on a small in-domain training set. 
290 2 Domain Adaptation in Sentiment Research Most text-level sentiment classifiers use standard machine learning techniques to learn and select features from labeled corpora. Such approaches work well in situations where large labeled corpora are available for training and validation (e.g., movie reviews), but they do not perform well when training data is scarce or when it comes from a different domain (Aue and Gamon, 2005; Read, 2005), topic (Read, 2005) or time period (Read, 2005). There are two alternatives to supervised machine learning that can be used to get around this problem: on the one hand, general lists of sentiment clues/features can be acquired from domain-independent sources such as dictionaries or the Internet, on the other hand, unsupervised and weakly-supervised approaches can be used to take advantage of a small number of annotated in-domain examples and/or of unlabelled indomain data. The first approach, using general word lists automatically acquired from the Internet or from dictionaries, outperforms corpus-based classifiers when such classifiers use out-of-domain training data or when the training corpus is not sufficiently large to accumulate the necessary feature frequency information. But such general word lists were shown to perform worse than statistical models built on sufficiently large in-domain training sets of movie reviews (Pang et al., 2002). On other domains, such as product reviews, the performance of systems that use general word lists is comparable to the performance of supervised machine learning approaches (Gamon and Aue, 2005). The recognition of major performance deficiencies of supervised machine learning methods with insufficient or out-of-domain training brought about an increased interest in unsupervised and weaklysupervised approaches to feature learning. For instance, Aue and Gamon (2005) proposed training on a samll number of labeled examples and large quantities of unlabelled in-domain data. This system performed well even when compared to systems trained on a large set of in-domain examples: on feedback messages from a web survey on knowledge bases, Aue and Gamon report 73.86% accuracy using unlabelled data compared to 77.34% for in-domain and 72.39% for the best out-of-domain training on a large training set. Drezde et al. (2007) applied structural correspondence learning (Drezde et al., 2007) to the task of domain adaptation for sentiment classification of product reviews. They showed that, depending on the domain, a small number (e.g., 50) of labeled examples allows to adapt the model learned on another corpus to a new domain. However, they note that the success of such adaptation and the number of necessary in-domain examples depends on the similarity between the original domain and the new one. Similarly, Tan et al. (2007) suggested to combine out-of-domain labeled examples with unlabelled ones from the target domain in order to solve the domain-transfer problem. They applied an outof-domain-trained SVM classifier to label examples from the target domain and then retrained the classifier using these new examples. In order to maximize the utility of the examples from the target domain, these examples were selected using Similarity Ranking and Relative Similarity Ranking algorithms (Tan et al., 2007). Depending on the similarity between domains, this method brought up to 15% gain compared to the baseline SVM. 
Overall, the development of semi-supervised approaches to sentiment tagging is a promising direction of the research in this area but so far, based on reported results, the performance of such methods is inferior to the supervised approaches with indomain training and to the methods that use general word lists. It also strongly depends on the similarity between the domains as has been shown by (Drezde et al., 2007; Tan et al., 2007). 3 Factors Affecting System Performance The comparison of system performance across different domains involves a number of factors that can significantly affect system performance – from training set size to level of analysis (sentence or entire document), document domain/genre and many other factors. In this section we present a series of experiments conducted to assess the effects of different external factors (i.e., factors unrelated to the merits of the system itself) on system performance in order to establish the baseline for performance comparisons across different domains/genres. 291 3.1 Level of Analysis Research on sentiment annotation is usually conducted at the text (Aue and Gamon, 2005; Pang et al., 2002; Pang and Lee, 2004; Riloff et al., 2006; Turney, 2002; Turney and Littman, 2003) or at the sentence levels (Gamon and Aue, 2005; Hu and Liu, 2004; Kim and Hovy, 2005; Riloff et al., 2006). It should be noted that each of these levels presents different challenges for sentiment annotation. For example, it has been observed that texts often contain multiple opinions on different topics (Turney, 2002; Wiebe et al., 2001), which makes assignment of the overall sentiment to the whole document problematic. On the other hand, each individual sentence contains a limited number of sentiment clues, which often negatively affects the accuracy and recall if that single sentiment clue encountered in the sentence was not learned by the system. Since the comparison of sentiment annotation system performance on texts and on sentences has not been attempted to date, we also sought to close this gap in the literature by conducting the first set of our comparative experiments on data sets of 2,002 movie review texts and 10,662 movie review snippets (5331 with positive and 5331 with negative sentiment) provided by Bo Pang (http://www.cs.cornell.edu/People/pabo/moviereview-data/). 3.2 Domain Effects The second set of our experiments explores system performance on different domains at sentence level. For this we used four different data sets of sentences annotated with sentiment tags: • A set of movie review snippets (further: movie) from (Pang and Lee, 2005). This dataset of 10,662 snippets was collected automatically from www.rottentomatoes.com website. All sentences in reviews marked “rotten” were considered negative and snippets from “fresh” reviews were deemed positive. In order to make the results obtained on this dataset comparable to other domains, a randomly selected subset of 1066 snippets was used in the experiments. • A balanced corpus of 800 manually annotated sentences extracted from 83 newspaper texts (further, news). The full set of sentences was annotated by one judge. 200 sentences from this corpus (100 positive and 100 negative) were also randomly selected from the corpus for an inter-annotator agreement study and were manually annotated by two independent annotators. The pairwise agreement between annotators was calculated as the percent of same tags divided by the number of sentences with this tag in the gold standard. 
The pair-wise agreement between the three annotators ranged from 92.5 to 95.9% (κ=0.74 and 0.75 respectively) on positive vs. negative tags. • A set of sentences taken from personal weblogs (further, blogs) posted on LiveJournal (http://www.livejournal.com) and on http://www.cyberjournalist.com. This corpus is composed of 800 sentences (400 sentences with positive and 400 sentences with negative sentiment). In order to establish the interannotator agreement, two independent judges were asked to annotate 200 sentences from this corpus. The agreement between the two annotators on positive vs. negative tags reached 99% (κ=0.97). • A set of 1200 product review (PR) sentences extracted from the annotated corpus made available by Bing Liu (Hu and Liu, 2004) (http://www.cs.uic.edu/ liub/FBS/FBS.html). The data set sizes are summarized in Table 1. Movies News Blogs PR Text level 2002 texts n/a n/a n/a Sentence level 10662 800 800 1200 snippets sent. sent. sent. Table 1: Datasets 3.3 Establishing a Baseline for a Corpus-based System (CBS) Supervised statistical methods have been very successful in sentiment tagging of texts: on movie review texts they reach accuracies of 85-90% (Aue and Gamon, 2005; Pang and Lee, 2004). These methods perform particularly well when a large volume of labeled data from the same domain as the 292 test set is available for training (Aue and Gamon, 2005). For this reason, most of the research on sentiment tagging using statistical classifiers was limited to product and movie reviews, where review authors usually indicate their sentiment in a form of a standardized score that accompanies the texts of their reviews. The lack of sufficient data for training appears to be the main reason for the virtual absence of experiments with statistical classifiers in sentiment tagging at the sentence level. To our knowledge, the only work that describes the application of statistical classifiers (SVM) to sentence-level sentiment classification is (Gamon and Aue, 2005)1. The average performance of the system on ternary classification (positive, negative, and neutral) was between 0.50 and 0.52 for both average precision and recall. The results reported by (Riloff et al., 2006) for binary classification of sentences in a related domain of subjectivity tagging (i.e., the separation of sentiment-laden from neutral sentences) suggest that statistical classifiers can perform well on this task: the authors have reached 74.9% accuracy on the MPQA corpus (Riloff et al., 2006). In order to explore the performance of different approaches in sentiment annotation at the text and sentence levels, we used a basic Na¨ıve Bayes classifier. It has been shown that both Na¨ıve Bayes and SVMs perform with similar accuracy on different sentiment tagging tasks (Pang and Lee, 2004). These observations were confirmed with our own experiments with SVMs and Na¨ıve Bayes (Table 3). We used the Weka package (http://www.cs.waikato.ac.nz/ml/weka/) with default settings. In the sections that follow, we describe a set of comparative experiments with SVMs and Na¨ıve Bayes classifiers (1) on texts and sentences and (2) on four different domains (movie reviews, news, blogs, and product reviews). System runs with unigrams, bigrams, and trigrams as features and with different training set sizes are presented. 
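The baseline runs can be outlined as follows. The original experiments used the Weka package with default settings; scikit-learn is used below only as a convenient stand-in, and the exact feature extraction (tokenisation, counts versus presence) is an assumption of this sketch.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def ngram_baseline(sentences, labels, n=1):
    # One run per n-gram order, as in Table 3: Naive Bayes and SVM with
    # 10-fold cross-validation over pure n-gram features of order n.
    X = CountVectorizer(ngram_range=(n, n)).fit_transform(sentences)
    for name, clf in [("NB", MultinomialNB()), ("SVM", LinearSVC())]:
        acc = cross_val_score(clf, X, labels, cv=10, scoring="accuracy").mean()
        print(f"{n}-gram {name}: {acc:.3f}")
```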
1Recently, a similar task has been addressed by the Affective Text Task at SemEval-1 where even shorter units – headlines – were classified into positive, negative and neutral categories using a variety of techniques (Strapparava and Mihalcea, 2007). 4 Experiments 4.1 System Performance on Texts vs. Sentences The experiments comparing in-domain trained system performance on texts vs. sentences were conducted on 2,002 movie review texts and on 10,662 movie review snippets. The results with 10-fold cross-validation are reported in Table 22. Trained on Texts Trained on Sent. Tested on Tested on Tested on Tested on Texts Sent. Texts Sent. 1gram 81.1 69.0 66.8 77.4 2gram 83.7 68.6 71.2 73.9 3gram 82.5 64.1 70.0 65.4 Table 2: Accuracy of Na¨ıve Bayes on movie reviews. Consistent with findings in the literature (Cui et al., 2006; Dave et al., 2003; Gamon and Aue, 2005), on the large corpus of movie review texts, the indomain-trained system based solely on unigrams had lower accuracy than the similar system trained on bigrams. But the trigrams fared slightly worse than bigrams. On sentences, however, we have observed an inverse pattern: unigrams performed better than bigrams and trigrams. These results highlight a special property of sentence-level annotation: greater sensitivity to sparseness of the model: On texts, classifier error on one particular sentiment marker is often compensated by a number of correctly identified other sentiment clues. Since sentences usually contain a much smaller number of sentiment clues than texts, sentence-level annotation more readily yields errors when a single sentiment clue is incorrectly identified or missed by the system. Due to lower frequency of higher-order n-grams (as opposed to unigrams), higher-order ngram language models are more sparse, which increases the probability of missing a particular sentiment marker in a sentence (Table 33). Very large 2All results are statistically significant at α = 0.01 with two exceptions: the difference between trigrams and bigrams for the system trained and tested on texts is statistically significant at alpha=0.1 and for the system trained on sentences and tested on texts is not statistically significant at α = 0.01. 3The results for movie reviews are lower than those reported in Table 2 since the dataset is 10 times smaller, which results in less accurate classification. The statistical significance of the 293 training sets are required to overcome this higher ngram sparseness in sentence-level annotation. Dataset Movie News Blogs PRs Dataset size 1066 800 800 1200 unigrams SVM 68.5 61.5 63.85 76.9 NB 60.2 59.5 60.5 74.25 nb features 5410 4544 3615 2832 bigrams SVM 59.9 63.2 61.5 75.9 NB 57.0 58.4 59.5 67.8 nb features 16286 14633 15182 12951 trigrams SVM 54.3 55.4 52.7 64.4 NB 53.3 57.0 56.0 69.7 nb features 20837 18738 19847 19132 Table 3: Accuracy of unigram, bigram and trigram models across domains. 4.2 System Performance on Different Domains In the second set of experiments we sought to compare system results on sentences using in-domain and out-of-domain training. Table 4 shows that indomain training, as expected, consistently yields superior accuracy than out-of-domain training across all four datasets: movie reviews (Movies), news, blogs, and product reviews (PRs). The numbers for in-domain trained runs are highlighted in bold. 
Test Data Training Data Movies News Blogs PRs Movies 68.5 55.2 53.2 60.7 News 55.0 61.5 56.25 57.4 Blogs 53.7 49.9 63.85 58.8 PRs 55.8 55.9 56.25 76.9 Table 4: Accuracy of SVM with unigram model results depends on the genre and size of the n-gram: on product reviews, all results are statistically significant at α = 0.025 level; on movie reviews, the difference between Na¨ve Bayes and SVM is statistically significant at α = 0.01 but the significance diminishes as the size of the n-gram increases; on news, only bi-grams produce a statistically significant (α = 0.01) difference between the two machine learning methods, while on blogs the difference between SVMs and Na¨ve Bayes is most pronounced when unigrams are used (α = 0.025). It is interesting to note that on sentences, regardless of the domain used in system training and regardless of the domain used in system testing, unigrams tend to perform better than higher-order ngrams. This observation suggests that, given the constraints on the size of the available training sets, unigram-based systems may be better suited for sentence-level sentiment annotation. 5 Lexicon-Based Approach The search for a base-learner that can produce greatest synergies with a classifier trained on small-set in-domain data has turned our attention to lexiconbased systems. Since the benefits from combining classifiers that always make similar decisions is minimal, the two (or more) base-learners should complement each other (Alpaydin, 2004). Since a system based on a fairly different learning approach is more likely to produce a different decision under a given set of circumstances, the diversity of approaches integrated in the ensemble of classifiers was expected to have a beneficial effect on the overall system performance. A lexicon-based approach capitalizes on the fact that dictionaries, such as WordNet (Fellbaum, 1998), contain a comprehensive and domainindependent set of sentiment clues that exist in general English. A system trained on such general data, therefore, should be less sensitive to domain changes. This robustness, however is expected to come at some cost, since some domain-specific sentiment clues may not be covered in the dictionary. Our hypothesis was, therefore, that a lexiconbased system will perform worse than an in-domain trained classifier but possibly better than a classifier trained on out-of domain data. One of the limitations of general lexicons and dictionaries, such as WordNet (Fellbaum, 1998), as training sets for sentiment tagging systems is that they contain only definitions of individual words and, hence, only unigrams could be effectively learned from dictionary entries. Since the structure of WordNet glosses is fairly different from that of other types of corpora, we developed a system that used the list of human-annotated adjectives from (Hatzivassiloglou and McKeown, 1997) as a seed list and then learned additional unigrams 294 from WordNet synsets and glosses with up to 88% accuracy, when evaluated against General Inquirer (Stone et al., 1966) (GI) on the intersection of our automatically acquired list with GI. In order to expand the list coverage for our experiments at the text and sentence levels, we then augmented the list by adding to it all the words annotated with “Positiv” or “Negativ” tags in GI, that were not picked up by the system. The resulting list of features contained 11,000 unigrams with the degree of membership in the category of positive or negative sentiment assigned to each of them. 
In order to assign the membership score to each word, we did 58 system runs on unique nonintersecting seed lists drawn from manually annotated list of positive and negative adjectives from (Hatzivassiloglou and McKeown, 1997). The 58 runs were then collapsed into a single set of 7,813 unique words. For each word we computed a score by subtracting the total number of runs assigning this word a negative sentiment from the total of the runs that consider it positive. The resulting measure, termed Net Overlap Score (NOS), reflected the number of ties linking a given word with other sentimentladen words in WordNet, and hence, could be used as a measure of the words’ centrality in the fuzzy category of sentiment. The NOSs were then normalized into the interval from -1 to +1 using a sigmoid fuzzy membership function (Zadeh, 1975)4. Only words with fuzzy membership degree not equal to zero were retained in the list. The resulting list contained 10,809 sentiment-bearing words of different parts of speech. The sentiment determination at the sentence and text level was then done by summing up the scores of all identified positive unigrams (NOS>0) and all negative unigrams (NOS<0) (Andreevskaia and Bergler, 2006). 5.1 Establishing a Baseline for the Lexicon-Based System (LBS) The baseline performance of the Lexicon-Based System (LBS) described above is presented in Table 5, along with the performance results of the indomain- and out-of-domain-trained SVM classifier. Table 5 confirms the predicted pattern: the LBS performs with lower accuracy than in-domain4With coefficients: α=1, γ=15. Movies News Blogs PRs LBS 57.5 62.3 63.3 59.3 SVM in-dom. 68.5 61.5 63.85 76.9 SVM out-of-dom. 55.8 55.9 56.25 60.7 Table 5: System accuracy on best runs on sentences trained corpus-based classifiers, and with similar or better accuracy than the corpus-based classifiers trained on out-of-domain data. Thus, the lexiconbased approach is characterized by a bounded but stable performance when the system is ported across domains. These performance characteristics of corpus-based and lexicon-based approaches prompt further investigation into the possibility to combine the portability of dictionary-trained systems with the accuracy of in-domain trained systems. 6 Integrating the Corpus-based and Dictionary-based Approaches The strategy of integration of two or more systems in a single ensemble of classifiers has been actively used on different tasks within NLP. In sentiment tagging and related areas, Aue and Gamon (2005) demonstrated that combining classifiers can be a valuable tool in domain adaptation for sentiment analysis. In the ensemble of classifiers, they used a combination of nine SVM-based classifiers deployed to learn unigrams, bigrams, and trigrams on three different domains, while the fourth domain was used as an evaluation set. Using then an SVM meta-classifier trained on a small number of target domain examples to combine the nine base classifiers, they obtained a statistically significant improvement on out-of-domain texts from book reviews, knowledge-base feedback, and product support services survey data. No improvement occurred on movie reviews. Pang and Lee (2004) applied two different classifiers to perform sentiment annotation in two sequential steps: the first classifier separated subjective (sentiment-laden) texts from objective (neutral) ones and then they used the second classifier to classify the subjective texts into positive and negative. 
Das and Chen (2004) used five classifiers to determine market sentiment on Yahoo! postings. Simple majority vote was applied to make decisions within 295 the ensemble of classifiers and achieved accuracy of 62% on ternary in-domain classification. In this study we describe a system that attempts to combine the portability of a dictionary-trained system (LBS) with the accuracy of an in-domain trained corpus-based system (CBS). The selection of these two classifiers for this system, thus, was theorybased. The section that follows describes the classifier integration and presents the performance results of the system consisting of an ensemble CBS and LBS classifier and a precision-based vote weighting procedure. 6.1 The Classifier Integration Procedure and System Evaluation The comparative analysis of the corpus-based and lexicon-based systems described above revealed that the errors produced by CBS and LBS were to a great extent complementary (i.e., where one classifier makes an error, the other tends to give the correct answer). This provided further justification to the integration of corpus-based and lexicon-based approaches in a single system. Table 6 below illustrates the complementarity of the performance CBS and LBS classifiers on the positive and negative categories. In this experiment, the corpus-based classifier was trained on 400 annotated product review sentences5. The two systems were then evaluated on a test set of another 400 product review sentences. The results reported in Table 6 are statistically significant at α = 0.01. CBS LBS Precision positives 89.3% 69.3% Precision negatives 55.5% 81.5% Pos/Neg Precision 58.0% 72.1% Table 6: Base-learners’ precision and recall on product reviews on test data. Table 6 shows that the corpus-based system has a very good precision on those sentences that it classifies as positive but makes a lot of errors on those sentences that it deems negative. At the same time, the lexicon-based system has low precision on positives 5The small training set explains relatively low overall performance of the CBS system. and high precision on negatives6. Such complementary distribution of errors produced by the two systems was observed on different data sets from different domains, which suggests that the observed distribution pattern reflects the properties of each of the classifiers, rather than the specifics of the domain/genre. In order to take advantage of the observed complementarity of the two systems, the following procedure was used. First, a small set of in-domain data was used to train the CBS system. Then both CBS and LBS systems were run separately on the same training set, and for each classifier, the precision measures were calculated separately for those sentences that the classifier considered positive and those it considered negative. The chance-level performance (50%) was then subtracted from the precision figures to ensure that the final weights reflect by how much the classifier’s precision exceeds the chance level. The resulting chance-adjusted precision numbers of the two classifiers were then normalized, so that the weights of CBS and LBS classifiers sum up to 100% on positive and to 100% on negative sentences. These weights were then used to adjust the contribution of each classifier to the decision of the ensemble system. The choice of the weight applied to the classifier decision, thus, varied depending on whether the classifier scored a given sentence as positive or as negative. 
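The weighting procedure just described can be written down compactly. The per-class precision, the subtraction of the 50% chance level, and the normalisation so that the two classifiers' weights sum to one on positives and on negatives follow the description above; the function names and the final signed-sum combination rule are our own reading, since the text does not spell out exactly how the two weighted votes are merged into a single label.

```python
def class_precisions(preds, gold):
    """Precision of one classifier on the items it called positive / negative."""
    out = {}
    for c in ("pos", "neg"):
        called = [g for p, g in zip(preds, gold) if p == c]
        out[c] = sum(g == c for g in called) / len(called) if called else 0.5
    return out

def vote_weights(cbs_preds, lbs_preds, gold, chance=0.5):
    # Chance-adjusted precision, normalised across the two base classifiers,
    # computed separately for the positive and the negative decisions.
    p_cbs, p_lbs = class_precisions(cbs_preds, gold), class_precisions(lbs_preds, gold)
    weights = {}
    for c in ("pos", "neg"):
        a, b = max(p_cbs[c] - chance, 0.0), max(p_lbs[c] - chance, 0.0)
        total = (a + b) or 1.0
        weights[c] = {"cbs": a / total, "lbs": b / total}   # sums to 1 per class
    return weights

def combine(cbs_pred, lbs_pred, weights):
    # Each classifier votes with the weight attached to the class it chose;
    # the signed sum decides the final label (assumed combination rule).
    score = 0.0
    for name, pred in (("cbs", cbs_pred), ("lbs", lbs_pred)):
        score += weights[pred][name] if pred == "pos" else -weights[pred][name]
    return "pos" if score >= 0 else "neg"
```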
The resulting system was then tested on a separate test set of sentences7. The small-set training and evaluation experiments with the system were performed on different domains using 3-fold validation. The experiments conducted with the Ensemble system were designed to explore system performance under conditions of limited availability of annotated data for classifier training. For this reason, the numbers reported for the corpus-based classifier do not reflect the full potential of machine learning approaches when sufficient in-domain training data is available. Table 7 presents the results of these experiments by domain/genre. The results 6These results are consistent with an observation in (Kennedy and Inkpen, 2006), where a lexicon-based system performed with a better precision on negative than on positive texts. 7The size of the test set varied in different experiments due to the availability of annotated data for a particular domain. 296 are statistically significant at α = 0.01, except the runs on movie reviews where the difference between the LBS and Ensemble classifiers was significant at α = 0.05. LBS CBS Ensemble News Acc 67.8 53.2 73.3 F 0.82 0.71 0.85 Movies Acc 54.5 53.5 62.1 F 0.73 0.72 0.77 Blogs Acc 61.2 51.1 70.9 F 0.78 0.69 0.83 PRs Acc 59.5 58.9 78.0 F 0.77 0.75 0.88 Average Acc 60.7 54.2 71.1 F 0.77 0.72 0.83 Table 7: Performance of the ensemble classifier Table 7 shows that the combination of two classifiers into an ensemble using the weighting technique described above leads to consistent improvement in system performance across all domains/genres. In the ensemble system, the average gain in accuracy across the four domains was 16.9% relative to CBS and 10.3% relative to LBS. Moreover, the gain in accuracy and precision was not offset by decreases in recall: the net gain in recall was 7.4% relative to CBS and 13.5% vs. LBS. The ensemble system on average reached 99.1% recall. The F-measure has increased from 0.77 and 0.72 for LBS and CBS classifiers respectively to 0.83 for the whole ensemble system. 7 Discussion The development of domain-independent sentiment determination systems poses a substantial challenge for researchers in NLP and artificial intelligence. The results presented in this study suggest that the integration of two fairly different classifier learning approaches in a single ensemble of classifiers can yield substantial gains in system performance on all measures. The most substantial gains occurred in recall, accuracy, and F-measure. This study permits to highlight a set of factors that enable substantial performance gains with the ensemble of classifiers approach. Such gains are most likely when (1) the errors made by the classifiers are complementary, i.e., where one classifier makes an error, the other tends to give the correct answer, (2) the classifier errors are not fully random and occur more often in a certain segment (or category) of classifier results, and (3) there is a way for a system to identify that low-precision segment and reduce the weights of that classifier’s results on that segment accordingly. The two classifiers used in this study – corpus-based and lexicon-based – provided an interesting illustration of potential performance gains associated with these three conditions. The use of precision of classifier results on the positives and negatives proved to be an effective technique for classifier vote weighting within the ensemble. 
8 Conclusion This study contributes to the research on sentiment tagging, domain adaptation, and the development of ensembles of classifiers (1) by proposing a novel approach for sentiment determination at sentence level and delineating the conditions under which greatest synergies among combined classifiers can be achieved, (2) by describing a precision-based technique for assigning differential weights to classifier results on different categories identified by the classifier (i.e., categories of positive vs. negative sentences), and (3) by proposing a new method for sentiment annotation in situations where the annotated in-domain data is scarce and insufficient to ensure adequate performance of the corpus-based classifier, which still remains the preferred choice when large volumes of annotated data are available for system training. Among the most promising directions for future research in the direction laid out in this paper is the deployment of more advanced classifiers and feature selection techniques that can further enhance the performance of the ensemble of classifiers. The precision-based vote weighting technique may prove to be effective also in situations, where more than two classifiers are integrated into a single system. We expect that these more advanced ensemble-ofclassifiers systems would inherit the benefits of multiple complementary approaches to sentiment annotation and will be able to achieve better and more stable accuracy on in-domain, as well as on out-ofdomain data. 297 References Ethem Alpaydin. 2004. Introduction to Machine Learning. The MIT Press, Cambridge, MA. Alina Andreevskaia and Sabine Bergler. 2006. Mining WordNet for a fuzzy sentiment: Sentiment tag extraction from WordNet glosses. In Proceedings the 11th Conference of the European Chapter of the Association for Computational Linguistics, Trento, IT. Anthony Aue and Michael Gamon. 2005. Customizing sentiment classifiers to new domains: a case study. In Proccedings of the International Conference on Recent Advances in Natural Language Processing, Borovets, BG. Xue Bai, Rema Padman, and Edoardo Airoldi. 2005. On learning parsimonious models for extracting consumer opinions. In Proceedings of the 38th Annual Hawaii International Conference on System Sciences, Washington, DC. Hang Cui, Vibhu Mittal, and Mayur Datar. 2006. Comparative experiments on sentiment classification for online product reviews. In Proceedings of the 21st International Conference on Artificial Intelligence, Boston, MA. Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the Peanut gallery: opinion extraction and semantic classification of product reviews. In Proceedings of WWW03, Budapest, HU. Mark Drezde, John Blitzer, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Prague, CZ. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. Michael Gamon and Anthony Aue. 2005. Automatic identification of sentiment vocabulary: exploiting low association with known sentiment terms. In Proceedings of the ACL-05 Workshop on Feature Engineering for Machine Learning in Natural Language Processing, Ann Arbor, US. Vasileios Hatzivassiloglou and Kathleen B. McKeown. 1997. Predicting the Semantic Orientation of Adjectives. In Proceedings of the the 40th Annual Meeting of the Association of Computational Linguistics. 
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD-04, pages 168–177. Alistair Kennedy and Diana Inkpen. 2006. Sentiment Classification of Movie Reviews Using Contextual Valence Shifters. Computational Intelligence, 22(2):110–125. Soo-Min Kim and Eduard Hovy. 2005. Automatic detection of opinion bearing words and sentences. In Proceedings of the Second International Joint Conference on Natural Language Processing, Companion Volume, Jeju Island, KR. Bo Pang and Lilian Lee. 2004. A sentiment education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43nd Meeting of the Association for Computational Linguistics, Ann Arbor, US. Bo Pang, Lilian Lee, and Shrivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Conference on Empirical Methods in Natural Language Processing. Jonathon Read. 2005. Using emoticons to reduce dependency in machine learning techniques for sentiment classification. In Proceedings of the ACL-2005 Student Research Workshop, Ann Arbor, MI. Ellen Riloff, Siddharth Patwardhan, and Janyce Wiebe. 2006. Feature subsumption for opinion analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Sydney, AUS. P.J. Stone, D.C. Dumphy, M.S. Smith, and D.M. Ogilvie. 1966. The General Inquirer: a computer approach to content analysis. M.I.T. studies in comparative politics. M.I.T. Press, Cambridge, MA. Carlo Strapparava and Rada Mihalcea. 2007. SemEval2007 Task 14: Affective Text. In Proceedings of the 4th International Workshop on Semantic Evaluations, Prague, CZ. Songbo Tan, Gaowei Wu, Huifeng Tang, and Zueqi Cheng. 2007. A Novel Scheme for Domain-transfer Problem in the context of Sentiment Analysis. In Proceedings of CIKM 2007. Peter Turney and Michael Littman. 2003. Measuring praise and criticism: inference of semantic orientation from association. ACM Transactions on Information Systems (TOIS), 21:315–346. Peter Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association of Computational Linguistics. Janyce Wiebe, Rebecca Bruce, Matthew Bell, Melanie Martin, and Theresa Wilson. 2001. A corpus study of Evaluative and Speculative Language. In Proceedings of the 2nd ACL SIGDial Workshop on Discourse and Dialogue, Aalberg, DK. Lotfy A. Zadeh. 1975. Calculus of Fuzzy Restrictions. In L.A. Zadeh et al., editor, Fuzzy Sets and their Applications to cognitive and decision processes, pages 1–40. Academic Press Inc., New-York. 298
2008
34
Proceedings of ACL-08: HLT, pages 299–307, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Generic Sentence Trimmer with CRFs Tadashi Nomoto National Institute of Japanese Literature 10-3, Midori Tachikawa Tokyo, 190-0014, Japan [email protected] Abstract The paper presents a novel sentence trimmer in Japanese, which combines a non-statistical yet generic tree generation model and Conditional Random Fields (CRFs), to address improving the grammaticality of compression while retaining its relevance. Experiments found that the present approach outperforms in grammaticality and in relevance a dependency-centric approach (Oguro et al., 2000; Morooka et al., 2004; Yamagata et al., 2006; Fukutomi et al., 2007) −the only line of work in prior literature (on Japanese compression) we are aware of that allows replication and permits a direct comparison. 1 Introduction For better or worse, much of prior work on sentence compression (Riezler et al., 2003; McDonald, 2006; Turner and Charniak, 2005) turned to a single corpus developed by Knight and Marcu (2002) (K&M, henceforth) for evaluating their approaches. The K&M corpus is a moderately sized corpus consisting of 1,087 pairs of sentence and compression, which account for about 2% of a Ziff-Davis collection from which it was derived. Despite its limited scale, prior work in sentence compression relied heavily on this particular corpus for establishing results (Turner and Charniak, 2005; McDonald, 2006; Clarke and Lapata, 2006; Galley and McKeown, 2007). It was not until recently that researchers started to turn attention to an alternative approach which does not require supervised data (Turner and Charniak, 2005). Our approach is broadly in line with prior work (Jing, 2000; Dorr et al., 2003; Riezler et al., 2003; Clarke and Lapata, 2006), in that we make use of some form of syntactic knowledge to constrain compressions we generate. What sets this work apart from them, however, is a novel use we make of Conditional Random Fields (CRFs) to select among possible compressions (Lafferty et al., 2001; Sutton and McCallum, 2006). An obvious benefit of using CRFs for sentence compression is that the model provides a general (and principled) probabilistic framework which permits information from various sources to be integrated towards compressing sentence, a property K&M do not share. Nonetheless, there is some cost that comes with the straightforward use of CRFs as a discriminative classifier in sentence compression; its outputs are often ungrammatical and it allows no control over the length of compression they generates (Nomoto, 2007). We tackle the issues by harnessing CRFs with what we might call dependency truncation, whose goal is to restrict CRFs to working with candidates that conform to the grammar. Thus, unlike McDonald (2006), Clarke and Lapata (2006) and Cohn and Lapata (2007), we do not insist on finding a globally optimal solution in the space of 2n possible compressions for an n word long sentence. Rather we insist on finding a most plausible compression among those that are explicitly warranted by the grammar. Later in the paper, we will introduce an approach called the ‘Dependency Path Model’ (DPM) from the previous literature (Section 4), which purports to provide a robust framework for sentence compres299 sion in Japanese. We will look at how the present approach compares with that of DPM in Section 6. 
2 A Sentence Trimmer with CRFs Our idea on how to make CRFs comply with grammar is quite simple: we focus on only those label sequences that are associated with grammatically correct compressions, by making CRFs look at only those that comply with some grammatical constraints G, and ignore others, regardless of how probable they are.1 But how do we find compressions that are grammatical? To address the issue, rather than resort to statistical generation models as in the previous literature (Cohn and Lapata, 2007; Galley and McKeown, 2007), we pursue a particular rule-based approach we call a ‘dependency truncation,’ which as we will see, gives us a greater control over the form that compression takes. Let us denote a set of label assignments for S that satisfy constraints, by G(S).2 We seek to solve the following, y⋆= arg max y∈G(S) p(y|x;θθθ). (2) There would be a number of ways to go about the problem. In the context of sentence compression, a linear programming based approach such as Clarke and Lapata (2006) is certainly one that deserves consideration. In this paper, however, we will explore a much simpler approach which does not require as involved formulation as Clarke and Lapata (2006) do. We approach the problem extentionally, i.e., through generating sentences that are grammatical, or that conform to whatever constraints there are. 1Assume as usual that CRFs take the form, p(y|x) ∝ exp P k,j λjfj(yk, yk−1, x) + P i µigi(xk, yk, x) ! = exp[w⊤f(x, y)] (1) fj and gi are ‘features’ associated with edges and vertices, respectively, and k ∈C, where C denotes a set of cliques in CRFs. λj and µi are the weights for corresponding features. w and f are vector representations of weights and features, respectively (Tasker, 2004). 2Note that a sentence compression can be represented as an array of binary labels, one of them marking words to be retained in compression and the other those to be dropped. S V N P N V N A D J N P N V N Figure 1: Syntactic structure in Japanese Consider the following. (3) Mushoku-no unemployed John John -ga SBJ takai expensive kuruma car -wo ACC kat-ta. buy PAST ‘John, who is unemployed, bought an expensive car.’ whose grammatically legitimate compressions would include: (4) (a) John -ga takai kuruma -wo kat-ta. ‘John bought an expensive car.’ (b) John -ga kuruma -wo kat-ta. ‘John bought a car.’ (c) Mushoku-no John -ga kuruma -wo kat-ta. ‘John, who is unemployed, bought a car. (d) John -ga kat-ta. ‘John bought.’ (e) Mushoku-no John -ga kat-ta. ‘John, who is unemployed, bought.’ (f) Takai kuruma-wo kat-ta. ‘ Bought an expensive car.’ (g) Kuruma-wo kat-ta. ‘ Bought a car.’ (h) Kat-ta. ‘ Bought.’ This would give us G(S)={a, b, c, d, e, f, g, h}, for the input 3. Whatever choice we make for compression among candidates in G(S), should be grammatical, since they all are. One linguistic feature 300 B S 2 B S 4 B S 5 B S 3 B S 1 N P V S Figure 2: Compressing an NP chunk C D E B A Figure 3: Trimming TDPs of the Japanese language we need to take into account when generating compressions, is that the sentence, which is free of word order and verb-final, typically takes a left-branching structure as in Figure 1, consisting of an array of morphological units called bunsetsu (BS, henceforth). 
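Equation 2 can be read operationally: enumerate the grammatical candidate set G(S) and keep the candidate with the highest unnormalized log-linear score, since the partition function is constant for a fixed input. The sketch below assumes a toy feature map (unigram emission and label-transition counts, in the spirit of the g_i and f_j above) and toy weights; it is not the feature set or decoder used in the paper.

```python
# A minimal sketch of Eq. 2: compare only label sequences in G(S) under an
# unnormalized log-linear (CRF-style) score w . f(x, y).
from collections import Counter

def features(x, y):
    """Toy emission (word, label) and transition (prev label, label) counts."""
    feats = Counter()
    prev = 'START'
    for word, label in zip(x, y):
        feats[('emit', word, label)] += 1
        feats[('trans', prev, label)] += 1
        prev = label
    return feats

def score(x, y, w):
    # The partition function is identical for every y given x, so it can be
    # ignored when taking the arg max.
    return sum(w.get(f, 0.0) * v for f, v in features(x, y).items())

def best_compression(x, g_of_s, w):
    """Arg max restricted to the grammatical candidate set G(S)."""
    return max(g_of_s, key=lambda y: score(x, y, w))

if __name__ == "__main__":
    x = ['Mushoku-no', 'John-ga', 'takai', 'kuruma-wo', 'kat-ta']
    # 1 = retain, 0 = drop; only grammatical candidates (cf. example 4) enter.
    g_of_s = [(0, 1, 1, 1, 1), (0, 1, 0, 1, 1), (0, 1, 0, 0, 1)]
    w = {('emit', 'kuruma-wo', 1): 1.0, ('emit', 'takai', 1): -0.5}
    print(best_compression(x, g_of_s, w))
```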
A BS, which we might regard as an inflected form (case marked in the case of nouns) of verb, adjective, and noun, could involve one or more independent linguistic elements such as noun, case particle, but acts as a morphological atom, in that it cannot be torn apart, or partially deleted, without compromising the grammaticality.3 Noting that a Japanese sentence typically consists of a sequence of case marked NPs and adjuncts, followed by a main verb at the end (or what would be called ‘matrix verb’ in linguistics), we seek to compress each of the major chunks in the sentence, leaving untouched the matrix verb, as its removal often leaves the sentence unintelligible. In particular, starting with the leftmost BS in a major constituent, 3Example 3 could be broken into BSs: / Mushuku -no / John -ga / takai / kuruma -wo / kat-ta /. we work up the tree by pruning BSs on our way up, which in general gives rise to grammatically legitimate compressions of various lengths (Figure 2). More specifically, we take the following steps to construct G(S). Let S = ABCDE. Assume that it has a dependency structure as in Figure 3. We begin by locating terminal nodes, i.e., those which have no incoming edges, depicted as filled circles in Figure 3, and find a dependency (singly linked) path from each terminal node to the root, or a node labeled ‘E’ here, which would give us two paths p1 = A-C-D-E and p2 = B-C-D-E (call them terminating dependency paths, or TDPs). Now create a set T of all trimmings, or suffixes of each TDP, including an empty string: T (p1) = {<A C D E>, <C D E>, <D E>, <E>, <>} T (p2) = {<B C D E>, <C D E>, <D E>, <E>, <>} Then we merge subpaths from the two sets in every possible way, i.e., for any two subpaths t1 ∈T (p1) and t2 ∈T (p2), we take a union over nodes in t1 and t2; Figure 4 shows how this might done. We remove duplicates if any. This would give us G(S)={{A B C D E}, {A C D E}, {B C D E}, {C D E}, {D E}, {E}, {}}, a set of compressions over S based on TDPs. What is interesting about the idea is that creating G(S) does not involve much of anything that is specific to a given language. Indeed this could be done on English as well. Take for instance a sentence at the top of Table 1, which is a slightly modified lead sentence from an article in the New York Times. Assume that we have a relevant dependency structure as shown in Figure 5, where we have three TDPs, i.e., one with southern, one with British and one with lethal. Then G(S) would include those listed in Table 1. A major difference from Japanese lies in the direction in which a tree is branching out: right versus left.4 Having said this, we need to address some language specific constraints: in Japanese, for instance, we should keep a topic marked NP in compression as its removal often leads to a decreased readability; and also it is grammatically wrong to start any compressed segment with sentence nominalizers such as 4We stand in a marked contrast to previous ‘grafting’ approaches which more or less rely on an ad-hoc collection of transformation rules to generate candidates (Riezler et al., 2003). 
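The construction of G(S) just described, with terminal nodes, terminating dependency paths, their suffixes, and unions of one suffix per TDP, can be sketched as follows for the tree of Figure 3. The child-to-head dictionary used to encode the tree is an assumption of this sketch, not the authors' data structure.

```python
# A minimal sketch of the dependency-truncation construction of G(S),
# assuming the tree is given as a child -> head map (edges point toward
# the root, as in Figure 3).
from itertools import product

def candidate_set(heads, root):
    """heads: dict mapping each non-root node to the node it depends on."""
    # Terminal nodes are those with no incoming edge (nothing depends on them).
    dependents = set(heads.values())
    terminals = [n for n in heads if n not in dependents]
    # A terminating dependency path (TDP) runs from a terminal up to the root.
    tdps = []
    for t in terminals:
        path, node = [t], t
        while node != root:
            node = heads[node]
            path.append(node)
        tdps.append(path)
    # All suffixes of each TDP, including the empty one.
    suffix_sets = [[tuple(p[i:]) for i in range(len(p) + 1)] for p in tdps]
    # Choose one suffix per TDP, union their nodes, and collapse duplicates.
    return {frozenset(n for suf in combo for n in suf)
            for combo in product(*suffix_sets)}

if __name__ == "__main__":
    # The tree of Figure 3: A -> C, B -> C, C -> D, D -> E (E is the root).
    heads = {'A': 'C', 'B': 'C', 'C': 'D', 'D': 'E'}
    for cand in sorted(candidate_set(heads, 'E'), key=len):
        print(sorted(cand))
```

Run on the Figure 3 tree, this reproduces the seven-member G(S) given in the text, from the full set {A B C D E} down to the empty compression.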
301 Table 1: Hedge-clipping English An official was quoted yesterday as accusing Iran of supplying explosive technology used in lethal attacks on British troops in southern Iraq An official was quoted yesterday as accusing Iran of supplying explosive technology used in lethal attacks on British troops in Iraq An official was quoted yesterday as accusing Iran of supplying explosive technology used in lethal attacks on British troops An official was quoted yesterday as accusing Iran of supplying explosive technology used in lethal attacks on troops An official was quoted yesterday as accusing Iran of supplying explosive technology used in lethal attacks An official was quoted yesterday as accusing Iran of supplying explosive technology used in attacks An official was quoted yesterday as accusing Iran of supplying explosive technology An official was quoted yesterday as accusing Iran of supplying technology < A C D E > < B C D E > < C D E > < D E > < E > < > { A B C D E } { A C D E } { A C D E } { A C D E } { A C D E } < D E > < B C D E > < C D E > < D E > < E > < > { B C D E } { C D E } { D E } { D E } { D E } < > < B C D E > < C D E > < D E > < E > < > { B C D E } { C D E } { D E } { E } { } < C D E > < B C D E > < C D E > < D E > < E > < > { B C D E } { C D E } { C D E } { C D E } { C D E } < E > < B C D E > < C D E > < D E > < E > < > { B C D E } { C D E } { D E } { E } { E } Figure 4: Combining TDP suffixes -koto and -no. In English, we should keep a preposition from being left dangling, as in An official was quoted yesterday as accusing Iran of supplying technology used in. In any case, we need some extra rules on G(S) to take care of language specific issues (cf. Vandeghinste and Pan (2004) for English). An important point about the dependency truncation is that for most of the time, a compression it generates comes out reasonably grammatical, so the number of ‘extras’ should be small. Finally, in order for CRFs to work with the compressions, we need to translate them into a sequence of binary labels, which involves labeling an element token, bunsetsu or a word, with some label, e.g., 0 for ’remove’ and 1 for ‘retain,’ as in Figure 6. i n s o u t h e r n I r a q t r o o p s B r i t i s h o n a t t a c k s l e t h a l i n u s e d Figure 5: An English dependency structure and TDPs Consider following compressions y1 to y4 for x = β1β2β3β4β5β6. βi denotes a bunsetsu (BS). ‘0’ marks a BS to be removed and ‘1’ that to be retained. β1 β2 β3 β4 β5 β6 y1 0 1 1 1 1 1 y2 0 0 1 1 1 1 y3 0 0 0 0 0 1 y4 0 0 1 0 0 0 Assume that G(S) = {y1, y2, y3}. Because y4 is not part of G(S), it is not considered a candidate for a compression for y, even if its likelihood may exceed those of others in G(S). We note that the approach here does not rely on so much of CRFs as a discriminative classifier as CRFs as a strategy for ranking among a limited set of label sequences which correspond to syntactically plausible simplifications of input sentence. Furthermore, we could dictate the length of compression by putbting an additional constraint on out302 S 0 0 0 1 0 0 0 1 Figure 6: Compression in binary representation. put, as in: y⋆= arg max y∈G′(S) p(y|x;θθθ), (5) where G′(S) = {y : y ∈G(S), R(y, x) = r}. R(y, x) denotes a compression rate r for which y is desired, where r = # of 1 in y length of x. 
The constraint forces the trimmer to look for the best solution among candidates that satisfy the constraint, ignoring those that do not.5 Another point to note is that G(S) is finite and relatively small −it was found, for our domain, G(S) usually runs somewhere between a few hundred and ten thousand in length −so in practice it suffices that we visit each compression in G(S), and select one that gives the maximum value for the objective function. We will have more to say about the size of the search space in Section 6. 3 Features in CRFs We use an array of features in CRFs which are either derived or borrowed from the taxonomy that a Japanese tokenizer called JUMAN and KNP,6 a Japanese dependency parser (aka Kurohashi-Nagao Parser), make use of in characterizing the output they produce: both JUMAN and KNP are part of the compression model we build. Features come in three varieties: semantic, morphological and syntactic. Semantic features are used for classifying entities into semantic types such as name of person, organization, or place, while syntactic features characterize the kinds of dependency 5It is worth noting that the present approach can be recast into one based on ‘constraint relaxation’ (Tromble and Eisner, 2006). 6http://nlp.kuee.kyoto-u.ac.jp/nl-resource/top-e.html relations that hold among BSs such as whether a BS is of the type that combines with the verb (renyou), or of the type that combines with the noun (rentai), etc. A morphological feature could be thought of as something that broadly corresponds to an English POS, marking for some syntactic or morphological category such as noun, verb, numeral, etc. Also we included ngram features to encode the lexical context in which a given morpheme appears. Thus we might have something like: for some words (morphemes) w1, w2, and w3, fw1·w2(w3) = 1 if w3 is preceded by w1, w2; otherwise, 0. In addition, we make use of an IR-related feature, whose job is to indicate whether a given morpheme in the input appears in the title of an associated article. The motivation for the feature is obviously to identify concepts relevant to, or unique to the associated article. Also included was a feature on tfidf, to mark words that are conceptually more important than others. The number of features came to around 80,000 for the corpus we used in the experiment. 4 The Dependency Path Model In what follows, we will describe somewhat in detail a prior approach to sentence compression in Japanese which we call the ”dependency path model,” or DPM. DPM was first introduced in (Oguro et al., 2000), later explored by a number of people (Morooka et al., 2004; Yamagata et al., 2006; Fukutomi et al., 2007).7 DPM has the form: h(y) = αf(y) + (1 −α)g(y), (6) where y = β0, β1, . . . , βn−1, i.e., a compression consisting of any number of bunsetsu’s, or phraselike elements. f(·) measures the relevance of content in y; and g(·) the fluency of text. α is to provide a way of weighing up contributions from each component. We further define: f(y) = n−1 ∑ i=0 q(βi), (7) 7Kikuchi et al. (2003) explore an approach similar to DPM. 303 d i s a p p e a r e d d o g s f r o m T h r e e l e g g e d s i g h t Figure 7: A dependency structure and g(y) = max s n−2 ∑ i=0 p(βi, βs(i)). (8) q(·) is meant to quantify how worthy of inclusion in compression, a given bunsetsu is; and p(βi, βj) represents the connectivity strength of dependency relation between βi and βj. s(·) is a linking function that associates with a bunsetsu any one of those that follows it. 
g(y) thus represents a set of linked edges that, if combined, give the largest probability for y. Dependency path length (DL) refers to the number of (singly linked) dependency relations (or edges) that span two bunsetsu’s. Consider the dependency tree in Figure 7, which corresponds to a somewhat contrived sentence ’Three-legged dogs disappeared from sight.’ Take an English word for a bunsetsu here. We have DL(three-legged, dogs) = 1 DL(three-legged, disappeared) = 2 DL(three-legged, from) = ∞ DL(three-legged, sight) = ∞ Since dogs is one edge away from three-legged, DL for them is 1; and we have DL of two for threelegged and disappeared, as we need to cross two edges in the direction of arrow to get from the former to the latter. In case there is no path between words as in the last two cases above, we take the DL to be infinite. DPM takes a dependency tree to be a set of linked edges. Each edge is expressed as a triple < Cs(βi), Ce(βj), DL(βi, βj) >, where βi and βj represent bunsestu’s that the edge spans. Cs(β) denotes the class of a bunsetsu where the edge starts and Ce(β) that of a bunsetsu where the edge ends. What we mean by ‘class of bunsetsu’ is some sort of a classificatory scheme that concerns linguistic characteristics of bunsetsu, such as a part-of-speech of the head, whether it has an inflection, and if it does, what type of inflection it has, etc. Moreover, DPM uses two separate classificatory schemes for Cs(β) and Ce(β). In DPM, we define the connectivity strength p by: p(βi, βj) = { log S(t) if DL(βi, βj) ̸= ∞ −∞ otherwise (9) where t =< Cs(βi), Ce(βj), DL(βi, βj) >, and S(t) is the probability of t occurring in a compression, which is given by: S(t) = # of t’s found in compressions # of triples found in the training data (10) We complete the DPM formulation with: q(β) = log pc(β) + tfidf(β) (11) pc(β) denotes the probability of having bunsetsu β in compression, calculated analogously to Eq. 10,8 and tfidf(β) obviously denotes the tfidf value of β. In DPM, a compression of a given sentence can be obtained by finding arg maxy h(y), where y ranges over possible candidate compressions of a particular length one may derive from that sentence. In the experiment described later, we set α = 0.1 for DPM, following Morooka et al. (2004), who found the best performance with that setting for α. 5 Evaluation Setup We created a corpus of sentence summaries based on email news bulletins we had received over five to six months from an on-line news provider called Nikkei Net, which mostly deals with finance and politics.9 Each bulletin consists of six to seven news briefs, each with a few sentences. Since a news brief contains nothing to indicate what its longer version 8DPM puts bunsetsu’s into some groups based on linguistic features associated with them, and uses the statistics of the groups for pc rather than that of bunsetsu’s that actually appear in text. 9http://www.nikkei.co.jp 304 Table 2: The rating scale on fluency RATING EXPLANATION 1 makes no sense 2 only partially intelligible/grammatical 3 makes sense; seriously flawed in grammar 4 makes good sense; only slightly flawed in grammar 5 makes perfect sense; no grammar flaws might look like, we manually searched the news site for a full-length article that might reasonably be considered a long version of that brief. 
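Returning to the DPM formulation above (Eqs. 6-11), the scoring of a candidate compression can be sketched as below. The bunsetsu classes, the triple statistics S(t) and the per-bunsetsu scores q(β) are passed in as toy dictionaries; in the actual model they are estimated from training data, a single class map stands in for DPM's two classificatory schemes, and the smoothing constant is an assumption of this sketch.

```python
# A hedged sketch of DPM: h(y) = alpha*f(y) + (1-alpha)*g(y), with dependency
# path length (DL), connectivity strength p (Eq. 9) and content scores q.
import math

NEG_INF = float('-inf')

def dep_length(i, j, heads):
    """Number of singly-linked edges from bunsetsu i up to bunsetsu j;
    infinite if j is not reachable from i by following heads."""
    steps, node = 0, i
    while node is not None:
        if node == j:
            return steps
        node = heads.get(node)
        steps += 1
    return math.inf

def connectivity(i, j, heads, classes, triple_prob):
    dl = dep_length(i, j, heads)
    if dl == math.inf:
        return NEG_INF
    t = (classes[i], classes[j], dl)
    return math.log(triple_prob.get(t, 1e-6))  # smoothed stand-in for S(t)

def dpm_score(y, heads, classes, triple_prob, q, alpha=0.1):
    f = sum(q[b] for b in y)                       # content relevance, Eq. 7
    g = sum(max(connectivity(b, c, heads, classes, triple_prob)
                for c in y[k + 1:])                # best link per bunsetsu, Eq. 8
            for k, b in enumerate(y[:-1]))
    return alpha * f + (1 - alpha) * g

if __name__ == "__main__":
    # The tree of Figure 7: "Three-legged dogs disappeared from sight."
    heads = {'three-legged': 'dogs', 'dogs': 'disappeared',
             'from': 'disappeared', 'sight': 'from'}
    print(dep_length('three-legged', 'dogs', heads))         # 1
    print(dep_length('three-legged', 'disappeared', heads))  # 2
    print(dep_length('three-legged', 'sight', heads))        # inf
```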
We extracted lead sentences both from the brief and from its source article, and aligned them, using what is known as the Smith-Waterman algorithm (Smith and Waterman, 1981), which produced 1,401 pairs of summary and source sentence.10 For the ease of reference, we call the corpus so produced ‘NICOM’ for the rest of the paper. A part of our system makes use of a modeling toolkit called GRMM (Sutton et al., 2004; Sutton, 2006). Throughout the experiments, we call our approach ‘Generic Sentence Trimmer’ or GST. 6 Results and Discussion We ran DPM and GST on NICOM in the 10-fold cross validation format where we break the data into 10 blocks, use 9 of them for training and test on the remaining block. In addition, we ran the test at three different compression rates, 50%, 60% and 70%, to learn how they affect the way the models perform. This means that for each input sentence in NICOM, we have three versions of its compression created, corresponding to a particular rate at which the sentence is compressed. We call a set of compressions so generated ‘NICOM-g.’ In order to evaluate the quality of outputs GST and DPM generate, we asked 6 people, all Japanese natives, to make an intuitive judgment on how each compression fares in fluency and relevance to gold 10The Smith-Waterman algorithm aims at finding a best match between two sequences which may include gaps, such as A-C-D-E and A-B-C-D-E. The algorithm is based on an idea rather akin to dynamic programming. Table 3: The rating scale on content overlap RATING EXPLANATION 1 no overlap with reference 2 poor or marginal overlap w. ref. 3 moderate overlap w. ref. 4 significant overlap w. ref. 5 perfect overlap w. ref. standards (created by humans), on a scale of 1 to 5. To this end, we conducted evaluation in two separate formats; one concerns fluency and the other relevance. The fluency test consisted of a set of compressions which we created by randomly selecting 200 of them from NICOM-g, for each model at compression rates 50%, 60%, and 70%; thus we have 200 samples for each model and each compression rate.11 The total number of test compressions came to 1,200. The relevance test, on the other hand, consisted of paired compressions along with the associated gold standard compressions. Each pair contains compressions both from DPM and from GST at a given compression rate. We randomly picked 200 of them from NICOM-g, at each compression rate, and asked the participants to make a subjective judgment on how much of the content in a compression semantically overlap with that of the gold standard, on a scale of 1 to 5 (Table 3). Also included in the survey are 200 gold standard compressions, to get some idea of how fluent “ideal” compressions are, compared to those generated by machine. Tables 4 and 5 summarize the results. Table 4 looks at the fluency of compressions generated by each of the models; Table 5 looks at how much of the content in reference is retained in compressions. In either table, CR stands for compression rate. All the results are averaged over samples. We find in Table 4 a clear superiority of GST over DPM at every compression rate examined, with fluency improved by as much as 60% at 60%. However, GST fell short of what human compressions achieved in fluency −an issue we need to address 11As stated elsewhere, by compression rate, we mean r = # of 1 in y length of x. 
305 Table 4: Fluency (Average) MODEL/CR 50% 60% 70% GST 3.430 3.820 3.810 DPM 2.222 2.372 2.660 Human − 4.45 − Table 5: Semantic (Content) Overlap (Average) MODEL/CR 50% 60% 70% GST 2.720 3.181 3.405 DPM 2.210 2.548 2.890 in the future. Since the average CR of gold standard compressions was 60%, we report their fluency at that rate only. Table 5 shows the results in relevance of content. Again GST marks a superior performance over DPM, beating it at every compression rate. It is interesting to observe that GST manages to do well in the semantic overlap, despite the cutback on the search space we forced on GST. As for fluency, we suspect that the superior performance of GST is largely due to the dependency truncation the model is equipped with; and its performance in content overlap owes a lot to CRFs. However, just how much improvement GST achieved over regular CRFs (with no truncation) in fluency and in relevance is something that remains to be seen, as the latter do not allow for variable length compression, which prohibits a straightforward comparison between the two kinds of models. We conclude the section with a few words on the size of |G(S)|, i.e., the number of candidates generated per run of compression with GST. Figure 8 shows the distribution of the numbers of candidates generated per compression, which looks like the familiar scale-free power curve. Over 99% of the time, the number of candidates or |G(S)| is found to be less than 500. 7 Conclusions This paper introduced a novel approach to sentence compression in Japanese, which combines a syntactically motivated generation model and CRFs, in orNumber of Candidates Frequency 0 500 1500 2500 0 400 800 1200 Figure 8: The distribution of |G(S)| der to address fluency and relevance of compressions we generate. What distinguishes this work from prior research is its overt withdrawal from a search for global optima to a search for local optima that comply with grammar. We believe that our idea was empirically borne out, as the experiments found that our approach outperforms, by a large margin, a previously known method called DPM, which employs a global search strategy. The results on semantic overlap indicates that the narrowing down of compressions we search obviously does not harm their relevance to references. An interesting future exercise would be to explore whether it is feasible to rewrite Eq. 5 as a linear integer program. If it is, the whole scheme of ours would fall under what is known as ‘Linear Programming CRFs’ (Tasker, 2004; Roth and Yih, 2005). What remains to be seen, however, is whether GST is transferrable to languages other than Japanese, notably, English. The answer is likely to be yes, but details have yet to be worked out. References James Clarke and Mirella Lapata. 2006. Constraintbased sentence compression: An integer programming 306 approach. In Proceedings of the COLING/ACL 2006, pages 144–151. Trevor Cohn and Mirella Lapata. 2007. Large margin synchronous generation and its application to sentence compression. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 73–82, Prague, June. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generataion. In Proceedings of the HLT-NAACL Text Summarization Workshop and Document Understanding Conderence (DUC03), pages 1–8, Edmonton, Canada. Satoshi Fukutomi, Kazuyuki Takagi, and Kazuhiko Ozeki. 2007. 
Japanese Sentence Compression using Probabilistic Approach. In Proceedings of the 13th Annual Meeting of the Association for Natural Language Processing Japan. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Proceedings of the HLT-NAACL 2007, pages 180–187. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the 6th Conference on Applied Natural Language Processing, pages 310–315. Tomonori Kikuchi, Sadaoki Furui, and Chiori Hori. 2003. Two-stage automatic speech summarization by sentence extraction and compaction. In Proceedings of ICASSP 2003. Kevin Knight and Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence, 139:91–107. John Lafferty, Andrew MacCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML-2001). Ryan McDonald. 2006. Discriminative sentence compression with soft syntactic evidence. In Proceedings of the 11th Conference of EACL, pages 297–304. Yuhei Morooka, Makoto Esaki, Kazuyuki Takagi, and Kazuhiko Ozeki. 2004. Automatic summarization of news articles using sentence compaction and extraction. In Proceedings of the 10th Annual Meeting of Natural Language Processing, pages 436–439, March. (In Japanese). Tadashi Nomoto. 2007. Discriminative sentence compression with conditional random fields. Information Processing and Management, 43:1571 – 1587. Rei Oguro, Kazuhiko Ozeki, Yujie Zhang, and Kazuyuki Takagi. 2000. An efficient algorithm for Japanese sentence compaction based on phrase importance and inter-phrase dependency. In Proceedings of TSD 2000 (Lecture Notes in Artificial Intelligence 1902,Springer-Verlag), pages 65–81, Brno, Czech Republic. Stefan Riezler, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical functional grammar. In Proceedings of HLT-NAACL 2003, pages 118–125, Edmonton. Dan Roth and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In Proceedings of the 22nd International Conference on Machine Learning (ICML 05). T. F. Smith and M. S. Waterman. 1981. Identification of common molecular subsequence. Journal of Molecular Biology, 147:195–197. Charles Sutton and Andrew McCallum. 2006. An introduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. To appear. Charles Sutton, Khashayar Rohanimanesh, and Andrew McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic labeling and segmenting sequence data. In Proceedings of the 21st International Conference on Machine Learning, Banff, Canada. Charles Sutton. 2006. GRMM: A graphical models toolkit. http://mallet.cs.umass.edu. Ben Tasker. 2004. Learning Structured Prediction Models: A Large Margin Approach. Ph.D. thesis, Stanford University. Roy W. Tromble and Jason Eisner. 2006. A fast finitestate relaxation method for enforcing global constraint on sequence decoding. In Proceeings of the NAACL, pages 423–430. Jenie Turner and Eugen Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proceedings of the 43rd Annual Meeting of the ACL, pages 290–297, Ann Arbor, June. Vincent Vandeghinste and Yi Pan. 
2004. Sentence compression for automatic subtitling: A hybrid approach. In Proceedings of the ACL workshop on Text Summarization, Barcelona. Kiwamu Yamagata, Satoshi Fukutomi, Kazuyuki Takagi, and Kazuhiko Ozeki. 2006. Sentence compression using statistical information about dependency path length. In Proceedings of TSD 2006 (Lecture Notes in Computer Science, Vol. 4188/2006), pages 127–134, Brno, Czech Republic. 307
2008
35
Proceedings of ACL-08: HLT, pages 308–316, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Joint Model of Text and Aspect Ratings for Sentiment Summarization Ivan Titov Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 [email protected] Ryan McDonald Google Inc. 76 Ninth Avenue New York, NY 10011 [email protected] Abstract Online reviews are often accompanied with numerical ratings provided by users for a set of service or product aspects. We propose a statistical model which is able to discover corresponding topics in text and extract textual evidence from reviews supporting each of these aspect ratings – a fundamental problem in aspect-based sentiment summarization (Hu and Liu, 2004a). Our model achieves high accuracy, without any explicitly labeled data except the user provided opinion ratings. The proposed approach is general and can be used for segmentation in other applications where sequential data is accompanied with correlated signals. 1 Introduction User generated content represents a unique source of information in which user interface tools have facilitated the creation of an abundance of labeled content, e.g., topics in blogs, numerical product and service ratings in user reviews, and helpfulness rankings in online discussion forums. Many previous studies on user generated content have attempted to predict these labels automatically from the associated text. However, these labels are often present in the data already, which opens another interesting line of research: designing models leveraging these labelings to improve a wide variety of applications. In this study, we look at the problem of aspectbased sentiment summarization (Hu and Liu, 2004a; Popescu and Etzioni, 2005; Gamon et al., 2005; Nikos’ Fine Dining Food 4/5 “Best fish in the city”, “Excellent appetizers” Decor 3/5 “Cozy with an old world feel”, “Too dark” Service 1/5 “Our waitress was rude”, “Awful service” Value 5/5 “Good Greek food for the $”, “Great price!” Figure 1: An example aspect-based summary. Carenini et al., 2006; Zhuang et al., 2006).1 An aspect-based summarization system takes as input a set of user reviews for a specific product or service and produces a set of relevant aspects, the aggregated sentiment for each aspect, and supporting textual evidence. For example, figure 1 summarizes a restaurant using aspects food, decor, service, and value plus a numeric rating out of 5. Standard aspect-based summarization consists of two problems. The first is aspect identification and mention extraction. Here the goal is to find the set of relevant aspects for a rated entity and extract all textual mentions that are associated with each. Aspects can be fine-grained, e.g., fish, lamb, calamari, or coarse-grained, e.g., food, decor, service. Similarly, extracted text can range from a single word to phrases and sentences. The second problem is sentiment classification. Once all the relevant aspects and associated pieces of texts are extracted, the system should aggregate sentiment over each aspect to provide the user with an average numeric or symbolic rating. Sentiment classification is a well studied problem (Wiebe, 2000; Pang et al., 2002; Turney, 2002) and in many domains users explicitly 1We use the term aspect to denote properties of an object that can be rated by a user as in Snyder and Barzilay (2007). Other studies use the term feature (Hu and Liu, 2004b). 308 Food: 5; Decor: 5; Service: 5; Value: 5 The chicken was great. 
On top of that our service was excellent and the price was right. Can’t wait to go back! Food: 2; Decor: 1; Service: 3; Value: 2 We went there for our anniversary. My soup was cold and expensive plus it felt like they hadn’t painted since 1980. Food: 3; Decor: 5; Service: 4; Value: 5 The food is only mediocre, but well worth the cost. Wait staff was friendly. Lot’s of fun decorations. → Food “The chicken was great”, “My soup was cold”, “The food is only mediocre” Decor “it felt like they hadn’t painted since 1980”, “Lots of fun decorations” Service “service was excellent”, “Wait staff was friendly” Value “the price was right”, “My soup was cold and expensive”, “well worth the cost” Figure 2: Extraction problem: Produce aspect mentions from a corpus of aspect rated reviews. provide ratings for each aspect making automated means unnecessary.2 Aspect identification has also been thoroughly studied (Hu and Liu, 2004b; Gamon et al., 2005; Titov and McDonald, 2008), but again, ontologies and users often provide this information negating the need for automation. Though it may be reasonable to expect a user to provide a rating for each aspect, it is unlikely that a user will annotate every sentence and phrase in a review as being relevant to some aspect. Thus, it can be argued that the most pressing challenge in an aspect-based summarization system is to extract all relevant mentions for each aspect, as illustrated in figure 2. When labeled data exists, this problem can be solved effectively using a wide variety of methods available for text classification and information extraction (Manning and Schutze, 1999). However, labeled data is often hard to come by, especially when one considers all possible domains of products and services. Instead, we propose an unsupervised model that leverages aspect ratings that frequently accompany an online review. In order to construct such model, we make two assumptions. First, ratable aspects normally represent coherent topics which can be potentially discovered from co-occurrence information in the text. Second, we hypothesize that the most predictive features of an aspect rating are features derived from the text segments discussing the corresponding aspect. Motivated by these observations, we construct a joint statistical model of text and sentiment ratings. The model is at heart a topic model in that it assigns words to a set of induced topics, each of which may represent one particular aspect. The model is extended through a set of maximum entropy classifiers, one per each rated aspect, that are used to pre2E.g., http://zagat.com and http://tripadvisor.com. dict the sentiment rating towards each of the aspects. However, only the words assigned to an aspects corresponding topic are used in predicting the rating for that aspect. As a result, the model enforces that words assigned to an aspects’ topic are predictive of the associated rating. Our approach is more general than the particular statistical model we consider in this paper. For example, other topic models can be used as a part of our model and the proposed class of models can be employed in other tasks beyond sentiment summarization, e.g., segmentation of blogs on the basis of topic labels provided by users, or topic discovery on the basis of tags given by users on social bookmarking sites.3 The rest of the paper is structured as follows. Section 2 begins with a discussion of the joint textsentiment model approach. 
In Section 3 we provide both a qualitative and quantitative evaluation of the proposed method. We conclude in Section 4 with an examination of related work. 2 The Model In this section we describe a new statistical model called the Multi-Aspect Sentiment model (MAS), which consists of two parts. The first part is based on Multi-Grain Latent Dirichlet Allocation (Titov and McDonald, 2008), which has been previously shown to build topics that are representative of ratable aspects. The second part is a set of sentiment predictors per aspect that are designed to force specific topics in the model to be directly correlated with a particular aspect. 2.1 Multi-Grain LDA The Multi-Grain Latent Dirichlet Allocation model (MG-LDA) is an extension of Latent Dirichlet Allocation (LDA) (Blei et al., 2003). As was demon3See e.g. del.ico.us (http://del.ico.us). 309 strated in Titov and McDonald (2008), the topics produced by LDA do not correspond to ratable aspects of entities. In particular, these models tend to build topics that globally classify terms into product instances (e.g., Creative Labs Mp3 players versus iPods, or New York versus Paris Hotels). To combat this, MG-LDA models two distinct types of topics: global topics and local topics. As in LDA, the distribution of global topics is fixed for a document (a user review). However, the distribution of local topics is allowed to vary across the document. A word in the document is sampled either from the mixture of global topics or from the mixture of local topics specific to the local context of the word. It was demonstrated in Titov and McDonald (2008) that ratable aspects will be captured by local topics and global topics will capture properties of reviewed items. For example, consider an extract from a review of a London hotel: “. . . public transport in London is straightforward, the tube station is about an 8 minute walk . . . or you can get a bus for £1.50”. It can be viewed as a mixture of topic London shared by the entire review (words: “London”, “tube”, “£”), and the ratable aspect location, specific for the local context of the sentence (words: “transport”, “walk”, “bus”). Local topics are reused between very different types of items, whereas global topics correspond only to particular types of items. In MG-LDA a document is represented as a set of sliding windows, each covering T adjacent sentences within a document.4 Each window v in document d has an associated distribution over local topics θloc d,v and a distribution defining preference for local topics versus global topics πd,v. A word can be sampled using any window covering its sentence s, where the window is chosen according to a categorical distribution ψd,s. Importantly, the fact that windows overlap permits the model to exploit a larger co-occurrence domain. These simple techniques are capable of modeling local topics without more expensive modeling of topic transitions used in (Griffiths et al., 2004; Wang and McCallum, 2005; Wallach, 2006; Gruber et al., 2007). Introduction of a symmetrical Dirichlet prior Dir(γ) for the distribution ψd,s can control the smoothness of transitions. 4Our particular implementation is over sentences, but sliding windows in theory can be over any sized fragment of text. (a) (b) Figure 3: (a) MG-LDA model. (b) An extension of MGLDA to obtain MAS. 
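The sliding-window representation just described can be made concrete with a short sketch: each window covers T adjacent sentences, so every sentence can be generated through any of the T windows covering it. The indexing convention below (window v covers sentences v through v+T-1, with partial windows at the document start) is an assumption for illustration, not a detail taken from the paper.

```python
# A minimal sketch of the MG-LDA sliding-window representation.

def covering_windows(num_sentences, T=3):
    """Map each sentence index to the window indices that cover it,
    where window v covers sentences v, v+1, ..., v+T-1."""
    windows = range(1 - T, num_sentences)  # partial windows at the start
    return {s: [v for v in windows if v <= s < v + T]
            for s in range(num_sentences)}

if __name__ == "__main__":
    for s, vs in covering_windows(5).items():
        print(f"sentence {s}: windows {vs}")
```

Because adjacent windows overlap, a word's topic assignment can draw on co-occurrences beyond its own sentence, which is the point made above.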
The formal definition of the model with Kgl global and Kloc local topics is as follows: First, draw Kgl word distributions for global topics ϕgl z from a Dirichlet prior Dir(βgl) and Kloc word distributions for local topics ϕloc z′ - from Dir(βloc). Then, for each document d: • Choose a distribution of global topics θgl d ∼Dir(αgl). • For each sentence s choose a distribution over sliding windows ψd,s(v) ∼Dir(γ). • For each sliding window v – choose θloc d,v ∼Dir(αloc), – choose πd,v ∼Beta(αmix). • For each word i in sentence s of document d – choose window vd,i ∼ψd,s, – choose rd,i ∼πd,vd,i, – if rd,i = gl choose global topic zd,i ∼θgl d , – if rd,i=loc choose local topic zd,i ∼θloc d,vd,i, – choose word wd,i from the word distribution ϕ rd,i zd,i. Beta(αmix) is a prior Beta distribution for choosing between local and global topics. In Figure 3a the corresponding graphical model is presented. 2.2 Multi-Aspect Sentiment Model MG-LDA constructs a set of topics that ideally correspond to ratable aspects of an entity (often in a many-to-one relationship of topics to aspects). A major shortcoming of this model – and all other unsupervised models – is that this correspondence is not explicit, i.e., how does one say that topic X is really about aspect Y? However, we can observe that numeric aspect ratings are often included in our data by users who left the reviews. We then make the assumption that the text of the review discussing an aspect is predictive of its rating. Thus, if we model the prediction of aspect ratings jointly with the construction of explicitly associated topics, then such a 310 model should benefit from both higher quality topics and a direct assignment from topics to aspects. This is the basic idea behind the Multi-Aspect Sentiment model (MAS). In its simplest form, MAS introduces a classifier for each aspect, which is used to predict its rating. Each classifier is explicitly associated to a single topic in the model and only words assigned to that topic can participate in the prediction of the sentiment rating for the aspect. However, it has been observed that ratings for different aspects can be correlated (Snyder and Barzilay, 2007), e.g., very negative opinion about room cleanliness is likely to result not only in a low rating for the aspect rooms, but also is very predictive of low ratings for the aspects service and dining. This complicates discovery of the corresponding topics, as in many reviews the most predictive features for an aspect rating might correspond to another aspect. Another problem with this overly simplistic model is the presence of opinions about an item in general without referring to any particular aspect. For example, “this product is the worst I have ever purchased” is a good predictor of low ratings for every aspect. In such cases, non-aspect ‘background’ words will appear to be the most predictive. Therefore, the use of the aspect sentiment classifiers based only on the words assigned to the corresponding topics is problematic. Such a model will not be able to discover coherent topics associated with each aspect, because in many cases the most predictive fragments for each aspect rating will not be the ones where this aspect is discussed. Our proposal is to estimate the distribution of possible values of an aspect rating on the basis of the overall sentiment rating and to use the words assigned to the corresponding topic to compute corrections for this aspect. 
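Backing up to the MG-LDA generative definition above, the process for a single document can be sketched as follows. Vocabulary, hyperparameter values, the window convention, and the choice of which Beta outcome counts as "local" are illustrative assumptions; in the full model the topic-word distributions are shared across the corpus rather than drawn per document.

```python
# A hedged generative sketch of MG-LDA for one document.
import numpy as np

def generate_document(sentence_lens, vocab, K_gl=2, K_loc=3, T=3,
                      a_gl=.1, a_loc=.1, a_mix=(.1, .1), b=.1, gamma=.1,
                      rng=np.random.default_rng(0)):
    V = len(vocab)
    phi_gl = rng.dirichlet([b] * V, K_gl)        # global topic-word dists
    phi_loc = rng.dirichlet([b] * V, K_loc)      # local topic-word dists
    theta_gl = rng.dirichlet([a_gl] * K_gl)      # per-document global mixture
    windows = list(range(1 - T, len(sentence_lens)))
    theta_loc = {v: rng.dirichlet([a_loc] * K_loc) for v in windows}
    pi = {v: rng.beta(*a_mix) for v in windows}  # treated here as P(r = loc)
    doc = []
    for s, n_words in enumerate(sentence_lens):
        covering = [v for v in windows if v <= s < v + T]
        psi = rng.dirichlet([gamma] * len(covering))   # window choice for s
        for _ in range(n_words):
            v = covering[rng.choice(len(covering), p=psi)]
            if rng.random() < pi[v]:                   # r = loc
                z = rng.choice(K_loc, p=theta_loc[v])
                w = rng.choice(V, p=phi_loc[z])
            else:                                      # r = gl
                z = rng.choice(K_gl, p=theta_gl)
                w = rng.choice(V, p=phi_gl[z])
            doc.append(vocab[w])
    return doc

if __name__ == "__main__":
    print(generate_document([4, 5, 3],
                            ["room", "clean", "london", "walk", "staff"]))
```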
An aspect rating is typically correlated to the overall sentiment rating5 and the fragments discussing this particular aspect will help to correct the overall sentiment in the appropriate direction. For example, if a review of a hotel is generally positive, but it includes a sentence “the neighborhood is somewhat seedy” then this sentence is predictive of rating for an aspect location being below other ratings. This rectifies the aforementioned 5In the dataset used in our experiments all three aspect ratings are equivalent for 5,250 reviews out of 10,000. problems. First, aspect sentiment ratings can often be regarded as conditionally independent given the overall rating, therefore the model will not be forced to include in an aspect topic any words from other aspect topics. Secondly, the fragments discussing overall opinion will influence the aspect rating only through the overall sentiment rating. The overall sentiment is almost always present in the real data along with the aspect ratings, but it can be coarsely discretized and we preferred to use a latent overall sentiment. The MAS model is presented in Figure 3b. Note that for simplicity we decided to omit in the figure the components of the MG-LDA model other than variables r, z and w, though they are present in the statistical model. MAS also allows for extra unassociated local topics in order to capture aspects not explicitly rated by the user. As in MG-LDA, MAS has global topics which are expected to capture topics corresponding to particular types of items, such London hotels or seaside resorts for the hotel domain. In figure 3b we shaded the aspect ratings ya, assuming that every aspect rating is present in the data (though in practice they might be available only for some reviews). In this model the distribution of the overall sentiment rating yov is based on all the n-gram features of a review text. Then the distribution of ya, for every rated aspect a, can be computed from the distribution of yov and from any n-gram feature where at least one word in the n-gram is assigned to the associated aspect topic (r = loc, z = a). Instead of having a latent variable yov,6 we use a similar model which does not have an explicit notion of yov. The distribution of a sentiment rating ya for each rated aspect a is computed from two scores. The first score is computed on the basis of all the ngrams, but using a common set of weights independent of the aspect a. Another score is computed only using n-grams associated with the related topic, but an aspect-specific set of weights is used in this computation. More formally, we consider the log-linear distribution: P(ya = y|w, r, z)∝exp(ba y+ X f∈w Jf,y+pa f,r,zJa f,y), (1) where w, r, z are vectors of all the words in a docu6Preliminary experiments suggested that this is also a feasible approach, but somewhat more computationally expensive. 311 ment, assignments of context (global or local) and topics for all the words in the document, respectively. ba y is the bias term which regulates the prior distribution P(ya = y), f iterates through all the n-grams, Jy,f and Ja y,f are common weights and aspect-specific weights for n-gram feature f. pa f,r,z is equal to a fraction of words in n-gram feature f assigned to the aspect topic (r = loc, z = a). 2.3 Inference in MAS Exact inference in the MAS model is intractable. 
Following Titov and McDonald (2008) we use a collapsed Gibbs sampling algorithm that was derived for the MG-LDA model based on the Gibbs sampling method proposed for LDA in (Griffiths and Steyvers, 2004). Gibbs sampling is an example of a Markov Chain Monte Carlo algorithm (Geman and Geman, 1984). It is used to produce a sample from a joint distribution when only conditional distributions of each variable can be efficiently computed. In Gibbs sampling, variables are sequentially sampled from their distributions conditioned on all other variables in the model. Such a chain of model states converges to a sample from the joint distribution. A naive application of this technique to LDA would imply that both assignments of topics to words z and distributions θ and ϕ should be sampled. However, (Griffiths and Steyvers, 2004) demonstrated that an efficient collapsed Gibbs sampler can be constructed, where only assignments z need to be sampled, whereas the dependency on distributions θ and ϕ can be integrated out analytically. In the case of MAS we also use maximum aposteriori estimates of the sentiment predictor parameters ba y, Jy,f and Ja y,f. The MAP estimates for parameters ba y, Jy,f and Ja y,f are obtained by using stochastic gradient ascent. The direction of the gradient is computed simultaneously with running a chain by generating several assignments at each step and averaging over the corresponding gradient estimates. For details on computing gradients for loglinear graphical models with Gibbs sampling we refer the reader to (Neal, 1992). Space constraints do not allow us to present either the derivation or a detailed description of the sampling algorithm. However, note that the conditional distribution used in sampling decomposes into two parts: P(vd,i = v, rd,i = r, zd,i = z|v’, r’, z’, w, y) ∝ ηd,i v,r,z × ρd,i r,z, (2) where v’, r’ and z’ are vectors of assignments of sliding windows, context (global or local) and topics for all the words in the collection except for the considered word at position i in document d; y is the vector of sentiment ratings. The first factor ηd,i v,r,z is responsible for modeling co-occurrences on the window and document level and coherence of the topics. This factor is proportional to the conditional distribution used in the Gibbs sampler of the MG-LDA model (Titov and McDonald, 2008). The last factor quantifies the influence of the assignment of the word (d, i) on the probability of the sentiment ratings. It appears only if ratings are known (observable) and equals: ρd,i r,z = Y a P(yd a|w, r’, rd,i = r, z’, zd,i = z) P(yda|w, r’, z’, rd,i = gl) , where the probability distribution is computed as defined in expression (1), yd a is the rating for the ath aspect of review d. 3 Experiments In this section we present qualitative and quantitative experiments. For the qualitative analysis we show that topics inferred by the MAS model correspond directly to the associated aspects. For the quantitative analysis we show that the MAS model induces a distribution over the rated aspects which can be used to accurately predict whether a text fragment is relevant to an aspect or not. 3.1 Qualitative Evaluation To perform qualitative experiments we used a set of reviews of hotels taken from TripAdvisor.com7 that contained 10,000 reviews (109,024 sentences, 2,145,313 words in total). Every review was rated with at least three aspects: service, location and rooms. Each rating is an integer from 1 to 5. The dataset was tokenized and sentence split automatically. 
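The sentiment component used both in the rating distribution of Eq. 1 and in the sampling factor ρ above can be sketched as follows. Feature extraction is reduced to plain n-gram lookups and all weights are toy values, so this is an illustration of the form of the model rather than the authors' estimator.

```python
# A hedged sketch of Eq. 1 and of the ratio-style sentiment factor rho
# used in the Gibbs conditional (Section 2.3).
import math

RATINGS = (1, 2, 3, 4, 5)

def aspect_rating_dist(ngrams, frac_in_topic, bias, J, J_a):
    """P(y_a = y | w, r, z) for one aspect.

    ngrams:        n-gram features present in the review.
    frac_in_topic: dict f -> fraction of f's words assigned to this
                   aspect's topic (the p^a_{f,r,z} term).
    bias, J, J_a:  b^a_y, common weights J_{f,y}, aspect weights J^a_{f,y}.
    """
    scores = {}
    for y in RATINGS:
        s = bias.get(y, 0.0)
        for f in ngrams:
            s += J.get((f, y), 0.0)
            s += frac_in_topic.get(f, 0.0) * J_a.get((f, y), 0.0)
        scores[y] = math.exp(s)
    total = sum(scores.values())
    return {y: v / total for y, v in scores.items()}

def rho(observed_ratings, dist_candidate, dist_global):
    """Product over rated aspects of how much more (or less) likely the
    observed ratings become when word (d, i) is assigned to the candidate
    topic rather than treated as a global-topic word."""
    val = 1.0
    for a, y in observed_ratings.items():
        val *= dist_candidate[a][y] / dist_global[a][y]
    return val

if __name__ == "__main__":
    d = aspect_rating_dist(ngrams=["rude", "waitress"],
                           frac_in_topic={"rude": 1.0, "waitress": 1.0},
                           bias={},
                           J={("rude", 1): 0.5},
                           J_a={("rude", 1): 1.5, ("waitress", 2): 0.5})
    print(d)
```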
7(c) 2005-06, TripAdvisor, LLC All rights reserved 312 rated aspect top words service staff friendly helpful service desk concierge excellent extremely hotel great reception english pleasant help location hotel walk location station metro walking away right minutes close bus city located just easy restaurants local rooms room bathroom shower bed tv small water clean comfortable towels bath nice large pillows space beds tub topics breakfast free coffee internet morning access buffet day wine nice lobby complimentary included good fruit $ night parking rate price paid day euros got cost pay hotel worth euro expensive car extra deal booked room noise night street air did door floor rooms open noisy window windows hear outside problem quiet sleep global moscow st russian petersburg nevsky russia palace hermitage kremlin prospect river prospekt kempinski topics paris tower french eiffel dame notre rue st louvre rer champs opera elysee george parisian du pantheon cafes Table 1: Top words from MAS for hotel reviews. Krooms top words 2 rooms clean hotel room small nice comfortable modern good quite large lobby old decor spacious decorated bathroom size room noise night street did air rooms door open noisy window floor hear windows problem outside quiet sleep bit light 3 room clean bed comfortable rooms bathroom small beds nice large size tv spacious good double big space huge king room floor view rooms suite got views given quiet building small balcony upgraded nice high booked asked overlooking room bathroom shower air water did like hot small towels door old window toilet conditioning open bath dirty wall tub 4 room clean rooms comfortable bed small beds nice bathroom size large modern spacious good double big quiet decorated check arrived time day airport early room luggage took late morning got long flight ready minutes did taxi bags went room noise night street did air rooms noisy open door hear windows window outside quiet sleep problem floor conditioning bathroom room shower tv bed small water towels bath tub large nice toilet clean space toiletries flat wall sink screen Table 2: Top words for aspect rooms with different number of topics Krooms. We ran the sampling chain for 700 iterations to produce a sample. Distributions of words in each topic were estimated as the proportion of words assigned to each topic, taking into account topic model priors βgl and βloc. The sliding windows were chosen to cover 3 sentences for all the experiments. All the priors were chosen to be equal to 0.1. We used 15 local topics and 30 global topics. In the model, the first three local topics were associated to the rating classifiers for each aspects. As a result, we would expect these topics to correspond to the service, location, and rooms aspects respectively. Unigram and bigram features were used in the sentiment predictors in the MAS model. Before applying the topic models we removed punctuation and also removed stop words using the standard list of stop words,8 however, all the words and punctuation were used in the sentiment predictors. It does not take many chain iterations to discover initial topics. This happens considerably faster than the appropriate weights of the sentiment predictor being learned. This poses a problem, because, in the beginning, the sentiment predictors are not accurate enough to force the model to discover appropriate topics associated with each of the rated aspects. 
Moreover, as soon as topics are formed, the aspect sentiment predictors cannot affect them anymore, because they do not have access to the true words associated with their aspects. To combat this problem we first train the sentiment classifiers by assuming that p^a_{f,r,z} is equal for all the local topics, which effectively ignores the topic model. Then we use the estimated parameters within the topic model.9 Secondly, we modify the sampling algorithm. The conditional probability used in sampling, expression (2), is proportional to the product of two factors. The first factor, η^{d,i}_{v,r,z}, expresses a preference for topics likely from the co-occurrence information, whereas the second one, ρ^{d,i}_{r,z}, favors the choice of topics which are predictive of the observable sentiment ratings. We used (ρ^{d,i}_{r,z})^{1 + 0.95^t q} in the sampling distribution instead of ρ^{d,i}_{r,z}, where t is the iteration number. q was chosen to be 4, though the quality of the topics seemed to be indistinguishable for any q between 3 and 10. This can be thought of as having 1 + 0.95^t q ratings instead of a single vector assigned to each review, i.e., focusing the model on prediction of the ratings rather than on finding the topic labels which are best at explaining co-occurrences of words. These heuristics influence sampling only during the first iterations of the chain.
8 http://www.dcs.gla.ac.uk/idom/ir resources/linguistic utils/ stop words
9 Initial experiments suggested that instead of doing this 'pre-training' we could start with very large priors αloc and αmix, and then reduce them through the course of training. However, this is significantly more computationally expensive.
Figure 4: Precision-recall curves of the topic model and the max-ent classifier. (a) Aspect service. (b) Aspect location. (c) Aspect rooms.
Top words for some of the discovered local topics, including the first 3 topics associated with the rated aspects, and also top words for some of the global topics, are presented in Table 1. We can see that the model discovered as its first three topics the correct associated aspects: service, location, and rooms. Other local topics, as for the MG-LDA model, correspond to other aspects discussed in reviews (breakfast, prices, noise), and, as was previously shown in Titov and McDonald (2008), global topics correspond to the types of reviewed items (hotels in Russia, Paris hotels) or to background words. Notice, though, that the 3rd local topic induced for the rating rooms is slightly narrow. This can be explained by the fact that the aspect rooms is a central aspect of hotel reviews. A very significant fraction of text in every review can be thought of as a part of the aspect rooms. These portions of reviews discuss different coherent sub-aspects related to the aspect rooms, e.g., the previously discovered topic noise. Therefore, it is natural to associate several topics with such central aspects. To test this we varied the number of topics associated with the sentiment predictor for the aspect rooms. Top words for the resulting topics are presented in Table 2. It can be observed that the topic model discovered appropriate topics while the number of topics was below 4.
With 4 topics a semantically unrelated topic (check-in/arrival) is induced. Manual selection of the number of topics is undesirable, but this problem can be potentially tackled with Dirichlet Process priors or a topic split criterion based on the accuracy of the sentiment predictor in the MAS model. We found that both service and location did not benefit by the assignment of additional topics to their sentiment rating models. The experimental results suggest that the MAS model is reliable in the discovery of topics corresponding to the rated aspects. In the next section we will show that the induced topics can be used to accurately extract fragments for each aspect. 3.2 Sentence Labeling A primary advantage of MAS over unsupervised models, such as MG-LDA or clustering, is that topics are linked to a rated aspect, i.e., we know exactly which topics model which aspects. As a result, these topics can be directly used to extract textual mentions that are relevant for an aspect. To test this, we hand labeled 779 random sentences from the dataset considered in the previous set of experiments. The sentences were labeled with one or more aspects. Among them, 164, 176 and 263 sentences were labeled as related to aspects service, location and rooms, respectively. The remaining sentences were not relevant to any of the rated aspects. We compared two models. The first model uses the first three topics of MAS to extract relevant mentions based on the probability of that topic/aspect being present in the sentence. To obtain these probabilities we used estimators based on the proportion of words in the sentence assigned to an aspects’ topic and normalized within local topics. To improve the reliability of the estimator we produced 100 samples for each document while keeping assignments of the topics to all other words in the collection fixed. The probability estimates were then obtained by averaging over these samples. We did not perform any model selection on the basis of the hand-labeled data, and tested only a single model of each type. 314 For the second model we trained a maximum entropy classifier, one per each aspect, using 10-fold cross validation and unigram/bigram features. Note that this is a supervised system and as such represents an upper-bound in performance one might expect when comparing an unsupervised model such as MAS. We chose this comparison to demonstrate that our model can find relevant text mentions with high accuracy relative to a supervised model. It is difficult to compare our model to other unsupervised systems such as MG-LDA or LDA. Again, this is because those systems have no mechanism for directly correlating topics or clusters to corresponding aspects, highlighting the benefit of MAS. The resulting precision-recall curves for the aspects service, location and rooms are presented in Figure 4. In Figure 4c, we varied the number of topics associated with the aspect rooms.10 The average precision we obtained (the standard measure proportional to the area under the curve) is 75.8%, 85.5% for aspects service and location, respectively. For the aspect rooms these scores are equal to 75.0%, 74.5%, 87.6%, 79.8% with 1–4 topics per aspect, respectively. The logistic regression models achieve 80.8%, 94.0% and 88.3% for the aspects service, location and rooms. We can observe that the topic model, which does not use any explicitly aspect-labeled text, achieves accuracies lower than, but comparable to a supervised model. 
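The two quantities used in this comparison, the per-sentence aspect score and average precision, can be written down compactly. The sketch below is illustrative only: it assumes each Gibbs sample for a sentence is available as a list of per-word (context, topic) assignments, which is one convenient but not the only possible representation, and it uses the standard non-interpolated form of average precision.

def aspect_sentence_score(samples, aspect_topic):
    """Estimate the probability of an aspect topic in a sentence.

    samples: list of Gibbs samples; each sample is a list of (context, topic)
    pairs, one per word, where context is 'loc' or 'gl'.  The score for one
    sample is the fraction of local-topic words assigned to the aspect topic
    (i.e. normalised within local topics); scores are averaged over samples.
    """
    per_sample = []
    for sample in samples:
        local = [z for (c, z) in sample if c == 'loc']
        hits = sum(1 for z in local if z == aspect_topic)
        per_sample.append(hits / len(local) if local else 0.0)
    return sum(per_sample) / len(per_sample)

def average_precision(scores, labels):
    """Non-interpolated average precision of a ranking against binary labels."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, precisions = 0, []
    for rank, (_, relevant) in enumerate(ranked, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0

Ranking the hand-labeled sentences by aspect_sentence_score and scoring the ranking with average_precision yields the kind of precision-recall trade-off plotted in Figure 4.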
4 Related Work There is a growing body of work on summarizing sentiment by extracting and aggregating sentiment over ratable aspects and providing corresponding textual evidence. Text excerpts are usually extracted through string matching (Hu and Liu, 2004a; Popescu and Etzioni, 2005), sentence clustering (Gamon et al., 2005), or through topic models (Mei et al., 2007; Titov and McDonald, 2008). String extraction methods are limited to fine-grained aspects whereas clustering and topic model approaches must resort to ad-hoc means of labeling clusters or topics. However, this is the first work we are aware of that uses a pre-defined set of aspects plus an associated signal to learn a mapping from text to an aspect for 10To improve readability we smoothed the curve for the aspect rooms. the purpose of extraction. A closely related model to ours is that of Mei et al. (2007) which performs joint topic and sentiment modeling of collections. Our model differs from theirs in many respects: Mei et al. only model sentiment predictions for the entire document and not on the aspect level; They treat sentiment predictions as unobserved variables, whereas we treat them as observed signals that help to guide the creation of topics; They model co-occurrences solely on the document level, whereas our model is based on MG-LDA and models both local and global contexts. Recently, Blei and McAuliffe (2008) proposed an approach for joint sentiment and topic modeling that can be viewed as a supervised LDA (sLDA) model that tries to infer topics appropriate for use in a given classification or regression problem. MAS and sLDA are similar in that both use sentiment predictions as an observed signal that is predicted by the model. However, Blei et al. do not consider multiaspect ranking or look at co-occurrences beyond the document level, both of which are central to our model. Parallel to this study Branavan et al. (2008) also showed that joint models of text and user annotations benefit extractive summarization. In particular, they used signals from pros-cons lists whereas our models use aspect rating signals. 5 Conclusions In this paper we presented a joint model of text and aspect ratings for extracting text to be displayed in sentiment summaries. The model uses aspect ratings to discover the corresponding topics and can thus extract fragments of text discussing these aspects without the need of annotated data. We demonstrated that the model indeed discovers corresponding coherent topics and achieves accuracy in sentence labeling comparable to a standard supervised model. The primary area of future work is to incorporate the model into an end-to-end sentiment summarization system in order to evaluate it at that level. Acknowledgments This work benefited from discussions with Sasha Blair-Goldensohn and Fernando Pereira. 315 References David M. Blei and Jon D. McAuliffe. 2008. Supervised topic models. In Advances in Neural Information Processing Systems (NIPS). D.M. Blei, A.Y. Ng, and M.I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(5):993–1022. S.R.K. Branavan, H. Chen, J. Eisenstein, and R. Barzilay. 2008. Learning document-level semantic properties from free-text annotations. In Proceedings of the Annual Conference of the Association for Computational Linguistics. G. Carenini, R. Ng, and A. Pauls. 2006. Multi-Document Summarization of Evaluative Text. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics. M. Gamon, A. 
Aue, S. Corston-Oliver, and E. Ringger. 2005. Pulse: Mining customer opinions from free text. In Proc. of the 6th International Symposium on Intelligent Data Analysis, pages 121–132. S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741. T. L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the Natural Academy of Sciences, 101 Suppl 1:5228–5235. T. L. Griffiths, M. Steyvers, D. M. Blei, and J. B. Tenenbaum. 2004. Integrating topics and syntax. In Advances in Neural Information Processing Systems. A. Gruber, Y. Weiss, and M. Rosen-Zvi. 2007. Hidden Topic Markov Models. In Proceedings of the Conference on Artificial Intelligence and Statistics. M. Hu and B. Liu. 2004a. Mining and summarizing customer reviews. In Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. ACM Press New York, NY, USA. M. Hu and B. Liu. 2004b. Mining Opinion Features in Customer Reviews. In Proceedings of Nineteenth National Conference on Artificial Intellgience. C. Manning and M. Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. Q. Mei, X. Ling, M. Wondra, H. Su, and C.X. Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th International Conference on World Wide Web, pages 171–180. Radford Neal. 1992. Connectionist learning of belief networks. Artificial Intelligence, 56:71–113. B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. A.M. Popescu and O. Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). B. Snyder and R. Barzilay. 2007. Multiple Aspect Ranking using the Good Grief Algorithm. In Proceedings of the Joint Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies, pages 300–307. I. Titov and R. McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the 17h International Conference on World Wide Web. P. Turney. 2002. Thumbs up or thumbs down? Sentiment orientation applied to unsupervised classification of reviews. In Proceedings of the Annual Conference of the Association for Computational Linguistics. Hanna M. Wallach. 2006. Topic modeling; beyond bag of words. In International Conference on Machine Learning. Xuerui Wang and Andrew McCallum. 2005. A note on topical n-grams. Technical Report UM-CS-2005-071, University of Massachusetts. J. Wiebe. 2000. Learning subjective adjectives from corpora. In Proceedings of the National Conference on Artificial Intelligence. L. Zhuang, F. Jing, and X.Y. Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management (CIKM), pages 43–50. 316
2008
36
Proceedings of ACL-08: HLT, pages 317–325, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Improving Parsing and PP attachment Performance with Sense Information Eneko Agirre IXA NLP Group University of the Basque Country Donostia, Basque Country [email protected] Timothy Baldwin LT Group, CSSE University of Melbourne Victoria 3010 Australia [email protected] David Martinez LT Group, CSSE University of Melbourne Victoria 3010 Australia [email protected] Abstract To date, parsers have made limited use of semantic information, but there is evidence to suggest that semantic features can enhance parse disambiguation. This paper shows that semantic classes help to obtain significant improvement in both parsing and PP attachment tasks. We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation. For the Bikel parser, we achieved a maximal error reduction rate over the baseline parser of 6.9% and 20.5%, for parsing and PP-attachment respectively, using an unsupervised WSD strategy. This demonstrates that word sense information can indeed enhance the performance of syntactic disambiguation. 1 Introduction Traditionally, parse disambiguation has relied on structural features extracted from syntactic parse trees, and made only limited use of semantic information. There is both empirical evidence and linguistic intuition to indicate that semantic features can enhance parse disambiguation performance, however. For example, a number of different parsers have been shown to benefit from lexicalisation, that is, the conditioning of structural features on the lexical head of the given constituent (Magerman, 1995; Collins, 1996; Charniak, 1997; Charniak, 2000; Collins, 2003). As an example of lexicalisation, we may observe in our training data that knife often occurs as the manner adjunct of open in prepositional phrases headed by with (c.f. open with a knife), which would provide strong evidence for with (a) knife attaching to open and not box in open the box with a knife. It would not, however, provide any insight into the correct attachment of with scissors in open the box with scissors, as the disambiguation model would not be able to predict that knife and scissors are semantically similar and thus likely to have the same attachment preferences. In order to deal with this limitation, we propose to integrate directly the semantic classes of words into the process of training the parser. This is done by substituting the original words with semantic codes that reflect semantic classes. For example, in the above example we could substitute both knife and scissors with the semantic class TOOL, thus relating the training and test instances directly. We explore several models for semantic representation, based around WordNet (Fellbaum, 1998). Our approach to exploring the impact of lexical semantics on parsing performance is to take two state-of-the-art statistical treebank parsers and preprocess the inputs variously. This simple method allows us to incorporate semantic information into the parser without having to reimplement a full statistical parser, and also allows for maximum comparability with existing results in the treebank parsing community. We test the parsers over both a PP attachment and full parsing task. 
In experimenting with different semantic representations, we require some strategy to disambiguate the semantic class of polysemous words in context (e.g. determining for each instance of crane whether it refers to an animal or a lifting device). We explore a number of disambiguation strategies, including the use of hand-annotated (gold-standard) senses, the 317 use of the most frequent sense, and an unsupervised word sense disambiguation (WSD) system. This paper shows that semantic classes help to obtain significant improvements for both PP attachment and parsing. We attain a 20.5% error reduction for PP attachment, and 6.9% for parsing. These results are achieved using most frequent sense information, which surprisingly outperforms both goldstandard senses and automatic WSD. The results are notable in demonstrating that very simple preprocessing of the parser input facilitates significant improvements in parser performance. We provide the first definitive results that word sense information can enhance Penn Treebank parser performance, building on earlier results of Bikel (2000) and Xiong et al. (2005). Given our simple procedure for incorporating lexical semantics into the parsing process, our hope is that this research will open the door to further gains using more sophisticated parsing models and richer semantic options. 2 Background This research is focused on applying lexical semantics in parsing and PP attachment tasks. Below, we outline these tasks. Parsing As our baseline parsers, we use two state-of-theart lexicalised parsing models, namely the Bikel parser (Bikel, 2004) and Charniak parser (Charniak, 2000). While a detailed description of the respective parsing models is beyond the scope of this paper, it is worth noting that both parsers induce a context free grammar as well as a generative parsing model from a training set of parse trees, and use a development set to tune internal parameters. Traditionally, the two parsers have been trained and evaluated over the WSJ portion of the Penn Treebank (PTB: Marcus et al. (1993)). We diverge from this norm in focusing exclusively on a sense-annotated subset of the Brown Corpus portion of the Penn Treebank, in order to investigate the upper bound performance of the models given gold-standard sense information. PP attachment in a parsing context Prepositional phrase attachment (PP attachment) is the problem of determining the correct attachment site for a PP, conventionally in the form of the noun or verb in a V NP PP structure (Ratnaparkhi et al., 1994; Mitchell, 2004). For instance, in I ate a pizza with anchovies, the PP with anchovies could attach either to the verb (c.f. ate with anchovies) or to the noun (c.f. pizza with anchovies), of which the noun is the correct attachment site. With I ate a pizza with friends, on the other hand, the verb is the correct attachment site. PP attachment is a structural ambiguity problem, and as such, a subproblem of parsing. Traditionally the so-called RRR data (Ratnaparkhi et al., 1994) has been used to evaluate PP attachment algorithms. RRR consists of 20,081 training and 3,097 test quadruples of the form (v,n1,p,n2), where the attachment decision is either v or n1. The best published results over RRR are those of Stetina and Nagao (1997), who employ WordNet sense predictions from an unsupervised WSD method within a decision tree classifier. 
Their work is particularly inspiring in that it significantly outperformed the plethora of lexicalised probabilistic models that had been proposed to that point, and has not been beaten in later attempts. In a recent paper, Atterer and Sch¨utze (2007) criticised the RRR dataset because it assumes that an oracle parser provides the two hypothesised structures to choose between. This is needed to derive the fact that there are two possible attachment sites, as well as information about the lexical phrases, which are typically extracted heuristically from gold standard parses. Atterer and Sch¨utze argue that the only meaningful setting for PP attachment is within a parser, and go on to demonstrate that in a parser setting, the Bikel parser is competitive with the bestperforming dedicated PP attachment methods. Any improvement in PP attachment performance over the baseline Bikel parser thus represents an advancement in state-of-the-art performance. That we specifically present results for PP attachment in a parsing context is a combination of us supporting the new research direction for PP attachment established by Atterer and Sch¨utze, and us wishing to reinforce the findings of Stetina and Nagao that word sense information significantly enhances PP attachment performance in this new setting. Lexical semantics in parsing There have been a number of attempts to incorporate word sense information into parsing tasks. The 318 most closely related research is that of Bikel (2000), who merged the Brown portion of the Penn Treebank with SemCor (similarly to our approach in Section 4.1), and used this as the basis for evaluation of a generative bilexical model for joint WSD and parsing. He evaluated his proposed model in a parsing context both with and without WordNet-based sense information, and found that the introduction of sense information either had no impact or degraded parse performance. The only successful applications of word sense information to parsing that we are aware of are Xiong et al. (2005) and Fujita et al. (2007). Xiong et al. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank. The combination of word sense and first-level hypernyms produced a significant improvement over their basic model. Fujita et al. (2007) extended this work in implementing a discriminative parse selection model incorporating word sense information mapped onto upper-level ontologies of differing depths. Based on gold-standard sense information, they achieved large-scale improvements over a basic parse selection model in the context of the Hinoki treebank. Other notable examples of the successful incorporation of lexical semantics into parsing, not through word sense information but indirectly via selectional preferences, are Dowding et al. (1994) and Hektoen (1997). For a broader review of WSD in NLP applications, see Resnik (2006). 3 Integrating Semantics into Parsing Our approach to providing the parsers with sense information is to make available the semantic denotation of each word in the form of a semantic class. This is done simply by substituting the original words with semantic codes. For example, in the earlier example of open with a knife we could substitute both knife and scissors with the class TOOL, and thus directly facilitate semantic generalisation within the parser. 
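Before turning to these three aspects, the substitution step itself can be illustrated with a short sketch. The mapping below is a hand-made stand-in used purely for exposition; in the experiments the class labels come from WordNet and the class chosen for a polysemous word is determined by one of the disambiguation strategies described in Section 4.4.

# Illustrative only: a toy mapping standing in for the WordNet-derived
# classes and the WSD decisions used in the actual experiments.
TOY_CLASSES = {"knife": "TOOL", "scissors": "TOOL"}

def substitute_classes(tokens, word_to_class):
    """Replace each token by its semantic class label, if one is known."""
    return [word_to_class.get(tok.lower(), tok) for tok in tokens]

print(substitute_classes("open the box with a knife".split(), TOY_CLASSES))
# ['open', 'the', 'box', 'with', 'a', 'TOOL']
print(substitute_classes("open the box with scissors".split(), TOY_CLASSES))
# ['open', 'the', 'box', 'with', 'TOOL']

After the substitution, knife and scissors are no longer distinct to the parser, which is exactly the generalisation motivated in the introduction.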
There are three main aspects that we have to consider in this process: (i) the semantic representation, (ii) semantic disambiguation, and (iii) morphology. There are many ways to represent semantic relationships between words. In this research we opt for a class-based representation that will map semantically-related words into a common semantic category. Our choice for this work was the WordNet 2.1 lexical database, in which synonyms are grouped into synsets, which are then linked via an IS-A hierarchy. WordNet contains other types of relations such as meronymy, but we did not use them in this research. With any lexical semantic resource, we have to be careful to choose the appropriate level of granularity for a given task: if we limit ourselves to synsets we will not be able to capture broader generalisations, such as the one between knife and scissors;1 on the other hand by grouping words related at a higher level in the hierarchy we could find that we make overly coarse groupings (e.g. mallet, square and steel-wool pad are also descendants of TOOL in WordNet, none of which would conventionally be used as the manner adjunct of cut). We will test different levels of granularity in this work. The second problem we face is semantic disambiguation. The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between these senses. For instance, if we find the word crane in a context such as demolish a house with the crane, the ability to discern that this corresponds to the DEVICE and not ANIMAL sense of word will allow us to avoid erroneous generalisations. This problem of identifying the correct sense of a word in context is known as word sense disambiguation (WSD: Agirre and Edmonds (2006)). Disambiguating each word relative to its context of use becomes increasingly difficult for fine-grained representations (Palmer et al., 2006). We experiment with different ways of tackling WSD, using both gold-standard data and automatic methods. Finally, when substituting words with semantic tags we have to decide how to treat different word forms of a given lemma. In the case of English, this pertains most notably to verb inflection and noun number, a distinction which we lose if we opt to map all word forms onto semantic classes. For our current purposes we choose to substitute all word 1In WordNet 2.1, knife and scissors are sister synsets, both of which have TOOL as their 4th hypernym. Only by mapping them onto their 1st hypernym or higher would we be able to capture the semantic generalisation alluded to above. 319 forms, but we plan to look at alternative representations in the future. 4 Experimental setting We evaluate the performance of our approach in two settings: (1) full parsing, and (2) PP attachment within a full parsing context. Below, we outline the dataset used in this research and the parser evaluation methodology, explain the methodology used to perform PP attachment, present the different options for semantic representation, and finally detail the disambiguation methods. 4.1 Dataset and parser evaluation One of the main requirements for our dataset is the availability of gold-standard sense and parse tree annotations. The gold-standard sense annotations allow us to perform upper bound evaluation of the relative impact of a given semantic representation on parsing and PP attachment performance, to contrast with the performance in more realistic semantic disambiguation settings. 
The gold-standard parse tree annotations are required in order to carry out evaluation of parser and PP attachment performance. The only publicly-available resource with these two characteristics at the time of this work was the subset of the Brown Corpus that is included in both SemCor (Landes et al., 1998) and the Penn Treebank (PTB).2 This provided the basis of our dataset. After sentence- and word-aligning the SemCor and PTB data (discarding sentences where there was a difference in tokenisation), we were left with a total of 8,669 sentences containing 151,928 words. Note that this dataset is smaller than the one described by Bikel (2000) in a similar exercise, the reason being our simple and conservative approach taken when merging the resources. We relied on this dataset alone for all the experiments in this paper. In order to maximise reproducibility and encourage further experimentation in the direction pioneered in this research, we partitioned the data into 3 sets: 80% training, 10% development and 10% test data. This dataset is available on request to the research community. 2OntoNotes (Hovy et al., 2006) includes large-scale treebank and (selective) sense data, which we plan to use for future experiments when it becomes fully available. We evaluate the parsers via labelled bracketing recall (R), precision (P) and F-score (F1). We use Bikel’s randomized parsing evaluation comparator3 (with p < 0.05 throughout) to test the statistical significance of the results using word sense information, relative to the respective baseline parser using only lexical features. 4.2 PP attachment task Following Atterer and Sch¨utze (2007), we wrote a script that, given a parse tree, identifies instances of PP attachment ambiguity and outputs the (v,n1,p,n2) quadruple involved and the attachment decision. This extraction system uses Collins’ rules (based on TREEP (Chiang and Bikel, 2002)) to locate the heads of phrases. Over the combined gold-standard parsing dataset, our script extracted a total of 2,541 PP attachment quadruples. As with the parsing data, we partitioned the data into 3 sets: 80% training, 10% development and 10% test data. Once again, this dataset and the script used to extract the quadruples are available on request to the research community. In order to evaluate the PP attachment performance of a parser, we run our extraction script over the parser output in the same manner as for the goldstandard data, and compare the extracted quadruples to the gold-standard ones. Note that there is no guarantee of agreement in the quadruple membership between the extraction script and the gold standard, as the parser may have produced a parse which is incompatible with either attachment possibility. A quadruple is deemed correct if: (1) it exists in the gold standard, and (2) the attachment decision is correct. Conversely, it is deemed incorrect if: (1) it exists in the gold standard, and (2) the attachment decision is incorrect. Quadruples not found in the gold standard are discarded. Precision was measured as the number of correct quadruples divided by the total number of correct and incorrect quadruples (i.e. all quadruples which are not discarded), and recall as the number of correct quadruples divided by the total number of gold-standard quadruples in the test set. This evaluation methodology coincides with that of Atterer and Sch¨utze (2007). 
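The scoring procedure just described reduces to a few lines. The sketch below assumes that both the gold standard and the parser output have already been converted into mappings from a (v, n1, p, n2) quadruple to an attachment decision ('V' or 'N') and that each quadruple occurs at most once; the extraction of quadruples from parse trees via the head-finding rules is taken as given.

def score_pp_attachment(system, gold):
    """Score PP attachment quadruples against the gold standard.

    system, gold: dicts mapping a (v, n1, p, n2) quadruple to its
    attachment decision ('V' or 'N').  System quadruples that do not
    appear in the gold standard are discarded.
    """
    correct = incorrect = 0
    for quad, decision in system.items():
        if quad not in gold:
            continue                      # discarded
        if decision == gold[quad]:
            correct += 1
        else:
            incorrect += 1
    attempted = correct + incorrect
    precision = correct / attempted if attempted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall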
Statistical significance was calculated based on 3www.cis.upenn.edu/˜dbikel/software.html 320 a modified version of the Bikel comparator (see above), once again with p < 0.05. 4.3 Semantic representation We experimented with a range of semantic representations, all of which are based on WordNet 2.1. As mentioned above, words in WordNet are organised into sets of synonyms, called synsets. Each synset in turn belongs to a unique semantic file (SF). There are a total of 45 SFs (1 for adverbs, 3 for adjectives, 15 for verbs, and 26 for nouns), based on syntactic and semantic categories. A selection of SFs is presented in Table 1 for illustration purposes. We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation, respectively. As an example of the difference in these two representations, knife in its tool sense is in the EDGE TOOL USED AS A CUTTING INSTRUMENT singleton synset, and also in the ARTIFACT SF along with thousands of other words including cutter. Note that these are the two extremes of semantic granularity in WordNet, and we plan to experiment with intermediate representation levels in future research (c.f. Li and Abe (1998), McCarthy and Carroll (2003), Xiong et al. (2005), Fujita et al. (2007)). As a hybrid representation, we tested the effect of merging words with their corresponding SF (e.g. knife+ARTIFACT ). This is a form of semantic specialisation rather than generalisation, and allows the parser to discriminate between the different senses of each word, but not generalise across words. For each of these three semantic representations, we experimented with substituting each of: (1) all open-class POSs (nouns, verbs, adjectives and adverbs), (2) nouns only, and (3) verbs only. There are thus a total of 9 combinations of representation type and target POS. 4.4 Disambiguation methods For a given semantic representation, we need some form of WSD to determine the semantics of each token occurrence of a target word. We experimented with three options: 1. Gold-standard: Gold-standard annotations from SemCor. This gives us the upper bound performance of the semantic representation. SF ID DEFINITION adj.all all adjective clusters adj.pert relational adjectives (pertainyms) adj.ppl participial adjectives adv.all all adverbs noun.act nouns denoting acts or actions noun.animal nouns denoting animals noun.artifact nouns denoting man-made objects ... verb.consumption verbs of eating and drinking verb.emotion verbs of feeling verb.perception verbs of seeing, hearing, feeling ... Table 1: A selection of WordNet SFs 2. First Sense (1ST): All token instances of a given word are tagged with their most frequent sense in WordNet.4 Note that the first sense predictions are based largely on the same dataset as we use in our evaluation, such that the predictions are tuned to our dataset and not fully unsupervised. 3. Automatic Sense Ranking (ASR): First sense tagging as for First Sense above, except that an unsupervised system is used to automatically predict the most frequent sense for each word based on an independent corpus. The method we use to predict the first sense is that of McCarthy et al. (2004), which was obtained using a thesaurus automatically created from the British National Corpus (BNC) applying the method of Lin (1998), coupled with WordNetbased similarity measures. This method is fully unsupervised and completely unreliant on any annotations from our dataset. 
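For readers who wish to reproduce the flavour of these representations, the sketch below derives the three variants with NLTK's WordNet interface, using the first-listed sense as a stand-in for the First Sense heuristic. Two caveats apply: NLTK bundles a later WordNet release than the WordNet 2.1 used here, so synset names and sense order may differ, and in the ASR setting the first-sense lookup would be replaced by the automatically induced ranking.

from nltk.corpus import wordnet as wn   # requires the NLTK WordNet data

def first_sense(word, pos):
    """First-listed (most frequent) WordNet sense, or None if unknown."""
    synsets = wn.synsets(word, pos=pos)
    return synsets[0] if synsets else None

def represent(word, pos, mode):
    """Return the token used to replace `word` under one representation."""
    sense = first_sense(word, pos)
    if sense is None:
        return word                      # leave unknown words untouched
    if mode == "synset":                 # fine-grained: the synset itself
        return sense.name()              # e.g. 'knife.n.01'
    if mode == "sf":                     # coarse-grained: the semantic file
        return sense.lexname()           # e.g. 'noun.artifact'
    if mode == "word+sf":                # hybrid: word specialised by its SF
        return word + "+" + sense.lexname()
    raise ValueError(mode)

for w in ("knife", "scissors"):
    print(w, represent(w, wn.NOUN, "sf"), represent(w, wn.NOUN, "word+sf"))
# both nouns fall under the SF noun.artifact (WordNet version permitting)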
In the case of SFs, we perform full synset WSD based on one of the above options, and then map the prediction onto the corresponding (unique) SF. 5 Results We present the results for each disambiguation approach in turn, analysing the results for parsing and PP attachment separately. 4There are some differences with the most frequent sense in SemCor, due to extra corpora used in WordNet development, and also changes in WordNet from the original version used for the SemCor tagging. 321 CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .857 .808 .832 .837 .845 .841 SF .855 .809 .831 .847∗ .854∗ .850∗ SFn .860 .808 .833 .847∗ .853∗ .850∗ SFv .861 .811 .835 .847∗ .856∗ .851∗ word + SF .865∗ .814∗ .839∗ .837 .846 .842 word + SFn .862 .809 .835 .841∗ .850∗ .846∗ word + SFv .862 .810 .835 .840 .851 .845 Syn .863∗ .812 .837 .845∗ .853∗ .849∗ Synn .860 .807 .832 .841 .849 .845 Synv .863∗ .813∗ .837∗ .843∗ .851∗ .847∗ Table 2: Parsing results with gold-standard senses (∗indicates that the recall or precision is significantly better than baseline; the best performing method in each column is shown in bold) 5.1 Gold standard We disambiguated each token instance in our corpus according to the gold-standard sense data, and trained both the Charniak and Bikel parsers over each semantic representation. We evaluated the parsers in full parsing and PP attachment contexts. The results for parsing are given in Table 2. The rows represent the three semantic representations (including whether we substitute only nouns, only verbs or all POS). We can see that in almost all cases the semantically-enriched representations improve over the baseline parsers. These results are statistically significant in some cases (as indicated by ∗). The SFv representation produces the best results for Bikel (F-score 0.010 above baseline), while for Charniak the best performance is obtained with word+SF (F-score 0.007 above baseline). Comparing the two baseline parsers, Bikel achieves better precision and Charniak better recall. Overall, Bikel obtains a superior F-score in all configurations. The results for the PP attachment experiments using gold-standard senses are given in Table 3, both for the Charniak and Bikel parsers. Again, the Fscore for the semantic representations is better than the baseline in all cases. We see that the improvement is significant for recall in most cases (particularly when using verbs), but not for precision (only Charniak over Synv and word+SFv for Bikel). For both parsers the best results are achieved with SFv, which was also the best configuration for parsing with Bikel. The performance gain obtained here is larger than in parsing, which is in accordance with the findings of Stetina and Nagao that lexical semantics has a considerable effect on PP attachment CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .667 .798 .727 .659 .820 .730 SF .710 .808 .756 .714∗ .809 .758 SFn .671 .792 .726 .706 .818 .758 SFv .729∗ .823 .773∗ .733∗ .827 .778∗ word + SF .710∗ .801 .753 .706∗ .837 .766∗ word + SFn .698∗ .813 .751 .706∗ .829 .763∗ word + SFv .714∗ .805 .757∗ .706∗ .837∗ .766∗ Syn .722∗ .814 .765∗ .702∗ .825 .758 Synn .678 .805 .736 .690 .822 .751 Synv .702∗ .817∗ .755∗ .690∗ .834 .755∗ Table 3: PP attachment results with gold-standard senses (∗indicates that the recall or precision is significantly better than baseline; the best performing method in each column is shown in bold) performance. 
As in full-parsing, Bikel outperforms Charniak, but in this case the difference in the baselines is not statistically significant. 5.2 First sense (1ST) For this experiment, we use the first sense data from WordNet for disambiguation. The results for full parsing are given in Table 4. Again, the performance is significantly better than baseline in most cases, and surprisingly the results are even better than gold-standard in some cases. We hypothesise that this is due to the avoidance of excessive fragmentation, as occurs with fine-grained senses. The results are significantly better for nouns, with SFn performing best. Verbs seem to suffer from lack of disambiguation precision, especially for Bikel. Here again, Charniak trails behind Bikel. The results for the PP attachment task are shown in Table 5. The behaviour is slightly different here, with Charniak obtaining better results than Bikel in most cases. As was the case for parsing, the performance with 1ST reaches and in many instances surpasses gold-standard levels, achieving statistical significance over the baseline in places. Comparing the semantic representations, the best results are achieved with SFv, as we saw in the gold-standard PP-attachment case. 5.3 Automatic sense ranking (ASR) The final option for WSD is automatic sense ranking, which indicates how well our method performs in a completely unsupervised setting. The parsing results are given in Table 6. We can see that the scores are very similar to those from 322 CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .857 .807 .832 .837 .845 .841 SF .851 .804 .827 .843 .850 .846 SFn .863∗ .813 .837∗ .850∗ .854∗ .852∗ SFv .857 .808 .832 .843 .853∗ .848 word + SF .859 .810 .834 .833 .841 .837 word + SFn .862∗ .811 .836 .844∗ .851∗ .848∗ word + SFv .857 .808 .832 .831 .839 .835 Syn .857 .810 .833 .837 .844 .840 Synn .863∗ .812 .837∗ .844∗ .851∗ .848∗ Synv .860 .810 .834 .836 .844 .840 Table 4: Parsing results with 1ST (∗indicates that the recall or precision is significantly better than baseline; the best performing method in each column is shown in bold) CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .667 .798 .727 .659 .820 .730 SF .710 .808 .756 .702 .806 .751 SFn .671 .781 .722 .702 .829 .760 SFv .737∗ .836∗ .783∗ .718∗ .821 .766∗ word + SF .706 .811 .755 .694 .823 .753 word + SFn .690 .815 .747 .667 .810 .731 word + SFv .714∗ .805 .757∗ .710∗ .819 .761∗ Syn .725∗ .833∗ .776∗ .698 .828 .757 Synn .698 .828∗ .757∗ .667 .817 .734 Synv .722∗ .811 .763∗ .706∗ .818 .758∗ Table 5: PP attachment results with 1ST (∗indicates that the recall or precision is significantly better than baseline; the best performing method in each column is shown in bold) 1ST, with improvements in some cases, particularly for Charniak. Again, the results are better for nouns, except for the case of SFv with Bikel. Bikel outperforms Charniak in terms of F-score in all cases. The PP attachment results are given in Table 7. The results are similar to 1ST, with significant improvements for verbs. In this case, synsets slightly outperform SF. Charniak performs better than Bikel, and the results for Synv are higher than the best obtained using gold-standard senses. 6 Discussion The results of the previous section show that the improvements in parsing results are small but significant, for all three word sense disambiguation strategies (gold-standard, 1ST and ASR). 
Table 8 summarises the results, showing that the error reduction rate (ERR) over the parsing F-score is up to 6.9%, which is remarkable given the relatively superficial strategy for incorporating sense information into the parser. Note also that our baseline results for the CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .857 .807 .832 .837 .845 .841 SF .863 .815∗ .838 .845∗ .852 .849 SFn .862 .810 .835 .845∗ .850 .847∗ SFv .859 .810 .833 .846∗ .856∗ .851∗ word + SF .859 .810 .834 .836 .844 .840 word + SFn .865∗ .813∗ .838∗ .844∗ .852∗ .848∗ word + SFv .856 .806 .830 .832 .839 .836 Syn .856 .807 .831 .840 .847 .843 Synn .864∗ .813∗ .838∗ .844∗ .851∗ .847∗ Synv .857 .806 .831 .837 .845 .841 Table 6: Parsing results with ASR (∗indicates that the recall or precision is significantly better than baseline; the best performing method in each column is shown in bold) CHARNIAK BIKEL SYSTEM R P F1 R P F1 Baseline .667 .798 .727 .659 .820 .730 SF .733∗ .824 .776∗ .698 .805 .748 SFn .682 .791 .733 .671 .807 .732 SFv .733∗ .813 .771∗ .710∗ .812 .757∗ word + SF .714∗ .798 .754 .675 .800 .732 word + SFn .690 .807 .744 .659 .804 .724 word + SFv .706∗ .800 .750 .702∗ .814 .754∗ Syn .733∗ .827 .778∗ .694 .805 .745 Synn .686 .810 .743 .667 .806 .730 Synv .714∗ .816 .762∗ .714∗ .816 .762∗ Table 7: PP attachment results with ASR (∗indicates that the recall or precision is significantly better than baseline; the best performance in each column is shown in bold) dataset are almost the same as previous work parsing the Brown corpus with similar models (Gildea, 2001), which suggests that our dataset is representative of this corpus. The improvement in PP attachment was larger (20.5% ERR), and also statistically significant. The results for PP attachment are especially important, as we demonstrate that the sense information has high utility when embedded within a parser, where the parser needs to first identify the ambiguity and heads correctly. Note that Atterer and Sch¨utze (2007) have shown that the Bikel parser performs as well as the state-of-the-art in PP attachment, which suggests our method improves over the current stateof-the-art. The fact that the improvement is larger for PP attachment than for full parsing is suggestive of PP attachment being a parsing subtask where lexical semantic information is particularly important, supporting the findings of Stetina and Nagao (1997) over a standalone PP attachment task. We also observed that while better PP-attachment usually improves parsing, there is some small variation. This 323 WSD TASK PAR BASE SEM ERR BEST Pars. C .832 .839∗ 4.2% word+SF GoldB .841 .851∗ 6.3% SFv standard PP C .727 .773∗ 16.9% SFv B .730 .778∗ 17.8% SFv Pars. C .832 .837∗ 3.0% SFn, Synn 1ST B .841 .852∗ 6.9% SFn PP C .727 .783∗ 20.5% SFv B .730 .766∗ 13.3% SFv Pars. C .832 .838∗ 3.6% SF, word+SFn, Synn ASR B .841 .851∗ 6.3% SFv PP C .727 .778∗ 18.7% Syn B .730 .762∗ 11.9% Synv Table 8: Summary of F-score results with error reduction rates and the best semantic representation(s) for each setting (C = Charniak, B = Bikel) means that the best configuration for PP-attachment does not always produce the best results for parsing One surprising finding was the strong performance of the automatic WSD systems, actually outperforming the gold-standard annotation overall. 
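As a concrete check on the summary figures, the error reduction rates are consistent with the usual definition ERR = (F_new - F_base) / (1 - F_base), applied to the F-scores in Tables 4 and 5:

def err(f_base, f_new):
    """Relative error reduction when errors are measured as 1 - F."""
    return (f_new - f_base) / (1.0 - f_base)

print(round(100 * err(0.841, 0.852), 1))  # Bikel parsing, 1ST SFn vs. baseline -> 6.9
print(round(100 * err(0.727, 0.783), 1))  # Charniak PP attachment, 1ST SFv vs. baseline -> 20.5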
Our interpretation of this result is that the approach of annotating all occurrences of the same word with the same sense allows the model to avoid the data sparseness associated with the gold-standard distinctions, as well as supporting the merging of different words into single semantic classes. While the results for gold-standard senses were intended as an upper bound for WordNet-based sense information, in practice there was very little difference between gold-standard senses and automatic WSD in all cases barring the Bikel parser and PP attachment. Comparing the two parsers, Charniak performs better than Bikel on PP attachment when automatic WSD is used, while Bikel performs better on parsing overall. Regarding the choice of WSD system, the results for both approaches are very similar, showing that ASR performs well, even if it does not require sense frequency information. The analysis of performance according to the semantic representation is not so clear cut. Generalising only verbs to semantic files (SFv) was the best option in most of the experiments, particularly for PP-attachment. This could indicate that semantic generalisation is particularly important for verbs, more so than nouns. Our hope is that this paper serves as the bridgehead for a new line of research into the impact of lexical semantics on parsing. Notably, more could be done to fine-tune the semantic representation between the two extremes of full synsets and SFs. One could also imagine that the appropriate level of generalisation differs across POS and even the relative syntactic role, e.g. finer-grained semantics are needed for the objects than subjects of verbs. On the other hand, the parsing strategy is very simple, as we just substitute words by their semantic class and then train statistical parsers on the transformed input. The semantic class should be an information source that the parsers take into account in addition to analysing the actual words used. Tighter integration of semantics into the parsing models, possibly in the form of discriminative reranking models (Collins and Koo, 2005; Charniak and Johnson, 2005; McClosky et al., 2006), is a promising way forward in this regard. 7 Conclusions In this work we have trained two state-of-the-art statistical parsers on semantically-enriched input, where content words have been substituted with their semantic classes. This simple method allows us to incorporate lexical semantic information into the parser, without having to reimplement a full statistical parser. We tested the two parsers in both a full parsing and a PP attachment context. This paper shows that semantic classes achieve significant improvement both on full parsing and PP attachment tasks relative to the baseline parsers. PP attachment achieves a 20.5% ERR, and parsing 6.9% without requiring hand-tagged data. The results are highly significant in demonstrating that a simplistic approach to incorporating lexical semantics into a parser significantly improves parser performance. As far as we know, these are the first results over both WordNet and the Penn Treebank to show that semantic processing helps parsing. Acknowledgements We wish to thank Diana McCarthy for providing us with the sense rank for the target words. This work was partially funded by the Education Ministry (project KNOW TIN2006-15049), the Basque Government (IT397-07), and the Australian Research Council (grant no. DP0663879). 
Eneko Agirre participated in this research while visiting the University of Melbourne, based on joint funding from the Basque Government and HCSNet. 324 References Eneko Agirre and Philip Edmonds, editors. 2006. Word Sense Disambiguation: Algorithms and Applications. Springer, Dordrecht, Netherlands. Michaela Atterer and Hinrich Sch¨utze. 2007. Prepositional phrase attachment without oracles. Computational Linguistics, 33(4):469–476. Daniel M. Bikel. 2000. A statistical model for parsing and word-sense disambiguation. In Proc. of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pages 155–63, Hong Kong, China. Daniel M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4):479–511. Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine nbest parsing and maxent discriminative reranking. In Proc. of the 43rd Annual Meeting of the ACL, pages 173–80, Ann Arbor, USA. Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proc. of the 15th Annual Conference on Artificial Intelligence (AAAI-97), pages 598– 603, Stanford, USA. Eugene Charniak. 2000. A maximum entropy-based parser. In Proc. of the 1st Annual Meeting of the North American Chapter of Association for Computational Linguistics (NAACL2000), Seattle, USA. David Chiang and David M. Bikel. 2002. Recovering latent information in treebanks. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 183–9, Taipei, Taiwan. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–69. Michael J. Collins. 1996. A new statistical parser based on lexical dependencies. In Proc. of the 34th Annual Meeting of the ACL, pages 184–91, Santa Cruz, USA. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. John Dowding, Robert Moore, Franc¸ois Andry, and Douglas Moran. 1994. Interleaving syntax and semantics in an efficient bottom-up parser. In Proc. of the 32nd Annual Meeting of the ACL, pages 110–6, Las Cruces, USA. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, USA. Sanae Fujita, Francis Bond, Stephan Oepen, and Takaaki Tanaka. 2007. Exploiting semantic information for HPSG parse selection. In Proc. of the ACL 2007 Workshop on Deep Linguistic Processing, pages 25–32, Prague, Czech Republic. Daniel Gildea. 2001. Corpus variation and parser performance. In Proc. of the 6th Conference on Empirical Methods in Natural Language Processing (EMNLP 2001), pages 167–202, Pittsburgh, USA. Erik Hektoen. 1997. Probabilistic parse selection based on semantic cooccurrences. In Proc. of the 5th International Workshop on Parsing Technologies (IWPT-1997), pages 113–122, Boston, USA. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proc. of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57–60, New York City, USA. Shari Landes, Claudia Leacock, and Randee I. Tengi. 1998. Building semantic concordances. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database. MIT Press, Cambridge, USA. Hang Li and Naoki Abe. 1998. Generalising case frames using a thesaurus and the MDL principle. Computational Linguistics, 24(2):217–44. Dekang Lin. 1998. 
Automatic retrieval and clustering of similar words. In Proc. of the 36th Annual Meeting of the ACL and 17th International Conference on Computational Linguistics: COLING/ACL-98, pages 768–774, Montreal, Canada. David M. Magerman. 1995. Statistical decision-tree models for parsing. In Proc. of the 33rd Annual Meeting of the ACL, pages 276–83, Cambridge, USA. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19(2):313–30. Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs and adjectives using automatically acquired selectional preferences. Computational Linguistics, 29(4):639–654. Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding predominant senses in untagged text. In Proc. of the 42nd Annual Meeting of the ACL, pages 280– 7, Barcelona, Spain. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proc. of the Human Language Technology Conference of the NAACL (NAACL2006), pages 152–159, New York City, USA. Brian Mitchell. 2004. Prepositional Phrase Attachment using Machine Learning Algorithms. Ph.D. thesis, University of Sheffield. Martha Palmer, Hoa Dang, and Christiane Fellbaum. 2006. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(2):137–63. Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In HLT ’94: Proceedings of the Workshop on Human Language Technology, pages 250–255, Plainsboro, USA. Philip Resnik. 2006. WSD in NLP applications. In Eneko Agirre and Philip Edmonds, editors, Word Sense Disambiguation: Algorithms and Applications, chapter 11, pages 303–40. Springer, Dordrecht, Netherlands. Jiri Stetina and Makoto Nagao. 1997. Corpus based PP attachment ambiguity resolution with a semantic dictionary. In Proc. of the 5th Annual Workshop on Very Large Corpora, pages 66–80, Hong Kong, China. Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, and Yueliang Qian. 2005. Parsing the Penn Chinese Treebank with semantic knowledge. In Proc. of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 70–81, Jeju Island, Korea. 325
2008
37
Proceedings of ACL-08: HLT, pages 326–334, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Logical Basis for the D Combinator and Normal Form in CCG Frederick Hoyt and Jason Baldridge The Department of Linguistics The University of Texas at Austin {fmhoyt,jbaldrid}@mail.utexas.edu Abstract The standard set of rules defined in Combinatory Categorial Grammar (CCG) fails to provide satisfactory analyses for a number of syntactic structures found in natural languages. These structures can be analyzed elegantly by augmenting CCG with a class of rules based on the combinator D (Curry and Feys, 1958). We show two ways to derive the D rules: one based on unary composition and the other based on a logical characterization of CCG’s rule base (Baldridge, 2002). We also show how Eisner’s (1996) normal form constraints follow from this logic, ensuring that the D rules do not lead to spurious ambiguities. 1 Introduction Combinatory Categorial Grammar (CCG, Steedman (2000)) is a compositional, semantically transparent formalism that is both linguistically expressive and computationally tractable. It has been used for a variety of tasks, such as wide-coverage parsing (Hockenmaier and Steedman, 2002; Clark and Curran, 2007), sentence realization (White, 2006), learning semantic parsers (Zettlemoyer and Collins, 2007), dialog systems (Kruijff et al., 2007), grammar engineering (Beavers, 2004; Baldridge et al., 2007), and modeling syntactic priming (Reitter et al., 2006). A distinctive aspect of CCG is that it provides a very flexible notion of constituency. This supports elegant analyses of several phenomena (e.g., coordination, long-distance extraction, and intonation) and allows incremental parsing with the competence grammar (Steedman, 2000). Here, we argue that even with its flexibility, CCG as standardly defined is not permissive enough for certain linguistic constructions and greater incrementality. Following Wittenburg (1987), we remedy this by adding a set of rules based on the D combinator of combinatory logic (Curry and Feys, 1958). (1) x/(y/z):f y/w:g ⇒x/(w/z):λh.f(λx.ghx) We show that CCG augmented with this rule improves CCG’s empirical coverage by allowing better analyses of modal verbs in English and causatives in Spanish, and certain coordinate constructions. The D rules are well-behaved; we show this by deriving them both from unary composition and from the logic defined by Baldridge (2002). Both perspectives on D ensure that the new rules are compatible with normal form constraints (Eisner, 1996) for controlling spurious ambiguity. The logic also ensures that the new rules are subject to modalities consistent with those defined by Baldridge and Kruijff (2003). Furthermore, we define a logic that produces Eisner’s constraints as grammar internal theorems rather than parsing stipulations. 
2 Combinatory Categorial Grammar CCG uses a universal set of syntactic rules based on the B, T, and S combinators of combinatory logic (Curry and Feys, 1958): (2) B: ((Bf)g)x = f(gx) T: Txf = fx S: ((Sf)g)x = fx(gx) CCG functors are functions over strings of symbols, so different linearized versions of each of the combinators have to be specified (ignoring S here): 326 (3) FA: (>) x/⋆y y ⇒x (<) y x\⋆y ⇒x B: (>B) x/⋄y y/⋄z ⇒x/⋄z (<B) y\⋄z x\⋄y ⇒x\⋄z (>B×) x/×y y\×z ⇒x\×z (<B×) y/×z x\×y ⇒x/×z T: (>T) x ⇒t/i(t\ix) (<T) x ⇒t\i(t/ix) The symbols {⋆, ⋄, ×, ·} are modalities that allow subtypes of slashes to be defined; this in turn allows the slashes on categories to be defined in a way that allows them to be used (or not) with specific subsets of the above rules. The rules of this multimodal version of CCG (Baldridge, 2002; Baldridge and Kruijff, 2003) are derived as theorems of a Categorial Type Logic (CTL, Moortgat (1997)). This treats CCG as a compilation of CTL proofs, providing a principled, grammar-internal basis for restrictions on the CCG rules, transferring languageparticular restrictions on rule application to the lexicon, and allowing the CCG rules to be viewed as grammatical universals (Baldridge and Kruijff, 2003; Steedman and Baldridge, To Appear). These rules—especially the B rules—allow derivations to be partially associative: given appropriate type assignments, a string ABC can be analyzed as either A(BC) or (AB)C. This associativity leads to elegant analyses of phenomena that demand more effort in less flexible frameworks. One of the best known is “odd constituent” coordination: (4) Bob gave Stan a beer and Max a coke. (5) I will buy and you will eat a cheeseburger. The coordinated constituents are challenging because they are at odds with standardly assumed phrase structure constituents. In CCG, such constituents simply follow from the associativity added by the B and T rules. For example, given the category assignments in (6) and the abbreviations in (7), (4) is analyzed as in (8) and (9). Each conjunct is a pair of type-raised NPs combined by means of the >B-rule, deriving two composed constituents that are arguments to the conjunction:1 (6) i. Bob ⊢s/(s\np) 1We follow (Steedman, 2000) in assuming that type-raising applies in the lexicon, and therefore that nominals such as Stan ii. Stan, Max ⊢ ((s\np)/np)\(((s\np)/np)/np) iii. a beer, a coke ⊢(s\np)\((s\np)/np) iv. and ⊢(x\⋆x)/⋆x v. gave ⊢((s\np)/np)/np (7) i. vp = s\np ii. tv = (s\np)/np iii. dtv = ((s\np)/np)/np (8) Stan a beer and Max a coke tv\dt vp\tv (x\⋆x)/⋆x tv\dt vp\tv <B <B vp\dt vp\dt > (vp\dt)\(vp\dt) < vp\dt (9) Bill gave Stan a beer and Max a coke s/vp dt vp\dt < vp > s Similarly, I will buy is derived with category s/np by assuming the category (6i) for I and composing that with both verbs in turn. CCG’s approach is appealing because such constituents are not odd at all: they simply follow from the fact that CCG is a system of type-based grammatical inference that allows left associativity. 3 Linguistic Motivation for D CCG is only partially associative. Here, we discuss several situations which require greater associativity and thus cannot be given an adequate analysis with CCG as standardly defined. These structures have in common that a category of the form x|(y|z) must combine with one of the form y|w—exactly the configuration handled by the D schemata in (1). 
3.1 Cross-Conjunct Extraction In the first situation, a question word is distributed across auxiliary or subordinating verb categories: (10) ...what you can and what you must not base your verdict on. We call this cross-conjunct extraction. It was noted by Pickering and Barry (1993) for English, but to the best of our knowledge it has not been treated in the have type-raised lexical assignments. We also suppress semantic representations in the derivations for the sake of space. 327 CCG literature, nor noted in other languages. The problem it presents to CCG is clear in (11), which shows the necessary derivation of (10) using standard multimodal category assignments. For the tokens of what to form constituents with you can and you must not, they must must combine directly. The problem is that these constituents (in bold) cannot be created with the standard CCG combinators in (3). (11) s s/(vp/np) s/(vp/np) s/(s/np) what s/vp you can (s/(vp/np))\(s/(vp/np)) (x\⋆x)/⋆x and s/(vp/np) s/(s/np) what s/vp you must not vp/np base your verdict on The category for and is marked for non-associativity with ⋆, and thus combines with other expressions only by function application (Baldridge, 2002). This ensures that each conjunct is a discrete constituent. Cross-conjunct extraction occurs in other languages as well, including Dutch (12), German (13), Romanian (14), and Spanish (15): (12) dat that ik I haar her wil want en and dat that ik I haar her moet can helpen. help “. . . that I want to and that I can help her.” (13) Wen who kann can ich I und and wen who darf may ich I noch still wählen? choose “Whom can I and whom may I still chose?” (14) Gandeste-te consider.imper.2s-refl.2s cui who.dat çe what vrei, want.2s ¸si and cui who.dat çe what po¸ti, can.2s s˘a to dai. give.subj.2s “Consider to whom you want and to whom you are able to give what.” (15) Me me lo it puedes can.2s y and me me lo it debes must.2s explicar ask “You can and should explain it to me.” It is thus a general phenomenon, not just a quirk of English. While it could be handled with extra categories, such as (s/(vp/np))/(s/np) for what, this is exactly the sort of strong-arm tactic that inclusion of the standard B, T, and S rules is meant to avoid. 3.2 English Auxiliary Verbs The standard CCG analysis for English auxiliary verbs is the type exemplified in (16) (Steedman, 2000, 68), interpreted as a unary operator over sentence meanings (Gamut, 1991; Kratzer, 1991): (16) can ⊢(s\np)/(s\np) : λP etλx.♦P(x) However, this type is empirically underdetermined, given a widely-noted set of generalizations suggesting that auxiliaries and raising verbs take no subject argument at all (Jacobson, 1990, a.o.). (17) i. Lack of syntactic restrictions on the subject; ii. Lack of semantic restrictions on the subject; iii. Inheritance of selectional restrictions from the subordinate predicate. Two arguments are made for (16). First, it is necessary so that type-raised subjects can compose with the auxiliary in extraction contexts, as in (18): (18) what I can eat s/(s/np) s/vp vp/vp tv >B s/vp >B s/np > s Second, it is claimed to be necessary in order to account for subject-verb agreement, on the assumption that agreement features are domain restrictions on functors of type s\np (Steedman, 1992, 1996). The first argument is the topic of this paper, and, as we show below, is refuted by the use of the Dcombinator. 
The second argument is undermined by examples like (19): (19) There appear to have been [ neither [ any catastrophic consequences ], nor [ a drastic change in the average age of retirement ] ] . In (19), appear agrees with two negative-polaritysensitive NPs trapped inside a neither-nor coordinate structure in which they are licensed. Appear therefore does not combine with them directly, showing that the agreement relation need not be mediated by direct application of a subject argument. We conclude, therefore, that the assignment of the vp/vp type to English auxiliaries and modal verbs is unsupported on both formal and linguistic grounds. Following Jacobson (1990), a more empiricallymotivated assignment is (20): 328 (20) can ⊢s/s : λpt.♦p Combining (20) with a type-raised subject presents another instance of the structure in (1), where that question words are represented as variable-binding operators (Groenendijk and Stokhof, 1997): (21) what I can s/(s/np) : λQet?yQy s/vp : λP et.Pi′ s/s : λpt.♦p ∗∗∗ >B ∗∗∗ 3.3 The Spanish Causative Construction The schema in (1) is also found in the widelystudied Romance causative construction (Andrews and Manning, 1999, a.m.o), illustrated in (22): (22) Nos cl.1p hizo made.3s leer read El the Señor Lord de of los the Anillos. Rings “He made us read The Lord of the Rings.” The aspect of the construction that is relevant here is that the causative verb hacer appears to take an object argument understood as the subject or agent of the subordinate verb (the causee). However, it has been argued that Spanish causative verbs do not in fact take objects (Ackerman and Moore, 1999, and refs therein). There are two arguments for this. First, syntactic alternations that apply to objecttaking verbs, such as passivization and periphrasis with subjunctive complements, do not apply to hacer (Luján, 1980). Second, hacer specifies neither the case form of the causee, nor any semantic entailments with respect to it. These are instead determined by syntactic, semantic, and pragmatic factors, such as transitivity, word order, animacy, gender, social prestige, and referential specificity (Finnemann, 1982, a.o). Thus, there is neither syntactic nor semantic evidence that hacer takes an object argument. On this basis, we assign hacer the category (23): (23) hacer ⊢(s\np)/s : λPλx.cause′Px However, Spanish has examples of cross-conjunct extraction in which hacer hosts clitics: (24) No not solo only le cl.dat.3ms ordenaron, ordered.3p sino que but le cl.dat.3ms hicieron made.3p barrer sweep la the verada. sidewalk “They not only ordered him to, but also made him sweep the sidewalk.” This shows another instance of the schema in (1), which is undefined for any of the combinators in (3): (25) le hicieron barrer la verada (s\np)/((s\np)/np) (s\np)/s (s|np) ∗∗∗ >B ∗∗∗ 3.4 Analyses Based on D The preceding data motivates adding D rules (we return to the distribution of the modalities below): (26) >D x/⋄(y/⋄z) y/⋄w ⇒x/⋄(w/⋄z) >D× x/×(y/×z) y\×w ⇒x\×(w/×z) >D⋄× x/⋄(y\×z) y/·w ⇒x/⋄(w\×z) >D×⋄ x/×(y\⋄z) y\·w ⇒x\×(w\⋄z) (27) <D y\⋄w x\⋄(y\⋄z) ⇒x\⋄(w\⋄z) <D× y/×w x\×(y\×z) ⇒x/×(w\×z) <D⋄× y\·w x\⋄(y/×z) ⇒x\⋄(w/×z) <D×⋄ y/·w x\×(y/⋄z) ⇒x/×(w/⋄z) To illustrate with example (10), one application of >D allows you and can to combine when the auxiliary is given the principled type assignment s/s, and another combines what with the result. (28) what you can s/⋄(s/⋄np) s/⋄(s\×np) s/·s >D⋄× s/⋄(s\×np) >D s/⋄((s\×np)/⋄np) The derivation then proceeds in the usual way. 
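To make the category bookkeeping behind the D schemata concrete, the sketch below implements forward application, forward composition, and >D over a minimal category datatype. This is our own illustration, not an implementation from the paper: slash modalities, semantics, and the backward and crossed variants in (26)-(27) are all omitted, and vp abbreviates s\np as in (7).

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Slash:
    res: "Cat"   # result category
    arg: "Cat"   # argument category
    fwd: bool    # True for '/', False for '\'

    def __str__(self):
        slash = '/' if self.fwd else '\\'
        return f"({self.res}{slash}{self.arg})"

Cat = Union[str, Slash]  # atomic categories are plain strings

def forward_apply(x: Cat, y: Cat):
    # (>)  x/y  y  =>  x
    if isinstance(x, Slash) and x.fwd and x.arg == y:
        return x.res
    return None

def forward_compose(x: Cat, y: Cat):
    # (>B)  x/y  y/z  =>  x/z
    if (isinstance(x, Slash) and x.fwd and
            isinstance(y, Slash) and y.fwd and x.arg == y.res):
        return Slash(x.res, y.arg, True)
    return None

def forward_D(x: Cat, y: Cat):
    # (>D)  x/(y/z)  y/w  =>  x/(w/z)
    if (isinstance(x, Slash) and x.fwd and isinstance(x.arg, Slash) and x.arg.fwd
            and isinstance(y, Slash) and y.fwd and x.arg.res == y.res):
        z, w = x.arg.arg, y.arg
        return Slash(x.res, Slash(w, z, True), True)
    return None

# Cross-conjunct extraction, roughly as in (11)/(28), with modalities dropped:
what = Slash('s', Slash('s', 'np', True), True)   # s/(s/np)
you_can = Slash('s', 'vp', True)                  # s/vp  (vp = s\np)
print(forward_D(what, you_can))                   # (s/(vp/np))
```

The printed category s/(vp/np) is exactly the conjunct constituent that the standard rules in (3) cannot build for what you can in (11).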
Likewise, D handles the Spanish causative constructions (29) straightforwardly : (29) lo hice dormir (s\np)/⋄((s\np)/⋄np) (s\np)/⋄s s/np >D (s\np)/⋄(s/⋄np) > s\np The D-rules thus provide straightforward analyses of such constructions by delivering flexible constituency while maintaining CCG’s committment to low categorial ambiguity and semantic transparency. 4 Deriving Eisner Normal Form Adding new rules can have implications for parsing efficiency. In this section, we show that the D rules fit naturally within standard normal form constraints for CCG parsing (Eisner, 1996), by providing both 329 combinatory and logical bases for D. This additionally allows Eisner’s normal form constraints to be derived as grammar internal theorems. 4.1 The Spurious Ambiguity Problem CCG’s flexibility is useful for linguistic analyses, but leads to spurious ambiguity (Wittenburg, 1987) due to the associativity introduced by the B and T rules. This can incur a high computational cost which parsers must deal with. Several techniques have been proposed for the problem (Wittenburg, 1987; Karttunen, 1989; Hepple and Morrill, 1989; Eisner, 1996). The most commonly used are Karttunnen’s chart subsumption check (White and Baldridge, 2003; Hockenmaier and Steedman, 2002) and Eisner’s normal-form constraints (Bozsahin, 1998; Clark and Curran, 2007). Eisner’s normal form, referred to here as Eisner NF and paraphrased in (30), has the advantage of not requiring comparisons of logical forms: it functions purely on the syntactic types being combined. (30) For a set S of semantically equivalent2 parse trees for a string ABC, admit the unique parse tree such that at least one of (i) or (ii) holds: i. C is not the argument of (AB) resulting from application of >B1+. ii. A is not the argument of (BC) resulting from application of <B1+. The implication is that outputs of B1+ rules are inert, using the terminology of Baldridge (2002). Inert slashes are Baldridge’s (2002) encoding in OpenCCG3 of his CTL interpretation of Steedman’s (2000) antecedent-government feature. Eisner derives (30) from two theorems about the set of semantically equivalent parses that a CCG parser will generate for a given string (see (Eisner, 1996) for proofs and discussion of the theorems): (31) Theorem 1: For every parse tree α, there is a semantically equivalent parse-tree NF(α) in which no node resulting from application of B or S functions as the primary functor in a rule application. (32) Theorem 2: If NF(α) and NF(α′) are distinct parse trees, then their model-theoretic interpretations are distinct. 2Two parse trees are semantically equivalent if: (i) their leaf nodes have equivalent interpretations, and (ii) equivalent scope relations hold between their respective leaf-node meanings. 3http://openccg.sourceforge.net Eisner uses a generalized form Bn (n≥0) of composition that subsumes function application:4 (33) >Bn: x/y y$n ⇒ x$n (34) <Bn: y$n x\y ⇒ x$n Based on these theorems, Eisner defines NF as follows (for R, S, T as Bn or S, and Q=Bn≥1): (35) Given a parse tree α: i. If α is a lexical item, then α is in Eisner-NF. ii. If α is a parse tree ⟨R, β, γ⟩and NF(β), NF(γ), then NF(α). iii. If β is not in Eisner-NF, then NF(β) = ⟨Q, β1, β2⟩, and NF(α) = ⟨S, β1, NF(⟨T, β2, γ⟩)⟩. As a parsing constraint, (30) is a filter on the set of parses produced for a given string. It preserves all the unique semantic forms generated for the string while eliminating all spurious ambiguities: it is both safe and complete. 
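Read as a parsing filter, the constraint in (30) can be sketched as a simple check on chart items: record how each item was built and refuse to let the output of a (generalized) composition act as the primary functor of a later rule. This is only a schematic illustration with invented item and rule labels; a real parser operates over full categories and also handles S and the backward cases.

```python
from dataclasses import dataclass

@dataclass
class Item:
    cat: str    # category, kept opaque here
    rule: str   # how the item was built: 'lex', 'fa' (application), 'bx' (B1+)

def nf_allows_as_primary(functor: Item) -> bool:
    # Outputs of B1+ are inert: they may not serve as primary functor in a
    # later application or composition (clauses (i)/(ii) of (30)).
    return functor.rule != 'bx'

# Spuriously ambiguous string A B C with A: x/y, B: y/z, C: z.
A = Item('x/y', 'lex')
AB = Item('x/z', 'bx')        # A >B B
BC = Item('y', 'fa')          # B > C

print(nf_allows_as_primary(AB))  # False: (A B) C is filtered out
print(nf_allows_as_primary(A))   # True:  A (B C) is the surviving normal form
```

Only the right-branching, application-only derivation survives, which is the unique parse admitted by (30) for this configuration.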
Given the utility of Eisner NF for practical CCG parsing, the D rules we propose should be compatible with (30). This requires that the generalizations underlying (30) apply to D as well. In the remainder of this section, we show this in two ways. 4.2 Deriving D from B The first is to derive the binary B rules from a unary rule based on the unary combinator ˆB:5 (36) x/y : f xy ⇒ (x/z)/(y/z) : λhzyλxz.f(hx) We then derive D from ˆB and show that clause (iii) of (35) holds of Q schematized over both B and D. Applying D to an argument sequence is equivalent to compound application of binary B: (37) (((Df)g)h)x = (fg)(hx) (38) ((((BB)f)g)h)x = ((B(fg))h)x = (fg)(hx) Syntactically, binary B is equivalent to application of unary ˆB to the primary functor ∆, followed by applying the secondary functor Γ to the output of ˆB by means of function application (Jacobson, 1999): 4We use Steedman’s (Steedman, 1996) “$”-convention for representing argument stacks of length n, for n ≥0. 5This is Lambek’s (1958) Division rule, also known as the “Geach rule” (Jacobson, 1999). 330 (39) ∆ Γ x/y y/z >ˆB (x/z)/(y/z) > x/z Bn (n ≥1) is derived by applying ˆB to the primary functor n times. For example, B2 is derived by 2 applications of ˆB to the primary functor: (40) ∆ Γ x/y (y/w)/z ˆB (x/w)/(y/w) ˆB ((x/w)/z)/((y/w)/z) > (x/w)/z The rules for D correspond to application of ˆB to both the primary and secondary functors, followed by function application: (41) ∆ Γ x/(y/z) y/w >ˆB >ˆB (x/(w/z))/((y/z)/(w/z)) (y/z)/(w/z) > x/(w/z) As with Bn, Dn≥1 can be derived by iterative application of ˆB to both primary and secondary functors. Because B can be derived from ˆB, clause (iii) of (35) is equivalent to the following: (42) If β is not in Eisner-NF, then NF(β) = ⟨FA, ⟨ˆB, β1⟩, β2⟩, such that NF(α) = ⟨S, β1, NF(⟨T, β2, γ⟩)⟩ Interpreted in terms of ˆB, both B and D involve application of ˆB to the primary functor. It follows that Theorem I applies directly to D simply by virtue of the equivalence between binary B and unary-ˆB+FA. Eisner’s NF constraints can then be reinterpreted as a constraint on ˆB requiring its output to be an inert result category. We represent this in terms of the ˆBrules introducing an inert slash, indicated with “!” (adopting the convention from OpenCCG): (43) x/y : f xy ⇒ (x/!z)/(y/!z) : λhzyλxzfhx Hence, both binary B and D return inert functors: (44) ∆ Γ x/y y/z >ˆB (x/!z)/(y/!z) > x/!z (45) ∆ Γ x/(y/z) y/w >ˆB >ˆB (x/!(w/z))/((y/z)/!(w/z)) (y/!z)/(w/!z) > x/!(w/z) The binary substitution (S) combinator can be similarly incorporated into the system. Unary substitution ˆS is like ˆB except that it introduces a slash on only the argument-side of the input functor. We stipulate that ˆS returns a category with inert slashes: (46) (ˆS) (x/y)/z ⇒(x/!z)/(y/!z) T is by definition unary. It follows that all the binary rules in CCG (including the D-rules) can be reduced to (iterated) instantiations of the unary combinators ˆB, ˆS, or T plus function application. This provides a basis for CCG in which all combinatory rules are derived from unary ˆB ˆS, and T. 4.3 A Logical Basis for Eisner Normal Form The previous section shows that deriving CCG rules from unary combinators allows us to derive the Drules while preserving Eisner NF. In this section, we present an alternate formulation of Eisner NF with Baldridge’s (2002) CTL basis for CCG. This formulation allows us to derive the D-rules as before, and does so in a way that seamlessly integrates with Baldridge’s system of modalized functors. 
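Before the CTL derivations below, the combinator identity behind (37)-(38), D = BB, can be checked directly with arbitrary toy functions. The sketch is only a numerical sanity check of the semantic equation, not part of the grammar machinery.

```python
# Check (((Df)g)h)x = (fg)(hx) with D defined as BB, per (37)-(38).
B = lambda f: lambda g: lambda x: f(g(x))
D = B(B)                                   # D = BB

f = lambda g: lambda v: ('f', g, v)        # an arbitrary curried two-place function
g = 'g-arg'
h = lambda x: x + 1

assert D(f)(g)(h)(10) == f(g)(h(10)) == ('f', 'g-arg', 11)
```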
In CTL, B⋄and B× are proofs derived via structural rules that allow associativity and permutation of symbols within a sequent, in combination with the slash introduction and elimination rules of the base logic. To control application of these rules, Baldridge keys them to binary modal operators ⋄(for associativity) and × (for permutation). Given these, >B is proven in (47): (47) ∆⊢x/⋄y Γ ⊢y/⋄z [a ⊢z] [/⋄E] (Γ ◦⋄ai) ⊢y [/⋄E] (∆◦⋄(Γ ◦⋄ai)) ⊢x [RA] ((∆◦⋄Γ) ◦⋄ai) ⊢x [/⋄I] (∆◦⋄Γ) ⊢x/⋄z In a CCG ruleset compiled from such logics, a category must have an appropriately decorated slash in order to be the input to a rule. This means that rules apply universally, without language-specific 331 restrictions. Instead, restrictions can only be declared via modalities marked on lexical categories. Unary ˆB and the D rules in 4.2 can be derived using the same logic. For example, >ˆB can be derived as in (48): (48) ∆⊢x/⋄y [f ⊢y/⋄z]1 [a ⊢z]2 [/E] (f 1 ◦⋄a2) ⊢y [/⋄E] (∆◦⋄(f 1 ◦⋄a2)) ⊢x [RA] ((∆◦⋄f 1) ◦⋄a2) ⊢x [/⋄I] (∆◦⋄f 1) ⊢x/⋄z [/⋄I] ∆⊢(x/⋄z)/⋄(y/⋄z) The D rules are also theorems of this system. For example, the proof for >D applies (48) as a lemma to each of the primary and secondary functors: (49) ∆⊢x/⋄(y/⋄z) Γ ⊢y/⋄w >ˆB >ˆB ∆⊢(x/⋄(w/⋄z))/⋄((y/⋄z)/⋄(w/⋄z)) Γ ⊢(y/⋄z)/⋄(w/⋄z) [/E] (∆◦⋄Γ) ⊢x/⋄(w/⋄z) >D⋄× involves an associative version of ˆB applied to the primary functor (50), and a permutative version to the secondary functor (51). (50) ∆⊢x/⋄(y\×z) [f ⊢(y\×z)/·(w\×z)]1 [g ⊢w\×z]2 [/·E] (f 1 ◦· g2) ⊢y\×z [/⋄E] (∆◦⋄(f 1 ◦. g2)) ⊢x [RA] ((∆◦⋄f 1) ◦. g2) ⊢x [/·I] (∆◦⋄f 1) ⊢x/·(w\×z) [/⋄I] ∆⊢(x/·(w\×z))/⋄((y\×z)/·(w\×z)) (51) Γ ⊢y/·w [a ⊢z]1 [f ⊢w\×z]2 [\×E] (a1 ◦× f 2) ⊢w [/·E] (Γ ◦· (a1 ◦× f 2)) ⊢y [LP ] (a1 ◦× (Γ ◦· f 2)) ⊢y [\×I] (Γ ◦· f 2) ⊢y\×z [/·I] Γ ⊢(y\×z)/·(w\×z) Rules for D with appropriate modalities can therefore be incorporated seamlessly into CCG. In the preceding subsection, we encoded Eisner NF with inert slashes. In Baldridge’s CTL basis for CCG, inert slashes are represented as functors seeking non-lexical arguments, represented as categories marked with an antecedent-governed feature, reflecting the intuition that non-lexical arguments have to be “bound” by a superordinate functor. This is based on an interpretation of antecedentgovernment as a unary modality ♦ant that allows structures marked by it to permute to the left or right periphery of a structure:6 (52) ((∆a ◦× ♦ant∆b) ◦× ∆c) ⊢x ((∆a ◦× ∆c) ◦× ♦ant∆b) ⊢x [ARP] (∆a ◦× (♦ant∆b ◦× ∆c)) ⊢x (♦ant∆b ◦× (∆a ◦× ∆c)) ⊢x [ALP] Unlike permutation rules without ♦ant, these permutation rules can only be used in a proof when preceeded by a hypothetical category marked with the 2↓ ant modality. The elimination rule for 2↓modalities introduces a corresponding ♦-marked object in the resulting structure, feeding the rule: (53) [a ⊢2↓ antz]1 [2↓E] ♦anta1 ⊢z Γ ⊢y\×z [\×E] ∆⊢x/×y (♦anta1 ◦× Γ) ⊢y [/×E] (∆◦× (♦anta1 ◦× Γ)) ⊢x [ALP ] [a ⊢♦ant2↓ antz]2 (♦anta1 ◦× (∆◦× Γ)) ⊢x [♦E] (a ◦× (∆◦× Γ)) ⊢x [\×I]2 (∆◦× Γ) ⊢x\×♦ant2↓ antz Re-introduction of the [a ⊢♦ant2↓ antz]k hypothesis results in a functor the argument of which is marked with ♦ant2↓ ant. Because lexical categories are not marked as such, the functor cannot take a lexical argument, and so is effectively an inert functor. In Baldridge’s (2002) system, only proofs involving the ARP and ALP rules produce inert categories. In Eisner NF, all instances of B-rules result in inert categories. 
This can be reproduced in Baldridge’s system simply by keying all structural rules to the ant-modality, the result being that all proofs involving structural rules result in inert functors. As desired, the D-rules result in inert categories as well. For example, >D is derived as follows (2↓ ant and ♦ant are abbreviated as 2↓and ♦): 6Note that the diamond operator used here is a syntactic operator, rather than a semantic operator as used in (16) above. The unary modalities used in CTL describe accessibility relationships between subtypes and supertypes of particular categories: in effect, they define feature hierarchies. See Moortgat (1997) and Oehrle (To Appear) for further explanation. 332 (54) Γ ⊢y/⋄w [a ⊢2↓(w/⋄z)]1 [b ⊢2↓z]2 [2↓E] [2↓E] ♦a ⊢w/⋄z ♦b ⊢z [/⋄E] (♦a ◦⋄♦b) ⊢w [/⋄E] (Γ ◦⋄(♦a ◦⋄♦b)) ⊢y [RA] [c ⊢♦2↓z]3 ((Γ ◦⋄♦a) ◦⋄♦b) ⊢y [♦E]2 ((Γ ◦⋄♦a) ◦⋄c) ⊢y [/⋄I]3 (Γ ◦⋄♦a) ⊢y/⋄♦2↓z (55) (54) ... ∆⊢x/⋄(y/⋄♦2↓z) (Γ ◦⋄♦a) ⊢y/⋄♦2↓z [/⋄E] (∆◦⋄(Γ ◦⋄♦a)) ⊢x [RA] [d ⊢♦2↓(w/⋄z)]4 ((∆◦⋄Γ) ◦⋄♦a) ⊢x [♦E]1 ((∆◦⋄Γ) ◦⋄d) ⊢x [/⋄I]4 (∆◦⋄Γ) ⊢x/⋄♦2↓(w/⋄z) (54)-(55) can be used as a lemma corresponding to the CCG rule in (57): (56) ∆⊢x/⋄(y/⋄♦2↓z) Γ ⊢y/⋄w [D] (∆◦⋄Γ) ⊢x/⋄♦2↓(w/⋄z) (57) x/⋄(y/⋄!z) y/⋄w ⇒ x/⋄!(w/⋄z) This means that all CCG rules compiled from the logic—which requires ♦ant to licence the structural rules necessary to prove the rules—return inert functors. Eisner NF thus falls out of the logic because all instances of B, D, and S produce inert categories. This in turns allows us to view Eisner NF as part of a theory of grammatical competence, in addition to being a useful technique for constraining parsing. 5 Conclusion Including the D-combinator rules in the CCG rule set lets us capture several linguistic generalizations that lack satisfactory analyses in standard CCG. Furthermore, CCG augmented with D is compatible with Eisner NF (Eisner, 1996), a standard technique for controlling derivational ambiguity in CCG-parsers, and also with the modalized version of CCG (Baldridge and Kruijff, 2003). A consequence is that both the D rules and the NF constraints can be derived from a grammar-internal perspective. This extends CCG’s linguistic applicability without sacrificing efficiency. Wittenburg (1987) originally proposed using rules based on D as a way to reduce spurious ambiguity, which he achieved by eliminating B rules entirely and replacing them with variations on D. Wittenburg notes that doing so produces as many instances of D as there are rules in the standard rule set. Our proposal retains B and S, but, thanks to Eisner NF, eliminates spurious ambiguity, a result that Wittenburg was not able to realize at the time. Our approach can be incorporated into Eisner NF straightforwardly However, Eisner NF disprefers incremental analyses by forcing right-corner analyses of long-distance dependencies, such as in (58): (58) (What (does (Grommet (think (Tottie (said (Victor (knows (Wallace ate)))))))))? For applications that call for increased incrementality (e.g., aligning visual and spoken input incrementally (Kruijff et al., 2007)), CCG rules that do not produce inert categories can be derived a CTL basis that does not require ♦ant for associativity and permutation. The D-rules derived from this kind of CTL specification would allow for left-corner analyses of such dependencies with the competence grammar. An extracted element can “wrap around” the words intervening between it and its extraction site. 
For example, D would allow the following bracketing for the same example (while producing the same logical form): (59) (((((((((What does) Grommet) think) Tottie) said) Victor) knows) Wallace) ate)? Finally, the unary combinator basis for CCG provides an interesting additional specification for generating CCG rules. Like the CTL basis, the unary combinator basis can produce a much wider range of possible rules, such as D rules, that may be relevant for linguistic applications. Whichever basis is used, inclusion of the D-rules increases empirical coverage, while at the same time preserving CCG’s computational attractiveness. Acknowledgments Thanks Mark Steedman for extensive comments and suggestions, and particularly for noting the relationship between the D-rules and unary ˆB. Thanks also to Emmon Bach, Cem Bozsahin, Jason Eisner, Geert-Jan Kruijff and the ACL reviewers. 333 References Farrell Ackerman and John Moore. 1999. Syntagmatic and Paradigmatic Dimensions of Causee Encodings. Linguistics and Philosophy, 24:1–44. Avery D. Andrews and Christopher D. Manning. 1999. Complex Predicates and Information Spreading in LFG. CSLI Publications, Palo Alto, California. Jason Baldridge and Geert-Jan Kruijff. 2003. MultiModal Combinatory Categorial Grammar. In Proceedings of EACL 10, pages 211–218. Jason Baldridge, Sudipta Chatterjee, Alexis Palmer, and Ben Wing. 2007. DotCCG and VisCCG: Wiki and Programming Paradigms for Improved Grammar Engineering with OpenCCG. In Proceedings of GEAF 2007. Jason Baldridge. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. John Beavers. 2004. Type-inheritance Combinatory Categorial Grammar. In Proceedings of COLING-04, Geneva, Switzerland. Robert Borsley and Kersti Börjars, editors. To Appear. Non-Transformational Syntax: A Guide to Current Models. Blackwell. Cem Bozsahin. 1998. Deriving the Predicate-Argument Structure for a Free Word Order Language. In Proceedings of COLING-ACL ’98. Stephen Clark and James Curran. 2007. Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4). Haskell B. Curry and Robert Feys. 1958. Combinatory Logic, volume 1. North Holland, Amsterdam. Jason Eisner. 1996. Efficient Normal-Form Parsing for Combinatory Categorial Grammars. In Proceedings of the ACL 34. Michael D Finnemann. 1982. Aspects of the Spanish Causative Construction. Ph.D. thesis, University of Minnesota. L. T. F. Gamut. 1991. Logic, Language, and Meaning, volume II. Chicago University Press. Jeroen Groenendijk and Martin Stokhof. 1997. Questions. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, chapter 19, pages 1055–1124. Elsevier Science, Amsterdam. Mark Hepple and Glyn Morrill. 1989. Parsing and Derivational Equivalence. In Proceedings of EACL 4. Julia Hockenmaier and Mark Steedman. 2002. Generative Models for Statistical Parsing with Combinatory Categorial Grammar. In Proceedings. of ACL 40, pages 335–342, Philadelpha, PA. Pauline Jacobson. 1990. Raising as Function Composition. Linguistics and Philosophy, 13:423–475. Pauline Jacobson. 1999. Towards a Variable-Free Semantics. Linguistics and Philosophy, 22:117–184. Lauri Karttunen. 1989. Radical Lexicalism. In Mark Baltin and Anthony Kroch, editors, Alternative Conceptions of Phrase Structure. University of Chicago Press, Chicago. Angelika Kratzer. 1991. Modality. 
In Arnim von Stechow and Dieter Wunderlich, editors, Semantics: An International Handbook of Contemporary Semantic Research, pages 639–650. Walter de Gruyter, Berlin. Geert-Jan M. Kruijff, Pierre Lison, Trevor Benjamin, Henrik Jacobsson, and Nick Hawes. 2007. Incremental, Multi-Level Processing for Comprehending Situated Dialogue in Human-Robot Interaction. In Language and Robots: Proceedings from the Symposium (LangRo’2007), Aveiro, Portugal. Joachim Lambek. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154– 169. Marta Luján. 1980. Clitic Promotion and Mood in Spanish Verbal Complements. Linguistics, 18:381–484. Michael Moortgat. 1997. Categorial Type Logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language, pages 93–177. North Holland, Amsterdam. Richard T Oehrle. To Appear. Multi-Modal Type Logical Grammar. In Boersley and Börjars (Borsley and Börjars, To Appear). Martin Pickering and Guy Barry. 1993. Dependency Categorial Grammar and Coordination. Linguistics, 31:855–902. David Reitter, Julia Hockenmaier, and Frank Keller. 2006. Priming Effects in Combinatory Categorial Grammar. In Proceedings of EMNLP-2006. Mark Steedman and Jason Baldridge. To Appear. Combinatory Categorial Grammar. In Borsley and Börjars (Borsley and Börjars, To Appear). Mark Steedman. 1996. Surface Structure and Interpretation. MIT Press. Mark Steedman. 2000. The Syntactic Process. MIT Press. Michael White and Jason Baldridge. 2003. Adapting Chart Realization to CCG. In Proceedings of ENLG. Michael White. 2006. Efficient Realization of Coordinate Structures in Combinatory Categorial Grammar. Research on Language and Computation, 4(1):39–75. Kent Wittenburg. 1987. Predictive Combinators: A Method for Efficient Processing of Combinatory Categorial Grammars. In Proceedings of ACL 25. Luke Zettlemoyer and Michael Collins. 2007. Online Learning of Relaxed CCG Grammars for Parsing to Logical Form. In Proceedings of EMNLP-CoNLL 2007. 334
Proceedings of ACL-08: HLT, pages 335–343, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Parsing Noun Phrase Structure with CCG David Vadas and James R. Curran School of Information Technologies University of Sydney NSW 2006, Australia {dvadas1, james}@it.usyd.edu.au Abstract Statistical parsing of noun phrase (NP) structure has been hampered by a lack of goldstandard data. This is a significant problem for CCGbank, where binary branching NP derivations are often incorrect, a result of the automatic conversion from the Penn Treebank. We correct these errors in CCGbank using a gold-standard corpus of NP structure, resulting in a much more accurate corpus. We also implement novel NER features that generalise the lexical information needed to parse NPs and provide important semantic information. Finally, evaluating against DepBank demonstrates the effectiveness of our modified corpus and novel features, with an increase in parser performance of 1.51%. 1 Introduction Internal noun phrase (NP) structure is not recovered by a number of widely-used parsers, e.g. Collins (2003). This is because their training data, the Penn Treebank (Marcus et al., 1993), does not fully annotate NP structure. The flat structure described by the Penn Treebank can be seen in this example: (NP (NN lung) (NN cancer) (NNS deaths)) CCGbank (Hockenmaier and Steedman, 2007) is the primary English corpus for Combinatory Categorial Grammar (CCG) (Steedman, 2000) and was created by a semi-automatic conversion from the Penn Treebank. However, CCG is a binary branching grammar, and as such, cannot leave NP structure underspecified. Instead, all NPs were made rightbranching, as shown in this example: (N (N/N lung) (N (N/N cancer) (N deaths) ) ) This structure is correct for most English NPs and is the best solution that doesn’t require manual reannotation. However, the resulting derivations often contain errors. This can be seen in the previous example, where lung cancer should form a constituent, but does not. The first contribution of this paper is to correct these CCGbank errors. We apply an automatic conversion process using the gold-standard NP data annotated by Vadas and Curran (2007a). Over a quarter of the sentences in CCGbank need to be altered, demonstrating the magnitude of the NP problem and how important it is that these errors are fixed. We then run a number of parsing experiments using our new version of the CCGbank corpus. In particular, we implement new features using NER tags from the BBN Entity Type Corpus (Weischedel and Brunstein, 2005). These features are targeted at improving the recovery of NP structure, increasing parser performance by 0.64% F-score. Finally, we evaluate against DepBank (King et al., 2003). This corpus annotates internal NP structure, and so is particularly relevant for the changes we have made to CCGbank. The CCG parser now recovers additional structure learnt from our NP corrected corpus, increasing performance by 0.92%. Applying the NER features results in a total increase of 1.51%. This work allows parsers trained on CCGbank to model NP structure accurately, and then pass this crucial information on to downstream systems. 
335 (a) (b) N N /N cotton N conj and N N /N acetate N fibers N N /N N /N cotton N /N [conj] conj and N /N acetate N fibers Figure 1: (a) Incorrect CCG derivation from Hockenmaier and Steedman (2007) (b) The correct derivation 2 Background Parsing of NPs is typically framed as NP bracketing, where the task is limited to discriminating between left and right-branching NPs of three nouns only: • (crude oil) prices – left-branching • world (oil prices) – right-branching Lauer (1995) presents two models to solve this problem: the adjacency model, which compares the association strength between words 1–2 to words 2–3; and the dependency model, which compares words 1–2 to words 1–3. Lauer (1995) experiments with a data set of 244 NPs, and finds that the dependency model is superior, achieving 80.7% accuracy. Most NP bracketing research has used Lauer’s data set. Because it is a very small corpus, most approaches have been unsupervised, measuring association strength with counts from a separate large corpus. Nakov and Hearst (2005) use search engine hit counts and extend the query set with typographical markers. This results in 89.3% accuracy. Recently, Vadas and Curran (2007a) annotated internal NP structure for the entire Penn Treebank, providing a large gold-standard corpus for NP bracketing. Vadas and Curran (2007b) carry out supervised experiments using this data set of 36,584 NPs, outperforming the Collins (2003) parser. The Vadas and Curran (2007a) annotation scheme inserts NML and JJP brackets to describe the correct NP structure, as shown below: (NP (NML (NN lung) (NN cancer) ) (NNS deaths) ) We use these brackets to determine new goldstandard CCG derivations in Section 3. 2.1 Combinatory Categorial Grammar Combinatory Categorial Grammar (CCG) (Steedman, 2000) is a type-driven, lexicalised theory of grammar. Lexical categories (also called supertags) are made up of basic atoms such as S (Sentence) and NP (Noun Phrase), which can be combined to form complex categories. For example, a transitive verb such as bought (as in IBM bought the company) would have the category: (S\NP)/NP. The slashes indicate the directionality of arguments, here two arguments are expected: an NP subject on the left; and an NP object on the right. Once these arguments are filled, a sentence is produced. Categories are combined using combinatory rules such as forward and backward application: X /Y Y ⇒ X (>) (1) Y X \Y ⇒ X (<) (2) Other rules such as composition and type-raising are used to analyse some linguistic constructions, while retaining the canonical categories for each word. This is an advantage of CCG, allowing it to recover long-range dependencies without the need for postprocessing, as is the case for many other parsers. In Section 1, we described the incorrect NP structures in CCGbank, but a further problem that highlights the need to improve NP derivations is shown in Figure 1. When a conjunction occurs in an NP, a non-CCG rule is required in order to reach a parse: conj N ⇒ N (3) This rule treats the conjunction in the same manner as a modifier, and results in the incorrect derivation shown in Figure 1(a). Our work creates the correct CCG derivation, shown in Figure 1(b), and removes the need for the grammar rule in (3). Honnibal and Curran (2007) have also made changes to CCGbank, aimed at better differentiating between complements and adjuncts. PropBank (Palmer et al., 2005) is used as a gold-standard to inform these decisions, similar to the way that we use the Vadas and Curran (2007a) data. 
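As a reference point for the NP-bracketing task sketched at the start of this section, the adjacency and dependency decisions of Lauer (1995) can be written down in a few lines. This is purely illustrative: the association scores below are invented, whereas real systems derive them from corpus or search-engine counts.

```python
# Lauer's two models for a three-noun compound (w1, w2, w3).
def bracket_adjacency(w1, w2, w3, assoc):
    # adjacency: compare the association of words 1-2 with that of words 2-3
    return 'left' if assoc(w1, w2) >= assoc(w2, w3) else 'right'

def bracket_dependency(w1, w2, w3, assoc):
    # dependency: compare the association of words 1-2 with that of words 1-3
    return 'left' if assoc(w1, w2) >= assoc(w1, w3) else 'right'

toy_scores = {('crude', 'oil'): 50.0, ('oil', 'prices'): 40.0, ('crude', 'prices'): 1.0}
assoc = lambda a, b: toy_scores.get((a, b), 0.0)

print(bracket_adjacency('crude', 'oil', 'prices', assoc))   # left  -> (crude oil) prices
print(bracket_dependency('crude', 'oil', 'prices', assoc))  # left
```

The supervised approach taken with the Vadas and Curran (2007a) data replaces such unsupervised association scores with a model trained on gold-standard brackets.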
336 (a) (b) (c) N N /N lung N N /N cancer N deaths N ??? ??? lung ??? cancer ??? deaths N N /N (N /N )/(N /N ) lung N /N cancer N deaths Figure 2: (a) Original right-branching CCGbank (b) Left-branching (c) Left-branching with new supertags 2.2 CCG parsing The C&C CCG parser (Clark and Curran, 2007b) is used to perform our experiments, and to evaluate the effect of the changes to CCGbank. The parser uses a two-stage system, first employing a supertagger (Bangalore and Joshi, 1999) to propose lexical categories for each word, and then applying the CKY chart parsing algorithm. A log-linear model is used to identify the most probable derivation, which makes it possible to add the novel features we describe in Section 4, unlike a PCFG. The C&C parser is evaluated on predicateargument dependencies derived from CCGbank. These dependencies are represented as 5-tuples: ⟨hf, f, s, ha, l⟩, where hf is the head of the predicate; f is the supertag of hf; s describes which argument of f is being filled; ha is the head of the argument; and l encodes whether the dependency is local or long-range. For example, the dependency encoding company as the object of bought (as in IBM bought the company) is represented by: ⟨bought, (S\NP1 )/NP2, 2, company, −⟩ (4) This is a local dependency, where company is filling the second argument slot, the object. 3 Conversion Process This section describes the process of converting the Vadas and Curran (2007a) data to CCG derivations. The tokens dominated by NML and JJP brackets in the source data are formed into constituents in the corresponding CCGbank sentence. We generate the two forms of output that CCGbank contains: AUTO files, which represent the tree structure of each sentence; and PARG files, which list the word–word dependencies (Hockenmaier and Steedman, 2005). We apply one preprocessing step on the Penn Treebank data, where if multiple tokens are enclosed by brackets, then a NML node is placed around those tokens. For example, we would insert the NML bracket shown below: (NP (DT a) (-LRB- -LRB-) (NML (RB very) (JJ negative) ) (-RRB- -RRB-) (NN reaction) ) This simple heuristic captures NP structure not explicitly annotated by Vadas and Curran (2007a). The conversion algorithm applies the following steps for each NML or JJP bracket: 1. Identify the CCGbank lowest spanning node, the lowest constituent that covers all of the words in the NML or JJP bracket; 2. flatten the lowest spanning node, to remove the right-branching structure; 3. insert new left-branching structure; 4. identify heads; 5. assign supertags; 6. generate new dependencies. As an example, we will follow the conversion process for the NML bracket below: (NP (NML (NN lung) (NN cancer) ) (NNS deaths) ) The corresponding lowest spanning node, which incorrectly has cancer deaths as a constituent, is shown in Figure 2(a). To flatten the node, we recursively remove brackets that partially overlap the NML bracket. Nodes that don’t overlap at all are left intact. This process results in a list of nodes (which may or may not be leaves), which in our example is [lung, cancer, deaths]. We then insert the correct left-branching structure, shown in Figure 2(b). At this stage, the supertags are still incomplete. Heads are then assigned using heuristics adapted from Hockenmaier and Steedman (2007). Since we are applying these to CCGbank NP structures rather than the Penn Treebank, the POS tag based heuristics are sufficient to determine heads accurately. 337 Finally, we assign supertags to the new structure. 
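A rough, self-contained sketch of steps 2, 3 and 5 for the running example is given below; the details of supertag assignment are spelled out immediately after. Trees are represented as nested lists, the rightmost element is assumed to be the head (sufficient for base NPs like this one), and dependency generation is omitted. The function names are ours, not the conversion tool's.

```python
def flatten(spanning_node_leaves):
    # step 2: the right-branching spanning node reduced to its list of leaves
    return list(spanning_node_leaves)

def insert_left_branching(leaves, nml_span):
    # step 3: group the tokens covered by the NML/JJP bracket into one constituent
    start, end = nml_span
    return [leaves[start:end]] + leaves[end:]

def assign_supertags(node, cat="N"):
    # step 5 (sketched): the head child keeps the parent's category and every
    # non-head sibling is treated as a forward adjunct of category cat/cat
    if isinstance(node, str):
        return (node, cat)
    *modifiers, head = node
    adjunct = f"({cat})/({cat})" if "/" in cat else f"{cat}/{cat}"
    return [assign_supertags(m, adjunct) for m in modifiers] + \
           [assign_supertags(head, cat)]

leaves = flatten(["lung", "cancer", "deaths"])
tree = insert_left_branching(leaves, (0, 2))     # [['lung', 'cancer'], 'deaths']
print(assign_supertags(tree))
# [[('lung', '(N/N)/(N/N)'), ('cancer', 'N/N')], ('deaths', 'N')]  -- cf. Figure 2(c)
```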
We want to make the minimal number of changes to the entire sentence derivation, and so the supertag of the dominating node is fixed. Categories are then propagated recursively down the tree. For a node with category X , its head child is also given the category X . The non-head child is always treated as an adjunct, and given the category X /X or X \X as appropriate. Figure 2(c) shows the final result of this step for our example. 3.1 Dependency generation The changes described so far have generated the new tree structure, but the last step is to generate new dependencies. We recursively traverse the tree, at each level creating a dependency between the heads of the left and right children. These dependencies are never long-range, and therefore easy to deal with. We may also need to change dependencies reaching from inside to outside the NP, if the head(s) of the NP have changed. In these cases we simply replace the old head(s) with the new one(s) in the relevant dependencies. The number of heads may change because we now analyse conjunctions correctly. In our example, the original dependencies were: ⟨lung, N /N1, 1, deaths, −⟩ (5) ⟨cancer, N /N1 , 1, deaths, −⟩ (6) while after the conversion process, (5) becomes: ⟨lung, (N /N1)/(N /N )2, 2, cancer, −⟩ (7) To determine that the conversion process worked correctly, we manually inspected its output for unique tree structures in Sections 00–07. This identified problem cases to correct, such as those described in the following section. 3.2 Exceptional cases Firstly, when the lowest spanning node covers the NML or JJP bracket exactly, no changes need to be made to CCGbank. These cases occur when CCGbank already received the correct structure during the original conversion process. For example, brackets separating a possessive from its possessor were detected automatically. A more complex case is conjunctions, which do not follow the simple head/adjunct method of assigning supertags. Instead, conjuncts are identified during the head-finding stage, and then assigned the supertag dominating the entire coordination. Intervening non-conjunct nodes are given the same category with the conj feature, resulting in a derivation that can be parsed with the standard CCGbank binary coordination rules: conj X ⇒ X[conj] (8) X X[conj] ⇒ X (9) The derivation in Figure 1(b) is produced by these corrections to coordination derivations. As a result, applications of the non-CCG rule shown in (3) have been reduced from 1378 to 145 cases. Some POS tags require special behaviour. Determiners and possessive pronouns are both usually given the supertag NP[nb]/N , and this should not be changed by the conversion process. Accordingly, we do not alter tokens with POS tags of DT and PRP$. Instead, their sibling node is given the category N and their parent node is made the head. The parent’s sibling is then assigned the appropriate adjunct category (usually NP\NP). Tokens with punctuation POS tags1 do not have their supertag changed either. Finally, there are cases where the lowest spanning node covers a constituent that should not be changed. For example, in the following NP: (NP (NML (NN lower) (NN court) ) (JJ final) (NN ruling) ) with the original CCGbank lowest spanning node: (N (N/N lower) (N (N/N court) (N (N/N final) (N ruling) ) ) ) the final ruling node should not be altered. It may seem trivial to process in this case, but consider a similarly structured NP: lower court ruling that the U.S. can bar the use of... 
Our minimalist approach avoids reanalysing the many linguistic constructions that can be dominated by NPs, as this would reinvent the creation of CCGbank. As a result, we only flatten those constituents that partially overlap the NML or JJP bracket. The existing structure and dependencies of other constituents are retained. Note that we are still converting every NML and JJP bracket, as even in the subordinate clause example, only the structure around lower court needs to be altered. 1period, comma, colon, and left and right bracket. 338 the world ’s largest aid donor NP[nb]/N N /N N NP\NP NP\NP NP\NP > N > NP < NP < NP < NP the world ’s largest aid donor NP[nb]/N N (NP[nb]/N )\NP N /N N /N N > > NP N < > NP[nb]/N N > NP (a) (b) Figure 3: CCGbank derivations for possessives # % Possessive 224 43.75 Left child contains DT/PRP$ 87 16.99 Couldn’t assign to non-leaf 66 12.89 Conjunction 35 6.84 Automatic conversion was correct 26 5.08 Entity with internal brackets 23 4.49 DT 22 4.30 NML/JJP bracket is an error 12 2.34 Other 17 3.32 Total 512 100.00 Table 1: Manual analysis 3.3 Manual annotation A handful of problems that occurred during the conversion process were corrected manually. The first indicator of a problem was the presence of a possessive. This is unexpected, because possessives were already bracketed properly when CCGbank was originally created (Hockenmaier, 2003, §3.6.4). Secondly, a non-flattened node should not be assigned a supertag that it did not already have. This is because, as described previously, a non-leaf node could dominate any kind of structure. Finally, we expect the lowest spanning node to cover only the NML or JJP bracket and one more constituent to the right. If it doesn’t, because of unusual punctuation or an incorrect bracket, then it may be an error. In all these cases, which occur throughout the corpus, we manually analysed the derivation and fixed any errors that were observed. 512 cases were flagged by this approach, or 1.90% of the 26,993 brackets converted to CCG. Table 1 shows the causes of these problems. The most common cause of errors was possessives, as the conversion process highlighted a number of instances where the original CCGbank analysis was incorrect. An example of this error can be seen in Figure 3(a), where the possessive doesn’t take any arguments. Instead, largest aid donor incorrectly modifies the NP one word at a time. The correct derivation after manual analysis is in (b). The second-most common cause occurs when there is apposition inside the NP. This can be seen in Figure 4. As there is no punctuation on which to coordinate (which is how CCGbank treats most appositions) the best derivation we can obtain is to have Victor Borge modify the preceding NP. The final step in the conversion process was to validate the corpus against the CCG grammar, first by those productions used in the existing CCGbank, and then against those actually licensed by CCG (with pre-existing ungrammaticalities removed). Sixteen errors were identified by this process and subsequently corrected by manual analysis. In total, we have altered 12,475 CCGbank sentences (25.5%) and 20,409 dependencies (1.95%). 4 NER features Named entity recognition (NER) provides information that is particularly relevant for NP parsing, simply because entities are nouns. For example, knowing that Air Force is an entity tells us that Air Force contract is a left-branching NP. 
Vadas and Curran (2007a) describe using NE tags during the annotation process, suggesting that NERbased features will be helpful in a statistical model. There has also been recent work combining NER and parsing in the biomedical field. Lewin (2007) experiments with detecting base-NPs using NER information, while Buyko et al. (2007) use a CRF to identify 339 a guest comedian Victor Borge NP[nb]/N N /N N /N N /N N > N > N > N > NP a guest comedian Victor Borge NP[nb]/N N /N N (NP\NP)/(NP\NP) NP\NP > > N NP\NP > NP < NP (a) (b) Figure 4: CCGbank derivations for apposition with DT coordinate structure in biological named entities. We draw NE tags from the BBN Entity Type Corpus (Weischedel and Brunstein, 2005), which describes 28 different entity types. These include the standard person, location and organization classes, as well person descriptions (generally occupations), NORP (National, Other, Religious or Political groups), and works of art. Some classes also have finer-grained subtypes, although we use only the coarse tags in our experiments. Clark and Curran (2007b) has a full description of the C&C parser’s pre-existing features, to which we have added a number of novel NER-based features. Many of these features generalise the head words and/or POS tags that are already part of the feature set. The results of applying these features are described in Sections 5.3 and 6. The first feature is a simple lexical feature, describing the NE tag of each token in the sentence. This feature, and all others that we describe here, are not active when the NE tag(s) are O, as there is no NER information from tokens that are not entities. The next group of features is based on the local tree (a parent and two child nodes) formed by every grammar rule application. We add a feature where the rule being applied is combined with the parent’s NE tag. For example, when joining two constituents2: ⟨five, CD, CARD, N /N ⟩and ⟨Europeans, NNPS, NORP, N ⟩, the feature is: N →N /N N + NORP as the head of the constituent is Europeans. In the same way, we implement features that combine the grammar rule with the child nodes. There are already features in the model describing each combination of the children’s head words and POS tags, which we extend to include combinations with 2These 4-tuples are the node’s head, POS, NE, and supertag. the NE tags. Using the same example as above, one of the new features would be: N →N /N N + CARD + NORP The last group of features is based on the NE category spanned by each constituent. We identify constituents that dominate tokens that all have the same NE tag, as these nodes will not cause a “crossing bracket” with the named entity. For example, the constituent Force contract, in the NP Air Force contract, spans two different NE tags, and should be penalised by the model. Air Force, on the other hand, only spans ORG tags, and should be preferred accordingly. We also take into account whether the constituent spans the entire named entity. Combining these nodes with others of different NE tags should not be penalised by the model, as the NE must combine with the rest of the sentence at some point. These NE spanning features are implemented as the grammar rule in combination with the parent node or the child nodes. For the former, one feature is active when the node spans the entire entity, and another is active in other cases. Similarly, there are four features for the child nodes, depending on whether neither, the left, the right or both nodes span the entire NE. 
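Putting the feature templates just described together, a schematic extractor over a single local tree might look as follows; the Air Force example worked through below corresponds to the second call. The tuple layout, function name, and string encoding are our own illustration and need not match the parser's internal feature representation.

```python
def ner_features(rule, parent_ne, left, right):
    """left/right are (head, pos, ne, spans_whole_entity) tuples for the children."""
    feats = []
    if parent_ne != 'O':
        feats.append(f"{rule} + {parent_ne}")              # rule combined with parent NE
    if left[2] != 'O' or right[2] != 'O':
        feats.append(f"{rule} + {left[2]} + {right[2]}")   # rule combined with child NEs
    # NE-span feature: which child spans its entire named entity
    span = {(True, True): 'BOTH', (True, False): 'LEFT',
            (False, True): 'RIGHT', (False, False): 'NEITHER'}[(left[3], right[3])]
    feats.append(f"{rule} + {span} + {left[2]} + {right[2]}")
    return feats

# "five Europeans": the parent is headed by Europeans (NORP)
print(ner_features("N -> N/N N", "NORP",
                   ("five", "CD", "CARD", True),
                   ("Europeans", "NNPS", "NORP", True)))

# "Air Force" (an ORG constituent spanning its whole entity) joining "contract" (O);
# the last feature produced is "N -> N/N N + LEFT + ORG + O", as in the example below.
print(ner_features("N -> N/N N", "O",
                   ("Force", "NNP", "ORG", True),
                   ("contract", "NN", "O", False)))
```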
As an example, if the Air Force constituent were being joined with contract, then the child feature would be: N →N /N N + LEFT + ORG + O assuming that there are more O tags to the right. 5 Experiments Our experiments are run with the C&C CCG parser (Clark and Curran, 2007b), and will evaluate the changes made to CCGbank, as well as the effectiveness of the NER features. We train on Sections 0221, and test on Section 00. 340 PREC RECALL F-SCORE Original 91.85 92.67 92.26 NP corrected 91.22 92.08 91.65 Table 2: Supertagging results PREC RECALL F-SCORE Original 85.34 84.55 84.94 NP corrected 85.08 84.17 84.63 Table 3: Parsing results with gold-standard POS tags 5.1 Supertagging Before we begin full parsing experiments, we evaluate on the supertagger alone. The supertagger is an important stage of the CCG parsing process, its results will affect performance in later experiments. Table 2 shows that F-score has dropped by 0.61%. This is not surprising, as the conversion process has increased the ambiguity of supertags in NPs. Previously, a bare NP could only have a sequence of N /N tags followed by a final N . There are now more complex possibilities, equal to the Catalan number of the length of the NP. 5.2 Initial parsing results We now compare parser performance on our NP corrected version of the corpus to that on original CCGbank. We are using the normal-form parser model and report labelled precision, recall and F-score for all dependencies. The results are shown in Table 3. The F-score drops by 0.31% in our new version of the corpus. However, this comparison is not entirely fair, as the original CCGbank test data does not include the NP structure that the NP corrected model is being evaluated on. Vadas and Curran (2007a) experienced a similar drop in performance on Penn Treebank data, and noted that the F-score for NML and JJP brackets was about 20% lower than the overall figure. We suspect that a similar effect is causing the drop in performance here. Unfortunately, there are no explicit NML and JJP brackets to evaluate on in the CCG corpus, and so an NP structure only figure is difficult to compute. Recall can be calculated by marking those dependencies altered in the conversion process, and evaluating only on them. Precision cannot be measured in this PREC RECALL F-SCORE Original 83.65 82.81 83.23 NP corrected 83.31 82.33 82.82 Table 4: Parsing results with automatic POS tags PREC RECALL F-SCORE Original 86.00 85.15 85.58 NP corrected 85.71 84.83 85.27 Table 5: Parsing results with NER features way, as NP dependencies remain undifferentiated in parser output. The result is a recall of 77.03%, which is noticeably lower than the overall figure. We have also experimented with using automatically assigned POS tags. These tags are accurate with an F-score of 96.34%, with precision 96.20% and recall 96.49%. Table 4 shows that, unsurprisingly, performance is lower without the goldstandard data. The NP corrected model drops an additional 0.1% F-score over the original model, suggesting that POS tags are particularly important for recovering internal NP structure. Evaluating NP dependencies only, in the same manner as before, results in a recall figure of 75.21%. 5.3 NER features results Table 5 shows the results of adding the NER features we described in Section 4. Performance has increased by 0.64% on both versions of the corpora. It is surprising that the NP corrected increase is not larger, as we would expect the features to be less effective on the original CCGbank. 
This is because incorrect right-branching NPs such as Air Force contract would introduce noise to the NER features. Table 6 presents the results of using automatically assigned POS and NE tags, i.e. parsing raw text. The NER tagger achieves 84.45% F-score on all non-O classes, with precision being 78.35% and recall 91.57%. We can see that parsing F-score has dropped by about 2% compared to using goldstandard POS and NER data, however, the NER features still improve performance by about 0.3%. 341 PREC RECALL F-SCORE Original 83.92 83.06 83.49 NP corrected 83.62 82.65 83.14 Table 6: Parsing results with automatic POS and NE tags 6 DepBank evaluation One problem with the evaluation in the previous section, is that the original CCGbank is not expected to recover internal NP structure, making its task easier and inflating its performance. To remove this variable, we carry out a second evaluation against the Briscoe and Carroll (2006) reannotation of DepBank (King et al., 2003), as described in Clark and Curran (2007a). Parser output is made similar to the grammatical relations (GRs) of the Briscoe and Carroll (2006) data, however, the conversion remains complex. Clark and Curran (2007a) report an upper bound on performance, using gold-standard CCGbank dependencies, of 84.76% F-score. This evaluation is particularly relevant for NPs, as the Briscoe and Carroll (2006) corpus has been annotated for internal NP structure. With our new version of CCGbank, the parser will be able to recover these GRs correctly, where before this was unlikely. Firstly, we show the figures achieved using goldstandard CCGbank derivations in Table 7. In the NP corrected version of the corpus, performance has increased by 1.02% F-score. This is a reversal of the results in Section 5, and demonstrates that correct NP structure improves parsing performance, rather than reduces it. Because of this increase to the upper bound of performance, we are now even closer to a true formalism-independent evaluation. We now move to evaluating the C&C parser itself and the improvement gained by the NER features. Table 8 show our results, with the NP corrected version outperforming original CCGbank by 0.92%. Using the NER features has also caused an increase in F-score, giving a total improvement of 1.51%. These results demonstrate how successful the correcting of NPs in CCGbank has been. Furthermore, the performance increase of 0.59% on the NP corrected corpus is more than the 0.25% increase on the original. This demonstrates that NER features are particularly helpful for NP structure. PREC RECALL F-SCORE Original 86.86 81.61 84.15 NP corrected 87.97 82.54 85.17 Table 7: DepBank gold-standard evaluation PREC RECALL F-SCORE Original 82.57 81.29 81.92 NP corrected 83.53 82.15 82.84 Original, NER 82.87 81.49 82.17 NP corrected, NER 84.12 82.75 83.43 Table 8: DepBank evaluation results 7 Conclusion The first contribution of this paper is the application of the Vadas and Curran (2007a) data to Combinatory Categorial Grammar. Our experimental results have shown that this more accurate representation of CCGbank’s NP structure increases parser performance. Our second major contribution is the introduction of novel NER features, a source of semantic information previously unused in parsing. As a result of this work, internal NP structure is now recoverable by the C&C parser, a result demonstrated by our total performance increase of 1.51% F-score. 
Even when parsing raw text, without gold standard POS and NER tags, our approach has resulted in performance gains. In addition, we have made possible further increases to NP structure accuracy. New features can now be implemented and evaluated in a CCG parsing context. For example, bigram counts from a very large corpus have already been used in NP bracketing, and could easily be applied to parsing. Similarly, additional supertagging features can now be created to deal with the increased ambiguity in NPs. Downstream NLP components can now exploit the crucial information in NP structure. Acknowledgements We would like to thank Mark Steedman and Matthew Honnibal for help with converting the NP data to CCG; and the anonymous reviewers for their helpful feedback. This work has been supported by the Australian Research Council under Discovery Project DP0665973. 342 References Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237–265. Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 41–48. Sydney, Australia. Ekaterina Buyko, Katrin Tomanek, and Udo Hahn. 2007. Resolution of coordination ellipses in biological named entities with conditional random fields. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING-2007), pages 163–171. Melbourne, Australia. Stephen Clark and James R. Curran. 2007a. Formalismindependent parser evaluation with CCG and DepBank. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL07), pages 248–255. Prague, Czech Republic. Stephen Clark and James R. Curran. 2007b. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. Julia Hockenmaier and Mark Steedman. 2005. CCGbank manual. Technical Report MS-CIS-05-09, Department of Computer and Information Science, University of Pennsylvania. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: a corpus of CCG derivations and dependency structures extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. Matthew Honnibal and James R. Curran. 2007. Improving the complement/adjunct distinction in CCGbank. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING-07), pages 210–217.Melbourne, Australia. Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M. Kaplan. 2003. The PARC700 dependency bank. In Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03). Budapest, Hungary. Mark Lauer. 1995. Corpus statistics meet the compound noun: Some empirical results. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 47–54. Cambridge, MA. Ian Lewin. 2007. BaseNPs that contain gene names: domain specificity and genericity. In Biological, translational, and clinical language processing workshop, pages 163–170. Prague, Czech Republic. Mitchell Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. 
Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Preslav Nakov and Marti Hearst. 2005. Search engine statistics beyond the n-gram: Application to noun compound bracketing. In Proceedings of the 9th Conference on Computational Natural Language Learning (CoNLL-05), pages 17–24. Ann Arbor, MI. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA. David Vadas and James R. Curran. 2007a. Adding noun phrase structure to the Penn Treebank. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL-07), pages 240–247. Prague, Czech Republic. David Vadas and James R. Curran. 2007b. Large-scale supervised models for noun phrase bracketing. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING-2007), pages 104–112. Melbourne, Australia. Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Technical report. 343
2008
39
Proceedings of ACL-08: HLT, pages 28–36, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics The Tradeoffs Between Open and Traditional Relation Extraction Michele Banko and Oren Etzioni Turing Center University of Washington Computer Science and Engineering Box 352350 Seattle, WA 98195, USA banko,[email protected] Abstract Traditional Information Extraction (IE) takes a relation name and hand-tagged examples of that relation as input. Open IE is a relationindependent extraction paradigm that is tailored to massive and heterogeneous corpora such as the Web. An Open IE system extracts a diverse set of relational tuples from text without any relation-specific input. How is Open IE possible? We analyze a sample of English sentences to demonstrate that numerous relationships are expressed using a compact set of relation-independent lexico-syntactic patterns, which can be learned by an Open IE system. What are the tradeoffs between Open IE and traditional IE? We consider this question in the context of two tasks. First, when the number of relations is massive, and the relations themselves are not pre-specified, we argue that Open IE is necessary. We then present a new model for Open IE called O-CRF and show that it achieves increased precision and nearly double the recall than the model employed by TEXTRUNNER, the previous stateof-the-art Open IE system. Second, when the number of target relations is small, and their names are known in advance, we show that O-CRF is able to match the precision of a traditional extraction system, though at substantially lower recall. Finally, we show how to combine the two types of systems into a hybrid that achieves higher precision than a traditional extractor, with comparable recall. 1 Introduction Relation Extraction (RE) is the task of recognizing the assertion of a particular relationship between two or more entities in text. Typically, the target relation (e.g., seminar location) is given to the RE system as input along with hand-crafted extraction patterns or patterns learned from hand-labeled training examples (Brin, 1998; Riloff and Jones, 1999; Agichtein and Gravano, 2000). Such inputs are specific to the target relation. Shifting to a new relation requires a person to manually create new extraction patterns or specify new training examples. This manual labor scales linearly with the number of target relations. In 2007, we introduced a new approach to the RE task, called Open Information Extraction (Open IE), which scales RE to the Web. An Open IE system extracts a diverse set of relational tuples without requiring any relation-specific human input. Open IE’s extraction process is linear in the number of documents in the corpus, and constant in the number of relations. Open IE is ideally suited to corpora such as the Web, where the target relations are not known in advance, and their number is massive. The relationship between standard RE systems and the new Open IE paradigm is analogous to the relationship between lexicalized and unlexicalized parsers. Statistical parsers are usually lexicalized (i.e. they make parsing decisions based on n-gram statistics computed for specific lexemes). However, Klein and Manning (2003) showed that unlexicalized parsers are more accurate than previously believed, and can be learned in an unsupervised manner. Klein and Manning analyze the tradeoffs be28 tween the two approaches to parsing and argue that state-of-the-art parsing will benefit from employing both approaches in concert. 
In this paper, we examine the tradeoffs between relation-specific (“lexicalized”) extraction and relation-independent (“unlexicalized”) extraction and reach an analogous conclusion. Is it, in fact, possible to learn relation-independent extraction patterns? What do they look like? We first consider the task of open extraction, in which the goal is to extract relationships from text when their number is large and identity unknown. We then consider the targeted extraction task, in which the goal is to locate instances of a known relation. How does the precision and recall of Open IE compare with that of relation-specific extraction? Is it possible to combine Open IE with a “lexicalized” RE system to improve performance? This paper addresses the questions raised above and makes the following contributions: • We present O-CRF, a new Open IE system that uses Conditional Random Fields, and demonstrate its ability to extract a variety of relations with a precision of 88.3% and recall of 45.2%. We compare O-CRF to O-NB, the extraction model previously used by TEXTRUNNER (Banko et al., 2007), a state-of-the-art Open IE system. We show that O-CRF achieves a relative gain in F-measure of 63% over O-NB. • We provide a corpus-based characterization of how binary relationships are expressed in English to demonstrate that learning a relationindependent extractor is feasible, at least for the English language. • In the targeted extraction case, we compare the performance of O-CRF to a traditional RE system and find that without any relation-specific input, O-CRF obtains the same precision with lower recall compared to a lexicalized extractor trained using hundreds, and sometimes thousands, of labeled examples per relation. • We present H-CRF, an ensemble-based extractor that learns to combine the output of the lexicalized and unlexicalized RE systems and achieves a 10% relative increase in precision with comparable recall over traditional RE. The remainder of this paper is organized as follows. Section 2 assesses the promise of relationindependent extraction for the English language by characterizing how a sample of relations is expressed in text. Section 3 describes O-CRF, a new Open IE system, as well as R1-CRF, a standard RE system; a hybrid RE system is then presented in Section 4. Section 5 reports on our experimental results. Section 6 considers related work, which is then followed by a discussion of future work. 2 The Nature of Relations in English How are relationships expressed in English sentences? In this section, we show that many relationships are consistently expressed using a compact set of relation-independent lexico-syntactic patterns, and quantify their frequency based on a sample of 500 sentences selected at random from an IE training corpus developed by (Bunescu and Mooney, 2007).1 This observation helps to explain the success of open relation extraction, which learns a relation-independent extraction model as described in Section 3.1. Previous work has noted that distinguished relations, such as hypernymy (is-a) and meronymy (part-whole), are often expressed using a small number of lexico-syntactic patterns (Hearst, 1992). 
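As a concrete illustration, lexico-syntactic patterns of this kind can be written down as surface templates and matched directly against text. The Python sketch below pairs a Hearst-style is-a pattern with a relation-independent "E1 Verb E2" template; the regular expressions, function name, and example sentences are our own toy stand-ins for exposition, and the categorization reported in this section was of course performed over labeled, parsed data rather than with regular expressions.

    import re

    # Illustrative surface patterns (ours, not taken from any system discussed here):
    # a relation-specific Hearst pattern for is-a, and a relation-independent "E1 Verb E2" pattern.
    HEARST_SUCH_AS = re.compile(r"(\w+(?: \w+)?) such as (\w+(?: \w+)?)")
    E1_VERB_E2 = re.compile(r"([A-Z]\w+) (\w+ed) ([A-Z]\w+)")  # crude: past-tense verb between two one-word entities

    def match_patterns(sentence):
        """Return (pattern name, arg1, relation, arg2) tuples found by the two patterns."""
        found = []
        for m in HEARST_SUCH_AS.finditer(sentence):
            found.append(("such-as", m.group(2), "is-a", m.group(1)))
        for m in E1_VERB_E2.finditer(sentence):
            found.append(("E1 Verb E2", m.group(1), m.group(2), m.group(3)))
        return found

    print(match_patterns("Smith established Acme."))   # [('E1 Verb E2', 'Smith', 'established', 'Acme')]
    print(match_patterns("companies such as Acme"))    # [('such-as', 'Acme', 'is-a', 'companies')]
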
The manual identification of these patterns inspired a body of work in which this initial set of extraction patterns is used to seed a bootstrapping process that automatically acquires additional patterns for is-a or part-whole relations (Etzioni et al., 2005; Snow et al., 2005; Girju et al., 2006), It is quite natural then to consider whether the same can be done for all binary relationships. To characterize how binary relationships are expressed, one of the authors of this paper carefully studied the labeled relation instances and produced a lexico-syntactic pattern that captured the relation for each instance. Interestingly, we found that 95% of the patterns could be grouped into the categories listed in Table 1. Note, however, that the patterns shown in Table 1 are greatly simplified by omitting the exact conditions under which they will reliably produce a correct extraction. For instance, while many relationships are indicated strictly by a verb, 1For simplicity, we restrict our study to binary relationships. 29 Simplified Relative Lexico-Syntactic Frequency Category Pattern 37.8 Verb E1 Verb E2 X established Y 22.8 Noun+Prep E1 NP Prep E2 X settlement with Y 16.0 Verb+Prep E1 Verb Prep E2 X moved to Y 9.4 Infinitive E1 to Verb E2 X plans to acquire Y 5.2 Modifier E1 Verb E2 Noun X is Y winner 1.8 Coordinaten E1 (and|,|-|:) E2 NP X-Y deal 1.0 Coordinatev E1 (and|,) E2 Verb X , Y merge 0.8 Appositive E1 NP (:|,)? E2 X hometown : Y Table 1: Taxonomy of Binary Relationships: Nearly 95% of 500 randomly selected sentences belongs to one of the eight categories above. detailed contextual cues are required to determine, exactly which, if any, verb observed in the context of two entities is indicative of a relationship between them. In the next section, we show how we can use a Conditional Random Field, a model that can be described as a finite state machine with weighted transitions, to learn a model of how binary relationships are expressed in English. 3 Relation Extraction Given a relation name, labeled examples of the relation, and a corpus, traditional Relation Extraction (RE) systems output instances of the given relation found in the corpus. In the open extraction task, relation names are not known in advance. The sole input to an Open IE system is a corpus, along with a small set of relation-independent heuristics, which are used to learn a general model of extraction for all relations at once. The task of open extraction is notably more difficult than the traditional formulation of RE for several reasons. First, traditional RE systems do not attempt to extract the text that signifies a relation in a sentence, since the relation name is given. In contrast, an Open IE system has to locate both the set of entities believed to participate in a relation, and the salient textual cues that indicate the relation among them. Knowledge extracted by an open system takes the form of relational tuples (r, e1, . . . , en) that contain two or more entities e1, . . . , en, and r, the name of the relationship among them. For example, from the sentence, “Microsoft is headquartered in beautiful Redmond”, we expect to extract (is headquartered in, Microsoft, Redmond). Moreover, following extraction, the system must identify exactly which relation strings r correspond to a general relation of interest. 
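The tuple representation just described can be made concrete with a small sketch. The class and field names below are an illustration of the (r, e1, ..., en) form and of the subsequent grouping of relation strings into a target relation; they are not code from TEXTRUNNER or O-CRF.

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class RelationalTuple:
        """An open extraction (r, e1, ..., en): a relation string plus its entity arguments."""
        relation: str               # text naming the relation, e.g. "is headquartered in"
        arguments: Tuple[str, ...]  # entity strings, e.g. ("Microsoft", "Redmond")

    # The example from the text: "Microsoft is headquartered in beautiful Redmond"
    t = RelationalTuple("is headquartered in", ("Microsoft", "Redmond"))

    # Following extraction, relation strings must still be mapped to a general relation
    # of interest; an (illustrative) synonym table for HEADQUARTERS(X, Y):
    canonical = {"is headquartered in": "HEADQUARTERS", "is based in": "HEADQUARTERS"}
    print(canonical.get(t.relation), t.arguments)   # HEADQUARTERS ('Microsoft', 'Redmond')
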
To ensure high-levels of coverage on a perrelation basis, we need, for example to deduce that “ ’s headquarters in”, “is headquartered in” and “is based in” are different ways of expressing HEADQUARTERS(X,Y). Second, a relation-independent extraction process makes it difficult to leverage the full set of features typically used when performing extraction one relation at a time. For instance, the presence of the words company and headquarters will be useful in detecting instances of the HEADQUARTERS(X,Y) relation, but are not useful features for identifying relations in general. Finally, RE systems typically use named-entity types as a guide (e.g., the second argument to HEADQUARTERS should be a LOCATION). In Open IE, the relations are not known in advance, and neither are their argument types. The unique nature of the open extraction task has led us to develop O-CRF, an open extraction system that uses the power of graphical models to identify relations in text. The remainder of this section describes O-CRF, and compares it to the extraction model employed by TEXTRUNNER, the first Open IE system (Banko et al., 2007). We then describe R1-CRF, a RE system that can be applied in a typical one-relation-at-a-time setting. 3.1 Open Extraction with Conditional Random Fields TEXTRUNNER initially treated Open IE as a classification problem, using a Naive Bayes classifier to predict whether heuristically-chosen tokens between two entities indicated a relationship or not. For the remainder of this paper, we refer to this model as O-NB. Whereas classifiers predict the label of a single variable, graphical models model multiple, in30 K a f k a E N T O E N T O E N T B R E L I R E L , P r a g u e a w r i t e r b o r n i n Figure 1: Relation Extraction as Sequence Labeling: A CRF is used to identify the relationship, born in, between Kafka and Prague terdependent variables. Conditional Random Fields (CRFs) (Lafferty et al., 2001), are undirected graphical models trained to maximize the conditional probability of a finite set of labels Y given a set of input observations X. By making a first-order Markov assumption about the dependencies among the output variables Y , and arranging variables sequentially in a linear chain, RE can be treated as a sequence labeling problem. Linear-chain CRFs have been applied to a variety of sequential text processing tasks including named-entity recognition, part-of-speech tagging, word segmentation, semantic role identification, and recently relation extraction (Culotta et al., 2006). 3.1.1 Training As with O-NB, O-CRF’s training process is selfsupervised. O-CRF applies a handful of relationindependent heuristics to the PennTreebank and obtains a set of labeled examples in the form of relational tuples. The heuristics were designed to capture dependencies typically obtained via syntactic parsing and semantic role labelling. For example, a heuristic used to identify positive examples is the extraction of noun phrases participating in a subjectverb-object relationship, e.g., “<Einstein> received <the Nobel Prize> in 1921.” An example of a heuristic that locates negative examples is the extraction of objects that cross the boundary of an adverbial clause, e.g. 
“He studied <Einstein’s work> when visiting <Germany>.” The resulting set of labeled examples are described using features that can be extracted without syntactic or semantic analysis and used to train a CRF, a sequence model that learns to identify spans of tokens believed to indicate explicit mentions of relationships between entities. O-CRF first applies a phrase chunker to each document, and treats the identified noun phrases as candidate entities for extraction. Each pair of entities appearing no more than a maximum number of words apart and their surrounding context are considered as possible evidence for RE. The entity pair serves to anchor each end of a linear-chain CRF, and both entities in the pair are assigned a fixed label of ENT. Tokens in the surrounding context are treated as possible textual cues that indicate a relation, and can be assigned one of the following labels: B-REL, indicating the start of a relation, I-REL, indicating the continuation of a predicted relation, or O, indicating the token is not believed to be part of an explicit relationship. An illustration is given in Figure 1. The set of features used by O-CRF is largely similar to those used by O-NB and other stateof-the-art relation extraction systems, They include part-of-speech tags (predicted using a separately trained maximum-entropy model), regular expressions (e.g.detecting capitalization, punctuation, etc.), context words, and conjunctions of features occurring in adjacent positions within six words to the left and six words to the right of the current word. A unique aspect of O-CRF is that O-CRF uses context words belonging only to closed classes (e.g. prepositions and determiners) but not function words such as verbs or nouns. Thus, unlike most RE systems, O-CRF does not try to recognize semantic classes of entities. O-CRF has a number of limitations, most of which are shared with other systems that perform extraction from natural language text. First, O-CRF only extracts relations that are explicitly mentioned in the text; implicit relationships that could inferred from the text would need to be inferred from OCRF extractions. Second, O-CRF focuses on relationships that are primarily word-based, and not indicated solely from punctuation or document-level features. Finally, relations must occur between entity names within the same sentence. O-CRF was built using the CRF implementation provided by MALLET (McCallum, 2002), as well as part-of-speech tagging and phrase-chunking tools available from OPENNLP.2 2http://opennlp.sourceforge.net 31 3.1.2 Extraction Given an input corpus, O-CRF makes a single pass over the data, and performs entity identification using a phrase chunker. The CRF is then used to label instances relations for each possible entity pair, subject to the constraints mentioned previously. Following extraction, O-CRF applies the RESOLVER algorithm (Yates and Etzioni, 2007) to find relation synonyms, the various ways in which a relation is expressed in text. RESOLVER uses a probabilistic model to predict if two strings refer to the same item, based on relational features, in an unsupervised manner. In Section 5.2 we report that RESOLVER boosts the recall of O-CRF by 50%. 3.2 Relation-Specific Extraction To compare the behavior of open, or “unlexicalized,” extraction to relation-specific, or “lexicalized” extraction, we developed a CRF-based extractor under the traditional RE paradigm. We refer to this system as R1-CRF. 
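Both extractors rest on the sequence-labeling view of Figure 1. The fragment below sketches that view on the Kafka example, together with an O-CRF-style feature function restricted to POS tags, shape cues, and closed-class context words; the helper names and the (much smaller) context window are our own simplifications, not the MALLET-based implementation.

    # Tokenized example from Figure 1; the entity pair anchors the ends of the chain.
    tokens = ["Kafka", ",", "a", "writer", "born", "in", "Prague"]
    pos    = ["NNP",   ",", "DT", "NN",    "VBN",   "IN",    "NNP"]
    labels = ["ENT",   "O", "O",  "O",     "B-REL", "I-REL", "ENT"]

    CLOSED_CLASS = {"a", "an", "the", "in", "of", "to", "by", "and", ","}  # illustrative subset

    def token_features(i):
        """Features for position i: POS tag, a capitalization cue, closed-class words only
        (no nouns or verbs), and adjacent-position conjunctions (the paper uses a window
        of six words on either side)."""
        feats = {"pos=" + pos[i], "cap=" + str(tokens[i][0].isupper())}
        if tokens[i].lower() in CLOSED_CLASS:
            feats.add("word=" + tokens[i].lower())
        for j in (i - 1, i + 1):
            if 0 <= j < len(tokens):
                feats.add("pos[%+d]=%s" % (j - i, pos[j]))
        return feats

    # A trained linear-chain CRF scores label sequences over such features; the gold
    # sequence above recovers the tuple (born in, Kafka, Prague).
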
Although the graphical structure of R1-CRF is the same as O-CRF R1-CRF differs in a few ways. A given relation R is specified a priori, and R1-CRF is trained from hand-labeled positive and negative instances of R. The extractor is also permitted to use all lexical features, and is not restricted to closedclass words as is O-CRF. Since R is known in advance, if R1-CRF outputs a tuple at extraction time, the tuple is believed to be an instance of R. 4 Hybrid Relation Extraction Since O-CRF and R1-CRF have complementary views of the extraction process, it is natural to wonder whether they can be combined to produce a more powerful extractor. In many machine learning settings, the use of an ensemble of diverse classifiers during prediction has been observed to yield higher levels of performance compared to individual algorithms. We now describe an ensemble-based or hybrid approach to RE that leverages the different views offered by open, self-supervised extraction in O-CRF, and lexicalized, supervised extraction in R1-CRF. 4.1 Stacking Stacked generalization, or stacking, (Wolpert, 1992), is an ensemble-based framework in which the goal is learn a meta-classifier from the output of several base-level classifiers. The training set used to train the meta-classifier is generated using a leaveone-out procedure: for each base-level algorithm, a classifier is trained from all but one training example and then used to generate a prediction for the leftout example. The meta-classifier is trained using the predictions of the base-level classifiers as features, and the true label as given by the training data. Previous studies (Ting and Witten, 1999; Zenko and Dzeroski, 2002; Sigletos et al., 2005) have shown that the probabilities of each class value as estimated by each base-level algorithm are effective features when training meta-learners. Stacking was shown to be consistently more effective than voting, another popular ensemble-based method in which the outputs of the base-classifiers are combined either through majority vote or by taking the class value with the highest average probability. 4.2 Stacked Relation Extraction We used the stacking methodology to build an ensemble-based extractor, referred to as H-CRF. Treating the output of an O-CRF and R1-CRF as black boxes, H-CRF learns to predict which, if any, tokens found between a pair of entities (e1, e2), indicates a relationship. Due to the sequential nature of our RE task, H-CRF employs a CRF as the metalearner, as opposed to a decision tree or regressionbased classifier. H-CRF uses the probability distribution over the set of possible labels according to each O-CRF and R1-CRF as features. To obtain the probability at each position of a linear-chain CRF, the constrained forward-backward technique described in (Culotta and McCallum, 2004) is used. H-CRF also computes the Monge Elkan distance (Monge and Elkan, 1996) between the relations predicted by O-CRF and R1CRF and includes the result in the feature set. An additional meta-feature utilized by H-CRF indicates whether either or both base extractors return “no relation” for a given pair of entities. In addition to these numeric features, H-CRF uses a subset of the base features used by O-CRF and R1-CRF. 
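A rough sketch of the meta-feature construction described here is given below. The per-position label distributions, the string-similarity score, and the "no relation" indicators are the features named in the text; the function signature, the use of plain dictionaries in place of true CRF marginals, and the feature names are our own simplification.

    def meta_features(p_open, p_lex, rel_open, rel_lex, monge_elkan):
        """Stacking features for one token position between an entity pair (e1, e2).

        p_open, p_lex     -- dicts mapping labels (ENT, B-REL, I-REL, O) to marginal
                             probabilities from O-CRF and R1-CRF (obtained in the real
                             system via constrained forward-backward).
        rel_open, rel_lex -- relation strings predicted by each base extractor,
                             or None for "no relation".
        monge_elkan       -- similarity between the two predicted relation strings.
        """
        feats = {}
        for label, p in p_open.items():
            feats["ocrf_p(%s)" % label] = p
        for label, p in p_lex.items():
            feats["r1crf_p(%s)" % label] = p
        feats["monge_elkan"] = monge_elkan
        feats["ocrf_no_rel"] = float(rel_open is None)
        feats["r1crf_no_rel"] = float(rel_lex is None)
        # The word and part-of-speech base features that H-CRF also reuses are
        # described in the text below and omitted from this sketch.
        return feats
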
At each 32 O-CRF O-NB Category P R F1 P R F1 Verb 93.9 65.1 76.9 100 38.6 55.7 Noun+Prep 89.1 36.0 51.3 100 9.7 55.7 Verb+Prep 95.2 50.0 65.6 95.2 25.3 40.0 Infinitive 95.7 46.8 62.9 100 25.5 40.6 Other 0 0 0 0 0 0 All 88.3 45.2 59.8 86.6 23.2 36.6 Table 2: Open Extraction by Relation Category. O-CRF outperforms O-NB, obtaining nearly double its recall and increased precision. O-CRF’s gains are partly due to its lower false positive rate for relationships categorized as “Other.” given position i between e1 and e2, the presence of the word observed at i as a feature, as well as the presence of the part-of-speech-tag at i. 5 Experimental Results The following experiments demonstrate the benefits of Open IE for two tasks: open extraction and targeted extraction. Section 5.1, assesses the ability of O-CRF to locate instances of relationships when the number of relationships is large and their identity is unknown. We show that without any relation-specific input, OCRF extracts binary relationships with high precision and a recall that nearly doubles that of O-NB. Sections 5.2 and 5.3 compare O-CRF to traditional and hybrid RE when the goal is to locate instances of a small set of known target relations. We find that while single-relation extraction, as embodied by R1-CRF, achieves comparatively higher levels of recall, it takes hundreds, and sometimes thousands, of labeled examples per relation, for R1CRF to approach the precision obtained by O-CRF, which is self-trained without any relation-specific input. We also show that the combination of unlexicalized, open extraction in O-CRF and lexicalized, supervised extraction in R1-CRF improves precision and F-measure compared to a standalone RE system. 5.1 Open Extraction This section contrasts the performance of O-CRF with that of O-NB on an Open IE task, and shows that O-CRF achieves both double the recall and increased precision relative to O-NB. For this experiment, we used the set of 500 sentences3 described in Section 2. Both IE systems were designed and trained prior to the examination of the sample sentences; thus the results on this sentence sample provide a fair measurement of their performance. While the TEXTRUNNER system was previously found to extract over 7.5 million tuples from a corpus of 9 million Web pages, these experiments are the first to assess its true recall over a known set of relational tuples. As reported in Table 2, O-CRF extracts relational tuples with a precision of 88.3% and a recall of 45.2%. O-CRF achieves a relative gain in F1 of 63.4% over the O-NB model employed by TEXTRUNNER, which obtains a precision of 86.6% and a recall of 23.2%. The recall of O-CRF nearly doubles that of O-NB. O-CRF is able to extract instances of the four most frequently observed relation types – Verb, Noun+Prep, Verb+Prep and Infinitive. Three of the four remaining types – Modifier, Coordinaten and Coordinatev – which comprise only 8% of the sample, are not handled due to simplifying assumptions made by both O-CRF and O-NB that tokens indicating a relation occur between entity mentions in the sentence. 5.2 O-CRF vs. R1-CRF Extraction To compare performance of the extractors when a small set of target relationships is known in advance, we used labeled data for four different relations – corporate acquisitions, birthplaces, inventors of products and award winners. The first two datasets were collected from the Web, and made available by Bunescu and Mooney (2007). 
To augment the size of our corpus, we used the same technique to collect data for two additional relations, and manually labelled positive and negative instances by hand over all collections. For each of the four relations in our collection, we trained R1-CRF from labeled training data, and ran each of R1-CRF and O-CRF over the respective test sets, and compared the precision and recall of all tuples output by each system. Table 3 shows that from the start, O-CRF achieves a high level of precision – 75.0% – without any 3Available at http://www.cs.washington.edu/research/ knowitall/hlt-naacl08-data.txt 33 O-CRF R1-CRF Relation P R P R Train Ex Acquisition 75.6 19.5 67.6 69.2 3042 Birthplace 90.6 31.1 92.3 64.4 1853 InventorOf 88.0 17.5 81.3 50.8 682 WonAward 62.5 15.3 73.6 52.8 354 All 75.0 18.4 73.9 58.4 5930 Table 3: Precision (P) and Recall (R) of O-CRF and R1CRF. O-CRF R1-CRF Relation P R P R Train Ex Acquisition 75.6 19.5 67.6 69.2 3042∗ Birthplace 90.6 31.1 92.3 53.3 600 InventorOf 88.0 17.5 81.3 50.8 682∗ WonAward 62.5 15.3 65.4 61.1 50 All 75.0 18.4 70.17 60.7 >4374 Table 4: For 4 relations, a minimum of 4374 hand-tagged examples is needed for R1-CRF to approximately match the precision of O-CRF for each relation. A “∗” indicates the use of all available training data; in these cases, R1CRF was unable to match the precision of O-CRF. relation-specific data. Using labeled training data, the R1-CRF system achieves a slightly lower precision of 73.9%. Exactly how many training examples per relation does it take R1-CRF to achieve a comparable level of precision? We varied the number of training examples given to R1-CRF, and found that in 3 out of 4 cases it takes hundreds, if not thousands of labeled examples for R1-CRF to achieve acceptable levels of precision. In two cases – acquisitions and inventions – R1-CRF is unable to match the precision of O-CRF, even with many labeled examples. Table 4 summarizes these findings. Using labeled data, R1-CRF obtains a recall of 58.4%, compared to O-CRF, whose recall is 18.4%. A large number of false negatives on the part of OCRF can be attributed to its lack of lexical features, which are often crucial when part-of-speech tagging errors are present. For instance, in the sentence, “Yahoo To Acquire Inktomi”, “Acquire” is mistaken for a proper noun, and sufficient evidence of the existence of a relationship is absent. The lexicalized R1CRF extractor is able to recover from this error; the presence of the word “Acquire” is enough to recogR1-CRF Hybrid Relation P R F1 P R F1 Acquisition 67.6 69.2 68.4 76.0 67.5 71.5 Birthplace 93.6 64.4 76.3 96.5 62.2 75.6 InventorOf 81.3 50.8 62.5 87.5 52.5 65.6 WonAward 73.6 52.8 61.5 75.0 50.0 60.0 All 73.9 58.4 65.2 79.2 56.9 66.2 Table 5: A hybrid extractor that uses O-CRF improves precision for all relations, at a small cost to recall. nize the positive instance, despite the incorrect partof-speech tag. Another source of recall issues facing O-CRF is its ability to discover synonyms for a given relation. We found that while RESOLVER improves the relative recall of O-CRF by nearly 50%, O-CRF locates fewer synonyms per relation compared to its lexicalized counterpart. With RESOLVER, O-CRF finds an average of 6.5 synonyms per relation compared to R1-CRF’s 16.25. In light of our findings, the relative tradeoffs of open versus traditional RE are as follows. Open IE automatically offers a high level of precision without requiring manual labor per relation, at the expense of recall. 
When relationships in a corpus are not known, or their number is massive, Open IE is essential for RE. When higher levels of recall are desirable for a small set of target relations, traditional RE is more appropriate. However, in this case, one must be willing to undertake the cost of acquiring labeled training data for each relation, either via a computational procedure such as bootstrapped learning or by the use of human annotators. 5.3 Hybrid Extraction In this section, we explore the performance of HCRF, an ensemble-based extractor that learns to perform RE for a set of known relations based on the individual behaviors of O-CRF and R1-CRF. As shown in Table 5, the use of O-CRF as part of H-CRF, improves precision from 73.9% to 79.2% with only a slight decrease in recall. Overall, F1 improved from 65.2% to 66.2%. One disadvantage of a stacking-based hybrid system is that labeled training data is still required. In the future, we would like to explore the development of hybrid systems that leverage Open IE methods, 34 like O-CRF, to reduce the number of training examples required per relation. 6 Related Work TEXTRUNNER, the first Open IE system, is part of a body of work that reflects a growing interest in avoiding relation-specificity during extraction. Sekine (2006) developed a paradigm for “ondemand information extraction” in order to reduce the amount of effort involved when porting IE systems to new domains. Shinyama and Sekine’s “preemptive” IE system (2006) discovers relationships from sets of related news articles. Until recently, most work in RE has been carried out on a per-relation basis. Typically, RE is framed as a binary classification problem: Given a sentence S and a relation R, does S assert R between two entities in S? Representative approaches include (Zelenko et al., 2003) and (Bunescu and Mooney, 2005), which use support-vector machines fitted with language-oriented kernels to classify pairs of entities. Roth and Yih (2004) also described a classification-based framework in which they jointly learn to identify named entities and relations. Culotta et al. (2006) used a CRF for RE, yet their task differs greatly from open extraction. RE was performed from biographical text in which the topic of each document was known. For every entity found in the document, their goal was to predict what relation, if any, it had relative to the page topic, from a set of given relations. Under these restrictions, RE became an instance of entity labeling, where the label assigned to an entity (e.g. Father) is its relation to the topic of the article. Others have also found the stacking framework to yield benefits for IE. Freitag (2000) used linear regression to model the relationship between the confidence of several inductive learning algorithms and the probability that a prediction is correct. Over three different document collections, the combined method yielded improvements over the best individual learner for all but one relation. The efficacy of ensemble-based methods for extraction was further investigated by (Sigletos et al., 2005), who experimented with combining the outputs of a rule-based learner, a Hidden Markov Model and a wrapperinduction algorithm in five different domains. Of a variety ensemble-based methods, stacking proved to consistently outperform the best base-level system, obtaining more precise results at the cost of somewhat lower recall. 
(Feldman et al., 2005) demonstrated that a hybrid extractor composed of a statistical and knowledge-based models outperform either in isolation. 7 Conclusions and Future Work Our experiments have demonstrated the promise of relation-independent extraction using the Open IE paradigm. We have shown that binary relationships can be categorized using a compact set of lexicosyntactic patterns, and presented O-CRF, a CRFbased Open IE system that can extract different relationships with a precision of 88.3% and a recall of 45.2%4. Open IE is essential when the number of relationships of interest is massive or unknown. Traditional IE is more appropriate for targeted extraction when the number of relations of interest is small and one is willing to incur the cost of acquiring labeled training data. Compared to traditional IE, the recall of our Open IE system is admittedly lower. However, in a targeted extraction scenario, Open IE can still be used to reduce the number of hand-labeled examples. As Table 4 shows, numerous hand-labeled examples (ranging from 50 for one relation to over 3,000 for another) are necessary to match the precision of O-CRF. In the future, O-CRF’s recall may be improved by enhancements to its ability to locate the various ways in which a given relation is expressed. We also plan to explore the capacity of Open IE to automatically provide labeled training data, when traditional relation extraction is a more appropriate choice. Acknowledgments This research was supported in part by NSF grants IIS-0535284 and IIS-0312988, ONR grant N0001408-1-0431 as well as gifts from Google, and carried out at the University of Washington’s Turing Center. Doug Downey, Stephen Soderland and Dan Weld provided helpful comments on previous drafts. 4The TEXTRUNNER Open IE system now indexes extractions found by O-CRF from millions of Web pages, and is located at http://www.cs.washington.edu/research/textrunner 35 References E. Agichtein and L. Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Procs. of the Fifth ACM International Conference on Digital Libraries. M. Banko, M. Cararella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open information extraction from the web. In Procs. of IJCAI. S. Brin. 1998. Extracting Patterns and Relations from the World Wide Web. In WebDB Workshop at 6th International Conference on Extending Database Technology, EDBT’98, pages 172–183, Valencia, Spain. R. Bunescu and R. Mooney. 2005. Subsequence kernels for relation extraction. In In Procs. of Neural Information Processing Systems. R. Bunescu and R. Mooney. 2007. Learning to extract relations from the web using minimal supervision. In Proc. of ACL. A. Culotta and A. McCallum. 2004. Confidence estimation for information extraction. In Procs of HLT/NAACL. A. Culotta, A. McCallum, and J. Betz. 2006. Integrating probabilistic extraction models and data mining to discover relations and patterns in text. In Procs of HLT/NAACL, pages 296–303. P. Domingos. 1996. Unifying instance-based and rulebased induction. Machine Learning, 24(2):141–168. O. Etzioni, M. Cafarella, D. Downey, S. Kok, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91–134. R. Feldman, B. Rosenfeld, and M. Fresko. 2005. Teg - a hybrid approach to information extraction. Knowledge and Information Systems, 9(1):1–18. D. Freitag. 2000. 
Machine learning for information extraction in informal domains. Machine Learning, 39(2-3):169–202. R. Girju, A. Badulescu, and D. Moldovan. 2006. Automatic discovery of part-whole relations. Computational Linguistics, 32(1). M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Procs. of the 14th International Conference on Computational Linguistics, pages 539–545. D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In ACL. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Procs. of ICML. A. McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. A. E. Monge and C. P. Elkan. 1996. The field matching problem: Algorithms and applications. In Procs. of KDD. E. Riloff and R. Jones. 1999. Learning Dictionaries for Information Extraction by Multi-level Boot-strapping. In Procs. of AAAI-99, pages 1044–1049. D. Roth and W. Yih. 2004. A linear progamming formulation for global inference in natural language tasks. In Procs. of CoNLL. S. Sekine. 2006. On-demand information extraction. In Proc. of COLING. Y. Shinyama and S. Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In Proc. of the HLT-NAACL. G. Sigletos, G. Paliouras, C. D. Spyropoulos, and M. Hatzopoulos. 2005. Combining infomation extraction systems using voting and stacked generalization. Journal of Machine Learning Research, 6:1751,1782. R. Snow, D. Jurafsky, and A. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Advances in Neural Information Processing Systems 17. MIT Press. K.M. Ting and I. H. Witten. 1999. Issues in stacked generalization. Artificial Intelligence Research, 10:271– 289. D. Wolpert. 1992. Stacked generalization. Neural Networks, 5(2):241–260. A. Yates and O. Etzioni. 2007. Unsupervised resolution of objects and relations on the web. In Procs of NAACL/HLT. D. Zelenko, C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. JMLR, 3:1083–1106. B. Zenko and S. Dzeroski. 2002. Stacking with an extended set of meta-level attributes and mlr. In Proc. of ECML. 36
2008
4
Proceedings of ACL-08: HLT, pages 344–352, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Sentence Simplification for Semantic Role Labeling David Vickrey and Daphne Koller Stanford University Stanford, CA 94305-9010 {dvickrey,koller}@cs.stanford.edu Abstract Parse-tree paths are commonly used to incorporate information from syntactic parses into NLP systems. These systems typically treat the paths as atomic (or nearly atomic) features; these features are quite sparse due to the immense variety of syntactic expression. In this paper, we propose a general method for learning how to iteratively simplify a sentence, thus decomposing complicated syntax into small, easy-to-process pieces. Our method applies a series of hand-written transformation rules corresponding to basic syntactic patterns — for example, one rule “depassivizes” a sentence. The model is parameterized by learned weights specifying preferences for some rules over others. After applying all possible transformations to a sentence, we are left with a set of candidate simplified sentences. We apply our simplification system to semantic role labeling (SRL). As we do not have labeled examples of correct simplifications, we use labeled training data for the SRL task to jointly learn both the weights of the simplification model and of an SRL model, treating the simplification as a hidden variable. By extracting and labeling simplified sentences, this combined simplification/SRL system better generalizes across syntactic variation. It achieves a statistically significant 1.2% F1 measure increase over a strong baseline on the Conll2005 SRL task, attaining near-state-of-the-art performance. 1 Introduction In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb. For the verb “eat”, a correct labeling of “Tom ate a salad” is {ARG0(Eater)=“Tom”, ARG1(Food)=“salad”}. Current semantic role labeling systems rely primarily on syntactic features in order to identify and S NP VP VP NP PP Tom wants S a to eat VP NP NP salad croutons with Tom: NP S(NP) VP VP VP S T NP1 croutons: VP PP(with) T salad: NP1 VP T Figure 1: Parse with path features for verb “eat”. classify roles. Features derived from a syntactic parse of the sentence have proven particularly useful (Gildea & Jurafsky, 2002). For example, the syntactic subject of “give” is nearly always the Giver. Path features allow systems to capture both general patterns, e.g., that the ARG0 of a sentence tends to be the subject of the sentence, and specific usage, e.g., that the ARG2 of “give” is often a post-verbal prepositional phrase headed by “to”. An example sentence with extracted path features is shown in Figure 1. A major problem with this approach is that the path from an argument to the verb can be quite complicated. In the sentence “He expected to receive a prize for winning,” the path from “win” to its ARG0, “he”, involves the verbs “expect” and “receive” and the preposition “for.” The corresponding path through the parse tree likely occurs a relatively small number of times (or not at all) in the training corpus. If the test set contained exactly the same sentence but with “expected” replaced by “did not expect” we would extract a different parse path feature; therefore, as far as the classifier is concerned, the syntax of the two sentences is totally unrelated. In this paper we learn a mapping from full, complicated sentences to simplified sentences. 
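To make the sparsity of such path features concrete, the sketch below computes a path of the kind shown in Figure 1 by walking from an argument node up to the lowest common ancestor and back down to the target verb. The tree encoding and function names are our own and are meant only to illustrate why a small syntactic change produces an entirely different atomic feature.

    # A node is (category, children) where children is either a word (leaf) or a list of nodes.
    tree = ("S", [("NP", "Tom"),
                  ("VP", [("VB", "wants"),
                          ("S", [("VP", [("TO", "to"),
                                         ("VP", [("VB", "eat"),
                                                 ("NP", [("DT", "a"), ("NN", "salad")])])])])])])

    def path_to(node, word, acc):
        """Categories from the root down to the leaf whose word is `word` (None if absent)."""
        cat, children = node
        if isinstance(children, str):
            return acc + [cat] if children == word else None
        for child in children:
            p = path_to(child, word, acc + [cat])
            if p is not None:
                return p
        return None

    def parse_path(tree, arg, verb):
        """Up from the argument to the lowest common ancestor, then down to the verb.
        (Toy: assumes the two category sequences diverge exactly at the true branching node.)"""
        a, v = path_to(tree, arg, []), path_to(tree, verb, [])
        i = 0
        while i < min(len(a), len(v)) and a[i] == v[i]:
            i += 1
        return list(reversed(a[i:])) + [a[i - 1]] + v[i:]

    print(parse_path(tree, "Tom", "eat"))
    # ['NP', 'S', 'VP', 'S', 'VP', 'VP', 'VB']; negating the sentence ("Tom did not want
    # to eat a salad") adds further VP levels and so yields a different atomic feature.
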
For example, given a correct parse, our system simplifies the above sentence with target verb “win” to “He won.” Our method combines hand-written syntactic simplification rules with machine learning, which 344 determines which rules to prefer. We then use the output of the simplification system as input to a SRL system that is trained to label simplified sentences. Compared to previous SRL models, our model has several qualitative advantages. First, we believe that the simplification process, which represents the syntax as a set of local syntactic transformations, is more linguistically satisfying than using the entire parse path as an atomic feature. Improving the simplification process mainly involves adding more linguistic knowledge in the form of simplification rules. Second, labeling simple sentences is much easier than labeling raw sentences and allows us to generalize more effectively across sentences with differing syntax. This is particularly important for verbs with few labeled training instances; using training examples as efficiently as possible can lead to considerable gains in performance. Third, our model is very effective at sharing information across verbs, since most of our simplification rules apply equally well regardless of the target verb. A major difficulty in learning to simplify sentences is that we do not have labeled data for this task. To address this problem, we simultaneously train our simplification system and the SRL system. We treat the correct simplification as a hidden variable, using labeled SRL data to guide us towards “more useful” simplifications. Specifically, we train our model discriminatively to predict the correct role labeling assignment given an input sentence, treating the simplification as a hidden variable. Applying our combined simplification/SRL model to the Conll 2005 task, we show a significant improvement over a strong baseline model. Our model does best on verbs with little training data and on instances with paths that are rare or have never been seen before, matching our intuitions about the strengths of the model. Our model outperforms all but the best few Conll 2005 systems, each of which uses multiple different automatically-generated parses (which would likely improve our model). 2 Sentence Simplification We will begin with an example before describing our model in detail. Figure 2 shows a series of transformations applied to the sentence “I was not given a chance to eat,” along with the interpretation of each transformation. Here, the target verb is “eat.” I was not given a chance to eat. Someone gave me a chance to eat. I had a chance to eat. I ate. depassivize give -> have chance to X I was given a chance to eat. remove not Figure 2: Example simplification Sam’s chance to eat has passed. Sam has a chance to eat. Sam ate. chance to X possessive Figure 3: Shared simplification structure There are several important things to note. First, many of the steps do lose some semantic information; clearly, having a chance to eat is not the same as eating. However, since we are interested only in labeling the core arguments of the verb (which in this case is simply the Eater, “I”), it is not important to maintain this information. Second, there is more than one way to choose a set of rules which lead to the desired final sentence “I ate.” For example, we could have chosen to include a rule which went directly from the second step to the fourth. In general, the rules were designed to allow as much reuse of rules as possible. 
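The cascade in Figure 2 can be imitated, purely for exposition, with string-level rewrite rules and an exhaustive closure loop of the kind Section 4 later defines over parse forests. The regular expressions below are toy stand-ins for the tree transformations (and ignore verb morphology, which is why the toy stops at "I eat." rather than "I ate."); the actual system never manipulates strings.

    import re

    # Toy string analogues of the Figure 2 rules (the real rules rewrite parse trees).
    RULES = [
        ("remove not",   r"\bwas not\b",                        "was"),
        ("depassivize",  r"^I was given (.+)$",                 r"Someone gave me \1"),
        ("give -> have", r"^Someone gave me a chance to (.+)$", r"I had a chance to \1"),
        ("chance to X",  r"^I had a chance to (.+)$",           r"I \1"),
    ]

    def simplify(sentence):
        """Apply every matching rule to every derived sentence until no new ones appear."""
        derived = {sentence}
        while True:
            new = {re.sub(pat, repl, s)
                   for s in derived for _, pat, repl in RULES} - derived
            if not new:
                return derived
            derived |= new

    print(simplify("I was not given a chance to eat."))
    # yields, among the intermediate steps of Figure 2, the fully simplified "I eat."
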
Figure 3 shows the simplification of “Sam’s chance to eat has passed” (again with target verb “eat”); by simplifying both of these sentences as “X had a chance to Y”, we are able to use the same final rule in both cases. Of course, there may be more than one way to simplify a sentence for a given rule set; this ambiguity is handled by learning which rules to prefer. In this paper, we use simplification to mean something which is closer to canonicalization that summarization. Thus, given an input sentence, our goal is not to produce a single shortened sentence which contains as much information from the original sentence as possible. Rather, the goal is, for each verb in the sentence, to produce a “simple” sentence which is in a particular canonical form (described below) relative to that verb. 3 Transformation Rules A transformation rule takes as input a parse tree and produces as output a different, changed parse tree. Since our goal is to produce a simplified version of the sentence, the rules are designed to bring all sentences toward the same common format. A rule (see left side of Figure 4) consists of two 345 NP-7 [Someone] VB-5 NP VP-4 give chance NP-2 I VB-5 NP VP-4 give chance NP-2 I S-1 S-1 NP-2 VP-3 VB*-6 VBN-5 be VP-4 Transformed Rule Replace 3 with 4 Create new node 7 – [Someone] Substitute 7 for 2 Add 2 after 5 Set category of 5 to VB S NP VP VBD VBN NP was VP given chance I Original Figure 4: Rule for depassivizing a sentence parts. The first is a “tree regular expression” which is most simply viewed as a tree fragment with optional constraints at each node. The rule assigns numbers to each node which are referred to in the second part of the rule. Formally, a rule node X matches a parse-tree node A if: (1) All constraints of node X (e.g., constituent category, head word, etc.) are satisfied by node A. (2) For each child node Y of X, there is a child B of A that matches Y; two children of X cannot be matched to the same child B. There are no other requirements. A can have other children besides those matched, and leaves of the rule pattern can match to internal nodes of the parse (corresponding to entire phrases in the original sentence). For example, the same rule is used to simplify both “I had a chance to eat,” and “I had a chance to eat a sandwich,” (into “I ate,” and “I ate a sandwich,”). The insertion of the phrase “a sandwich” does not prevent the rule from matching. The second part of the rule is a series of simple steps that are applied to the matched nodes. For example, one type of simple step applied to the pair of nodes (X,Y) removes X from its current parent and adds it as the final child of Y. Figure 4 shows the depassivizing rule and the result of applying it to the sentence “I was given a chance.” The transformation steps are applied sequentially from top to bottom. Note that any nodes not matched are unaffected by the transformation; they remain where they are relative to their parents. For example, “chance” is not matched by the rule, and thus remains as a child of the VP headed by “give.” There are two significant pieces of “machinery” in our current rule set. The first is the idea of a floating node, used for locating an argument within a subordinate clause. For example, in the phrases “The cat that ate the mouse”, “The seed that the mouse ate”, and “The person we gave the gift to”, the modified nouns (“cat”, “seed”, and “person”, respectively) all Simplified Original # Rule Category I ate the food. Float(The food) I ate. 5 Floating nodes He slept. 
I said he slept. 4 Sentence extraction Food is tasty. Salt makes food tasty. 8 “Make” rewrites The total includes tax. Including tax, the total… 7 Verb acting as PP/NP John has a chance to eat. John’s chance to eat… 7 Possessive I will eat. Will I eat? 7 Questions I will eat. Nor will I eat. 7 Inverted sentences Float(The food) I ate. The food I ate … 8 Modified nouns I eat. I have a chance to eat. 7 Verb RC (Noun) I eat. I am likely to eat. 6 Verb RC (ADJP/ADVP) I eat. I want to eat. 17 Verb Raising/Control (basic) I eat. I must eat. 14 Verb Collapsing/Rewriting I ate. I ate and slept. 8 Conjunctions John is a lawyer. John, a lawyer, … 20 Misc Collapsing/Rewriting A car hit me. I was hit by a car. 5 Passive I slept Thursday. Thursday, I slept. 24 Sentence normalization Simplified Original # Rule Category I ate the food. Float(The food) I ate. 5 Floating nodes He slept. I said he slept. 4 Sentence extraction Food is tasty. Salt makes food tasty. 8 “Make” rewrites The total includes tax. Including tax, the total… 7 Verb acting as PP/NP John has a chance to eat. John’s chance to eat… 7 Possessive I will eat. Will I eat? 7 Questions I will eat. Nor will I eat. 7 Inverted sentences Float(The food) I ate. The food I ate … 8 Modified nouns I eat. I have a chance to eat. 7 Verb RC (Noun) I eat. I am likely to eat. 6 Verb RC (ADJP/ADVP) I eat. I want to eat. 17 Verb Raising/Control (basic) I eat. I must eat. 14 Verb Collapsing/Rewriting I ate. I ate and slept. 8 Conjunctions John is a lawyer. John, a lawyer, … 20 Misc Collapsing/Rewriting A car hit me. I was hit by a car. 5 Passive I slept Thursday. Thursday, I slept. 24 Sentence normalization Table 1: Rule categories with sample simplifications. Target verbs are underlined. should be placed in different positions in the subordinate clauses (subject, direct object, and object of “to”) to produce the phrases “The cat ate the mouse,” “The mouse ate the seed”, and “We gave the gift to the person.” We handle these phrases by placing a floating node in the subordinate clause which points to the argument; other rules try to place the floating node into each possible position in the sentence. The second construct is a system for keeping track of whether a sentence has a subject, and if so, what it is. A subset of our rule set normalizes the input sentence by moving modifiers after the verb, leaving either a single phrase (the subject) or nothing before the verb. For example, the sentence “Before leaving, I ate a sandwich,” is rewritten as “I ate a sandwich before leaving.” In many cases, keeping track of the presence or absence of a subject greatly reduces the set of possible simplifications. Altogether, we currently have 154 (mostly unlexicalized) rules. Our general approach was to write very conservative rules, i.e., avoid making rules with low precision, as these can quickly lead to a large blowup in the number of generated simple sentences. Table 1 shows a summary of our rule-set, grouped by type. Note that each row lists only one possible sentence and simplification rule from that 346 S-1 NP or S VP VB* eat #children(S-1) = 2 S-1 VP VB* eat #children(S-1) = 1 Figure 5: Simple sentence constraints for “eat” category; many of the categories handle a variety of syntax patterns. The two examples without target verbs are helper transformations; in more complex sentences, they can enable further simplifications. 
Another thing to note is that we use the terms Raising/Control (RC) very loosely to mean situations where the subject of the target verb is displaced, appearing as the subject of another verb (see table). Our rule set was developed by analyzing performance and coverage on the PropBank WSJ training set; neither the development set nor (of course) the test set were used during rule creation. 4 Simple Sentence Production We now describe how to take a set of rules and produce a set of candidate simple sentences. At a high level, the algorithm is very simple. We maintain a set of derived parses S which is initialized to contain only the original, untransformed parse. One iteration of the algorithm consists of applying every possible matching transformation rule to every parse in S, and adding all resulting parses to S. With carefully designed rules, repeated iterations are guaranteed to converge; that is, we eventually arrive at a set ˆS such that if we apply an iteration of rule application to ˆS, no new parses will be added. Note that we simplify the whole sentence without respect to a particular verb. Thus, this process only needs to be done once per sentence (not once per verb). To label arguments of a particular target verb, we remove any parse from our set which does not match one of the two templates in Figure 5 (for verb “eat”). These select simple sentences that have all nonsubject modifiers moved to the predicate and “eat” as the main verb. Note that the constraint VB* indicates any terminal verb category (e.g., VBN, VBD, etc.) A parse that matches one of these templates is called a valid simple sentence; this is exactly the canonicalized version of the sentence which our simplification rules are designed to produce. This procedure is quite expensive; we have to copy the entire parse tree at each step, and in general, this procedure could generate an exponential number of transformed parses. The first issue can be solved, and the second alleviated, using a dynamicprogramming data structure similar to the one used to store parse forests (as in a chart parser). This data structure is not essential for exposition; we delay discussion until Section 7. 5 Labeling Simple Sentences For a particular sentence/target verb pair s, v, the output from the previous section is a set Ssv = {tsv i }i of valid simple sentences. Although labeling a simple sentence is easier than labeling the original sentence, there are still many choices to be made. There is one key assumption that greatly reduces the search space: in a simple sentence, only the subject (if present) and direct modifiers of the target verb can be arguments of that verb. On the training set, we now extract a set of role patterns Gv = {gv j }j for each verb v. For example, a common role pattern for “give” is that of “I gave him a sandwich”. We represent this pattern as ggive 1 = {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Note that this is one atomic pattern; thus, we are keeping track not just of occurrences of particular roles in particular places in the simple sentence, but also how those roles co-occur with other roles. For a particular simple sentence tsv i , we apply all extracted role patterns gv j to tsv i , obtaining a set of possible role labelings. We call a simple sentence/role labeling pair a simple labeling and denote the set of candidate simple labelings Csv = {csv k }k. 
Note that a given pair tsv i , gv j may generate more than one simple labeling, if there is more than one way to assign the elements of gv j to constituents in tsv i . Also, for a sentence s there may be several simple labelings that lead to the same role labeling. In particular, there may be several simple labelings which assign the correct labels to all constituents; we denote this set Ksv ⊆Csv. 6 Probabilistic Model We now define our probabilistic model. Given a (possibly large) set of candidate simple labelings Csv, we need to select a correct one. We assign a score to each candidate based on its features: 347 Rule = Depassivize Pattern = {ARG0 = Subj NP, ARG1 = PV NP2, ARG2 = PV NP1} Role = ARG0, Head Word = John Role = ARG1, Head Word = sandwich Role = ARG2, Head Word = I Role = ARG0, Category = NP Role = ARG1, Category = NP Role = ARG2, Category = NP Role = ARG0, Position = Subject NP Role = ARG1, Position = Postverb NP2 Role = ARG2, Position = Postverb NP1 Figure 6: Features for “John gave me a sandwich.” which rules were used to obtain the simple sentence, which role pattern was used, and features about the assignment of constituents to roles. A log-linear model then assigns probability to each simple labeling equal to the normalized exponential of the score. The first type of feature is which rules were used to obtain the simple sentence. These features are indicator functions for each possible rule. Thus, we do not currently learn anything about interactions between different rules. The second type of feature is an indicator function of the role pattern used to generate the labeling. This allows us to learn that “give” has a preference for the labeling {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Our final features are analogous to those used in semantic role labeling, but greatly simplified due to our use of simple sentences: head word of the constituent; category (i.e., constituent label); and position in the simple sentence. Each of these features is combined with the role assignment, so that each feature indicates a preference for a particular role assignment (i.e., for “give”, head word “sandwich” tends to be ARG1). For each feature, we have a verb-specific and a verb-independent version, allowing sharing across verbs while still permitting different verbs to learn different preferences. The set of extracted features for the sentence “I was given a sandwich by John” with simplification “John gave me a sandwich” is shown in Figure 6. We omit verbspecific features to save space . Note that we “stem” all pronouns (including possessive pronouns). For each candidate simple labeling csv k we extract a vector of features fsv k as described above. We now define the probability of a simple labeling csv k with respect to a weight vector w P(csv k ) = ewT fsv k P k′ e wT fsv k′ . Our goal is to maximize the total probability assigned to any correct simple labeling; therefore, for each sentence/verb pair (s, v), we want to increase P csv k ∈Ksv P(csv k ). This expression treats the simple labeling (consisting of a simple sentence and a role assignment) as a hidden variable that is summed out. Taking the log, summing across all sentence/verb pairs, and adding L2 regularization on the weights, we have our final objective F(w): X s,v  log P csv k ∈Ksv ewT f sv k P csv k′ ∈Csv ewT f sv k′  −wT w 2σ2 We train our model by optimizing the objective using standard methods, specifically BFGS. 
Due to the summation over the hidden variable representing the choice of simple sentence (not observed in the training data), our objective is not convex. Thus, we are not guaranteed to find a global optimum; in practice we have gotten good results using the default initialization of setting all weights to 0. Consider the derivative of the likelihood component with respect to a single weight wl: X csv k ∈Ksv fsv k (l) P(csv k ) P csv k′ ∈Ksv P(csv k′ )− X csv k ∈Csv fsv k (l)P(csv k ) where fsv k (l) denotes the lth component of fsv k . This formula is positive when the expected value of the lth feature is higher on the set of correct simple labelings Ksv than on the set of all simple labelings Csv. Thus, the optimization procedure will tend to be self-reinforcing, increasing the score of correct simple labelings which already have a high score. 7 Simplification Data Structure Our representation of the set of possible simplifications of a sentence addresses two computational bottlenecks. The first is the need to repeatedly copy large chunks of the sentence. For example, if we are depassivizing a sentence, we can avoid copying the subject and object of the original sentence by simply referring back to them in the depassivized version. At worst, we only need to add one node for each numbered node in the transformation rule. The second issue is the possible exponential blowup of the number of generated sentences. Consider “I want to eat and I want to drink and I want to play and ...” Each subsentence can be simplified, yielding two possibilities for each subsentence. The number of simplifications of the entire sentence is then exponential in the length of the sentence. However, 348 ROOT S NP([Someone]) VP VBD(gave) S NP(chance) VP VBD(was) NP(I) VBN(given) VP Figure 7: Data structure after applying the depassivize rule to “I was given (a) chance.” Circular nodes are ORnodes, rectangular nodes are AND-nodes. we can store these simplifications compactly as a set of independent decisions, “I {want to eat OR eat} and I {want to drink OR drink} and . . . ” Both issues can be addressed by representing the set of simplifications using an AND-OR tree, a general data structure also used to store parse forests such as those produced by a chart parser. In our case, the AND nodes are similar to constituent nodes in a parse tree – each has a category (e.g. NP) and (if it is a leaf) a word (e.g. “chance”), but instead of having a list of child constituents, it instead has a list of child OR nodes. Each OR node has one or more constituent children that correspond to the different options at this point in the tree. Figure 7 shows the resulting AND-OR tree after applying the depassivize rule to the original parse of “I was given a chance.” Because this AND-OR tree represents only two different parses, the original parse and the depassivized version, only one OR node in the tree has more than one child – the root node, which has two choices, one for each parse. However, the AND nodes immediately above “I” and “chance” each have more than one OR-node parent, since they are shared by the original and depassivized parses1. To extract a parse from this data structure, we apply the following recursive algorithm: starting at the root OR node, each time we reach an OR node, we choose and recurse on exactly one of its children; each time we reach an AND node, we recurse on all of its children. 
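For exposition, the sketch below (with assumed node classes, not the authors' data structure) enumerates every parse encoded by an AND-OR tree by taking each OR choice in turn; in practice the set is never expanded explicitly.

from itertools import product

class AndNode:
    def __init__(self, category, word=None, children=()):
        self.category, self.word, self.children = category, word, list(children)

class OrNode:
    def __init__(self, options):
        self.options = list(options)      # alternative AndNode realizations

def expand_or(node):
    # an OR node contributes every subtree of every one of its options
    return [tree for option in node.options for tree in expand_and(option)]

def expand_and(node):
    # a leaf is a (tag, word) pair; an internal AND node combines one choice
    # from each of its OR children (shared children are re-expanded on demand)
    if not node.children:
        return [(node.category, node.word)]
    alternatives = [expand_or(child) for child in node.children]
    return [(node.category, list(combo)) for combo in product(*alternatives)]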
In Figure 7, we have only one choice: if we go left at the root, we generate the original parse; otherwise, we generate the depassivized version. Unfortunately, it is difficult to find the optimal AND-OR tree. We use a greedy but smart proce1In this particular example, both of these nodes are leaves, but in general shared nodes can be entire tree fragments dure to try to produce a small tree. We omit details for lack of space. Using our rule set, the compact representation is usually (but not always) small. For our compact representation to be useful, we need to be able to optimize our objective without expanding all possible simple sentences. A relatively straight-forward extension of the inside-outside algorithm for chart-parses allows us to learn and perform inference in our compact representation (a similar algorithm is presented in (Geman & Johnson, 2002)). We omit details for lack of space. 8 Experiments We evaluated our system using the setup of the Conll 2005 semantic role labeling task.2 Thus, we trained on Sections 2-21 of PropBank and used Section 24 as development data. Our test data includes both the selected portion of Section 23 of PropBank, plus the extra data on the Brown corpus. We used the Charniak parses provided by the Conll distribution. We compared to a strong Baseline SRL system that learns a logistic regression model using the features of Pradhan et al. (2005). It has two stages. The first filters out nodes that are unlikely to be arguments. The second stage labels each remaining node either as a particular role (e.g. “ARGO”) or as a non-argument. Note that the baseline feature set includes a feature corresponding to the subcategorization of the verb (specifically, the sequence of nonterminals which are children of the predicate’s parent node). Thus, Baseline does have access to something similar to our model’s role pattern feature, although the Baseline subcategorization feature only includes post-verbal modifiers and is generally much noisier because it operates on the original sentence. Our Transforms model takes as input the Charniak parses supplied by the Conll release, and labels every node with Core arguments (ARG0-ARG5). Our rule set does not currently handle either referent arguments (such as “who” in “The man who ate ...”) or non-core arguments (such as ARGMTMP). For these arguments, we simply filled in using our baseline system (specifically, any non-core argument which did not overlap an argument predicted by our model was added to the labeling). Also, on some sentences, our system did not generate any predictions because no valid simple sen2http://www.lsi.upc.es/ srlconll/home.html 349 Model Dev Test Test Test WSJ Brown WSJ+Br Baseline 74.7 76.9 64.7 75.3 Transforms 75.6 77.4 66.8 76.0 Combined 76.0 78.0 66.4 76.5 Punyakanok 77.35 79.44 67.75 77.92 Table 2: F1 Measure using Charniak parses tences were produced by the simplification system . Again, we used the baseline to fill in predictions (for all arguments) for these sentences. Baseline and Transforms were regularized using a Gaussian prior; for both models, σ2 = 1.0 gave the best results on the development set. For generating role predictions from our model, we have two reasonable options: use the labeling given by the single highest scoring simple labeling; or compute the distribution over predictions for each node by summing over all simple labelings. 
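A sketch of the second option, under an assumed input format: the probability of every candidate simple labeling is summed into a per-constituent role distribution, and each constituent is then assigned its highest-mass role.

from collections import defaultdict

def node_role_distributions(candidates):
    """candidates: (probability, assignment) pairs, one per simple labeling,
    where assignment maps a constituent id to a role label."""
    dist = defaultdict(lambda: defaultdict(float))
    for prob, assignment in candidates:
        for node, role in assignment.items():
            dist[node][role] += prob      # marginalize out the simple labeling
    return dist

def predict_roles(candidates):
    dist = node_role_distributions(candidates)
    return {node: max(roles, key=roles.get) for node, roles in dist.items()}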
The latter method worked slightly better, particularly when combined with the baseline model as described below, so all reported results use this method. We also evaluated a hybrid model that combines the Baseline with our simplification model. For a given sentence/verb pair (s, v), we find the set of constituents Nsv that made it past the first (filtering) stage of Baseline. For each candidate simple sentence/labeling pair csv k = (tsv i , gv j ) proposed by our model, we check to see which of the constituents in Nsv are already present in our simple sentence tsv i . Any constituents that are not present are then assigned a probability distribution over possible roles according to Baseline. Thus, we fall back Baseline whenever the current simple sentence does not have an “opinion” about the role of a particular constituent. The Combined model is thus able to correctly label sentences when the simplification process drops some of the arguments (generally due to unusual syntax). Each of the two components was trained separately and combined only at testing time. Table 2 shows results of these three systems on the Conll-2005 task, plus the top-performing system (Punyakanok et al., 2005) for reference. Baseline already achieves good performance on this task, placing at about 75th percentile among evaluated systems. Our Transforms model outperforms Baseline on all sets. Finally, our Combined model improves over Transforms on all but the test Brown corpus, Model Test WSJ Baseline 87.6 Transforms 88.2 Combined 88.5 Table 3: F1 Measure using gold parses achieving a statistically significant increase over the Baseline system (according to confidence intervals calculated for the Conll-2005 results). The Combined model still does not achieve the performance levels of the top several systems. However, these systems all use information from multiple parses, allowing them to fix many errors caused by incorrect parses. We return to this issue in Section 10. Indeed, as shown in Table 3, performance on gold standard parses is (as expected) much better than on automatically generated parses, for all systems. Importantly, the Combined model again achieves a significant improvement over Baseline. We expect that by labeling simple sentences, our model will generalize well even on verbs with a small number of training examples. Figure 8 shows F1 measure on the WSJ test set as a function of training set size. Indeed, both the Transforms model and the Combined model significantly outperform the Baseline model when there are fewer than 20 training examples for the verb. While the Baseline model has higher accuracy than the Transforms model for verbs with a very large number of training examples, the Combined model is at or above both of the other models in all but the rightmost bucket, suggesting that it gets the best of both worlds. We also found, as expected, that our model improved on sentences with very long parse paths. For example, in the sentence “Big investment banks refused to step up to the plate to support the beleagured floor traders by buying blocks of stock, traders say,” the parse path from “buy” to its ARG0, “Big investment banks,” is quite long. The Transforms model correctly labels the arguments of “buy”, while the Baseline system misses the ARG0. To understand the importance of different types of rules, we performed an ablation analysis. For each major rule category in Figure 1, we deleted those rules from the rule set, retrained, and evaluated using the Combined model. 
To avoid parse-related issues, we trained and evaluated on gold-standard parses. Most important were rules relating to (ba350 F1 vs. Verb Training Examples 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0-4 5-9 10-19 20-49 50-99 100-199 200-499 500-999 1000-1999 2000-4999 5000+ Training Examples F1 Measure Combined Transforms Baseline Figure 8: F1 Measure on the WSJ test set as a function of training set size. Each bucket on the X-axis corresponds to a group of verbs for which the number of training examples fell into the appropriate range; the value is the average performance for verbs in that bucket. sic) verb raising/control, “make” rewrites, modified nouns, and passive constructions. Each of these rule categories when removed lowered the F1 score by approximately .4%. In constrast, removing rules for non-basic control, possessives, and inverted sentences caused a negligible reduction in performance. This may be because the relevant syntactic structures occur rarely; because Baseline does well on those constructs; or because the simplification model has trouble learning when to apply these rules. 9 Related Work One area of current research which has similarities with this work is on Lexical Functional Grammars (LFGs). Both approaches attempt to abstract away from the surface level syntax of the sentence (e.g., the XLE system3). The most obvious difference between the approaches is that we use SRL data to train our system, avoiding the need to have labeled data specific to our simplification scheme. There have been a number of works which model verb subcategorization. Approaches include incorporating a subcategorization feature (Gildea & Jurafsky, 2002; Xue & Palmer, 2004), such as the one used in our baseline; and building a model which jointly classifies all arguments of a verb (Toutanova et al., 2005). Our method differs from past work in that it extracts its role pattern feature from the simplified sentence. As a result, the feature is less noisy 3http://www2.parc.com/isl/groups/nltt/xle/ and generalizes better across syntactic variation than a feature extracted from the original sentence. Another group of related work focuses on summarizing sentences through a series of deletions (Jing, 2000; Dorr et al., 2003; Galley & McKeown, 2007). In particular, the latter two works iteratively simplify the sentence by deleting a phrase at a time. We differ from these works in several important ways. First, our transformation language is not context-free; it can reorder constituents and then apply transformation rules to the reordered sentence. Second, we are focusing on a somewhat different task; these works are interested in obtaining a single summary of each sentence which maintains all “essential” information, while in our work we produce a simplification that may lose semantic content, but aims to contain all arguments of a verb. Finally, training our model on SRL data allows us to avoid the relative scarcity of parallel simplification corpora and the issue of determining what is “essential” in a sentence. Another area of related work in the semantic role labeling literature is that on tree kernels (Moschitti, 2004; Zhang et al., 2007). Like our method, tree kernels decompose the parse path into smaller pieces for classification. Our model can generalize better across verbs because it first simplifies, then classifies based on the simplified sentence. Also, through iterative simplifications we can discover structure that is not immediately apparent in the original parse. 
10 Future Work There are a number of improvements that could be made to the current simplification system, including augmenting the rule set to handle more constructions and doing further sentence normalizations, e.g., identifying whether a direct object exists. Another interesting extension involves incorporating parser uncertainty into the model; in particular, our simplification system is capable of seamlessly accepting a parse forest as input. There are a variety of other tasks for which sentence simplification might be useful, including summarization, information retrieval, information extraction, machine translation and semantic entailment. In each area, we could either use the simplification system as learned on SRL data, or retrain the simplification model to maximize performance on the particular task. 351 References Dorr, B., Zajic, D., & Schwartz, R. (2003). Hedge: A parse-and-trim approach to headline generation. Proceedings of the HLT-NAACL Text Summarization Workshop and Document Understanding Conference. Galley, M., & McKeown, K. (2007). Lexicalized markov grammars for sentence compression. Proceedings of NAACL-HLT. Geman, S., & Johnson, M. (2002). Dynamic programming for parsing and estimation of stochastic unification-based grammars. Proceedings of ACL. Gildea, D., & Jurafsky, D. (2002). Automatic labeling of semantic roles. Computational Linguistics. Jing, H. (2000). Sentence reduction for automatic text summarization. Proceedings of Applied NLP. Moschitti, A. (2004). A study on convolution kernels for shallow semantic parsing. Proceedings of ACL. Pradhan, S., Hacioglu, K., Krugler, V., Ward, W., Martin, J. H., & Jurafsky, D. (2005). Support vector learning for semantic argument classification. Machine Learning, 60, 11–39. Punyakanok, V., Koomen, P., Roth, D., & Yih, W. (2005). Generalized inference with multiple semantic role labeling systems. Proceedings of CoNLL. Toutanova, K., Haghighi, A., & Manning, C. (2005). Joint learning improves semantic role labeling. Proceedings of ACL, 589–596. Xue, N., & Palmer, M. (2004). Calibrating features for semantic role labeling. Proceedings of EMNLP. Zhang, M., Che, W., Aw, A., Tan, C. L., Zhou, G., Liu, T., & Li, S. (2007). A grammar-driven convolution tree kernel for semantic role classification. Proceedings of ACL. 352
Proceedings of ACL-08: HLT, pages 353–361, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Summarizing Emails with Conversational Cohesion and Subjectivity Giuseppe Carenini, Raymond T. Ng and Xiaodong Zhou Department of Computer Science University of British Columbia Vancouver, BC, Canada {carenini, rng, xdzhou}@cs.ubc.ca Abstract In this paper, we study the problem of summarizing email conversations. We first build a sentence quotation graph that captures the conversation structure among emails. We adopt three cohesion measures: clue words, semantic similarity and cosine similarity as the weight of the edges. Second, we use two graph-based summarization approaches, Generalized ClueWordSummarizer and PageRank, to extract sentences as summaries. Third, we propose a summarization approach based on subjective opinions and integrate it with the graph-based ones. The empirical evaluation shows that the basic clue words have the highest accuracy among the three cohesion measures. Moreover, subjective words can significantly improve accuracy. 1 Introduction With the ever increasing popularity of emails, it is very common nowadays that people discuss specific issues, events or tasks among a group of people by emails(Fisher and Moody, 2002). Those discussions can be viewed as conversations via emails and are valuable for the user as a personal information repository(Ducheneaut and Bellotti, 2001). In this paper, we study the problem of summarizing email conversations. Solutions to this problem can help users access the information embedded in emails more effectively. For instance, 10 minutes before a meeting, a user may want to quickly go through a previous discussion via emails that is going to be discussed soon. In that case, rather than reading each individual email one by one, it would be preferable to read a concise summary of the previous discussion with the major information summarized. Email summarization is also helpful for mobile email users on a small screen. Summarizing email conversations is challenging due to the characteristics of emails, especially the conversational nature. Most of the existing methods dealing with email conversations use the email thread to represent the email conversation structure, which is not accurate in many cases (Yeh and Harnly, 2006). Meanwhile, most existing email summarization approaches use quantitative features to describe the conversation structure, e.g., number of recipients and responses, and apply some general multi-document summarization methods to extract some sentences as the summary (Rambow et al., 2004) (Wan and McKeown, 2004). Although such methods consider the conversation structure somehow, they simplify the conversation structure into several features and do not fully utilize it into the summarization process. In contrast, in this paper, we propose new summarization approaches by sentence extraction, which rely on a fine-grain representation of the conversation structure. We first build a sentence quotation graph by content analysis. This graph not only captures the conversation structure more accurately, especially for selective quotations, but it also represents the conversation structure at the finer granularity of sentences. As a second contribution of this paper, we study several ways to measure the cohesion between parent and child sentences in the quotation graph: clue words (re-occurring words in the reply) 353 (Carenini et al., 2007), semantic similarity and cosine similarity. 
Hence, we can directly evaluate the importance of each sentence in terms of its cohesion with related ones in the graph. The extractive summarization problem can be viewed as a node ranking problem. We apply two summarization algorithms, Generalized ClueWordSummarizer and Page-Rank to rank nodes in the sentence quotation graph and to select the corresponding most highly ranked sentences as the summary. Subjective opinions are often critical in many conversations. As a third contribution of this paper, we study how to make use of the subjective opinions expressed in emails to support the summarization task. We integrate our best cohesion measure together with the subjective opinions. Our empirical evaluations show that subjective words and phrases can significantly improve email summarization. To summarize, this paper is organized as follows. In Section 2, we discuss related work. After building a sentence quotation graph to represent the conversation structure in Section 3, we apply two summarization methods in Section 4. In Section 5, we study summarization approaches with subjective opinions. Section 6 presents the empirical evaluation of our methods. We conclude this paper and propose future work in Section 7. 2 Related Work Rambow et al. proposed a sentence extraction summarization approach for email threads (Rambow et al., 2004). They described each sentence in an email conversations by a set of features and used machine learning to classify whether or not a sentence should be included into the summary. Their experiments showed that features about emails and the email thread could significantly improve the accuracy of summarization. Wan et al. proposed a summarization approach for decision-making email discussions (Wan and McKeown, 2004). They extracted the issue and response sentences from an email thread as a summary. Similar to the issue-response relationship, Shrestha et al.(Shrestha and McKeown, 2004) proposed methods to identify the question-answer pairs from an email thread. Once again, their results showed that including features about the email thread could greatly improve the accuracy. Similar results were obtained by Corston-Oliver et al. They studied how to identify “action” sentences in email messages and use those sentences as a summary(Corston-Oliver et al., 2004). All these approaches used the email thread as a coarse representation of the underlying conversation structure. In our recent study (Carenini et al., 2007), we built a fragment quotation graph to represent an email conversation and developed a ClueWordSummarizer (CWS) based on the concept of clue words. Our experiments showed that CWS had a higher accuracy than the email summarization approach in (Rambow et al., 2004) and the generic multidocument summarization approach MEAD (Radev et al., 2004). Though effective, the CWS method still suffers from the following four substantial limitations. First, we used a fragment quotation graph to represent the conversation, which has a coarser granularity than the sentence level. For email summarization by sentence extraction, the fragment granularity may be inadequate. Second, we only adopted one cohesion measure (clue words that are based on stemming), and did not consider more sophisticated ones such as semantically similar words. Third, we did not consider subjective opinions. Finally, we did not compared CWS to other possible graph-based approaches as we propose in this paper. 
Other than for email summarization, other document summarization methods have adopted graphranking algorithms for summarization, e.g., (Wan et al., 2007), (Mihalcea and Tarau, 2004) and (Erkan and Radev, 2004). Those methods built a complete graph for all sentences in one or multiple documents and measure the similarity between every pair of sentences. Graph-ranking algorithms, e.g., PageRank (Brin and Page, 1998), are then applied to rank those sentences. Our method is different from them. First, instead of using the complete graph, we build the graph based on the conversation structure. Second, we try various ways to compute the similarity among sentences and the ranking of the sentences. Several studies in the NLP literature have explored the reoccurrence of similar words within one document due to text cohesion. The idea has been formalized in the construct of lexical chains (Barzilay and Elhadad, 1997). While our approach and lexical chains both rely on lexical cohesion, they are 354 quite different with respect to the kind of linkages considered. Lexical chain is only based on similarities between lexical items in contiguous sentences. In contrast, in our approach, the linkage is based on the existing conversation structure. In our approach, the “chain” is not only “lexical” but also “conversational”, and typically spans over several emails. 3 Extracting Conversations from Multiple Emails In this section, we first review how to build a fragment quotation graph through an example. Then we extend this structure into a sentence quotation graph, which can allow us to capture the conversational relationship at the level of sentences. 3.1 Building the Fragment Quotation Graph b > a E2 c > b > > a E3 E4 d e > c > > b > > > a E5 g h > > d > f > > e E6 > g i > h j a E1 (a) Conversation involving 6 Emails b a c e d f h g i j (b) Fragment Quotation Graph Figure 1: A Real Example Figure 1(a) shows a real example of a conversation from a benchmark data set involving 6 emails. For the ease of representation, we do not show the original content but abbreviate them as a sequence of fragments. In the first step, all new and quoted fragments are identified. For instance, email E3 is decomposed into 3 fragments: new fragment c and quoted fragments b, which in turn quoted a. E4 is decomposed into de, c, b and a. Then, in the second step, to identify distinct fragments (nodes), fragments are compared with each other and overlaps are identified. Fragments are split if necessary (e.g., fragment gh in E5 is split into g and h when matched with E6), and duplicates are removed. At the end, 10 distinct fragments a, . . . , j give rise to 10 nodes in the graph shown in Figure 1(b). As the third step, we create edges, which represent the replying relationship among fragments. In general, it is difficult to determine whether one fragment is actually replying to another fragment. We assume that any new fragment is a potential reply to neighboring quotations – quoted fragments immediately preceding or following it. Let us consider E6 in Figure 1(a). there are two edges from node i to g and h, while there is only a single edge from j to h. For E3, there are the edges (c, b) and (c, a). Because of the edge (b, a), the edge (c, a) is not included in Figure 1(b). Figure 1(b) shows the fragment quotation graph of the conversation shown in Figure 1(a) with all the redundant edges removed. 
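The construction can be sketched as follows (assumed input format; fragment identification, splitting and deduplication across emails are omitted, and linking a new fragment to the entire quoted run on either side is one reading of the neighbouring-quotation assumption).

from collections import defaultdict

def fragment_graph(emails):
    """emails: one ordered list of (fragment_id, is_new) entries per email."""
    edges = set()
    for fragments in emails:
        for i, (frag, is_new) in enumerate(fragments):
            if not is_new:
                continue
            for step in (-1, 1):              # quoted runs before and after
                j = i + step
                while 0 <= j < len(fragments) and not fragments[j][1]:
                    edges.add((frag, fragments[j][0]))
                    j += step
    return prune_redundant(edges)

def prune_redundant(edges):
    """Drop an edge (u, v) when v is still reachable from u without it."""
    succ = defaultdict(set)
    for u, v in edges:
        succ[u].add(v)
    def reachable(u, v, skip):
        stack, seen = [u], set()
        while stack:
            n = stack.pop()
            for m in succ[n]:
                if (n, m) == skip or m in seen:
                    continue
                if m == v:
                    return True
                seen.add(m)
                stack.append(m)
        return False
    return {e for e in edges if not reachable(e[0], e[1], skip=e)}

# E2 and E3 of the example: yields (b, a) and (c, b); (c, a) is pruned.
print(fragment_graph([[('b', True), ('a', False)],
                      [('c', True), ('b', False), ('a', False)]]))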
In contrast, if threading is done at the coarse granularity of entire emails, as adopted in many studies, the threading would be a simple chain from E6 to E5, E5 to E4 and so on. Fragment f reflects a special and important phenomenon, where the original email of a quotation does not exist in the user’s folder. We call this as the hidden email problem. This problem and its influence on email summarization were studied in (Carenini et al., 2005) and (Carenini et al., 2007). 3.2 Building the Sentence Quotation Graph A fragment quotation graph can only represent the conversation in the fragment granularity. We notice that some sentences in a fragment are more relevant to the conversation than the remaining ones. The fragment quotation graph is not capable of representing this difference. Hence, in the following, we describe how to build a sentence quotation graph from the fragment quotation graph and introduce several ways to give weight to the edges. In a sentence quotation graph GS, each node represents a distinct sentence in the email conversation, and each edge (u, v) represents the replying relationship between node u and v. The algorithm to create the sentence quotation graph contains the following 3 steps: create nodes, create edges and assign weight to edges. In the following, we first illustrate how to create nodes and edges. In Section 3.3, we discuss different ways to assign weight to edges. Given a fragment quotation graph GF, we first split each fragment into a set of sentences. For each sentence, we create a node in the sentence quotation graph GS. In this way, each sentence in the email conversation is represented by a distinct node in GS. As the second step, we create the edges in GS. The edges in GS are based on the edges in GF 355 Pk s1 s2 sn P1 C1 Ck (a) Fragment Quotation Graph (b) Sentence Quotation Graph F: Ct s1, s2,...,sn ... ... P1 C1 Pk ... ... Figure 2: Create the Sentence Quotation Graph from the Fragment Quotation Graph because the edges in GF already reflect the replying relationship among fragments. For each edge (u, v) ∈GF, we create edges from each sentence of u to each sentence of v in the sentence quotation graph GS. This is illustrated in Figure 2. Note that when each distinct sentence in an email conversation is represented as one node in the sentence quotation graph, the extractive email summarization problem is transformed into a standard node ranking problem within the sentence quotation graph. Hence, general node ranking algorithms, e.g., Page-Rank, can be used for email summarization as well. 3.3 Measuring the Cohesion Between Sentences After creating the nodes and edges in the sentence quotation graph, a key technical question is how to measure the degree that two sentences are related to each other, e.g., a sentence su is replying to or being replied by sv. In this paper, we use text cohesion between two sentences su and sv to make this assessment and assign this as the weight of the corresponding edge (su, sv). We explore three types of cohesion measures: (1) clue words that are based on stems, (2) semantic distance based on WordNet and (3) cosine similarity that is based on the word TFIDF vector. In the following, we discuss these three methods separately in detail. 3.3.1 Clue Words Clue words were originally defined as reoccurring words with the same stem between two adjacent fragments in the fragment quotation graph. In this section, we re-define clue words based on the sentence quotation graph as follows. 
A clue word in a sentence S is a non-stop word that also appears (modulo stemming) in a parent or a child node (sentence) of S in the sentence quotation graph. The frequency of clue words in the two sentences measures their cohesion as described in Equation 1. weight(su, sv) = X wi∈su freq(wi, sv) (1) 3.3.2 Semantic Similarity Based on WordNet Other than stems, when people reply to previous messages they may also choose some semantically related words, such as synonyms and antonyms, e.g., “talk” vs. “discuss”. Based on this observation, we propose to use semantic similarity to measure the cohesion between two sentences. We use the wellknown lexical database WordNet to get the semantic similarity of two words. Specifically, we use the package by (Pedersen et al., 2004), which includes several methods to compute the semantic similarity. Among those methods, we choose “lesk” and “jcn”, which are considered two of the best methods in (Jurafsky and Martin, 2008). Similar to the clue words, we measure the semantic similarity of two sentences by the total semantic similarity of the words in both sentences. This is described in the following equation. weight(su, sv) = X wi∈su X wj∈sv σ(wi, wj), (2) 3.3.3 Cosine Similarity Cosine similarity is a popular metric to compute the similarity of two text units. To do so, each sentence is represented as a word vector of TFIDF values. Hence, the cosine similarity of two sentences su and sv is then computed as −→ su·−→ sv ||−→ su||·||−→ sv ||. 356 4 Summarization Based on the Sentence Quotation Graph Having built the sentence quotation graph with different measures of cohesion, in this section, we develop two summarization approaches. One is the generalization of the CWS algorithm in (Carenini et al., 2007) and one is the well-known PageRank algorithm. Both algorithms compute a score, SentScore(s), for each sentence (node) s, which is used to select the top-k% sentences as the summary. 4.1 Generalized ClueWordSummarizer Given the sentence quotation graph, since the weight of an edge (s, t) represents the extent that s is related to t, a natural assumption is that the more relevant a sentence (node) s is to its parents and children, the more important s is. Based on this assumption, we compute the weight of a node s by summing up the weight of all the outgoing and incoming edges of s. This is described in the following equation. SentScore(s) = X (s,t)∈GS weight(s, t) + X (p,s)∈GS weight(p, s) (3) The weight of an edge (s, t) can be any of the three metrics described in the previous section. Particularly, when the weight of the edge is based on clue words as in Equation 1, this method is equivalent to Algorithm CWS in (Carenini et al., 2007). In the rest of this paper, let CWS denote the Generalized ClueWordSummarizer when the edge weight is based on clue words, and let CWS-Cosine and CWSSemantic denote the summarizer when the edge weight is cosine similarity and semantic similarity respectively. Semantic can be either “lesk” or “jcn”. 4.2 Page-Rank-based Summarization The Generalized ClueWordSummarizer only considers the weight of the edges without considering the importance (weight) of the nodes. This might be incorrect in some cases. For example, a sentence replied by an important sentence should get some of its importance. This intuition is similar to the one inspiring the well-known Page-Rank algorithm. The traditional Page-Rank algorithm only considers the outgoing edges. 
In email conversations, what we want to measure is the cohesion between sentences no matter which one is being replied to. Hence, we need to consider both incoming and outgoing edges and the corresponding sentences. Given the sentence quotation graph Gs, the PageRank-based algorithm is described in Equation 4. PR(s) is the Page-Rank score of a node (sentence) s. d is the dumping factor, which is initialized to 0.85 as suggested in the Page-Rank algorithm. In this way, the rank of a sentence is evaluated globally based on the graph. 5 Summarization with Subjective Opinions Other than the conversation structure, the measures of cohesion and the graph-based summarization methods we have proposed, the importance of a sentence in emails can be captured from other aspects. In many applications, it has been shown that sentences with subjective meanings are paid more attention than factual ones(Pang and Lee, 2004)(Esuli and Sebastiani, 2006). We evaluate whether this is also the case in emails, especially when the conversation is about decision making, giving advice, providing feedbacks, etc. A large amount of work has been done on determining the level of subjectivity of text (Shanahan et al., 2005). In this paper we follow a very simple approach that, if successful, could be extended in future work. More specifically, in order to assess the degree of subjectivity of a sentence s, we count the frequency of words and phrases in s that are likely to bear subjective opinions. The assumption is that the more subjective words s contains, the more likely that s is an important sentence for the purpose of email summarization. Let SubjScore(s) denote the number of words with a subjective meaning. Equation 5 illustrates how SubjScore(s) is computed. SubjList is a list of words and phrases that indicate subjective opinions. SubjScore(s) = X wi∈SubjList,wi∈s freq(wi) (5) The SubjScore(s) alone can be used to evaluate the importance of a sentence. In addition, we can combine SubjScore with any of the sentence scores based on the sentence quotation graph. In this paper, we use a simple approach by adding them up as the final sentence score. 357 PR(s) = (1 −d) + d ∗ X si∈child(s) weight(s, si) ∗PR(si) + X sj∈parent(s) weight(sj, s) ∗PR(sj) X si∈child(s) weight(s, si) + X sj∈parent(s) weight(sj, s) (4) As to the subjective words and phrases, we consider the following two lists generated by researchers in this area. • OpFind: The list of subjective words in (Wilson et al., 2005). The major source of this list is from (Riloff and Wiebe, 2003) with additional words from other sources. This list contains 8,220 words or phrases in total. • OpBear: The list of opinion bearing words in (Kim and Hovy, 2005). This list contains 27,193 words or phrases in total. 6 Empirical Evaluation 6.1 Dataset Setup There are no publicly available annotated corpora to test email summarization techniques. So, the first step in our evaluation was to develop our own corpus. We use the Enron email dataset, which is the largest public email dataset. In the 10 largest inbox folders in the Enron dataset, there are 296 email conversations. Since we are studying summarizing email conversations, we required that each selected conversation contained at least 4 emails. In total, 39 conversations satisfied this requirement. We use the MEAD package to segment the text into 1,394 sentences (Radev et al., 2004). We recruited 50 human summarizers to review those 39 selected email conversations. 
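Equation (4) amounts to the damping term plus a cohesion-weighted average of the Page-Rank scores of a sentence's parents and children. A minimal sketch follows, with a fixed iteration count assumed in place of a convergence test.

def page_rank(nodes, edges, d=0.85, iters=50):
    """edges: dict mapping a (parent, child) sentence pair to its cohesion weight."""
    pr = {n: 1.0 for n in nodes}
    neigh = {n: [] for n in nodes}
    for (p, c), w in edges.items():
        neigh[p].append((c, w))    # outgoing edge: the child's score flows in
        neigh[c].append((p, w))    # incoming edge: the parent's score flows in
    for _ in range(iters):
        new = {}
        for n in nodes:
            total = sum(w for _, w in neigh[n])
            if total == 0.0:
                new[n] = 1.0 - d
            else:
                new[n] = (1.0 - d) + d * sum(w * pr[m] for m, w in neigh[n]) / total
        pr = new
    return pr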
Each email conversation was reviewed by 5 different human summarizers. For each given email conversation, human summarizers were asked to generate a summary by directly selecting important sentences from the original emails in that conversation. We asked the human summarizers to select 30% of the total sentences in their summaries. Moreover, human summarizers were asked to classify each selected sentence as either essential or optional. The essential sentences are crucial to the email conversation and have to be extracted in any case. The optional sentences are not critical but are useful to help readers understand the email conversation if the given summary length permits. By classifying essential and optional sentences, we can distinguish the core information from the supporting ones and find the most convincing sentences that most human summarizers agree on. As essential sentences are more important than the optional ones, we give more weight to the essential selections. We compute a GSV alue for each sentence to evaluate its importance according to the human summarizers’ selections. The score is designed as follows: for each sentence s, one essential selection has a score of 3, one optional selection has a score of 1. Thus, the GSValue of a sentence ranges from 0 to 15 (5 human summarizers x 3). The GSValue of 8 corresponds to 2 essential and 2 optional selections. If a sentence has a GSValue no less than 8, we take it as an overall essential sentence. In the 39 conversations, we have about 12% overall essential sentences. 6.2 Evaluation Metrics Evaluation of summarization is believed to be a difficult problem in general. In this paper, we use two metrics to measure the accuracy of a system generated summary. One is sentence pyramid precision, and the other is ROUGE recall. As to the statistical significance, we use the 2-tail pairwise student t-test in all the experiments to compare two specific methods. We also use ANOVA to compare three or more approaches together. The sentence pyramid precision is a relative precision based on the GSValue. Since this idea is borrowed from the pyramid metric by Nenkova et al.(Nenkova et al., 2007), we call it the sentence pyramid precision. In this paper, we simplify it as the pyramid precision. As we have discussed above, with the reviewers’ selections, we get a GSValue for each sentence, which ranges from 0 to 15. With this GSValue, we rank all sentences in a descendant order. We also group all sentences with the same GSValue together as one tier Ti, where i is the corre358 sponding GSValue; i is called the level of the tier Ti. In this way, we organize all sentences into a pyramid: a sequence of tiers with a descendant order of levels. With the pyramid of sentences, the accuracy of a summary is evaluated over the best summary we can achieve under the same summary length. The best summary of k sentences are the top k sentences in terms of GSValue. Other than the sentence pyramid precision, we also adopt the ROUGE recall to evaluate the generated summary with a finer granularity than sentences, e.g., n-gram and longest common subsequence. Unlike the pyramid method which gives more weight to sentences with a higher GSValue, ROUGE is not sensitive to the difference between essential and optional selections (it considers all sentences in one summary equally). Directly applying ROUGE may not be accurate in our experiments. 
Hence, we use the overall essential sentences as the gold standard summary for each conversation, i.e., sentences in tiers no lower than T8. In this way, the ROUGE metric measures the similarity of a system generated summary to a gold standard summary that is considered important by most human summarizers. Specifically, we choose ROUGE-2 and ROUGE-L as the evaluation metric. 6.3 Evaluating the Weight of Edges In Section 3.3, we developed three ways to compute the weight of an edge in the sentence quotation graph, i.e., clue words, semantic similarity based on WordNet and cosine similarity. In this section, we compare them together to see which one is the best. It is well-known that the accuracy of the summarization method is affected by the length of the summary. In the following experiments, we choose the summary length as 10%, 12%, 15%, 20% and 30% of the total sentences and use the aggregated average accuracy to evaluate different algorithms. Table 1 shows the aggregated pyramid precision over all five summary lengths of CWS, CWSCosine, two semantic similarities, i.e., CWS-lesk and CWS-jcn. We first use ANOVA to compare the four methods. For the pyramid precision, the F ratio is 50, and the p-value is 2.1E-29. This shows that the four methods are significantly different in the average accuracy. In Table 1, by comparing CWS with the other methods, we can see that CWS obtains the CWS CWS-Cosine CWS-lesk CWS-jcn Pyramid 0.60 0.39 0.57 0.57 p-value <0.0001 0.02 0.005 ROUGE-2 0.46 0.31 0.39 0.35 p-value <0.0001 <0.001 <0.001 ROUGE-L 0.54 0.43 0.49 0.45 p-value <0.0001 <0.001 <0.001 Table 1: Generalized CWS with Different Edge Weights highest precision (0.60). The widely used cosine similarity does not perform well. Its precision (0.39) is about half of the precision of CWS with a p-value less than 0.0001. This clearly shows that CWS is significantly better than CWS-Cosine. Meanwhile, both semantic similarities have lower accuracy than CWS, and the differences are also statistically significant even with the conservative Bonferroni adjustment (i.e., the p-values in Table 1 are multiplied by three). The above experiments show that the widely used cosine similarity and the more sophisticated semantic similarity in WordNet are less accurate than the basic CWS in the summarization framework. This is an interesting result and can be viewed at least from the following two aspects. First, clue words, though straight forward, are good at capturing the important sentences within an email conversation. The higher accuracy of CWS may suggest that people tend to use the same words to communicate in email conversations. Some related words in the previous emails are adopted exactly or in another similar format (modulo stemming). This is different from other documents such as newspaper articles and formal reports. In those cases, the authors are usually professional in writing and choose their words carefully, even intentionally avoid repeating the same words to gain some diversity. However, for email conversation summarization, this does not appear to be the case. Moreover, in the previous discussion we only considered the accuracy in precision without considering the runtime issue. In order to have an idea of the runtime of the two methods, we did the following comparison. We randomly picked 1000 pairs of words from the 20 conversations and compute their semantic distance in “jcn”. It takes about 0.056 seconds to get the semantic similarity for one pair on the 359 average. 
In contrast, when the weight of edges are computed based on clue words, the average runtime to compute the SentScore for all sentences in a conversation is only 0.05 seconds, which is even a little less than the time to compute the semantic similarity of one pair of words. In other words, when CWS has generated the summary of one conversation, we can only get the semantic distance between one pair of words. Note that for each edge in the sentence quotation graph, we need to compute the distance for every pair of words in each sentence. Hence, the empirical results do not support the use of semantic similarity. In addition, we do not discuss the runtime performance of CWS-cosine here because of its extremely low accuracy. 6.4 Comparing Page-Rank and CWS Table 2 compares Page-Rank and CWS under different edge weights. We compare Page-Rank only with CWS because CWS is better than the other Generalized CWS methods as shown in the previous section. This table shows that Page-Rank has a lower accuracy than that of CWS and the difference is significant in all four cases. Moreover, when we compare Table 1 and 2 together, we can find that, for each kind of edge weight, Page-Rank has a lower accuracy than the corresponding Generalized CWS. Note that Page-Rank computes a node’s rank based on all the nodes and edges in the graph. In contrast, CWS only considers the similarity between neighboring nodes. The experimental result indicates that for email conversation, the local similarity based on clue words is more consistent with the human summarizers’ selections. 6.5 Evaluating Subjective Opinions Table 3 shows the result of using subjective opinions described in Section 5. The first 3 columns in this table are pyramid precision of CWS and using 2 lists of subjective words and phrases alone. We can see that by using subjective words alone, the precision of each subjective list is lower than that of CWS. However, when we integrate CWS and subjective words together, as shown in the remaining 2 columns, the precisions get improved consistently for both lists. The increase in precision is at least 0.04 with statistical significance. A natural question to ask is whether clue words and subjective words overlap much. Our CWS PR-Clue PR-Cosine PR-lesk PR-jcn Pyramid 0.60 0.51 0.37 0.54 0.50 p-value < 0.0001 < 0.0001 < 0.0001 < 0.0001 ROUGE-2 0.46 0.4 0.26 0.36 0.39 p-value 0.05 < 0.0001 0.001 0.02 ROUGE-L 0.54 0.49 0.36 0.44 0.48 p-value 0.06 < 0.0001 0.0005 0.02 Table 2: Compare Page-Rank with CWS CWS OpFind OpBear CWS+OpFind CWS+OpBear Pyramid 0.60 0.52 0.59 0.65 0.64 p-value 0.0003 0.8 <0.0001 0.0007 ROUGE-2 0.46 0.37 0.44 0.50 0.49 p-value 0.0004 0.5 0.004 0.06 ROUGE-L 0.54 0.48 0.56 0.60 0.59 p-value 0.01 0.6 0.0002 0.002 Table 3: Accuracy of Using Subjective Opinions analysis shows that the overlap is minimal. For the list of OpFind, the overlapped words are about 8% of clue words and 4% of OpFind that appear in the conversations. This result clearly shows that clue words and subjective words capture the importance of sentences from different angles and can be used together to gain a better accuracy. 7 Conclusions We study how to summarize email conversations based on the conversational cohesion and the subjective opinions. We first create a sentence quotation graph to represent the conversation structure on the sentence level. We adopt three cohesion metrics, clue words, semantic similarity and cosine similarity, to measure the weight of the edges. 
The Generalized ClueWordSummarizer and Page-Rank are applied to this graph to produce summaries. Moreover, we study how to include subjective opinions to help identify important sentences for summarization. The empirical evaluation shows the following two discoveries: (1) The basic CWS (based on clue words) obtains a higher accuracy and a better runtime performance than the other cohesion measures. It also has a significant higher accuracy than the Page-Rank algorithm. (2) By integrating clue words and subjective words (phrases), the accuracy of CWS is improved significantly. This reveals an interesting phenomenon and will be further studied. References Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of 360 the Intelligent Scalable Text Summarization Workshop (ISTS’97), ACL, Madrid, Spain. Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the seventh international conference on World Wide Web, pages 107–117. Giuseppe Carenini, Raymond T. Ng, and Xiaodong Zhou. 2005. Scalable discovery of hidden emails from large folders. In ACM SIGKDD’05, pages 544–549. Giuseppe Carenini, Raymond T. Ng, and Xiaodong Zhou. 2007. Summarizing email conversations with clue words. In WWW ’07: Proceedings of the 16th international conference on World Wide Web, pages 91–100. Simon Corston-Oliver, Eric K. Ringger, Michael Gamon, and Richard Campbell. 2004. Integration of email and task lists. In First conference on email and antiSpam(CEAS), Mountain View, California, USA, July 30-31. Nicolas Ducheneaut and Victoria Bellotti. 2001. E-mail as habitat: an exploration of embedded personal information management. Interactions, 8(5):30–38. G¨unes Erkan and Dragomir R. Radev. 2004. Lexrank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research(JAIR), 22:457–479. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In Proceedings of the International Conference on Language Resources and Evaluation, May 2426. Danyel Fisher and Paul Moody. 2002. Studies of automated collection of email records. In University of Irvine ISR Technical Report UCI-ISR-02-4. Daniel Jurafsky and James H. Martin. 2008. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition (Second Edition). Prentice-Hall. Soo-Min Kim and Eduard Hovy. 2005. Automatic detection of opinion bearing words and sentences. In Proceedings of the Second International Joint Conference on Natural Language Processing: Companion Volume, Jeju Island, Republic of Korea, October 1113. R. Mihalcea and P. Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), July. Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: incorporating human content selection variation in summarization evaluation. ACM Transaction on Speech and Language Processing, 4(2):4. Bo Pang and Lillian Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In ACL ’04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 271–278. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet::similarity - measuring the relatedness of concepts. 
In Proceedings of Fifth Annual Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-04), pages 38–41, May 3-5. Dragomir R. Radev, Hongyan Jing, Malgorzata Sty´s, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40(6):919–938, November. Owen Rambow, Lokesh Shrestha, John Chen, and Chirsty Lauridsen. 2004. Summarizing email threads. In HLT/NAACL, May 2–7. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2003), pages 105– 112. James G. Shanahan, Yan Qu, and Janyce Wiebe. 2005. Computing Attitude and Affect in Text: Theory and Applications (The Information Retrieval Series). Springer-Verlag New York, Inc. Lokesh Shrestha and Kathleen McKeown. 2004. Detection of question-answer pairs in email conversations. In Proceedings of COLING’04, pages 889–895, August 23–27. Stephen Wan and Kathleen McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING’04, the 20th International Conference on Computational Linguistics, August 23–27. Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 552– 559, Prague, Czech Republic, June. Theresa Wilson, Paul Hoffmann, Swapna Somasundaran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Opinionfinder: a system for subjectivity analysis. In Proceedings of HLT/EMNLP on Interactive Demonstrations, pages 34–35. Jen-Yuan Yeh and Aaron Harnly. 2006. Email thread reassembly using similarity matching. In Third Conference on Email and Anti-Spam (CEAS), July 27 - 28. 361
Proceedings of ACL-08: HLT, pages 362–370, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Ad Hoc Treebank Structures Markus Dickinson Department of Linguistics Indiana University [email protected] Abstract We outline the problem of ad hoc rules in treebanks, rules used for specific constructions in one data set and unlikely to be used again. These include ungeneralizable rules, erroneous rules, rules for ungrammatical text, and rules which are not consistent with the rest of the annotation scheme. Based on a simple notion of rule equivalence and on the idea of finding rules unlike any others, we develop two methods for detecting ad hoc rules in flat treebanks and show they are successful in detecting such rules. This is done by examining evidence across the grammar and without making any reference to context. 1 Introduction and Motivation When extracting rules from constituency-based treebanks employing flat structures, grammars often limit the set of rules (e.g., Charniak, 1996), due to the large number of rules (Krotov et al., 1998) and “leaky” rules that can lead to mis-analysis (Foth and Menzel, 2006). Although frequency-based criteria are often used, these are not without problems because low-frequency rules can be valid and potentially useful rules (see, e.g., Daelemans et al., 1999), and high-frequency rules can be erroneous (see., e.g., Dickinson and Meurers, 2005). A key issue in determining the rule set is rule generalizability: will these rules be needed to analyze new data? This issue is of even more importance when considering the task of porting a parser trained on one genre to another genre (e.g., Gildea, 2001). Infrequent rules in one genre may be quite frequent in another (Sekine, 1997) and their frequency may be unrelated to their usefulness for parsing (Foth and Menzel, 2006). Thus, we need to carefully consider the applicability of rules in a treebank to new text. Specifically, we need to examine ad hoc rules, rules used for particular constructions specific to one data set and unlikely to be used on new data. This is why low-frequency rules often do not extend to new data: if they were only used once, it was likely for a specific reason, not something we would expect to see again. Ungeneralizable rules, however, do not extend to new text for a variety of reasons, not all of which can be captured strictly by frequency. While there are simply phenomena which, for various reasons, are rarely used (e.g., long coordinated lists), other ungeneralizable phenomena are potentially more troubling. For example, when ungrammatical or non-standard text is used, treebanks employ rules to cover it, but do not usually indicate ungrammaticality in the annotation. These rules are only to be used in certain situations, e.g., for typographical conventions such as footnotes, and the fact that the situation is irregular would be useful to know if the purpose of an induced grammar is to support robust parsing. And these rules are outright damaging if the set of treebank rules is intended to accurately capture the grammar of a language. This is true of precision grammars, where analyses can be more or less preferred (see, e.g., Wagner et al., 2007), and in applications like intelligent computer-aided language learning, where learner input is parsed to detect what is correct or not (see, e.g., Vandeventer Faltin, 2003, ch. 2). 
If a treebank grammar is used (e.g., Metcalf and Boyd, 362 2006), then one needs to isolate rules for ungrammatical data, to be able to distinguish grammatical from ungrammatical input. Detecting ad hoc rules can also reveal issues related to rule quality. Many ad hoc rules exist because they are erroneous. Not only are errors inherently undesirable for obtaining an accurate grammar, but training on data with erroneous rules can be detrimental to parsing performance (e.g., Dickinson and Meurers, 2005; Hogan, 2007) As annotation schemes are not guaranteed to be completely consistent, other ad hoc rules point to non-uniform aspects of the annotation scheme. Thus, identifying ad hoc rules can also provide feedback on annotation schemes, an especially important step if one is to use the treebank for specific applications (see, e.g., Vadas and Curran, 2007), or if one is in the process of developing a treebank. Although statistical techniques have been employed to detect anomalous annotation (Ule and Simov, 2004; Eskin, 2000), these methods do not account for linguistically-motivated generalizations across rules, and no full evaluation has been done on a treebank. Our starting point for detecting ad hoc rules is also that they are dissimilar to the rest of the grammar, but we rely on a notion of equivalence which accounts for linguistic generalizations, as described in section 2. We generalize equivalence in a corpus-independent way in section 3 to detect ad hoc rules, using two different methods to determine when rules are dissimilar. The results in section 4 show the success of the method in identifying all types of ad hoc rules. 2 Background 2.1 Equivalence classes To define dissimilarity, we need a notion of similarity, and, a starting point for this is the error detection method outlined in Dickinson and Meurers (2005). Since most natural language expressions are endocentric, i.e., a category projects to a phrase of the same category (e.g., X-bar Schema, Jackendoff, 1977), daughters lists with more than one possible mother are flagged as potentially containing an error. For example, IN NP1 has nine different mothers in the Wall Street Journal (WSJ) portion of the Penn 1Appendix A lists all categories used in this paper. Treebank (Marcus et al., 1993), six of which are errors. This method can be extended to increase recall, by treating similar daughters lists as equivalent (Dickinson, 2006, 2008). For example, the daughters lists ADVP RB ADVP and ADVP , RB ADVP in (1) can be put into the same equivalence class, because they predict the same mother category. With this equivalence, the two different mothers, PP and ADVP, point to an error (in PP). (1) a. to slash its work force in the U.S. , [PP [ADV P as] soon/RB [ADV P as next month]] b. to report ... [ADV P [ADV P immediately] ,/, not/RB [ADV P a month later]] Anything not contributing to predicting the mother is ignored in order to form equivalence classes. Following the steps below, 15,989 daughters lists are grouped into 3783 classes in the WSJ. 1. Remove daughter categories that are always non-predictive to phrase categorization, i.e., always adjuncts, such as punctuation and the parenthetical (PRN) category. 2. Group head-equivalent lexical categories, e.g., NN (common noun) and NNS (plural noun). 3. Model adjacent identical elements as a single element, e.g., NN NN becomes NN. While the sets of non-predictive and head-equivalent categories are treebank-specific, they require only a small amount of manual effort. 
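The three steps can be made concrete with a short sketch; the category sets below are small illustrative samples, not the full treebank-specific lists.

NON_PREDICTIVE = {',', '.', ':', '``', "''", 'PRN'}   # always-adjunct categories (sample)
HEAD_EQUIV = {'NNS': 'NN'}                            # head-equivalent tags (sample)

def equivalence_class(daughters):
    """Map a daughters list, e.g. ['ADVP', ',', 'RB', 'ADVP'], to its class."""
    # 1. remove categories that never help predict the mother
    ds = [d for d in daughters if d not in NON_PREDICTIVE]
    # 2. group head-equivalent lexical categories
    ds = [HEAD_EQUIV.get(d, d) for d in ds]
    # 3. model adjacent identical elements as a single element
    reduced = []
    for d in ds:
        if not reduced or reduced[-1] != d:
            reduced.append(d)
    return tuple(reduced)

# The two daughters lists of example (1) fall into the same class.
assert (equivalence_class(['ADVP', 'RB', 'ADVP'])
        == equivalence_class(['ADVP', ',', 'RB', 'ADVP']))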
2.2 Non-equivalence classes Rules in the same equivalence class not only predict the same mother, they provide support that the daughters list is accurate—the more rules within a class, the better evidence that the annotation scheme legitimately licenses that sequence. A lack of similar rules indicates a potentially anomalous structure. Of the 3783 equivalence classes for the whole WSJ, 2141 are unique, i.e., have only one unique daughters list. For example, in (2), the daughters list RB TO JJ NNS is a daughters list with no correlates in the treebank; it is erroneous because close to wholesale needs another layer of structure, namely adjective phrase (ADJP) (Bies et al., 1995, p. 179). 363 (2) they sell [merchandise] for [NP close/RB to/TO wholesale/JJ prices/NNS ] Using this strict equivalence to identify ad hoc rules is quite successful (Dickinson, 2008), but it misses a significant number of generalizations. These equivalences were not designed to assist in determining linguistic patterns from non-linguistic patterns, but to predict the mother category, and thus many correct rules are incorrectly flagged. To provide support for the correct rule NP →DT CD JJS NNP JJ NNS in (3), for instance, we need to look at some highly similar rules in the treebank, e.g., the three instances of NP →DT CD JJ NNP NNS, which are not strictly equivalent to the rule in (3). (3) [NP the/DT 100/CD largest/JJS Nasdaq/NNP financial/JJ stocks/NNS ] 3 Rule dissimilarity and generalizability 3.1 Criteria for rule equivalence With a notion of (non-)equivalence as a heuristic, we can begin to detect ad hoc rules. First, however, we need to redefine equivalence to better reflect syntactic patterns. Firstly, in order for two rules to be in the same equivalence class—or even to be similar—the mother must also be the same. This captures the property that identical daughters lists with different mothers are distinct (cf. Dickinson and Meurers, 2005). For example, looking back at (1), the one occurrence of ADVP →ADVP , RB ADVP is very similar to the 4 instances of ADVP →RB ADVP, whereas the one instance of PP →ADVP RB ADVP is not and is erroneous. Daughters lists are thus now only compared to rules with the same mother. Secondly, we use only two steps to determine equivalence: 1) remove non-predictive daughter categories, and 2) group head-equivalent lexical categories.2 While useful for predicting the same mother, the step of Kleene reduction is less useful for our purposes since it ignores potential differences in argument structure. It is important to know how many identical categories can appear within a given rule, to tell whether it is reliable; VP →VB 2See Dickinson (2006) for the full mappings. NP and VP →VB NP NP, for example, are two different rules.3 Thirdly, we base our scores on token counts, in order to capture the fact that the more often we observe a rule, the more reliable it seems to be. This is not entirely true, as mentioned above, but this prevents frequent rules such as NP →EX (1075 occurrences) from being seen as an anomaly. With this new notion of equivalence, we can now proceed to accounting for similar rules in detecting ad hoc rules. 3.2 Reliability scores In order to devise a scoring method to reflect similar rules, the simplest way is to use a version of edit distance between rules, as we do under the Whole daughters scoring below. This reflects the intuition that rules with similar lists of daughters reflect the same properties. 
This is the “positive” way of scoring rules, in that we start with a basic notion of equivalence and look for more positive evidence that the rule is legitimate. Rules without such evidence are likely ad hoc. Our goal, though, is to take the results and examine the anomalous rules, i.e., those which lack strong evidence from other rules. We can thus more directly look for “negative” evidence that a rule is ad hoc. To do this, we can examine the weakest parts of each rule and compare those across the corpus, to see which anomalous patterns emerge; we do this in the Bigram scoring section below. Because these methods exploit different properties of rules and use different levels of abstraction, they have complementary aspects. Both start with the same assumptions about what makes rules equivalent, but diverge in how they look for rules which do not fit well into these equivalences. Whole daughters scoring The first method to detect ad hoc rules directly accounts for similar rules across equivalence classes. Each rule type is assigned a reliability score, calculated as follows: 1. Map a rule to its equivalence class. 2. For every rule token within the equivalence class, add a score of 1. 3Experiments done with Kleene reduction show that the results are indeed worse. 364 3. For every rule token within a highly similar equivalence class, add a score of 1 2. Positive evidence that a rule is legitimate is obtained by looking at similar classes in step #3, and then rules with the lowest scores are flagged as potentially ad hoc (see section 4.1). To determine similarity, we use a modified Levenshtein distance, where only insertions and deletions are allowed; a distance of one qualifies as highly similar.4 Allowing two or more changes would be problematic for unary rules (e.g., (4a), and in general, would allow us to add and subtract dissimilar categories. We thus remain conservative in determining similarity. Also, we do not utilize substitutions: while they might be useful in some cases, it is too problematic to include them, given the difference in meaning of each category. Consider the problematic rules in (4). In (4a), which occurs once, if we allow substitutions, then we will find 760 “comparable” instances of VP →VB, despite the vast difference in category (verb vs. adverb). Likewise, the rule in (4b), which occurs 8 times, would be “comparable” to the 602 instances of PP →IN PP, used for multi-word prepositions like because of.5 To maintain these true differences, substitutions are not allowed. (4) a. VP →RB b. PP →JJ PP This notion of similarity captures many generalizations, e.g., that adverbial phrases are optional. For example, in (5), the rule reduces to S →PP ADVP NP ADVP VP. With a strict notion of equivalence, there are no comparable rules. However, the class S →PP NP ADVP VP, with 198 members, is highly similar, indicating more confidence in this correct rule. (5) [S [PP During his years in Chiriqui] ,/, [ADV P however] ,/, [NP Mr. Noriega] [ADV P also] [V P revealed himself as an officer as perverse as he was ingenious] ./. ] 4The score is thus more generally 1 1+distance, although we ascribe no theoretical meaning to this 5Rules like PP →JJ PP might seem to be correct, but this depends upon the annotation scheme. Phrases starting with due to are sometimes annotated with this rule, but they also occur as ADJP or ADVP or with due as RB. If PP →JJ PP is correct, identifying this rule actually points to other erroneous rules. 
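To make the whole daughters scoring concrete, the following sketch (ours, under the assumption that every rule token has already been reduced as in section 3.1, i.e. non-predictive daughters removed and head-equivalent tags grouped, with no Kleene reduction) computes the reliability scores; the pairwise comparison loop is kept simple for clarity rather than efficiency.

from collections import Counter

def id_distance_one(a, b):
    """True iff daughters tuple b differs from a by exactly one insertion or
    deletion (no substitutions), i.e. the modified Levenshtein distance is 1."""
    if abs(len(a) - len(b)) != 1:
        return False
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def whole_daughters_scores(rule_tokens):
    """rule_tokens: a list of (mother, reduced_daughters) pairs, one per rule token.
    Returns a reliability score per rule type: 1 per token in the same class,
    plus 1/2 per token in a highly similar class with the same mother."""
    counts = Counter(rule_tokens)
    scores = {}
    for (mother, dtrs), n in counts.items():
        score = float(n)
        for (m2, d2), n2 in counts.items():
            if m2 == mother and d2 != dtrs and id_distance_one(dtrs, d2):
                score += 0.5 * n2
        scores[(mother, dtrs)] = score
    return scores

For the rule in (5), for instance, each of the 198 tokens of the highly similar class S -> PP NP ADVP VP would contribute one half to the score of S -> PP ADVP NP ADVP VP.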
Bigram scoring The other method of detecting ad hoc rules calculates reliability scores by focusing specifically on what the classes do not have in common. Instead of examining and comparing rules in their entirety, this method abstracts a rule to its component parts, similar to features using information about n-grams of daughter nodes in parse reranking models (e.g., Collins and Koo, 2005). We abstract to bigrams, including added START and END tags, as longer sequences risk missing generalizations; e.g., unary rules would have no comparable rules. We score rule types as follows: 1. Map a rule to its equivalence class, resulting in a reduced rule. 2. Calculate the frequency of each <mother,bigram> pair in a reduced rule: for every reduced rule token with the same pair, add a score of 1 for that bigram pair. 3. Assign the score of the least-frequent bigram as the score of the rule. We assign the score of the lowest-scoring bigram because we are interested in anomalous sequences. This is in the spirit of Kvˇeton and Oliva (2002), who define invalid bigrams for POS annotation sequences in order to detect annotation errors.. As one example, consider (6), where the reduced rule NP →NP DT NNP is composed of the bigrams START NP, NP DT, DT NNP, and NNP END. All of these are relatively common (more than a hundred occurrences each), except for NP DT, which appears in only two other rule types. Indeed, DT is an incorrect tag (NNP is correct): when NP is the first daughter of NP, it is generally a possessive, precluding the use of a determiner. (6) (NP (NP ABC ’s) (‘‘ ‘‘) (DT This) (NNP Week)) The whole daughters scoring misses such problematic structures because it does not explicitly look for anomalies. The disadvantage of the bigram scoring, however, is its missing of the big picture: for example, the erroneous rule NP →NNP CC NP gets a large score (1905) because each subsequence is quite common. But this exact sequence is rather rare (NNP and NP are not generally coordinated), so the whole daughters scoring assigns a low score (4.0). 365 4 Evaluation To gauge our success in detecting ad hoc rules, we evaluate the reliability scores in two main ways: 1) whether unreliable rules generalize to new data (section 4.1), and, more importantly, 2) whether the unreliable rules which do generalize are ad hoc in other ways—e.g., erroneous (section 4.2). To measure this, we use sections 02-21 of the WSJ corpus as training data to derive scores, section 23 as testing data, and section 24 as development data. 4.1 Ungeneralizable rules To compare the effectiveness of the two scoring methods in identifying ungeneralizable rules, we examine how many rules from the training data do not appear in the heldout data, for different thresholds. In figure 1, for example, the method identifies 3548 rules with scores less than or equal to 50, 3439 of which do not appear in the development data, resulting in an ungeneralizability rate of 96.93%. To interpret the figures below, we first need to know that of the 15,246 rules from the training data, 1832 occur in the development data, or only 12.02%, corresponding to 27,038 rule tokens. There are also 396 new rules in the development data, making for a total of 2228 rule types and 27,455 rule tokens. 4.1.1 Development data results The results are shown in figure 1 for the whole daughters scoring method and in figure 2 for the bigram method. 
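Before turning to these results, the bigram scoring just described can be sketched in the same terms (again our own illustration, with rule tokens assumed to be reduced as in section 3.1).

from collections import Counter

def bigram_scores(rule_tokens):
    """rule_tokens: a list of (mother, reduced_daughters) pairs, one per rule token.
    Each rule type receives the corpus frequency of its least frequent
    <mother, daughter-bigram> pair, with START and END tags added."""
    def padded(dtrs):
        return ("START",) + tuple(dtrs) + ("END",)

    bigram_counts = Counter()
    for mother, dtrs in rule_tokens:
        seq = padded(dtrs)
        for left, right in zip(seq, seq[1:]):
            bigram_counts[(mother, left, right)] += 1

    scores = {}
    for mother, dtrs in set(rule_tokens):
        seq = padded(dtrs)
        scores[(mother, dtrs)] = min(bigram_counts[(mother, l, r)]
                                     for l, r in zip(seq, seq[1:]))
    return scores

For the reduced rule NP -> NP DT NNP in (6), the minimum is taken over the bigrams START NP, NP DT, DT NNP, and NNP END, and the rare NP DT pair determines the score.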
Both methods successfully identify rules with little chance of occurring in new data, the whole daughters method performing slightly better. Thresh. Rules Unused Ungen. 1 311 311 100.00% 25 2683 2616 97.50% 50 3548 3439 96.93% 100 4596 4419 96.15% Figure 1: Whole daughter ungeneralizability (devo.) 4.1.2 Comparing across data Is this ungeneralizability consistent over different data sets? To evaluate this, we use the whole daughters scoring method, since it had a higher ungeneralizability rate in the development data, and we use Thresh. Rules Unused Ungen. 1 599 592 98.83% 5 1661 1628 98.01% 10 2349 2289 97.44% 15 2749 2657 96.65% 20 3120 2997 96.06% Figure 2: Bigram ungeneralizability (devo.) section 23 of the WSJ and the Brown corpus portion of the Penn Treebank. Given different data sizes, we now report the coverage of rules in the heldout data, for both type and token counts. For instance, in figure 3, for a threshold of 50, 108 rule types appear in the development data, and they appear 141 times. With 2228 total rule types and 27,455 rule tokens, this results in coverages of 4.85% and 0.51%, respectively. In figures 3, 4, and 5, we observe the same trends for all data sets: low-scoring rules have little generalizability to new data. For a cutoff of 50, for example, rules at or below this mark account for approximately 5% of the rule types used in the data and half a percent of the tokens. Types Tokens Thresh. Used Cov. Used Cov. 10 23 1.03% 25 0.09% 25 67 3.01% 78 0.28% 50 108 4.85% 141 0.51% 100 177 7.94% 263 0.96% All 1832 82.22% 27,038 98.48% Figure 3: Coverage of rules in WSJ, section 24 Types Tokens Thresh. Used Cov. Used Cov. 10 33 1.17% 39 0.08% 25 82 2.90% 117 0.25% 50 155 5.49% 241 0.51% 100 242 8.57% 416 0.88% All 2266 80.24% 46,375 98.74% Figure 4: Coverage of rules in WSJ, section 23 Note in the results for the larger Brown corpus that the percentage of overall rule types from the 366 Types Tokens Thresh. Used Cov. Used Cov. 10 187 1.51% 603 0.15% 25 402 3.25% 1838 0.45% 50 562 4.54% 2628 0.64% 100 778 6.28% 5355 1.30% All 4675 37.75% 398,136 96.77% Figure 5: Coverage of rules in Brown corpus training data is only 37.75%, vastly smaller than the approximately 80% from either WSJ data set. This illustrates the variety of the grammar needed to parse this data versus the grammar used in training. We have isolated thousands of rules with little chance of being observed in the evaluation data, and, as we will see in the next section, many of the rules which appear are problematic in other ways. The ungeneralizabilty results make sense, in light of the fact that reliability scores are based on token counts. Using reliability scores, however, has the advantage of being able to identify infrequent but correct rules (cf. example (5)) and also frequent but unhelpful rules. For example, in (7), we find erroneous cases from the development data of the rules WHNP → WHNP WHPP (five should be NP) and VP →NNP NP (OKing should be VBG). These rules appear 27 and 16 times, respectively, but have scores of only 28.0 and 30.5, showing their unreliability. Future work can separate the effect of frequency from the effect of similarity (see also section 4.3). (7) a. [WHNP [WHNP five] [WHPP of whom]] b. received hefty sums for * [V P OKing/NNP [NP the purchase of ...]] 4.2 Other ad hoc rules The results in section 4.1 are perhaps unsuprising, given that many of the identified rules are simply rare. What is important, therefore, is to figure out why some rules appeared in the heldout data at all. 
As this requires qualitative analysis, we handexamined the rules appearing in the development data. We set out to examine about 100 rules, and so we report only for the corresponding threshold, finding that ad hoc rules are predominant. For the whole daughters scoring, at the 50 threshold, 55 (50.93%) of the 108 rules in the development data are errors. Adding these to the ungeneralizable rules, 98.48% (3494/3548) of the 3548 rules are unhelpful for parsing, at least for this data set. An additional 12 rules cover non-English or fragmented constructions, making for 67 clearly ad hoc rules. For the bigram scoring, at the 20 threshold, 67 (54.47%) of the 123 rules in the development data are erroneous, and 8 more are ungrammatical. This means that 97.88% (3054/3120) of the rules at this threshold are unhelpful for parsing this data, still slightly lower than the whole daughters scoring. 4.2.1 Problematic cases But what about the remaining rules for both methods which are not erroneous or ungrammatical? First, as mentioned at the outset, there are several cases which reveal non-uniformity in the annotation scheme or guidelines. This may be justifiable, but it has an impact on grammars using the annotation scheme. Consider the case of NAC (not a constituent), used for complex NP premodifiers. The description for tagging titles in the guidelines (Bies et al., 1995, p. 208-209) covers the exact case found in section 24, shown in (8a). This rule, NAC →NP PP, is one of the lowest-scoring rules which occurs, with a whole daughters score of 2.5 and a bigram score of 3, yet it is correct. Examining the guidelines more closely, however, we find examples such as (8b). Here, no extra NP layer is added, and it is not immediately clear what the criteria are for having an intermediate NP. (8) a. a “ [NAC [NP Points] [PP of Light]] ” foundation b. The Wall Street Journal “ [NAC American Way [PP of Buying]] ” Survey Secondly, rules with mothers which are simply rare are prone to receive lower scores, regardless of their generalizability. For example, the rules dominated by SINV, SQ, or SBARQ are all correct (6 in whole daughters, 5 in bigram), but questions are not very frequent in this news text: SQ appears only 350 times and SBARQ 222 times in the training data. One might thus consider normalizing the scores based on the overall frequency of the parent. Finally, and most prominently, there are issues with coordinate structures. For example, NP →NN CC DT receives a low whole daughters score of 7.0, 367 despite the fact that NP →NN and NP →DT are very common rules. This is a problem for both methods: for the whole daughters scoring, of the 108, 28 of them had a conjunct (CC or CONJP) in the daughters list, and 18 of these were correct. Likewise, for the bigram scoring, 18 had a conjunct, and 12 were correct. Reworking similarity scores to reflect coordinate structures and handle each case separately would require treebank-specific knowledge: the Penn Treebank, for instance, distinguishes unlike coordinated phrases (UCP) from other coordinated phrases, each behaving differently. 4.2.2 Comparing the methods There are other cases in which one method outperforms the other, highlighting their strengths and weaknesses. In general, both methods fare badly with clausal rules, i.e., those dominated by S, SBAR, SINV, SQ, or SBARQ, but the effect is slightly greater on the bigram scoring, where 20 of the 123 rules are clausal, and 16 of these are correct (i.e., 80% of them are misclassified). 
To understand this, we have to realize that most modifiers are adjoined at the sentence level when there is any doubt about their attachment (Bies et al., 1995, p. 13), leading to correct but rare subsequences. In sentence (9), for example, the reduced rule S →SBAR PP NP VP arises because both the introductory SBAR and the PP are at the same level. This SBAR PP sequence is fairly rare, resulting in a bigram score of 13. (9) [S [SBAR As the best opportunities for corporate restructurings are exhausted * of course] ,/, [PP at some point] [NP the market] [V P will start * to reject them] ./.] Whole daughters scoring, on the other hand, assigns this rule a high reliability score of 2775.0, due to the fact that both SBAR NP VP and PP NP VP sequences are common. For rules with long modifier sequences, whole daughters scoring seems to be more effective since modifiers are easily skipped over in comparing to other rules. Whole daughters scoring is also imprecise with clausal rules (10/12 are misclassified), but identifies less of them, and they tend to be for rare mothers (see above). Various cases are worse for the whole daughters scoring. First are quantifier phrases (QPs), which have a highly varied set of possible heads and arguments. QP is “used for multiword numerical expressions that occur within NP (and sometimes ADJP), where the QP corresponds frequently to some kind of complex determiner phrase” (Bies et al., 1995, p. 193). This definition leads to rules which look different from QP to QP. Some of the lowest-scoring, correct rules are shown in (10). We can see that there is not a great deal of commonality about what comprises quantifier phrases, even if subparts are common and thus not flagged by the bigram method. (10) a. [QP only/RB three/CD of/IN the/DT nine/CD] justices b. [QP too/RB many/JJ] cooks c. 10 % [QP or/CC more/JJR] Secondly, whole daughters scoring relies on complete sequences, and thus whether Kleene reduction (step #3 in section 2) is used makes a marked difference. For example, in (11), the rule NP →DT JJ NNP NNP JJ NN NN is completely correct, despite its low whole daughters score of 15.5 and one occurrence. This rule is similar to the 10 occurrences of NP →DT JJ NNP JJ NN in the training set, but we cannot see this without performing Kleene reduction. For noun phrases at least, using Kleene reduction might more accurately capture comparability. This is less of an issue for bigram scoring, as all the bigrams are perfectly valid, resulting here in a relatively high score (556). (11) [NP the/DT basic/JJ Macintosh/NNP Plus/NNP central/JJ processing/NN unit/NN ] 4.3 Discriminating rare rules In an effort to determine the effectiveness of the scores on isolating structures which are not linguistically sound, in a way which factors out frequency, we sampled 50 rules occurring only once in the training data. We marked for each whether it was correct or how it was ad hoc, and we did this blindly, i.e., without knowledge of the rule scores. Of these 50, only 9 are errors, 2 cover ungrammatical constructions, and 8 more are unclear. Looking at the bottom 25 scores, we find that the whole daughters and bigrams methods both find 6 errors, or 67% of them, additionally finding 5 unclear cases for the whole daughters and 6 for the bigrams method. Erroneous rules in the top half appear to be ones which 368 happened to be errors, but could actually be correct in other contexts (e.g.,NP →NN NNP NNP CD). 
Although it is a small data set, the scores seem to be effectively sorting rare rules. 5 Summary and Outlook We have outlined the problem of ad hoc rules in treebanks—ungeneralizable rules, erroneous rules, rules for ungrammatical text, and rules which are not necessarily consistent with the rest of the annotation scheme. Based on the idea of finding rules unlike any others, we have developed methods for detecting ad hoc rules in flat treebanks, simply by examining properties across the grammar and without making any reference to context. We have been careful not to say how to use the reliability scores. First, without 100% accuracy, it is hard to know what their removal from a parsing model would mean. Secondly, assigning confidence scores to rules, as we have done, has a number of other potential applications. Parse reranking techniques, for instance, rely on knowledge about features other than those found in the core parsing model in order to determine the best parse (e.g., Collins and Koo, 2005; Charniak and Johnson, 2005). Active learning techniques also require a scoring function for parser confidence (e.g., Hwa et al., 2003), and often use uncertainty scores of parse trees in order to select representative samples for learning (e.g., Tang et al., 2002). Both could benefit from more information about rule reliability. Given the success of the methods, we can strive to make them more corpus-independent, by removing the dependence on equivalence classes. In some ways, comparing rules to similar rules already naturally captures equivalences among rules. In this process, it will also be important to sort out the impact of similarity from the impact of frequency on identifying ad hoc structures. Acknowledgments Thanks to the three anonymous reviewers for their helpful comments. This material is based upon work supported by the National Science Foundation under Grant No. IIS-0623837. A Relevant Penn Treebank categories CC Coordinating conjunction CD Cardinal number DT Determiner EX Existential there IN Preposition or subordinating conjunction JJ Adjective JJR Adjective, comparative JJS Adjective, superlative NN Noun, singular or mass NNS Noun, plural NNP Proper noun, singular RB Adverb TO to VB Verb, base form VBG Verb, gerund or present participle Figure 6: POS tags in the PTB (Santorini, 1990) ADJP Adjective Phrase ADVP Adverb Phrase CONJP Conjunction Phrase NAC Not A Constituent NP Noun Phrase PP Prepositional Phrase PRN Parenthetical QP Quantifier Phrase S Simple declarative clause SBAR Clause introduced by subordinating conjunction SBARQ Direct question introduced by wh-word/phrase SINV Inverted declarative sentence SQ Inverted yes/no question UCP Unlike Coordinated Phrase VP Verb Phrase WHNP Wh-noun Phrase WHPP Wh-prepositional Phrase Figure 7: Syntactic categories in the PTB (Bies et al., 1995) References Bies, Ann, Mark Ferguson, Karen Katz and Robert MacIntyre (1995). Bracketing Guidelines for Treebank II Style Penn Treebank Project. University of Pennsylvania. Charniak, Eugene (1996). Tree-Bank Grammars. Tech. Rep. CS-96-02, Department of Computer Science, Brown University, Providence, RI. 369 Charniak, Eugene and Mark Johnson (2005). Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of ACL-05. Ann Arbor, MI, USA, pp. 173–180. Collins, Michael and Terry Koo (2005). Discriminative Reranking for Natural Language Parsing. Computational Linguistics 31(1), 25–69. Daelemans, Walter, Antal van den Bosch and Jakub Zavrel (1999). 
Forgetting Exceptions is Harmful in Language Learning. Machine Learning 34, 11– 41. Dickinson, Markus (2006). Rule Equivalence for Error Detection. In Proceedings of TLT 2006. Prague, Czech Republic. Dickinson, Markus (2008). Similarity and Dissimilarity in Treebank Grammars. In 18th International Congress of Linguists (CIL18). Seoul. Dickinson, Markus and W. Detmar Meurers (2005). Prune Diseased Branches to Get Healthy Trees! How to Find Erroneous Local Trees in a Treebank and Why It Matters. In Proceedings of TLT 2005. Barcelona, Spain. Eskin, Eleazar (2000). Automatic Corpus Correction with Anomaly Detection. In Proceedings of NAACL-00. Seattle, Washington, pp. 148–153. Foth, Kilian and Wolfgang Menzel (2006). Robust Parsing: More with Less. In Proceedings of the workshop on Robust Methods in Analysis of Natural Language Data (ROMAND 2006). Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA. Hogan, Deirdre (2007). Coordinate Noun Phrase Disambiguation in a Generative Parsing Model. In Proceedings of ACL-07. Prague, pp. 680–687. Hwa, Rebecca, Miles Osborne, Anoop Sarkar and Mark Steedman (2003). Corrected Co-training for Statistical Parsers. In Proceedings of ICML-2003. Washington, DC. Jackendoff, Ray (1977). X’ Syntax: A Study of Phrase Structure. Cambridge, MA: MIT Press. Krotov, Alexander, Mark Hepple, Robert J. Gaizauskas and Yorick Wilks (1998). Compacting the Penn Treebank Grammar. In Proceedings of ACL-98. pp. 699–703. Kvˇeton, Pavel and Karel Oliva (2002). Achieving an Almost Correct PoS-Tagged Corpus. In Text, Speech and Dialogue (TSD). pp. 19–26. Marcus, M., Beatrice Santorini and M. A. Marcinkiewicz (1993). Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2), 313–330. Metcalf, Vanessa and Adriane Boyd (2006). Headlexicalized PCFGs for Verb Subcategorization Error Diagnosis in ICALL. In Workshop on Interfaces of Intelligent Computer-Assisted Language Learning. Columbus, OH. Santorini, Beatrice (1990). Part-Of-Speech Tagging Guidelines for the Penn Treebank Project (3rd Revision, 2nd printing). Tech. Rep. MS-CIS-90-47, The University of Pennsylvania, Philadelphia, PA. Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC. Tang, Min, Xiaoqiang Luo and Salim Roukos (2002). Active Learning for Statistical Natural Language Parsing. In Proceedings of ACL-02. Philadelphia, pp. 120–127. Ule, Tylman and Kiril Simov (2004). Unexpected Productions May Well be Errors. In Proceedings of LREC 2004. Lisbon, Portugal, pp. 1795–1798. Vadas, David and James Curran (2007). Adding Noun Phrase Structure to the Penn Treebank. In Proceedings of ACL-07. Prague, pp. 240–247. Vandeventer Faltin, Anne (2003). Syntactic error diagnosis in the context of computer assisted language learning. Th`ese de doctorat, Universit´e de Gen`eve, Gen`eve. Wagner, Joachim, Jennifer Foster and Josef van Genabith (2007). A Comparative Evaluation of Deep and Shallow Approaches to the Automatic Detection of Common Grammatical Errors. In Proceedings of EMNLP-CoNLL 2007. pp. 112– 121. 370
2008
42
Proceedings of ACL-08: HLT, pages 371–379, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Single Generative Model for Joint Morphological Segmentation and Syntactic Parsing Yoav Goldberg Ben Gurion University of the Negev Department of Computer Science POB 653 Be’er Sheva, 84105, Israel [email protected] Reut Tsarfaty Institute for Logic Language and Computation University of Amsterdam Plantage Muidergracht 24, Amsterdam, NL [email protected] Abstract Morphologicalprocesses in Semitic languages deliver space-delimited words which introduce multiple, distinct, syntactic units into the structure of the input sentence. These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance. Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. Using a treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling technique our model outperforms previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. 1 Introduction Current state-of-the-art broad-coverage parsers assume a direct correspondence between the lexical items ingrained in the proposed syntactic analyses (the yields of syntactic parse-trees) and the spacedelimited tokens (henceforth, ‘tokens’) that constitute the unanalyzed surface forms (utterances). In Semitic languages the situation is very different. In Modern Hebrew (Hebrew), a Semitic language with very rich morphology, particles marking conjunctions, prepositions, complementizers and relativizers are bound elements prefixed to the word (Glinert, 1989). The Hebrew token ‘bcl’1, for example, stands for the complete prepositional phrase 1We adopt here the transliteration of (Sima’an et al., 2001). “in the shadow”. This token may further embed into a larger utterance, e.g., ‘bcl hneim’ (literally “in-the-shadow the-pleasant”, meaning roughly “in the pleasant shadow”) in which the dominated Noun is modified by a proceeding space-delimited adjective. It should be clear from the onset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”). This leads to word- and constituent-boundaries discrepancy, which breaks the assumptions underlying current state-of-the-art statistical parsers. One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et al., 2006)). The input for the segmentation task is however highly ambiguous for Semitic languages, and surface forms (tokens) may admit multiple possible analyses as in (BarHaim et al., 2007; Adler and Elhadad, 2006). The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun. The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of spacedelimited tokens, and the expected number of leaves in the syntactic analysis in not known in advance. 
Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task. Cohen and Smith (2007) followed up on these results and pro371 posed a system for joint inference of morphological and syntactic structures using factored models each designed and trained on its own. Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework. We claim that no particular morphological segmentation is a-priory more likely for surface forms before exploring the compositional nature of syntactic structures, including manifestations of various long-distance dependencies. Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.2 2 Modern Hebrew Structure Segmental morphology Hebrew consists of seven particles m(“from”) f(“when”/“who”/“that”) h(“the”) w(“and”) k(“like”) l(“to”) and b(“in”). which may never appear in isolation and must always attach as prefixes to the following open-class category item we refer to as stem. Several such particles may be prefixed onto a single stem, in which case the affixation is subject to strict linear precedence constraints. Co-occurrences among the particles themselves are subject to further syntactic and lexical constraints relative to the stem. While the linear precedence of segmental morphemes within a token is subject to constraints, the dominance relations among their mother and sister constituents is rather free. The relativizer f(“that”) for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries. The attachment in such cases encompasses a long distance dependency that cannot be captured by Markovian processes that are typically used for morphological disambiguation. The same argument holds for resolving PP attachment of a prefixed preposition or marking conjunction of elements of any kind. A less canonical representation of segmental mor2Standalone parsing models assume a segmentation Oracle. phology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. This process triggers ambiguity as for the definiteness status of Nouns following these particles.We refer to such cases in which the concatenation of elements does not strictly correspond to the original surface form as super-segmental morphology. An additional case of super-segmental morphology is the case of Pronominal Clitics. Inflectional features marking pronominal elements may be attached to different kinds of categories marking their pronominal complements. The additional morphological material in such cases appears after the stem and realizes the extended meaning. The current work treats both segmental and super-segmental phenomena, yet we note that there may be more adequate ways to treat supersegmental phenomena assuming Word-Based morphology as we explore in (Tsarfaty and Goldberg, 2008). 
Lexical and Morphological Ambiguity The rich morphological processes for deriving Hebrew stems give rise to a high degree of ambiguity for Hebrew space-delimited tokens. The form fmnh, for example, can be understood as the verb “lubricated”, the possessed noun “her oil”, the adjective “fat” or the verb “got fat”. Furthermore, the systematic way in which particles are prefixed to one another and onto an open-class category gives rise to a distinct sort of morphological ambiguity: space-delimited tokens may be ambiguous between several different segmentation possibilities. The same form fmnh can be segmented as f-mnh, f (“that”) functioning as a reletivizer with the form mnh. The form mnh itself can be read as at least three different verbs (“counted”, “appointed”, “was appointed”), a noun (“a portion”), and a possessed noun (“her kind”). Such ambiguities cause discrepancies between token boundaries (indexed as white spaces) and constituent boundaries (imposed by syntactic categories) with respect to a surface form. Such discrepancies can be aligned via an intermediate level of PoS tags. PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees. The correct ambiguity resolution of the syntactic level therefore helps to resolve the morphological one, and vice versa. 372 3 Previous Work on Hebrew Processing Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006). Such analyzers propose multiple segmentation possibilities and their corresponding analyses for a token in isolation but have no means to determine the most likely ones. Morphological disambiguators that consider a token in context (an utterance) and propose the most likely morphological analysis of an utterance (including segmentation) were presented by Bar-Haim et al. (2005), Adler and Elhadad (2006), Shacham and Wintner (2007), and achieved good results (the best segmentation result so far is around 98%). The development of the very first Hebrew Treebank (Sima’an et al., 2001) called for the exploration of general statistical parsing methods, but the application was at first limited. Sima’an et al. (2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation. Tsarfaty (2006) was the first to demonstrate that fully automatic Hebrew parsing is feasible using the newly available 5000 sentences treebank. Tsarfaty and Sima’an (2007) have reported state-of-the-art results on Hebrew unlexicalized parsing (74.41%) albeit assuming oracle morphological segmentation. The joint morphological and syntactic hypothesis was first discussed in (Tsarfaty, 2006; Tsarfaty and Sima’an, 2004) and empirically explored in (Tsarfaty, 2006). Tsarfaty (2006) used a morphological analyzer (Segal, 2000), a PoS tagger (Bar-Haim et al., 2005), and a general purpose parser (Schmid, 2000) in an integrated framework in which morphological and syntactic components interact to share information, leading to improved performance on the joint task. Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task. 
Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks. 4 Model Preliminaries 4.1 The Status Space-Delimited Tokens A Hebrew surface token may have several readings, each of which corresponding to a sequence of segments and their corresponding PoS tags. We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags. We refer to a segment and its assigned PoS tag as a lexeme, and so analyses are in fact sequences of lexemes. For brevity we omit the segments from the analysis, and so analysis of the form “fmnh” as f/REL mnh/VB is represented simply as REL VB. Such tag sequences are often treated as “complex tags” (e.g. REL+VB) (cf. (Bar-Haim et al., 2007; Habash and Rambow, 2005)) and probabilities are assigned to different analyses in accordance with the likelihood of their tags (e.g., “fmnh is 30% likely to be tagged NN and 70% likely to be tagged REL+VB”). Here we do not submit to this view. When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”. When the same token is to be interpreted as a single lexeme fmnh, it may function as a single adjective “fat”. There is no relation between these two interpretations other then the fact that their surface forms coincide, and we argue that the only reason to prefer one analysis over the other is compositional. A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be P(REL, VB|fmnh, context) = P(REL|f)P(VB|mnh, REL)P(REL, VB| context) and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models. We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities. Hence, we take the probability of the event fmnh analyzed as REL VB to be P(REL →f|REL) × P(VB →mnh|VB) This means that we generate f and mnh independently depending on their corresponding PoS tags, 373 and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh. 4.2 Lattice Representation We represent all morphological analyses of a given utterance using a lattice structure. Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance. This is by now a fairly standard representation for multiple morphological segmentation of Hebrew utterances (Adler, 2001; Bar-Haim et al., 2005; Smith et al., 2005; Cohen and Smith, 2007; Adler, 2007). Figure 1 depicts the lattice for a 2-words sentence bclm hneim. We use double-circles to indicate the space-delimited token boundaries. Note that in our construction arcs can never cross token boundaries. Every token is independent of the others, and the sentence lattice is in fact a concatenation of smaller lattices, one for each token. Furthermore, some of the arcs represent lexemes not present in the input tokens (e.g. h/DT, fl/POS), however these are parts of valid analyses of the token (cf. super-segmental morphology section 2). 
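As an illustration of this representation (a sketch of ours, not the authors' implementation), a token lattice can be encoded as a set of arcs between integer nodes, with analyses read off as paths between the token-boundary nodes; the example uses the ambiguous token bcl from section 1, and the node numbering is arbitrary.

ARCS = [
    (0, 3, "bcl", "NN"),                    # 'onion'
    (0, 1, "b", "IN"), (1, 3, "cl", "NN"),  # 'in (a) shadow'
    (1, 2, "h", "DT"), (2, 3, "cl", "NN"),  # 'in the shadow': implicit definite article
]

def paths(arcs, start, goal, prefix=()):
    """Enumerate every lexeme sequence (lattice path) from start to goal."""
    if start == goal:
        yield prefix
    for i, j, seg, pos in arcs:
        if i == start:
            yield from paths(arcs, j, goal, prefix + ((seg, pos),))

for analysis in paths(ARCS, 0, 3):
    print(analysis)
# (('bcl', 'NN'),)
# (('b', 'IN'), ('cl', 'NN'))
# (('b', 'IN'), ('h', 'DT'), ('cl', 'NN'))

A sentence lattice is then simply the concatenation of such token lattices, and, as in Figure 1, arcs never cross token boundaries.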
Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7). 0 5 bclm/NNP 1 b/IN 2 bcl/NN 7 hneim/VB 6 h/DT clm/NN clm/VB cl/NN 3 h/DT 4 fl/POS clm/NN hm/PRP neim/VB neim/JJ Figure 1: The Lattice for the Hebrew Phrase bclm hneim A similar structure is used in speech recognition. There, a lattice is used to represent the possible sentences resulting from an interpretation of an acoustic model. In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions. Given that weights on all outgoing arcs sum up to one, weights induce a probability distribution on the lattice paths. In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005) weights are assigned according to a language model based on linear context. In our model, however, all lattice paths are taken to be a-priori equally likely. 5 A Generative PCFG Model The input for the joint task is a sequence W = w1, . . . , wn of space-delimited tokens. Each token may admit multiple analyses, each of which a sequence of one or more lexemes (we use li to denote a lexeme) belonging a presupposed Hebrew lexicon LEX. The entries in such a lexicon may be thought of as meaningful surface segments paired up with their PoS tags li = ⟨si, pi⟩, but note that a surface segment s need not be a space-delimited token. The Input The set of analyses for a token is thus represented as a lattice in which every arc corresponds to a specific lexeme l, as shown in Figure 1. A morphological analyzer M : W →L is a function mapping sentences in Hebrew (W ∈W) to their corresponding lattices (M(W) = L ∈L). We define the lattice L to be the concatenation of the lattices Li corresponding to the input words wi (s.t. M(wi) = Li). Each connected path ⟨l1 . . . lk⟩∈ L corresponds to one morphological segmentation possibility of W. The Parser Given a sequence of input tokens W = w1 . . . wn and a morphological analyzer, we look for the most probable parse tree π s.t. ˆπ = arg max π P(π|W, M) Since the lattice L for a given sentence W is determined by the morphological analyzer M we have ˆπ = arg max π P(π|W, M, L) Hence, our parser searches for a parse tree π over lexemes ⟨l1 . . . lk⟩s.t. li = ⟨si, pi⟩∈LEX, ⟨l1 . . . lk⟩∈L and M(W) = L. So we remain with ˆπ = arg max π P(π|L) which is precisely the formula corresponding to the so-called lattice parsing familiar from speech recognition. Every parse π selects a specific morphological segmentation ⟨l1...lk⟩(a path through the lattice). This is akin to PoS tags sequences induced by different parses in the setup familiar from English and explored in e.g. (Charniak et al., 1996). 374 Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a-priori equally likely; the only reason to prefer one segmentation over the another is due to the overall syntactic context which is modeled via the PCFG derivations. A compatible view is presented by Charniak et al. (1996) who consider the kind of probabilities a generative parser should get from a PoS tagger, and concludes that these should be P(w|t) “and nothing fancier”.3 In our setting, therefore, the Lattice is not used to induce a probability distribution on a linear context, but rather, it is used as a common-denominator of state-indexation of all segmentations possibilities of a surface form. 
This is a unique object for which we are able to define a proper probability model. Thus our proposed model is a proper model assigning probability mass to all ⟨π, L⟩pairs, where π is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alpha-beth gives rise to. X π,L P(π, L) = 1; L uniquely index W The Grammar Our parser looks for the most likely tree spanning a single path through the lattice of which the yield is a sequence of lexemes. This is done using a simple PCFG which is lexemebased. This means that the rules in our grammar are of two kinds: (a) syntactic rules relating nonterminals to a sequence of non-terminals and/or PoS tags, and (b) lexical rules relating PoS tags to lattice arcs (lexemes). The possible analyses of a surface token pose constraints on the analyses of specific segments. In order to pass these constraints onto the parser, the lexical rules in the grammar are of the form pi →⟨si, pi⟩ Parameter Estimation The grammar probabilities are estimated from the corpus using simple relative frequency estimates. Lexical rules are estimated in a similar manner. We smooth Prf(p →⟨s, p⟩) for rare and OOV segments (s ∈l, l ∈L, s unseen) using a “per-tag” probability distribution over rare segments which we estimate using relative frequency estimates for once-occurring segments. 3An English sentence with ambiguous PoS assignment can be trivially represented as a lattice similar to our own, where every pair of consecutive nodes correspond to a word, and every possible PoS assignment for this word is a connecting arc. Handling Unknown tokens When handling unknown tokens in a language such as Hebrew various important aspects have to be borne in mind. Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags. Secondly, some segments in a proposed segment sequence may in fact be seen lexical events, i.e., for some p tag Prf(p →⟨s, p⟩) > 0, while other segments have never been observed as a lexical event before. The latter arcs correspond to OOV words in English. Finally, the assignments of PoS tags to OOV segments is subject to language specific constraints relative to the token it was originated from. Our smoothing procedure takes into account all the aforementioned aspects and works as follows. We first make use of our morphological analyzer to find all segmentation possibilities by chopping off all prefix sequence possibilities (including the empty prefix) and construct a lattice off of them. The remaining arcs are marked OOV. At this stage the lattice path corresponds to segments only, with no PoS assigned to them. In turn we use two sorts of heuristics, orthogonal to one another, to prune segmentation possibilities based on lexical and grammatical constraints. We simulate lexical constraints by using an external lexical resource against which we verify whether OOV segments are in fact valid Hebrew lexemes. This heuristics is used to prune all segmentation possibilities involving “lexically improper” segments. For the remaining arcs, if the segment is in fact a known lexeme it is tagged as usual, but for the OOV arcs which are valid Hebrew entries lacking tags assignment, we assign all possible tags and then simulate a grammatical constraint. Here, all tokeninternal collocations of tags unseen in our training data are pruned away. 
From now on all lattice arcs are tagged segments and the assignment of probability P(p →⟨s, p⟩) to lattice arcs proceeds as usual.4 A rather pathological case is when our lexical heuristics prune away all segmentation possibilities and we remain with an empty lattice. In such cases we use the non-pruned lattice including all (possibly ungrammatical) segmentation, and let the statistics (including OOV) decide. We empirically control for 4Our heuristics may slightly alter P π,L P(π, L) ≈1 375 the effect of our heuristics to make sure our pruning does not undermine the objectives of our joint task. 6 Experimental Setup Previous work on morphological and syntactic disambiguation in Hebrew used different sets of data, different splits, differing annotation schemes, and different evaluation measures. Our experimental setup therefore is designed to serve two goals. Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results. When a comparison against previous results requires additional pre-processing, we state it explicitly to allow for the reader to replicate the reported results. Data We use the Hebrew Treebank, (Sima’an et al., 2001), provided by the knowledge center for processing Hebrew, in which sentences from the daily newspaper “Ha’aretz” are morphologically segmented and syntactically annotated. The treebank has two versions, v1.0 and v2.0, containing 5001 and 6501 sentences respectively. We use v1.0 mainly because previous studies on joint inference reported results w.r.t. v1.0 only.5 We expect that using the same setup on v2.0 will allow a crosstreebank comparison.6 We used the first 500 sentences as our dev set and the rest 4500 for training and report our main results on this split. To facilitate the comparison of our results to those reported by (Cohen and Smith, 2007) we use their data set in which 177 empty and “malformed”7 were removed. The first 3770 trees of the resulting set then were used for training, and the last 418 are used testing. (we ignored the 419 trees in their development set.) Morphological Analyzer Ideally, we would use an of-the-shelf morphological analyzer for mapping each input token to its possible analyses. Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incom5The comparison to performance on version 2.0 is meaningless not only because of the change in size, but also conceptual changes in the annotation scheme 6Unfortunatley running our setup on the v2.0 data set is currently not possible due to missing tokens-morphemes alignment in the v2.0 treebank. 7We thank Shay Cohen for providing us with their data set and evaluation Software. patible with the one of the Hebrew Treebank.8 For this reason, we use a data-driven morphological analyzer derived from the training data similar to (Cohen and Smith, 2007). We construct a mapping from all the space-delimited tokens seen in the training sentences to their corresponding analyses. Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5. We use the HSPELL9 (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments. Models that employ this strategy are denoted hsp. 
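To make the unknown-token treatment of section 5 concrete, the prefix-chopping step that proposes candidate segmentations can be sketched as follows (our illustration; the particle inventory is the one given in section 2, and the linear-precedence and co-occurrence constraints among particles are ignored here for brevity).

PREFIX_PARTICLES = set("mfhwklb")   # the seven prefixed particles of section 2

def prefix_segmentations(token):
    """Chop off every possible prefix sequence (including the empty one); the
    remainder is treated as a candidate, possibly OOV, stem.  Lexical pruning
    against an external word list and tag-collocation pruning would follow."""
    segmentations = []
    for i in range(len(token)):
        prefix, stem = token[:i], token[i:]
        if all(c in PREFIX_PARTICLES for c in prefix):
            segmentations.append(list(prefix) + [stem])
    return segmentations

# prefix_segmentations("bclm") -> [['bclm'], ['b', 'clm']]

Each resulting segment then receives its possible PoS tags (all tags, for OOV segments), and arcs whose token-internal tag collocations are unseen in training are pruned, as described in section 5.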
To control for the effect of the HSPELL-based pruning, we also experimented with a morphological analyzer that does not perform this pruning. For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists. This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp, Parser and Grammar We used BitPar (Schmid, 2004), an efficient general purpose parser,10 together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis. We experimented with increasingly rich grammars read off of the treebank. Our first model is GTplain, a PCFG learned from the treebank after removing all functional features from the syntactic categories. In our second model GTvpi we also distinguished finite and non-finite verbs and VPs as 8Mapping between the two schemes involves nondeterministic many-to-many mappings, and in some cases require a change in the syntactic trees. 9An open-source Hebrew spell-checker. 10Lattice parsing can be performed by special initialization of the chart in a CKY parser (Chappelier et al., 1999). We currently simulate this by crafting a WCFG and feeding it to BitPar. Given a PCFG grammar G and a lattice L with nodes n1 . . . nk, we construct the weighted grammar GL as follows: for every arc (lexeme) l ∈L from node ni to node nj, we add to GL the rule [l →tni, tni+1, . . . , tnj−1] with a probability of 1 (this indicates the lexeme l spans from node ni to node nj). GL is then used to parse the string tn1 . . . tnk−1, where tni is a terminal corresponding to the lattice span between node ni and ni+1. Removing the leaves from the resulting tree yields a parse for L under G, with the desired probabilities. We use a patched version of BitPar allowing for direct input of probabilities instead of counts. We thank Felix Hageloh (Hageloh, 2006) for providing us with this version. 376 proposed in (Tsarfaty, 2006). In our third model GTppp we also add the distinction between general PPs and possessive PPs following Goldberg and Elhadad (2007). In our forth model GTnph we add the definiteness status of constituents following Tsarfaty and Sima’an (2007). Finally, model GTv = 2 includes parent annotation on top of the various state-splits, as is done also in (Tsarfaty and Sima’an, 2007; Cohen and Smith, 2007). For all grammars, we use fine-grained PoS tags indicating various morphological features annotated therein. Evaluation We use 8 different measures to evaluate the performance of our system on the joint disambiguation task. To evaluate the performance on the segmentation task, we report SEG, the standard harmonic means for segmentation Precision and Recall F1 (as defined in Bar-Haim et al. (2005); Tsarfaty (2006)) as well as the segmentation accuracy SEGTok measure indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)). SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h.11 To evaluate our performance on the tagging task we report CPOS and FPOS corresponding to coarse- and fine-grained PoS tagging results (F1) measure. Evaluating parsing results in our joint framework, as argued by Tsarfaty (2006), is not trivial under the joint disambiguation task, as the hypothesized yield need not coincide with the correct one. 
Our parsing performance measures (SY N) thus report the PARSEVAL extension proposed in Tsarfaty (2006). We further report SY N CS, the parsing metric of Cohen and Smith (2007), to facilitate the comparison. We report the F1 value of both measures. Finally, our U (unparsed) measure is used to report the number of sentences to which our system could not propose a joint analysis. 7 Results and Analysis The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1. In addition we report for each model its performance on goldsegmented input (GS) to indicate the upper bound 11Overt definiteness errors may be seen as a wrong feature rather than as wrong constituent and it is by now an accepted standard to report accuracy with and without such errors. for the grammars’ performance on the parsing task. The table makes clear that enriching our grammar improves the syntactic performance as well as morphological disambiguation (segmentation and POS tagging) accuracy. This supports our main thesis that decisions taken by single, improved, grammar are beneficial for both tasks. When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well. Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens. Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks. Model SEGTok CPOS FPOS SY N CS GTnohsp/pln 89.50 81.00 77.65 62.22 GTnohsp/···+nph 89.58 81.26 77.82 64.30 CSpln 91.10 80.40 75.60 64.00 CSv=2 90.90 80.50 75.40 64.40 GThsp/pln 93.13 83.12 79.12 64.46 GTnohsp/···+v=2 89.66 82.85 78.92 66.31 Oracle CSpln 91.80 83.20 79.10 66.50 Oracle CSv=2 91.70 83.00 78.70 67.40 GThsp/···+v=2 93.38 85.08 80.11 69.11 Table 2: Segmentation, Parsing and Tagging Results using the Setup of (Cohen and Smith, 2007) (sentence length ≤40). The Models’ are Ordered by Performance. We first note that the accuracy results of our system are overall higher on their setup, on all measures, indicating that theirs may be an easier dataset. Secondly, for all our models we provide better fine- and coarse-grained POS-tagging accuracy, and all pruned models outperform the Oracle results reported by them.12 In terms of syntactic disambiguation, even the simplest grammar pruned with HSPELL outperforms their non-Oracle results. Without HSPELL-pruning, our simpler grammars are somewhat lagging behind, but as the grammars improve the gap is bridged. The addition of vertical markovization enables non-pruned models to outperform all previously reported re12Cohen and Smith (2007) make use of a parameter (α) which is tuned separately for each of the tasks. This essentially means that their model does not result in a true joint inference, as executions for different tasks involve tuning a parameter separately. In our model there are no such hyper-parameters, and the performance is the result of truly joint disambiguation. 
Model            U    SEGTok / no H   SEG    CPOS   FPOS   SYN / SYN_CS    GS SYN
GTnohsp/pln      7    89.77 / 93.18   91.80  80.36  76.77  60.41 / 61.66   65.00
···+vpi          7    89.80 / 93.18   91.84  80.37  76.74  61.16 / 62.41   66.70
···+ppp          7    89.79 / 93.20   91.86  80.43  76.79  61.47 / 62.86   67.22
···+nph          7    89.78 / 93.20   91.86  80.43  76.87  61.85 / 63.06   68.23
···+v=2          9    89.12 / 92.45   91.77  82.02  77.86  64.53 / 66.02   70.82
GThsp/pln       11    92.00 / 94.81   94.52  82.35  78.11  62.10 / 64.17   65.00
···+vpi         11    92.03 / 94.82   94.58  82.39  78.23  63.00 / 65.06   66.70
···+ppp         11    92.02 / 94.85   94.58  82.48  78.33  63.26 / 65.42   67.22
···+nph         11    92.14 / 94.91   94.73  82.58  78.47  63.98 / 65.98   68.23
···+v=2         13    91.42 / 94.10   94.67  84.23  79.25  66.60 / 68.79   70.82
Table 1: Segmentation, tagging and parsing results on the standard dev/train split, for all sentences.
Furthermore, the combination of pruning and vertical markovization of the grammar outperforms the Oracle results reported by Cohen and Smith. This essentially means that a better grammar tunes the joint model for optimized syntactic disambiguation at least in as much as their hyper-parameters do.
An interesting observation is that while vertical markovization benefits all our models, its effect is less evident in Cohen and Smith. On the surface, our model may seem as a special case of Cohen and Smith in which α = 0. However, there is a crucial difference: the morphological probabilities in their model come from discriminative models based on linear context. Many morphological decisions are based on long-distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG takes also local context into account. In addition, as the CRF and PCFG look at similar sorts of information from within two inherently different models, they are far from independent and optimizing their product is meaningless. Cohen and Smith approach this by introducing the α hyperparameter, which performs best when optimized independently for each sentence (cf. Oracle results). In contrast, our morphological probabilities are based on a unigram, lexeme-based model, and all other (local and non-local) contextual considerations are delegated to the PCFG. This fully generative model caters for real interaction between the syntactic and morphological levels as a part of a single coherent process.
8 Discussion and Conclusion
Employing a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions is not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications results. Better grammars are shown here to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. We conjecture that this trend may continue by incorporating additional information, e.g., three-dimensional models as proposed by Tsarfaty and Sima'an (2007). In the current work morphological analyses and lexical probabilities are derived from a small Treebank, which is by no means the best way to go.
Using a wide-coverage morphological analyzer based on (Itai et al., 2006) should cater for a better coverage, and incorporating lexical probabilities learned from a big (unannotated) corpus (cf. Levinger et al., 1995; Goldberg et al., 2008; Adler et al., 2008) will make the parser more robust and suitable for use in more realistic scenarios.
Acknowledgments We thank Meni Adler and Michael Elhadad (BGU) for helpful comments and discussion. We further thank Khalil Sima'an (ILLC-UvA) for his careful advice concerning the formal details of the proposal. The work of the first author was supported by the Lynn and William Frankel Center for Computer Sciences. The work of the second author as well as collaboration visits to Israel was financed by NWO, grant number 017.001.271.
References
Meni Adler and Michael Elhadad. 2006. An Unsupervised Morpheme-Based HMM for Hebrew Morphological Disambiguation. In Proceedings of COLING-ACL-06, Sydney, Australia.
Meni Adler, Yoav Goldberg, David Gabay, and Michael Elhadad. 2008. Unsupervised Lexicon-Based Resolution of Unknown Words for Full Morphological Analysis. In Proceedings of ACL-08.
Meni Adler. 2001. Hidden Markov Model for Hebrew Part-of-Speech Tagging. Master's thesis, Ben-Gurion University of the Negev.
Meni Adler. 2007. Hebrew Morphological Disambiguation: An Unsupervised Stochastic Word-based Approach. Ph.D. thesis, Ben-Gurion University of the Negev, Beer-Sheva, Israel.
Roy Bar-Haim, Khalil Sima'an, and Yoad Winter. 2005. Choosing an optimal architecture for segmentation and POS tagging of Modern Hebrew. In Proceedings of the ACL-05 Workshop on Computational Approaches to Semitic Languages.
Roy Bar-Haim, Khalil Sima'an, and Yoad Winter. 2007. Part-of-speech tagging of Modern Hebrew text. Natural Language Engineering, 14(02):223–251.
J. Chappelier, M. Rajman, R. Aragues, and A. Rozenknop. 1999. Lattice Parsing for Speech Recognition.
Eugene Charniak, Glenn Carroll, John Adcock, Anthony R. Cassandra, Yoshihiko Gotoh, Jeremy Katz, Michael L. Littman, and John McCann. 1996. Taggers for Parsers. AI, 85(1-2):45–57.
David Chiang, Mona Diab, Nizar Habash, Owen Rambow, and Safiullah Shareef. 2006. Parsing Arabic Dialects. In Proceedings of EACL-06.
Shay B. Cohen and Noah A. Smith. 2007. Joint morphological and syntactic disambiguation. In Proceedings of EMNLP-CoNLL-07, pages 208–217.
Lewis Glinert. 1989. The Grammar of Modern Hebrew. Cambridge University Press.
Yoav Goldberg and Michael Elhadad. 2007. SVM Model Tampering and Anchored Learning: A Case Study in Hebrew NP Chunking. In Proceedings of ACL-07, Prague, Czech Republic.
Yoav Goldberg, Meni Adler, and Michael Elhadad. 2008. EM Can Find Pretty Good HMM POS-Taggers (When Given a Good Start). In Proceedings of ACL-08.
Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proceedings of ACL-05.
Felix Hageloh. 2006. Parsing Using Transforms over Treebanks. Master's thesis, University of Amsterdam.
Nadav Har'el and Dan Kenigsberg. 2004. HSpell - the free Hebrew Spell Checker and Morphological Analyzer. Israeli Seminar on Computational Linguistics.
Alon Itai, Shuly Wintner, and Shlomo Yona. 2006. A Computational Lexicon of Contemporary Hebrew. In Proceedings of LREC-06.
Moshe Levinger, Uzi Ornan, and Alon Itai. 1995. Learning Morpholexical Probabilities from an Untagged Corpus with an Application to Hebrew. Computational Linguistics, 21:383–404.
Helmut Schmid, 2000. LoPar: Design and Implementation.
Institute for Computational Linguistics, University of Stuttgart. Helmut Schmid. 2004. Efficient Parsing of Highly Ambiguous Context-Free Grammars with Bit Vector. In Proceedings of COLING-04. Erel Segal. 2000. Hebrew Morphological Analyzer for Hebrew Undotted Texts. Master’s thesis, Technion, Haifa, Israel. Danny Shacham and Shuly Wintner. 2007. Morphological Disambiguation of Hebrew: A Case Study in Classifier Combination. In Proceedings of EMNLPCoNLL-07, pages 439–447. Khalil Sima’an, Alon Itai, Yoad Winter, Alon Altman, and Noa Nativ. 2001. Building a Tree-Bank for Modern Hebrew Text. In Traitement Automatique des Langues, volume 42. Noah A. Smith, David A. Smith, and Roy W. Tromble. 2005. Context-based morphological disambiguation with random fields. In Proceedings of HLT-05, pages 475–482, Morristown, NJ, USA. Association for Computational Linguistics. Reut Tsarfaty and Yoav Goldberg. 2008. Word-Based or Morpheme-Based? Annotation Strategies for Modern Hebrew Clitics. In Proceedings of LREC-08. Reut Tsarfaty and Khalil Sima’an. 2004. An Integrated Model for Morphological and Syntactic Disambiguation in Modern Hebrew. MOZAIEK detailed proposal, NWO Mozaiek scheme. Reut Tsarfaty and Khalil Sima’an. 2007. ThreeDimensional Parametrization for Parsing Morphologically Rich Languages. In Proceedings of IWPT-07. Reut Tsarfaty. 2006. Integrated Morphological and Syntactic Disambiguation for Modern Hebrew. In Proceedings of ACL-SRW-06. Shlomo Yona and Shuly Wintner. 2005. A Finitestate Morphological Grammar of Hebrew. In Proceedings of the ACL-05 Workshop on Computational Approaches to Semitic Languages. 379
Proceedings of ACL-08: HLT, pages 380–388, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Which words are hard to recognize? Prosodic, lexical, and disfluency factors that increase ASR error rates Sharon Goldwater, Dan Jurafsky and Christopher D. Manning Department of Linguistics and Computer Science Stanford University {sgwater,jurafsky,manning}@stanford.edu Abstract Many factors are thought to increase the chances of misrecognizing a word in ASR, including low frequency, nearby disfluencies, short duration, and being at the start of a turn. However, few of these factors have been formally examined. This paper analyzes a variety of lexical, prosodic, and disfluency factors to determine which are likely to increase ASR error rates. Findings include the following. (1) For disfluencies, effects depend on the type of disfluency: errors increase by up to 15% (absolute) for words near fragments, but decrease by up to 7.2% (absolute) for words near repetitions. This decrease seems to be due to longer word duration. (2) For prosodic features, there are more errors for words with extreme values than words with typical values. (3) Although our results are based on output from a system with speaker adaptation, speaker differences are a major factor influencing error rates, and the effects of features such as frequency, pitch, and intensity may vary between speakers. 1 Introduction In order to improve the performance of automatic speech recognition (ASR) systems on conversational speech, it is important to understand the factors that cause problems in recognizing words. Previous work on recognition of spontaneous monologues and dialogues has shown that infrequent words are more likely to be misrecognized (Fosler-Lussier and Morgan, 1999; Shinozaki and Furui, 2001) and that fast speech increases error rates (Siegler and Stern, 1995; Fosler-Lussier and Morgan, 1999; Shinozaki and Furui, 2001). Siegler and Stern (1995) and Shinozaki and Furui (2001) also found higher error rates in very slow speech. Word length (in phones) has also been found to be a useful predictor of higher error rates (Shinozaki and Furui, 2001). In Hirschberg et al.’s (2004) analysis of two human-computer dialogue systems, misrecognized turns were found to have (on average) higher maximum pitch and energy than correctly recognized turns. Results for speech rate were ambiguous: faster utterances had higher error rates in one corpus, but lower error rates in the other. Finally, AddaDecker and Lamel (2005) demonstrated that both French and English ASR systems had more trouble with male speakers than female speakers, and found several possible explanations, including higher rates of disfluencies and more reduction. Many questions are left unanswered by these previous studies. In the word-level analyses of FoslerLussier and Morgan (1999) and Shinozaki and Furui (2001), only substitution and deletion errors were considered, so we do not know how including insertions might affect the results. Moreover, these studies primarily analyzed lexical, rather than prosodic, factors. Hirschberg et al.’s (2004) work suggests that prosodic factors can impact error rates, but leaves open the question of which factors are important at the word level and how they influence recognition of natural conversational speech. Adda-Decker and Lamel’s (2005) suggestion that higher rates of disfluency are a cause of worse recognition for male speakers presupposes that disfluencies raise error rates. 
While this assumption seems natural, it has yet to be carefully tested, and in particular we do not know whether disfluent words are associated with errors in adjacent words, or are simply more likely to be misrecognized themselves. Other factors that are often thought to affect a word's recognition, such as its status as a content or function word, and whether it starts a turn, also remain unexamined. The present study is designed to address all of these questions by analyzing the effects of a wide range of lexical and prosodic factors on the accuracy of an English ASR system for conversational telephone speech. In the remainder of this paper, we first describe the data set used in our study and introduce a new measure of error, individual word error rate (IWER), that allows us to include insertion errors in our analysis, along with deletions and substitutions. Next, we present the features we collected for each word and the effects of those features individually on IWER. Finally, we develop a joint statistical model to examine the effects of each feature while controlling for possible correlations.
2 Data
For our analysis, we used the output from the SRI/ICSI/UW RT-04 CTS system (Stolcke et al., 2006) on the NIST RT-03 development set. This system's performance was state-of-the-art at the time of the 2004 evaluation. The data set contains 36 telephone conversations (72 speakers, 38477 reference words), half from the Fisher corpus and half from the Switchboard corpus.1 The standard measure of error used in ASR is word error rate (WER), computed as 100(I + D + S)/R, where I, D and S are the number of insertions, deletions, and substitutions found by aligning the ASR hypotheses with the reference transcriptions, and R is the number of reference words. Since we wish to know what features of a reference word increase the probability of an error, we need a way to measure the errors attributable to individual words — an individual word error rate (IWER). We assume that a substitution or deletion error can be assigned to its corresponding reference word, but for insertion errors, there may be two adjacent reference words that could be responsible. Our solution is to assign any insertion errors to each of the adjacent words. We could then define IWER as 100(n_i + n_d + n_s)/R, where n_i, n_d, and n_s are the insertion, deletion, and substitution counts for individual words (with n_d = D and n_s = S). In general, however, n_i > I, so that the IWER for a given data set would be larger than the WER. To facilitate comparisons with standard WER, we therefore discount insertions by a factor α, such that α·n_i = I. In this study, α = .617.
Footnote 1: These conversations are not part of the standard Fisher and Switchboard corpora used to train most ASR systems.
              Ins   Del    Sub    Total   % data
Full word     1.6   6.9    10.5   19.0    94.2
Filled pause  0.6   –      16.4   17.0    2.8
Fragment      2.3   –      17.3   19.6    2.0
Backchannel   0.3   30.7   5.0    36.0    0.6
Guess         1.6   –      30.6   32.1    0.4
Total         1.6   6.7    10.9   19.7    100
Table 1: Individual word error rates for different word types, and the proportion of words belonging to each type. Deletions of filled pauses, fragments, and guesses are not counted as errors in the standard scoring method.
3 Analysis of individual features
3.1 Features
The reference transcriptions used in our analysis distinguish between five different types of words: filled pauses (um, uh), fragments (wh-, redistr-), backchannels (uh-huh, mm-hm), guesses (where the transcribers were unsure of the correct words), and full words (everything else).
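As a hedged illustration of the IWER bookkeeping defined in Section 2 (not the scoring script actually used), the per-word error counts and the insertion discount α can be computed roughly as follows; the alignment representation is an assumption made for the sketch.

```python
# Sketch of the IWER bookkeeping described above: substitutions and deletions are
# charged to their reference word, each insertion is charged to BOTH adjacent
# reference words, and insertion charges are discounted by alpha so that the
# discounted total matches the plain insertion count I.

def iwer(alignment, num_ref_words):
    """alignment: list of (op, ref_index); op in {'cor', 'sub', 'del', 'ins'};
    for 'ins', ref_index is the reference position the insertion precedes."""
    n_i = [0.0] * num_ref_words
    n_d = [0.0] * num_ref_words
    n_s = [0.0] * num_ref_words
    insertions = 0
    for op, r in alignment:
        if op == 'ins':
            insertions += 1
            for adj in (r - 1, r):                     # both neighbours share the blame
                if 0 <= adj < num_ref_words:
                    n_i[adj] += 1
        elif op == 'del':
            n_d[r] += 1
        elif op == 'sub':
            n_s[r] += 1
    alpha = insertions / sum(n_i) if sum(n_i) else 0.0  # 0.617 on the paper's data
    per_word = [alpha * i + d + s for i, d, s in zip(n_i, n_d, n_s)]
    return per_word, 100.0 * sum(per_word) / num_ref_words   # word scores, corpus IWER
```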
Error rates for each of these types can be found in Table 1. The remainder of our analysis considers only the 36159 in-vocabulary full words in the reference transcriptions (70 OOV full words are excluded). We collected the following features for these words:
Speaker sex Male or female.
Broad syntactic class Open class (e.g., nouns and verbs), closed class (e.g., prepositions and articles), or discourse marker (e.g., okay, well). Classes were identified using a POS tagger (Ratnaparkhi, 1996) trained on the tagged Switchboard corpus.
Log probability The unigram log probability of each word, as listed in the system's language model.
Word length The length of each word (in phones), determined using the most frequent pronunciation found for that word in the recognition lattices.
Position near disfluency A collection of features indicating whether a word occurred before or after a filled pause, fragment, or repeated word; or whether the word itself was the first, last, or other word in a sequence of repetitions. Figure 1 illustrates. Only identical repeated words with no intervening words or filled pauses were considered repetitions.
Figure 1: Example illustrating disfluency features: words occurring before and after repetitions, filled pauses, and fragments; first, middle, and last words in a repeated sequence.
  Labels:  BefRep FirRep MidRep LastRep AfRep BefFP AfFP BefFr AfFr
  Example: yeah i i i think you should um ask for the ref- recommendation
First word of turn Turn boundaries were assigned automatically at the beginning of any utterance following a pause of at least 100 ms during which the other speaker spoke.
Speech rate The average speech rate (in phones per second) was computed for each utterance using the pronunciation dictionary extracted from the lattices and the utterance boundary timestamps in the reference transcriptions.
In addition to the above features, we used Praat (Boersma and Weenink, 2007) to collect the following additional prosodic features on a subset of the data obtained by excluding all contractions:2
Pitch The minimum, maximum, mean, and range of pitch for each word.
Intensity The minimum, maximum, mean, and range of intensity for each word.
Duration The duration of each word.
31017 words (85.8% of the full-word data set) remain in the no-contractions data set after removing words for which pitch and/or intensity features could not be extracted.
Footnote 2: Contractions were excluded before collecting prosodic features for the following reason. In the reference transcriptions and alignments used for scoring ASR systems, contractions are treated as two separate words. However, aside from speech rate, our prosodic features were collected using word-by-word timestamps from a forced alignment that used a transcription where contractions are treated as single words. Thus, the start and end times for a contraction in the forced alignment correspond to two words in the alignments used for scoring, and it is not clear how to assign prosodic features appropriately to those words.
3.2 Results and discussion
Results of our analysis of individual features can be found in Table 2 (for categorical features) and Figure 2 (for numeric features). Comparing the error rates for the full-word and the no-contractions data sets in Table 2 verifies that removing contractions does not create systematic changes in the patterns of errors, although it does lower error rates (and significance values) slightly overall.
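The significance values mentioned here are assessed (per the caption of Table 2 below) with a 10,000-sample Monte Carlo permutation test. A minimal sketch of such a test, assuming per-word error scores and binary feature labels are available, might look as follows; this is an illustration, not the authors' script.

```python
import random

# Sketch of a Monte Carlo permutation test for the IWER difference between words
# that do and do not carry a feature (e.g. "before fragment").  Labels are
# shuffled repeatedly and the observed gap is compared against the shuffled gaps.

def permutation_test(errors, has_feature, n_samples=10000, seed=0):
    def gap(labels):
        with_f = [e for e, f in zip(errors, labels) if f]
        without = [e for e, f in zip(errors, labels) if not f]
        return abs(sum(with_f) / len(with_f) - sum(without) / len(without))

    observed = gap(has_feature)
    labels = list(has_feature)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        rng.shuffle(labels)
        if gap(labels) >= observed:
            hits += 1
    return (hits + 1) / (n_samples + 1)   # estimated p-value
```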
(First and middle repetitions are combined as non-final repetitions in the table, because only 52 words were middle repetitions, and their error rates were similar to initial repetitions.)
                 Filled Pau.   Fragment     Repetition              Syntactic Class        Sex
                 Bef    Aft    Bef   Aft    Bef   Aft   NonF  Fin   Clos  Open  Disc  1st  M     F     All
(a) IWER         17.6   16.9   33.8  21.6   16.7  13.8  26.0  11.6  19.7  18.0  19.6  21.2 20.6  17.0  18.8
    % wds        1.7    1.7    1.6   1.5    0.7   0.9   1.2   1.1   43.8  50.5  5.8   6.2  52.5  47.5  100
(b) IWER         17.6   17.2   32.0  21.5   15.8  14.2  25.1  11.6  18.8  17.8  19.0  20.3 20.0  16.4  18.3
    % wds        1.9    1.8    1.6   1.5    0.8   0.8   1.4   1.1   43.9  49.6  6.6   6.4  52.2  47.8  100
Table 2: IWER by feature and percentage of words exhibiting each feature for (a) the full-word data set and (b) the no-contractions data set. Error rates that are significantly different for words with and without a given feature (computed using 10,000 samples in a Monte Carlo permutation test) are in bold (p < .05) or bold italics (p < .005). Features shown are whether a word occurs before or after a filled pause, fragment, or repetition; is a non-final or final repetition; is open class, closed class, or a discourse marker; is the first word of a turn; or is spoken by a male or female. All is the IWER for the entire data set. (Overall IWER is slightly lower than in Table 1 due to the removal of OOV words.)
3.2.1 Disfluency features
Perhaps the most interesting result in Table 2 is that the effects of disfluencies are highly variable depending on the type of disfluency and the position of a word relative to it. Non-final repetitions and words next to fragments have an IWER up to 15% (absolute) higher than the average word, while final repetitions and words following repetitions have an IWER up to 7.2% lower. Words occurring before repetitions or next to filled pauses do not have significantly different error rates than words not in those positions. Our results for repetitions support Shriberg's (1995) hypothesis that the final word of a repeated sequence is in fact fluent.
3.2.2 Other categorical features
Our results support the common wisdom that open class words have lower error rates than other words (although the effect we find is small), and that words at the start of a turn have higher error rates. Also, like Adda-Decker and Lamel (2005), we find that male speakers have higher error rates than females, though in our data set the difference is more striking (3.6% absolute, compared to their 2.0%).
3.2.3 Word probability and word length
Turning to Figure 2, we find (consistent with previous results) that low-probability words have dramatically higher error rates than high-probability words. More surprising is that word length in phones does not seem to have a consistent effect on IWER. Further analysis reveals a possible explanation: word length is correlated with duration, but anti-correlated to the same degree with log probability (the Kendall τ statistics are .50 and -.49). Figure 2 shows that words with longer duration have lower IWER. Since words with more phones tend to have longer duration, but lower frequency, there is no overall effect of length.
3.2.4 Prosodic features
Figure 2 shows that means of pitch and intensity have relatively little effect except at extreme values, where more errors occur.
In contrast, pitch and intensity range show clear linear trends, with greater range of pitch or intensity leading to lower IWER.3 As noted above, decreased duration is associated with increased IWER, and (as in previous work), we find that IWER increases dramatically for fast speech. We also see a tendency towards higher IWER for very slow speech, consistent with Shinozaki and Furui (2001) and Siegler and Stern (1995). The effects of pitch minimum and maximum are not shown for reasons of space, but are similar to pitch mean. Also not shown are intensity minimum (with more errors at higher values) and intensity maximum (with more errors at lower values). For most of our prosodic features, as well as log probability, extreme values seem to be associated with worse recognition than average values. We explore this possibility further in Section 4.
Footnote 3: Our decision to use the log transform of pitch range was originally based on the distribution of pitch range values in the data set. Exploratory data analysis also indicated that using the transformed values would likely lead to a better model fit (Section 4) than using the raw values.
Figure 2: Effects of numeric features on IWER of the SRI system for the no-contractions data set. All feature values were binned, and the average IWER for each bin is plotted, with the area of the surrounding circle proportional to the number of points in the bin. Dotted lines show the average IWER over the entire data set. (The panels plot IWER against word length in phones, pitch mean in Hz, intensity mean in dB, duration in seconds, log probability, log pitch range, intensity range in dB, and speech rate in phones per second; only the caption and panel variables are recoverable from the extracted text.)
4 Analysis using a joint model
In the previous section, we investigated the effects of various individual features on ASR error rates. However, there are many correlations between these features – for example, words with longer duration are likely to have a larger range of pitch and intensity. In this section, we build a single model with all of our features as potential predictors in order to determine the effects of each feature after controlling for the others. We use the no-contractions data set so that we can include prosodic features in our model. Since only 1% of tokens have an IWER > 1, we simplify modeling by predicting only whether each token is responsible for an error or not. That is, our dependent variable is binary, taking on the value 1 if IWER > 0 for a given token and 0 otherwise.
4.1 Model
To model data with a binary dependent variable, a logistic regression model is an appropriate choice. In logistic regression, we model the log odds as a linear combination of feature values x0 . . . xn:
log(p / (1 − p)) = β0x0 + β1x1 + . . . + βnxn
where p is the probability that the outcome occurs (here, that a word is misrecognized) and β0 . . . βn are coefficients (feature weights) to be estimated. Standard logistic regression models assume that all categorical features are fixed effects, meaning that all possible values for these features are known in advance, and each value may have an arbitrarily different effect on the outcome. However, features such as speaker identity do not fit this pattern. Instead, we control for speaker differences by assuming that speaker identity is a random effect, meaning that the speakers observed in the data are a random sample from a larger population.
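To make this model form concrete, a minimal numpy sketch of a logistic model with fixed-effect features plus a per-speaker random intercept is given below. It only illustrates the structure assumed here (the actual estimation, described next, is done with the lme4 package in R); the design matrix and parameter values are placeholders, not estimates from the paper.

```python
import numpy as np

# Structure sketch: log odds = fixed effects + a per-speaker random intercept
# drawn from N(0, sigma^2).  All values below are placeholders.

def p_error(x, beta, speaker_ids, speaker_intercepts):
    """x: (n_words, n_features) design matrix (0-1 rescaled features and their
    squares for the quadratic terms); beta: fixed-effect weights."""
    log_odds = x @ beta + speaker_intercepts[speaker_ids]
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(0)
x = rng.random((6, 3))
beta = np.array([0.4, -1.2, 2.0])
speaker_intercepts = rng.normal(0.0, 0.8, size=2)   # sigma is estimated in the real model
speakers = np.array([0, 0, 0, 1, 1, 1])
print(p_error(x, beta, speakers, speaker_intercepts))
```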
The baseline probability of error for each speaker is therefore assumed to be a normally distributed random variable, with mean equal to the population mean, and variance to be estimated by the model. Stated differently, a random effect allows us to add a factor to the model for speaker identity, without allowing arbitrary variation in error rates between speakers. Models such as ours, with both fixed and random effects, are known as mixed-effects models, and are becoming a standard method for analyzing linguistic data (Baayen, 2008). We fit our models using the lme4 package (Bates, 2007) of R (R Development Core Team, 2007). To analyze the joint effects of all of our features, we initially built as large a model as possible, and used backwards elimination to remove features one at a time whose presence did not contribute significantly (at p ≤ .05) to model fit. All of the features shown in Table 2 were converted to binary variables and included as predictors in our initial model, along with a binary feature controlling for corpus (Fisher or Switchboard), and all numeric features in Figure 2. We did not include minimum and maximum values for pitch and intensity because they are highly correlated with the mean values, making parameter estimation in the combined model difficult. Preliminary investigation indicated that using the mean values would lead to the best overall fit to the data. In addition to these basic fixed effects, our initial model included quadratic terms for all of the numeric features, as suggested by our analysis in Section 3, as well as random effects for speaker identity and word identity. All numeric features were rescaled to values between 0 and 1 so that coefficients are comparable.
4.2 Results and discussion
Figure 3 shows the estimated coefficients and standard errors for each of the fixed effect categorical features remaining in the reduced model (i.e., after backwards elimination). Since all of the features are binary, a coefficient of β indicates that the corresponding feature, when present, adds a weight of β to the log odds (i.e., multiplies the odds of an error by a factor of e^β). Thus, features with positive coefficients increase the odds of an error, and features with negative coefficients decrease the odds of an error. The magnitude of the coefficient corresponds to the size of the effect. Interpreting the coefficients for our numeric features is less intuitive, since most of these variables have both linear and quadratic effects. The contribution to the log odds of a particular numeric feature xi, with linear and quadratic coefficients a and b, is a·xi + b·xi². We plot these curves for each numeric feature in Figure 4. Values on the x axes with positive y values indicate increased odds of an error, and negative y values indicate decreased odds of an error. The x axes in these plots reflect the rescaled values of each feature, so that 0 corresponds to the minimum value in the data set, and 1 to the maximum value.
Figure 3: Estimates and standard errors of the coefficients for the categorical predictors in the reduced model. (The predictors shown are corpus=SW, sex=M, starts turn, before FP, after FP, before frag, after frag, non-final rep, and open class; only the caption and predictor labels are recoverable from the extracted text.)
4.2.1 Disfluencies
In our analysis of individual features, we found that different types of disfluencies have different effects: non-final repeated words and words near fragments have higher error rates, while final repetitions and words following repetitions have lower error rates.
After controlling for other factors, a different picture emerges. There is no longer an effect for final repetitions or words after repetitions; all other disfluency features increase the odds of an error by a factor of 1.3 to 2.9. These differences from Section 3 can be explained by noting that words near filled pauses and repetitions have longer durations than other words (Bell et al., 2003). Longer duration lowers IWER, so controlling for duration reveals the negative effect of the nearby disfluencies. Our results are also consistent with Shriberg’s (1995) findings on fluency in repeated words, since final repetitions have no significant effect in our combined model, while non-final repetitions incur a penalty. 4.2.2 Other categorical features Without controlling for other lexical or prosodic features, we found that a word is more likely to be misrecognized at the beginning of a turn, and less likely to be misrecognized if it is an open class word. According to our joint model, these effects still hold even after controlling for other features. Similarly, male speakers still have higher error rates than females. This last result sheds some light on the work of Adda-Decker and Lamel (2005), who suggested several factors that could explain males’ higher error rates. In particular, they showed that males have higher rates of disfluency, produce words with slightly shorter durations, and use more alternate (“sloppy”) pronunciations. Our joint model controls for the first two of these factors, suggesting that the third factor or some other explanation must account for the remaining differences between males and females. One possibility is that female speech is more easily recognized because females tend to have expanded vowel spaces (Diehl et al., 1996), a factor that is associated with greater intelligibility (Bradlow et al., 1996) and is characteristic of genres with lower ASR error rates (Nakamura et al., 2008). 4.2.3 Prosodic features Examining the effects of pitch and intensity individually, we found that increased range for these features is associated with lower IWER, while higher pitch and extremes of intensity are associated with higher IWER. In the joint model, we see the same effect of pitch mean and an even stronger effect for intensity, with the predicted odds of an error dramatically higher for extreme intensity values. Meanwhile, we no longer see a benefit for increased pitch range and intensity; rather, we see small quadratic effects for both features, i.e. words with average ranges of pitch and intensity are recognized more easily than words with extreme values for these features. As with disfluencies, we hypothesize that the linear trends observed in Section 3 are primarily due to effects of duration, since duration is moderately correlated with both log pitch range (τ = .35) and intensity range (τ = .41). Our final two prosodic features, duration and speech rate, showed strong linear and weak quadratic trends when analyzed individually. According to our model, both duration and speech rate are still important predictors of error after controlling for other features. However, as with the other prosodic features, predictions of the joint model are dominated by quadratic trends, i.e., predicted error rates are lower for average values of duration and speech rate than for extreme values. 
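How such fitted coefficients translate into odds can be illustrated with a small sketch; the numeric values below are placeholders rather than estimates taken from the paper's model.

```python
import math

# A binary feature with weight beta multiplies the odds of an error by exp(beta);
# a rescaled numeric feature x with linear weight a and quadratic weight b
# contributes a*x + b*x**2 to the log odds.  Placeholder values only.

def odds_multiplier(beta):
    return math.exp(beta)

def numeric_contribution(a, b, x):
    """x is the 0-1 rescaled feature value."""
    return a * x + b * x ** 2

print(odds_multiplier(0.8))                           # hypothetical binary feature
for x in (0.0, 0.5, 1.0):                             # min / mid / max of the rescaled range
    print(x, numeric_contribution(-12.0, 14.0, x))    # U-shaped: extremes worse than mid-range
```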
Figure 4: Predicted effect on the log odds of each numeric feature, including linear and (if applicable) quadratic terms. (Only the caption and the fitted curves are recoverable from the extracted text; the per-feature curves are: word length y = −0.8x; pitch mean y = 1x; intensity mean y = −13.2x + 11.5x²; duration y = −12.6x + 14.6x²; log probability y = −0.6x + 4.1x²; log(pitch range) y = −2.3x + 2.2x²; intensity range y = −1x + 1.2x²; speech rate y = −3.9x + 4.4x².)
Overall, the results from our joint analysis suggest that, after controlling for other factors, extreme values for prosodic features are associated with worse recognition than typical values.
4.2.4 Differences between lexical items
As discussed above, our model contains a random effect for word identity, to control for the possibility that certain lexical items have higher error rates that are not explained by any of the other factors in the model. It is worth asking whether this random effect is really necessary. To address this question, we compared the fit to the data of two models, each containing all of our fixed effects and a random effect for speaker identity. One model also contained a random effect for word identity. Results are shown in Table 3.
Model         Neg. log lik.   Diff.   df
Full          12932           0       32
Reduced       12935           3       26
No lexical    13203           271     16
No prosodic   13387           455     20
No speaker    13432           500     31
No word       13267           335     31
Baseline      14691           1759    1
Table 3: Fit to the data of various models. Degrees of freedom (df) for each model is the number of fixed effects plus the number of random effects plus 1 (for the intercept). Full model contains all predictors; Reduced contains only predictors contributing significantly to fit; Baseline contains only intercept. Other models are obtained by removing features from Full. Diff is the difference in log likelihood between each model and Full.
The model without a random effect for word identity is significantly worse than the full model; in fact, this single parameter is more important than all of the lexical features combined. To see which lexical items are causing the most difficulty, we examined the items with the highest estimated increases in error. The top 20 items on this list include yup, yep, yes, buy, then, than, and r., all of which are acoustically similar to each other or to other high-frequency words, as well as the words after, since, now, and though, which occur in many syntactic contexts, making them difficult to predict based on the language model.
4.2.5 Differences between speakers
We examined the importance of the random effect for speaker identity in a similar fashion to the effect for word identity. As shown in Table 3, speaker identity is a very important factor in determining the probability of error. That is, the lexical and prosodic variables examined here are not sufficient to fully explain the differences in error rates between speakers. In fact, the speaker effect is the single most important factor in the model. Given that the differences in error rates between speakers are so large (average IWER for different speakers ranges from 5% to 51%), we wondered whether our model is sufficient to capture the kinds of speaker variation that exist. The model assumes that each speaker has a different baseline error rate, but that the effects of each variable are the same for each speaker.
Determining the extent to which this assumption is justified is beyond the scope of this paper; however, we present some suggestive results in Figure 5. This figure illustrates some of the differences between two speakers chosen fairly arbitrarily from our data set. Not only are the baseline error rates different for the two speakers, but the effects of various features appear to be very different, in one case even reversed. The rest of our data set exhibits similar kinds of variability for many of the features we examined. These differences in ASR behavior between speakers are particularly interesting considering that the system we investigated here already incorporates speaker adaptation models.
Figure 5: Estimated effects of various features on the error rates of two different speakers (top and bottom). Dashed lines illustrate the baseline probability of error for each speaker. Solid lines were obtained by fitting a logistic regression model to each speaker's data, with the variable labeled on the x-axis as the only predictor. (The panels plot fitted P(err) against intensity mean, pitch mean, duration, negative log probability, and speech rate for each of the two speakers; only the caption and panel variables are recoverable from the extracted text.)
5 Conclusion
In this paper, we introduced the individual word error rate (IWER) for measuring ASR performance on individual words, including insertions as well as deletions and substitutions. Using IWER, we analyzed the effects of various word-level lexical and prosodic features, both individually and in a joint model. Our analysis revealed the following effects. (1) Words at the start of a turn have slightly higher IWER than average, and open class (content) words have slightly lower IWER. These effects persist even after controlling for other lexical and prosodic factors. (2) Disfluencies heavily impact error rates: IWER for non-final repetitions and words adjacent to fragments rises by up to 15% absolute, while IWER for final repetitions and words following repetitions decreases by up to 7.2% absolute. Controlling for prosodic features eliminates the latter benefit, and reveals a negative effect of adjacent filled pauses, suggesting that the effects of these disfluencies are normally obscured by the greater duration of nearby words. (3) For most acoustic-prosodic features, words with extreme values have worse recognition than words with average values. This effect becomes much more pronounced after controlling for other factors. (4) After controlling for lexical and prosodic characteristics, the lexical items with the highest error rates are primarily homophones or near-homophones (e.g., buy vs. by, then vs. than). (5) Speaker differences account for much of the variance in error rates between words. Moreover, the direction and strength of effects of different prosodic features may vary between speakers. While we plan to extend our analysis to other ASR systems in order to determine the generality of our findings, we have already gained important insights into a number of factors that increase ASR error rates. In addition, our results suggest a rich area for future research in further analyzing the variability of both lexical and prosodic effects on ASR behavior for different speakers.
Acknowledgments This work was supported by the Edinburgh-Stanford LINK and ONR MURI award N000140510388. We thank Andreas Stolcke for providing the ASR output, language model, and forced alignments used here, and Raghunandan Kumaran and Katrin Kirchhoff for earlier datasets and additional help. 387 References M. Adda-Decker and L. Lamel. 2005. Do speech recognizers prefer female speakers? In Proceedings of INTERSPEECH, pages 2205–2208. R. H. Baayen. 2008. Analyzing Linguistic Data. A Practical Introduction to Statistics. Cambridge University Press. Prepublication version available at http://www.mpi.nl/world/persons/private/baayen/publications.html. Douglas Bates, 2007. lme4: Linear mixed-effects models using S4 classes. R package version 0.99875-8. A. Bell, D. Jurafsky, E. Fosler-Lussier, C. Girand, M. Gregory, and D. Gildea. 2003. Effects of disfluencies, predictability, and utterance position on word form variation in English conversation. Journal of the Acoustical Society of America, 113(2):1001–1024. P. Boersma and D. Weenink. 2007. Praat: doing phonetics by computer (version 4.5.16). http://www.praat.org/. A. Bradlow, G. Torretta, and D. Pisoni. 1996. Intelligibility of normal speech I: Global and fine-grained acoustic-phonetic talker characteristics. Speech Communication, 20:255–272. R. Diehl, B. Lindblom, K. Hoemeke, and R. Fahey. 1996. On explaining certain male-female differences in the phonetic realization of vowel categories. Journal of Phonetics, 24:187–208. E. Fosler-Lussier and N. Morgan. 1999. Effects of speaking rate and word frequency on pronunciations in conversational speech. Speech Communication, 29:137– 158. J. Hirschberg, D. Litman, and M. Swerts. 2004. Prosodic and other cues to speech recognition failures. Speech Communication, 43:155– 175. M. Nakamura, K. Iwano, and S. Furui. 2008. Differences between acoustic characteristics of spontaneous and read speech and their effects on speech recognition performance. Computer Speech and Language, 22:171– 184. R Development Core Team, 2007. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3900051-07-0. A. Ratnaparkhi. 1996. A Maximum Entropy model for part-of-speech tagging. In Proceedings of the First Conference on Empirical Methods in Natural Language Processing, pages 133–142. T. Shinozaki and S. Furui. 2001. Error analysis using decision trees in spontaneous presentation speech recognition. In Proceedings of ASRU 2001. E. Shriberg. 1995. Acoustic properties of disfluent repetitions. In Proceedings of the International Congress of Phonetic Sciences, volume 4, pages 384–387. M. Siegler and R. Stern. 1995. On the effects of speech rate in large vocabulary speech recognition systems. In Proceedings of ICASSP. A. Stolcke, B. Chen, H. Franco, V. R. R. Gadde, M. Graciarena, M.-Y. Hwang, K. Kirchhoff, A. Mandal, N. Morgan, X. Lin, T. Ng, M. Ostendorf, K. Sonmez, A. Venkataraman, D. Vergyri, W. Wang, J. Zheng, and Q. Zhu. 2006. Recent innovations in speech-to-text transcription at SRI-ICSI-UW. IEEE Transactions on Audio, Speech and Language Processing, 14(5):1729– 1744. 388
Proceedings of ACL-08: HLT, pages 389–397, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Name Translation in Statistical Machine Translation Learning When to Transliterate Ulf Hermjakob and Kevin Knight University of Southern California Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292, USA fulf,knight [email protected] Hal Daum´e III University of Utah School of Computing 50 S Central Campus Drive Salt Lake City, UT 84112, USA [email protected] Abstract We present a method to transliterate names in the framework of end-to-end statistical machine translation. The system is trained to learn when to transliterate. For Arabic to English MT, we developed and trained a transliterator on a bitext of 7 million sentences and Google’s English terabyte ngrams and achieved better name translation accuracy than 3 out of 4 professional translators. The paper also includes a discussion of challenges in name translation evaluation. 1 Introduction State-of-the-art statistical machine translation (SMT) is bad at translating names that are not very common, particularly across languages with different character sets and sound systems. For example, consider the following automatic translation:1 Arabic input à A K . ñ  ƒ ð P @ P ñ Ó ð p A K . É  J Ó áJ J  ®J ƒ ñ Ó É J ¯ @ P ð ¬ ñ J J K A Ò k P ð à A Ó ñ  ƒ ð á ¯ ñ ê  J J K . ð ­J J ¯ ñ » ð Q K . ð SMT output musicians such as Bach Correct translation composers such as Bach, Mozart, Chopin, Beethoven, Schumann, Rachmaninoff, Ravel and Prokofiev The SMT system drops most names in this example. “Name dropping” and mis-translation happens when the system encounters an unknown word, mistakes a name for a common noun, or trains on noisy parallel data. The state-of-the-art is poor for 1taken from NIST02-05 corpora two reasons. First, although names are important to human readers, automatic MT scoring metrics (such as BLEU) do not encourage researchers to improve name translation in the context of MT. Names are vastly outnumbered by prepositions, articles, adjectives, common nouns, etc. Second, name translation is a hard problem — even professional human translators have trouble with names. Here are four reference translations taken from the same corpus, with mistakes underlined: Ref1 composers such as Bach, missing name Chopin, Beethoven, Shumann, Rakmaninov, Ravel and Prokoviev Ref2 musicians such as Bach, Mozart, Chopin, Bethoven, Shuman, Rachmaninoff, Rafael and Brokoviev Ref3 composers including Bach, Mozart, Schopen, Beethoven, missing name Raphael, Rahmaniev and Brokofien Ref4 composers such as Bach, Mozart, missing name Beethoven, Schumann, Rachmaninov, Raphael and Prokofiev The task of transliterating names (independent of end-to-end MT) has received a significant amount of research, e.g., (Knight and Graehl, 1997; Chen et al., 1998; Al-Onaizan, 2002). One approach is to “sound out” words and create new, plausible targetlanguage spellings that preserve the sounds of the source-language name as much as possible. Another approach is to phonetically match source-language names against a large list of target-language words 389 and phrases. Most of this work has been disconnected from end-to-end MT, a problem which we address head-on in this paper. 
The simplest way to integrate name handling into SMT is: (1) run a named-entity identification system on the source sentence, (2) transliterate identified entities with a special-purpose transliteration component, and (3) run the SMT system on the source sentence, as usual, but when looking up phrasal translations for the words identified in step 1, instead use the transliterations from step 2. Many researchers have attempted this, and it does not work. Typically, translation quality is degraded rather than improved, for the following reasons:  Automatic named-entity identification makes errors. Some words and phrases that should not be transliterated are nonetheless sent to the transliteration component, which returns a bad translation.  Not all named entities should be transliterated. Many named entities require a mix of transliteration and translation. For example, in the pair A J K P ñ ® J Ë A » H . ñ J k . /jnub kalyfurnya/Southern California, the first Arabic word is translated, and the second word is transliterated.  Transliteration components make errors. The base SMT system may translate a commonlyoccurring name just fine, due to the bitext it was trained on, while the transliteration component can easily supply a worse answer.  Integration hobbles SMT’s use of longer phrases. Even if the named-entity identification and transliteration components operate perfectly, adopting their translations means that the SMT system may no longer have access to longer phrases that include the name. For example, our base SMT system translates  J K P © J K . ù Ë Z @ P P ñ Ë @ (as a whole phrase) to “Premier Li Peng”, based on its bitext knowledge. However, if we force © J K . ù Ë to translate as a separate phrase to “Li Peng”, then the term Z @ P P ñ Ë @  J K P becomes ambiguous (with translations including “Prime Minister”, “Premier”, etc.), and we observe incorrect choices being subsequently made. To spur better work in name handling, an ACE entity-translation pilot evaluation was recently developed (Day, 2007). This evaluation involves a mixture of entity identification and translation concerns—for example, the scoring system asks for coreference determination, which may or may not be of interest for improving machine translation output. In this paper, we adopt a simpler metric. We ask: what percentage of source-language named entities are translated correctly? This is a precision metric. We can readily apply it to any base SMT system, and to human translations as well. Our goal in augmenting a base SMT system is to increase this percentage. A secondary goal is to make sure that our overall translation quality (as measured by BLEU) does not degrade as a result of the name-handling techniques we introduce. We make all our measurements on an Arabic/English newswire translation task. Our overall technical approach is summarized here, along with references to sections of this paper:  We build a component for transliterating between Arabic and English (Section 3).  We automatically learn to tag those words and phrases in Arabic text, which we believe the transliteration component will translate correctly (Section 4).  We integrate suggested transliterations into the base SMT search space, with their use controlled by a feature function (Section 5).  We evaluate both the base SMT system and the augmented system in terms of entity translation accuracy and BLEU (Sections 2 and 6). 
2 Evaluation In this section we present the evaluation method that we use to measure our system and also discuss challenges in name transliteration evaluation. 2.1 NEWA Evaluation Metric General MT metrics such as BLEU, TER, METEOR are not suitable for evaluating named entity translation and transliteration, because they are not focused on named entities (NEs). Dropping a comma or a the is penalized as much as dropping a name. We therefore use another metric, jointly developed with BBN and LanguageWeaver. 390 The general idea of the Named Entity Weak Accuracy (NEWA) metric is to  Count number of NEs in source text: N  Count number of correctly translated NEs: C  Divide C/N to get an accuracy figure In NEWA, an NE is counted as correctly translated if the target reference NE is found in the MT output. The metric has the advantage that it is easy to compute, has no special requirements on an MT system (such as depending on source-target word alignment) and is tokenization independent. In the result section of this paper, we will use the NEWA metric to measure and compare the accuracy of NE translations in our end-to-end SMT translations and four human reference translations. 2.2 Annotated Corpus BBN kindly provided us with an annotated Arabic text corpus, in which named entities were marked up with their type (e.g. GPE for Geopolitical Entity) and one or more English translations. Example: ù ¯ <GPE alt=”Termoli”> ùË ñÓ QJ  K </GPE > <PER alt=”Abdullah II j Abdallah II”> é Ê Ë @ Y J . « ù K A  JË @</PER> The BBN annotations exhibit a number of issues. For the English translations of the NEs, BBN annotators looked at human reference translations, which may introduce a bias towards those human translations. Specifically, the BBN annotations are sometimes wrong, because the reference translations were wrong. Consider for example the Arabic phrase ù Ë ñ Ó Q J  K ù ¯ à @ Q  K P ñ K . © J ’ Ó (mSn‘ burtran fY tyrmulY), which means Powertrain plant in Termoli. The mapping from tyrmulY to Termoli is not obvious, and even less the one from burtran to Powertrain. The human reference translations for this phrase are 1. Portran site in Tremolo 2. Termoli plant (one name dropped) 3. Portran in Tirnoli 4. Portran assembly plant, in Tirmoli The BBN annotators adopted the correct translation Termoli, but also the incorrect Portran. In other cases the BBN annotators adopted both a correct (Khatami) and an incorrect translation (Khatimi) when referring to the former Iranian president, which would reward a translation with such an incorrect spelling.  <PER alt=”Khatami jKhatimi”> ùÒ  K A k</PER>  <GPE alt=”the American”>  é J » QJ Ó AË @</GPE > In other cases, all translations are correct, but additional correct translations are missing, as for “the American” above, for which “the US” is an equally valid alternative in the specific sentence it was annotated in. All this raises the question of what is a correct answer. For most Western names, there is normally only one correct spelling. We follow the same conventions as standard media, paying attention to how an organization or individual spells its own name, e.g. Senator Jon Kyl, not Senator John Kyle. For Arabic names, variation is generally acceptable if there is no one clearly dominant spelling in English, e.g. GaddafijGadhafijQaddafijQadhafi, as long as a given variant is not radically rarer than the most conventional or popular form. 
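A minimal sketch of the NEWA computation described in section 2.1 is given below. This is not BBN's or LanguageWeaver's scorer; the data layout and example strings are assumptions made for the illustration.

```python
# Sketch of NEWA: count an annotated NE as correct if any of its acceptable
# English renderings occurs in the MT output for that segment, then report C/N.

def newa(annotated_nes, mt_output):
    """annotated_nes: list of (segment_id, [acceptable English alternatives]);
    mt_output: dict mapping segment_id to the system's translation string."""
    n = len(annotated_nes)
    c = sum(1 for seg, alts in annotated_nes
            if any(alt in mt_output.get(seg, "") for alt in alts))
    return c / n if n else 0.0

nes = [("s1", ["Termoli"]), ("s1", ["Powertrain"]), ("s2", ["Khatami"])]
out = {"s1": "the Powertrain plant in Termoli", "s2": "President Khatami said"}
print(newa(nes, out))   # 1.0
```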
2.3 Re-Annotation Based on the issues we found with the BBN annotations, we re-annotated a sub-corpus of 637 sentences of the BBN gold standard. We based this re-annotation on detailed annotation guidelines and sample annotations that had previously been developed in cooperation with LanguageWeaver, building on three iterations of test annotations with three annotators. We checked each NE in every sentence, using human reference translations, automatic transliterator output, performing substantial Web research for many rare names, and checked Google ngrams and counts for the general Web and news archives to determine whether a variant form met our threshold of occurring at least 20% as often as the most dominant form. 3 Transliterator This section describes how we transliterate Arabic words or phrases. Given a word such as ¬ ñ J J K AÒk P or a phrase such as É J ¯ @ P  K P ñ Ó, we want to find the English transliteration for it. This is not just a 391 romanization like rHmanynuf and murys rafyl for the examples above, but a properly spelled English name such as Rachmaninoff and Maurice Ravel. The transliteration result can contain several alternatives, e.g. Rachmaninoff jRachmaninov. Unlike various generative approaches (Knight and Graehl, 1997; Stalls and Knight, 1998; Li et al., 2004; Matthews, 2007; Sherif and Kondrak, 2007; Kashani et al., 2007), we do not synthesize an English spelling from scratch, but rather find a translation in very large lists of English words (3.4 million) and phrases (47 million). We develop a similarity metric for Arabic and English words. Since matching against millions of candidates is computationally prohibitive, we store the English words and phrases in an index, such that given an Arabic word or phrase, we quickly retrieve a much smaller set of likely candidates and apply our similarity metric to that smaller list. We divide the task of transliteration into two steps: given an Arabic word or phrase to transliterate, we (1) identify a list of English transliteration candidates from indexed lists of English words and phrases with counts (section 3.1) and (2) compute for each English name candidate the cost for the Arabic/English name pair (transliteration scoring model, section 3.2). We then combine the count information with the transliteration cost according to the formula: score(e) = log(count(e))/20 - translit cost(e,f) 3.1 Indexing with consonant skeletons We identify a list of English transliteration candidates through what we call a consonant skeleton index. Arabic consonants are divided into 11 classes, represented by letters b,f,g,j,k,l,m,n,r,s,t. In a onetime pre-processing step, all 3,420,339 (unique) English words from our English unigram language model (based on Google’s Web terabyte ngram collection) that might be names or part of names (mostly based on capitalization) are mapped to one or more skeletons, e.g. Rachmaninoff ! rkmnnf, rmnnf, rsmnnf, rtsmnnf This yields 10,381,377 skeletons (average of 3.0 per word) for which a reverse index is created (with counts). At run time, an Arabic word to be transliterated is mapped to its skeleton, e.g. ¬ ñ J J K AÒk P ! rmnnf This skeleton serves as a key for the previously built reverse index, which then yields the list of English candidates with counts: rmnnf ! Rachmaninov (186,216), Rachmaninoff (179,666), Armenonville (3,445), Rachmaninow (1,636), plus 8 others. 
Shorter words tend to produce more candidates, resulting in slower transliteration, but since there are relatively few unique short words, this can be addressed by caching transliteration results. The same consonant skeleton indexing process is applied to name bigrams (47,700,548 unique with 167,398,054 skeletons) and trigrams (46,543,712 unique with 165,536,451 skeletons). 3.2 Transliteration scoring model The cost of an Arabic/English name pair is computed based on 732 rules that assign a cost to a pair of Arabic and English substrings, allowing for one or more context restrictions. 1.  †::q == ::0 2. ¬ ð::ough == ::0 3. h::ch == :[aou],::0.1 4.  †::k == ,$:,$::0.1 ; ::0.2 5. Z:: == :,EC::0.1 The first example rule above assigns to the straightforward pair  †/q a cost of 0. The second rule includes 2 letters on the Arabic and 4 on the English side. The third rule restricts application to substring pairs where the English side is preceded by the letters a, o, or u. The fourth rule specifies a cost of 0.1 if the substrings occur at the end of (both) names, 0.2 otherwise. According to the fifth rule, the Arabic letter Z may match an empty string on the English side, if there is an English consonant (EC) in the right context of the English side. The total cost is computed by always applying the longest applicable rule, without branching, resulting in a linear complexity with respect to word-pair length. Rules may include left and/or right context for both Arabic and English. The match fails if no rule applies or the accumulated cost exceeds a preset limit. Names may have n words on the English and m on the Arabic side. For example, New York is one word in Arabic and Abdullah is two words in Arabic. The 392 rules handle spaces (as well as digits, apostrophes and other non-alphabetic material) just like regular alphabetic characters, so that our system can handle cases like where words in English and Arabic names do not match one to one. The French name Beaujolais ( éJ Ë ñk . ñK ./bujulyh) deviates from standard English spelling conventions in several places. The accumulative cost from the rules handling these deviations could become prohibitive, with each cost element penalizing the same underlying offense — being French. We solve this problem by allowing for additional context in the form of style flags. The rule for matching eau/ ð specifies, in addition to a cost, an (output) style flag +fr (as in French), which in turn serves as an additional context for the rule that matches ais/ éK at a much reduced cost. Style flags are also used for some Arabic dialects. Extended characters such as ´e, ¨o, and s¸ and spelling idiosyncrasies in names on the English side of the bitext that come from various third languages account for a significant portion of the rule set. Casting the transliteration model as a scoring problem thus allows for very powerful rules with strong contexts. The current set of rules has been built by hand based on a bitext development corpus; future work might include deriving such rules automatically from a training set of transliterated names. This transliteration scoring model described in this section is used in two ways: (1) to transliterate names at SMT decoding time, and (2) to identify transliteration pairs in a bitext. 4 Learning what to transliterate As already mentioned in the introduction, named entity (NE) identification followed by MT is a bad idea. 
We don’t want to identify NEs per se anyway — we want to identify things that our transliterator will be good at handling, i.e., things that should be transliterated. This might even include loanwords like bnk (bank) and brlman (parliament), but would exclude names such as National Basketball Association that are often translated rather transliterated. Our method follows these steps: 1. Take a bitext. 2. Mark the Arabic words and phrases that have a recognizable transliteration on the Englishside. 3. Remove the English side of the bitext. 4. Divide the annotated Arabic corpus into a training and test corpus. 5. Train a monolingual Arabic tagger to identify which words and phrases (in running Arabic) are good candidates for transliteration (section 4.2) 6. Apply the tagger to test data and evaluate its accuracy. 4.1 Mark-up of bitext Given a tokenized (but unaligned and mixed-case) bitext, we mark up that bitext with links between Arabic and English words that appear to be transliterations. In the following example, linked words are underlined, with numbers indicating what is linked. English The meeting was attended by Omani (1) Secretary of State for Foreign Affairs Yusif (2) bin (3) Alawi (6) bin (8) Abdallah (10) and Special Advisor to Sultan (12) Qabus (13) for Foreign Affairs Umar (14) bin (17) Abdul Munim (19) al-Zawawi (21). Arabic (translit.) uHDr allqa’ uzyr aldule al‘manY (1) llsh’uun alkharjye yusf (2) bn (3) ‘luY (6) bn (8) ‘bd allh (10) ualmstshar alkhaS llslTan (12) qabus (13) ll‘laqat alkharjye ‘mr (14) bn (17) ‘bd almn‘m (19) alzuauY (21) . For each Arabic word, the linking algorithm tries to find a matching word on the English side, using the transliteration scoring model described in section 3. If the matcher reaches the end of an Arabic or English word before reaching the end of the other, it continues to “consume” additional words until a word-boundary observing match is found or the cost threshold exceeded. When there are several viable linking alternatives, the algorithm considers the cost provided by the transliteration scoring model, as well as context to eliminate inferior alternatives, so that for example the different occurrences of the name particle bin in the example above are linked to the proper Arabic words, based on the names next to them. The number of links depends, of course, on the specific corpus, but we typically identify about 3.0 links per sentence. The algorithm is enhanced by a number of heuristics: 393  English match candidates are restricted to capitalized words (with a few exceptions).  We use a list of about 200 Arabic and English stopwords and stopword pairs.  We use lists of countries and their adjective forms to bridge cross-POS translations such as Italy’s president on the English and  J K P ù Ë A¢ K A Ë @ (”Italianpresident”) on the Arabic side.  Arabic prefixes such as È/l- (”to”) are treated in a special way, because they are translated, not transliterated like the rest of the word. Link (12) above is an example. In this bitext mark-up process, we achieve 99.5% precision and 95% recall based on a manual visualization-tool based evaluation. Of the 5% recall error, 3% are due to noisy data in the bitext such as typos, incorrect translations, or names missing on one side of the bitext. 4.2 Training of Arabic name tagger The task of the Arabic name tagger (or more precisely, “transliterate-me” tagger) is to predict whether or not a word in an Arabic text should be transliterated, and if so, whether it includes a prefix. 
Prefixes such as ð/u- (“and”) have to be translated rather than transliterated, so it is important to split off any prefix from a name before transliterating that name. This monolingual tagging task is not trivial, as many Arabic words can be both a name and a nonname. For example,  è QK Q j . Ë @ (aljzyre) can mean both Al-Jazeera and the island (or peninsula). Features include the word itself plus two words to the left and right, along with various prefixes, suffixes and other characteristics of all of them, totalling about 250 features. Some of our features depend on large corpus statistics. For this, we divide the tagged Arabic side of our training corpus into a stat section and a core training section. From the stat section we collect statistics as to how often every word, bigram or trigram occurs, and what distribution of name/nonname patterns these ngrams have. The name distribution bigram  éK P ñ º Ë @  è Q K Q j . Ë @ 3327 00:133 01:3193 11:1 (aljzyre alkurye/“peninsula Korean”) for example tells us that in 3193 out of 3327 occurrences in the stat corpus bitext, the first word is a marked up as a non-name (”0”) and the second as a name (”1”), which strongly suggests that in such a bigram context, aljzyre better be translated as island or peninsula, and not be transliterated as Al-Jazeera. We train our system on a corpus of  million stat sentences, and 00; 000 core training sentences. We employ a sequential tagger trained using the SEARN algorithm (Daum´e III et al., 2006) with aggressive updates ( = ). Our base learning algorithm is an averaged perceptron, as implemented in the MEGAM package2. Reference Precision Recall F-meas. Raw test corpus 87.4% 95.7% 91.4% Adjusted for GS 92.1% 95.9% 94.0% deficiencies Table 1: Accuracy of “transliterate-me” tagger Testing on 10,000 sentences, we achieve precision of 87.4% and a recall of 95.7% with respect to the automatically marked-up Gold Standard as described in section 4.1. A manual error analysis of 500 sentences shows that a large portion are not errors after all, but have been marked as errors because of noise in the bitext and errors in the bitext markup. After adjusting for these deficiencies in the gold standard, we achieve precision of 92.1% and recall of 95.9% in the name tagging task. 5 Integration with SMT We use the following method to integrate our transliterator into the overall SMT system: 1. We tag the Arabic source text using the tagger described in the previous section. 2. We apply the transliterator described in section 3 to the tagged items. We limit this transliteration to words that occur up to 50 times in the training corpus for single token names (or up to 100 and 150 times for two and three-word names). We do this because the general SMT mechanism tends to do well on more common names, but does poorly on rare names (and will 2Freely available at http://hal3.name/megam 394 always drop names it has never seen in the training bitext). 3. On the fly, we add transliterations to SMT phrase table. Instead of a phrasal probability, the transliterationshave a special binary feature set to 1. In a tuning step, the Minimim Error Rate Training component of our SMT system iteratively adjusts the set of rule weights, including the weight associated with the transliteration feature, such that the English translations are optimized with respect to a set of known reference translations according to the BLEU translation metric. 4. 
At run-time, the transliterations then compete with the translations generated by the general SMT system. This means that the MT system will not always use the transliterator suggestions, depending on the combination of language model, translation model, and other component scores. 5.1 Multi-token names We try to transliterate names as much as possible in context. Consider for example the Arabic name:  éJ ® “ ñ K . @ ­ ƒ ñ K (”yusf abu Sfye”) If transliterated as single words without context, the top results would be JosephjJosefjYusufjYosefj Youssef, Abu jAbo jIvo jApojIbo, and SephiajSofiaj SophiajSafieh jSafia respectively. However, when transliterating the three words together against our list of 47 million English trigrams (section 3), the transliterator will select the (correct) translation Yousef Abu Safieh. Note that Yousef was not among the top 5 choices, and that Safieh was only choice 4. Similarly, when transliterating à A K . ñ  ƒ ð P @ P ñ Ó ð /umuzar ushuban (”and Mozart and Chopin”) without context, the top results would be MoserjMauserj MozerjMozart jMouser and Shuppan jShopping j Schwaben jSchuppan jShobana (with Chopin way down on place 22). Checking our large English lists for a matching name, name pattern, the transliterator identifies the correct translation “, Mozart, Chopin”. Note that the transliteration module provides the overall SMT system with up to 5 alternatives, augmented with a choice of English translations for the Arabic prefixes like the comma and the conjunction and in the last example. 6 End-to-End results We applied the NEWA metric (section 2) to both our SMT translations as well as the four human reference translations, using both the original namedentity translation annotation and the re-annotation: Gold Standard BBN GS Re-annotated GS Human 1 87.0% 85.0% Human 2 85.3% 86.9% Human 3 90.4% 91.8% Human 4 86.5% 88.3% SMT System 80.4% 89.7% Table 2: Name translation accuracy with respect to BBN and re-annotated Gold Standard on 1730 named entities in 637 sentences. Almost all scores went up with re-annotations, because the re-annotations more properly reward correct answers. Based on the original annotations, all human name translations were much better than our SMT system. However, based on our re-annotation, the results are quite different: our system has a higher NEWA score and better name translations than 3 out of 4 human annotators. The evaluation results confirm that the original annotation method produced a relative bias towards the human translation its annotations were largely based on, compared to other translations. Table 3 provides more detailed NEWA results. The addition of the transliteration module improves our overall NEWA score from 87.8% to 89.7%, a relative gain of 16% over base SMT system. For names of persons (PER) and facilities (FAC), our system outperforms all human translators. Humans performed much better on Person Nominals (PER.Nom) such as Swede, Dutchmen, Americans. Note that name translation quality varies greatly between human translators, with error rates ranging from 8.2-15.0% (absolute). To make sure our name transliterator does not degrade the overall translation quality, we evaluated our base SMT system with BLEU, as well as our transliteration-augmented SMT system. 
Our standard newswire training set consists of 10.5 million words of bitext (English side) and 1491 test sen395 NE Type Count Baseline SMT with Human 1 Human 2 Human 3 Human 4 SMT Transliteration PER 342 266 (77.8%) 280 (81.9%) 210 (61.4%) 265 (77.5%) 278 (81.3%) 275 (80.4%) GPE 910 863 (94.8%) 877 (96.4%) 867 (95.3%) 849 (93.3%) 885 (97.3%) 852 (93.6%) ORG 332 280 (84.3%) 282 (84.9%) 263 (79.2%) 265 (79.8%) 293 (88.3%) 281 (84.6%) FAC 27 18 (66.7%) 24 (88.9%) 21 (77.8%) 20 (74.1%) 22 (81.5%) 20 (74.1%) PER.Nom 61 49 (80.3%) 48 (78.7%) 61 (100.0%) 56 (91.8%) 60 (98.4%) 57 (93.4%) LOC 58 43 (74.1%) 41 (70.7%) 48 (82.8%) 48 (82.8%) 51 (87.9%) 43 (74.1%) All types 1730 1519 (87.8%) 1552 (89.7%) 1470 (85.0%) 1503 (86.9%) 1589 (91.8%) 1528 (88.3%) Table 3: Name translation accuracy in end-to-end statistical machine translation (SMT) system for different named entity (NE) types: Person (PER), Geopolitical Entity, which includes countries, provinces and towns (GPE), Organization (ORG), Facility (FAC), Nominal Person, e.g. Swede (PER.Nom), other location (LOC). tences. The BLEU scores for the two systems were 50.70 and 50.96 respectively. Finally, here are end-to-end machine translation results for three sentences, with and without the transliteration module, along with a human reference translation. Old: Al-Basha leads a broad list of musicians such as Bach. New: Al-Basha leads a broad list of musical acts such as Bach, Mozart, Beethoven, Chopin, Schumann, Rachmaninoff, Ravel and Prokofiev. Ref: Al-Bacha performs a long list of works by composers such as Bach, Chopin, Beethoven, Shumann, Rakmaninov, Ravel and Prokoviev. Old: Earlier Israeli military correspondent turn introduction programme ”Entertainment Bui” New: Earlier Israeli military correspondent turn to introduction of the programme ”Play Boy” Ref: Former Israeli military correspondent turns host for ”Playboy” program Old: The Nikkei president company De Beers said that ... New: The company De Beers chairman Nicky Oppenheimer said that ... Ref: Nicky Oppenheimer, chairman of the De Beers company, stated that ... 7 Discussion We have shown that a state-of-the-art statistical machine translation system can benefit from a dedicated transliteration module to improve the translation of rare names. Improved named entity translation accuracy as measured by the NEWA metric in general, and a reduction in dropped names in particular is clearly valuable to the human reader of machine translated documents as well as for systems using machine translation for further information processing. At the same time, there has been no negative impact on overall quality as measured by BLEU. We believe that all components can be further improved, e.g.  Automatically retune the weights in the transliteration scoring model.  Improve robustness with respect to typos, incorrect or missing translations, and badly aligned sentences when marking up bitexts.  Add more features for learning whether or not a word should be transliterated, possibly using source language morphology to better identify non-name words never or rarely seen during training. Additionally,our transliterationmethod could be applied to other language pairs. We find it encouraging that we already outperform some professional translators in name translation accuracy. The potential to exceed human translator performance arises from the patience required to translate names right. Acknowledgment This research was supported under DARPA Contract No. HR0011-06-C-0022. 
396 References Yaser Al-Onaizan and Kevin Knight. 2002. Machine Transliteration of Names in Arabic Text. In Proceedings of the Association for Computational Linguistics Workshop on Computational Approaches to Semitic Languages. Thorsten Brants, Alex Franz. 2006. Web 1T 5-gram Version 1. Released by Google through the Linguistic Data Consortium, Philadelphia, as LDC2006T13. Hsin-Hsi Chen, Sheng-Jie Huang, Yung-Wei Ding, and Shih-Chung Tsai. 1998. Proper Name Translation in Cross-Language Information Retrieval. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics. Hal Daum´e III, John Langford, and Daniel Marcu. 2006. Search-based Structured Prediction. Submitted to the Machine Learning Journal. http://pub.hal3.name/#daume06searn David Day. 2007. Entity Translation 2007 Pilot Evaluation (ET07). In proceedings of the Workshop on Automatic Content Extraction (ACE). College Park, Maryland. Byung-Ju Kang and Key-Sun Choi. 2000. Automatic Transliteration and Back-transliteration by Decision Tree Learning. In Conference on Language Resources and Evaluation. Mehdi M. Kashani, Fred Popowich, and Fatiha Sadat. 2007. Automatic Transliteration of Proper Nouns from Arabic to English. The Challenge of Arabic For NLP/MT, 76-84. Alexandre Klementiev and Dan Roth. 2006. Named entity transliteration and discovery from multilingual comparable corpora. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Kevin Knight and Jonathan Graehl. 1997. Machine Transliteration. In Proceedings of the 35th Annual Meeting of the Association for ComputationalLinguistics. Li Haizhou, Zhang Min, and Su Jian. 2004. A Joint Source-Channel Model for Machine Transliteration. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics. Wei-Hao Lin and Hsin-Hsi Chen. 2002. Backward Machine Transliteration by Learning Phonetic Similarity. Sixth Conference on Natural Language Learning, Taipei, Taiwan, 2002. David Matthews. 2007. Machine Transliteration of Proper Names. Master’s Thesis. School of Informatics. University of Edinburgh. Masaaki Nagata, Teruka Saito, and Kenji Suzuki. 2001. Using the Web as a Bilingual Dictionary. In Proceedings of the Workshop on Data-driven Methods in Machine Translation. Bruno Pouliquen, Ralf Steinberger, Camelia Ignat, Irina Temnikova, Anna Widiger, Wajdi Zaghouani, and Jan Zizka. 2006. Multilingual Person Name Recognition and Transliteration. CORELA - COgnition, REpresentation, LAnguage, Poitiers, France. Volume 3/3, number 2, pp. 115-123. Tarek Sherif and Grzegorz Kondrak. 2007. SubstringBased Transliteration. In Proceedings of the 45th Annual Meeting on Association for Computational Linguistics. Richard Sproat, ChengXiang Zhai, and Tao Tao. 2006. Named Entity Transliteration with Comparable Corpora. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting on Association for Computational Linguistics. Bonnie Glover Stalls and Kevin Knight. 1998. Translating Names and Technical Terms in Arabic Text. In Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages. Stephen Wan and Cornelia Verspoor. 1998. Automatic English-Chinese Name Transliteration for Development of Multilingual Resources. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics. Montreal, Canada. 397
2008
45
Proceedings of ACL-08: HLT, pages 398–406, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure Mark Johnson Brown University Mark [email protected] Abstract Adaptor grammars (Johnson et al., 2007b) are a non-parametric Bayesian extension of Probabilistic Context-Free Grammars (PCFGs) which in effect learn the probabilities of entire subtrees. In practice, this means that an adaptor grammar learns the structures useful for generating the training data as well as their probabilities. We present several different adaptor grammars that learn to segment phonemic input into words by modeling different linguistic properties of the input. One of the advantages of a grammar-based framework is that it is easy to combine grammars, and we use this ability to compare models that capture different kinds of linguistic structure. We show that incorporating both unsupervised syllabification and collocation-finding into the adaptor grammar significantly improves unsupervised word-segmentation accuracy over that achieved by adaptor grammars that model only one of these linguistic phenomena. 1 Introduction How humans acquire language is arguably the central issue in the scientific study of language. Human language is richly structured, but it is still hotly debated as to whether this structure can be learnt, or whether it must be innately specified. Computational linguistics can contribute to this debate by identifying which aspects of language can potentially be learnt from the input available to a child. Here we try to identify linguistic properties that convey information useful for learning to segment streams of phonemes into words. We show that simultaneously learning syllable structure and collocations improves word segmentation accuracy compared to models that learn these independently. This suggests that there might be a synergistic interaction in learning several aspects of linguistic structure simultaneously, as compared to learning each kind of linguistic structure independently. Because learning collocations and word-initial syllable onset clusters requires the learner to be able to identify word boundaries, it might seem that we face a chicken-and-egg problem here. One of the important properties of the adaptor grammar inference procedure is that it gives us a way of learning these interacting linguistic structures simultaneously. Adaptor grammars are also interesting because they can be viewed as directly inferring linguistic structure. Most well-known machine-learning and statistical inference procedures are parameter estimation procedures, i.e., the procedure is designed to find the values of a finite vector of parameters. Standard methods for learning linguistic structure typically try to reduce structure learning to parameter estimation, say, by using an iterative generate-andprune procedure in which each iteration consists of a rule generation step that proposes new rules according to some scheme, a parameter estimation step that estimates the utility of these rules, and pruning step that removes low utility rules. For example, the Bayesian unsupervised PCFG estimation procedure devised by Stolcke (1994) uses a model-merging procedure to propose new sets of PCFG rules and a Bayesian version of the EM procedure to estimate their weights. 
398 Recently, methods have been developed in the statistical community for Bayesian inference of increasingly sophisticated non-parametric models. (“Non-parametric” here means that the models are not characterized by a finite vector of parameters, so the complexity of the model can vary depending on the data it describes). Adaptor grammars are a framework for specifying a wide range of such models for grammatical inference. They can be viewed as a nonparametric extension of PCFGs. Informally, there seem to be at least two natural ways to construct non-parametric extensions of a PCFG. First, we can construct an infinite number of more specialized PCFGs by splitting or refining the PCFG’s nonterminals into increasingly finer states; this leads to the iPCFG or “infinite PCFG” (Liang et al., 2007). Second, we can generalize over arbitrary subtrees rather than local trees in much the way done in DOP or tree substitution grammar (Bod, 1998; Joshi, 2003), which leads to adaptor grammars. Informally, the units of generalization of adaptor grammars are entire subtrees, rather than just local trees, as in PCFGs. Just as in tree substitution grammars, each of these subtrees behaves as a new context-free rule that expands the subtree’s root node to its leaves, but unlike a tree substitution grammar, in which the subtrees are specified in advance, in an adaptor grammar the subtrees, as well as their probabilities, are learnt from the training data. In order to make parsing and inference tractable we require the leaves of these subtrees to be terminals, as explained in section 2. Thus adaptor grammars are simple models of structure learning, where the subtrees that constitute the units of generalization are in effect new context-free rules learnt during the inference process. (In fact, the inference procedure for adaptor grammars described in Johnson et al. (2007b) relies on a PCFG approximation that contains a rule for each subtree generalization in the adaptor grammar). This paper applies adaptor grammars to word segmentation and morphological acquisition. Linguistically, these exhibit considerable cross-linguistic variation, and so are likely to be learned by human learners. It’s also plausible that semantics and contextual information is less important for their acquisition than, say, syntax. 2 From PCFGs to Adaptor Grammars This section introduces adaptor grammars as an extension of PCFGs; for a more detailed exposition see Johnson et al. (2007b). Formally, an adaptor grammar is a PCFG in which a subset M of the nonterminals are adapted. An adaptor grammar generates the same set of trees as the CFG with the same rules, but instead of defining a fixed probability distribution over these trees as a PCFG does, it defines a distribution over distributions over trees. An adaptor grammar can be viewed as a kind of PCFG in which each subtree of each adapted nonterminal A ∈M is a potential rule, with its own probability, so an adaptor grammar is nonparametric if there are infinitely many possible adapted subtrees. (An adaptor grammar can thus be viewed as a tree substitution grammar with infinitely many initial trees). But any finite set of sample parses for any finite corpus can only involve a finite number of such subtrees, so the corresponding PCFG approximation only involves a finite number of rules, which permits us to build MCMC samplers for adaptor grammars. A PCFG can be viewed as a set of recursivelydefined mixture distributions GA over trees, one for each nonterminal and terminal in the grammar. 
If A is a terminal then GA is the distribution that puts all of its mass on the unit tree (i.e., tree consisting of a single node) labeled A. If A is a nonterminal then GA is the distribution over trees with root labeled A that satisfies: GA = X A→B1...Bn∈RA θA→B1...BnTDA(GB1, . . . , GBn) where RA is the set of rules expanding A, θA→B1,...,Bn is the PCFG “probability” parameter associated with the rule A →B1 . . . Bn and TDA(GB1, . . . , GBn) is the distribution over trees with root label A satisfying: TDA(G1, . . . , Gn)   XX A t1 tn ... ! = n Y i=1 Gi(ti). That is, TDA(G1, . . . , Gn) is the distribution over trees whose root node is labeled A and each subtree ti is generated independently from the distribution Gi. This independence assumption is what makes a PCFG “context-free” (i.e., each subtree is independent given its label). Adaptor grammars relax 399 this independence assumption by in effect learning the probability of the subtrees rooted in a specified subset M of the nonterminals known as the adapted nonterminals. Adaptor grammars achieve this by associating each adapted nonterminal A ∈M with a Dirichlet Process (DP). A DP is a function of a base distribution H and a concentration parameter α, and it returns a distribution over distributions DP(α, H). There are several different ways to define DPs; one of the most useful is the characterization of the conditional or sampling distribution of a draw from DP(α, H) in terms of the Polya urn or Chinese Restaurant Process (Teh et al., 2006). The Polya urn initially contains αH(x) balls of color x. We sample a distribution from DP(α, H) by repeatedly drawing a ball at random from the urn and then returning it plus an additional ball of the same color to the urn. In an adaptor grammar there is one DP for each adapted nonterminal A ∈M, whose base distribution HA is the distribution over trees defined using A’s PCFG rules. This DP “adapts” A’s PCFG distribution by moving mass from the infrequently to the frequently occuring subtrees. An adaptor grammar associates a distribution GA that satisfies the following constraints with each nonterminal A: GA ∼ DP(αA, HA) if A ∈M GA = HA if A ̸∈M HA = X A→B1...Bn∈RA θA→B1...BnTDA(GB1, . . . , GBn) Unlike a PCFG, an adaptor grammar does not define a single distribution over trees; rather, each set of draws from the DPs defines a different distribution. In the adaptor grammars used in this paper there is no recursion amongst adapted nonterminals (i.e., an adapted nonterminal never expands to itself); it is currently unknown whether there are tree distributions that satisfy the adaptor grammar constraints for recursive adaptor grammars. Inference for an adaptor grammar involves finding the rule probabilities θ and the adapted distributions over trees G. We put Dirichlet priors over the rule probabilities, i.e.: θA ∼ DIR(βA) where θA is the vector of probabilities for the rules expanding the nonterminal A and βA are the corresponding Dirichlet parameters. The applications described below require unsupervised estimation, i.e., the training data consists of terminal strings alone. Johnson et al. (2007b) describe an MCMC procedure for inferring the adapted tree distributions GA, and Johnson et al. (2007a) describe a Bayesian inference procedure for the PCFG rule parameters θ using a MetropolisHastings MCMC procedure; implementations are available from the author’s web site. Informally, the inference procedure proceeds as follows. 
We initialize the sampler by randomly assigning each string in the training corpus a random tree generated by the grammar. Then we randomly select a string to resample, and sample a parse of that string with a PCFG approximation to the adaptor grammar. This PCFG contains a production for each adapted subtree in the parses of the other strings in the training corpus. A final accept-reject step corrects for the difference in the probability of the sampled tree under the adaptor grammar and the PCFG approximation. 3 Word segmentation with adaptor grammars We now turn to linguistic applications of adaptor grammars, specifically, to models of unsupervised word segmentation. We follow previous work in using the Brent corpus consists of 9790 transcribed utterances (33,399 words) of childdirected speech from the Bernstein-Ratner corpus (Bernstein-Ratner, 1987) in the CHILDES database (MacWhinney and Snow, 1985). The utterances have been converted to a phonemic representation using a phonemic dictionary, so that each occurrence of a word has the same phonemic transcription. Utterance boundaries are given in the input to the system; other word boundaries are not. We evaluated the f-score of the recovered word constituents (Goldwater et al., 2006b). Using the adaptor grammar software available on the author’s web site, samplers were run for 10,000 epochs (passes through the training data). We scored the parses assigned to the training data at the end of sampling, and for the last two epochs we annealed at temperature 0.5 (i.e., squared the probability) during sampling in or400 1 10 100 1000 U word 0.55 0.55 0.55 0.53 U morph 0.46 0.46 0.42 0.36 U syll 0.52 0.51 0.49 0.46 C word 0.53 0.64 0.74 0.76 C morph 0.56 0.63 0.73 0.63 C syll 0.77 0.77 0.78 0.74 Table 1: Word segmentation f-score results for all models, as a function of DP concentration parameter α. “U” indicates unigram-based grammars, while “C” indicates collocation-based grammars. Sentence →Word+ Word →Phoneme+ Figure 1: The unigram word adaptor grammar, which uses a unigram model to generate a sequence of words, where each word is a sequence of phonemes. Adapted nonterminals are underlined. der to concentrate mass on high probability parses. In all experiments below we set β = 1, which corresponds to a uniform prior on PCFG rule probabilities θ. We tied the Dirichlet Process concentration parameters α, and performed runs with α = 1, 10, 100 and 1000; apart from this, no attempt was made to optimize the hyperparameters. Table 1 summarizes the word segmentation f-scores for all models described in this paper. 3.1 Unigram word adaptor grammar Johnson et al. (2007a) presented an adaptor grammar that defines a unigram model of word segmentation and showed that it performs as well as the unigram DP word segmentation model presented by (Goldwater et al., 2006a). The adaptor grammar that encodes a unigram word segmentation model shown in Figure 1. In this grammar and the grammars below, underlining indicates an adapted nonterminal. Phoneme is a nonterminal that expands to each of the 50 distinct phonemes present in the Brent corpus. This grammar defines a Sentence to consist of a sequence of Words, where a Word consists of a sequence of Phonemes. The category Word is adapted, which means that the grammar learns the words that occur in the training corpus. 
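To make the role of adaptation concrete, the following Python sketch implements the Polya-urn (Chinese restaurant) behaviour of a single adapted nonterminal such as Word, as described in section 2: previously generated subtrees are cached and reused with probability proportional to their counts, while new subtrees are drawn from the PCFG base distribution. This shows only the generative side of the model under simple assumptions; posterior inference in the actual system uses the PCFG approximation and Metropolis-Hastings correction mentioned above, and alpha and base_sample are placeholders.

import random
from collections import Counter

class AdaptedNonterminal:
    # Chinese-restaurant-style cache for one adapted nonterminal (e.g. Word).
    # base_sample() draws a fresh subtree (here, any hashable representation
    # such as a phoneme string) from the PCFG base distribution H_A.
    def __init__(self, alpha, base_sample):
        self.alpha = alpha
        self.base_sample = base_sample
        self.cache = Counter()   # subtree -> number of times generated so far
        self.total = 0

    def generate(self):
        # Reuse a cached subtree with probability total/(total + alpha),
        # choosing among cached subtrees in proportion to their counts;
        # otherwise draw a new subtree from the base distribution.
        if self.total > 0 and random.random() < self.total / (self.total + self.alpha):
            subtrees = list(self.cache.keys())
            weights = list(self.cache.values())
            chosen = random.choices(subtrees, weights)[0]
        else:
            chosen = self.base_sample()
        self.cache[chosen] += 1
        self.total += 1
        return chosen

Frequently generated words accumulate high cache counts and are therefore reused, which is the sense in which the adapted Word nonterminal learns the words of the corpus.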
We present our adapSentence →Words Words →Word Words →Word Words Word →Phonemes Phonemes →Phoneme Phonemes →Phoneme Phonemes Figure 2: The unigram word adaptor grammar of Figure 1 where regular expressions are expanded using new unadapted right-branching nonterminals. Sentence Word y u w a n t Word t u Word s i D 6 Word b U k Figure 3: A parse of the phonemic representation of “you want to see the book” produced by unigram word adaptor grammar of Figure 1. Only nonterminal nodes labeled with adapted nonterminals and the start symbol are shown. tor grammars using regular expressions for clarity, but since our implementation does not handle regular expressions in rules, in the grammars actually used by the program they are expanded using new non-adapted nonterminals that rewrite in a uniform right-branching manner. That is, the adaptor grammar used by the program is shown in Figure 2. The unigram word adaptor grammar generates parses such as the one shown in Figure 3. With α = 1 and α = 10 we obtained a word segmentation fscore of 0.55. Depending on the run, between 1, 100 and 1, 400 subtrees (i.e., new rules) were found for Word. As reported in Goldwater et al. (2006a) and Goldwater et al. (2007), a unigram word segmentation model tends to undersegment and misanalyse collocations as individual words. This is presumably because the unigram model has no way to capture dependencies between words in collocations except to make the collocation into a single word. 3.2 Unigram morphology adaptor grammar This section investigates whether learning morphology together with word segmentation improves word segmentation accuracy. Johnson et al. (2007a) presented an adaptor grammar for segmenting verbs into stems and suffixes that implements the DP401 Sentence →Word+ Word →Stem (Suffix) Stem →Phoneme+ Suffix →Phoneme+ Figure 4: The unigram morphology adaptor grammar, which generates each Sentence as a sequence of Words, and each Word as a Stem optionally followed by a Suffix. Parentheses indicate optional constituents. Sentence Word Stem w a n Suffix 6 Word Stem k l o z Suffix I t Sentence Word Stem y u Suffix h & v Word Stem t u Word Stem t E l Suffix m i Figure 5: Parses of “wanna close it” and “you have to tell me” produced by the unigram morphology grammar of Figure 4. The first parse was chosen because it demonstrates how the grammar is intended to analyse “wanna” into a Stem and Suffix, while the second parse shows how the grammar tends to use Stem and Suffix to capture collocations. based unsupervised morphological analysis model presented by Goldwater et al. (2006b). Here we combine that adaptor grammar with the unigram word segmentation grammar to produce the adaptor grammar shown in Figure 4, which is designed to simultaneously learn both word segmentation and morphology. Parentheses indicate optional constituents in these rules, so this grammar says that a Sentence consists of a sequence of Words, and each Word consists of a Stem followed by an optional Suffix. The categories Word, Stem and Suffix are adapted, which means that the grammar learns the Words, Stems and Suffixes that occur in the training corpus. Technically this grammar implements a Hierarchical Dirichlet Process (HDP) (Teh et al., 2006) because the base distribution for the Word DP is itself constructed from the Stem and Suffix distributions, which are themselves generated by DPs. 
This grammar recovers words with an f-score of only 0.46 with α = 1 or α = 10, which is considerably less accurate than the unigram model of section 3.1. Typical parses are shown in Figure 5. The unigram morphology grammar tends to misanalyse even longer collocations as words than the unigram word grammar does. Inspecting the parses shows that rather than capturing morphological structure, the Stem and Suffix categories typically expand to words themselves, so the Word category expands to a collocation. It may be possible to correct this by “tuning” the grammar’s hyperparameters, but we did not attempt this here. These results are not too surprising, since the kind of regular stem-suffix morphology that this grammar can capture is not common in the Brent corpus. It is possible that a more sophisticated model of morphology, or even a careful tuning of the Bayesian prior parameters α and β, would produce better results. 3.3 Unigram syllable adaptor grammar PCFG estimation procedures have been used to model the supervised and unsupervised acquisition of syllable structure (M¨uller, 2001; M¨uller, 2002); and the best performance in unsupervised acquisition is obtained using a grammar that encodes linguistically detailed properties of syllables whose rules are inferred using a fairly complex algorithm (Goldwater and Johnson, 2005). While that work studied the acquisition of syllable structure from isolated words, here we investigate whether learning syllable structure together with word segmentation improves word segmentation accuracy. Modeling syllable structure is a natural application of adaptor grammars, since the grammar can learn the possible onset and coda clusters, rather than requiring them to be stipulated in the grammar. In the unigram syllable adaptor grammar shown in Figure 7, Consonant expands to any consonant and Vowel expands to any vowel. This grammar defines a Word to consist of up to three Syllables, where each Syllable consists of an Onset and a Rhyme and a Rhyme consists of a Nucleus and a Coda. Following Goldwater and Johnson (2005), the grammar differentiates between OnsetI, which expands to word-initial onsets, and Onset, 402 Sentence Word OnsetI W Nucleus A CodaF t s Word OnsetI D Nucleus I CodaF s Figure 6: A parse of “what’s this” produced by the unigram syllable adaptor grammar of Figure 7. (Only adapted non-root nonterminals are shown in the parse). which expands to non-word-initial onsets, and between CodaF, which expands to word-final codas, and Coda, which expands to non-word-final codas. Note that we do not need to distinguish specific positions within the Onset and Coda clusters as Goldwater and Johnson (2005) did, since the adaptor grammar learns these clusters directly. Just like the unigram morphology grammar, the unigram syllable grammar also defines a HDP because the base distribution for Word is defined in terms of the Onset and Rhyme distributions. The unigram syllable grammar achieves a word segmentation f-score of 0.52 at α = 1, which is also lower than the unigram word grammar achieves. Inspection of the parses shows that the unigram syllable grammar also tends to misanalyse long collocations as Words. Specifically, it seems to misanalyse function words as associated with the content words next to them, perhaps because function words tend to have simpler initial and final clusters. 
We cannot compare our syllabification accuracy with Goldwater’s and others’ previous work because that work used different, supervised training data and phonological representations based on British rather than American pronunciation. 3.4 Collocation word adaptor grammar Goldwater et al. (2006a) showed that modeling dependencies between adjacent words dramatically improves word segmentation accuracy. It is not possible to write an adaptor grammar that directly implements Goldwater’s bigram word segmentation model because an adaptor grammar has one DP per adapted nonterminal (so the number of DPs is fixed in advance) while Goldwater’s bigram model has one DP per word type, and the number of word types is not known in advance. However it is posSentence →Word+ Word →SyllableIF Word →SyllableI SyllableF Word →SyllableI Syllable SyllableF Syllable →(Onset) Rhyme SyllableI →(OnsetI) Rhyme SyllableF →(Onset) RhymeF SyllableIF →(OnsetI) RhymeF Rhyme →Nucleus (Coda) RhymeF →Nucleus (CodaF) Onset →Consonant+ OnsetI →Consonant+ Coda →Consonant+ CodaF →Consonant+ Nucleus →Vowel+ Figure 7: The unigram syllable adaptor grammar, which generates each word as a sequence of up to three Syllables. Word-initial Onsets and word-final Codas are distinguished using the suffixes “I” and “F” respectively; these are propagated through the grammar to ensure that these appear in the correct positions. Sentence →Colloc+ Colloc →Word+ Word →Phoneme+ Figure 8: The collocation word adaptor grammar, which generates a Sentence as sequence of Colloc(ations), each of which consists of a sequence of Words. sible for an adaptor grammar to generate a sentence as a sequence of collocations, each of which consists of a sequence of words. These collocations give the grammar a way to model dependencies between words. With the DP concentration parameters α = 1000 we obtained a f-score of 0.76, which is approximately the same as the results reported by Goldwater et al. (2006a) and Goldwater et al. (2007). This suggests that the collocation word adaptor grammar can capture inter-word dependencies similar to those that improve the performance of Goldwater’s bigram segmentation model. 3.5 Collocation morphology adaptor grammar One of the advantages of working within a grammatical framework is that it is often easy to combine 403 Sentence Colloc Word y u Word w a n t Word t u Colloc Word s i Word D 6 Word b U k Figure 9: A parse of “you want to see the book” produced by the collocation word adaptor grammar of Figure 8. Sentence →Colloc+ Colloc →Word+ Word →Stem (Suffix) Stem →Phoneme+ Suffix →Phoneme+ Figure 10: The collocation morphology adaptor grammar, which generates each Sentence as a sequence of Colloc(ations), each Colloc as a sequence of Words, and each Word as a Stem optionally followed by a Suffix. different grammar fragments into a single grammar. In this section we combine the collocation aspect of the previous grammar with the morphology component of the grammar presented in section 3.2 to produce a grammar that generates Sentences as sequences of Colloc(ations), where each Colloc consists of a sequence of Words, and each Word consists of a Stem followed by an optional Suffix, as shown in Figure 10. This grammar achieves a word segmentation fscore of 0.73 at α = 100, which is much better than the unigram morphology grammar of section 3.2, but not as good as the collocation word grammar of the previous section. 
Inspecting the parses shows Sentence Colloc Word Stem y u Word Stem h & v Suffix t u Colloc Word Stem t E l Suffix m i Figure 11: A parse of the phonemic representation of “you have to tell me” using the collocation morphology adaptor grammar of Figure 10. Sentence Colloc Word OnsetI h Nucleus & CodaF v Colloc Word Nucleus 6 Word OnsetI d r Nucleus I CodaF N k Figure 12: A parse of “have a drink” produced by the collocation syllable adaptor grammar. (Only adapted nonroot nonterminals are shown in the parse). that while the ability to directly model collocations reduces the number of collocations misanalysed as words, function words still tend to be misanalysed as morphemes of two-word collocations. In fact, some of the misanalyses have a certain plausibility to them (e.g., “to” is often analysed as the suffix of verbs such as “have”, “want” and “like”, while “me” is often analysed as a suffix of verbs such as “show” and “tell”), but they lower the word f-score considerably. 3.6 Collocation syllable adaptor grammar The collocation syllable adaptor grammar is the same as the unigram syllable adaptor grammar of Figure 7, except that the first production is replaced with the following pair of productions. Sentence →Colloc+ Colloc →Word+ This grammar generates a Sentence as a sequence of Colloc(ations), each of which is composed of a sequence of Words, each of which in turn is composed of a sequence of Syll(ables). This grammar achieves a word segmentation fscore of 0.78 at α = 100, which is the highest fscore of any of the grammars investigated in this paper, including the collocation word grammar, which models collocations but not syllables. To confirm that the difference is significant, we ran a Wilcoxon test to compare the f-scores obtained from 8 runs of the collocation syllable grammar with α = 100 and the collocation word grammar with α = 1000, and found that the difference is significant at p = 0.006. 4 Conclusion and future work This paper has shown how adaptor grammars can be used to study a variety of different linguistic hy404 potheses about the interaction of morphology and syllable structure with word segmentation. Technically, adaptor grammars are a way of specifying a variety of Hierarchical Dirichlet Processes (HDPs) that can spread their support over an unbounded number of distinct subtrees, giving them the ability to learn which subtrees are most useful for describing the training corpus. Thus adaptor grammars move beyond simple parameter estimation and provide a principled approach to the Bayesian estimation of at least some types of linguistic structure. Because of this, less linguistic structure needs to be “built in” to an adaptor grammar compared to a comparable PCFG. For example, the adaptor grammars for syllable structure presented in sections 3.3 and 3.6 learn more information about syllable onsets and codas than the PCFGs presented in Goldwater and Johnson (2005). We used adaptor grammars to study the effects of modeling morphological structure, syllabification and collocations on the accuracy of a standard unsupervised word segmentation task. We showed how adaptor grammars can implement a previously investigated model of unsupervised word segmentation, the unigram word segmentation model. 
We then investigated adaptor grammars that incorporate one additional kind of information, and found that modeling collocations provides the greatest improvement in word segmentation accuracy, resulting in a model that seems to capture many of the same interword dependencies as the bigram model of Goldwater et al. (2006b). We then investigated grammars that combine these kinds of information. There does not seem to be a straight forward way to design an adaptor grammar that models both morphology and syllable structure, as morpheme boundaries typically do not align with syllable boundaries. However, we showed that an adaptor grammar that models collocations and syllable structure performs word segmentation more accurately than an adaptor grammar that models either collocations or syllable structure alone. This is not surprising, since syllable onsets and codas that occur word-peripherally are typically different to those that appear word-internally, and our results suggest that by tracking these onsets and codas, it is possible to learn more accurate word segmentation. There are a number of interesting directions for future work. In this paper all of the hyperparameters αA were tied and varied simultaneously, but it is desirable to learn these from data as well. Just before the camera-ready version of this paper was due we developed a method for estimating the hyperparameters by putting a vague Gamma hyper-prior on each αA and sampled using Metropolis-Hastings with a sequence of increasingly narrow Gamma proposal distributions, producing results for each model that are as good or better than the best ones reported in Table 1. The adaptor grammars presented here barely scratch the surface of the linguistically interesting models that can be expressed as Hierarchical Dirichlet Processes. The models of morphology presented here are particularly naive—they only capture regular concatenative morphology consisting of one paradigm class—which may partially explain why we obtained such poor results using morphology adaptor grammars. It’s straight forward to design an adaptor grammar that can capture a finite number of concatenative paradigm classes (Goldwater et al., 2006b; Johnson et al., 2007a). We’d like to learn the number of paradigm classes from the data, but doing this would probably require extending adaptor grammars to incorporate the kind of adaptive statesplitting found in the iHMM and iPCFG (Liang et al., 2007). There is no principled reason why this could not be done, i.e., why one could not design an HDP framework that simultaneously learns both the fragments (as in an adaptor grammar) and the states (as in an iHMM or iPCFG). However, inference with these more complex models will probably itself become more complex. The MCMC sampler of Johnson et al. (2007a) used here is satifactory for small and medium-sized problems, but it would be very useful to have more efficient inference procedures. It may be possible to adapt efficient split-merge samplers (Jain and Neal, 2007) and Variational Bayes methods (Teh et al., 2008) for DPs to adaptor grammars and other linguistic applications of HDPs. Acknowledgments This research was funded by NSF awards 0544127 and 0631667. 405 References N. Bernstein-Ratner. 1987. The phonology of parentchild speech. In K. Nelson and A. van Kleeck, editors, Children’s Language, volume 6. Erlbaum, Hillsdale, NJ. Rens Bod. 1998. Beyond grammar: an experience-based theory of language. CSLI Publications, Stanford, California. Sharon Goldwater and Mark Johnson. 
2005. Representational bias in unsupervised learning of syllable structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL2005), pages 112–119, Ann Arbor, Michigan, June. Association for Computational Linguistics. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006a. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 673–680, Sydney, Australia, July. Association for Computational Linguistics. Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006b. Interpolating between types and tokens by estimating power-law generators. In Y. Weiss, B. Sch¨olkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 459–466, Cambridge, MA. MIT Press. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2007. Distributional cues to word boundaries: Context is important. In David Bamman, Tatiana Magnitskaia, and Colleen Zaller, editors, Proceedings of the 31st Annual Boston University Conference on Language Development, pages 239–250, Somerville, MA. Cascadilla Press. Sonia Jain and Radford M. Neal. 2007. Splitting and merging components of a nonconjugate dirichlet process mixture model. Bayesian Analysis, 2(3):445–472. Mark Johnson, Thomas Griffiths, and Sharon Goldwater. 2007a. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139–146, Rochester, New York, April. Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007b. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. Aravind Joshi. 2003. Tree adjoining grammars. In Ruslan Mikkov, editor, The Oxford Handbook of Computational Linguistics, pages 483–501. Oxford University Press, Oxford, England. Percy Liang, Slav Petrov, Michael Jordan, and Dan Klein. 2007. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 688–697. Brian MacWhinney and Catherine Snow. 1985. The child language data exchange system. Journal of Child Language, 12:271–296. Karin M¨uller. 2001. Automatic detection of syllable boundaries combining the advantages of treebank and bracketed corpora training. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics. Karin M¨uller. 2002. Probabilistic context-free grammars for phonology. In Proceedings of the 6th Workshop of the ACL Special Interest Group in Computational Phonology (SIGPHON), pages 70–80, Philadelphia. Andreas Stolcke. 1994. Bayesian Learning of Probabilistic Language Models. Ph.D. thesis, University of California, Berkeley. Y. W. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. Yee Whye Teh, Kenichi Kurihara, and Max Welling. 2008. Collapsed variational inference for hdp. In J.C. Platt, D. Koller, Y. Singer, and S. 
Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA. 406
2008
46
Proceedings of ACL-08: HLT, pages 407–415, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Inducing Gazetteers for Named Entity Recognition by Large-scale Clustering of Dependency Relations Jun’ichi Kazama Japan Advanced Institute of Science and Technology (JAIST), Asahidai 1-1, Nomi, Ishikawa, 923-1292 Japan [email protected] Kentaro Torisawa National Institute of Information and Communications Technology (NICT), 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289 Japan [email protected] Abstract We propose using large-scale clustering of dependency relations between verbs and multiword nouns (MNs) to construct a gazetteer for named entity recognition (NER). Since dependency relations capture the semantics of MNs well, the MN clusters constructed by using dependency relations should serve as a good gazetteer. However, the high level of computational cost has prevented the use of clustering for constructing gazetteers. We parallelized a clustering algorithm based on expectationmaximization (EM) and thus enabled the construction of large-scale MN clusters. We demonstrated with the IREX dataset for the Japanese NER that using the constructed clusters as a gazetteer (cluster gazetteer) is a effective way of improving the accuracy of NER. Moreover, we demonstrate that the combination of the cluster gazetteer and a gazetteer extracted from Wikipedia, which is also useful for NER, can further improve the accuracy in several cases. 1 Introduction Gazetteers, or entity dictionaries, are important for performing named entity recognition (NER) accurately. Since building and maintaining high-quality gazetteers by hand is very expensive, many methods have been proposed for automatic extraction of gazetteers from texts (Riloff and Jones, 1999; Thelen and Riloff, 2002; Etzioni et al., 2005; Shinzato et al., 2006; Talukdar et al., 2006; Nadeau et al., 2006). Most studies using gazetteers for NER are based on the assumption that a gazetteer is a mapping from a multi-word noun (MN)1 to named entity categories such as “Tokyo Stock Exchange → {ORGANIZATION}”.2 However, since the correspondence between the labels and the NE categories can be learned by tagging models, a gazetteer will be useful as long as it returns consistent labels even if those returned are not the NE categories. By changing the perspective in such a way, we can explore more broad classes of gazetteers. For example, we can use automatically extracted hyponymy relations (Hearst, 1992; Shinzato and Torisawa, 2004), or automatically induced MN clusters (Rooth et al., 1999; Torisawa, 2001). For instance, Kazama and Torisawa (2007) used the hyponymy relations extracted from Wikipedia for the English NER, and reported improved accuracies with such a gazetteer. We focused on the automatically induced clusters of multi-word nouns (MNs) as the source of gazetteers. We call the constructed gazetteers cluster gazetteers. In the context of tagging, there are several studies that utilized word clusters to prevent the data sparseness problem (Kazama et al., 2001; Miller et al., 2004). However, these methods cannot produce the MN clusters required for constructing gazetteers. In addition, the clustering methods used, such as HMMs and Brown’s algorithm (Brown et al., 1992), seem unable to adequately capture the semantics of MNs since they are based only on the information of adjacent words. 
We utilized richer 1We used the term, “multi-word”, to emphasize that a gazetteer includes not only one-word expressions but also multi-word expressions. 2Although several categories can be associated in general, we assume that only one category is associated. 407 syntactic/semantic structures, i.e., verb-MN dependencies to make clean MN clusters. Rooth et al. (1999) and Torisawa (2001) showed that the EMbased clustering using verb-MN dependencies can produce semantically clean MN clusters. However, the clustering algorithms, especially the EM-based algorithms, are computationally expensive. Therefore, performing the clustering with a vocabulary that is large enough to cover the many named entities required to improve the accuracy of NER is difficult. We enabled such large-scale clustering by parallelizing the clustering algorithm, and we demonstrate the usefulness of the gazetteer constructed. We parallelized the algorithm of (Torisawa, 2001) using the Message Passing Interface (MPI), with the prime goal being to distribute parameters and thus enable clustering with a large vocabulary. Applying the parallelized clustering to a large set of dependencies collected from Web documents enabled us to construct gazetteers with up to 500,000 entries and 3,000 classes. In our experiments, we used the IREX dataset (Sekine and Isahara, 2000) to demonstrate the usefulness of cluster gazetteers. We also compared the cluster gazetteers with the Wikipedia gazetteer constructed by following the method of (Kazama and Torisawa, 2007). The improvement was larger for the cluster gazetteer than for the Wikipedia gazetteer. We also investigated whether these gazetteers improve the accuracies further when they are used in combination. The experimental results indicated that the accuracy improved further in several cases and showed that these gazetteers complement each other. The paper is organized as follows. In Section 2, we explain the construction of cluster gazetteers and its parallelization, along with a brief explanation of the construction of the Wikipedia gazetteer. In Section 3, we explain how to use these gazetteers as features in an NE tagger. Our experimental results are reported in Section 4. 2 Gazetteer Induction 2.1 Induction by MN Clustering Assume we have a probabilistic model of a multiword noun (MN) and its class: p(n, c) = p(n|c)p(c), where n ∈N is an MN and c ∈C is a class. We can use this model to construct a gazetteer in several ways. The method we used in this study constructs a gazetteer: n →argmax c p(c|n). This computation can be re-written by the Bayes rule as argmax c p(n|c)p(c) using p(n|c) and p(c). Note that we do not exclude non-NEs when we construct the gazetteer. We expect that tagging models (CRFs in our case) can learn an appropriate weight for each gazetteer match regardless of whether it is an NE or not. 2.2 EM-based Clustering using Dependency Relations To learn p(n|c) and p(c) for Japanese, we use the EM-based clustering method presented by Torisawa (2001). This method assumes a probabilistic model of verb-MN dependencies with hidden semantic classes:3 p(v, r, n) = ∑ c p(〈v, r〉|c)p(n|c)p(c), (1) where v ∈V is a verb and n ∈N is an MN that depends on verb v with relation r. A relation, r, is represented by Japanese postpositions attached to n. For example, from the following Japanese sentence, we extract the following dependency: v = 飲む(drink), r = を(”wo” postposition), n = ビール(beer). 
ビール(beer) を(wo) 飲む(drink) (≈drink beer) In the following, we let vt ≡〈v, r〉∈VT for the simplicity of explanation. To be precise, we attach various auxiliary verb suffixes, such as “れる(reru)”, which is for passivization, into v, since these greatly change the type of n in the dependent position. In addition, we also treated the MN-MN expressions, “MN1 のMN2” (≈“MN2 of MN1”), as dependencies v = MN2, r = の, n = MN1, since these expressions also characterize the dependent MNs well. Given L training examples of verb-MN dependencies {(vti, ni, fi)}L i=1, where fi is the number of dependency (vti, ni) in a corpus, the EM-based clustering tries to find p(vt|c), p(n|c), and p(c) that maximize the (log)-likelihood of the training examples: LL(p) = ∑ i fi log( ∑ c p(vti|c)p(ni|c)p(c)). (2) 3This formulation is based on the formulation presented in Rooth et al. (1999) for English. 408 We iteratively update the probabilities using the EM algorithm. For the update procedures used, see Torisawa (2001). The corpus we used for collecting dependencies was a large set (76 million) of Web documents, that were processed by a dependency parser, KNP (Kurohashi and Kawahara, 2005).4 From this corpus, we extracted about 380 million dependencies of the form {(vti, ni, fi)}L i . 2.3 Parallelization for Large-scale Data The disadvantage of the clustering algorithm described above is the computational costs. The space requirements are O(|VT ||C|+|N||C|+|C|) for storing the parameters, p(vt|c), p(n|c), and p(c)5, plus O(L) for storing the training examples. The time complexity is mainly O(L × |C| × I), where I is the number of update iterations. The space requirements are the main limiting factor. Assume that a floating-point number consumes 8 bytes. With the setting, |N| = 500, 000, |VT | = 500, 000, and |C| = 3, 000, the algorithm requires more than 44 GB for the parameters and 4 GB of memory for the training examples. A machine with more than 48 GB of memory is not widely available even today. Therefore, we parallelized the clustering algorithm, to make it suitable for running on a cluster of PCs with a moderate amount of memory (e.g., 8 GB). First, we decided to store the training examples on a file since otherwise each node would need to store all the examples when we use the data splitting described below, and having every node consume 4 GB of memory is memory-consuming. Since the access to the training data is sequential, this does not slow down the execution when we use a buffering technique appropriately.6 We then split the matrix for the model parameters, p(n|c) and p(vt|c), along with the class coordinate. That is, each cluster node is responsible for storing only a part of classes Cl, i.e., 1/|P| of the parameter matrix, where P is the number of cluster nodes. This data splitting enables linear scalability of memory sizes. However, doing so complicates the update procedure and, in terms of execution speed, may 4Acknowledgements: This corpus was provided by Dr. Daisuke Kawahara of NICT. 5To be precise, we need two copies of these. 6Each node has a copy of the training data on a local disk. Algorithm 2.1: Compute p(cl|vti, ni) localZ = 0, Z = 0 for cl ∈Cl do    d = p(vti|c)p(ni|c)p(c) p(cl|vti, ni) = d localZ += d MPI Allreduce( localZ, Z, 1, MPI DOUBLE, MPI SUM, MPI COMM WORLD) for cl ∈Cl do p(cl|vti, ni) /= Z Figure 1: Parallelized inner-most routine of EM clustering algorithm. Each node executes this code in parallel. 
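The routine in Figure 1 can be pictured with the sketch below; it is an illustrative reconstruction using mpi4py rather than the authors' MPI code, and it assumes each node already holds the parameter tables for its own shard of classes (local_classes, i.e., Cl).

```python
# Illustrative reconstruction of Figure 1 with mpi4py (not the authors' implementation).
from mpi4py import MPI

comm = MPI.COMM_WORLD

def posterior_over_local_classes(vt, n, local_classes,
                                 p_vt_given_c, p_n_given_c, p_c):
    """Compute p(c | vt, n) (Eq. 3) for the classes stored on this node (Cl)."""
    unnormalized = {c: p_vt_given_c[c].get(vt, 0.0) * p_n_given_c[c].get(n, 0.0) * p_c[c]
                    for c in local_classes}
    local_z = sum(unnormalized.values())
    # Sum the local normalizers across all nodes: the MPI_Allreduce call in Figure 1.
    z = comm.allreduce(local_z, op=MPI.SUM)
    return {c: v / z for c, v in unnormalized.items()} if z > 0.0 else unnormalized
```

In the actual system the normalizers are accumulated over batches of B examples so that one allreduce call serves many examples; the sketch normalizes a single example for clarity.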
offset the advantage of parallelization because each node needs to receive information about the classes that are not on the node in the inner-most routine of the update procedure. The inner-most routine should compute: p(c|vti, ni) = p(vti|c)p(ni|c)p(c)/Z, (3) for each class c, where Z = ∑ c p(vti|c)p(ni|c)p(c) is a normalizing constant. However, Z cannot be calculated without knowing the results of other cluster nodes. Thus, if we use MPI for parallelization, the parallelized version of this routine should resemble the algorithm shown in Figure 1. This routine first computes p(vti|cl)p(ni|cl)p(cl) for each cl ∈Cl, and stores the sum of these values as localZ. The routine uses an MPI function, MPI Allreduce, to sum up localZ of the all cluster nodes and to set Z with the resulting sum. We can compute p(cl|vti, ni) by using this Z to normalize the value. Although the above is the essence of our parallelization, invoking MPI Allreduce in the inner-most loop is very expensive because the communication setup is not so cheap. Therefore, our implementation calculates p(cl|vti, ni) in batches of B examples and calls MPI Allreduce at every B examples.7 We used a value of B = 4, 096 in this study. By using this parallelization, we successfully performed the clustering with |N| = 500, 000, |VT | = 500, 000, |C| = 3, 000, and I = 150, on 8 cluster nodes with a 2.6 GHz Opteron processor and 8 GB of memory. This clustering took about a week. To our knowledge, no one else has performed EMbased clustering of this type on this scale. The resulting MN clusters are shown in Figure 2. In terms of speed, our experiments are still at a preliminary 7MPI Allreduce can also take array arguments and apply the operation to each element of the array in one call. 409 Class 791 Class 2760 ウィンダム   (WINDOM) マリン/スタジアム      (Chiba Marine Stadium [abb.]) カムリ     (CAMRY) 大阪/ドーム         (Osaka Dome) ディアマンテ  (DIAMANTE) ナゴ/ド          (Nagoya Dome [abb.]) オデッセイ   (ODYSSEY) 福岡/ドーム         (Fukuoka Dome) インスパイア   (INSPIRE) 大阪/球場          (Osaka Stadium) スイフト   (SWIFT) ハマ/スタ          (Yokohama Stadium [abb.]) Figure 2: Clean MN clusters with named entity entries (Left: car brand names. Right: stadium names). Names are sorted on the basis of p(c|n). Stadium names are examples of multi-word nouns (word boundaries are indicated by “/”) and also include abbreviated expressions (marked by [abb.]) . stage. We have observed 5 times faster execution, when using 8 cluster nodes with a relatively small setting, |N| = |VT | = 50, 000, |C| = 2, 000. 2.4 Induction from Wikipedia Defining sentences in a dictionary or an encyclopedia have long been used as a source of hyponymy relations (Tsurumaru et al., 1991; Herbelot and Copestake, 2006). Kazama and Torisawa (2007) extracted hyponymy relations from the first sentences (i.e., defining sentences) of Wikipedia articles and then used them as a gazetteer for NER. We used this method to construct the Wikipedia gazetteer. The method described by Kazama and Torisawa (2007) is to first extract the first (base) noun phrase after the first “is”, “was”, “are”, or “were” in the first sentence of a Wikipedia article. The last word in the noun phase is then extracted and becomes the hypernym of the entity described by the article. For example, from the following defining sentence, it extracts “guitarist” as the hypernym for “Jimi Hendrix”. Jimi Hendrix (November 27, 1942) was an American guitarist, singer and songwriter. 
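A rough sketch of the English extraction heuristic described above follows; the whitespace tokenization and the crude base-NP cut-off are simplifications standing in for proper noun-phrase chunking, so this approximates the procedure rather than reproducing the authors' implementation.

```python
def extract_hypernym(defining_sentence):
    """Last word of the first base NP after the first copula, or 'UNK' if none is found."""
    tokens = defining_sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in ("is", "was", "are", "were"):
            noun_phrase = []
            for raw in tokens[i + 1:]:
                word = raw.strip(",;.").lower()
                if not word or word in ("and", "or"):
                    break          # crude cut-off at the first conjunction
                noun_phrase.append(word)
                if raw[-1] in ",;.":
                    break          # crude cut-off at the first punctuation mark
            return noun_phrase[-1] if noun_phrase else "UNK"
    return "UNK"

# e.g. extract_hypernym("Jimi Hendrix (November 27, 1942) was an American "
#                       "guitarist, singer and songwriter.") returns "guitarist"
# under this crude approximation.
```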
The second noun phrase is used when the first noun phrase ends with “one”, “kind”, “sort”, or “type”, or it ended with “name” followed by “of”. This rule is for treating expressions like “... is one of the landlocked countries.” By applying this method of extraction to all the articles in Wikipedia, we # instances page titles processed 550,832 articles found 547,779 (found by redirection) (189,222) first sentences found 545,577 hypernyms extracted 482,599 Table 1: Wikipedia gazetteer extraction construct a gazetteer that maps an MN (a title of a Wikipedia article) to its hypernym.8 When the hypernym extraction failed, a special hypernym symbol, e.g., “UNK”, was used. We modified this method for Japanese. After preprocessing the first sentence of an article using a morphological analyzer, MeCab9, we extracted the last noun after the appearance of Japanese postposition “は(wa)” (≈“is”). As in the English case, we also refrained from extracting expressions corresponding to “one of” and so on. From the Japanese Wikipedia entries of April 10, 2007, we extracted 550,832 gazetteer entries (482,599 entries have hypernyms other than UNK). Various statistics for this extraction are shown in Table 1. The number of distinct hypernyms in the gazetteer was 12,786. Although this Wikipedia gazetteer is much smaller than the English version used by Kazama and Torisawa (2007) that has over 2,000,000 entries, it is the largest gazetteer that can be freely used for Japanese NER. Our experimental results show that this Wikipedia gazetteer can be used to improve the accuracy of Japanese NER. 3 Using Gazetteers as Features of NER Since Japanese has no spaces between words, there are several choices for the token unit used in NER. Asahara and Motsumoto (2003) proposed using characters instead of morphemes as the unit to alleviate the effect of segmentation errors in morphological analysis and we also used their character-based method. The NER task is then treated as a tagging task, which assigns IOB tags to each character in a sentence.10 We use Conditional Random Fields (CRFs) (Lafferty et al., 2001) to perform this tagging. The information of a gazetteer is incorporated 8They handled “redirections” as well by following redirection links and extracting a hypernym from the article reached. 9http://mecab.sourceforge.net 10Precisely, we use IOB2 tags. 410 ch にソ ニ ー が開発· · · match O B I I O O O · · · (w/ class) O B-会社I-会社I-会社O O O · · · Figure 3: Gazetteer features for Japanese NER. Here, ‘ソ ニー” means “SONY”, “会社” means “company”, and “ 開発” means “to develop”. as features in a CRF-based NE tagger. We follow the method used by Kazama and Torisawa (2007), which encodes the matching with a gazetteer entity using IOB tags, with the modification for Japanese. They describe using two types of gazetteer features. The first is a matching-only feature, which uses bare IOB tags to encode only matching information. The second uses IOB tags that are augmented with classes (e.g., B-country and I-country).11 When there are several possibilities for making a match, the left-most longest match is selected. The small differences from their work are: (1) We used characters as the unit as we described above, (2) While Kazama and Torisawa (2007) checked only the word sequences that start with a capitalized word and thus exploited the characteristics of English language, we checked the matching at every character, (3) We used a TRIE to make the look-up efficient. 
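The left-most longest matching that produces these IOB features can be sketched as follows; a plain dictionary of gazetteer entries and the max_len cap are assumptions standing in for the TRIE the authors used, but the emitted tags follow the same scheme as Figure 3.

```python
def gazetteer_iob_tags(chars, gazetteer, max_len=20):
    """Left-most longest matching of gazetteer entries over a character sequence.

    chars:     the sentence as a list of characters
    gazetteer: dict mapping an entry string to its class label (e.g. a cluster id)
    Returns class-augmented IOB tags; dropping the "-label" suffix gives the
    matching-only variant.
    """
    tags = ["O"] * len(chars)
    i = 0
    while i < len(chars):
        found = None
        for j in range(min(len(chars), i + max_len), i, -1):   # longest match first
            candidate = "".join(chars[i:j])
            if candidate in gazetteer:
                found = (j, gazetteer[candidate])
                break
        if found is None:
            i += 1
            continue
        end, label = found
        tags[i] = "B-" + label
        for k in range(i + 1, end):
            tags[k] = "I-" + label
        i = end
    return tags
```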
The output of gazetteer features for Japanese NER are thus as those shown in Figure 3. These annotated IOB tags can be used in the same way as other features in a CRF tagger. 4 Experiments 4.1 Data We used the CRL NE dataset provided in the IREX competition (Sekine and Isahara, 2000). In the dataset, 1,174 newspaper articles are annotated with 8 NE categories: ARTIFACT, DATE, LOCATION, MONEY, ORGANIZATION, PERCENT, PERSON, and TIME.12 We converted the data into the CoNLL 2003 format, i.e., each row corresponds to a character in this case. We obtained 11,892 sentences13 with 18,677 named entities. We split this data into the training set (9,000 sentences), the de11Here, we call the value returned by a gazetteer a “class”. Features are not output when the returned class is UNK in the case of the Wikipedia gazetteer. We did not observe any significant change if we also used UNK. 12We ignored OPTIONAL category. 13This number includes the number of -DOCSTART- tokens in CoNLL 2003 format. Name Description ch character itself ct character type: uppercase alphabet, lowercase alphabet, katakana, hiragana, Chinese characters, numbers, numbers in Chinese characters, and spaces m mo bare IOB tag indicating boundaries of morphemes m mm IOB tag augmented by morpheme string, indicating boundaries and morphemes m mp IOB tag augmented by morpheme type, indicating boundaries and morpheme types (POSs) bm bare IOB tag indicating “bunsetsu” boundaries (Bunsetsu is a basic unit in Japanese and usually contains content words followed by function words such as postpositions) bi bunsetsu-inner feature. See (Nakano and Hirai, 2004). bp adjacent-bunsetsu feature. See (Nakano and Hirai, 2004). bh head-of-bunsetsu features. See (Nakano and Hirai, 2004). Table 2: Atomic features used in baseline model. velopment set (1,446 sentences), and the testing set (1,446 sentences). 4.2 Baseline Model We extracted the atomic features listed in Table 2 at each character for our baseline model. Though there may be slight differences, these features are based on the standard ones proposed and used in previous studies on Japanese NER such as those by Asahara and Motsumoto (2003), Nakano and Hirai (2004), and Yamada (2007). We used MeCab as a morphological analyzer and CaboCha14 (Kudo and Matsumoto, 2002) as the dependency parser to find the boundaries of the bunsetsu. We generated the node and the edge features of a CRF model as described in Table 3 using these atomic features. 4.3 Training To train CRF models, we used Taku Kudo’s CRF++ (ver. 0.44) 15 with some modifications.16 We 14http://chasen.org/∼taku/software/ CaboCha 15http://chasen.org/˜taku/software/CRF++ 16We implemented scaling, which is similar to that for HMMs (Rabiner, 1989), in the forward-backward phase and replaced the optimization module in the original package with the 411 Node features: {””, x−2, x−1, x0, x+1, x+2} × y0 where x = ch, ct, m mm, m mo, m mp, bi, bp, and bh Edge features: {””, x−1, x0, x+1} × y−1 × y0 where x = ch, ct, and m mp Bigram node features: {x−2x−1, x−1x0, x0x+1} × y0 x = ch, ct, m mo, m mp, bm, bi, bp, and bh Table 3: Baseline features. Value of node feature is determined from current tag, y0, and surface feature (combination of atomic features in Table 2). Value of edge feature is determined by previous tag, y−1, current tag, y0, and surface feature. Subscripts indicate relative position from current character. used Gaussian regularization to prevent overfitting. The parameter of the Gaussian, σ2, was tuned using the development set. 
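The node templates of Table 3 amount to pairing each atomic feature in a five-character window with the current tag; the sketch below shows the surface-feature expansion only and is a simplified illustration, not the authors' CRF++ template file (the empty "" bias template and the edge/bigram templates are omitted).

```python
def node_surface_features(atomic, i,
                          names=("ch", "ct", "m_mm", "m_mo", "m_mp", "bi", "bp", "bh")):
    """Expand the Table-3 node templates at character position i.

    atomic: one dict of atomic features per character, e.g. atomic[i]["ch"].
    The CRF pairs each returned surface feature with the current tag y0.
    """
    feats = []
    for name in names:
        for offset in (-2, -1, 0, 1, 2):
            j = i + offset
            value = atomic[j][name] if 0 <= j < len(atomic) else "_OOB_"
            feats.append(f"{name}[{offset}]={value}")
    return feats
```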
We tested 10 points: {0.64, 1.28, 2.56, 5.12, . . . , 163.84, 327.68}. We stopped training when the relative change in the loglikelihood became less than a pre-defined threshold, 0.0001. Throughout the experiments, we omitted the features whose surface part described in Table 3 occurred less than twice in the training corpus. 4.4 Effect of Gazetteer Features We investigated the effect of the cluster gazetteer described in Section 2.1 and the Wikipedia gazetteer described in Section 2.4, by adding each gazetteer to the baseline model. We added the matchingonly and the class-augmented features, and we generated the node and the edge features in Table 3.17 For the cluster gazetteer, we made several gazetteers that had different vocabulary sizes and numbers of classes. The number of clustering iterations was 150 and the initial parameters were set randomly with a Dirichlet distribution (αi = 1.0). The statistics of each gazetteer are summarized in Table 4. The number of entries in a gazetteer is given by “# entries”, and “# matches” is the number of matches that were output for the training set. We define “# e-matches” as the number of matches that also match a boundary of a named entity in the training set, and “# optimal” as the optimal number of “# e-matches” that can be achieved when we know the LMVM optimizer of TAO (version 1.9) (Benson et al., 2007) 17Bigram node features were not used for gazetteer features. oracle of entity boundaries. Note that this cannot be realized because our matching uses the left-most longest heuristics. We define “pre.” as the precision of the output matches (i.e., # e-matches/# matches), and “rec.” as the recall (i.e., # e-matches/# NEs). Here, # NEs = 14, 056. Finally, “opt.” is the optimal recall (i.e., # optimal/# NEs). “# classes” is the number of distinct classes in a gazetteer, and “# used” is the number of classes that were output for the training set. Gazetteers are as follows: “wikip(m)” is the Wikipedia gazetteer (matching only), and “wikip(c)” is the Wikipedia gazetteer (with class-augmentation). A cluster gazetteer, which is constructed by the clustering with |N| = |VT | = X × 1, 000 and |C| = Y × 1, 000, is indicated by “cXk-Y k”. Note that “# entries” is slightly smaller than the vocabulary size since we removed some duplications during the conversion to a TRIE. These gazetteers cover 40 - 50% of the named entities, and the cluster gazetteers have relatively wider coverage than the Wikipedia gazetteer has. The precisions are very low because there are many erroneous matches, e.g., with a entries for a hiragana character.18 Although this seems to be a serious problem, removing such one-character entries does not affect the accuracy, and in fact, makes it worsen slightly. We think this shows one of the strengths of machine learning methods such as CRFs. We can also see that our current matching method is not an optimal one. For example, 16% of the matches were lost as a result of using our left-most longest heuristics for the case of the c500k-2k gazetteer. A comparison of the effect of these gazetteers is shown in Table 5. The performance is measured by the F-measure. First, the Wikipedia gazetteer improved the accuracy as expected, i.e., it reproduced the result of Kazama and Torisawa (2007) for Japanese NER. The improvement for the testing set was 1.08 points. Second, all the tested cluster gazetteers improved the accuracy. The largest improvement was 1.55 points with the c300k-3k gazetteer. This was larger than that of the Wikipedia gazetteer. 
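The precision and recall columns of Table 4 can be reproduced from the match spans alone; a rough sketch is given below, leaving out the oracle-based "# optimal" column since it requires gold boundaries during matching.

```python
def gazetteer_match_stats(match_spans, entity_spans):
    """Precision and recall of gazetteer matches against entity boundaries (Table 4).

    Both arguments are collections of (start, end) character spans; a match is an
    "e-match" when it coincides exactly with a gold named-entity span.
    """
    gold = set(entity_spans)
    e_matches = sum(1 for span in match_spans if span in gold)
    precision = e_matches / len(match_spans) if match_spans else 0.0
    recall = e_matches / len(gold) if gold else 0.0
    return precision, recall
```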
The results for c300k-Y k gazetteers show a peak of the improvement at some number of clusters. In this case, |C| = 3, 000 achieved the best improvement. The results of cXk-2k gazetteers in18Wikipedia contains articles explaining each hiragana character, e.g., “あis a hiragana character”. 412 Name # entries # matches # e-matches # optimal pre. (%) rec. (%) opt. rec. (%) # classes # used wikip(m) 550,054 225,607 6,804 7,602 3.02 48.4 54.1 N/A N/A wikip(c) 550,054 189,029 5,441 6,064 2.88 38.7 43.1 12,786 1,708 c100k-2k 99,671 193,897 6,822 8,233 3.52 48.5 58.6 2,000 1,910 c300k-2k 295,695 178,220 7,377 9,436 4.14 52.5 67.1 2,000 1,973 c300k-1k ↑ ↑ ↑ ↑ ↑ ↑ ↑ 1,000 982 c300k-3k ↑ ↑ ↑ ↑ ↑ ↑ ↑ 3,000 2,848 c300k-4k ↑ ↑ ↑ ↑ ↑ ↑ ↑ 4,000 3,681 c500k-2k 497,101 174,482 7,470 9,798 4.28 53.1 69.7 2,000 1,951 c500k-3k ↑ ↑ ↑ ↑ ↑ ↑ ↑ 3,000 2,854 Table 4: Statistics of various gazetteers. Model F (dev.) F (test.) best σ2 baseline 87.23 87.42 20.48 +wikip 87.60 88.50 2.56 +c300k-1k 88.74 87.98 40.96 +c300k-2k 88.75 88.01 163.84 +c300k-3k 89.12 88.97 20.48 +c300k-4k 88.99 88.40 327.68 +c100k-2k 88.15 88.06 20.48 +c500k-2k 88.80 88.12 40.96 +c500k-3k 88.75 88.03 20.48 Table 5: Comparison of gazetteer features. Model F (dev.) F (test.) best σ2 +wikip+c300k-1k 88.65 *89.32 0.64 +wikip+c300k-2k *89.22 *89.13 10.24 +wikip+c300k-3k 88.69 *89.62 40.96 +wikip+c300k-4k 88.67 *89.19 40.96 +wikip+c500k-2k *89.26 *89.19 2.56 +wikip+c500k-3k *88.80 *88.60 10.24 Table 6: Effect of combination. Figures with * mean that accuracy was improved by combining gazetteers. dicate that the larger a gazetteer is, the larger the improvement. However, the accuracies of the c300k-3k and c500k-3k gazetteers seem to contradict this tendency. It might be caused by the accidental low quality of the clustering that results from random initialization. We need to investigate this further. 4.5 Effect of Combining the Cluster and the Wikipedia Gazetteers We have observed that using the cluster gazetteer and the Wikipedia one improves the accuracy of Japanese NER. The next question is whether these gazetteers improve the accuracy further when they are used together. The accuracies of models that use the Wikipedia gazetteer and one of the cluster gazetteers at the same time are shown in Table 6. The accuracy was improved in most cases. HowModel F (Asahara and Motsumoto, 2003) 87.21 (Nakano and Hirai, 2004) 89.03 (Yamada, 2007) 88.33 (Sasano and Kurohashi, 2008) 89.40 proposed (baseline) 87.62 proposed (+wikip) 88.14 proposed (+c300k-3k) 88.45 proposed (+c500k-2k) 88.41 proposed (+wikip+c300k-3k) 88.93 proposed (+wikip+c500k-2k) 88.71 Table 7: Comparison with previous studies ever, there were some cases where the accuracy for the development set was degraded. Therefore, we should state at this point that while the benefit of combining these gazetteers is not consistent in a strict sense, it seems to exist. The best performance, F = 89.26 (dev.) / 89.19 (test.), was achieved when we combined the Wikipedia gazetteer and the cluster gazetteer, c500k-2k. This means that there was a 1.77-point improvement from the baseline for the testing set. 5 Comparison with Previous Studies Since many previous studies on Japanese NER used 5-fold cross validation for the IREX dataset, we also performed it for some our models that had the best σ2 found in the previous experiments. The results are listed in Table 7 with references to the results of recent studies. 
These results not only reconfirmed the effects of the gazetteer features shown in the previous experiments, but they also showed that our best model is comparable to the state-of-theart models. The system recently proposed by Sasano and Kurohashi (2008) is currently the best system for the IREX dataset. It uses many structural features that are not used in our model. Incorporating 413 such features might improve our model further. 6 Related Work and Discussion There are several studies that used automatically extracted gazetteers for NER (Shinzato et al., 2006; Talukdar et al., 2006; Nadeau et al., 2006; Kazama and Torisawa, 2007). Most of the methods (Shinzato et al., 2006; Talukdar et al., 2006; Nadeau et al., 2006) are oriented at the NE category. They extracted a gazetteer for each NE category and utilized it in a NE tagger. On the other hand, Kazama and Torisawa (2007) extracted hyponymy relations, which are independent of the NE categories, from Wikipedia and utilized it as a gazetteer. The effectiveness of this method was demonstrated for Japanese NER as well by this study. Inducing features for taggers by clustering has been tried by several researchers (Kazama et al., 2001; Miller et al., 2004). They constructed word clusters by using HMMs or Brown’s clustering algorithm (Brown et al., 1992), which utilize only information from neighboring words. This study, on the other hand, utilized MN clustering based on verbMN dependencies (Rooth et al., 1999; Torisawa, 2001). We showed that gazetteers created by using such richer semantic/syntactic structures improves the accuracy for NER. The size of the gazetteers is also a novel point of this study. The previous studies, with the exception of Kazama and Torisawa (2007), used smaller gazetteers than ours. Shinzato et al. (2006) constructed gazetteers with about 100,000 entries in total for the “restaurant” domain; Talukdar et al. (2006) used gazetteers with about 120,000 entries in total, and Nadeau et al. (2006) used gazetteers with about 85,000 entries in total. By parallelizing the clustering algorithm, we successfully constructed a cluster gazetteer with up to 500,000 entries from a large amount of dependency relations in Web documents. To our knowledge, no one else has performed this type of clustering on such a large scale. Wikipedia also produced a large gazetteer of more than 550,000 entries. However, comparing these gazetteers and ours precisely is difficult at this point because the detailed information such as the precision and the recall of these gazetteers were not reported.19 Recently, Inui et al. (2007) investi19Shinzato et al. (2006) reported some useful statistics about gated the relation between the size and the quality of a gazetteer and its effect. We think this is one of the important directions of future research. Parallelization has recently regained attention in the machine learning community because of the need for learning from very large sets of data. Chu et al. (2006) presented the MapReduce framework for a wide range of machine learning algorithms, including the EM algorithm. Newman et al. (2007) presented parallelized Latent Dirichlet Allocation (LDA). However, these studies focus on the distribution of the training examples and relevant computation, and ignore the need that we found for the distribution of model parameters. The exception, which we noticed recently, is a study by Wolfe et al. (2007), which describes how each node stores only those parameters relevant to the training data on each node. 
However, some parameters need to be duplicated and thus their method is less efficient than ours in terms of memory usage. We used the left-most longest heuristics to find the matching gazetteer entries. However, as shown in Table 4 this is not an optimal method. We need more sophisticated matching methods that can handle multiple matching possibilities. Using models such as Semi-Markov CRFs (Sarawagi and Cohen, 2004), which handle the features on overlapping regions, is one possible direction. However, even if we utilize the current gazetteers optimally, the coverage is upper bounded at 70%. To cover most of the named entities in the data, we need much larger gazetteers. A straightforward approach is to increase the number of Web documents used for the MN clustering and to use larger vocabularies. 7 Conclusion We demonstrated that a gazetteer obtained by clustering verb-MN dependencies is a useful feature for a Japanese NER. In addition, we demonstrated that using the cluster gazetteer and the gazetteer extracted from Wikipedia (also shown to be useful) can together further improves the accuracy in several cases. Future work will be to refine the matching method and to construct even larger gazetteers. their gazetteers. 414 References M. Asahara and Y. Motsumoto. 2003. Japanese named entity extraction with redundant morphological analysis. S. Benson, L. C. McInnes, J. Mor´e, T. Munson, and J. Sarich. 2007. TAO user manual (revision 1.9). Technical Report ANL/MCS-TM-242, Mathematics and Computer Science Division, Argonne National Laboratory. http://www.mcs.anl.gov/tao. P. F. Brown, V. J. Della Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. C.-T. Chu, S. K. Kim, Y.-A. Lin, Y. Yu, G. Bradski, A. Y. Ng, and K. Olukotun. 2006. Map-reduce for machine learning on multicore. In NIPS 2006. O. Etzioni, M. Cafarella, D. Downey, A. M. Popescu, T. Shaked, S. Soderland, D. S. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the Web – an experimental study. Artificial Intelligence Journal. M. A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of the 14th International Conference on Computational Linguistics, pages 539–545. A. Herbelot and A. Copestake. 2006. Acquiring ontological relationships from Wikipedia using RMRS. In Workshop on Web Content Mining with Human Language Technologies ISWC06. T. Inui, K. Murakami, T. Hashimoto, K. Utsumi, and M. Ishikawa. 2007. A study on using gazetteers for organization name recognition. In IPSJ SIG Technical Report 2007-NL-182 (in Japanese). J. Kazama and K. Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In EMNLP-CoNLL 2007. J. Kazama, Y. Miyao, and J. Tsujii. 2001. A maximum entropy tagger with unsupervised hidden Markov models. In NLPRS 2001. T. Kudo and Y. Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In CoNLL 2002. S. Kurohashi and D. Kawahara. 2005. KNP (KurohashiNagao parser) 2.0 users manual. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML 2001. S. Miller, J. Guinness, and A. Zamanian. 2004. Name tagging with word clusters and discriminative training. In HLT-NAACL04. D. Nadeau, Peter D. Turney, and Stan Matwin. 2006. Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity. 
In 19th Canadian Conference on Artificial Intelligence. K. Nakano and Y. Hirai. 2004. Japanese named entity extraction with bunsetsu features. IPSJ Journal (in Japanese). D. Newman, A. Asuncion, P. Smyth, and M. Welling. 2007. Distributed inference for latent dirichlet allocation. In NIPS 2007. L. R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. E. Riloff and R. Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In 16th National Conference on Artificial Intelligence (AAAI-99). M. Rooth, S. Riezler, D. Presher, G. Carroll, and F. Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. S. Sarawagi and W. W. Cohen. 2004. Semi-Markov random fields for information extraction. In NIPS 2004. R. Sasano and S. Kurohashi. 2008. Japanese named entity recognition using structural natural language processing. In IJCNLP 2008. S. Sekine and H. Isahara. 2000. IREX: IR and IE evaluation project in Japanese. In IREX 2000. K. Shinzato and K. Torisawa. 2004. Acquiring hyponymy relations from Web documents. In HLTNAACL 2004. K. Shinzato, S. Sekine, N. Yoshinaga, and K. Torisawa. 2006. Constructing dictionaries for named entity recognition on specific domains from the Web. In Web Content Mining with Human Language Technologies Workshop on the 5th International Semantic Web. P. P. Talukdar, T. Brants, M. Liberman, and F. Pereira. 2006. A context pattern induction method for named entity extraction. In CoNLL 2006. M. Thelen and E. Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern context. In EMNLP 2002. K. Torisawa. 2001. An unsupervised method for canonicalization of Japanese postpositions. In NLPRS 2001. H. Tsurumaru, K. Takeshita, K. Iami, T. Yanagawa, and S. Yoshida. 1991. An approach to thesaurus construction from Japanese language dictionary. In IPSJ SIG Notes Natural Language vol.83-16, (in Japanese). J. Wolfe, A. Haghighi, and D. Klein. 2007. Fully distributed EM for very large datasets. In NIPS Workshop on Efficient Machine Learning. H. Yamada. 2007. Shift-reduce chunking for Japanese named entity extraction. In ISPJ SIG Technical Report 2007-NL-179. 415
2008
47
Proceedings of ACL-08: HLT, pages 416–424, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Evaluating Roget’s Thesauri Alistair Kennedy School of Information Technology and Engineering University of Ottawa Ottawa, Ontario, Canada [email protected] Stan Szpakowicz School of Information Technology and Engineering University of Ottawa Ottawa, Ontario, Canada and Institute of Computer Science Polish Academy of Sciences Warsaw, Poland [email protected] Abstract Roget’s Thesaurus has gone through many revisions since it was first published 150 years ago. But how do these revisions affect Roget’s usefulness for NLP? We examine the differences in content between the 1911 and 1987 versions of Roget’s, and we test both versions with each other and WordNet on problems such as synonym identification and word relatedness. We also present a novel method for measuring sentence relatedness that can be implemented in either version of Roget’s or in WordNet. Although the 1987 version of the Thesaurus is better, we show that the 1911 version performs surprisingly well and that often the differences between the versions of Roget’s and WordNet are not statistically significant. We hope that this work will encourage others to use the 1911 Roget’s Thesaurus in NLP tasks. 1 Introduction Roget’s Thesaurus, first introduced over 150 years ago, has gone through many revisions to reach its current state. We compare two versions, the 1987 and 1911 editions of the Thesaurus with each other and with WordNet 3.0. Roget’s Thesaurus has a unique structure, quite different from WordNet, of which the NLP community has yet to take full advantage. In this paper we demonstrate that although the 1911 version of the Thesaurus is very old, it can give results comparable to systems that use WordNet or newer versions of Roget’s Thesaurus. The main motivation for working with the 1911 Thesaurus instead of newer versions is that it is in the public domain, along with related NLP-oriented software packages. For applications that call for an NLP-friendly thesaurus, WordNet has become the de-facto standard. Although WordNet is a fine resources, we believe that ignoring other thesauri is a serious oversight. We show on three applications how useful the 1911 Thesaurus is. We ran the wellestablished tasks of determining semantic relatedness of pairs of terms and identifying synonyms (Jarmasz and Szpakowicz, 2004). We also proposed a new method of representing the meaning of sentences or other short texts using either WordNet or Roget’s Thesaurus, and tested it on the data set provided by Li et al. (2006). We hope that this work will encourage others to use Roget’s Thesaurus in their own NLP tasks. Previous research on the 1987 version of Roget’s Thesaurus includes work of Jarmasz and Szpakowicz (2004). They propose a method of determining semantic relatedness between pairs of terms. Terms that appear closer together in the Thesaurus get higher weights than those farther apart. The experiments aimed at identifying synonyms using a modified version of the proposed semantic similarity function. Similar experiments were carried out using WordNet in combination with a variety of semantic relatedness functions. Roget’s Thesaurus was found generally to outperform WordNet on these problems. We have run similar experiments using the 1911Thesaurus. Lexical chains have also been developed using the 1987 Roget’s Thesaurus (Jarmasz and Szpakowicz, 2003). 
The procedure maps words in a text to the Head (a Roget’s concept) from which they are most likely to come. Although we did not experiment 416 with lexical chains here, they were an inspiration for our sentence relatedness function. Roget’s Thesaurus does not explicitly label the relations between its terms, as WordNet does. Instead, it groups terms together with implied relations. Kennedy and Szpakowicz (2007) show how disambiguating one of these relations, hypernymy, can help improve the semantic similarity functions in (Jarmasz and Szpakowicz, 2004). These hypernym relations were also put towards solving analogy questions. This is not the first time the 1911 version of Roget’s Thesaurus has been used in NLP research. Cassidy (2000) used it to build the semantic network FACTOTUM. This required significant (manual) restructuring, so FACTOTUM cannot really be considered a true version of Roget’s Thesaurus. The 1987 data come from Penguin’s Roget’s Thesaurus (Kirkpatrick, 1987). The 1911 version is available from Project Gutenberg1. We use WordNet 3.0, the latest version (Fellbaum, 1998). In the experiments we present here, we worked with an interface to Roget’s Thesaurus implemented in Java 5.02. It is built around a large index which stores the location in the thesaurus of each word or phrase; the system individually indexes all words within each phrase, as well as the phrase itself. This was shown to improve results in a few applications, which we will discuss later in the paper. 2 Content comparison of the 1911 and 1987 Thesauri Although the 1987 and 1911 Thesauri are very similar in structure, there are a few differences, among them, the number of levels and the number of partsof-speech represented. For example, the 1911 version contains some pronouns as well as more sections dedicated to phrases. There are nine levels in Roget’s Thesaurus hierarchy, from Class down to Word. We show them in Table 1 along with the counts of instances of each level. An example of a Class in the 1911 Thesaurus is “Words Expressing Abstract Relations”, a Section in that Class is “Quantity” with a Subsection “Comparative Quantity”. Heads can be thought of as the heart of the Thesaurus because it is at this level that 1http://www.gutenberg.org/ebooks/22 2http://rogets.site.uottawa.ca/ Hierarchy 1911 1987 Class 8 8 Section 39 39 Subsection 97 95 Head Group 625 596 Head 1044 990 Part-of-speech 3934 3220 Paragraph 10244 6443 Semicolon Group 43196 59915 Total Words 98924 225124 Unique Words 59768 100470 Table 1: Frequencies of each level of the hierarchy in the 1911 and 1987 Thesauri. the lexical material, organized into approximately a thousand concepts, resides. Head Groups often pair up opposites, for example Head #1 “Existence” and Head #2 “Nonexistence” are found in the same Head Group in both versions of the Thesaurus. Terms in the Thesaurus may be labelled with cross-references to other words in different Heads. We did not use these references in our experiments. The part-of-speech level is a little confusing, since clearly no such grouping contains an exhaustive list of all nouns, all verbs etc. We will write “POS” to indicate a structure in Roget’s and “part-of-speech” to indicate the word category in general. The four main parts-of-speech represented in a POS are nouns, verbs, adjectives and adverbs. Interjections are also included in both the 1911 and 1987 thesauri; they are usually phrases followed by an exclamation mark, such as “for God’s sake!” and “pshaw!”. 
The Paragraph and Semicolon Group are not given names, but can often be represented by the first word. The 1911 version also contains phrases (mostly quotations), prefixes and pronouns. There are only three prefixes – “tri-”, “tris-”, “laevo-” – and six pronouns – “he”, “him”, “his”, “she”, “her”, “hers”. Table 2 shows the frequency of paragraphs, semicolon groups and both total and unique words in a given type of POS. Many terms occur both in the 1911 and 1987 Thesauri, but many more are unique to either. Surprisingly, quite a few 1911 terms do not appear in the 1987 data, as shown in Table 3; many of them may have been considered obsolete and thus dropped from the 1987 version. For example “ingrafted” appears in the same semicolon group as 417 POS Paragraph Semicolon Grp 1911 1987 1911 1987 Noun 4495 2884 19215 31174 Verb 2402 1499 10838 13958 Adjective 2080 1501 9097 12893 Adverb 594 499 2028 1825 Interjection 108 60 149 65 Phrase 561 0 1865 0 Total Word Unique Words 1911 1987 1911 1987 Noun 46308 114473 29793 56187 Verb 25295 55724 15150 24616 Adjective 20447 48802 12739 21614 Adverb 4039 5720 3016 4144 Interjection 598 405 484 383 Phrase 2228 0 2038 0 Table 2: Frequencies of paragraphs, semicolon groups, total words and unique words by their part of speech; we omitted prefixes and pronouns. POS Both Only 1911 Only 1987 All 35343 24425 65127 N. 18685 11108 37502 Vb. 8618 6532 15998 Adj. 8584 4155 13030 Adv. 1684 1332 2460 Int. 68 416 315 Phr. 0 2038 0 Table 3: Frequencies of terms in either the 1911 or 1987 Thesaurus, and in both; we omitted prefixes and pronouns. “implanted” in the older but not the newer version. Some mismatches may be due to small changes in spelling, for example, “Nirvana” is capitalized in the 1911 version, but not in the 1987 version. The lexical data in Project Gutenberg’s 1911 Roget’s appear to have been somewhat added to. For example, the citation “Go ahead, make my day!” from the 1971 movie Dirty Harry appears twice (in Heads #715-Defiance and #761-Prohibition) within the Phrase POS. It is not clear to what extent new terms have been added to the original 1911 Roget’s Thesaurus, or what the criteria for adding such new elements could have been. In the end, there are many differences between the 1987 and 1911 Roget’s Thesauri, primarily in content rather than in structure. The 1987 Thesaurus is largely an expansion of the 1911 version, with three POSs (phrases, pronouns and prefixes) removed. 3 Comparison on applications In this section we consider how the two versions of Roget’s Thesaurus and WordNet perform in three applications – measuring word relatedness, synonym identification, and sentence relatedness. 3.1 Word relatedness Relatedness can be measured by the closeness of the words or phrases – henceforth referred to as terms – in the structure of the thesaurus. Two terms in the same semicolon group score 16, in the same paragraph – 14, and so on (Jarmasz and Szpakowicz, 2004). The score is 0 if the terms appear in different classes, or if either is missing. Pairs of terms get higher scores for being closer together. When there are multiple senses of two terms A and B, we want to select senses a ∈A and b ∈B that maximize the relatedness score. We define a distance function: semDist(A, B) = max a∈A,b∈B2 ∗(depth(lca(a, b))) lca is the lowest common ancestor and depth is the depth in the Roget’s hierarchy; a Class has depth 0, Section 1, ..., Semicolon Group 8. 
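A minimal sketch of this relatedness measure follows, assuming each sense is encoded as its path of category indices from Class down to Semicolon Group (an assumed representation; the authors work through their Java interface to Roget's). Sharing all eight levels scores 16, one level fewer scores 14, and so on, which also agrees with the equivalent edge-counting form given just below.

```python
def sem_dist(senses_a, senses_b):
    """Maximum Roget's-based relatedness over all sense pairs of two terms.

    Each sense is a tuple of eight category indices
    (Class, Section, Subsection, Head Group, Head, POS, Paragraph, Semicolon Group).
    Sharing all eight levels scores 16, sharing seven scores 14, and so on;
    senses in different Classes, or a missing term, score 0.
    """
    best = 0
    for a in senses_a:
        for b in senses_b:
            shared = 0
            for x, y in zip(a, b):
                if x != y:
                    break
                shared += 1
            best = max(best, 2 * shared)
    return best
```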
If we think of the function as counting edges between concepts in the Roget’s hierarchy, then it could also be written as: semDist(A, B) = max a∈A,b∈B16−edgesBetween(a, b) We do not count links between words in the same semicolon group, so in effect these methods find distances between semicolon groups, that is to say, these two functions will give the same results. The 1911 and 1987 Thesauri were compared with WordNet 3.0 on the three data sets containing pairs of words with manually assigned similarity scores: 30 pairs (Miller and Charles, 1991), 65 pairs (Rubenstein and Goodenough, 1965) and 353 pairs3 (Finkelstein et al., 2001). We assume that all terms are nouns, so that we can have a fair comparison of the two Thesauri with WordNet. We measure the correlation with Pearson’s Correlation Coefficient. 3http://www.cs.technion.ac.il/˜gabr/resources/data/ wordsim353/wordsim353.html 418 Year Miller & Rubenstein & Finkelstein Charles Goodenough et. al Index words and phrase 1911 0.7846 0.7313 0.3449 1987 0.7984 0.7865 0.4214 Index phrase only 1911 0.7090 0.7168 0.3373 1987 0.7471 0.7777 0.3924 Table 4: Pearson’s coefficient values when not breaking / breaking phrases up. A preliminary experiment set out to determine whether there is any advantage to indexing the words in a phrase separately, for example, whether the phrase “change of direction” should be indexed only as a whole, or as all of “change”, “of”, “direction” and “change of direction”. The outcome of this experiment appears in Table 4. There is a clear improvement: breaking phrases up gives superior results on all three data sets, for both versions of Roget’s. In the remaining experiments, we have each word in a phrase indexed. We compare the results for the 1911 and 1987 Roget’s Thesauri with a variety of WordNet-based semantic relatedness measures – see Table 5. We consider 10 measures, noted in the table as J&C (Jiang and Conrath, 1997), Resnik (Resnik, 1995), Lin (Lin, 1998), W&P (Wu and Palmer, 1994), L&C (Leacock and Chodorow, 1998), H&SO (Hirst and St-Onge, 1998), Path (counts edges between synsets), Lesk (Banerjee and Pedersen, 2002), and finally Vector and Vector Pair (Patwardhan, 2003). The latter two work with large vectors of cooccurring terms from a corpus, so WordNet is only part of the system. We used Pedersen’s Semantic Distance software package (Pedersen et al., 2004). The results suggest that neither version of Roget’s is best for these data sets. In fact, the Vector method is superior on all three sets, and the Lesk algorithm performs very closely to Roget’s 1987. Even on the largest set (Finkelstein et al., 2001), however, the differences between Roget’s Thesaurus and the Vector method are not statistically significant at the p < 0.05 level for either thesaurus on a two-tailed test4. The difference between the 1911 Thesaurus and Vector would be statistically signifi4http://faculty.vassar.edu/lowry/rdiff.html Method Miller & Rubenstein & Finkelstein Charles Goodenough et. al 1911 0.7846 0.7313 0.3449 1987 0.7984 0.7865 0.4214 J&C 0.4735 0.5755 0.2273 Resnik 0.8060 0.8224 0.3531 Lin 0.7388 0.7264 0.2932 W&P 0.7641 0.7973 0.2676 L&C 0.7792 0.8387 0.3094 H&SO 0.6668 0.7258 0.3548 Path 0.7550 0.7842 0.3744 Lesk 0.7954 0.7780 0.4220 Vector 0.8645 0.7929 0.4621 Vct Pair 0.5101 0.5810 0.3722 Table 5: Pearson’s coefficient values for three data sets on a variety of relatedness functions. cant at p < 0.07. 
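Evaluation on these data sets reduces to correlating the system scores with the human ratings; a small sketch using scipy is shown below, where the relatedness argument can be any of the measures compared above.

```python
from scipy.stats import pearsonr

def evaluate_relatedness(pairs, gold_scores, relatedness):
    """Pearson correlation between a relatedness measure and human judgments.

    pairs:       list of (word1, word2) tuples
    gold_scores: human ratings aligned with pairs
    relatedness: any function of two words returning a numeric score
    """
    system_scores = [relatedness(w1, w2) for w1, w2 in pairs]
    r, _ = pearsonr(system_scores, gold_scores)
    return r
```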
On the (Miller and Charles, 1991) and (Rubenstein and Goodenough, 1965) data sets the best system did not show a statistically significant improvement over the 1911 or 1987 Roget’s Thesauri, even at p < 0.1 for a two-tailed test. These data sets are too small for a meaningful comparison of systems with close correlation scores. 3.2 Synonym identification In this problem we take a term q and we seek the correct synonym s from a set C. There are two steps. We used the system from (Jarmasz and Szpakowicz, 2004) for identifying synonyms with Roget’s. First we find a set of terms B ⊆C with the maximum relatedness between q and each term x ∈C: B = {x | argmax x∈C semDist(x, q)} Next, we take the set of terms A ⊆B where each a ∈A has the maximum number of shortest paths between a and q. A = {x | argmax x∈B numberShortestPaths(x, q)} If s ∈A and |A| = 1, the correct synonym has been selected. Often the sets A and B will contain just one item. If s ∈A and |A| > 1, there is a tie. If s /∈A then the selected synonyms are incorrect. If a multi-word phrase c ∈C of length n is not found, 419 ESL Method Yes Tie No QNF ANF ONF 1911 27 3 20 0 3 3 1987 36 6 8 0 0 1 J&C 30 4 16 4 4 10 Resnik 26 6 18 4 4 10 Lin 31 5 14 4 4 10 W&P 31 6 13 4 4 10 L&C 29 11 10 4 4 10 H&SO 34 4 12 0 0 0 Path 30 11 9 4 4 10 Lesk 38 0 12 0 0 0 Vector 39 0 11 0 0 0 VctPair 40 0 10 0 0 0 TOEFL 1911 52 3 25 10 5 25 1987 59 7 14 4 4 17 J&C 34 37 9 33 31 90 Resnik 37 37 6 33 31 90 Lin 33 41 6 33 31 90 W&P 39 36 5 33 31 90 L&C 38 36 6 33 31 90 H&SO 60 16 4 1 0 1 Path 38 36 6 33 31 90 Lesk 70 1 9 1 0 1 Vector 69 1 10 1 0 1 VctPair 65 2 13 1 0 1 RDWP 1911 157 13 130 57 13 76 1987 198 17 85 22 5 17 J&C 100 146 54 62 58 150 Resnik 114 114 72 62 58 150 Lin 94 160 46 62 58 150 W&P 147 87 66 62 58 150 L&C 149 93 58 62 58 150 H&SO 170 82 48 4 6 5 Path 148 96 56 62 58 150 Lesk 220 7 73 4 6 5 Vector 216 7 73 4 6 5 VctPair 187 10 103 4 6 5 Table 6: Synonym selection experiments. it is replaced by each of its words c1, c2..., cn, and each of these words is considered in turn. The ci that is closest to q is chosen to represent c. When searching for a word in Roget’s or WordNet, we look for all forms of the word. The results of these experiments appear in Table 6. “Yes” indicates correct answers, “No” – incorrect answers, and “Tie” is for ties. QNF stands for “Question word Not Found”, ANF for “Answer word Not Found” and ONF for “Other word Not Found”. We used three data sets for this application: 80 questions taken from the Test of English as a Foreign Language (TOEFL) (Landauer and Dumais, 1997), 50 questions – from the English as a Second Language test (ESL) (Turney, 2001) and 300 questions – from the Reader’s Digest Word Power Game (RDWP) (Lewis, 2000 and 2001). Lesk and the Vector-based systems perform better than all others, including Roget’s 1911 and 1987. Even so, both versions of Roget’s Thesaurus performed well, and were never worse than the worst WordNet systems. In fact, six of the ten WordNet-based methods are consistently worse than the 1911 Thesaurus. Since the two Vector-based systems make use of additional data beyond WordNet, Lesk is the only completely WordNet-based system to outperform Roget’s 1987. One advantage of Roget’s Thesaurus is that both versions generally have fewer missing terms than WordNet, though Lesk, Hirst & St-Onge and the two vector based methods had fewer missing terms than Roget’s. This may be because the other WordNet methods will only work for nouns and verbs. 
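The two-step selection of Section 3.2 can be sketched as follows; sem_dist, num_shortest_paths and in_thesaurus are assumed to be supplied by the thesaurus interface, and the fallback for unfound multi-word candidates follows the replacement rule described above.

```python
def choose_synonym(q, choices, sem_dist, num_shortest_paths, in_thesaurus):
    """Two-step synonym selection: maximal relatedness, then maximal path count.

    Returns the set A; the answer is correct when A contains only the gold
    synonym, and a tie when the gold synonym is in A together with others.
    """
    def representative(c):
        # Replace an unfound multi-word candidate by its constituent word closest to q.
        if in_thesaurus(c) or " " not in c:
            return c
        words = [w for w in c.split() if in_thesaurus(w)]
        return max(words, key=lambda w: sem_dist(w, q)) if words else c

    reps = {c: representative(c) for c in choices}
    best_score = max(sem_dist(reps[c], q) for c in choices)
    b_set = [c for c in choices if sem_dist(reps[c], q) == best_score]
    best_paths = max(num_shortest_paths(reps[c], q) for c in b_set)
    return [c for c in b_set if num_shortest_paths(reps[c], q) == best_paths]
```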
3.3 Sentence relatedness Our final experiment concerns sentence relatedness. We worked with a data set from (Li et al., 2006)5. They took a subset of the term pairs from (Rubenstein and Goodenough, 1965) and chose sentences to represent these terms; the sentences are definitions from the Collins Cobuild dictionary (Sinclair, 2001). Thirty people were then asked to assign relatedness scores to these sentences, and the average of these similarities was taken for each sentence. Other methods of determining sentence semantic relatedness expand term relatedness functions to 5http://www.docm.mmu.ac.uk/STAFF/D.McLean/ SentenceResults.htm 420 create a sentence relatedness function (Islam and Inkpen, 2007; Mihalcea et al., 2006). We propose to approach the task by exploiting in other ways the commonalities in the structure of Roget’s Thesaurus and of WordNet. We use the OpenNLP toolkit6 for segmentation and part-of-speech tagging. We use a method of sentence representation that involves mapping the sentence into weighted concepts in either Roget’s or WordNet. We mean a concept in Roget’s to be either a Class, Section, ..., Semicolon Group, while a concept in WordNet is any synset. Essentially a concept is a grouping of words from either resource. Concepts are weighted by two criteria. The first is how frequently words from the sentence appear in these concepts. The second is the depth (or specificity) of the concept itself. 3.3.1 Weighting based on word frequency Each word and punctuation mark w in a sentence is given a score of 1. (Naturally, only open-category words will be found in the thesaurus.) If w has n word senses w1, ..., wn, each sense gets a score of 1/n, so that 1/n is added to each concept in the Roget’s hierarchy (semicolon group, paragraph, ..., class) or WordNet hierarchy that contains wi. We weight concepts in this way simply because, unable to determine which sense is correct, we assume that all senses are equally probable. Each concept in Roget’s Thesaurus and WordNet gets the sum of the scores of the concepts below it in its hierarchy. We will define the scores recursively for a concept c in a sentence s and sub-concepts ci. For example, in Roget’s if the concept c were a Class, then each ci would be a Section. Likewise, in WordNet if c were a synset, then each ci would be a hyponym synset of c. Obviously if c is a word sense wi (a word in either a synset or a Semicolon Group), then there can be no sub-concepts ci. When c = wi, the score for c is the sum of all occurrences of the word w in sentence s divided by the number of senses of the word w. score(c, s) = ( instancesOf(w,s) sensesOf(w) if c = wi P ci∈c score(ci, s) otherwise See Table 7 for an example of how this sentence representation works. The sentence “A gem is a jewel or stone that is used in jewellery.” is represented using the 1911 Roget’s. A concept is identi6http://opennlp.sourceforge.net fied by a name and a series of up to 9 numbers that indicate where in the thesaurus it appears. The first number represents the Class, the second the Section, ..., the ninth the word. We only show concepts with weights greater than 1.0. Words not in the thesaurus keep a weight of 1.0, but this weight will not increase the weight of any concepts in Roget’s or WordNet. Apart from the function words “or”, “in”, “that” and “a” and the period, only the word “jewellery” had a weight above 1.0. The categories labelled 6, 6.2 and 6.2.2 are the only ancestors of the word “use” that ended up with the weights above 1.0. 
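The recursive weighting of Section 3.3.1 is equivalent to letting each sense add its 1/n share to every concept above it in the hierarchy; a sketch under that reading is given below, with senses_of as an assumed interface that returns the concept path of each sense of a word.

```python
from collections import Counter

def concept_weights(words, senses_of):
    """Frequency-based concept weights (Section 3.3.1).

    senses_of(w) is an assumed interface returning, for every sense of w, the
    list of concepts on its path from the top of the hierarchy down to the sense
    itself, and [] for words not in the resource. Each sense contributes 1/n of
    the word's count to every concept on its path, which is equivalent to the
    recursive definition of score(c, s).
    """
    weights = Counter()
    for w, count in Counter(words).items():
        senses = senses_of(w)
        if not senses:
            weights[w] += count      # unfound words keep their own weight (cf. Table 7)
            continue
        share = count / len(senses)
        for path in senses:
            for concept in path:
                weights[concept] += share
    return weights
```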
The words “gem”, “is”, “jewel”, “stone” and “used” all contributed weight to the categories shown in Table 7, and to some categories with weights lower than 1.0, but no sense of the words themselves had a weight greater than 1.0. It is worth noting that this method only relies on the hierarchies in Roget’s and WordNet. We do not take advantage of other WordNet relations such as hyponymy, nor do we use any cross-reference links that exist in Roget’s Thesaurus. Including such relations might improve our sentence relatedness system, but that has been left for future work. 3.3.2 Weighting based on specificity To determine sentence relatedness, one could, for example, flatten the structures like those in Table 7 into vectors and measure their closeness by some vector distance function such as cosine similarity. There is a problem with this, though. A concept inherits the weights of all its sub-concepts, so the concepts that appear closer to the root of the tree will far outweigh others. Some sort of weighting function should be used to re-adjust the weights of particular concepts. Were this an Information Retrieval task, weighting schemes such as tf.idf for each concept could apply, but for sentence relatedness we propose an ad hoc weighting scheme based on assumptions about which concepts are most important to sentence representation. This weighting scheme is the second element of our sentence relatedness function. We weight a concept in Roget’s and in WordNet by how many words in a sentence give weight to it. We need to re-weight it based on how specific it is. Clearly, concepts near the leaves of the hierarchy are more specific than those close to the root of the hierarchy. We define specificity as the distance in levels between a given word and each concept found above 421 Identifier Concept Weight 6 Words Relating to the Voluntary Powers - Individual Volition 2.125169028274 6.2 Prospective Volition 1.504066255252 6.2.2 Subservience to Ends 1.128154077172 8 Words Relating to the Sentiment and Moral Powers 3.13220884041 8.2 Personal Affections 1.861744448402 8.2.2 Discriminative Affections 1.636503978149 8.2.2.2 Ornament/Jewelry/Blemish [Head Group] 1.452380952380 8.2.2.2.886 Jewelry [Head] 1.452380952380 8.2.2.2.886.1 Jewelry [Noun] 1.452380952380 8.2.2.2.886.1.1 jewel [Paragraph] 1.452380952380 8.2.2.2.886.1.1.1 jewel [Semicolon Group] 1.166666666666 8.2.2.2.886.1.1.1.3 jewellery [Word Sense] 1.0 or 1.0 in 1.0 that 1.0 a 2.0 . 1.0 Table 7: “A gem is a jewel or stone that is used in jewellery.” as represented using Roget’s 1911. it in the hierarchy. In Roget’s Thesaurus there are exactly 9 levels from the term to the class. In WordNet there will be as many levels as a word has ancestors up the hypernymy chain. In Roget’s, a term has specificity 1, a Semicolon Group 2, a Paragraph 3, ..., a Class 9. In WordNet, the specificity of a word is 1, its synset – 2, the synset’s hypernym – 3, its hypernym – 4, and so on. Words not found in the Thesaurus or in WordNet get specificity 1. We seek a function that, given s, assigns to all concepts of specificity s a weight progressively larger than to their neighbours. The weights in this function should be assigned based on specificity, so that all concepts of the same specificity receive the same score. Weights will differ depending on a combination of specificity and how frequently words that signal the concepts appear in a sentence. 
The weight of concepts with specificity s should be the highest, of those with specificity s ± 1 – lower, of those with specificity s ± 2 lower still, and so on. In order to achieve this effect, we weight the concepts using a normal distribution, where the mean is s: f(x) = 1 σ √ 2πe „ −(x−s)2 2σ2 « Since the Head is often considered the main category in Roget’s, we expect a specificity of 5 to be best, but we decided to test the values 1 through 9 as a possible setting for specificity. We do not claim that this weighting scheme is optimal; other weighting schemes might do better. For the purpose of comparing the 1911 and 1987 Thesauri and WordNet, however, this method appears sufficient. With this weighting scheme, we determine the distance between two sentences using cosine similarity: cosSim(A, B) = P ai ∗bi qP a2 i ∗ qP b2 i For this problem we used the MIT Java WordNet Interface version 1.1.17. 3.3.3 Sentence similarity results We used this method of representation for Roget’s of 1911 and of 1987, as well as for WordNet 3.0 – see Figure 1. For comparison, we also implemented a baseline method that we refer to as Simple: we built vectors out of words and their count. It can be seen in Figure 1 that each system is superior for at least one of the nine specificities. The Simple method is best at a specificity of 1, 8 and 9, Roget’s Thesaurus 1911 is best at 6, Roget’s Thesaurus 1987 is best at 4, 5 and 7, and WordNet is best at 2 and 3. The systems based on Roget’s and WordNet more or less followed a bell-shaped curve, with the curves of the 1911 and 1987 Thesauri following each other fairly closely and peaking close together. WordNet clearly peaked first and then fell the farthest. 7http://www.mit.edu/˜markaf/projects/wordnet/ 422 The best correlation result for the 1987 Roget’s Thesaurus is 0.8725 when the mean is 4, the POS. The maximum correlation for the 1911 Thesaurus is 0.8367, where the mean is 5, the Head. The maximum for WordNet is 0.8506, where the mean is 3, or the first hypernym synset. This suggests that the POS and Head are most important for representing text in Roget’s Thesaurus, while the first hypernym is most important for representing text using WordNet. For the Simple method, we found a more modest correlation of 0.6969. Figure 1: Correlation data for all four systems. Several other methods have given very good scores on this data set. For the system in (Li et al., 2006), where this data set was first introduced, a correlation of 0.816 with the human annotators was achieved. The mean of all human annotators had a score of 0.825, with a standard deviation of 0.072. In (Islam and Inkpen, 2007), an even better system was proposed, with a correlation of 0.853. Selecting the mean that gives the best correlation could be considered as training on test data. However, were we simply to have selected a value somewhere in the middle of the graph, as was our original intuition, it would have given an unfair advantage to either version of Roget’s Thesaurus over WordNet. Our system shows good results for both versions of Roget’s Thesauri and WordNet. The 1987 Thesaurus once again performs better than the 1911 version and than WordNet. Much like (Miller and Charles, 1991), the data set used here is not large enough to determine if any system’s improvement is statistically significant. 4 Conclusion and future work The 1987 version of Roget’s Thesaurus performed better than the 1911 version on all our tests, but we did not find the differences to be statistically significant. 
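Sections 3.3.1 and 3.3.2 combine into the sketch below: one natural reading is to multiply each frequency-based concept weight by the normal density centred on the target specificity, then compare the resulting sparse vectors by cosine similarity. The value of σ is not reported in the paper, so the default here is an assumption, and specificity is treated as a fixed property of a concept's level, which holds for Roget's but only approximates the WordNet case.

```python
import math

def reweight_by_specificity(weights, specificity_of, mean, sigma=1.0):
    """Scale each concept weight by a normal density centred on the target
    specificity (Section 3.3.2). sigma is an assumed value; the paper does not
    report the one used."""
    def density(x):
        return (math.exp(-(x - mean) ** 2 / (2 * sigma ** 2))
                / (sigma * math.sqrt(2 * math.pi)))
    return {c: w * density(specificity_of(c)) for c, w in weights.items()}

def cosine_similarity(vec_a, vec_b):
    """Cosine between two sparse concept-weight vectors."""
    dot = sum(w * vec_b.get(c, 0.0) for c, w in vec_a.items())
    norm_a = math.sqrt(sum(w * w for w in vec_a.values()))
    norm_b = math.sqrt(sum(w * w for w in vec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```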
It is particularly interesting that the 1911 Thesaurus performed as well as it did, given that it is almost 100 years old. On problems such as semantic word relatedness, the 1911 Thesaurus performance was fairly close to that of the 1987 Thesaurus, and was comparable to many WordNet-based measures. For problems of identifying synonyms both versions of Roget’s Thesaurus performed relatively well compared to most WordNet-based methods. We have presented a new method of sentence representation that attempts to leverage the structure found in Roget’s Thesaurus and similar lexical ontologies (among them WordNet). We have shown that given this style of text representation both versions of Roget’s Thesaurus work comparably to WordNet. All three perform fairly well compared to the baseline Simple method. Once again, the 1987 version is superior to the 1911 version, but the 1911 version still works quite well. We hope to investigate further the representation of sentences and other short texts using Roget’s Thesaurus. These kinds of measurements can help with problems such as identifying relevant sentences for extractive text summarization, or possibly paraphrase identification (Dolan et al., 2004). Another – longer-term – direction of future work could be merging Roget’s Thesaurus with WordNet. We also plan to study methods of automatically updating the 1911 Roget’s Thesaurus with modern words. Some work has been done on adding new terms and relations to WordNet (Snow et al., 2006) and FACTOTUM (O’Hara and Wiebe, 2003). Similar methods could be used for identifying related terms and assigning them to a correct semicolon group or paragraph. Acknowledgments Our research is supported by the Natural Sciences and Engineering Research Council of Canada and the University of Ottawa. We thank Dr. Diana Inkpen, Anna Kazantseva and Oana Frunza for many useful comments on the paper. 423 References S. Banerjee and T. Pedersen. 2002. An adapted lesk algorithm for word sense disambiguation using wordnet. In Proc. CICLing 2002, pages 136–145. P. Cassidy. 2000. An investigation of the semantic relations in the roget’s thesaurus: Preliminary results. In Proc. CICLing 2000, pages 181–204. B. Dolan, C. Quirk, and C. Brockett. 2004. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In Proc. COLING 2004, pages 350–356, Morristown, NJ. C. Fellbaum. 1998. A semantic network of english verbs. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 69–104. MIT Press, Cambridge, MA. L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2001. Placing search in context: the concept revisited. In Proc. 10th International Conf. on World Wide Web, pages 406–414, New York, NY, USA. ACM Press. G. Hirst and D. St-Onge. 1998. Lexical chains as representation of context for the detection and correction malapropisms. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 305–322. MIT Press, Cambridge, MA. A. Islam and D. Inkpen. 2007. Semantic similarity of short texts. In Proc. RANLP 2007, pages 291–297, September. M. Jarmasz and S. Szpakowicz. 2003. Not as easy as it seems: Automating the construction of lexical chains using roget’s thesaurus. In Proc. 16th Canadian Conf. on Artificial Intelligence, pages 544–549. M. Jarmasz and S. Szpakowicz. 2004. Roget’s thesaurus and semantic similarity. In N. Nicolov, K. Bontcheva, G. Angelova, and R. 
Mitkov, editors, Recent Advances in Natural Language Processing III: Selected Papers from RANLP 2003, Current Issues in Linguistic Theory, volume 260, pages 111–120. John Benjamins. J. Jiang and D. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proc. 10th International Conf. on Research on Computational Linguistics, pages 19–33. A. Kennedy and S. Szpakowicz. 2007. Disambiguating hypernym relations for roget’s thesaurus. In Proc. TSD 2007, pages 66–75. B. Kirkpatrick, editor. 1987. Roget’s Thesaurus of English Words and Phrases. Penguin, Harmondsworth, Middlesex, England. T. Landauer and S. Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240. C. Leacock and M. Chodorow. 1998. Combining local context and wordnet sense similiarity for word sense disambiguation. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 265–284. MIT Press, Cambridge, MA. M. Lewis, editor. 2000 and 2001. Readers Digest, 158(932, 934, 935, 936, 937, 938, 939, 940), 159(944, 948). Readers Digest Magazines Canada Limited. Y. Li, D. McLean, Z. A. Bandar, J. D. O’Shea, and K. Crockett. 2006. Sentence similarity based on semantic nets and corpus statistics. IEEE Transactions on Knowledge and Data Engineering, 18(8):1138– 1150. D. Lin. 1998. An information-theoretic definition of similarity. In Proc. 15th International Conf. on Machine Learning, pages 296–304, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. R. Mihalcea, C. Corley, and C. Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proc. 21st National Conf. on Artificial Intelligence, pages 775–780. AAAI Press. G. A. Miller and W. G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Process, 6(1):1–28. T. P. O’Hara and J. Wiebe. 2003. Classifying functional relations in factotum via wordnet hypernym associations. In Proc. CICLing 2003), pages 347–359. S. Patwardhan. 2003. Incorporating dictionary and corpus information into a vector measure of semantic relatedness. Master’s thesis, University of Minnesota, Duluth, August. T. Pedersen, S. Patwardhan, and J. Michelizzi. 2004. Wordnet::similarity - measuring the relatedness of concepts. In Proc. of the 19th National Conference on Artificial Intelligence., pages 1024–1025. P. Resnik. 1995. Using information content to evaluate semantic similarity. In Proc. 14th International Joint Conf. on Artificial Intelligence, pages 448–453. H. Rubenstein and J. B. Goodenough. 1965. Contextual correlates of synonymy. Communication of the ACM, 8(10):627–633. J. Sinclair. 2001. Collins Cobuild English Dictionary for Advanced Learners. Harper Collins Pub. R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proc COLING/ACL 2006, pages 801–808. P. Turney. 2001. Mining the web for synonyms: Pmi-ir versus lsa on toefl. In Proc. 12th European Conf. on Machine Learning, pages 491–502. Z. Wu and M. Palmer. 1994. Verb semantics and lexical selection. In Proc. 32nd Annual Meeting of the ACL, pages 133–138, New Mexico State University, Las Cruces, New Mexico. 424
Proceedings of ACL-08: HLT, pages 425–433, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Unsupervised Translation Induction for Chinese Abbreviations using Monolingual Corpora Zhifei Li and David Yarowsky Department of Computer Science and Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218, USA [email protected] and [email protected] Abstract Chinese abbreviations are widely used in modern Chinese texts. Compared with English abbreviations (which are mostly acronyms and truncations), the formation of Chinese abbreviations is much more complex. Due to the richness of Chinese abbreviations, many of them may not appear in available parallel corpora, in which case current machine translation systems simply treat them as unknown words and leave them untranslated. In this paper, we present a novel unsupervised method that automatically extracts the relation between a full-form phrase and its abbreviation from monolingual corpora, and induces translation entries for the abbreviation by using its full-form as a bridge. Our method does not require any additional annotated data other than the data that a regular translation system uses. We integrate our method into a state-ofthe-art baseline translation system and show that it consistently improves the performance of the baseline system on various NIST MT test sets. 1 Introduction The modern Chinese language is a highly abbreviated one due to the mixed use of ancient singlecharacter words with modern multi-character words and compound words. According to Chang and Lai (2004), approximately 20% of sentences in a typical news article have abbreviated words in them. Abbreviations have become even more popular along with the development of Internet media (e.g., online chat, weblog, newsgroup, and so on). While English words are normally abbreviated by either their Full-form Abbreviation Translation &¬ ¬ ¬ Ò Ò Ò ¬Ò Hong Kong Governor “ “ “\ ® ® ®/Ì Ì Ì “®Ì Security Council Figure 1: Chinese Abbreviations Examples first letters (i.e. acronyms) or via truncation, the formation of Chinese abbreviations is much more complex. Figure 1 shows two examples for Chinese abbreviations. Clearly, an abbreviated form of a word can be obtained by selecting one or more characters from this word, and the selected characters can be at any position in the word. In an extreme case, there are even re-ordering between a full-form phrase and its abbreviation. While the research in statistical machine translation (SMT) has made significant progress, most SMT systems (Koehn et al., 2003; Chiang, 2007; Galley et al., 2006) rely on parallel corpora to extract translation entries. The richness and complexness of Chinese abbreviations imposes challenges to the SMT systems. In particular, many Chinese abbreviations may not appear in available parallel corpora, in which case current SMT systems treat them as unknown words and leave them untranslated. This affects the translation quality significantly. To be able to translate a Chinese abbreviation that is unseen in available parallel corpora, one may annotate more parallel data. However, this is very expensive as there are too many possible abbreviations and new abbreviations are constantly created. Another approach is to transform the abbreviation 425 into its full-form for which the current SMT system knows how to translate. 
For example, if the baseline system knows that the translation for “&¬ Ò” is “Hong Kong Governor”, and it also knows that “¬ Ò” is an abbreviation of “&¬ ¬ ¬ Ò Ò Ò” , then it can translate “¬Ò” to “Hong Kong Governor”. Even if an abbreviation has been seen in parallel corpora, it may still be worth to consider its fullform phrase as an additional alternative to the abbreviation since abbreviated words are normally semantically ambiguous, while its full-form contains more context information that helps the MT system choose a right translation for the abbreviation. Conceptually, the approach of translating an abbreviation by using its full-form as a bridge involves four components: identifying abbreviations, learning their full-forms, inducing their translations, and integrating the abbreviation translations into the baseline SMT system. None of these components is trivial to realize. For example, for the first two components, we may need manually annotated data that tags an abbreviation with its full-form. We also need to make sure that the baseline system has at least one valid translation for the full-form phrase. On the other hand, integrating an additional component into a baseline SMT system is notoriously tricky as evident in the research on integrating word sense disambiguation (WSD) into SMT systems: different ways of integration lead to conflicting conclusions on whether WSD helps MT performance (Chan et al., 2007; Carpuat and Wu, 2007). In this paper, we present an unsupervised approach to translate Chinese abbreviations. Our approach exploits the data co-occurrence phenomena and does not require any additional annotated data except the parallel and monolingual corpora that the baseline SMT system uses. Moreover, our approach integrates the abbreviation translation component into the baseline system in a natural way, and thus is able to make use of the minimum-error-rate training (Och, 2003) to automatically adjust the model parameters to reflect the change of the integrated system over the baseline system. We carry out experiments on a state-of-the-art SMT system, i.e., Moses (Koehn et al., 2007), and show that the abbreviation translations consistently improve the translation performance (in terms of BLEU (Papineni et al., 2002)) on various NIST MT test sets. 2 Background: Chinese Abbreviations In general, Chinese abbreviations are formed based on three major methods: reduction, elimination and generalization (Lee, 2005; Yin, 1999). Table 1 presents examples for each category. Among the three methods, reduction is the most popular one, which generates an abbreviation by selecting one or more characters from each of the words in the full-form phrase. The selected characters can be at any position of the word. Table 1 presents examples to illustrate how characters at different positions are selected to generate abbreviations. While the abbreviations mostly originate from noun phrases (in particular, named entities), other general phrases are also abbreviatable. For example, the second example “Save Energy” is a verb phrase. In an extreme case, reordering may happen between an abbreviation and its full-form phrase. For example, for the seventh example in Table 1, a monotone abbreviation should be “X¢”, however, “X ¢” is a more popular ordering in Chinese texts. In elimination, one or more words of the original full-form phrase are eliminated and the rest parts remain as an abbreviation. 
For example, in the fullform phrase “8• L¦”, the word “L¦” is eliminated and the remaining word “8•” alone becomes the abbreviation. In generalization, an abbreviation is created by generalizing parallel sub-parts of the full-form phrase. For example, “®3 (three preventions)” in Table 1 is an abbreviation for the phrase “3Û3 x3b//ù (fire prevention, theft prevention, and traffic accident prevention)”. The character “3 (prevention)” is common to the three sub-parts of the full-form, so it is being generalized. 3 Unsupervised Translation Induction for Chinese Abbreviations In this section, we describe an unsupervised method to induce translation entries for Chinese abbreviations, even when these abbreviations never appear in the Chinese side of the parallel corpora. Our basic idea is to automatically extract the relation between a full-form phrase and its abbreviation (we refer the relation as full-abbreviation) from monolingual corpora, and then induce translation entries for the abbreviation by using its full-form phrase as a bridge. 426 Category Full-form Abbreviation Translation Reduction ð ð ð® L L L¦ ðL Peking University   Õ   Í  Save Energy &¬ ¬ ¬ Ò Ò Ò ¬Ò Hong Kong Governor i i ib \Ÿ Ÿ Ÿ iŸ Foreign Minister |Ì Ì Ì ´ ´ ´‰ Ì´ People’s Police “ “ “\ ® ® ®/Ì Ì Ì “®Ì Security Council ‘   X X X ž¢ ¢ ¢ X¢ No.1 Nuclear Energy Power Plant Elimination 8 8 8• • • L¦ 8• Tsinghua University Generalization 3 3 3Û3 3 3x3 3 3b//ù ®3 Three Preventions Table 1: Chinese Abbreviation: Categories and Examples Our approach involves five major steps: • Step-1: extract a list of English entities from English monolingual corpora; • Step-2: translate the list into Chinese using a baseline translation system; • Step-3: extract full-abbreviation relations from Chinese monolingual corpora by treating the Chinese translations obtained in Step-2 as fullform phrases; • Step-4: induce translation entries for Chinese abbreviations by using their full-form phrases as bridges; • Step-5: augment the baseline system with translation entries obtained in Step-4. Clearly, the main purpose of Step-1 and -2 is to obtain a list of Chinese entities, which will be treated as full-form phrases in Step-3. One may use a named entity tagger to obtain such a list. However, this relies on the existence of a Chinese named entity tagger with high-precision. Moreover, obtaining a list using a dedicated tagger does not guarantee that the baseline system knows how to translate the list. On the contrary, in our approach, since the Chinese entities are translation outputs for the English entities, it is ensured that the baseline system has translations for these Chinese entities. Regarding the data resource used, Step-1, -2, and -3 rely on the English monolingual corpora, parallel corpora, and the Chinese monolingual corpora, respectively. Clearly, our approach does not require any additional annotated data compared with the baseline system. Moreover, our approach utilizes both Chinese and English monolingual data to help MT, while most SMT systems utilizes only the English monolingual data to build a language model. This is particularly interesting since we normally have enormous monolingual data, but a small amount of parallel data. For example, in the translation task between Chinese and English, both the Chinese and English Gigaword have billions of words, but the parallel data has only about 30 million words. Step-4 and -5 are natural ways to integrate the abbreviation translation component with the baseline translation system. 
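Taken together, the five steps form a single pipeline. The sketch below shows one way they might be wired together; every helper named in it is a hypothetical placeholder for the components instantiated in Sections 3.1-3.5, and the default n-best sizes simply follow the experimental setup reported later in Section 4.3.

    def build_abbreviation_entries(en_mono, zh_mono, baseline, nbest_zh=2, nbest_en=5):
        """Schematic driver for Step-1 through Step-5 (all helpers are placeholders)."""
        # Step-1: capitalization-based English entity extraction (Section 3.1)
        en_entities = extract_english_entities(en_mono)
        # Step-2: translate the entities into Chinese with the baseline system,
        # dropping outputs that still contain untranslated English words (Section 3.2)
        zh_fullforms = translate_to_chinese(baseline, en_entities, nbest=nbest_zh)
        # Step-3: mine full-abbreviation relations from the Chinese monolingual
        # corpus, treating the translations as full-form phrases (Section 3.3)
        counts = extract_relations(zh_mono, zh_fullforms)
        p_full_given_abbr = relative_frequency(counts)
        # Step-4: induce abbreviation entries from the full-forms' n-best
        # translations, using the full-form as a bridge (Section 3.4)
        new_entries = induce_entries(baseline, p_full_given_abbr, nbest=nbest_en)
        # Step-5: merge into the baseline phrase table; MERT is then re-run (Section 3.5)
        return merge_phrase_tables(baseline.phrase_table, new_entries)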
This is critical to make the abbreviation translation get performance gains over the baseline system as will be clear later. In the remainder of this section, we will present a specific instantiation for each step. 3.1 English Entity Extraction from English Monolingual Corpora Though one can exploit a sophisticated named-entity tagger to extract English entities, in this paper we identify English entities based on the capitalization information. Specifically, to be considered as an entity, a continuous span of English words must satisfy the following conditions: • all words must start from a capital letter except for function words “of”, “the”, and “and”; • each function word can appear only once; • the number of words in the span must be smaller than a threshold (e.g., 10); • the occurrence count of this span must be greater than a threshold (e.g., 1). 427 3.2 English Entity Translation For the Chinese-English language pair, most MT research is on translation from Chinese to English, but here we need the reverse direction. However, since most of statistical translation models (Koehn et al., 2003; Chiang, 2007; Galley et al., 2006) are symmetrical, it is relatively easy to train a translation system to translate from English to Chinese, except that we need to train a Chinese language model from the Chinese monolingual data. It is worth pointing out that the baseline system may not be able to translate all the English entities. This is because the entities are extracted from the English monolingual corpora, which has a much larger vocabulary than the English side of the parallel corpora. Therefore, we should remove all the Chinese translations that contain any untranslated English words before proceeding to the next step. Moreover, it is desirable to generate an n-best list instead of a 1-best translation for the English entity. 3.3 Full-abbreviation Relation Extraction from Chinese Monolingual Corpora We treat the Chinese entities obtained in Section 3.2 as full-form phrases. To identify their abbreviations, one can employ an HMM model (Chang and Teng, 2006). Here we propose a much simpler approach, which is based on the data co-occurrence intuition. 3.3.1 Data Co-occurrence In a monolingual corpus, relevant words tend to appear together (i.e., co-occurrence). For example, Bill Gates tends to appear together with Microsoft. The co-occurrence may imply a relationship (e.g., Bill Gates is the founder of Microsoft). By inspection of the Chinese text, we found that the data co-occurrence phenomena also applies to the fullTitle ÑÁ Á Á£ £ £Ì Ì Ì ô*Rí<ÞÜ Text c • ö Ñ 2Û9† ž( V ¶ c Õ ÷)‘20“Á Á Á  £ £ £ä ä äÌ Ì Ì{ ô*R• h-10†t8šóÑ£õš. ¸œt*y ³{Áà Table 2: Data Co-occurrence Example for the Fullabbreviation Relation (Á Á Á£ £ £äÌ Ì Ì,Á£Ì) meaning “winter olympics” abbreviation relation. Table 2 shows an example, where the abbreviation “Á£Ì” appears in the title while its full-form “Á Á Á£ £ £äÌ Ì Ì” appears in the text of the same document. In general, the occurrence distance between an abbreviation and its full-form varies. For example, they may appear in the same sentence, or in the neighborhood sentences. 3.3.2 Full-abbreviation Relation Extraction Algorithm By exploiting the data co-occurrence phenomena, we identify possible abbreviations for full-form phrases. Figure 2 presents the pseudocode of the full-abbreviation relation extraction algorithm. 
Relation-Extraction(Corpus, Full-list)
 1  contexts ← NIL
 2  for i ← 1 to length[Corpus]
 3      sent1 ← Corpus[i]
 4      contexts ← UPDATE(contexts, Corpus, i)
 5      for full in sent1
 6          if full in Full-list
 7              for sent2 in contexts
 8                  for abbr in sent2
 9                      if RL(full, abbr) = TRUE
10                          Count[abbr, full]++
11  return Count

Figure 2: Full-abbreviation Relation Extraction

Given a monolingual corpus and a list of full-form phrases (i.e., Full-list, which is obtained in Section 3.2), the algorithm returns a table Count that contains full-abbreviation relations and their occurrence counts. Specifically, the algorithm linearly scans over the whole corpus as indicated by line 2. Along the linear scan, the algorithm maintains contexts of the current sentence (i.e., sent1); the contexts remember the sentences from which the algorithm identifies possible abbreviations. In our implementation, the contexts include the current sentence, the title of the current document, and the previous and next sentences in the document. Then, for each ngram (i.e., full) of the current sentence (i.e., sent1) and for each ngram (i.e., abbr) of a context sentence (i.e., sent2), the algorithm calls a function RL, which decides whether the full-abbreviation relation holds between full and abbr. If RL returns TRUE, the count table (i.e., Count) is incremented by one for this relation. Note that the filtering through the full-form phrase list (i.e., Full-list), as shown in line 6, is the key to making the algorithm efficient enough to run through large-size monolingual corpora.

In function RL, we run a simple alignment algorithm that links the characters in abbr with the words in full. In the alignment, we assume that there is no reordering between full and abbr. To be considered a valid full-abbreviation relation, full and abbr must satisfy the following conditions:

• abbr must be shorter than full by a relative threshold (e.g., 1.2);
• each character in abbr must be aligned to full;
• each word in full must have at least one character aligned to abbr;
• abbr must not be a continuous sub-part of full.

Clearly, due to the above conditions, our approach may not be able to handle all possible abbreviations (e.g., the abbreviations formed by the generalization method described in Section 2). One can modify the conditions and the alignment algorithm to handle more complex full-abbreviation relations. With the count table Count, we can calculate the relative frequency and obtain the following probability:

    P(full \mid abbr) = \frac{Count[abbr, full]}{\sum Count[abbr, *]}    (1)

3.4 Translation Induction for Chinese Abbreviations

Given a Chinese abbreviation and its full-form, we induce English translation entries for the abbreviation by using the full-form as a bridge. Specifically, we first generate n-best translations for each full-form Chinese phrase using the baseline system.¹ We then post-process the translation outputs such that they have the same format (i.e., containing the same set of model features) as a regular phrase entry in the baseline phrase table.

¹ In our method, it is guaranteed that each Chinese full-form phrase will have at least one English translation, i.e., the English entity that has been used to produce this full-form phrase. However, it does not mean that this English entity is the best translation that the baseline system has for the Chinese full-form phrase. This is mainly due to the asymmetry introduced by the different LMs in different translation directions.
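For concreteness, the RL validity test used in the algorithm above, together with the relative-frequency estimate of Equation (1), could be implemented roughly as follows. This is a sketch, not the authors' code: the full-form phrase is assumed to be already word-segmented, and requiring the characters assigned to a word to appear in order within that word is an interpretation added here.

    from collections import defaultdict

    def _is_subsequence(chars, word):
        """True if `chars` occur in `word` in order (assumption: in-word order
        is preserved; the text only requires that the characters align to the word)."""
        it = iter(word)
        return all(c in it for c in chars)

    def _can_align(words, abbr, i, j, memo):
        """True if abbr[j:] can be split into non-empty, in-order groups, one per
        word in words[i:], each group's characters drawn from its word.  This
        enforces: every abbr character is aligned, every full-form word receives
        at least one character, and there is no reordering."""
        if i == len(words):
            return j == len(abbr)
        key = (i, j)
        if key not in memo:
            memo[key] = any(
                _is_subsequence(abbr[j:j + k], words[i])
                and _can_align(words, abbr, i + 1, j + k, memo)
                for k in range(1, len(abbr) - j + 1)
            )
        return memo[key]

    def rl(full_words, abbr, ratio=1.2):
        """The RL test of Figure 2; `full_words` is the segmented full-form phrase."""
        full = "".join(full_words)
        if len(full) < ratio * len(abbr):   # abbr must be shorter by the relative threshold
            return False
        if abbr in full:                    # abbr must not be a continuous sub-part of full
            return False
        return _can_align(full_words, abbr, 0, 0, {})

    def relative_frequency(count):
        """Equation (1): P(full | abbr) as a relative frequency over the Count table."""
        totals = defaultdict(float)
        for (abbr, full), c in count.items():
            totals[abbr] += c
        return {(abbr, full): c / totals[abbr] for (abbr, full), c in count.items()}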
Once we get the translation entries for the full-form, we can replace the fullform Chinese with its abbreviation to generate translation entries for the abbreviation. Moreover, to deal with the case that an abbreviation may have several candidate full-form phrases, we normalize the feature values using the following equation, Φj(e, abbr) = Φj(e, full) × P(full|abbr) (2) where e is an English translation, and Φj is the j-th model feature indexed as in the baseline system. 3.5 Integration with Baseline Translation System Since the obtained translation entries for abbreviations have the same format as the regular translation entries in the baseline phrase table, it is relatively easy to add them into the baseline phrase table. Specifically, if a translation entry (signatured by its Chinese and English strings) to be added is not in the baseline phrase table, we simply add the entry into the baseline table. On the other hand, if the entry is already in the baseline phrase table, then we merge the entries by enforcing the translation probability as we obtain the same translation entry from two different knowledge sources (one is from parallel corpora and the other one is from the Chinese monolingual corpora). Once we obtain the augmented phrase table, we should run the minimum-error-rate training (Och, 2003) with the augmented phrase table such that the model parameters are properly adjusted. As will be shown in the experimental results, this is critical to obtain performance gain over the baseline system. 4 Experimental Results 4.1 Corpora We compile a parallel dataset which consists of various corpora distributed by the Linguistic Data Consortium (LDC) for NIST MT evaluation. The parallel dataset has about 1M sentence pairs, and about 28M words. The monolingual data we use includes the English Gigaword V2 (LDC2005T12) and the Chinese Gigaword V2 (LDC2005T14). 4.2 Baseline System Training Using the toolkit Moses (Koehn et al., 2007), we built a phrase-based baseline system by following 429 the standard procedure: running GIZA++ (Och and Ney, 2000) in both directions, applying refinement rules to obtain a many-to-many word alignment, and then extracting and scoring phrases using heuristics (Och and Ney, 2004). The baseline system has eight feature functions (see Table 8). The feature functions are combined under a log-linear framework, and the weights are tuned by the minimum-error-rate training (Och, 2003) using BLEU (Papineni et al., 2002) as the optimization metric. To handle different directions of translation between Chinese and English, we built two trigram language models with modified Kneser-Ney smoothing (Chen and Goodman, 1998) using the SRILM toolkit (Stolcke, 2002). 4.3 Statistics on Intermediate Steps As described in Section 3, our approach involves five major steps. Table 3 reports the statistics for each intermediate step. While about 5M English entities are extracted and 2-best Chinese translations are generated for each English entity, we get only 4.7M Chinese entities. This is because many of the English entities are untranslatable by the baseline system. The number of full-abbreviation relations2 extracted from the Chinese monolingual corpora is 51K. For each full-form phrase we generate 5-best English translations, however only 210k (<51K×5) translation entries are obtained. This is because the baseline system may have less than 5 unique translations for some of the full-form phrases. 
Lastly, the number of translation entries added due to abbreviations is very small compared with the total number of translation entries (i.e., 50M). Measure Value number of English entities 5M number of Chinese entities 4.7M number of full-abbreviation relations 51K number of translation entries added 210K total number of translation entries 50M Table 3: Statistics on Intermediate Steps 2Note that many of the “abbreviations” extracted by our algorithm are not true abbreviations in the linguistic sense, instead they are just continuous-span of words. This is analogous to the concept of “phrase” in phrase-based MT. 4.4 Precision on Full-abbreviation Relations Table 4 reports the precision on the extracted fullabbreviation relations. We classify the relations into several classes based on their occurrence counts. In the second column, we list the fraction of the relations in the given class among all the relations we have extracted (i.e., 51K relations). For each class, we randomly select 100 relations, manually tag them as correct or wrong, and then calculate the precision. Intuitively, a class that has a higher occurrence count should have a higher precision, and this is generally true as shown in the fourth column of Table 4. In comparison, Chang and Teng (2006) reports a precision of 50% over relations between single-word fullforms and single-character abbreviations. One can imagine a much lower precision on general relations (e.g., the relations between multi-word full-forms and multi-character abbreviations) that we consider here. Clearly, our results are very competitive3. Count Fraction (%) Precision (%) Baseline Ours (0, 1] 35.2 8.9 42.6 (1, 5] 33.8 7.8 54.4 (5, 10] 10.7 8.9 60.0 (10, 100] 16.5 7.6 55.9 (100, +∞) 3.8 12.1 59.9 Average Precision (%) 8.4 51.3 Table 4: Full-abbreviation Relation Extraction Precision To further show the advantage of our relation extraction algorithm (see Section 3.3), in the third column of Table 4 we report the results on a simple baseline. To create the baseline, we make use of the dominant abbreviation patterns shown in Table 5, which have been reported in Chang and Lai (2004). The abbreviation pattern is represented using the format “(bit pattern|length)” where the bit pattern encodes the information about how an abbreviated form is obtained from its original full-form word, and the length represents the number of characters in the full-form word. In the bit pattern, a “1” indicates that the character at the corresponding position of the full-form word is kept in the abbreviation, while a “0” means the character is deleted. Now we dis3However, it is not a strict comparison because the dataset is different and the recall may also be different. 430 Pattern Fraction (%) Example (1|1) 100 (¥ ¥ ¥, ¥) (10|2) 87 (Æ Æ Æ³, Æ) (101|3) 44 (® ® ®/Ì Ì Ì, ®Ì) (1010|4) 56 (Ú Ú ÚÌ= = =¦, Ú=) Table 5: Dominant Abbreviation Patterns reported in Chang and Lai (2004) cuss how to create the baseline. For each full-form phrase in the randomly selected relations, we generate a baseline hypothesis (i.e., abbreviation) as follows. We first generate an abbreviated form for each word in the full-form phrase by using the dominant abbreviation pattern, and then concatenate these abbreviated words to form a baseline abbreviation for the full-form phrase. As shown in Table 4, the baseline performs significantly worse than our relation extraction algorithm. 
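For illustration, the “(bit pattern|length)” notation can be applied mechanically: a “1” keeps the character at that position and a “0” drops it. The sketch below does exactly that and, as one plausible reading of how a pattern-based baseline hypothesis is built for a multi-word phrase, abbreviates each word with the dominant pattern for its length and concatenates the results; the fallback pattern for longer words is purely an assumption.

    # Dominant per-length patterns from Table 5 (illustrative stand-in).
    DOMINANT = {1: "1", 2: "10", 3: "101", 4: "1010"}

    def apply_pattern(word, pattern):
        """Keep the characters of `word` whose pattern bit is '1'."""
        return "".join(ch for ch, bit in zip(word, pattern) if bit == "1")

    def baseline_abbreviation(full_words):
        """One plausible pattern-based baseline: abbreviate each word with the
        dominant pattern for its length, then concatenate.  The first-character
        fallback for words longer than four characters is an assumption."""
        return "".join(
            apply_pattern(w, DOMINANT.get(len(w), "1" + "0" * (len(w) - 1)))
            for w in full_words
        )

    # e.g. apply_pattern("ABCD", "1010") == "AC"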
Compared with the baseline, our relation extraction algorithm allows arbitrary abbreviation patterns as long as they satisfy the alignment constraints. Moreover, our algorithm exploits the data co-occurrence phenomena to generate and rank hypothesis (i.e., abbreviation). The above two reasons explain the large performance gain. It is interesting to examine the statistics on abbreviation patterns over the relations automatically extracted by our algorithm. Table 6 reports the statistics. We obtain the statistics on the relations that are manually tagged as correct before, and there are in total 263 unique words in the corresponding fullform phrases. Note that the results here are highly biased to our relation extraction algorithm (see Section 3.3). For the statistics on manually collected examples, please refer to Chang and Lai (2004). 4.5 Results on Translation Performance 4.5.1 Precision on Translations of Chinese Full-form Phrases For the relations manually tagged as correct in Section 4.4, we manually look at the top-5 translations for the full-form phrases. If the top-5 translations contain at least one correct translation, we tag it as correct, otherwise as wrong. We get a precision of 97.5%. This precision is extremely high because the BLEU score (precision with brevity penalty) that one obtains for a Chinese sentence is normally between 30% to 50%. Two reasons explain such a high Pattern Fraction (%) Example (1|1) 100 (¥ ¥ ¥, ¥) (10|2) 74.3 (Æ Æ Æ³, Æ) (01|2) 7.6 (ð® ® ®, ®) (11|2) 18.1 (j j j, j) (100|3) 58.5 (  n., ) (010|3) 3.1 (qu u uÓ, u) (001|3) 4.6 (ÏÄÄ Ä Ä, Ä) (110|3) 13.8 (£ £ £ä ä äÌ, £ä) (101|3) 3.1 (® ® ®/Ì Ì Ì, ®Ì) (111|3) 16.9 () ) )¦ ¦ ¦  , )¦) Table 6: Statistics on Abbreviation Patterns precision. Firstly, the full-form phrase is short compared with a regular Chinese sentence, and thus it is easier to translate. Secondly, the full-form phrase itself contains enough context information that helps the system choose a right translation for it. In fact, this shows the importance of considering the fullform phrase as an additional alternative to the abbreviation even if the baseline system already has translation entries for the abbreviation. 4.5.2 BLEU on NIST MT Test Sets We use MT02 as the development set4 for minimum error rate training (MERT) (Och, 2003). The MT performance is measured by lower-case 4-gram BLEU (Papineni et al., 2002). Table 7 reports the results on various NIST MT test sets. As shown in the table, our Abbreviation Augmented MT (AAMT) systems perform consistently better than the baseline system (described in Section 4.2). Task Baseline AAMT No MERT With MERT MT02 29.87 29.96 30.46 MT03 29.03 29.23 29.71 MT04 29.05 29.88 30.55 Average Gain +0.52 +1.18 Table 7: MT Performance measured by BLEU Score As clear in Table 7, it is important to re-run MERT (on MT02 only) with the augmented phrase table in order to get performance gains. Table 8 reports 4On the dev set, about 20K (among 210K) abbreviation translation entries are matched in the Chinese side. 431 the MERT weights with different phrase tables. One may notice the change of the weight in word penalty feature. This is very intuitive in order to prevent the hypothesis being too long due to the expansion of the abbreviations into their full-forms. 
Feature Baseline AAMT language model 0.137 0.133 phrase translation 0.066 0.023 lexical translation 0.061 0.078 reverse phrase translation 0.059 0.103 reverse lexical translation 0.112 0.090 phrase penalty -0.150 -0.162 word penalty -0.327 -0.356 distortion model 0.089 0.055 Table 8: Weights obtained by MERT 5 Related Work Though automatically extracting the relations between full-form Chinese phrases and their abbreviations is an interesting and important task for many natural language processing applications (e.g., machine translation, question answering, information retrieval, and so on), not much work is available in the literature. Recently, Chang and Lai (2004), Chang and Teng (2006), and Lee (2005) have investigated this task. Specifically, Chang and Lai (2004) describes a hidden markov model (HMM) to model the relationship between a full-form phrase and its abbreviation, by treating the abbreviation as the observation and the full-form words as states in the model. Using a set of manually-created fullabbreviation relations as training data, they report experimental results on a recognition task (i.e., given an abbreviation, the task is to obtain its full-form, or the vice versa). Clearly, their method is supervised because it requires the full-abbreviation relations as training data.5 Chang and Teng (2006) extends the work in Chang and Lai (2004) to automatically extract the relations between full-form phrases and their abbreviations. However, they have only considered relations between single-word phrases and single-character abbreviations. Moreover, the HMM model is computationally-expensive and unable to exploit the data co-occurrence phenomena that we 5However, the HMM model aligns the characters in the abbreviation to the words in the full-form in an unsupervised way. have exploited efficiently in this paper. Lee (2005) gives a summary about how Chinese abbreviations are formed and presents many examples. Manual rules are created to expand an abbreviation to its fullform, however, no quantitative results are reported. None of the above work has addressed the Chinese abbreviation issue in the context of a machine translation task, which is the primary goal in this paper. To the best of our knowledge, our work is the first to systematically model Chinese abbreviation expansion to improve machine translation. The idea of using a bridge (i.e., full-form) to obtain translation entries for unseen words (i.e., abbreviation) is similar to the idea of using paraphrases in MT (see Callison-Burch et al. (2006) and references therein) as both are trying to introduce generalization into MT. At last, the goal that we aim to exploit monolingual corpora to help MT is in-spirit similar to the goal of using non-parallel corpora to help MT as aimed in a large amount of work (see Munteanu and Marcu (2006) and references therein). 6 Conclusions In this paper, we present a novel method that automatically extracts relations between full-form phrases and their abbreviations from monolingual corpora, and induces translation entries for these abbreviations by using their full-form as a bridge. Our method is scalable enough to handle large amount of monolingual data, and is essentially unsupervised as it does not require any additional annotated data than the baseline translation system. Our method exploits the data co-occurrence phenomena that is very useful for relation extractions. 
We integrate our method into a state-of-the-art phrase-based baseline translation system, i.e., Moses (Koehn et al., 2007), and show that the integrated system consistently improves the performance of the baseline system on various NIST machine translation test sets. Acknowledgments We would like to thank Yi Su, Sanjeev Khudanpur, Philip Resnik, Smaranda Muresan, Chris Dyer and the anonymous reviewers for their helpful comments. This work was partially supported by the Defense Advanced Research Projects Agency’s GALE program via Contract No¯ HR0011-06-2-0001. 432 References Chris Callison-Burch, Philipp Koehn, and Miles Osborne, 2006. Improved Statistical Machine Translation Using Paraphrases. In Proceedings of NAACL 2006, pages 17-24. Marine Carpuat and Dekai Wu. 2007. Improving Statistical Machine Translation using Word Sense Disambiguation. In Proceedings of EMNLP 2007, pages 6172. Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word Sense Disambiguation Improves Statistical Machine Translation. In Proceedings of ACL 2007, pages 33-40. Jing-Shin Chang and Yu-Tso Lai. 2004. A preliminary study on probabilistic models for Chinese abbreviations. In Proceedings of the 3rd SIGHAN Workshop on Chinese Language Processing, pages 9-16. Jing-Shin Chang and Wei-Lun Teng. 2006. Mining Atomic Chinese Abbreviation Pairs: A Probabilistic Model for Single Character Word Recovery. In Proceedings of the 5rd SIGHAN Workshop on Chinese Language Processing, pages 17-24. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University Center for Research in Computing Technology. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of COLING/ACL 2006, pages 961-968. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan,Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constrantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL, Demonstration Session, pages 177-180. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL 2003, pages 48-54. H.W.D Lee. 2005. A study of automatic expansion of Chinese abbreviations. MA Thesis, The University of Hong Kong. Dragos Stefan Munteanu and Daniel Marcu. 2006. Extracting Parallel Sub-Sentential Fragments from NonParallel Corpora. In Proceedings of ACL 2006, pages 81-88. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL 2003, pages 160-167. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of ACL 2000, pages 440-447. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30:417-449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 901-904. Z.P. Yin. 1999. 
Methodologies and principles of Chinese abbreviation formation. In Language Teaching and Study, 2:73-82.