Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 262–270, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Compiling a Massive, Multilingual Dictionary via Probabilistic Inference Mausam Stephen Soderland Oren Etzioni Daniel S. Weld Michael Skinner* Jeff Bilmes University of Washington, Seattle *Google, Seattle {mausam,soderlan,etzioni,weld,bilmes}@cs.washington.edu [email protected] Abstract Can we automatically compose a large set of Wiktionaries and translation dictionaries to yield a massive, multilingual dictionary whose coverage is substantially greater than that of any of its constituent dictionaries? The composition of multiple translation dictionaries leads to a transitive inference problem: if word A translates to word B which in turn translates to word C, what is the probability that C is a translation of A? The paper introduces a novel algorithm that solves this problem for 10,000,000 words in more than 1,000 languages. The algorithm yields PANDICTIONARY, a novel multilingual dictionary. PANDICTIONARY contains more than four times as many translations than in the largest Wiktionary at precision 0.90 and over 200,000,000 pairwise translations in over 200,000 language pairs at precision 0.8. 1 Introduction and Motivation In the era of globalization, inter-lingual communication is becoming increasingly important. Although nearly 7,000 languages are in use today (Gordon, 2005), most language resources are mono-lingual, or bi-lingual.1 This paper investigates whether Wiktionaries and other translation dictionaries available over the Web can be automatically composed to yield a massive, multilingual dictionary with superior coverage at comparable precision. We describe the automatic construction of a massive multilingual translation dictionary, called 1The English Wiktionary, a lexical resource developed by volunteers over the Internet is one notable exception that contains translations of English words in about 500 languages. Figure 1: A fragment of the translation graph for two senses of the English word ‘spring’. Edges labeled ‘1’ and ‘3’ are for spring in the sense of a season, and ‘2’ and ‘4’ are for the flexible coil sense. The graph shows translation entries from an English dictionary merged with ones from a French dictionary. PANDICTIONARY, that could serve as a resource for translation systems operating over a very broad set of language pairs. The most immediate application of PANDICTIONARY is to lexical translation—the translation of individual words or simple phrases (e.g., “sweet potato”). Because lexical translation does not require aligned corpora as input, it is feasible for a much broader set of languages than statistical Machine Translation (SMT). Of course, lexical translation cannot replace SMT, but it is useful for several applications including translating search-engine queries, library classifications, meta-data tags,2 and recent applications like cross-lingual image search (Etzioni et al., 2007), and enhancing multi-lingual Wikipedias (Adar et al., 2009). Furthermore, lexical translation is a valuable component in knowledge-based Machine Translation systems, e.g., (Bond et al., 2005; Carbonell et al., 2006). PANDICTIONARY currently contains over 200 million pairwise translations in over 200,000 language pairs at precision 0.8. It is constructed from information harvested from 631 online dictionaries and Wiktionaries. 
This necessitates match2Meta-data tags appear in community Web sites such as flickr.com and del.icio.us. 262 ing word senses across multiple, independentlyauthored dictionaries. Because of the millions of translations in the dictionaries, a feasible solution to this sense matching problem has to be scalable; because sense matches are imperfect and uncertain, the solution has to be probabilistic. The core contribution of this paper is a principled method for probabilistic sense matching to infer lexical translations between two languages that do not share a translation dictionary. For example, our algorithm can conclude that Basque word ‘udaherri’ is a translation of Maori word ‘koanga’ in Figure 1. Our contributions are as follows: 1. We describe the design and construction of PANDICTIONARY—a novel lexical resource that spans over 200 million pairwise translations in over 200,000 language pairs at 0.8 precision, a four-fold increase when compared to the union of its input translation dictionaries. 2. We introduce SenseUniformPaths, a scalable probabilistic method, based on graph sampling, for inferring lexical translations, which finds 3.5 times more inferred translations at precison 0.9 than the previous best method. 3. We experimentally contrast PANDICTIONARY with the English Wiktionary and show that PANDICTIONARY is from 4.5 to 24 times larger depending on the desired precision. The remainder of this paper is organized as follows. Section 2 describes our earlier work on sense matching (Etzioni et al., 2007). Section 3 describes how the PANDICTIONARY builds on and improves on their approach. Section 4 reports on our experimental results. Section 5 considers related work on lexical translation. The paper concludes in Section 6 with directions for future work. 2 Building a Translation Graph In previous work (Etzioni et al., 2007) we introduced an approach to sense matching that is based on translation graphs (see Figure 1 for an example). Each vertex v ∈V in the graph is an ordered pair (w, l) where w is a word in a language l. Undirected edges in the graph denote translations between words: an edge e ∈E between (w1, l1) and (w2, l2) represents the belief that w1 and w2 share at least one word sense. Construction: The Web hosts a large number of bilingual dictionaries in different languages and several Wiktionaries. Bilingual dictionaries translate words from one language to another, often without distinguishing the intended sense. For example, an Indonesian-English dictionary gives ‘light’ as a translation of the Indonesian word ‘enteng’, but does not indicate whether this means illumination, light weight, light color, or the action of lighting fire. The Wiktionaries (wiktionary.org) are sensedistinguished, multilingual dictionaries created by volunteers collaborating over the Web. A translation graph is constructed by locating these dictionaries, parsing them into a common XML format, and adding the nodes and edges to the graph. Figure 1 shows a fragment of a translation graph, which was constructed from two sets of translations for the word ‘spring’ from an English Wiktionary, and two corresponding entries from a French Wiktionary for ‘printemps’ (spring season) and ‘ressort’ (flexible spring). 
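To make the graph construction concrete, the sketch below shows one way such sense-distinguished entries could be loaded into an in-memory translation graph. The class, the entry format, the exact membership of the two senses, and the ISO language codes (including the Spanish tag for 'primavera') are illustrative assumptions, not the authors' actual implementation.

```python
from collections import defaultdict

class TranslationGraph:
    """Vertices are (word, language) pairs; each undirected edge records the
    sense IDs of the dictionary entries that assert the translation."""

    def __init__(self):
        # vertex -> {neighbor vertex -> set of sense IDs}
        self.adj = defaultdict(lambda: defaultdict(set))

    def add_sense_entry(self, sense_id, translations):
        # A sense-distinguished entry yields a clique: every pair of its
        # translations shares that sense ID.
        for i, u in enumerate(translations):
            for v in translations[i + 1:]:
                self.adj[u][v].add(sense_id)
                self.adj[v][u].add(sense_id)

# Assumed fragment of Figure 1: the season sense of English 'spring'
# (sense ID 1) and the French Wiktionary entry for 'printemps' (sense ID 3).
g = TranslationGraph()
g.add_sense_entry(1, [("spring", "en"), ("printemps", "fr"),
                      ("udaherri", "eu"), ("primavera", "es")])
g.add_sense_entry(3, [("printemps", "fr"), ("spring", "en"),
                      ("koanga", "mi")])
print(sorted(g.adj[("printemps", "fr")].items()))
```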
Translations of the season ‘spring’ have edges labeled with sense ID=1, the flexible coil sense has ID=2, translations of ‘printemps’ have ID=3, and so forth.3 For clarity, we show only a few of the actual vertices and edges; e.g., the figure doesn’t show the edge (ID=1) between ‘udaherri’ and ‘primavera’. Inference: In our previous system we had a simple inference procedure over translation graphs, called TRANSGRAPH, to find translations beyond those provided by any source dictionary. TRANSGRAPH searched for paths in the graph between two vertices and estimated the probability that the path maintains the same word sense along all edges in the path, even when the edges come from different dictionaries. For example, there are several paths between ‘udaherri’ and ‘koanga’ in Figure 1, but all shift from sense ID 1 to 3. The probability that the two words are translations is equivalent to the probability that IDs 1 and 3 represent the same sense. TRANSGRAPH used two formulae to estimate these probabilities. One formula estimates the probability that two multi-lingual dictionary entries represent the same word sense, based on the proportion of overlapping translations for the two entries. For example, most of the translations of 3Sense-distinguished multi-lingual entries give rise to cliques all of which share a common sense ID. 263 French ‘printemps’ are also translations of the season sense of ‘spring’. A second formula is based on triangles in the graph (useful for bilingual dictionaries): a clique of 3 nodes with an edge between each pair of nodes. In such cases, there is a high probability that all 3 nodes share a word sense. Critique: While TRANSGRAPH was the first to present a scalable inference method for lexical translation, it suffers from several drawbacks. Its formulae operate only on local information: pairs of senses that are adjacent in the graph or triangles. It does not incorporate evidence from longer paths when an explicit triangle is not present. Moreover, the probabilities from different paths are combined conservatively (either taking the max over all paths, or using “noisy or” on paths that are completely disjoint, except end points), thus leading to suboptimal precision/recall. In response to this critique, the next section presents an inference algorithm, called SenseUniformPaths (SP), with substantially improved recall at equivalent precision. 3 Translation Inference Algorithms In essence, inference over a translation graph amounts to transitive sense matching: if word A translates to word B, which translates in turn to word C, what is the probability that C is a translation of A? If B is polysemous then C may not share a sense with A. For example, in Figure 2(a) if A is the French word ‘ressort’ (the flexiblecoil sense of spring) and B is the English word ‘spring’, then Slovenian word ‘vzmet’ may or may not be a correct translation of ‘ressort’ depending on whether the edge (B, C) denotes the flexiblecoil sense of spring, the season sense, or another sense. Indeed, given only the knowledge of the path A −B −C we cannot claim anything with certainty regarding A to C. However, if A, B, and C are on a circuit that starts at A, passes through B and C and returns to A, there is a high probability that all nodes on that circuit share a common word sense, given certain restrictions that we enumerate later. Where TRANSGRAPH used evidence from circuits of length 3, we extend this to paths of arbitrary lengths. 
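TRANSGRAPH's overlap-based formula from Section 2 is not spelled out in the text. As a point of contrast with the circuit-based reasoning developed next, the sketch below uses a simple overlap ratio as an illustrative stand-in for its same-sense estimate; the function and the ratio itself are assumptions.

```python
def same_sense_score(translations_a, translations_b):
    """Illustrative stand-in for TRANSGRAPH's first formula: estimate how
    likely two sense-distinguished entries denote the same sense from the
    proportion of translations they share (the exact formula is assumed)."""
    a, b = set(translations_a), set(translations_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

# Most translations of French 'printemps' also appear among the translations
# of the season sense of English 'spring', so the score is close to 1.
spring_season = {"printemps", "udaherri", "primavera", "koanga"}
printemps_entry = {"spring", "udaherri", "primavera", "koanga"}
print(same_sense_score(spring_season, printemps_entry))
```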
To see how this works, let us begin with the simplest circuit, a triangle of three nodes as shown in Figure 2(b). We can be quite certain that ‘vzmet’ shares the sense of coil with both ‘spring’ and ‘ressort’. Our reasoning is as follows: even though both ‘ressort’ and ‘spring’ are polysemous they share only one sense. For a triangle to form we have two choices – (1) either ‘vzmet’ means spring coil, or (2) ‘vzmet’ means both the spring season and jurisdiction, but not spring coil. The latter is possible but such a coincidence is very unlikely, which is why a triangle is strong evidence for the three words to share a sense. As an example of longer paths, our inference algorithms can conclude that in Figure 2(c), both ‘molla’ and ‘vzmet’ have the sense coil, even though no explicit triangle is present. To show this, let us define a translation circuit as follows: Definition 1 A translation circuit from v∗ 1 with sense s∗is a cycle that starts and ends at v∗ 1 with no repeated vertices (other than v∗ 1 at end points). Moreover, the path includes an edge between v∗ 1 and another vertex v∗ 2 that also has sense s∗. All vertices on a translation circuit are mutual translations with high probability, as in Figure 2(c). The edge from ‘spring’ indicates that ‘vzmet’ means either coil or season, while the edge from ‘ressort’ indicates that ‘molla’ means either coil or jurisdiction. The edge from ‘vzmet’ to ‘molla’ indicates that they share a sense, which will happen if all nodes share the sense season or if either ‘vzmet’ has the unlikely combination of coil and jurisdiction (or ‘molla’ has coil and season). We also develop a mathematical model of sense-assignment to words that lets us formally prove these insights. For more details on the theory please refer to our extended version. This paper reports on our novel algorithm and experimental results. These insights suggest a basic version of our algorithm: “given two vertices, v∗ 1 and v∗ 2, that share a sense (say s∗) compute all translation circuits from v∗ 1 in the sense s∗; mark all vertices in the circuits as translations of the sense s∗”. To implement this algorithm we need to decide whether a vertex lies on a translation circuit, which is trickier than it seems. Notice that knowing that v is connected independently to v∗ 1 and v∗ 2 doesn’t imply that there exists a translation circuit through v, because both paths may go through a common node, thus violating of the definition of translation circuit. For example, in Figure 2(d) the Catalan word ‘ploma’ has paths to both spring and ressort, but there is no translation circuit through 264 spring English ressort French vzmet Slovenian spring English ressort French vzmet Slovenian spring English vzmet Slovenian ressort French molla Italian spring English ressort French ploma Catalan Feder German перо Russian spring English ressort French fjäder Swedish penna Italian Feder German (a) (b) (c) (d) (e) season coil jurisdiction coil s* s* s* s* s* ? ? ? ? ? feather coil ? ? Figure 2: Snippets of translation graphs illustrating various inference scenarios. The nodes in question mark represent the nodes in focus for each illustration. For all cases we are trying to infer translations of the flexible coil sense of spring. it. Hence, it will not be considered a translation. This example also illustrates potential errors avoided by our algorithm – here, German word ‘Feder’ mean feather and spring coil, but ‘ploma’ means feather and not the coil. 
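To make Definition 1 concrete, a brute-force membership test for translation circuits might look like the sketch below; the adjacency-map representation is an assumption. The next paragraph explains why the system replaces this exhaustive search with random walks.

```python
def on_translation_circuit(adj, v1, v2, target, max_len=7):
    """Brute-force check of Definition 1: does `target` lie on a cycle that
    starts and ends at v1, repeats no vertex, and uses the v1-v2 edge?
    Shown only to make the definition concrete; exhaustive search like this
    is too slow in practice."""
    def dfs(node, path):
        if len(path) > max_len:
            return False
        for nxt in adj.get(node, ()):
            if nxt == v1 and len(path) > 1 and target in path:
                return True  # circuit closed: v1 - v2 - ... - v1
            if nxt != v1 and nxt not in path and dfs(nxt, path + [nxt]):
                return True
        return False
    return dfs(v2, [v2])
```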
An exhaustive search to find translation circuits would be too slow, so we approximate the solution by a random walk scheme. We start the random walk from v∗ 1 (or v∗ 2) and choose random edges without repeating any vertices in the current path. At each step we check if the current node has an edge to v∗ 2 (or v∗ 1). If it does, then all the vertices in the current path form a translation circuit and, thus, are valid translations. We repeat this random walk many times and keep marking the nodes. In our experiments for each inference task we performed a total of 2,000 random walks (NR in pseudo-code) of max circuit length 7. We chose these parameters based on a development set of 50 inference tasks. Our first experiments with this basic algorithm resulted in a much higher recall than TRANSGRAPH, albeit, at a significantly lower precision. A closer examination of the results revealed two sources of error – (1) errors in source dictionary data, and (2) correlated sense shifts in translation circuits. Below we add two new features to our algorithm to deal with each of these error sources, respectively. 3.1 Errors in Source Dictionaries In practice, source dictionaries contain mistakes and errors occur in processing the dictionaries to create the translation graph. Thus, existence of a single translation circuit is only limited evidence for a vertex as a translation. We wish to exploit the insight that more translation circuits constitute stronger evidence. However, the different circuits may share some edges, and thus the evidence cannot be simply the number of translation circuits. We model the errors in dictionaries by assigning a probability less than 1.0 to each edge4 (pe in the 4In our experiments we used a flat value of 0.6, chosen by pseudo-code). We assume that the probability of an edge being erroneous is independent of the rest of the graph. Thus, a translation graph with possible data errors converts into a distribution over accurate translation graphs. Under this distribution, we can use the probability of existence of a translation circuit through a vertex as the probability that the vertex is a translation. This value captures our insights, since a larger number of translation circuits gives a higher probability value. We sample different graph topologies from our given distribution. Some translation circuits will exist in some of the sampled graphs, but not in others. This, in turn, means that a given vertex v will only be on a circuit for a fraction of the sampled graphs. We take the proportion of samples in which v is on a circuit to be the probability that v is in the translation set. We refer to this algorithm as Unpruned SenseUniformPaths (uSP). 3.2 Avoiding Correlated Sense-shifts The second source of errors are circuits that include a pair of nodes sharing the same polysemy, i.e., having the same pair of senses. A circuit might maintain sense s∗until it reaches a node that has both s∗and a distinct si. The next edge may lead to a node with si, but not s∗, causing an extraction error. The path later shifts back to sense s∗at a second node that also has s∗and si. An example for this is illustrated in Figure 2(e), where both the German and Swedish words mean feather and spring coil. Here, Italian ‘penna’ means only the feather and not the coil. Two nodes that share the same two senses occur frequently in practice. 
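A minimal sketch of the basic random-walk procedure just described, before the two refinements of Sections 3.1 and 3.2 (graph sampling and ambiguity-set pruning) are added. The adjacency format and return value are assumptions; the defaults mirror the NR = 2,000 walks and maximum circuit length 7 reported above.

```python
import random

def mark_by_random_walks(adj, v1, v2, num_walks=2000, max_len=7):
    """Approximate the set of vertices lying on some translation circuit
    from v1 (Definition 1) with repeated self-avoiding random walks.
    When the current node reaches v2 or links to it, the current path closes
    into a circuit via the v2-v1 edge, so its vertices are marked."""
    marked = set()
    for _ in range(num_walks):
        path = [v1]
        node = v1
        while len(path) <= max_len:
            choices = [n for n in adj.get(node, ()) if n not in path]
            if not choices:
                break
            node = random.choice(choices)
            path.append(node)
            if node == v2 or v2 in adj.get(node, ()):
                marked.update(path)   # v1 and v2 are already known translations
                break
    return marked
```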
For example, many languages use the same word for ‘heart’ (the organ) and center; similarly, it is common for languages to use the same word for ‘silver’, the metal and the color. These correlations stem from comparameter tuning on a development set of 50 inference tasks. In future we can use different values for different dictionaries based on our confidence in their accuracy. 265 Figure 3: The set {B, C} has a shared ambiguity - each node has both sense 1 (from the lower clique) and sense 2 (from the upper clique). A circuit that contains two nodes from the same ambiguity set with an intervening node not in that set is likely to create translation errors. mon metaphor and the shared evolutionary roots of some languages. We are able to avoid circuits with this type of correlated sense-shift by automatically identifying ambiguity sets, sets of nodes known to share multiple senses. For instance, in Figure 2(e) ‘Feder’ and ‘fjäder’ form an ambiguity set (shown within dashed lines), as they both mean feather and coil. Definition 2 An ambiguity set A is a set of vertices that all share the same two senses. I.e., ∃s1, s2, with s1 ̸= s2 s.t. ∀v ∈A, sense(v, s1) ∧ sense(v, s2), where sense(v, s) denotes that v has sense s. To increase the precision of our algorithm we prune the circuits that contain two nodes in the same ambiguity set and also have one or more intervening nodes that are not in the ambiguity set. There is a strong likelihood that the intervening nodes will represent a translation error. Ambiguity sets can be detected from the graph topology as follows. Each clique in the graph represents a set of vertices that share a common word sense. When two cliques intersect in two or more vertices, the intersecting vertices share the word sense of both cliques. This may either mean that both cliques represent the same word sense, or that the intersecting vertices form an ambiguity set. A large overlap between two cliques makes the former case more likely; a small overlap makes it more likely that we have found an ambiguity set. Figure 3 illustrates one such computation. All nodes of the clique V1, V2, A, B, C, D share a word sense, and all nodes of the clique B, C, E, F, G, H also share a word sense. The set {B, C} has nodes that have both senses, forming an ambiguity set. We denote the set of ambiguity sets by A in the pseudo-code. Having identified these ambiguity sets, we modify our random walk scheme by keeping track of whether we are entering or leaving an ambiguity set. We prune away all paths that enter the same ambiguity set twice. We name the resulting algorithm SenseUniformPaths (SP), summarized at a high level in Algorithm 1. Comparing Inference Algorithms Our evaluation demonstrated that SP outperforms uSP. Both these algorithms have significantly higher recall than TRANSGRAPH algorithm. The detailed results are presented in Section 4.2. We choose SP as our inference algorithm for all further research, in particular to create PANDICTIONARY. 3.3 Compiling PanDictionary Our goal is to automatically compile PANDICTIONARY, a sense-distinguished lexical translation resource, where each entry is a distinct word sense. Associated with each word sense is a list of translations in multiple languages. We use Wiktionary senses as the base senses for PANDICTIONARY. Recall that SP requires two nodes (v∗ 1 and v∗ 2) for inference. 
We use the Wiktionary source word as v∗ 1 and automatically pick the second word from the set of Wiktionary translations of that sense by choosing a word that is well connected, and, which does not appear in other senses of v∗ 1 (i.e., is expected to share only one sense with v∗ 1). We first run SenseUniformPaths to expand the approximately 50,000 senses in the English Wiktionary. We further expand any senses from the other Wiktionaries that are not yet covered by PANDICTIONARY, and add these to PANDICTIONARY. This results in the creation of the world’s largest multilingual, sense-distinguished translation resource, PANDICTIONARY. It contains a little over 80,000 senses. Its construction takes about three weeks on a 3.4 GHz processor with a 2 GB memory. Algorithm 1 S.P.(G, v∗ 1, v∗ 2, A) 1: parameters NG: no. of graph samples, NR: no. of random walks, pe: prob. of sampling an edge 2: create NG versions of G by sampling each edge independently with probability pe 3: for all i = 1..NG do 4: for all vertices v : rp[v][i] = 0 5: perform NR random walks starting at v∗ 1 (or v∗ 2) and pruning any walk that enters (or exits) an ambiguity set in A twice. All walks that connect to v∗ 2 (or v∗ 1) form a translation circuit. 6: for all vertices v do 7: if(v is on a translation circuit) rp[v][i] = 1 8: return P i rp[v][i] NG as the prob. that v is a translation 266 4 Empirical Evaluation In our experiments we investigate three key questions: (1) which of the three algorithms (TG, uSP and SP) is superior for translation inference (Section 4.2)? (2) how does the coverage of PANDICTIONARY compare with the largest existing multilingual dictionary, the English Wiktionary (Section 4.3)? (3) what is the benefit of inference over the mere aggregation of 631 dictionaries (Section 4.4)? Additionally, we evaluate the inference algorithm on two other dimensions – variation with the degree of polysemy of source word, and variation with original size of the seed translation set. 4.1 Experimental Methodology Ideally, we would like to evaluate a random sample of the more than 1,000 languages represented in PANDICTIONARY.5 However, a high-quality evaluation of translation between two languages requires a person who is fluent in both languages. Such people are hard to find and may not even exist for many language pairs (e.g., Basque and Maori). Thus, our evaluation was guided by our ability to recruit volunteer evaluators. Since we are based in an English speaking country we were able to recruit local volunteers who are fluent in a range of languages and language families, and who are also bilingual in English.6 The experiments in Sections 4.2 and 4.3 test whether translations in a PANDICTIONARY have accurate word senses. We provided our evaluators with a random sample of translations into their native language. For each translation we showed the English source word and gloss of the intended sense. For example, a Dutch evaluator was shown the sense ‘free (not imprisoned)’ together with the Dutch word ‘loslopende’. The instructions were to mark a word as correct if it could be used to express the intended sense in a sentence in their native language. For experiments in Section 4.4 we tested precision of pairwise translations, by having informants in several pairs of languages discuss whether the words in their respective languages can be used for the same sense. 
We use the tags of correct or incorrect to compute the precision: the percentage of correct trans5The distribution of words in PANDICTIONARY is highly non-uniform ranging from 182,988 words in English to 6,154 words in Luxembourgish and 189 words in Tuvalu. 6The languages used was based on the availability of native speakers. This varied between the different experiments, which were conducted at different times. Figure 4: The SenseUniformPaths algorithm (SP) more than doubles the number of correct translations at precision 0.95, compared to a baseline of translations that can be found without inference. lations divided by correct plus incorrect translations. We then order the translations by probability and compute the precision at various probability thresholds. 4.2 Comparing Inference Algorithms Our first evaluation compares our SenseUniformPaths (SP) algorithm (before and after pruning) with TRANSGRAPH on both precision and number of translations. To carry out this comparison, we randomly sampled 1,000 senses from English Wiktionary and ran the three algorithms over them. We evaluated the results on 7 languages – Chinese, Danish, German, Hindi, Japanese, Russian, and Turkish. Each informant tagged 60 random translations inferred by each algorithm, which resulted in 360400 tags per algorithm7. The precision over these was taken as a surrogate for the precision across all the senses. We compare the number of translations for each algorithm at comparable precisions. The baseline is the set of translations (for these 1000 senses) found in the source dictionaries without inference, which has a precision 0.95 (as evaluated by our informants).8 Our results are shown in Figure 4. At this high precision, SP more than doubles the number of baseline translations, finding 5 times as many inferred translations (in black) as TG. Indeed, both uSP and SP massively outperform TG. SP is consistently better than uSP, since it performs better for polysemous words, due to its pruning based on ambiguity sets. We conclude 7Some translations were marked as “Don’t know”. 8Our informants tended to underestimate precision, often marking correct translations in minor senses of a word as incorrect. 267 0.5 0.6 0.7 0.8 0.9 1 0.0 4.0 8.0 12.0 16.0 Precision Translations in Millions PanDictionary English Wiktionary Figure 5: Precision vs. coverage curve for PANDICTIONARY. It quadruples the size of the English Wiktionary at precision 0.90, is more than 8 times larger at precision 0.85 and is almost 24 times the size at precision 0.7. that SP is the best inference algorithm and employ it for PANDICTIONARY construction. 4.3 Comparison with English Wiktionary We now compare the coverage of PANDICTIONARY with the English Wiktionary at varying levels of precision. The English Wiktionary is the largest Wiktionary with a total of 403,413 translations. It is also more reliable than some other Wiktionaries in making word sense distinctions. In this study we use only the subset of PANDICTIONARY that was computed starting from the English Wiktionary senses. Thus, this subsection under-reports PANDICTIONARY’s coverage. To evaluate a huge resource such as PANDICTIONARY we recruited native speakers of 14 languages – Arabic, Bulgarian, Danish, Dutch, German, Hebrew, Hindi, Indonesian, Japanese, Korean, Spanish, Turkish, Urdu, and Vietnamese. We randomly sampled 200 translations per language, which resulted in about 2,500 tags. Figure 5 shows the total number of translations in PANDICTIONARY in senses from the English Wiktionary. 
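The precision computation described in Section 4.1 (correct over correct plus incorrect, swept over probability thresholds) might be implemented roughly as follows; the tuple format for tagged translations is an assumption.

```python
def precision_at_thresholds(tagged, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """tagged: iterable of (probability, tag) pairs, where tag is 'correct',
    'incorrect', or "don't know".  Returns precision at each probability
    threshold, ignoring "don't know" tags as in the evaluation above."""
    results = {}
    for t in thresholds:
        kept = [tag for p, tag in tagged if p >= t and tag != "don't know"]
        correct = sum(1 for tag in kept if tag == "correct")
        results[t] = correct / len(kept) if kept else None
    return results

print(precision_at_thresholds([(0.95, "correct"), (0.85, "correct"),
                               (0.75, "incorrect"), (0.6, "don't know")]))
```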
At precision 0.90, PANDICTIONARY has 1.8 million translations, 4.5 times as many as the English Wiktionary. We also compare the coverage of PANDICTIONARY with that of the English Wiktionary in terms of languages covered. Table 1 reports, for each resource, the number of languages that have a minimum number of distinct words in the resource. PANDICTIONARY has 1.4 times as many languages with at least 1,000 translations at precision 0.90 and more than twice at precision 0.7. These observations reaffirm our faith in the panlingual nature of the resource. PANDICTIONARY’s ability to expand the lists of translations provided by the English Wiktionary is most pronounced for senses with a small num0.75 0.8 0.85 0.9 0.95 1 2 3,4 >4 Precision Avg precision 0.90 Avg precision 0.85 Polysemy of the English source word 3-4 Figure 6: Variation of precision with the degree of polysemy of the source English word. The precision decreases as polysemy increases, still maintaining reasonably high values. ber of translations. For example, at precision 0.90, senses that originally had 3 to 6 translations are increased 5.3 times in size. The increase is 2.2 times when the original sense size is greater than 20. For closer analysis we divided the English source words (v∗ 1) into different bins based on the number of senses that English Wiktionary lists for them. Figure 6 plots the variation of precision with this degree of polysemy. We find that translation quality decreases as degree of polysemy increases, but this decline is gradual, which suggests that SP algorithm is able to hold its ground well in difficult inference tasks. 4.4 Comparison with All Source Dictionaries We have shown that PANDICTIONARY has much broader coverage than the English Wiktionary, but how much of this increase is due to the inference algorithm versus the mere aggregation of hundreds of translation dictionaries in PANDICTIONARY? Since most bilingual dictionaries are not sensedistinguished, we ignore the word senses and count the number of distinct (word1, word2) translation pairs. We evaluated the precision of word-word translations by a collaborative tagging scheme, with two native speakers of different languages, who are both bi-lingual in English. For each suggested translation they discussed the various senses of words in their respective languages and tag a translation correct if they found some sense that is shared by both words. For this study we tagged 7 language pairs: Hindi-Hebrew, # languages with distinct words ≥1000 ≥100 ≥1 English Wiktionary 49 107 505 PanDictionary (0.90) 67 146 608 PanDictionary (0.85) 75 175 794 PanDictionary (0.70) 107 607 1066 Table 1: PANDICTIONARY covers substantially more languages than the English Wiktionary. 268 0 50 100 150 200 250 EW 631D PD(0.9) PD(0.85) PD(0.8) Inferred transl. Direct transl. Translations (in millions) Figure 7: The number of distinct word-word translation pairs from PANDICTIONARY is several times higher than the number of translation pairs in the English Wiktionary (EW) or in all 631 source dictionaries combined (631 D). A majority of PANDICTIONARY translations are inferred by combining entries from multiple dictionaries. Japanese-Russian, Chinese-Turkish, JapaneseGerman, Chinese-Russian, Bengali-German, and Hindi-Turkish. Figure 7 compares the number of word-word translation pairs in the English Wiktionary (EW), in all 631 source dictionaries (631 D), and in PANDICTIONARY at precisions 0.90, 0.85, and 0.80. 
PANDICTIONARY increases the number of wordword translations by 73% over the source dictionary translations at precision 0.90 and increases it by 2.7 times at precision 0.85. PANDICTIONARY also adds value by identifying the word sense of the translation, which is not given in most of the source dictionaries. 5 Related Work Because we are considering a relatively new problem (automatically building a panlingual translation resource) there is little work that is directly related to our own. The closest research is our previous work on TRANSGRAPH algorithm (Etzioni et al., 2007). Our current algorithm outperforms the previous state of the art by 3.5 times at precision 0.9 (see Figure 4). Moreover, we compile this in a dictionary format, thus considerably reducing the response time compared to TRANSGRAPH, which performed inference at query time. There has been considerable research on methods to acquire translation lexicons from either MRDs (Neff and McCord, 1990; Helmreich et al., 1993; Copestake et al., 1994) or from parallel text (Gale and Church, 1991; Fung, 1995; Melamed, 1997; Franz et al., 2001), but this has generally been limited to a small number of languages. Manually engineered dictionaries such as EuroWordNet (Vossen, 1998) are also limited to a relatively small set of languages. There is some recent work on compiling dictionaries from monolingual corpora, which may scale to several language pairs in future (Haghighi et al., 2008). Little work has been done in combining multiple dictionaries in a way that maintains word senses across dictionaries. Gollins and Sanderson (2001) explored using triangulation between alternate pivot languages in cross-lingual information retrieval. Their triangulation essentially mixes together circuits for all word senses, hence, is unable to achieve high precision. Dyvik’s “semantic mirrors” uses translation paths to tease apart distinct word senses from inputs that are not sense-distinguished (Dyvik, 2004). However, its expensive processing and reliance on parallel corpora would not scale to large numbers of languages. Earlier (Knight and Luk, 1994) discovered senses of Spanish words by matching several English translations to a WordNet synset. This approach applies only to specific kinds of bilingual dictionaries, and also requires a taxonomy of synsets in the target language. Random walks, graph sampling and Monte Carlo simulations are popular in literature, though, to our knowledge, none have applied these to our specific problems (Henzinger et al., 1999; Andrieu et al., 2003; Karger, 1999). 6 Conclusions We have described the automatic construction of a unique multilingual translation resource, called PANDICTIONARY, by performing probabilistic inference over the translation graph. Overall, the construction process consists of large scale information extraction over the Web (parsing dictionaries), combining it into a single resource (a translation graph), and then performing automated reasoning over the graph (SenseUniformPaths) to yield a much more extensive and useful knowledge base. We have shown that PANDICTIONARY has more coverage than any other existing bilingual or multilingual dictionary. Even at the high precision of 0.90, PANDICTIONARY more than quadruples the size of the English Wiktionary, the largest available multilingual resource today. We plan to make PANDICTIONARY available to the research community, and also to the Wiktionary community in an effort to bolster their efforts. 
PANDICTIONARY entries can suggest new translations for volunteers to add to Wiktionary entries, particularly if combined with an intelligent editing tool (e.g., (Hoffmann et al., 2009)). 269 Acknowledgments This research was supported by a gift from the Utilika Foundation to the Turing Center at University of Washington. We acknowledge Paul Beame, Nilesh Dalvi, Pedro Domingos, Rohit Khandekar, Daniel Lowd, Parag, Jonathan Pool, Hoifung Poon, Vibhor Rastogi, Gyanit Singh for fruitful discussions and insightful comments on the research. We thank the language experts who donated their time and language expertise to evaluate our systems. We also thank the anynomous reviewers of the previous drafts of this paper for their valuable suggestions in improving the evaluation and presentation. References E. Adar, M. Skinner, and D. Weld. 2009. Information arbitrage in multi-lingual Wikipedia. In Procs. of Web Search and Data Mining(WSDM 2009). C. Andrieu, N. De Freitas, A. Doucet, and M. Jordan. 2003. An Introduction to MCMC for Machine Learning. Machine Learning, 50:5–43. F. Bond, S. Oepen, M. Siegel, A. Copestake, and D D. Flickinger. 2005. Open source machine translation with DELPH-IN. In Open-Source Machine Translation Workshop at MT Summit X. J. Carbonell, S. Klein, D. Miller, M. Steinbaum, T. Grassiany, and J. Frey. 2006. Context-based machine translation. In AMTA. A. Copestake, T. Briscoe, P. Vossen, A. Ageno, I. Castellon, F. Ribas, G. Rigau, H. Rodriquez, and A. Samiotou. 1994. Acquisition of lexical translation relations from MRDs. Machine Translation, 3(3–4):183–219. H. Dyvik. 2004. Translation as semantic mirrors: from parallel corpus to WordNet. Language and Computers, 49(1):311–326. O. Etzioni, K. Reiter, S. Soderland, and M. Sammer. 2007. Lexical translation with application to image search on the Web. In Machine Translation Summit XI. M. Franz, S. McCarly, and W. Zhu. 2001. EnglishChinese information retrieval at IBM. In Proceedings of TREC 2001. P. Fung. 1995. A pattern matching method for finding noun and proper noun translations from noisy parallel corpora. In Proceedings of ACL-1995. W. Gale and K.W. Church. 1991. A Program for Aligning Sentences in Bilingual Corpora. In Proceedings of ACL-1991. T. Gollins and M. Sanderson. 2001. Improving cross language retrieval with triangulated translation. In SIGIR. Raymond G. Gordon, Jr., editor. 2005. Ethnologue: Languages of the World (Fifteenth Edition). SIL International. A. Haghighi, P. Liang, T. Berg-Kirkpatrick, and D. Klein. 2008. Learning bilingual lexicons from monolingual corpora. In ACL. S. Helmreich, L. Guthrie, and Y. Wilks. 1993. The use of machine readable dictionaries in the Pangloss project. In AAAI Spring Symposium on Building Lexicons for Machine Translation. Monika R. Henzinger, Allan Heydon, Michael Mitzenmacher, and Marc Najork. 1999. Measuring index quality using random walks on the web. In WWW. R. Hoffmann, S. Amershi, K. Patel, F. Wu, J. Fogarty, and D. S. Weld. 2009. Amplifying community content creation with mixed-initiative information extraction. In ACM SIGCHI (CHI2009). D. R. Karger. 1999. A randomized fully polynomial approximation scheme for the all-terminal network reliability problem. SIAM Journal of Computation, 29(2):492–514. K. Knight and S. Luk. 1994. Building a large-scale knowledge base for machine translation. In AAAI. I.D. Melamed. 1997. A Word-to-Word Model of Translational Equivalence. In Proceedings of ACL1997 and EACL-1997, pages 490–497. M. Neff and M. McCord. 1990. 
Acquiring lexical data from machine-readable dictionary resources for machine translation. In 3rd Intl Conference on Theoretical and Methodological Issues in Machine Translation of Natural Language. P. Vossen, editor. 1998. EuroWordNet: A multilingual database with lexical semantic networds. Kluwer Academic Publishers. 270
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 271–279, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Metric-based Framework for Automatic Taxonomy Induction Hui Yang Language Technologies Institute School of Computer Science Carnegie Mellon University [email protected] Jamie Callan Language Technologies Institute School of Computer Science Carnegie Mellon University [email protected] Abstract This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on ontology metric, a score indicating semantic distance; and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering through incorporating heterogeneous features. The flexible design of the framework allows a further study on which features are the best for the task under various conditions. The experiments not only show that our system achieves higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness. 1 Introduction Automatic taxonomy induction is an important task in the fields of Natural Language Processing, Knowledge Management, and Semantic Web. It has been receiving increasing attention because semantic taxonomies, such as WordNet (Fellbaum, 1998), play an important role in solving knowledge-rich problems, including question answering (Harabagiu et al., 2003) and textual entailment (Geffet and Dagan, 2005). Nevertheless, most existing taxonomies are manually created at great cost. These taxonomies are rarely complete; it is difficult to include new terms in them from emerging or rapidly changing domains. Moreover, manual taxonomy construction is time-consuming, which may make it unfeasible for specialized domains and personalized tasks. Automatic taxonomy induction is a solution to augment existing resources and to produce new taxonomies for such domains and tasks. Automatic taxonomy induction can be decomposed into two subtasks: term extraction and relation formation. Since term extraction is relatively easy, relation formation becomes the focus of most research on automatic taxonomy induction. In this paper, we also assume that terms in a taxonomy are given and concentrate on the subtask of relation formation. Existing work on automatic taxonomy induction has been conducted under a variety of names, such as ontology learning, semantic class learning, semantic relation classification, and relation extraction. The approaches fall into two main categories: pattern-based and clusteringbased. Pattern-based approaches define lexicalsyntactic patterns for relations, and use these patterns to discover instances of relations. Clustering-based approaches hierarchically cluster terms based on similarities of their meanings usually represented by a vector of quantifiable features. Pattern-based approaches are known for their high accuracy in recognizing instances of relations if the patterns are carefully chosen, either manually (Berland and Charniak, 1999; Kozareva et al., 2008) or via automatic bootstrapping (Hearst, 1992; Widdows and Dorow, 2002; Girju et al., 2003). The approaches, however, suffer from sparse coverage of patterns in a given corpus. 
Recent studies (Etzioni et al., 2005; Kozareva et al., 2008) show that if the size of a corpus, such as the Web, is nearly unlimited, a pattern has a higher chance to explicitly appear in the corpus. However, corpus size is often not that large; hence the problem still exists. Moreover, since patterns usually extract instances in pairs, the approaches suffer from the problem of inconsistent concept chains after connecting pairs of instances to form taxonomy hierarchies. Clustering-based approaches have a main advantage that they are able to discover relations 271 which do not explicitly appear in text. They also avoid the problem of inconsistent chains by addressing the structure of a taxonomy globally from the outset. Nevertheless, it is generally believed that clustering-based approaches cannot generate relations as accurate as pattern-based approaches. Moreover, their performance is largely influenced by the types of features used. The common types of features include contextual (Lin, 1998), co-occurrence (Yang and Callan, 2008), and syntactic dependency (Pantel and Lin, 2002; Pantel and Ravichandran, 2004). So far there is no systematic study on which features are the best for automatic taxonomy induction under various conditions. This paper presents a metric-based taxonomy induction framework. It combines the strengths of both pattern-based and clustering-based approaches by incorporating lexico-syntactic patterns as one type of features in a clustering framework. The framework integrates contextual, co-occurrence, syntactic dependency, lexical-syntactic patterns, and other features to learn an ontology metric, a score indicating semantic distance, for each pair of terms in a taxonomy; it then incrementally clusters terms based on their ontology metric scores. The incremental clustering is transformed into an optimization problem based on two assumptions: minimum evolution and abstractness. The flexible design of the framework allows a further study of the interaction between features and relations, as well as that between features and term abstractness. 2 Related Work There has been a substantial amount of research on automatic taxonomy induction. As we mentioned earlier, two main approaches are patternbased and clustering-based. Pattern-based approaches are the main trend for automatic taxonomy induction. Though suffering from the problems of sparse coverage and inconsistent chains, they are still popular due to their simplicity and high accuracy. They have been applied to extract various types of lexical and semantic relations, including is-a, part-of, sibling, synonym, causal, and many others. Pattern-based approaches started from and still pay a great deal of attention to the most common is-a relations. Hearst (1992) pioneered using a hand crafted list of hyponym patterns as seeds and employing bootstrapping to discover is-a relations. Since then, many approaches (Mann, 2002; Etzioni et al., 2005; Snow et al., 2005) have used Hearst-style patterns in their work on is-a relations. For instance, Mann (2002) extracted is-a relations for proper nouns by Hearststyle patterns. Pantel et al. (2004) extended is-a relation acquisition towards terascale, and automatically identified hypernym patterns by minimal edit distance. Another common relation is sibling, which describes the relation of sharing similar meanings and being members of the same class. Terms in sibling relations are also known as class members or similar terms. 
Inspired by the conjunction and appositive structures, Riloff and Shepherd (1997), Roark and Charniak (1998) used cooccurrence statistics in local context to discover sibling relations. The KnowItAll system (Etzioni et al., 2005) extended the work in (Hearst, 1992) and bootstrapped patterns on the Web to discover siblings; it also ranked and selected the patterns by statistical measures. Widdows and Dorow (2002) combined symmetric patterns and graph link analysis to discover sibling relations. Davidov and Rappoport (2006) also used symmetric patterns for this task. Recently, Kozareva et al. (2008) combined a double-anchored hyponym pattern with graph structure to extract siblings. The third common relation is part-of. Berland and Charniak (1999) used two meronym patterns to discover part-of relations, and also used statistical measures to rank and select the matching instances. Girju et al. (2003) took a similar approach to Hearst (1992) for part-of relations. Other types of relations that have been studied by pattern-based approaches include questionanswer relations (such as birthdates and inventor) (Ravichandran and Hovy, 2002), synonyms and antonyms (Lin et al., 2003), general purpose analogy (Turney et al., 2003), verb relations (including similarity, strength, antonym, enablement and temporal) (Chklovski and Pantel, 2004), entailment (Szpektor et al., 2004), and more specific relations, such as purpose, creation (Cimiano and Wenderoth, 2007), LivesIn, and EmployedBy (Bunescu and Mooney , 2007). The most commonly used technique in pattern-based approaches is bootstrapping (Hearst, 1992; Etzioni et al., 2005; Girju et al., 2003; Ravichandran and Hovy, 2002; Pantel and Pennacchiotti, 2006). It utilizes a few man-crafted seed patterns to extract instances from corpora, then extracts new patterns using these instances, and continues the cycle to find new instances and new patterns. It is effective and scalable to large datasets; however, uncontrolled bootstrapping 272 soon generates undesired instances once a noisy pattern brought into the cycle. To aid bootstrapping, methods of pattern quality control are widely applied. Statistical measures, such as point-wise mutual information (Etzioni et al., 2005; Pantel and Pennacchiotti, 2006) and conditional probability (Cimiano and Wenderoth, 2007), have been shown to be effective to rank and select patterns and instances. Pattern quality control is also investigated by using WordNet (Girju et al., 2006), graph structures built among terms (Widdows and Dorow, 2002; Kozareva et al., 2008), and pattern clusters (Davidov and Rappoport, 2008). Clustering-based approaches usually represent word contexts as vectors and cluster words based on similarities of the vectors (Brown et al., 1992; Lin, 1998). Besides contextual features, the vectors can also be represented by verb-noun relations (Pereira et al., 1993), syntactic dependency (Pantel and Ravichandran, 2004; Snow et al., 2005), co-occurrence (Yang and Callan, 2008), conjunction and appositive features (Caraballo, 1999). More work is described in (Buitelaar et al., 2005; Cimiano and Volker, 2005). Clustering-based approaches allow discovery of relations which do not explicitly appear in text. Pantel and Pennacchiotti (2006), however, pointed out that clustering-based approaches generally fail to produce coherent cluster for small corpora. In addition, clustering-based approaches had only applied to solve is-a and sibling relations. 
Many clustering-based approaches face the challenge of appropriately labeling non-leaf clusters. The labeling amplifies the difficulty in creation and evaluation of taxonomies. Agglomerative clustering (Brown et al., 1992; Caraballo, 1999; Rosenfeld and Feldman, 2007; Yang and Callan, 2008) iteratively merges the most similar clusters into bigger clusters, which need to be labeled. Divisive clustering, such as CBC (Clustering By Committee) which constructs cluster centroids by averaging the feature vectors of a subset of carefully chosen cluster members (Pantel and Lin, 2002; Pantel and Ravichandran, 2004), also need to label the parents of split clusters. In this paper, we take an incremental clustering approach, in which terms and relations are added into a taxonomy one at a time, and their parents are from the existing taxonomy. The advantage of the incremental approach is that it eliminates the trouble of inventing cluster labels and concentrates on placing terms in the correct positions in a taxonomy hierarchy. The work by Snow et al. (2006) is the most similar to ours because they also took an incremental approach to construct taxonomies. In their work, a taxonomy grows based on maximization of conditional probability of relations given evidence; while in our work based on optimization of taxonomy structures and modeling of term abstractness. Moreover, our approach employs heterogeneous features from a wide range; while their approach only used syntactic dependency. We compare system performance between (Snow et al., 2006) and our framework in Section 5. 3 The Features The features used in this work are indicators of semantic relations between terms. Given two input terms y x c c , , a feature is defined as a function generating a single numeric score ∈ ) , ( y x c c h ℝ or a vector of numeric scores ∈ ) , ( y x c c h ℝn. The features include contextual, co-occurrence, syntactic dependency, lexicalsyntactic patterns, and miscellaneous. The first set of features captures contextual information of terms. According to Distributional Hypothesis (Harris, 1954), words appearing in similar contexts tend to be similar. Therefore, word meanings can be inferred from and represented by contexts. Based on the hypothesis, we develop the following features: (1) Global Context KL-Divergence: The global context of each input term is the search results collected through querying search engines against several corpora (Details in Section 5.1). It is built into a unigram language model without smoothing for each term. This feature function measures the Kullback-Leibler divergence (KL divergence) between the language models associated with the two inputs. (2) Local Context KL-Divergence: The local context is the collection of all the left two and the right two words surrounding an input term. Similarly, the local context is built into a unigram language model without smoothing for each term; the feature function outputs KL divergence between the models. The second set of features is co-occurrence. In our work, co-occurrence is measured by pointwise mutual information between two terms: ) ( ) ( ) , ( log ) , ( y x y x y x c Count c Count c c Count c c pmi = where Count(.) is defined as the number of documents or sentences containing the term(s); or n as in “Results 1-10 of about n for term” appearing on the first page of Google search results for a term or the concatenation of a term pair. 
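A minimal sketch of the point-wise mutual information feature just defined, with the count source left as a parameter; the three Count(.) variants actually used are listed in the next sentence. The function names and toy counts are assumptions.

```python
import math

def pmi(term_x, term_y, count):
    """Point-wise mutual information between two terms, following the formula
    above: log( Count(x, y) / (Count(x) * Count(y)) ).  `count` is any callable
    returning the number of documents, sentences, or search hits containing
    its argument(s)."""
    joint = count((term_x, term_y))
    cx, cy = count(term_x), count(term_y)
    if joint == 0 or cx == 0 or cy == 0:
        return float("-inf")
    return math.log(joint / (cx * cy))

# Toy example with made-up document counts.
toy_counts = {"car": 120, "engine": 80, ("car", "engine"): 30}
print(pmi("car", "engine", lambda key: toy_counts.get(key, 0)))
```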
Based 273 on different definitions of Count(.), we have (3) Document PMI, (4) Sentence PMI, and (5) Google PMI as the co-occurrence features. The third set of features employs syntactic dependency analysis. We have (6) Minipar Syntactic Distance to measure the average length of the shortest syntactic paths (in the first syntactic parse tree returned by Minipar1) between two terms in sentences containing them, (7) Modifier Overlap, (8) Object Overlap, (9) Subject Overlap, and (10) Verb Overlap to measure the number of overlaps between modifiers, objects, subjects, and verbs, respectively, for the two terms in sentences containing them. We use Assert2 to label the semantic roles. The fourth set of features is lexical-syntactic patterns. We have (11) Hypernym Patterns based on patterns proposed by (Hearst, 1992) and (Snow et al., 2005), (12) Sibling Patterns which are basically conjunctions, and (13) Part-of Patterns based on patterns proposed by (Girju et al., 2003) and (Cimiano and Wenderoth, 2007). Table 1 lists all patterns. Each feature function returns a vector of scores for two input terms, one score per pattern. A score is 1 if two terms match a pattern in text, 0 otherwise. The last set of features is miscellaneous. We have (14) Word Length Difference to measure the length difference between two terms, and (15) Definition Overlap to measure the number of word overlaps between the term definitions obtained by querying Google with “define:term”. These heterogeneous features vary from simple statistics to complicated syntactic dependency features, basic word length to comprehensive Web-based contextual features. The flexible design of our learning framework allows us to use all of them, and even allows us to use different sets of them under different conditions, for instance, different types of relations and different abstraction levels. We study the interaction be 1 http://www.cs.ualberta.ca/lindek/minipar.htm. 2 http://cemantix.org/assert. tween features and relations and that between features and abstractness in Section 5. 4 The Metric-based Framework This section presents the metric-based framework which incrementally clusters terms to form taxonomies. By minimizing the changes of taxonomy structures and modeling term abstractness at each step, it finds the optimal position for each term in a taxonomy. We first introduce definitions, terminologies and assumptions about taxonomies; then, we formulate automatic taxonomy induction as a multi-criterion optimization and solve it by a greedy algorithm; lastly, we show how to estimate ontology metrics. 4.1 Taxonomies, Ontology Metric, Assumptions, and Information Functions We define a taxonomy T as a data model that represents a set of terms C and a set of relations R between these terms. T can be written as T(C,R). Note that for the subtask of relation formation, we assume that the term set C is given. A full taxonomy is a tree containing all the terms in C. A partial taxonomy is a tree containing only a subset of terms in C. In our framework, automatic taxonomy induction is the process to construct a full taxonomy Tˆ given a set of terms C and an initial partial taxonomy ) , ( 0 0 0 R S T , where C S ⊆ 0 . Note that T0 is possibly empty. The process starts from the initial partial taxonomy T0 and randomly adds terms from C to T0 one by one, until a full taxonomy is formed, i.e., all terms in C are added. Ontology Metric We define an ontology metric as a distance measure between two terms (cx,cy) in a taxonomy T(C,R). 
Formally, it is a function $d: C \times C \rightarrow \mathbb{R}^+$, where C is the set of terms in T. An ontology metric d on a taxonomy T with edge weights w for any term pair $(c_x, c_y) \in C$ is the sum of all edge weights along the shortest path between the pair:

$$d_{T,w}(c_x, c_y) = \sum_{e \in P(x,y)} w(e)$$

where $P(x,y)$ is the set of edges defining the shortest path from term $c_x$ to $c_y$. Figure 1 illustrates ontology metrics for a 5-node taxonomy. Section 4.3 presents the details of learning ontology metrics.

Figure 1. Illustration of Ontology Metric.

Table 1. Lexico-Syntactic Patterns.
Hypernym Patterns: NPx (,)? and/or other NPy; such NPy as NPx; NPy (,)? such as NPx; NPy (,)? including NPx; NPy (,)? especially NPx; NPy like NPx; NPy called NPx; NPx is a/an NPy; NPx, a/an NPy.
Sibling Patterns: NPx and/or NPy.
Part-of Patterns: NPx of NPy; NPy's NPx; NPy has/had/have NPx; NPy is made (up)? of NPx; NPy comprises NPx; NPy consists of NPx.

Information Functions
The amount of information in a taxonomy T is measured and represented by an information function Info(T). An information function is defined as the sum of the ontology metrics among a set of term pairs. The function can be defined over a taxonomy, or on a single level of a taxonomy. For a taxonomy T(C,R), we define its information function as:

$$Info(T) = \sum_{c_x, c_y \in C,\; x < y} d(c_x, c_y) \qquad (1)$$

Similarly, we define the information function for an abstraction level $L_i$ as:

$$Info_i(L_i) = \sum_{c_x, c_y \in L_i,\; x < y} d(c_x, c_y) \qquad (2)$$

where $L_i$ is the subset of terms lying at the ith level of a taxonomy T. For example, in Figure 1, node 1 is at level L1, and nodes 2 and 5 are at level L2.

Assumptions
Given the above definitions about taxonomies, we make the following assumptions:

Minimum Evolution Assumption. Inspired by the minimum evolution tree selection criterion widely used in phylogeny (Hendy and Penny, 1985), we assume that a good taxonomy not only minimizes the overall semantic distance among the terms but also avoids dramatic changes. Construction of a full taxonomy proceeds by adding terms one at a time, which yields a series of partial taxonomies. After each term is added, the current taxonomy $T_{n+1}$ is the one that introduces the least change in information relative to the previous taxonomy $T_n$:

$$T_{n+1} = \arg\min_{T'} \Delta Info(T', T_n)$$

where the information change function is $\Delta Info(T_a, T_b) = |Info(T_a) - Info(T_b)|$.

Abstractness Assumption. In a taxonomy, concrete concepts usually lie at the bottom of the hierarchy while abstract concepts often occupy the intermediate and top levels. Concrete concepts often represent physical entities, such as "basketball" and "mercury pollution", while abstract concepts, such as "science" and "economy", do not have a physical form, so we must imagine their existence. This difference suggests that the two kinds of concepts need to be treated differently in taxonomy induction. Hence we assume that terms at the same abstraction level have common characteristics and share the same Info(.) function. We also assume that terms at different abstraction levels have different characteristics; hence they do not necessarily share the same Info(.) function. That is to say, for every concept $c \in T$ and abstraction level $L_i \subset T$, $c \in L_i \Rightarrow c$ uses $Info_i(\cdot)$.
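The ontology metric and the information functions of Equations (1)-(2) reduce to shortest-path sums over a weighted tree. The sketch below computes them directly, assuming a child-to-parent pointer representation, unit edge weights, and one plausible reading of the 5-node taxonomy in Figure 1 (nodes 3 and 4 as children of node 2); all of these are assumptions for illustration only.

```python
import itertools

def path_to_root(node, parent):
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def ontology_metric(x, y, parent, weight=lambda child, par: 1.0):
    """d(x, y): sum of edge weights along the unique path between two nodes
    of a taxonomy tree given as child -> parent pointers."""
    px, py = path_to_root(x, parent), path_to_root(y, parent)
    common = set(px) & set(py)
    lca = next(n for n in px if n in common)   # lowest common ancestor
    dist = 0.0
    for chain in (px, py):
        for child in itertools.takewhile(lambda n: n != lca, chain):
            dist += weight(child, parent[child])
    return dist

def info(terms, parent):
    """Info over a term set (Equations (1)-(2)): sum of d over unordered pairs."""
    return sum(ontology_metric(a, b, parent)
               for a, b in itertools.combinations(terms, 2))

# Assumed 5-node taxonomy: node 1 is the root, 2 and 5 are its children,
# and 3 and 4 are children of 2.
parent = {2: 1, 5: 1, 3: 2, 4: 2}
print(ontology_metric(3, 5, parent), info([2, 3, 4, 5], parent))
```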
uses i i Info c L c ⇒ ∈ 4.2 Problem Formulation The Minimum Evolution Objective Based on the minimum evolution assumption, we define the goal of taxonomy induction is to find the optimal full taxonomy Tˆ such that the information changes are the least since the initial partial taxonomy T0, i.e., to find: ) , ( min arg ˆ ' 0 ' T T Info T T ∆ = (3) where ' T is a full taxonomy, i.e., the set of terms in ' T equals C. To find the optimal solution for Equation (3), Tˆ , we need to find the optimal term set Cˆ and the optimal relation set Rˆ . Since the optimal term set for a full taxonomy is always C, the only unknown part left is Rˆ . Thus, Equation (3) can be transformed equivalently into: )) , ( ), , ( ( min arg ˆ 0 0 0 ' ' ' R S T R C T Info R R ∆ = Note that in the framework, terms are added incrementally into a taxonomy. Each term insertion yields a new partial taxonomy T. By the minimum evolution assumption, the optimal next partial taxonomy is one gives the least information change. Therefore, the updating function for the set of relations 1 + n R after a new term z is inserted can be calculated as: )) , ( ), }, { ( ( min arg ˆ ' ' n n n R R S T R z S T Info R ∪ ∆ = By plugging in the definition of the information change function (.,.) Info ∆ in Section 4.1 and Equation (1), the updating function becomes: |) , ( ) , ( | min arg ˆ , } { , ' ∑ ∑ ∈ ∪ ∈ − = n S y c x c y x z n S y c x c y x R c c d c c d R The above updating function can be transformed into a minimization problem: y x c c d c c d u c c d c c d u u z n S y c x c y x n S y c x c y x n S y c x c y x z n S y c x c y x < − ≤ − ≤ ∑ ∑ ∑ ∑ ∪ ∈ ∈ ∈ ∪ ∈ } { , , , } { , ) , ( ) , ( ) , ( ) , ( subject to min The minimization follows the minimum evolution assumption; hence we call it the minimum evolution objective. 275 The Abstractness Objective The abstractness assumption suggests that term abstractness should be modeled explicitly by learning separate information functions for terms at different abstraction levels. We approximate an information function by a linear interpolation of some underlying feature functions. Each abstraction level Li is characterized by its own information function Infoi(.). The least square fit of Infoi(.) is: . | ) ( | min 2 i T i i i H W L Info − By plugging Equation (2) and minimizing over every abstraction level, we have: 2 , , , )) , ( ) , ( ( min y x j i j j i i i L y c x c y x c c h w c c d ∑ ∑ ∑ − ∈ where j ih , (.,.) is the jth underlying feature function for term pairs at level Li, j i w , is the weight for j ih , (.,.). This minimization follows the abstractness assumption; hence we call it the abstractness objective. The Multi-Criterion Optimization Algorithm We propose that both minimum evolution and abstractness objectives need to be satisfied. To optimize multiple criteria, the Pareto optimality needs to be satisfied (Boyd and Vandenberghe, 2004). We handle this by introducing   0,1 to control the contribution of each objective. The multi-criterion optimization function is: y x c c h w c c d v c c d c c d u c c d c c d u v u y x j i j j i i L c c y x z S c c y x S c c y x S c c y x z S c c y x i y x n y x n y x n y x n y x < − = − ≤ − ≤ − + ∑ ∑∑ ∑ ∑ ∑ ∑ ∈ ∪ ∈ ∈ ∈ ∪ ∈ 2 )) , ( ) , ( ( ) , ( ) , ( ) , ( ) , ( subject to ) 1( min , , , } { , , , } { , λ λ The above optimization can be solved by a greedy optimization algorithm. At each term insertion step, it produces a new partial taxonomy by adding to the existing partial taxonomy a new term z, and a new set of relations R(z,.). 
z is attached to every nodes in the existing partial taxonomy; and the algorithm selects the optimal position indicated by R(z,.), which minimizes the multicriterion objective function. The algorithm is: ); , ( )}; ) 1( ( min {arg ; \ R S T v u R R {z} S S S C z (z,.) R Output foreach λ λ − + ∪ → ∪ → ∈ The above algorithm presents a general incremental clustering procedure to construct taxonomies. By minimizing the taxonomy structure changes and modeling term abstractness at each step, it finds the optimal position of each term in the taxonomy hierarchy. 4.3 Estimating Ontology Metric Learning a good ontology metric is important for the multi-criterion optimization algorithm. In this work, the estimation and prediction of ontology metric are achieved by ridge regression (Hastie et al., 2001). In the training data, an ontology metric d(cx,cy) for a term pair (cx,cy) is generated by assuming every edge weight as 1 and summing up all the edge weights along the shortest path from cx to cy. We assume that there are some underlying feature functions which measure the semantic distance from term cx to cy. A weighted combination of these functions approximates the ontology metric for (cx,cy): ∑ = ) , ( ) , ( y x j j j c c h w y x d where j w is the jth weight for ) , ( y x j c c h , the jth feature function. The feature functions are generated as mentioned in Section 3. 5 Experiments 5.1 Data The gold standards used in the evaluation are hypernym taxonomies extracted from WordNet and ODP (Open Directory Project), and meronym taxonomies extracted from WordNet. In WordNet taxonomy extraction, we only use the word senses within a particular taxonomy to ensure no ambiguity. In ODP taxonomy extraction, we parse the topic lines, such as “Topic r:id=`Top/Arts/Movies’”, in the XML databases to obtain relations, such as is_a(movies, arts). In total, there are 100 hypernym taxonomies, 50 each extracted from WordNet3 and ODP4, and 50 meronym taxonomies from WordNet5. Table 2 3 WordNet hypernym taxonomies are from 12 topics: gathering, professional, people, building, place, milk, meal, water, beverage, alcohol, dish, and herb. 4 ODP hypernym taxonomies are from 16 topics: computers, robotics, intranet, mobile computing, database, operating system, linux, tex, software, computer science, data communication, algorithms, data formats, security multimedia, and artificial intelligence. 5 WordNet meronym taxonomies are from 15 topics: bed, car, building, lamp, earth, television, body, drama, theatre, water, airplane, piano, book, computer, and watch. Statistics WN/is-a ODP/is-a WN/part-of #taxonomies 50 50 50 #terms 1,964 2,210 1,812 Avg #terms 39 44 37 Avg depth 6 6 5 Table 2. Data Statistics. 276 summarizes the data statistics. We also use two Web-based auxiliary datasets to generate features mentioned in Section 3: • Wikipedia corpus. The entire Wikipedia corpus is downloaded and indexed by Indri6. The top 100 documents returned by Indri are the global context of a term when querying with the term. • Google corpus. A collection of the top 1000 documents by querying Google using each term, and each term pair. Each top 1000 documents are the global context of a query term. Both corpora are split into sentences and are used to generate contextual, co-occurrence, syntactic dependency and lexico-syntactic pattern features. 5.2 Methodology We evaluate the quality of automatic generated taxonomies by comparing them with the gold standards in terms of precision, recall and F1measure. 
F1-measure is calculated as 2*P*R/ (P+R), where P is precision, the percentage of correctly returned relations out of the total returned relations, R is recall, the percentage of correctly returned relations out of the total relations in the gold standard. Leave-one-out cross validation is used to average the system performance across different training and test datasets. For each 50 datasets from WordNet hypernyms, WordNet meronyms or ODP hypernyms, we randomly pick 49 of them to generate training data, and test on the remaining dataset. We repeat the process for 50 times, with different training and test sets at each 6 http://www.lemurproject.org/indri/. time, and report the averaged precision, recall and F1-measure across all 50 runs. We also group the fifteen features in Section 3 into six sets: contextual, co-concurrence, patterns, syntactic dependency, word length difference and definition. Each set is turned on one by one for experiments in Section 5.4 and 5.5. 5.3 Performance of Taxonomy Induction In this section, we compare the following automatic taxonomy induction systems: HE, the system by Hearst (1992) with 6 hypernym patterns; GI, the system by Girju et al. (2003) with 3 meronym patterns; PR, the probabilistic framework by Snow et al. (2006); and ME, the metric-based framework proposed in this paper. To have a fair comparison, for PR, we estimate the conditional probability of a relation given the evidence P(Rij|Eij), as in (Snow et al. 2006), by using the same set of features as in ME. Table 3 shows precision, recall, and F1measure of each system for WordNet hypernyms (is-a), WordNet meronyms (part-of) and ODP hypernyms (is-a). Bold font indicates the best performance in a column. Note that HE is not applicable to part-of, so is GI to is-a. Table 3 shows that systems using heterogeneous features (PR and ME) achieve higher F1measure than systems only using patterns (HE and GI) with a significant absolute gain of >30%. Generally speaking, pattern-based systems show higher precision and lower recall, while systems using heterogeneous features show lower precision and higher recall. However, when considering both precision and recall, using heterogeneous features is more effective than just using patterns. The proposed system ME consistently produces the best F1-measure for all three tasks. The performance of the systems for ODP/is-a is worse than that for WordNet/is-a. This may be because there is more noise in ODP than in WordNet/is-a System Precision Recall F1-measure HE 0.85 0.32 0.46 GI n/a n/a n/a PR 0.75 0.73 0.74 ME 0.82 0.79 0.82 ODP/is-a System Precision Recall F1-measure HE 0.31 0.29 0.30 GI n/a n/a n/a PR 0.60 0.72 0.65 ME 0.64 0.70 0.67 WordNet/part-of System Precision Recall F1-measure HE n/a n/a n/a GI 0.75 0.25 0.38 PR 0.68 0.52 0.59 ME 0.69 0.55 0.61 Table 3. System Performance. Feature is-a sibling partof Benefited Relations Contextual 0.21 0.42 0.12 sibling Co-occur. 0.48 0.41 0.28 All Patterns 0.46 0.41 0.30 All Syntactic 0.22 0.36 0.12 sibling Word Leng. 0.16 0.16 0.15 All but limited Definition 0.12 0.18 0.10 Sibling but limited Best Features Cooccur., patterns Contextual, co-occur., patterns Cooccur., patterns Table 4. F1-measure for Features vs. Relations: WordNet. 277 WordNet. For example, under artificial intelligence, ODP has neural networks, natural language and academic departments. Clearly, academic departments is not a hyponym of artificial intelligence. The noise in ODP interferes with the learning process, thus hurts the performance. 
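For concreteness, the relation-level precision, recall and F1-measure used in these comparisons can be computed as in the minimal sketch below. Representing each taxonomy by its set of (child, parent) relation pairs is our assumption about the bookkeeping, not a detail specified in the paper, and the toy relations are invented.

```python
def taxonomy_prf1(gold_relations, predicted_relations):
    """Precision, recall and F1 over two sets of (child, parent) relation pairs."""
    gold, pred = set(gold_relations), set(predicted_relations)
    correct = len(gold & pred)                       # relations found in both sets
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1

# Invented is_a relations in the ODP style of Section 5.1.
gold = {("movies", "arts"), ("linux", "operating system"), ("tex", "software")}
pred = {("movies", "arts"), ("linux", "software"), ("tex", "software")}
print(taxonomy_prf1(gold, pred))   # (0.667, 0.667, 0.667), up to rounding
```

Leave-one-out cross validation then simply averages these three scores over the 50 held-out taxonomies.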
5.4 Features vs. Relations This section studies the impact of different sets of features on different types of relations. Table 4 shows F1-measure of using each set of features alone on taxonomy induction for WordNet is-a, sibling, and part-of relations. Bold font means a feature set gives a major contribution to the task of automatic taxonomy induction for a particular type of relation. Table 4 shows that different relations favor different sets of features. Both co-occurrence and lexico-syntactic patterns work well for all three types of relations. It is interesting to see that simple co-occurrence statistics work as good as lexico-syntactic patterns. Contextual features work well for sibling relations, but not for is-a and part-of. Syntactic features also work well for sibling, but not for is-a and part-of. The similar behavior of contextual and syntactic features may be because that four out of five syntactic features (Modifier, Subject, Object, and Verb overlaps) are just surrounding context for a term. Comparing the is-a and part-of columns in Table 4 and the ME rows in Table 3, we notice a significant difference in F1-measure. It indicates that combination of heterogeneous features gives more rise to the system performance than a single set of features does. 5.5 Features vs. Abstractness This section studies the impact of different sets of features on terms at different abstraction levels. In the experiments, F1-measure is evaluated for terms at each level of a taxonomy, not the whole taxonomy. Table 5 and 6 demonstrate F1measure of using each set of features alone on each abstraction levels. Columns 2-6 are indices of the levels in a taxonomy. The larger the indices are, the lower the levels. Higher levels contain abstract terms, while lower levels contain concrete terms. L1 is ignored here since it only contains a single term, the root. Bold font indicates good performance in a column. Both tables show that abstract terms and concrete terms favor different sets of features. In particular, contextual, co-occurrence, pattern, and syntactic features work well for terms at L4L6, i.e., concrete terms; co-occurrence works well for terms at L2-L3, i.e., abstract terms. This difference indicates that terms at different abstraction levels have different characteristics; it confirms our abstractness assumption in Section 4.1. We also observe that for abstract terms in WordNet, patterns work better than contextual features; while for abstract terms in ODP, the conclusion is the opposite. This may be because that WordNet has a richer vocabulary and a more rigid definition of hypernyms, and hence is-a relations in WordNet are recognized more effectively by using lexico-syntactic patterns; while ODP contains more noise, and hence it favors features requiring less rigidity, such as the contextual features generated from the Web. 6 Conclusions This paper presents a novel metric-based taxonomy induction framework combining the strengths of lexico-syntactic patterns and clustering. The framework incrementally clusters terms and transforms automatic taxonomy induction into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. The experiments show that our framework is effective; it achieves higher F1measure than three state-of-the-art systems. The paper also studies which features are the best for different types of relations and for terms at different abstraction levels. 
Most prior work uses a single rule or feature function for automatic taxonomy induction at all levels of abstraction. Our work is a more general framework which allows a wider range of features and different metric functions at different abstraction levels. This more general framework has the potential to learn more complex taxonomies than previous approaches. Acknowledgements This research was supported by NSF grant IIS0704210. Any opinions, findings, conclusions, or recommendations expressed in this paper are of the authors, and do not necessarily reflect those of the sponsor. Feature L2 L3 L4 L5 L6 Contextual 0.29 0.31 0.35 0.36 0.36 Co-occurrence 0.47 0.56 0.45 0.41 0.41 Patterns 0.47 0.44 0.42 0.39 0.40 Syntactic 0.31 0.28 0.36 0.38 0.39 Word Length 0.16 0.16 0.16 0.16 0.16 Definition 0.12 0.12 0.12 0.12 0.12 Table 5. F1-measure for Features vs. Abstractness: WordNet/is-a. Feature L2 L3 L4 L5 L6 Contextual 0.30 0.30 0.33 0.29 0.29 Co-occurrence 0.34 0.36 0.34 0.31 0.31 Patterns 0.23 0.25 0.30 0.28 0.28 Syntactic 0.18 0.18 0.23 0.27 0.27 Word Length 0.15 0.15 0.15 0.14 0.14 Definition 0.13 0.13 0.13 0.12 0.12 Table 6. F1-measure for Features vs. Abstractness: ODP/is-a. 278 References M. Berland and E. Charniak. 1999. Finding parts in very large corpora. ACL’99. S. Boyd and L. Vandenberghe. 2004. Convex optimization. In Cambridge University Press, 2004. P. Brown, V. D. Pietra, P. deSouza, J. Lai, and R. Mercer. 1992. Class-based ngram models for natural language. Computational Linguistics, 18(4):468–479. P. Buitelaar, P. Cimiano, and B. Magnini. 2005. Ontology Learning from Text: Methods, Evaluation and Applications. Volume 123 Frontiers in Artificial Intelligence and Applications. R. Bunescu and R. Mooney. 2007. Learning to Extract Relations from the Web using Minimal Supervision. ACL’07. S. Caraballo. 1999. Automatic construction of a hypernymlabeled noun hierarchy from text. ACL’99. T. Chklovski and P. Pantel. 2004. VerbOcean: mining the web for fine-grained semantic verb relations. EMNLP ’04. P. Cimiano and J. Volker. 2005. Towards large-scale, opendomain and ontology-based named entity classification. RANLP’07. P. Cimiano and J. Wenderoth. 2007. Automatic Acquisition of Ranked Qualia Structures from the Web. ACL’07. D. Davidov and A. Rappoport. 2006. Efficient Unsupervised Discovery of Word Categories Using Symmetric Patterns and High Frequency Words. ACL’06. D. Davidov and A. Rappoport. 2008. Classification of Semantic Relationships between Nominals Using Pattern Clusters. ACL’08. D. Downey, O. Etzioni, and S. Soderland. 2005. A Probabilistic model of redundancy in information extraction. IJCAI’05. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91–134. C. Fellbuam. 1998. WordNet: An Electronic Lexical Database. MIT Press. 1998. M. Geffet and I. Dagan. 2005. The Distributional Inclusion Hypotheses and Lexical Entailment. ACL’05. R. Girju, A. Badulescu, and D. Moldovan. 2003. Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations. HLT’03. R. Girju, A. Badulescu, and D. Moldovan. 2006. Automatic Discovery of Part-Whole Relations. Computational Linguistics, 32(1): 83-135. Z. Harris. 1985. Distributional structure. In Word, 10(23): 146-162s, 1954. T. Hastie, R. Tibshirani and J. Friedman. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001. M. 
Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. COLING’92. M. D. Hendy and D. Penny. 1982. Branch and bound algorithms to determine minimal evolutionary trees. Mathematical Biosciences 59: 277-290. Z. Kozareva, E. Riloff, and E. Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. ACL’08. D. Lin, 1998. Automatic retrieval and clustering of similar words. COLING’98. D. Lin, S. Zhao, L. Qin, and M. Zhou. 2003. Identifying Synonyms among Distributionally Similar Words. IJCAI’03. G. S. Mann. 2002. Fine-Grained Proper Noun Ontologies for Question Answering. In Proceedings of SemaNet’ 02: Building and Using Semantic Networks, Taipei. P. Pantel and D Lin. 2002. Discovering word senses from text. SIGKDD’02. P. Pantel and D. Ravichandran. 2004. Automatically labeling semantic classes. HLT/NAACL’04. P. Pantel, D. Ravichandran, and E. Hovy. 2004. Towards terascale knowledge acquisition. COLING’04. P. Pantel and M. Pennacchiotti. 2006. Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations. ACL’06. F. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of English words. ACL’93. D. Ravichandran and E. Hovy. 2002. Learning surface text patterns for a question answering system. ACL’02. E. Riloff and J. Shepherd. 1997. A corpus-based approach for building semantic lexicons. EMNLP’97. B. Roark and E. Charniak. 1998. Noun-phrase cooccurrence statistics for semi-automatic semantic lexicon construction. ACL/COLING’98. R. Snow, D. Jurafsky, and A. Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. NIPS’05. R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic Taxonomy Induction from Heterogeneous Evidence. ACL’06. B. Rosenfeld and R. Feldman. 2007. Clustering for unsupervised relation identification. CIKM’07. P. Turney, M. Littman, J. Bigham, and V. Shnayder. 2003. Combining independent modules to solve multiplechoice synonym and analogy problems. RANLP’03. S. M. Harabagiu, S. J. Maiorano and M. A. Pasca. 2003. Open-Domain Textual Question Answering Techniques. Natural Language Engineering 9 (3): 1-38, 2003. I. Szpektor, H. Tanev, I. Dagan, and B. Coppola. 2004. Scaling web-based acquisition of entailment relations. EMNLP’04. D. Widdows and B. Dorow. 2002. A graph model for unsupervised Lexical acquisition. COLING ’02. H. Yang and J. Callan. 2008. Learning the Distance Metric in a Personal Ontology. Workshop on Ontologies and Information Systems for the Semantic Web of CIKM’08. 279
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 280–287, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Learning with Annotation Noise Eyal Beigman Olin Business School Washington University in St. Louis [email protected] Beata Beigman Klebanov Kellogg School of Management Northwestern University [email protected] Abstract It is usually assumed that the kind of noise existing in annotated data is random classification noise. Yet there is evidence that differences between annotators are not always random attention slips but could result from different biases towards the classification categories, at least for the harder-to-decide cases. Under an annotation generation model that takes this into account, there is a hazard that some of the training instances are actually hard cases with unreliable annotations. We show that these are relatively unproblematic for an algorithm operating under the 0-1 loss model, whereas for the commonly used voted perceptron algorithm, hard training cases could result in incorrect prediction on the uncontroversial cases at test time. 1 Introduction It is assumed, often tacitly, that the kind of noise existing in human-annotated datasets used in computational linguistics is random classification noise (Kearns, 1993; Angluin and Laird, 1988), resulting from annotator attention slips randomly distributed across instances. For example, Osborne (2002) evaluates noise tolerance of shallow parsers, with random classification noise taken to be “crudely approximating annotation errors.” It has been shown, both theoretically and empirically, that this type of noise is tolerated well by the commonly used machine learning algorithms (Cohen, 1997; Blum et al., 1996; Osborne, 2002; Reidsma and Carletta, 2008). Yet this might be overly optimistic. Reidsma and op den Akker (2008) show that apparent differences between annotators are not random slips of attention but rather result from different biases annotators might have towards the classification categories. When training data comes from one annotator and test data from another, the first annotator’s biases are sometimes systematic enough for a machine learner to pick them up, with detrimental results for the algorithm’s performance on the test data. A small subset of doubly annotated data (for inter-annotator agreement check) and large chunks of singly annotated data (for training algorithms) is not uncommon in computational linguistics datasets; such a setup is prone to problems if annotators are differently biased.1 Annotator bias is consistent with a number of noise models. For example, it could be that an annotator’s bias is exercised on each and every instance, making his preferred category likelier for any instance than in another person’s annotations. Another possibility, recently explored by Beigman Klebanov and Beigman (2009), is that some items are really quite clear-cut for an annotator with any bias, belonging squarely within one particular category. However, some instances – termed hard cases therein – are harder to decide upon, and this is where various preferences and biases come into play. In a metaphor annotation study reported by Beigman Klebanov et al. (2008), certain markups received overwhelming annotator support when people were asked to validate annotations after a certain time delay. Other instances saw opinions split; moreover, Beigman Klebanov et al. (2008) observed cases where people retracted their own earlier annotations. 
To start accounting for such annotator behavior, Beigman Klebanov and Beigman (2009) proposed a model where instances are either easy, and then all annotators agree on them, or hard, and then each annotator flips his or her own coin to decide on a label (each annotator can have a different "coin" reflecting his or her biases).¹ For annotations generated under such a model, there is a danger of hard instances posing as easy – an observed agreement between annotators being a result of all coins coming up heads by chance. They therefore define the expected proportion of hard instances in agreed items as annotation noise. They provide an example from the literature where an annotation noise rate of about 15% is likely.

¹The different biases might not amount to much in the small doubly annotated subset, resulting in acceptable inter-annotator agreement; yet when enacted throughout a large number of instances they can be detrimental from a machine learner's perspective.

The question addressed in this article is: How problematic is learning from training data with annotation noise? Specifically, we are interested in estimating the degree to which performance on easy instances at test time can be hurt by the presence of hard instances in training data.

Definition 1 The hard case bias, τ, is the portion of easy instances in the test data that are misclassified as a result of hard instances in the training data.

This article proceeds as follows. First, we show that a machine learner operating under a 0-1 loss minimization principle could sustain a hard case bias of θ(1/√N) in the worst case. Thus, while annotation noise is hazardous for small datasets, it is better tolerated in larger ones. However, 0-1 loss minimization is computationally intractable for large datasets (Feldman et al., 2006; Guruswami and Raghavendra, 2006); substitute loss functions are often used in practice. While their tolerance to random classification noise is as good as for 0-1 loss, their tolerance to annotation noise is worse. For example, the perceptron family of algorithms handle random classification noise well (Cohen, 1997). We show in section 3.4 that the widely used Freund and Schapire (1999) voted perceptron algorithm could face a constant hard case bias when confronted with annotation noise in training data, irrespective of the size of the dataset. Finally, we discuss the implications of our findings for the practice of annotation studies and for data utilization in machine learning.

2 0-1 Loss

Let a sample be a sequence x1, . . . , xN drawn uniformly from the d-dimensional discrete cube I_d = {−1, 1}^d with corresponding labels y1, . . . , yN ∈ {−1, 1}. Suppose further that the learning algorithm operates by finding a hyperplane (w, ψ), w ∈ R^d, ψ ∈ R, that minimizes the empirical error L(w, ψ) = Σ_{j=1..N} [y_j − sgn(Σ_{i=1..d} x_j^i w_i − ψ)]². Let there be H hard cases, such that the annotation noise is γ = H/N.²

Theorem 1 In the worst case configuration of instances a hard case bias of τ = θ(1/√N) cannot be ruled out with constant confidence.

Idea of the proof: We prove by explicit construction of an adversarial case. Suppose there is a plane that perfectly separates the easy instances. The θ(N) hard instances will be concentrated in a band parallel to the separating plane, that is near enough to the plane so as to trap only about θ(√N) easy instances between the plane and the band (see figure 1 for an illustration).
For a random labeling of the hard instances, the central limit theorem shows there is positive probability that there would be an imbalance between +1 and −1 labels in favor of −1s on the scale of √ N, which, with appropriate constants, would lead to the movement of the empirically minimal separation plane to the right of the hard case band, misclassifying the trapped easy cases. Proof: Let v = v(x) = P i=1...d xi denote the sum of the coordinates of an instance in Id and take λe = √ d · F −1(√γ · 2−d 2 + 1 2) and λh = √ d · F −1(γ + √γ · 2−d 2 + 1 2), where F(t) is the cumulative distribution function of the normal distribution. Suppose further that instances xj such that λe < vj < λh are all and only hard instances; their labels are coinflips. All other instances are easy, and labeled y = y(x) = sgn(v). In this case, the hyperplane 1 √ d(1 . . . 1) is the true separation plane for the easy instances, with ψ = 0. Figure 1 shows this configuration. According to the central limit theorem, for d, N large, the distribution of v is well approximated by N(0, √ d). If N = c1 · 2d, for some 0 < c1 < 4, the second application of the central limit theorem ensures that, with high probability, about γN = c1γ2d items would fall between λe and λh (all hard), and √γ · 2−d 2 N = c1 p γ2d would fall between 0 and λe (all easy, all labeled +1). Let Z be the sum of labels of the hard cases, Z = P i=1...H yi. Applying the central limit theorem a third time, for large N, Z will, with a high probability, be distributed approximately as 2In Beigman Klebanov and Beigman (2009), annotation noise is defined as percentage of hard instances in the agreed annotations; this implies noise measurement on multiply annotated material. When there is just one annotator, no distinction between easy vs hard instances can be made; in this sense, all hard instances are posing as easy. 281 0 λe λh Figure 1: The adversarial case for 0-1 loss. Squares correspond to easy instances, circles – to hard ones. Filled squares and circles are labeled −1, empty ones are labeled +1. N(0, √γN). This implies that a value as low as −2σ cannot be ruled out with high (say 95%) confidence. Thus, an imbalance of up to 2√γN, or of 2 p c1γ2d, in favor of −1s is possible. There are between 0 and λh about 2√c1 p γ2d more −1 hard instances than +1 hard instances, as opposed to c1 p γ2d easy instances that are all +1. As long as c1 < 2√c1, i.e. c1 < 4, the empirically minimal threshold would move to λh, resulting in a hard case bias of τ = √γ√ c12d (1−γ)·c12d = θ( 1 √ N ). To see that this is the worst case scenario, we note that 0-1 loss sustained on θ(N) hard cases is the order of magnitude of the possible imbalance between −1 and +1 random labels, which is θ( √ N). For hard case loss to outweigh the loss on the misclassified easy instances, there cannot be more than θ( √ N) of the latter 2 Note that the proof requires that N = θ(2d) namely, that asymptotically the sample includes a fixed portion of the instances. If the sample is asymptotically smaller, then λe will have to be adjusted such that λe = √ d · F −1(θ( 1 √ N ) + 1 2). According to theorem 1, for a 10K dataset with 15% hard case rate, a hard case bias of about 1% cannot be ruled out with 95% confidence. 
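The flavour of this construction can be reproduced with a small simulation. The sketch below is a toy one-dimensional version, not the authors' construction on the discrete cube: instances are replaced by their projections v on the true separator, the band parameters k and gamma are chosen ad hoc, and the bias is measured on the sample itself. It finds the threshold that minimises empirical 0-1 loss and reports the fraction of easy instances that threshold misclassifies; occasionally the coin flips in the hard band favour −1 strongly enough to push the threshold past the trapped easy points, giving a bias of order k/√n, consistent with the θ(1/√N) worst case of Theorem 1.

```python
import numpy as np

def hard_case_bias_trial(n=10000, gamma=0.15, k=0.5, rng=None):
    """One trial: fraction of easy instances misclassified by the threshold
    that minimises empirical 0-1 loss (a 1-d stand-in for the Theorem 1 setting)."""
    rng = rng if rng is not None else np.random.default_rng()
    v = np.sort(rng.standard_normal(n))                   # projections on the true separator
    lam_e = np.quantile(v, 0.5 + k / np.sqrt(n))          # traps ~k*sqrt(n) easy +1 points
    lam_h = np.quantile(v, 0.5 + k / np.sqrt(n) + gamma)
    hard = (v > lam_e) & (v <= lam_h)                     # ~gamma*n hard instances
    y = np.where(v > 0, 1, -1)                            # easy labels follow sign(v)
    y[hard] = rng.choice([-1, 1], size=int(hard.sum()))   # hard labels are coin flips
    # errors[i] = 0-1 loss of the rule "predict +1 iff rank >= i", for i = 0..n
    pos_left = np.cumsum(y == 1)                          # +1s left of the threshold (errors)
    neg_right = np.sum(y == -1) - np.cumsum(y == -1)      # -1s right of the threshold (errors)
    errors = np.concatenate(([np.sum(y == -1)], pos_left + neg_right))
    i_star = int(np.argmin(errors))                       # empirically optimal threshold
    pred = np.where(np.arange(n) >= i_star, 1, -1)
    easy = ~hard
    return float(np.mean(pred[easy] != y[easy]))

biases = [hard_case_bias_trial(rng=np.random.default_rng(s)) for s in range(200)]
print(max(biases), sum(biases) / len(biases))   # occasional shifts of order k/sqrt(n), often zero
```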
Theorem 1 suggests that annotation noise as defined here is qualitatively different from more malicious types of noise analyzed in the agnostic learning framework (Kearns and Li, 1988; Haussler, 1992; Kearns et al., 1994), where an adversary can not only choose the placement of the hard cases, but also their labels. In worst case, the 0-1 loss model would sustain a constant rate of error due to malicious noise, whereas annotation noise is tolerated quite well in large datasets. 3 Voted Perceptron Freund and Schapire (1999) describe the voted perceptron. This algorithm and its many variants are widely used in the computational linguistics community (Collins, 2002a; Collins and Duffy, 2002; Collins, 2002b; Collins and Roark, 2004; Henderson and Titov, 2005; Viola and Narasimhan, 2005; Cohen et al., 2004; Carreras et al., 2005; Shen and Joshi, 2005; Ciaramita and Johnson, 2003). In this section, we show that the voted perceptron can be vulnerable to annotation noise. The algorithm is shown below. Algorithm 1 Voted Perceptron Training Input: a labeled training set (x1, y1), . . . , (xN, yN) Output: a list of perceptrons w1, . . . , wN Initialize: t ←0; w1 ←0; ψ1 ←0 for t = 1 . . . N do ˆyt ←sign(⟨wt, xt⟩+ ψt) wt+1 ←wt + yt−ˆyt 2 · xt ψt+1 ←ψt + yt−ˆyt 2 · ⟨wt, xt⟩ end for Forecasting Input: a list of perceptrons w1, . . . , wN an unlabeled instance x Output: A forecasted label y ˆy ←PN t=1 sign(⟨wt, xt⟩+ ψt) y ←sign(ˆy) The voted perceptron algorithm is a refinement of the perceptron algorithm (Rosenblatt, 1962; Minsky and Papert, 1969). Perceptron is a dynamic algorithm; starting with an initial hyperplane w0, it passes repeatedly through the labeled sample. Whenever an instance is misclassified by wt, the hyperplane is modified to adapt to the instance. The algorithm terminates once it has passed through the sample without making any classification mistakes. The algorithm terminates iff the sample can be separated by a hyperplane, and in this case the algorithm finds a separating hyperplane. Novikoff (1962) gives a bound on the number of iterations the algorithm goes through before termination, when the sample is separable by a margin. 282 The perceptron algorithm is vulnerable to noise, as even a little noise could make the sample inseparable. In this case the algorithm would cycle indefinitely never meeting termination conditions, wt would obtain values within a certain dynamic range but would not converge. In such setting, imposing a stopping time would be equivalent to drawing a random vector from the dynamic range. Freund and Schapire (1999) extend the perceptron to inseparable samples with their voted perceptron algorithm and give theoretical generalization bounds for its performance. The basic idea underlying the algorithm is that if the dynamic range of the perceptron is not too large then wt would classify most instances correctly most of the time (for most values of t). Thus, for a sample x1, . . . , xN the new algorithm would keep track of w0, . . . , wN, and for an unlabeled instance x it would forecast the classification most prominent amongst these hyperplanes. The bounds given by Freund and Schapire (1999) depend on the hinge loss of the dataset. In section 3.2 we construct a difficult setting for this algorithm. To prove that voted perceptron would suffer from a constant hard case bias in this setting using the exact dynamics of the perceptron is beyond the scope of this article. 
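For reference, a minimal runnable transcription of Algorithm 1 is given below. Two details are our assumptions rather than the paper's: ties at sign(0) are broken towards +1, and the x_t printed in the forecasting step is read as the test instance x.

```python
import numpy as np

def train_voted_perceptron(X, y):
    """Transcription of Algorithm 1: returns the perceptrons (w_t, psi_t), t = 1..N,
    that later cast votes at forecasting time."""
    n, d = X.shape
    w, psi = np.zeros(d), 0.0
    history = []
    for t in range(n):
        history.append((w.copy(), psi))        # the (w_t, psi_t) used on example t
        y_hat = 1.0 if X[t] @ w + psi >= 0 else -1.0
        delta = (y[t] - y_hat) / 2.0           # +1 or -1 on a mistake, 0 otherwise
        psi = psi + delta * (X[t] @ w)         # offset update as printed, using the old w
        w = w + delta * X[t]
    return history

def voted_forecast(history, x):
    """Each stored perceptron votes sign(<w_t, x> + psi_t); the majority label wins."""
    votes = sum(1.0 if x @ w + psi >= 0 else -1.0 for w, psi in history)
    return 1.0 if votes >= 0 else -1.0

# Toy usage on the {-1, +1}^d cube of Section 2.
rng = np.random.default_rng(0)
X = rng.choice([-1.0, 1.0], size=(200, 10))
y = np.sign(X.sum(axis=1) + 0.5)               # an easy, linearly separable labelling
perceptrons = train_voted_perceptron(X, y)
print(voted_forecast(perceptrons, X[0]), y[0])
```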
Instead, in section 3.3 we provide a lower bound on the hinge loss for a simplified model of the perceptron algorithm dynamics, which we argue would be a good approximation to the true dynamics in the setting we constructed. For this simplified model, we show that the hinge loss is large, and the bounds in Freund and Schapire (1999) cannot rule out a constant level of error regardless of the size of the dataset. In section 3.4 we study the dynamics of the model and prove that τ = θ(1) for the adversarial setting. 3.1 Hinge Loss Definition 2 The hinge loss of a labeled instance (x, y) with respect to hyperplane (w, ψ) and margin δ > 0 is given by ζ = ζ(ψ, δ) = max(0, δ − y · (⟨w, x⟩−ψ)). ζ measures the distance of an instance from being classified correctly with a δ margin. Figure 2 shows examples of hinge loss for various data points. Theorem 2 (Freund and Schapire (1999)) After one pass on the sample, the probability that the voted perceptron algorithm does not δ ζ ζ ζ ζ ζ ζ Figure 2: Hinge loss ζ for various data points incurred by the separator with margin δ. predict correctly the label of a test instance xN+1 is bounded by 2 N+1EN+1 d+D δ 2 where D = D(w, ψ, δ) = qPN i=1 ζ2 i . This result is used to explain the convergence of weighted or voted perceptron algorithms (Collins, 2002a). It is useful as long as the expected value of D is not too large. We show that in an adversarial setting of the annotation noise D is large, hence these bounds are trivial. 3.2 Adversarial Annotation Noise Let a sample be a sequence x1, . . . , xN drawn uniformly from Id with y1, . . . , yN ∈{−1, 1}. Easy cases are labeled y = y(x) = sgn(v) as before, with v = v(x) = P i=1...d xi. The true separation plane for the easy instances is w∗= 1 √ d(1 . . . 1), ψ∗= 0. Suppose hard cases are those where v(x) > c1 √ d, where c1 is chosen so that the hard instances account for γN of all instances.3 Figure 3 shows this setting. 3.3 Lower Bound on Hinge Loss In the simplified case, we assume that the algorithm starts training with the hyperplane w0 = w∗= 1 √ d(1 . . . 1), and keeps it throughout the training, only updating ψ. In reality, each hard instance can be decomposed into a component that is parallel to w∗, and a component that is orthogonal to it. The expected contribution of the orthogonal 3See the proof of 0-1 case for a similar construction using the central limit theorem. 283 0 c1√d Figure 3: An adversarial case of annotation noise for the voted perceptron algorithm. component to the algorithm’s update will be positive due to the systematic positioning of the hard cases, while the contributions of the parallel components are expected to cancel out due to the symmetry of the hard cases around the main diagonal that is orthogonal to w∗. Thus, while wt will not necessarily parallel w∗, it will be close to parallel for most t > 0. The simplified case is thus a good approximation of the real case, and the bound we obtain is expected to hold for the real case as well. For any initial value ψ0 < 0 all misclassified instances are labeled −1 and classified as +1, hence the update will increase ψ0, and reach 0 soon enough. We can therefore assume that ψt ≥0 for any t > t0 where t0 ≪N. Lemma 3 For any t > t0, there exist α = α(γ, T) > 0 such that E(ζ2) ≥α · δ. Proof: For ψ ≥0 there are two main sources of hinge loss: easy +1 instances that are classified as −1, and hard -1 instances classified as +1. 
These correspond to the two components of the following sum (the inequality is due to disregarding the loss incurred by a correct classification with too wide a margin): E(ζ2) ≥ [ψ] X l=0 1 2d d l  ( ψ √ d − l √ d + δ)2 +1 2 d X l=c1 √ d 1 2d d l  ( l √ d −ψ √ d + δ)2 Let 0 < T < c1 be a parameter. For ψ > T √ d, misclassified easy instances dominate the loss: E(ζ2) ≥ [ψ] X l=0 1 2d d l  ( ψ √ d − l √ d + δ)2 ≥ [T √ d] X l=0 1 2d d l  (T √ d √ d − l √ d + δ)2 ≥ T √ d X l=0 1 2d d l  (T − l √ d + δ)2 ≥ 1 √ 2π Z T 0 (T + δ −t)2e−t2/2dt = HT (δ) The last inequality follows from a normal approximation of the binomial distribution (see, for example, Feller (1968)). For 0 ≤ψ ≤T √ d, misclassified hard cases dominate: E(ζ2) ≥ 1 2 d X l=c1 √ d 1 2d d l  ( l √ d −ψ √ d + δ)2 ≥ 1 2 d X l=c1 √ d 1 2d d l  ( l √ d −T √ d √ d + δ)2 ≥ 1 2 · 1 √ 2π Z ∞ Φ−1(γ) (t −T + δ)2e−t2/2dt = Hγ(δ) where Φ−1(γ) is the inverse of the normal distribution density. Thus E(ζ2) ≥ min{HT (δ), Hγ(δ)}, and there exists α = α(γ, T) > 0 such that min{HT (δ), Hγ(δ)} ≥α · δ 2 Corollary 4 The bound in theorem 2 does not converge to zero for large N. We recall that Freund and Schapire (1999) bound is proportional to D2 = PN i=1 ζ2 i . It follows from lemma 3 that D2 = θ(N), hence the bound is ineffective. 3.4 Lower Bound on τ for Voted Perceptron Under Simplified Dynamics Corollary 4 does not give an estimate on the hard case bias. Indeed, it could be that wt = w∗for almost every t. There would still be significant hinge in this case, but the hard case bias for the voted forecast would be zero. To assess the hard case bias we need a model of perceptron dynamics that would account for the history of hyperplanes w0, . . . , wN the perceptron goes through on 284 a sample x1, . . . , xN. The key simplification in our model is assuming that wt parallels w∗for all t, hence the next hyperplane depends only on the offset ψt. This is a one dimensional Markov random walk governed by the distribution P(ψt+1−ψt = r|ψt) = P(x|yt −ˆyt 2 ·⟨w∗, x⟩= r) In general −d ≤ψt ≤d but as mentioned before lemma 3, we may assume ψt > 0. Lemma 5 There exists c > 0 such that with a high probability ψt > c · √ d for most 0 ≤t ≤N. Proof: Let c0 = F −1(γ 2 + 1 2); c1 = F −1(1−γ). We designate the intervals I0 = [0, c0 · √ d]; I1 = [c0 · √ d, c1 · √ d] and I2 = [c1 · √ d, d] and define Ai = {x : v(x) ∈Ii} for i = 0, 1, 2. Note that the constants c0 and c1 are chosen so that P(A0) = γ 2 and P(A2) = γ. It follows from the construction in section 3.2 that A0 and A1 are easy instances and A2 are hard. Given a sample x1, . . . , xN, a misclassification of xt ∈A0 by ψt could only happen when an easy +1 instance is classified as −1. Thus the algorithm would shift ψt to the left by no more than |vt −ψt| since vt = ⟨w∗, xt⟩. This shows that ψt ∈I0 implies ψt+1 ∈I0. In the same manner, it is easy to verify that if ψt ∈Ij and xt ∈Ak then ψt+1 ∈Ik, unless j = 0 and k = 1, in which case ψt+1 ∈I0 because xt ∈A1 would be classified correctly by ψt ∈I0. We construct a Markov chain with three states a0 = 0, a1 = c0 · √ d and a2 = c1 · √ d governed by the following transition distribution:     1 −γ 2 0 γ 2 γ 2 1 −γ γ 2 γ 2 1 2 −3γ 2 1 2 + γ     Let Xt be the state at time t. The principal eigenvector of the transition matrix (1 3, 1 3, 1 3) gives the stationary probability distribution of Xt. Thus Xt ∈{a1, a2} with probability 2 3. 
Since the transition distribution of Xt mirrors that of ψt, and since aj are at the leftmost borders of Ij, respectively, it follows that Xt ≤ψt for all t, thus Xt ∈{a1, a2} implies ψt ∈I1∪I2. It follows that ψt > c0 · √ d with probability 2 3, and the lemma follows from the law of large numbers 2 Corollary 6 With high probability τ = θ(1). Proof: Lemma 5 shows that for a sample x1, . . . , xN with high probability ψt is most of the time to the right of c · √ d. Consequently for any x in the band 0 ≤v ≤c · √ d we get sign(⟨w∗, x⟩+ψt) = −1 for most t hence by definition, the voted perceptron would classify such an instance as −1, although it is in fact a +1 easy instance. Since there are θ(N) misclassified easy instances, τ = θ(1) 2 4 Discussion In this article we show that training with annotation noise can be detrimental for test-time results on easy, uncontroversial instances; we termed this phenomenon hard case bias. Although under the 0-1 loss model annotation noise can be tolerated for larger datasets (theorem 1), minimizing such loss becomes intractable for larger datasets. Freund and Schapire (1999) voted perceptron algorithm and its variants are widely used in computational linguistics practice; our results show that it could suffer a constant rate of hard case bias irrespective of the size of the dataset (section 3.4). How can hard case bias be reduced? One possibility is removing as many hard cases as one can not only from the test data, as suggested in Beigman Klebanov and Beigman (2009), but from the training data as well. Adding the second annotator is expected to detect about half the hard cases, as they would surface as disagreements between the annotators. Subsequently, a machine learner can be told to ignore those cases during training, reducing the risk of hard case bias. While this is certainly a daunting task, it is possible that for annotation studies that do not require expert annotators and extensive annotator training, the newly available access to a large pool of inexpensive annotators, such as the Amazon Mechanical Turk scheme (Snow et al., 2008),4 or embedding the task in an online game played by volunteers (Poesio et al., 2008; von Ahn, 2006) could provide some solutions. Reidsma and op den Akker (2008) suggest a different option. When non-overlapping parts of the dataset are annotated by different annotators, each classifier can be trained to reflect the opinion (albeit biased) of a specific annotator, using different parts of the datasets. Such “subjective machines” can be applied to a new set of data; an item that causes disagreement between classifiers is then extrapolated to be a case of potential disagreement between the humans they replicate, i.e. 4http://aws.amazon.com/mturk/ 285 a hard case. Our results suggest that, regardless of the success of such an extrapolation scheme in detecting hard cases, it could erroneously invalidate easy cases: Each classifier would presumably suffer from a certain hard case bias, i.e. classify incorrectly things that are in fact uncontroversial for any human annotator. If each such classifier has a different hard case bias, some inter-classifier disagreements would occur on easy cases. Depending on the distribution of those easy cases in the feature space, this could invalidate valuable cases. 
If the situation depicted in figure 1 corresponds to the pattern learned by one of the classifiers, it would lead to marking the easy cases closest to the real separation boundary (those between 0 and λe) as hard, and hence unsuitable for learning, eliminating the most informative material from the training data. Reidsma and Carletta (2008) recently showed by simulation that different types of annotator behavior have different impact on the outcomes of machine learning from the annotated data. Our results provide a theoretical analysis that points in the same direction: While random classification noise is tolerable, other types of noise – such as annotation noise handled here – are more problematic. It is therefore important to develop models of annotator behavior and of the resulting imperfections of the annotated datasets, in order to diagnose the potential learning problem and suggest mitigation strategies. References Dana Angluin and Philip Laird. 1988. Learning from Noisy Examples. Machine Learning, 2(4):343–370. Beata Beigman Klebanov and Eyal Beigman. 2009. From Annotator Agreement to Noise Models. Computational Linguistics, accepted for publication. Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing Disagreements. In COLING 2008 Workshop on Human Judgments in Computational Linguistics, pages 2–7, Manchester, UK. Avrim Blum, Alan Frieze, Ravi Kannan, and Santosh Vempala. 1996. A Polynomial-Time Algorithm for Learning Noisy Linear Threshold Functions. In Proceedings of the 37th Annual IEEE Symposium on Foundations of Computer Science, pages 330–338, Burlington, Vermont, USA. Xavier Carreras, Ll´uis M`arquez, and Jorge Castro. 2005. Filtering-Ranking Perceptron Learning for Partial Parsing. Machine Learning, 60(1):41–71. Massimiliano Ciaramita and Mark Johnson. 2003. Supersense Tagging of Unknown Nouns in WordNet. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 168–175, Sapporo, Japan. William Cohen, Vitor Carvalho, and Tom Mitchell. 2004. Learning to Classify Email into “Speech Acts”. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 309–316, Barcelona, Spain. Edith Cohen. 1997. Learning Noisy Perceptrons by a Perceptron in Polynomial Time. In Proceedings of the 38th Annual Symposium on Foundations of Computer Science, pages 514–523, Miami Beach, Florida, USA. Michael Collins and Nigel Duffy. 2002. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 263–370, Philadelphia, USA. Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 111–118, Barcelona, Spain. Michael Collins. 2002a. Discriminative Training Methods for Hidden Markov Hodels: Theory and Experiments with Perceptron Algorithms. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 1–8, Philadelphia, USA. Michael Collins. 2002b. Ranking Algorithms for Named Entity Extraction: Boosting and the Voted Perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 489–496, Philadelphia, USA. Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Ponnuswami. 2006. New Results for Learning Noisy Parities and Halfspaces. 
In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 563–574, Los Alamitos, CA, USA. William Feller. 1968. An Introduction to Probability Theory and Its Application, volume 1. Wiley, New York, 3rd edition. Yoav Freund and Robert Schapire. 1999. Large Margin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277–296. Venkatesan Guruswami and Prasad Raghavendra. 2006. Hardness of Learning Halfspaces with Noise. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science, pages 543– 552, Los Alamitos, CA, USA. 286 David Haussler. 1992. Decision Theoretic Generalizations of the PAC Model for Neural Net and other Learning Applications. Information and Computation, 100(1):78–150. James Henderson and Ivan Titov. 2005. Data-Defined Kernels for Parse Reranking Derived from Probabilistic Models. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 181–188, Ann Arbor, Michigan, USA. Michael Kearns and Ming Li. 1988. Learning in the Presence of Malicious Errors. In Proceedings of the 20th Annual ACM symposium on Theory of Computing, pages 267–280, Chicago, USA. Michael Kearns, Robert Schapire, and Linda Sellie. 1994. Toward Efficient Agnostic Learning. Machine Learning, 17(2):115–141. Michael Kearns. 1993. Efficient Noise-Tolerant Learning from Statistical Queries. In Proceedings of the 25th Annual ACM Symposium on Theory of Computing, pages 392–401, San Diego, CA, USA. Marvin Minsky and Seymour Papert. 1969. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, Mass. A. B. Novikoff. 1962. On convergence proofs on perceptrons. Symposium on the Mathematical Theory of Automata, 12:615–622. Miles Osborne. 2002. Shallow Parsing Using Noisy and Non-Stationary Training Material. Journal of Machine Learning Research, 2:695–719. Massimo Poesio, Udo Kruschwitz, and Chamberlain Jon. 2008. ANAWIKI: Creating Anaphorically Annotated Resources through Web Cooperation. In Proceedings of the 6th International Language Resources and Evaluation Conference, Marrakech, Morocco. Dennis Reidsma and Jean Carletta. 2008. Reliability measurement without limit. Computational Linguistics, 34(3):319–326. Dennis Reidsma and Rieks op den Akker. 2008. Exploiting Subjective Annotations. In COLING 2008 Workshop on Human Judgments in Computational Linguistics, pages 8–16, Manchester, UK. Frank Rosenblatt. 1962. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington, D.C. Libin Shen and Aravind Joshi. 2005. Incremental LTAG Parsing. In Proceedings of the Human Language Technology Conference and Empirical Methods in Natural Language Processing Conference, pages 811–818, Vancouver, British Columbia, Canada. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 254–263, Honolulu, Hawaii. Paul Viola and Mukund Narasimhan. 2005. Learning to Extract Information from Semi-Structured Text Using a Discriminative Context Free Grammar. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 330–337, Salvador, Brazil. Luis von Ahn. 2006. Games with a purpose. Computer, 39(6):92–94. 287
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 288–296, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Abstraction and Generalisation in Semantic Role Labels: PropBank, VerbNet or both? Paola Merlo Linguistics Department University of Geneva 5 Rue de Candolle, 1204 Geneva Switzerland [email protected] Lonneke Van Der Plas Linguistics Department University of Geneva 5 Rue de Candolle, 1204 Geneva Switzerland [email protected] Abstract Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describing the interface between grammar and meaning. In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic role instances, while PropBank better captures some of the structural constraints among roles. We conclude that these two resources should be used together, as they are complementary. 1 Introduction Most current approaches to language analysis assume that the structure of a sentence depends on the lexical semantics of the verb and of other predicates in the sentence. It is also assumed that only certain aspects of a sentence meaning are grammaticalised. Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. Capturing the nature and the number of semantic roles in a sentence is therefore fundamental to correctly describe the interface between grammar and meaning, and it is of paramount importance for all natural language processing (NLP) applications that attempt to extract meaning representations from analysed text, such as questionanswering systems or even machine translation. The role of theories of semantic role lists is to obtain a set of semantic roles that can apply to any argument of any verb, to provide an unambiguous identifier of the grammatical roles of the participants in the event described by the sentence (Dowty, 1991). Starting from the first proposals (Gruber, 1965; Fillmore, 1968; Jackendoff, 1972), several approaches have been put forth, ranging from a combination of very few roles to lists of very fine-grained specificity. (See Levin and Rappaport Hovav (2005) for an exhaustive review). In NLP, several proposals have been put forth in recent years and adopted in the annotation of large samples of text (Baker et al., 1998; Palmer et al., 2005; Kipper, 2005; Loper et al., 2007). The annotated PropBank corpus, and therefore implicitly its role labels inventory, has been largely adopted in NLP because of its exhaustiveness and because it is coupled with syntactic annotation, properties that make it very attractive for the automatic learning of these roles and their further applications to NLP tasks. However, the labelling choices made by PropBank have recently come under scrutiny (Zapirain et al., 2008; Loper et al., 2007; Yi et al., 2007). The annotation of PropBank labels has been conceived in a two-tiered fashion. A first tier assigns abstract labels such as ARG0 or ARG1, while a separate annotation records the secondtier, verb-sense specific meaning of these labels. 
Labels ARG0 or ARG1 are assigned to the most prominent argument in the sentence (ARG1 for unaccusative verbs and ARG0 for all other verbs). The other labels are assigned in the order of prominence. So, while the same high-level labels are used across verbs, they could have different meanings for different verb senses. Researchers have usually concentrated on the high-level annotation, but as indicated in Yi et al. (2007), there is reason to think that these labels do not generalise across verbs, nor to unseen verbs or to novel verb 288 senses. Because the meaning of the role annotation is verb-specific, there is also reason to think that it fragments the data and creates data sparseness, making automatic learning from examples more difficult. These short-comings are more apparent in the annotation of less prominent and less frequent roles, marked by the ARG2 to ARG5 labels. Zapirain et al. (2008), Loper et al. (2007) and Yi et al. (2007) investigated the ability of the PropBank role inventory to generalise compared to the annotation in another semantic role list, proposed in the electronic dictionary VerbNet. VerbNet labels are assigned in a verb-class specific way and have been devised to be more similar to the inventories of thematic role lists usually proposed by linguists. The results in these papers are conflicting. While Loper et al. (2007) and Yi et al. (2007) show that augmenting PropBank labels with VerbNet labels increases generalisation of the less frequent labels, such as ARG2, to new verbs and new domains, they also show that PropBank labels perform better overall, in a semantic role labelling task. Confirming this latter result, Zapirain et al. (2008) find that PropBank role labels are more robust than VerbNet labels in predicting new verb usages, unseen verbs, and they port better to new domains. The apparent contradiction of these results can be due to several confounding factors in the experiments. First, the argument labels for which the VerbNet improvement was found are infrequent, and might therefore not have influenced the overall results enough to counterbalance new errors introduced by the finer-grained annotation scheme; second, the learning methods in both these experimental settings are largely based on syntactic information, thereby confounding learning and generalisation due to syntax — which would favour the more syntactically-driven PropBank annotation — with learning due to greater generality of the semantic role annotation; finally, task-specific learning-based experiments do not guarantee that the learners be sufficiently powerful to make use of the full generality of the semantic role labels. In this paper, we compare the two annotation schemes, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning. Because the well-attested strong correlation between syntactic structure and semantic role labels (Levin and Rappaport Hovav, 2005; Merlo and Stevenson, 2001) could intervene as a confounding factor in this analysis, we expressly limit our investigation to data analyses and statistical measures that do not exploit syntactic properties or parsing techniques. The conclusions reached this way are not task-specific and are therefore widely applicable. 
To preview, based on results in section 3, we conclude that PropBank is easier to learn, but VerbNet is more informative in general, it generalises better to new role instances and its labels are more strongly correlated to specific verbs. In section 4, we show that VerbNet labels provide finergrained specificity. PropBank labels are more concentrated on a few VerbNet labels at higher frequency. This is not true at low frequency, where VerbNet provides disambiguations to overloaded PropBank variables. Practically, these two sets of results indicate that both annotation schemes could be useful in different circumstances, and at different frequency bands. In section 5, we report results indicating that PropBank role sets are highlevel abstractions of VerbNet role sets and that VerbNet role sets are more verb and class-specific. In section 6, we show that PropBank more closely captures the thematic hierarchy and is more correlated to grammatical functions, hence potentially more useful for semantic role labelling, for learners whose features are based on the syntactic tree. Finally, in section 7, we summarise some previous results, and we provide new statistical evidence to argue that VerbNet labels are more general across verbs. These conclusions are reached by task-independent statistical analyses. The data and the measures used to reach these conclusions are discussed in the next section. 2 Materials and Method In data analysis and inferential statistics, careful preparation of the data and choice of the appropriate statistical measures are key. We illustrate the data and the measures used here. 2.1 Data and Semantic Role Annotation Proposition Bank (Palmer et al., 2005) adds Levin’s style predicate-argument annotation and indication of verbs’ alternations to the syntactic structures of the Penn Treebank (Marcus et al., 289 1993). It defines a limited role typology. Roles are specified for each verb individually. Verbal predicates in the Penn Treebank (PTB) receive a label REL and their arguments are annotated with abstract semantic role labels A0-A5 or AA for those complements of the predicative verb that are considered arguments, while those complements of the verb labelled with a semantic functional label in the original PTB receive the composite semantic role label AM-X, where X stands for labels such as LOC, TMP or ADV, for locative, temporal and adverbial modifiers respectively. PropBank uses two levels of granularity in its annotation, at least conceptually. Arguments receiving labels A0-A5 or AA do not express consistent semantic roles and are specific to a verb, while arguments receiving an AM-X label are supposed to be adjuncts and the respective roles they express are consistent across all verbs. However, among argument labels, A0 and A1 are assigned attempting to capture Proto-Agent and Proto-Patient properties (Dowty, 1991). They are, therefore, more valid across verbs and verb instances than the A2A5 labels. Numerical results in Yi et al. (2007) show that 85% of A0 occurrences translate into Agent roles and more than 45% instances of A1 map into Patient and Patient-like roles, using a VerbNet labelling scheme. This is also confirmed by our counts, as illustrated in Tables 3 and 4 and discussed in Section 4 below. VerbNet is a lexical resource for English verbs, yielding argumental and thematic information (Kipper, 2005). 
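As a concrete illustration of the two annotation styles just described, the sketch below encodes a single constructed proposition with PropBank labels and VerbNet thematic roles; the sentence and its role assignments are invented for exposition, not taken from the Treebank or SemLink.

```python
# One constructed proposition annotated in both schemes (illustrative only).
sentence = "The fund bought the shares yesterday"

propbank = {
    "REL":    "bought",      # the predicative verb
    "A0":     "The fund",    # proto-agent argument
    "A1":     "the shares",  # proto-patient argument
    "AM-TMP": "yesterday",   # temporal modifier (AM-X tier)
}

verbnet = {
    "Agent": "The fund",
    "Theme": "the shares",
    # VerbNet is a lexicon of argument structure and lists no optional
    # modifiers, so "yesterday" has no VerbNet role here.
}

# Analyses of the kind reported below boil down to counting, over many such
# propositions, how often a given PropBank label and a given VerbNet label
# annotate the same argument span.
```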
VerbNet resembles WordNet in spirit, it provides a verbal lexicon tying verbal semantics (theta-roles and selectional restrictions) to verbal distributional syntax. VerbNet defines 23 thematic roles that are valid across verbs. The list of thematic roles can be seen in the first column of Table 4. For some of our comparisons below to be valid, we will need to reduce the inventory of labels of VerbNet to the same number of labels in PropBank. Following previous work (Loper et al., 2007), we define equivalence classes of VerbNet labels. We will refer to these classes as VerbNet groups. The groups we define are illustrated in Figure 1. Notice also that all our comparisons, like previous work, will be limited to the obligatory arguments in PropBank, the A0 to A5, AA arguments, to be comparable to VerbNet. VerbNet is a lexicon and by definition it does not list optional modifiers (the arguments labelled AM-X in PropBank). In order to support the joint use of both these resources and their comparison, SemLink has been developed (Loper et al., 2007). SemLink1 provides mappings from PropBank to VerbNet for the WSJ portion of the Penn Treebank. The mapping have been annotated automatically by a two-stage process: a lexical mapping and an instance classifier (Loper et al., 2007). The results were handcorrected. In addition to semantic roles for both PropBank and VerbNet, SemLink contains information about verbs, their senses and their VerbNet classes which are extensions of Levin’s classes. The annotations in SemLink 1.1. are not complete. In the analyses presented here, we have only considered occurrences of semantic roles for which both a PropBank and a VerbNet label is available in the data (roughly 45% of the PropBank semantic roles have a VerbNet semantic role).2 Furthermore, we perform our analyses on training and development data only. This means that we left section 23 of the Wall Street Journal out. The analyses are done on the basis of 106,459 semantic role pairs. For the analysis concerning the correlation between semantic roles and syntactic dependencies in Section 6, we merged the SemLink data with the non-projectivised gold data of the CoNNL 2008 shared task on syntactic and semantic dependency parsing (Surdeanu et al., 2008). Only those dependencies that bear both a syntactic and a semantic label have been counted for test and development set. We have discarded discontinous arguments. Analyses are based on 68,268 dependencies in total. 2.2 Measures In the following sections, we will use simple proportions, entropy, joint entropy, conditional entropy, mutual information, and a normalised form of mutual information which measures correlation between nominal attributes called symmetric uncertainty (Witten and Frank, 2005, 291). These are all widely used measures (Manning and Schuetze, 1999), excepted perhaps the last one. We briefly describe it here. 1(http://verbs.colorado.edu/semlink/) 2In some cases SemLink allows for multiple annotations. In those cases we selected the first annotation. 
AGENT: Agent, Agent1
PATIENT: Patient
GOAL: Recipient, Destination, Location, Source, Material, Beneficiary, Goal
EXTENT: Extent, Asset, Value
PREDATTR: Predicate, Attribute, Theme, Theme1, Theme2, Topic, Stimulus, Proposition
PRODUCT: Patient2, Product, Patient1
INSTRCAUSE: Instrument, Cause, Experiencer, Actor2, Actor, Actor1
Figure 1: VerbNet Groups

Given a random variable X, the entropy H(X) describes our uncertainty about the value of X, and hence it quantifies the information contained in a message transmitted by this variable. Given two random variables X, Y, the joint entropy H(X,Y) describes our uncertainty about the value of the pair (X,Y). Symmetric uncertainty is a normalised measure of the information redundancy between the distributions of two random variables. It calculates the ratio between the joint entropy of the two random variables if they are not independent and the joint entropy if the two random variables were independent (which is the sum of their individual entropies). This measure is calculated as follows:

U(A, B) = 2 [H(A) + H(B) − H(A, B)] / [H(A) + H(B)]

where H(X) = −Σ_{x∈X} p(x) log p(x) and H(X, Y) = −Σ_{x∈X, y∈Y} p(x, y) log p(x, y). Symmetric uncertainty lies between 0 and 1. A higher value for symmetric uncertainty indicates that the two random variables are more highly associated (more redundant), while lower values indicate that the two random variables approach independence. We use these measures to evaluate how well two semantic role inventories capture well-known distributional generalisations. We discuss several of these generalisations in the following sections.

3 Amount of Information in Semantic Roles Inventory

Most proposals of semantic role inventories agree on the fact that the number of roles should be small to be valid generally.3

3With the notable exception of FrameNet, which is developing a large number of labels organised hierarchically and

Task                 PropBank ERR       VerbNet ERR
Role generalisation  62 ((82−52)/48)    66 ((77−33)/67)
No verbal features   48 ((76−52)/48)    43 ((58−33)/67)
Unseen predicates    50 ((75−52)/48)    37 ((62−33)/67)
Table 2: Percent error rate reduction (ERR) across role labelling sets in three tasks in Zapirain et al. (2008). ERR = (result − baseline) / (100% − baseline).

PropBank and VerbNet clearly differ in the level of granularity of the semantic roles that have been assigned to the arguments. PropBank makes fewer distinctions than VerbNet, with 7 core argument labels compared to VerbNet's 23. More important than the size of the inventory, however, is the fact that PropBank has a much more skewed distribution than VerbNet, illustrated in Table 1. Consequently, the distribution of PropBank labels has an entropy of 1.37 bits, and even when the VerbNet labels are reduced to 7 equivalence classes the distribution has an entropy of 2.06 bits. VerbNet therefore conveys more information, but it is also more difficult to learn, as it is more uncertain. An uninformed PropBank learner that simply assigned the most frequent label would be correct 52% of the time by always assigning an A1 label, while an equally uninformed VerbNet learner would be correct only 33% of the time by always assigning Agent. This simple fact might cast new light on some of the comparative conclusions of previous work. In some interesting experiments, Zapirain et al. (2008) test the generalising abilities of VerbNet and PropBank comparatively on new role instances in general (their Table 1, line CoNLL setting, column F1 core), and also on unknown verbs and in the absence of verbal features.
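Before turning to their findings, here is a minimal sketch of how the measures defined above can be computed from aligned label sequences; the toy labels below are invented and merely stand in for the SemLink-derived annotation, and the paper's exact counting conventions are not reproduced.

```python
import math
from collections import Counter

def entropy(labels):
    """H(X) = -sum_x p(x) * log2 p(x), estimated from observed frequencies."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def symmetric_uncertainty(xs, ys):
    """U(X, Y) = 2 * (H(X) + H(Y) - H(X, Y)) / (H(X) + H(Y)).
    0 means the two label sequences are independent, 1 means fully redundant."""
    hx, hy = entropy(xs), entropy(ys)
    hxy = entropy(list(zip(xs, ys)))      # joint entropy over aligned label pairs
    return 2 * (hx + hy - hxy) / (hx + hy)

def majority_baseline(labels):
    """Accuracy of an uninformed learner that always predicts the most frequent label."""
    return Counter(labels).most_common(1)[0][1] / len(labels)

# Invented toy data standing in for SemLink-style (PropBank, VerbNet) label pairs.
pb = ["A0", "A1", "A1", "A2", "A1", "A0"]
vn = ["Agent", "Theme", "Patient", "Goal", "Theme", "Agent"]

print(round(entropy(pb), 2), round(entropy(vn), 2))
print(round(symmetric_uncertainty(pb, vn), 2))
print(round(majority_baseline(pb), 2), round(majority_baseline(vn), 2))
```

With the real SemLink counts, computations of this shape are what lie behind the 1.37-bit and 2.06-bit entropies and the 52%/33% majority baselines cited above.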
They find that a learner based on VerbNet has worse learning performance. They interpret this result as indicating that VerbNet labels are less general and more dependent on knowledge of specific verbs. However, a comparison that takes into consideration the differential baseline is able to factor the difficulty of the task out of the results for the overall performance. A simple baseline for a classifier is based on a majority class assignment (see our Table 1). We use the performance results reported in Zapirain et al. (2008) and calculate the reduction in error rate based on this differential baseline for the two annotation schemes. We compare only the results for the core labels in PropBank as those interpreted frame-specifically (Ruppenhofer et al., 2006). 291 PropBank VerbNet A0 38.8 Agent 32.8 Cause 1.9 Source 0.9 Asset 0.3 Goal 0.00 A1 51.7 Theme 26.3 Product 1.6 Actor1 0.8 Material 0.2 Agent1 0.00 A2 9.0 Topic 11.5 Extent 1.3 Theme2 0.8 Beneficiary 0.2 A3 0.5 Patient 5.8 Destination 1.2 Theme1 0.8 Proposition 0.1 A4 0.0 Experiencer 4.2 Patient1 1.2 Attribute 0.7 Value 0.1 A5 0.0 Predicate 2.3 Location 1.0 Patient2 0.5 Instrument 0.1 AA 0.0 Recipient 2.2 Stimulus 0.9 Actor2 0.3 Actor 0.0 Table 1: Distribution of PropBank core labels and VerbNet labels. are the ones that correspond to VerbNet.4 We find more mixed results than previously reported. VerbNet has better role generalising ability overall as its reduction in error rate is greater than PropBank (first line of Table 2), but it is more degraded by lack of verb information (second and third lines of Table 2). The importance of verb information for VerbNet is confirmed by information-theoretic measures. While the entropy of VerbNet labels is higher than that of PropBank labels (2.06 bits vs. 1.37 bits), as seen before, the conditional entropy of respective PropBank and VerbNet distributions given the verb is very similar, but higher for PropBank (1.11 vs 1.03 bits), thereby indicating that the verb provides much more information in association with VerbNet labels. The mutual information of the PropBank labels and the verbs is only 0.26 bits, while it is 1.03 bits for VerbNet. These results are expected if we recall the two-tiered logic that inspired PropBank annotation, where the abstract labels are less related to verbs than labels in VerbNet. These results lead us to our first conclusion: while PropBank is easier to learn, VerbNet is more informative in general, it generalises better to new role instances, and its labels are more strongly correlated to specific verbs. It is therefore advisable to use both annotations: VerbNet labels if the verb is available, reverting to PropBank labels if no lex4We assume that our majority class can roughly correspond to Zapirain et al. (2008)’s data. Notice however that both sampling methods used to collect the counts are likely to slightly overestimate frequent labels. Zapirain et al. (2008) sample only complete propositions. It is reasonable to assume that higher numbered PropBank roles (A3, A4, A5) are more difficult to define. It would therefore more often happen that these labels are not annotated than it happens that A0, A1, A2, the frequent labels, are not annotated. This reasoning is confirmed by counts on our corpus, which indicate that incomplete propositions include a higher proportion of low frequency labels and a lower proportion of high frequency labels that the overall distribution. 
However, our method is also likely to overestimate frequent labels, since we count all labels, even those in incomplete propositions. By the same reasoning, we will find more frequent labels than the underlying real distribution of a complete annotation. ical information is known. 4 Equivalence Classes of Semantic Roles An observation that holds for all semantic role labelling schemes is that certain labels seem to be more similar than others, based on their ability to occur in the same syntactic environment and to be expressed by the same function words. For example, Agent and Instrumental Cause are often subjects (of verbs selecting animate and inanimate subjects respectively); Patients/Themes can be direct objects of transitive verbs and subjects of change of state verbs; Goal and Beneficiary can be passivised and undergo the dative alternation; Instrument and Comitative are expressed by the same preposition in many languages (see Levin and Rappaport Hovav (2005).) However, most annotation schemes in NLP and linguistics assume that semantic role labels are atomic. It is therefore hard to explain why labels do not appear to be equidistant in meaning, but rather to form equivalence classes in certain contexts. 5 While both role inventories under scrutiny here use atomic labels, their joint distribution shows interesting relations. The proportion counts are shown in Table 3 and 4. If we read these tables column-wise, thereby taking the more linguistically-inspired labels in VerbNet to be the reference labels, we observe that the labels in PropBank are especially concentrated on those labels that linguistically would be considered similar. Specifically, in Table 3 A0 mostly groups together Agents and Instrumental Causes; A1 mostly refers to Themes and Patients; while A2 refers to Goals and Themes. If we 5Clearly, VerbNet annotators recognise the need to express these similarities since they use variants of the same label in many cases. Because the labels are atomic however, the distance between Agent and Patient is the same as Patient and Patient1 and the intended greater similarity of certain labels is lost to a learning device. As discussed at length in the linguistic literature, features bundles instead of atomic labels would be the mechanism to capture the differential distance of labels in the inventory (Levin and Rappaport Hovav, 2005). 292 A0 A1 A2 A3 A4 A5 AA Agent 32.6 0.2 Patient 0.0 5.8 Goal 0.0 1.5 4.0 0.2 0.0 0.0 Extent 0.2 1.3 0.2 PredAttr 1.2 39.3 2.9 0.0 0.0 Product 0.1 2.7 0.6 0.0 InstrCause 4.8 2.2 0.3 0.1 Table 3: Distribution of PropBank by VerbNet group labels according to SemLink. Counts indicated as 0.0 approximate zero by rounding, while a - sign indicates that no occurrences were found. read these tables row-wise, thereby concentrating on the grouping of PropBank labels provided by VerbNet labels, we see that low frequency PropBank labels are more evenly spread across VerbNet labels than the frequent labels, and it is more difficult to identify a dominant label than for highfrequency labels. Because PropBank groups together VerbNet labels at high frequency, while VerbNet labels make different distinctions at lower frequencies, the distribution of PropBank is much more skewed than VerbNet, yielding the differences in distributions and entropy discussed in the previous section. We can draw, then, a second conclusion: while VerbNet is finer-grained than PropBank, the two classifications are not in contradiction with each other. 
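Reading the joint table column-wise or row-wise, as above, amounts to conditioning the joint counts on one of the two inventories. The sketch below uses invented counts that merely stand in for the SemLink-derived figures behind Tables 3 and 4.

```python
from collections import defaultdict

# Invented joint counts of (PropBank label, VerbNet group) pairs.
joint = {
    ("A0", "AGENT"): 3260, ("A0", "INSTRCAUSE"): 480,
    ("A1", "PREDATTR"): 3930, ("A1", "PATIENT"): 580, ("A1", "PRODUCT"): 270,
    ("A2", "GOAL"): 400, ("A2", "PREDATTR"): 290,
}

def conditional(joint_counts, given="pb"):
    """Normalise joint counts into P(vn | pb) (given='pb') or P(pb | vn) (given='vn')."""
    totals = defaultdict(float)
    for (pb, vn), count in joint_counts.items():
        totals[pb if given == "pb" else vn] += count
    return {
        (pb, vn): count / totals[pb if given == "pb" else vn]
        for (pb, vn), count in joint_counts.items()
    }

p_vn_given_pb = conditional(joint, given="pb")   # how each PropBank label spreads over VerbNet groups
p_pb_given_vn = conditional(joint, given="vn")   # how each VerbNet group concentrates on PropBank labels
print(round(p_vn_given_pb[("A0", "AGENT")], 2))  # e.g. 0.87 with these invented counts
```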
VerbNet greater specificity can be used in different ways depending on the frequency of the label. Practically, PropBank labels could provide a strong generalisation to a VerbNet annotation at high-frequency. VerbNet labels, on the other hand, can act as disambiguators of overloaded variables in PropBank. This conclusion was also reached by Loper et al. (2007). Thus, both annotation schemes could be useful in different circumstances and at different frequency bands. 5 The Combinatorics of Semantic Roles Semantic roles exhibit paradigmatic generalisations — generalisations across similar semantic roles in the inventory — (which we saw in section 4.) They also show syntagmatic generalisations, generalisations that concern the context. One kind of context is provided by what other roles they can occur with. It has often been observed that certain semantic roles sets are possible, while others are not; among the possible sets, certain are much more frequent than others (Levin and Rappaport Hovav, 2005). Some linguistically-inspired A0 A1 A2 A3 A4 A5 AA Actor 0.0 Actor1 0.8 Actor2 0.3 0.1 Agent1 0.0 Agent 32.6 0.2 Asset 0.1 0.0 0.2 Attribute 0.1 0.7 Beneficiary 0.0 0.1 0.1 0.0 Cause 0.7 1.1 0.1 0.1 Destination 0.4 0.8 0.0 Experiencer 3.3 0.9 0.1 Extent 1.3 Goal 0.0 Instrument 0.1 0.0 Location 0.0 0.4 0.6 0.0 0.0 Material 0.1 0.1 0.0 Patient 0.0 5.8 Patient1 0.1 1.1 Patient2 0.1 0.5 Predicate 1.2 1.1 0.0 Product 0.0 1.5 0.1 0.0 Proposition 0.0 0.1 Recipient 0.3 2.0 0.0 Source 0.3 0.5 0.1 Stimulus 1.0 Theme 0.8 25.1 0.5 0.0 0.0 Theme1 0.4 0.4 0.0 0.0 Theme2 0.1 0.4 0.3 Topic 11.2 0.3 Value 0.1 Table 4: Distribution of PropBank by original VerbNet labels according to SemLink. Counts indicated as 0.0 approximate zero by rounding, while a - sign indicates that no occurrences were found. semantic role labelling techniques do attempt to model these dependencies directly (Toutanova et al., 2008; Merlo and Musillo, 2008). Both annotation schemes impose tight constraints on co-occurrence of roles, independently of any verb information, with 62 role sets for PropBank and 116 role combinations for VerbNet, fewer than possible. Among the observed role sets, some are more frequent than expected under an assumption of independence between roles. For example, in PropBank, propositions comprising A0, A1 roles are observed 85% of the time, while they would be expected to occur only in 20% of the cases. In VerbNet the difference is also great between the 62% observed Agent, PredAttr propositions and the 14% expected. Constraints on possible role sets are the expression of structural constraints among roles inherited from syntax, which we discuss in the next section, but also of the underlying event structure of the verb. Because of this relation, we expect a strong correlation between role sets and their associated 293 A0,A1 A0,A2 A1,A2 Agent, Theme 11650 109 4 Agent, Topic 8572 14 0 Agent, Patient 1873 0 0 Experiencer, Theme 1591 0 15 Agent, Product 993 1 0 Agent, Predicate 960 64 0 Experiencer, Stimulus 843 0 0 Experiencer, Cause 756 0 2 Table 5: Sample of role sets correspondences verb, as well as role sets and verb classes for both annotation schemes. However, PropBank roles are associated based on the meaning of the verb, but also based on their positional prominence in the tree, and so we can expect their relation to the actual verb entry to be weaker. 
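The observed-versus-expected comparison just described can be made explicit with a small sketch; the role sets below are invented and only illustrate the arithmetic, not the corpus counts.

```python
from collections import Counter

# Invented role sets (one frozenset per proposition), standing in for the
# role sets counted over PropBank/VerbNet propositions in the paper.
propositions = [
    frozenset({"A0", "A1"}), frozenset({"A0", "A1"}), frozenset({"A0", "A1"}),
    frozenset({"A2", "A3"}), frozenset({"A2", "A3"}), frozenset({"A4"}),
]
n = len(propositions)
marginal = Counter(role for roles in propositions for role in roles)

def observed(roles):
    """Fraction of propositions whose role set contains all of the given roles."""
    return sum(1 for rs in propositions if set(roles) <= rs) / n

def expected_if_independent(roles):
    """Co-occurrence rate predicted by the product of the individual role
    probabilities, i.e. what we would expect if roles were chosen independently."""
    p = 1.0
    for role in roles:
        p *= marginal[role] / n
    return p

print(observed({"A0", "A1"}), expected_if_independent({"A0", "A1"}))  # 0.5 vs. 0.25
```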
We measure here simply the correlation as indicated by the symmetric uncertainty of the joint distribution of role sets by verbs and of role sets by verb classes, for each of the two annotation schemes. We find that the correlation between PropBank role sets and verb classes is weaker than the correlation between VerbNet role sets and verb classes, as expected (PropBank: U=0.21 vs VerbNet: U=0.46). We also find that correlation between PropBank role sets and verbs is weaker than the correlation between VerbNet role sets and verbs (PropBank: U=0.23 vs VerbNet U=0.43). Notice that this result holds for VerbNet role label groups, and is therefore not a side-effect of a different size in role inventory. This result confirms our findings reported in Table 2, which showed a larger degradation of VerbNet labels in the absence of verb information. If we analyse the data, we see that many role sets that form one single set in PropBank are split into several sets in VerbNet, with those roles that are different being roles that in PropBank form a group. So, for example, a role list (A0, A1) in PropBank will corresponds to 14 different lists in VerbNet (when using the groups). The three most frequent VerbNet role sets describe 86% of the cases: (Agent, Predattr) 71%, (InstrCause, PredAttr) 9%, and (Agent, Patient) 6% . Using the original VerbNet labels – a very small sample of the most frequent ones is reported in Table 5 — we find 39 different sets. Conversely, we see that VerbNet sets corresponds to few PropBank sets, even for high frequency. The third conclusion we can draw then is twofold. First, while VerbNet labels have been assigned to be valid across verbs, as confirmed by their ability to enter in many combinations, these combinations are more verb and class-specific than combinations in PropBank. Second, the finegrained, coarse-grained correspondence of annotations between VerbNet and PropBank that was illustrated by the results in Section 4 is also borne out when we look at role sets: PropBank role sets appear to be high-level abstractions of VerbNet role sets. 6 Semantic Roles and Grammatical Functions: the Thematic Hierarchy A different kind of context-dependence is provided by thematic hierarchies. It is a well-attested fact that lexical semantic properties described by semantic roles and grammatical functions appear to be distributed according to prominence scales (Levin and Rappaport Hovav, 2005). Semantic roles are organized according to the thematic hierarchy (one proposal among many is Agent > Experiencer> Goal/Source/Location> Patient (Grimshaw, 1990)). This hierarchy captures the fact that the options for the structural realisation of a particular argument do not depend only on its role, but also on the roles of other arguments. For example in psychological verbs, the position of the Experiencer as a syntactic subject or object depends on whether the other role in the sentence is a Stimulus, hence lower in the hierarchy, as in the psychological verbs of the fear class or an Agent/Cause as in the frighten class. Two prominence scales can combine by matching elements harmonically, higher elements with higher elements and lower with lower (Aissen, 2003). Grammatical functions are also distributed according to a prominence scale. Thus, we find that most subjects are Agents, most objects are Patients or Themes, and most indirect objects are Goals, for example. The semantic role inventory, thus, should show a certain correlation with the inventory of grammatical functions. 
However, perfect correlation is clearly not expected as in this case the two levels of representation would be linguistically and computationally redundant. Because PropBank was annotated according to argument prominence, we expect to see that PropBank reflects relationships between syntax and semantic role labels more strongly than VerbNet. Comparing syntactic dependency labels to their corresponding PropBank or VerbNet groups labels (groups are used to elim294 inate the confound of different inventory sizes), we find that the joint entropy of PropBank and dependency labels is 2.61 bits while the joint entropy of VerbNet and dependency labels is 3.32 bits. The symmetric uncertainty of PropBank and dependency labels is 0.49, while the symmetric uncertainty of VerbNet and dependency labels is 0.39. On the basis of these correlations, we can confirm previous findings: PropBank more closely captures the thematic hierarchy and is more correlated to grammatical functions, hence potentially more useful for semantic role labelling, for learners whose features are based on the syntactic tree. VerbNet, however, provides a level of annotation that is more independent of syntactic information, a property that might be useful in several applications, such as machine translation, where syntactic information might be too language-specific. 7 Generality of Semantic Roles Semantic roles are not meant to be domainspecific, but rather to encode aspects of our conceptualisation of the world. A semantic role inventory that wants to be linguistically perspicuous and also practically useful in several tasks needs to reflect our grammatical representation of events. VerbNet is believed to be superior in this respect to PropBank, as it attempts to be less verb-specific and to be portable across classes. Previous results (Loper et al., 2007; Zapirain et al., 2008) appear to indicate that this is not the case because a labeller has better performance with PropBank labels than with VerbNet labels. But these results are taskspecific, and they were obtained in the context of parsing. Since we know that PropBank is more closely related to grammatical function and syntactic annotation than VerbNet, as indicated above in Section 6, then these results could simply indicate that parsing predicts PropBank labels better because they are more closely related to syntactic labels, and not because the semantic roles inventory is more general. Several of the findings in the previous sections shed light on the generality of the semantic roles in the two inventories. Results in Section 3 show that previous results can be reinterpreted as indicating that VerbNet labels generalise better to new roles. We attempt here to determine the generality of the “meaning” of a role label without recourse to a task-specific experiment. It is often claimed in the literature that semantic roles are better described by feature bundles. In particular, the features sentience and volition have been shown to be useful in distinguishing Proto-Agents from ProtoPatients (Dowty, 1991). These features can be assumed to be correlated to animacy. Animacy has indeed been shown to be a reliable indicator of semantic role differences (Merlo and Stevenson, 2001). Personal pronouns in English grammaticalise animacy. We extract all the occurrences of the unambiguously animate pronouns (I, you, he, she, us, we, me, us, him) and the unambiguously inanimate pronoun it, for each semantic role label, in PropBank and VerbNet. 
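A minimal sketch of this extraction step follows; the (role label, argument head word) input format is an assumption about how the instances might be represented, and the toy instances are invented.

```python
from collections import defaultdict

ANIMATE = {"i", "you", "he", "she", "we", "me", "us", "him"}
INANIMATE = {"it"}

def pronoun_profile(instances):
    """Count unambiguously animate vs. inanimate pronouns for each role label.

    `instances` is an iterable of (role_label, argument_head_word) pairs; the
    field layout is an assumption, not the actual SemLink format."""
    profile = defaultdict(lambda: {"animate": 0, "inanimate": 0})
    for role, head in instances:
        word = head.lower()
        if word in ANIMATE:
            profile[role]["animate"] += 1
        elif word in INANIMATE:
            profile[role]["inanimate"] += 1
    return dict(profile)

# Invented examples for illustration.
toy = [("Agent", "she"), ("Agent", "we"), ("Theme", "it"),
       ("Experiencer", "him"), ("Theme", "it")]
print(pronoun_profile(toy))
```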
We find occurrences for three semantic role labels in PropBank and six in VerbNet. We reduce the VerbNet groups to five by merging Patient roles with PredAttr roles to avoid artificial variation among very similar roles. An analysis of variance of the distributions of the pronous yields a significant effect of animacy for VerbNet (F(4)=5.62, p< 0.05), but no significant effect for PropBank (F(2)=4.94, p=0.11). This result is a preliminary indication that VerbNet labels might capture basic components of meaning more clearly than PropBank labels, and that they might therefore be more general. 8 Conclusions In this paper, we have proposed a taskindependent, general method to analyse annotation schemes. The method is based on information-theoretic measures and comparison with attested linguistic generalisations, to evaluate how well semantic role inventories and annotations capture grammaticalised aspects of meaning. We show that VerbNet is more verb-specific and better able to generalise to new semantic roles, while PropBank, because of its relation to syntax, better captures some of the structural constraints among roles. Future work will investigate another basic property of semantic role labelling schemes: cross-linguistic validity. Acknowledgements We thank James Henderson and Ivan Titov for useful comments. The research leading to these results has received partial funding from the EU FP7 programme (FP7/2007-2013) under grant agreement number 216594 (CLASSIC project: www.classic-project.org). 295 References Judith Aissen. 2003. Differential object marking: Iconicity vs. economy. Natural Language and Linguistic Theory, 21:435–483. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Seventeenth International Conference on Computational Linguistics (ACL-COLING’98), pages 86–90, Montreal, Canada. David Dowty. 1991. Thematic proto-roles and argument selection. Language, 67(3):547–619. Charles Fillmore. 1968. The case for case. In Emmon Bach and Harms, editors, Universals in Linguistic Theory, pages 1–88. Holt, Rinehart, and Winston. Jane Grimshaw. 1990. Argument Structure. MIT Press. Jeffrey Gruber. 1965. Studies in Lexical Relation. MIT Press, Cambridge, MA. Ray Jackendoff. 1972. Semantic Interpretation in Generative Grammar. MIT Press, Cambridge, MA. Karin Kipper. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania. Beth Levin and Malka Rappaport Hovav. 2005. Argument Realization. Cambridge University Press, Cambridge, UK. Edward Loper, Szu ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between PropBank and VerbNet. In Proceedings of the IWCS. Christopher Manning and Hinrich Schuetze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. Mitch Marcus, Beatrice Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330. Paola Merlo and Gabriele Musillo. 2008. Semantic parsing for high-precision semantic role labelling. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CONLL08), pages 1–8, Manchester, UK. Paola Merlo and Suzanne Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3):373–408. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. 
The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31:71–105. Josef Ruppenhofer, Michael Ellsworth, Miriam Petruck, Christopher Johnson, and Jan Scheffczyk. 2006. Framenet ii: Theory and practice. Technical report, Berkeley,CA. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the 12th Conference on Computational Natural Language Learning (CoNLL-2008), pages 159–177. Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2). Ian Witten and Eibe Frank. 2005. Data Mining. Elsevier. Szu-ting Yi, Edward Loper, and Martha Palmer. 2007. Can semantic roles generalize across genres? In Proceedings of the Human Language Technologies 2007 (NAACL-HLT’07), pages 548–555, Rochester, New York, April. Be˜nat Zapirain, Eneko Agirre, and Llu´ıs M`arquez. 2008. Robustness and generalization of role sets: PropBank vs. VerbNet. In Proceedings of ACL-08: HLT, pages 550–558, Columbus, Ohio, June. 296
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 297–305, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Robust Machine Translation Evaluation with Entailment Features∗ Sebastian Pad´o Stuttgart University [email protected] Michel Galley, Dan Jurafsky, Chris Manning Stanford University {mgalley,jurafsky,manning}@stanford.edu Abstract Existing evaluation metrics for machine translation lack crucial robustness: their correlations with human quality judgments vary considerably across languages and genres. We believe that the main reason is their inability to properly capture meaning: A good translation candidate means the same thing as the reference translation, regardless of formulation. We propose a metric that evaluates MT output based on a rich set of features motivated by textual entailment, such as lexical-semantic (in-)compatibility and argument structure overlap. We compare this metric against a combination metric of four state-of-theart scores (BLEU, NIST, TER, and METEOR) in two different settings. The combination metric outperforms the individual scores, but is bested by the entailment-based metric. Combining the entailment and traditional features yields further improvements. 1 Introduction Constant evaluation is vital to the progress of machine translation (MT). Since human evaluation is costly and difficult to do reliably, a major focus of research has been on automatic measures of MT quality, pioneered by BLEU (Papineni et al., 2002) and NIST (Doddington, 2002). BLEU and NIST measure MT quality by using the strong correlation between human judgments and the degree of n-gram overlap between a system hypothesis translation and one or more reference translations. The resulting scores are cheap and objective. However, studies such as Callison-Burch et al. (2006) have identified a number of problems with BLEU and related n-gram-based scores: (1) BLEUlike metrics are unreliable at the level of individual sentences due to data sparsity; (2) BLEU metrics can be “gamed” by permuting word order; (3) for some corpora and languages, the correlation to human ratings is very low even at the system level; (4) scores are biased towards statistical MT; (5) the quality gap between MT and human translations is not reflected in equally large BLEU differences. ∗This paper is based on work funded by the Defense Advanced Research Projects Agency through IBM. The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred. This is problematic, but not surprising: The metrics treat any divergence from the reference as a negative, while (computational) linguistics has long dealt with linguistic variation that preserves the meaning, usually called paraphrase, such as: (1) HYP: However, this was declared terrorism by observers and witnesses. REF: Nevertheless, commentators as well as eyewitnesses are terming it terrorism. A number of metrics have been designed to account for paraphrase, either by making the matching more intelligent (TER, Snover et al. (2006)), or by using linguistic evidence, mostly lexical similarity (METEOR, Banerjee and Lavie (2005); MaxSim, Chan and Ng (2008)), or syntactic overlap (Owczarzak et al. (2008); Liu and Gildea (2005)). Unfortunately, each metrics tend to concentrate on one particular type of linguistic information, none of which always correlates well with human judgments. Our paper proposes two strategies. 
We first explore the combination of traditional scores into a more robust ensemble metric with linear regression. Our second, more fundamental, strategy replaces the use of loose surrogates of translation quality with a model that attempts to comprehensively assess meaning equivalence between references and MT hypotheses. We operationalize meaning equivalence by bidirectional textual entailment (RTE, Dagan et al. (2005)), and thus predict the quality of MT hypotheses with a rich RTE feature set. The entailment-based model goes beyond existing word-level “semantic” metrics such as METEOR by integrating phrasal and compositional aspects of meaning equivalence, such as multiword paraphrases, (in-)correct argument and modification relations, and (dis-)allowed phrase reorderings. We demonstrate that the resulting metric beats both individual and combined traditional MT metrics. The complementary features of both metric types can be combined into a joint, superior metric. 297 HYP: Three aid workers were kidnapped. REF: Three aid workers were kidnapped by pirates. no entailment entailment HYP: The virus did not infect anybody. REF: No one was infected by the virus. entailment entailment Figure 1: Entailment status between an MT system hypothesis and a reference translation for equivalent (top) and non-equivalent (bottom) translations. 2 Regression-based MT Quality Prediction Current MT metrics tend to focus on a single dimension of linguistic information. Since the importance of these dimensions tends not to be stable across language pairs, genres, and systems, performance of these metrics varies substantially. A simple strategy to overcome this problem could be to combine the judgments of different metrics. For example, Paul et al. (2007) train binary classifiers on a feature set formed by a number of MT metrics. We follow a similar idea, but use a regularized linear regression to directly predict human ratings. Feature combination via regression is a supervised approach that requires labeled data. As we show in Section 5, this data is available, and the resulting model generalizes well from relatively small amounts of training data. 3 Textual Entailment vs. MT Evaluation Our novel approach to MT evaluation exploits the similarity between MT evaluation and textual entailment (TE). TE was introduced by Dagan et al. (2005) as a concept that corresponds more closely to “common sense” reasoning patterns than classical, strict logical entailment. Textual entailment is defined informally as a relation between two natural language sentences (a premise P and a hypothesis H) that holds if “a human reading P would infer that H is most likely true”. Knowledge about entailment is beneficial for NLP tasks such as Question Answering (Harabagiu and Hickl, 2006). The relation between textual entailment and MT evaluation is shown in Figure 1. Perfect MT output and the reference translation entail each other (top). Translation problems that impact semantic equivalence, e.g., deletion or addition of material, can break entailment in one or both directions (bottom). On the modelling level, there is common ground between RTE and MT evaluation: Both have to distinguish between valid and invalid variation to determine whether two texts convey the same information or not. For example, to recognize the bidirectional entailment in Ex. 
(1), RTE must account for the following reformulations: synonymy (However/Nevertheless), more general semantic relatedness (observers/commentators), phrasal replacements (and/as well as), and an active/passive alternation that implies structural change (is declared/are terming). This leads us to our main hypothesis: RTE features are designed to distinguish meaning-preserving variation from true divergence and are thus also good predictors in MT evaluation. However, while the original RTE task is asymmetric, MT evaluation needs to determine meaning equivalence, which is a symmetric relation. We do this by checking for entailment in both directions (see Figure 1). Operationally, this ensures we detect translations which either delete or insert material. Clearly, there are also differences between the two tasks. An important one is that RTE assumes the well-formedness of the two sentences. This is not generally true in MT, and could lead to degraded linguistic analyses. However, entailment relations are more sensitive to the contribution of individual words (MacCartney and Manning, 2008). In Example 2, the modal modifiers break the entailment between two otherwise identical sentences: (2) HYP: Peter is certainly from Lincolnshire. REF: Peter is possibly from Lincolnshire. This means that the prediction of TE hinges on correct semantic analysis and is sensitive to misanalyses. In contrast, human MT judgments behave robustly. Translations that involve individual errors, like (2), are judged lower than perfect ones, but usually not crucially so, since most aspects are still rendered correctly. We thus expect even noisy RTE features to be predictive for translation quality. This allows us to use an off-the-shelf RTE system to obtain features, and to combine them using a regression model as described in Section 2. 3.1 The Stanford Entailment Recognizer The Stanford Entailment Recognizer (MacCartney et al., 2006) is a stochastic model that computes match and mismatch features for each premisehypothesis pair. The three stages of the system are shown in Figure 2. The system first uses a robust broad-coverage PCFG parser and a deterministic constituent-dependency converter to construct linguistic representations of the premise and 298 Stage 3: Feature computation (w/ numbers of features) Premise: India buys 1,000 tanks. Hypothesis: India acquires arms. Stage 1: Linguistic analysis India buys 1,000 tanks subj dobj India acquires arms subj dobj Stage 2: Alignment India buys 1,000 tanks subj dobj India acquires arms subj dobj 0.9 1.0 0.7 Alignment (8): Semantic compatibility (34): Insertions and deletions (20): Preservation of reference (16): Structural alignment (28): Overall alignment quality Modality, Factivity, Polarity, Quantification, Lexical-semantic relatedness, Tense Felicity of appositions and adjuncts, Types of unaligned material Locations, Dates, Entities Alignment of main verbs and syntactically prominent words, Argument structure (mis-)matches Figure 2: The Stanford Entailment Recognizer the hypothesis. The results are typed dependency graphs that contain a node for each word and labeled edges representing the grammatical relations between words. Named entities are identified, and contiguous collocations grouped. Next, it identifies the highest-scoring alignment from each node in the hypothesis graph to a single node in the premise graph, or to null. It uses a locally decomposable scoring function: The score of an alignment is the sum of the local word and edge alignment scores. 
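A much-simplified sketch of such a locally decomposable score is given below, using the premise/hypothesis pair from Figure 2; the similarity table, penalty value and edge reward are toy stand-ins for the system's lexical resources and learned weights, not its actual implementation.

```python
# Simplified sketch of a locally decomposable alignment score: the score of a
# hypothesis-to-premise alignment is the sum of per-word and per-edge scores.
# The similarity table is a toy stand-in for the ~10 lexical resources the
# system consults; 'None' marks an unaligned hypothesis word.

WORD_SIM = {("acquires", "buys"): 0.9, ("arms", "tanks"): 0.7, ("india", "india"): 1.0}

def word_score(h_word, p_word):
    if p_word is None:
        return -0.5                              # penalty for leaving a word unaligned
    return WORD_SIM.get((h_word, p_word), 0.0)

def edge_score(h_edge, alignment, p_edges):
    """Reward hypothesis dependency edges whose aligned images are also premise edges."""
    head, dep, label = h_edge
    image = (alignment.get(head), alignment.get(dep), label)
    return 1.0 if image in p_edges else 0.0

def alignment_score(h_words, h_edges, p_edges, alignment):
    total = sum(word_score(w, alignment.get(w)) for w in h_words)
    total += sum(edge_score(e, alignment, p_edges) for e in h_edges)
    return total

# Toy premise "India buys 1,000 tanks" / hypothesis "India acquires arms".
p_edges = {("buys", "india", "subj"), ("buys", "tanks", "dobj")}
h_words = ["india", "acquires", "arms"]
h_edges = [("acquires", "india", "subj"), ("acquires", "arms", "dobj")]
alignment = {"india": "india", "acquires": "buys", "arms": "tanks"}
print(alignment_score(h_words, h_edges, p_edges, alignment))  # 1.0 + 0.9 + 0.7 + 1.0 + 1.0 = 4.6
```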
The computation of these scores make extensive use of about ten lexical similarity resources, including WordNet, InfoMap, and Dekang Lin’s thesaurus. Since the search space is exponential in the hypothesis length, the system uses stochastic (rather than exhaustive) search based on Gibbs sampling (see de Marneffe et al. (2007)). Entailment features. In the third stage, the system produces roughly 100 features for each aligned premise-hypothesis pair. A small number of them are real-valued (mostly quality scores), but most are binary implementations of small linguistic theories whose activation indicates syntactic and semantic (mis-)matches of different types. Figure 2 groups the features into five classes. Alignment features measure the overall quality of the alignment as given by the lexical resources. Semantic compatibility features check to what extent the aligned material has the same meaning and preserves semantic dimensions such as modality and factivity, taking a limited amount of context into account. Insertion/deletion features explicitly address material that remains unaligned and assess its felicity. Reference features ascertain that the two sentences actually refer to the same events and participants. Finally, structural features add structural considerations by ensuring that argument structure is preserved in the translation. See MacCartney et al. (2006) for details on the features, and Sections 5 and 6 for examples of feature firings. Efficiency considerations. The use of deep linguistic analysis makes our entailment-based metric considerably more heavyweight than traditional MT metrics. The average total runtime per sentence pair is 5 seconds on an AMD 2.6GHz Opteron core – efficient enough to perform regular evaluations on development and test sets. We are currently investigating caching and optimizations that will enable the use of our metric for MT parameter tuning in a Minimum Error Rate Training setup (Och, 2003). 4 Experimental Evaluation 4.1 Experiments Traditionally, human ratings for MT quality have been collected in the form of absolute scores on a five- or seven-point Likert scale, but low reliability numbers for this type of annotation have raised concerns (Callison-Burch et al., 2008). An alternative that has been adopted by the yearly WMT evaluation shared tasks since 2008 is the collection of pairwise preference judgments between pairs of MT hypotheses which can be elicited (somewhat) more reliably. We demonstrate that our approach works well for both types of annotation and different corpora. Experiment 1 models absolute scores on Asian newswire, and Experiment 2 pairwise preferences on European speech and news data. 4.2 Evaluation We evaluate the output of our models both on the sentence and on the system level. At the sentence level, we can correlate predictions in Experiment 1 directly with human judgments with Spearman’s ρ, 299 a non-parametric rank correlation coefficient appropriate for non-normally distributed data. In Experiment 2, the predictions cannot be pooled between sentences. Instead of correlation, we compute “consistency” (i.e., accuracy) with human preferences. System-level predictions are computed in both experiments from sentence-level predictions, as the ratio of sentences for which each system provided the best translation (Callison-Burch et al., 2008). 
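A minimal sketch of this system-level aggregation follows; the system names and predictions are invented, and the tie-aware refinement described in the next paragraph would relax the strict "best" test with an ε tolerance.

```python
from collections import defaultdict

def system_scores(sentence_preds):
    """Aggregate sentence-level predictions into system-level scores.

    `sentence_preds` maps a sentence id to {system: predicted score}; a system's
    score is the fraction of sentences on which it received the highest prediction.
    The data layout is an assumption made for this sketch."""
    wins = defaultdict(int)
    for per_system in sentence_preds.values():
        best = max(per_system.values())
        for system, score in per_system.items():
            if score == best:
                wins[system] += 1
    n = len(sentence_preds)
    systems = {s for per_system in sentence_preds.values() for s in per_system}
    return {s: wins[s] / n for s in systems}

# Invented predictions for three systems on three sentences.
preds = {
    "s1": {"sysA": 5.1, "sysB": 4.2, "sysC": 4.9},
    "s2": {"sysA": 3.0, "sysB": 3.8, "sysC": 3.1},
    "s3": {"sysA": 4.4, "sysB": 4.0, "sysC": 4.6},
}
print(system_scores(preds))  # each system is best on one of the three sentences here
```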
We extend this procedure slightly because realvalued predictions cannot predict ties, while human raters decide for a significant portion of sentences (as much as 80% in absolute score annotation) to “tie” two systems for first place. To simulate this behavior, we compute “tie-aware” predictions as the percentage of sentences where the system’s hypothesis was assigned a score better or at most ε worse than the best system. ε is set to match the frequency of ties in the training data. Finally, the predictions are again correlated with human judgments using Spearman’s ρ. “Tie awareness” makes a considerable practical difference, improving correlation figures by 5–10 points.1 4.3 Baseline Metrics We consider four baselines. They are small regression models as described in Section 2 over component scores of four widely used MT metrics. To alleviate possible nonlinearity, we add all features in linear and log space. Each baselines carries the name of the underlying metric plus the suffix -R.2 BLEUR includes the following 18 sentence-level scores: BLEU-n and n-gram precision scores (1 ≤n ≤4); BLEU brevity penalty (BP); BLEU score divided by BP. To counteract BLEU’s brittleness at the sentence level, we also smooth BLEU-n and n-gram precision as in Lin and Och (2004). NISTR consists of 16 features. NIST-n scores (1 ≤n ≤10) and information-weighted n-gram precision scores (1 ≤n ≤4); NIST brevity penalty (BP); and NIST score divided by BP. 1Due to space constraints, we only show results for “tieaware” predictions. See Pad´o et al. (2009) for a discussion. 2The regression models can simulate the behaviour of each component by setting the weights appropriately, but are strictly more powerful. A possible danger is that the parameters overfit on the training set. We therefore verified that the three non-trivial “baseline” regression models indeed confer a benefit over the default component combination scores: BLEU-1 (which outperformed BLEU-4 in the MetricsMATR 2008 evaluation), NIST-4, and TER (with all costs set to 1). We found higher robustness and improved correlations for the regression models. An exception is BLEU-1 and NIST-4 on Expt. 1 (Ar, Ch), which perform 0.5–1 point better at the sentence level. TERR includes 50 features. We start with the standard TER score and the number of each of the four edit operations. Since the default uniform cost does not always correlate well with human judgment, we duplicate these features for 9 non-uniform edit costs. We find it effective to set insertion cost close to 0, as a way of enabling surface variation, and indeed the new TERp metric uses a similarly low default insertion cost (Snover et al., 2009). METEORR consists of METEOR v0.7. 4.4 Combination Metrics The following three regression models implement the methods discussed in Sections 2 and 3. MTR combines the 85 features of the four baseline models. It uses no entailment features. RTER uses the 70 entailment features described in Section 3.1, but no MTR features. MT+RTER uses all MTR and RTER features, combining matching and entailment evidence.3 5 Expt. 1: Predicting Absolute Scores Data. Our first experiment evaluates the models we have proposed on a corpus with traditional annotation on a seven-point scale, namely the NIST OpenMT 2008 corpus.4 The corpus contains translations of newswire text into English from three source languages (Arabic (Ar), Chinese (Ch), Urdu (Ur)). Each language consists of 1500–2800 sentence pairs produced by 7–15 MT systems. We use a “round robin” scheme. 
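A hedged sketch of the regression-based combination under this round-robin scheme is given below, using ridge regression as one possible regularised linear model; the data layout, feature dimensionality and random toy data are assumptions made for illustration, and feature extraction itself is abstracted away.

```python
import numpy as np
from sklearn.linear_model import Ridge
from scipy.stats import spearmanr

def round_robin(features_by_lang, scores_by_lang, alpha=1.0):
    """Train on the feature vectors of two source languages, predict human
    scores for the held-out third, and report Spearman's rho.

    `features_by_lang[lang]` is an (n_segments x n_features) array and
    `scores_by_lang[lang]` the corresponding human judgments (layout assumed)."""
    results = {}
    for held_out in features_by_lang:
        train_langs = [l for l in features_by_lang if l != held_out]
        X_train = np.vstack([features_by_lang[l] for l in train_langs])
        y_train = np.concatenate([scores_by_lang[l] for l in train_langs])
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        preds = model.predict(features_by_lang[held_out])
        rho, _ = spearmanr(preds, scores_by_lang[held_out])
        results[held_out] = rho
    return results

# Random toy data for three source languages: 50 segments, 85 features each.
rng = np.random.default_rng(0)
feats = {l: rng.normal(size=(50, 85)) for l in ("Ar", "Ch", "Ur")}
human = {l: rng.uniform(1, 7, size=50) for l in ("Ar", "Ch", "Ur")}
print(round_robin(feats, human))
```

With the actual feature sets and human scores, a loop of this shape produces the kind of per-language correlations reported in Table 1.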
We optimize the weights of our regression models on two languages and then predict the human scores on the third language. This gauges performance of our models when training and test data come from the same genre, but from different languages, which we believe to be a setup of practical interest. For each test set, we set the system-level tie parameter ε so that the relative frequency of ties was equal to the training set (65–80%). Hypotheses generally had to receive scores within 0.3−0.5 points to tie. Results. Table 1 shows the results. We first concentrate on the upper half (sentence-level results). The predictions of all models correlate highly significantly with human judgments, but we still see robustness issues for the individual MT metrics. 3Software for RTER and MT+RTER is available from http://nlp.stanford.edu/software/mteval.shtml. 4Available from http://www.nist.gov. 300 Evaluation Data Metrics train test BLEUR METEORR NISTR TERR MTR RTER MT+RTER Sentence-level Ar+Ch Ur 49.9 49.1 49.5 50.1 50.1 54.5 55.6 Ar+Ur Ch 53.9 61.1 53.1 50.3 57.3 58.0 62.7 Ch+Ur Ar 52.5 60.1 50.4 54.5 55.2 59.9 61.1 System-level Ar+Ch Ur 73.9 68.4 50.0 90.0∗ 92.7∗ 77.4∗ 81.0∗ Ar+Ur Ch 38.5 44.3 40.0 59.0∗ 51.8∗ 47.7 57.3∗ Ch+Ur Ar 59.7∗ 86.3∗ 61.9∗ 42.1 48.1 59.7∗ 61.7∗ Table 1: Expt. 1: Spearman’s ρ for correlation between human absolute scores and model predictions on NIST OpenMT 2008. Sentence level: All correlations are highly significant. System level: ∗: p<0.05. METEORR achieves the best correlation for Chinese and Arabic, but fails for Urdu, apparently the most difficult language. TERR shows the best result for Urdu, but does worse than METEORR for Arabic and even worse than BLEUR for Chinese. The MTR combination metric alleviates this problem to some extent by improving the “worst-case” performance on Urdu to the level of the best individual metric. The entailment-based RTER system outperforms MTR on each language. It particularly improves on MTR’s correlation on Urdu. Even though METEORR still does somewhat better than MTR and RTER, we consider this an important confirmation for the usefulness of entailment features in MT evaluation, and for their robustness.5 In addition, the combined model MT+RTER is best for all three languages, outperforming METEORR for each language pair. It performs considerably better than either MTR or RTER. This is a second result: the types of evidence provided by MTR and RTER appear to be complementary and can be combined into a superior model. On the system level (bottom half of Table 1), there is high variance due to the small number of predictions per language, and many predictions are not significantly correlated with human judgments. BLEUR, METEORR, and NISTR significantly predict one language each (all Arabic); TERR, MTR, and RTER predict two languages. MT+RTER is the only model that shows significance for all three languages. This result supports the conclusions we have drawn from the sentence-level analysis. Further analysis. We decided to conduct a thorough analysis of the Urdu dataset, the most difficult source language for all metrics. We start with a fea5These results are substantially better than the performance our metric showed in the MetricsMATR 2008 challenge. 
Beyond general enhancement of our model, we attribute the less good MetricsMATR 2008 results to an infelicitous choice of training data for the submission, coupled with the large amount of ASR output in the test data, whose disfluencies represent an additional layer of problems for deep approaches.

[Figure 3: Experiment 1: Learning curve (Urdu). x-axis: % training data (MT08 Ar+Ch); y-axis: Spearman's rho on MT08 Ur; one curve each for Mt-RteR, RteR, MtR and MetR.]

ture ablation study. Removing any feature group from RTER results in drops in correlation of at least three points. The largest drops occur for the structural (δ = −11) and insertion/deletion (δ = −8) features. Thus, all feature groups appear to contribute to the good correlation of RTER. However, there are big differences in the generality of the feature groups: in isolation, the insertion/deletion features achieve almost no correlation, and need to be complemented by more robust features. Next, we analyze the role of training data. Figure 3 shows Urdu average correlations for models trained on increasing subsets of the training data (10% increments, 10 random draws per step; Ar and Ch show similar patterns.) METEORR does not improve, which is to be expected given the model definition. RTER has a rather flat learning curve that climbs to within 2 points of the final correlation value for 20% of the training set (about 400 sentence pairs). Apparently, entailment features do not require a large training set, presumably because most features of RTER are binary. The remaining two models, MTR and MT+RTER, show clearer benefit from more data. With 20% of the total data, they climb to within 5 points of their final performance, but keep slowly improving further.

REF: I shall face that fact today.
HYP: Today I will face this reality.
[doc WL-34-174270-7483871, sent 4, system1] Gold: 6, METEORR: 2.8, RTER: 6.1
• Only function words unaligned (will, this)
• Alignment fact/reality: hypernymy is ok in upward monotone context

REF: What does BBC's Haroon Rasheed say after a visit to Lal Masjid Jamia Hafsa complex? There are no underground tunnels in Lal Masjid or Jamia Hafsa. The presence of the foreigners could not be confirmed as well. What became of the extremists like Abuzar?
HYP: BBC Haroon Rasheed Lal Masjid, Jamia Hafsa after his visit to Auob Medical Complex says Lal Masjid and seminary in under a land mine, not also been confirmed the presence of foreigners could not be, such as Abu by the extremist?
[doc WL-12-174261-7457007, sent 2, system2] Gold: 1, METEORR: 4.5, RTER: 1.2
• Hypothesis root node unaligned
• Missing alignments for subjects
• Important entities in hypothesis cannot be aligned
• Reference, hypothesis differ in polarity

Table 2: Expt. 1: Reference translations and MT output (Urdu). Scores are out of 7 (higher is better).

Finally, we provide a qualitative comparison of RTER's performance against the best baseline metric, METEORR. Since the computation of RTER takes considerably more resources than METEORR, it is interesting to compare the predictions of RTER against METEORR. Table 2 shows two classes of examples with apparent improvements. The first example (top) shows a good translation that is erroneously assigned a low score by METEORR because (a) it cannot align fact and reality (METEORR aligns only synonyms) and (b) it punishes the change of word order through its "penalty" term. RTER correctly assigns a high score.
The features show that this prediction results from two semantic judgments. The first is that the lack of alignments for two function words is unproblematic; the second is that the alignment between fact and reality, which is established on the basis of WordNet similarity, is indeed licensed in the current context. More generally, we find that RTER is able to account for more valid variation in good translations because (a) it judges the validity of alignments dependent on context; (b) it incorporates more semantic similarities; and (c) it weighs mismatches according to the word’s status. The second example (bottom) shows a very bad translation that is scored highly by METEORR, since almost all of the reference words appear either literally or as synonyms in the hypothesis (marked in italics). In combination with METEORR’s concentration on recall, this is sufficient to yield a moderately high score. In the case of RTER, a number of mismatch features have fired. They indicate problems with the structural well-formedness of the MT output as well as semantic incompatibility between hypothesis and reference (argument structure and reference mismatches). 6 Expt. 2: Predicting Pairwise Preferences In this experiment, we predict human pairwise preference judgments (cf. Section 4). We reuse the linear regression framework from Section 2 and predict pairwise preferences by predicting two absolute scores (as before) and comparing them.6 Data. This experiment uses the 2006–2008 corpora of the Workshop on Statistical Machine Translation (WMT).7 It consists of data from EUROPARL (Koehn, 2005) and various news commentaries, with five source languages (French, German, Spanish, Czech, and Hungarian). As training set, we use the portions of WMT 2006 and 2007 that are annotated with absolute scores on a fivepoint scale (around 14,000 sentences produced by 40 systems). The test set is formed by the WMT 2008 relative rank annotation task. As in Experiment 1, we set ε so that the incidence of ties in the training and test set is equal (60%). Results. Table 4 shows the results. The left result column shows consistency, i.e., the accuracy on human pairwise preference judgments.8 The pattern of results matches our observations in Expt. 1: Among individual metrics, METEORR and TERR do better than BLEUR and NISTR. MTR and RTER outperform individual metrics. The best result by a wide margin, 52.5%, is shown by MT+RTER. 6We also experimented with a logistic regression model that predicts binary preferences directly. Its performance is comparable; see Pad´o et al. (2009) for details. 7Available from http://www.statmt.org/. 8The random baseline is not 50%, but, according to our experiments, 39.8%. This has two reasons: (1) the judgments include contradictory and tie annotations that cannot be predicted correctly (raw inter-annotator agreement on WMT 2008 was 58%); (2) metrics have to submit a total order over the translations for each sentence, which introduces transitivity constraints. For details, see Callison-Burch et al. (2008). 302 Segment MTR RTER MT+RTER Gold REF: Scottish NHS boards need to improve criminal records checks for employees outside Europe, a watchdog has said. HYP: The Scottish health ministry should improve the controls on extracommunity employees to check whether they have criminal precedents, said the monitoring committee. [1357, lium-systran] Rank: 3 Rank: 1 Rank: 2 Rank: 1 REF: Arguments, bullying and fights between the pupils have extended to the relations between their parents. 
HYP: Disputes, chicane and fights between the pupils transposed in relations between the parents. [686, rbmt4] Rank: 5 Rank: 2 Rank: 4 Rank: 5 Table 3: Expt. 2: Reference translations and MT output (French). Ranks are out of five (smaller is better). Feature set Consistency (%) System-level correlation (ρ) BLEUR 49.6 69.3 METEORR 51.1 72.6 NISTR 50.2 70.4 TERR 51.2 72.5 MTR 51.5 73.1 RTER 51.8 78.3 MT+RTER 52.5 75.8 WMT 08 (worst) 44 37 WMT 08 (best) 56 83 Table 4: Expt. 2: Prediction of pairwise preferences on the WMT 2008 dataset. The right column shows Spearman’s ρ for the correlation between human judgments and tieaware system-level predictions. All metrics predict system scores highly significantly, partly due to the larger number of systems compared (87 systems). Again, we see better results for METEORR and TERR than for BLEUR and NISTR, and the individual metrics do worse than the combination models. Among the latter, the order is: MTR (worst), MT+RTER, and RTER (best at 78.3). WMT 2009. We submitted the Expt. 2 RTER metric to the WMT 2009 shared MT evaluation task (Pad´o et al., 2009). The results provide further validation for our results and our general approach. At the system level, RTER made third place (avg. correlation ρ = 0.79), trailing the two top metrics closely (ρ = 0.80, ρ = 0.83) and making the best predictions for Hungarian. It also obtained the second-best consistency score (53%, best: 54%). Metric comparison. The pairwise preference annotation of WMT 2008 gives us the opportunity to compare the MTR and RTER models by computing consistency separately on the “top” (highestranked) and “bottom” (lowest-ranked) hypotheses for each reference. RTER performs about 1.5 percent better on the top than on the bottom hypotheses. The MTR model shows the inverse behavior, performing 2 percent worse on the top hypotheses. This matches well with our intuitions: We see some noise-induced degradation for the entailment features, but not much. In contrast, surface-based features are better at detecting bad translations than at discriminating among good ones. Table 3 further illustrates the difference between the top models on two example sentences. In the top example, RTER makes a more accurate prediction than MTR. The human rater’s favorite translation deviates considerably from the reference in lexical choice, syntactic structure, and word order, for which it is punished by MTR (rank 3/5). In contrast, RTER determines correctly that the propositional content of the reference is almost completely preserved (rank 1). In the bottom example, RTER’s prediction is less accurate. This sentence was rated as bad by the judge, presumably due to the inappropriate main verb translation. Together with the subject mismatch, MTR correctly predicts a low score (rank 5/5). RTER’s attention to semantic overlap leads to an incorrect high score (rank 2/5). Feature Weights. Finally, we make two observations about feature weights in the RTER model. First, the model has learned high weights not only for the overall alignment score (which behaves most similarly to traditional metrics), but also for a number of binary syntacto-semantic match and mismatch features. This confirms that these features systematically confer the benefit we have shown anecdotally in Table 2. 
Features with a consistently negative effect include dropping adjuncts, unaligned or poorly aligned root nodes, incompatible modality between the main clauses, person and location mismatches (as opposed to general mismatches) and wrongly handled passives. Con303 versely, higher scores result from factors such as high alignment score, matching embeddings under factive verbs, and matches between appositions. Second, good MT evaluation feature weights are not good weights for RTE. Some differences, particularly for structural features, are caused by the low grammaticality of MT data. For example, the feature that fires for mismatches between dependents of predicates is unreliable on the WMT data. Other differences do reflect more fundamental differences between the two tasks (cf. Section 3). For example, RTE puts high weights onto quantifier and polarity features, both of which have the potential of influencing entailment decisions, but are (at least currently) unimportant for MT evaluation. 7 Related Work Researchers have exploited various resources to enable the matching between words or n-grams that are semantically close but not identical. Banerjee and Lavie (2005) and Chan and Ng (2008) use WordNet, and Zhou et al. (2006) and Kauchak and Barzilay (2006) exploit large collections of automatically-extracted paraphrases. These approaches reduce the risk that a good translation is rated poorly due to lexical deviation, but do not address the problem that a translation may contain many long matches while lacking coherence and grammaticality (cf. the bottom example in Table 2). Thus, incorporation of syntactic knowledge has been the focus of another line of research. Amig´o et al. (2006) use the degree of overlap between the dependency trees of reference and hypothesis as a predictor of translation quality. Similar ideas have been applied by Owczarzak et al. (2008) to LFG parses, and by Liu and Gildea (2005) to features derived from phrase-structure tress. This approach has also been successful for the related task of summarization evaluation (Hovy et al., 2006). The most comparable work to ours is Gim´enez and M´arquez (2008). Our results agree on the crucial point that the use of a wide range of linguistic knowledge in MT evaluation is desirable and important. However, Gim´enez and M´arquez advocate the use of a bottom-up development process that builds on a set of “heterogeneous”, independent metrics each of which measures overlap with respect to one linguistic level. In contrast, our aim is to provide a “top-down”, integrated motivation for the features we integrate through the textual entailment recognition paradigm. 8 Conclusion and Outlook In this paper, we have explored a strategy for the evaluation of MT output that aims at comprehensively assessing the meaning equivalence between reference and hypothesis. To do so, we exploit the common ground between MT evaluation and the Recognition of Textual Entailment (RTE), both of which have to distinguish valid from invalid linguistic variation. Conceputalizing MT evaluation as an entailment problem motivates the use of a rich feature set that covers, unlike almost all earlier metrics, a wide range of linguistic levels, including lexical, syntactic, and compositional phenomena. We have used an off-the-shelf RTE system to compute these features, and demonstrated that a regression model over these features can outperform an ensemble of traditional MT metrics in two experiments on different datasets. 
Even though the features build on deep linguistic analysis, they are robust enough to be used in a real-world setting, at least on written text. A limited amount of training data is sufficient, and the weights generalize well. Our data analysis has confirmed that each of the feature groups contributes to the overall success of the RTE metric, and that its gains come from its better success at abstracting away from valid variation (such as word order or lexical substitution), while still detecting major semantic divergences. We have also clarified the relationship between MT evaluation and textual entailment: The majority of phenomena (but not all) that are relevant for RTE are also informative for MT evaluation. The focus of this study was on the use of an existing RTE infrastructure for MT evaluation. Future work will have to assess the effectiveness of individual features and investigate ways to customize RTE systems for the MT evaluation task. An interesting aspect that we could not follow up on in this paper is that entailment features are linguistically interpretable (cf. Fig. 2) and may find use in uncovering systematic shortcomings of MT systems. A limitation of our current metric is that it is language-dependent and relies on NLP tools in the target language that are still unavailable for many languages, such as reliable parsers. To some extent, of course, this problem holds as well for state-of-the-art MT systems. Nevertheless, it must be an important focus of future research to develop robust meaning-based metrics for other languages that can cash in the promise that we have shown for evaluating translation into English. 304 References Enrique Amig´o, Jes´us Gim´enez, Julio Gonzalo, and Llu´ıs M`arquez. 2006. MT Evaluation: Humanlike vs. human acceptable. In Proceedings of COLING/ACL 2006, pages 17–24, Sydney, Australia. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures, pages 65–72, Ann Arbor, MI. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL, pages 249–256, Trento, Italy. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. In Proceedings of the ACL Workshop on Statistical Machine Translation, pages 70–106, Columbus, OH. Yee Seng Chan and Hwee Tou Ng. 2008. MAXSIM: A maximum similarity metric for machine translation evaluation. In Proceedings of ACL-08: HLT, pages 55–62, Columbus, Ohio, June. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment, Southampton, UK. Marie-Catherine de Marneffe, Trond Grenager, Bill MacCartney, Daniel Cer, Daniel Ramage, Chlo´e Kiddon, and Christopher D. Manning. 2007. Aligning semantic graphs for textual inference and machine reading. In Proceedings of the AAAI Spring Symposium, Stanford, CA. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of HLT, pages 128– 132, San Diego, CA. Jes´us Gim´enez and Llu´ıs M´arquez. 2008. Heterogeneous automatic MT evaluation through nonparametric metric combinations. In Proceedings of IJCNLP, pages 319–326, Hyderabad, India. 
Sanda Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain question answering. In Proceedings of ACL, pages 905– 912, Sydney, Australia. Eduard Hovy, Chin-Yew Lin, Liang Zhou, and Junichi Fukumoto. 2006. Automated summarization evaluation with basic elements. In Proceedings of LREC, Genoa, Italy. David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of HLTNAACL, pages 455–462. Phillip Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT Summit X, Phuket, Thailand. Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of COLING, pages 501–507, Geneva, Switzerland. Ding Liu and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures, pages 25–32, Ann Arbor, MI. Bill MacCartney and Christopher D. Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of COLING, pages 521–528, Manchester, UK. Bill MacCartney, Trond Grenager, Marie-Catherine de Marneffe, Daniel Cer, and Christopher D. Manning. 2006. Learning to recognize features of valid textual entailments. In Proceedings of NAACL, pages 41–48, New York City, NY. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167, Sapporo, Japan. Karolina Owczarzak, Josef van Genabith, and Andy Way. 2008. Evaluating machine translation with LFG dependencies. Machine Translation, 21(2):95– 119. Sebastian Pad´o, Michel Galley, Dan Jurafsky, and Christopher D. Manning. 2009. Textual entailment features for machine translation evaluation. In Proceedings of the EACL Workshop on Statistical Machine Translation, pages 37–41, Athens, Greece. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318, Philadelphia, PA. Michael Paul, Andrew Finch, and Eiichiro Sumita. 2007. Reducing human assessment of machine translation quality to binary classifiers. In Proceedings of TMI, pages 154–162, Sk¨ovde, Sweden. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA, pages 223–231, Cambridge, MA. Matthew Snover, Nitin Madnani, Bonnie J. Dorr, and Richard Schwartz. 2009. Fluency, adequacy, or HTER? Exploring different human judgments with a tunable MT metric. In Proceedings of the EACL Workshop on Statistical Machine Translation, pages 259–268, Athens, Greece. Liang Zhou, Chin-Yew Lin, and Eduard Hovy. 2006. Re-evaluating machine translation results with paraphrase support. In Proceedings of EMNLP, pages 77–84, Sydney, Australia. 305
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 306–314, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP The Contribution of Linguistic Features to Automatic Machine Translation Evaluation Enrique Amig´o1 Jes´us Gim´enez2 Julio Gonzalo 1 Felisa Verdejo1 1UNED, Madrid {enrique,julio,felisa}@lsi.uned.es 2UPC, Barcelona [email protected] Abstract A number of approaches to Automatic MT Evaluation based on deep linguistic knowledge have been suggested. However, n-gram based metrics are still today the dominant approach. The main reason is that the advantages of employing deeper linguistic information have not been clarified yet. In this work, we propose a novel approach for meta-evaluation of MT evaluation metrics, since correlation cofficient against human judges do not reveal details about the advantages and disadvantages of particular metrics. We then use this approach to investigate the benefits of introducing linguistic features into evaluation metrics. Overall, our experiments show that (i) both lexical and linguistic metrics present complementary advantages and (ii) combining both kinds of metrics yields the most robust metaevaluation performance. 1 Introduction Automatic evaluation methods based on similarity to human references have substantially accelerated the development cycle of many NLP tasks, such as Machine Translation, Automatic Summarization, Sentence Compression and Language Generation. These automatic evaluation metrics allow developers to optimize their systems without the need for expensive human assessments for each of their possible system configurations. However, estimating the system output quality according to its similarity to human references is not a trivial task. The main problem is that many NLP tasks are open/subjective; therefore, different humans may generate different outputs, all of them equally valid. Thus, language variability is an issue. In order to tackle language variability in the context of Machine Translation, a considerable effort has also been made to include deeper linguistic information in automatic evaluation metrics, both syntactic and semantic (see Section 2 for details). However, the most commonly used metrics are still based on n-gram matching. The reason is that the advantages of employing higher linguistic processing levels have not been clarified yet. The main goal of our work is to analyze to what extent deep linguistic features can contribute to the automatic evaluation of translation quality. For that purpose, we compare – using four different test beds – the performance of 16 n-gram based metrics, 48 linguistic metrics and one combined metric from the state of the art. Analyzing the reliability of evaluation metrics requires meta-evaluation criteria. In this respect, we identify important drawbacks of the standard meta-evaluation methods based on correlation with human judgements. In order to overcome these drawbacks, we then introduce six novel meta-evaluation criteria which represent different metric reliability dimensions. Our analysis indicates that: (i) both lexical and linguistic metrics have complementary advantages and different drawbacks; (ii) combining both kinds of metrics is a more effective and robust evaluation method across all meta-evaluation criteria. In addition, we also perform a qualitative analysis of one hundred sentences that were incorrectly evaluated by state-of-the-art metrics. 
The analysis confirms that deep linguistic techniques are necessary to avoid the most common types of error. Section 2 examines the state of the art Section 3 describes the test beds and metrics considered in our experiments. In Section 4 the correlation between human assessors and metrics is computed, with a discussion of its drawbacks. In Section 5 different quality aspects of metrics are analysed. Conclusions are drawn in the last section. 306 2 Previous Work on Machine Translation Meta-Evaluation Insofar as automatic evaluation metrics for machine translation have been proposed, different meta-evaluation frameworks have been gradually introduced. For instance, Papineni et al. (2001) introduced the BLEU metric and evaluated its reliability in terms of Pearson correlation with human assessments for adequacy and fluency judgements. With the aim of overcoming some of the deficiencies of BLEU, Doddington (2002) introduced the NIST metric. Metric reliability was also estimated in terms of correlation with human assessments, but over different document sources and for a varying number of references and segment sizes. Melamed et al. (2003) argued, at the time of introducing the GTM metric, that Pearson correlation coefficients can be affected by scale properties, and suggested, in order to avoid this effect, to use the non-parametric Spearman correlation coefficients instead. Lin and Och (2004) experimented, unlike previous works, with a wide set of metrics, including NIST, WER (Nießen et al., 2000), PER (Tillmann et al., 1997), and variants of ROUGE, BLEU and GTM. They computed both Pearson and Spearman correlation, obtaining similar results in both cases. In a different work, Banerjee and Lavie (2005) argued that the measured reliability of metrics can be due to averaging effects but might not be robust across translations. In order to address this issue, they computed the translation-by-translation correlation with human judgements (i.e., correlation at the segment level). All that metrics were based on n-gram overlap. But there is also extensive research focused on including linguistic knowledge in metrics (Owczarzak et al., 2006; Reeder et al., 2001; Liu and Gildea, 2005; Amig´o et al., 2006; Mehay and Brew, 2007; Gim´enez and M`arquez, 2007; Owczarzak et al., 2007; Popovic and Ney, 2007; Gim´enez and M`arquez, 2008b) among others. In all these cases, metrics were also evaluated by means of correlation with human judgements. In a different research line, several authors have suggested approaching automatic evaluation through the combination of individual metric scores. Among the most relevant let us cite research by Kulesza and Shieber (2004), Albrecht and Hwa (2007). But finding optimal metric combinations requires a meta-evaluation criterion. Most approaches again rely on correlation with human judgements. However, some of them measured the reliability of metric combinations in terms of their ability to discriminate between human translations and automatic ones (human likeness) (Amig´o et al., 2005). . In this work, we present a novel approach to meta-evaluation which is distinguished by the use of additional easily interpretable meta-evaluation criteria oriented to measure different aspects of metric reliability. We then apply this approach to find out about the advantages and challenges of including linguistic features in meta-evaluation criteria. 
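All of the approaches reviewed in this section share a common computational core: a correlation coefficient between metric scores and human assessments, computed either over individual segments or over per-system averages. The sketch below illustrates both granularities; it is a generic illustration using scipy rather than the exact setup of any of the cited works, and the score containers it takes as input are hypothetical.

from scipy.stats import pearsonr, spearmanr

def meta_evaluate(metric_scores, human_scores):
    # Correlation between a metric and human judgements over the same items.
    r, _ = pearsonr(metric_scores, human_scores)       # parametric, scale-sensitive
    rho, _ = spearmanr(metric_scores, human_scores)    # rank-based
    return r, rho

def system_level(metric_by_system, human_by_system):
    # System-level variant: correlate per-system averages of the segment scores.
    systems = sorted(metric_by_system)
    m = [sum(metric_by_system[s]) / len(metric_by_system[s]) for s in systems]
    h = [sum(human_by_system[s]) / len(human_by_system[s]) for s in systems]
    return meta_evaluate(m, h)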
3 Metrics and Test Beds 3.1 Metric Set For our study, we have compiled a rich set of metric variants at three linguistic levels: lexical, syntactic, and semantic. In all cases, translation quality is measured by comparing automatic translations against a set of human references. At the lexical level, we have included several standard metrics, based on different similarity assumptions: edit distance (WER, PER and TER), lexical precision (BLEU and NIST), lexical recall (ROUGE), and F-measure (GTM and METEOR). At the syntactic level, we have used several families of metrics based on dependency parsing (DP) and constituency trees (CP). At the semantic level, we have included three different families which operate using named entities (NE), semantic roles (SR), and discourse representations (DR). A detailed description of these metrics can be found in (Gim´enez and M`arquez, 2007). Finally, we have also considered ULC, which is a very simple approach to metric combination based on the unnormalized arithmetic mean of metric scores, as described by Gim´enez and M`arquez (2008a). ULC considers a subset of metrics which operate at several linguistic levels. This approach has proven very effective in recent evaluation campaigns. Metric computation has been carried out using the IQMT Framework for Automatic MT Evaluation (Gim´enez, 2007)1. The simplicity of this approach (with no training of the metric weighting scheme) ensures that the potential advantages detected in our experiments are not due to overfitting effects. 1http://www.lsi.upc.edu/˜nlp/IQMT 307 2004 2005 AE CE AE CE #references 5 5 5 4 #systemsassessed 5 10 5+1 5 #casesassessed 347 447 266 272 Table 1: NIST 2004/2005 MT Evaluation Campaigns. Test bed description 3.2 Test Beds We use the test beds from the 2004 and 2005 NIST MT Evaluation Campaigns (Le and Przybocki, 2005)2. Both campaigns include two different translations exercises: Arabic-to-English (‘AE’) and Chinese-to-English (‘CE’). Human assessments of adequacy and fluency, on a 1-5 scale, are available for a subset of sentences, each evaluated by two different human judges. A brief numerical description of these test beds is available in Table 1. The corpus AE05 includes, apart from five automatic systems, one human-aided system that is only used in our last experiment. 4 Correlation with Human Judgements 4.1 Correlation at the Segment vs. System Levels Let us first analyze the correlation with human judgements for linguistic vs. n-gram based metrics. Figure 1 shows the correlation obtained by each automatic evaluation metric at system level (horizontal axis) versus segment level (vertical axis) in our test beds. Linguistic metrics are represented by grey plots, and black plots represent metrics based on n-gram overlap. The most remarkable aspect is that there exists a certain trade-off between correlation at segment versus system level. In fact, this graph produces a negative Pearson correlation coefficient between system and segment levels of 0.44. In other words, depending on how the correlation is computed, the relative predictive power of metrics can swap. Therefore, we need additional meta-evaluation criteria in order to clarify the behavior of linguistic metrics as compared to n-gram based metrics. However, there are some exceptions. Some metrics achieve high correlation at both levels. The first one is ULC (the circle in the plot), which combines both kind of metrics in a heuristic way (see Section 3.1). 
The metric nearest to ULC is 2http://www.nist.gov/speech/tests/mt Figure 1: Averaged Pearson correlation at system vs. segment level over all test beds. DP-Or-⋆, which computes lexical overlapping but on dependency relationships. These results are a first evidence of the advantages of combining metrics at several linguistic processing levels. 4.2 Drawbacks of Correlation-based Meta-evaluation Although correlation with human judgements is considered the standard meta-evaluation criterion, it presents serious drawbacks. With respect to correlation at system level, the main problem is that the relative performance of different metrics changes almost randomly between testbeds. One of the reasons is that the number of assessed systems per testbed is usually low, and then correlation has a small number of samples to be estimated with. Usually, the correlation at system level is computed over no more than a few systems. For instance, Table 2 shows the best 10 metrics in CE05 according to their correlation with human judges at the system level, and then the ranking they obtain in the AE05 testbed. There are substantial swaps between both rankings. Indeed, the Pearson correlation of both ranks is only 0.26. This result supports the intuition in (Banerjee and Lavie, 2005) that correlation at segment level is necessary to ensure the reliability of metrics in different situations. However, the correlation values of metrics at segment level have also drawbacks related to their interpretability. Most metrics achieve a Pearson coefficient lower than 0.5. Figure 2 shows two possible relationships between human and metric 308 Table 2: Metrics rankings according to correlation with human judgements using CE05 vs. AE05 Figure 2: Human judgements and scores of two hypothetical metrics with Pearson correlation 0.5 produced scores. Both hypothetical metrics A and B would achieve a 0.5 correlation. In the case of Metric A, a high score implies a high human assessed quality, but not the reverse. This is the tendency hypothesized by Culy and Riehemann (2003). In the case of Metric B, the high scored translations can achieve both low or high quality according to human judges but low scores ensure low quality. Therefore, the same Pearson coefficient may hide very different behaviours. In this work, we tackle these drawbacks by defining more specific meta-evaluation criteria. 5 Alternatives to Correlation-based Meta-evaluation We have seen that correlation with human judgements has serious limitations for metric evaluation. Therefore, we have focused on other aspects of metric reliability that have revealed differences between n-gram and linguistic based metrics: 1. Is the metric able to accurately reveal improvements between two systems? 2. Can we trust the metric when it says that a translation is very good or very bad? Figure 3: SIP versus SIR 3. Are metrics able to identify good translations which are dissimilar from the models? We now discuss each of these aspects separately. 5.1 Ability of metrics to Reveal System Improvements We now investigate to what extent a significant system improvement according to the metric implies a significant improvement according to human assessors, and viceversa. In other words: are the metrics able to detect any quality improvement? Is a metric score improvement a strong evidence of quality increase? Knowing that a metric has a 0.8 Pearson correlation at the system level or 0.5 at the segment level does not provide a direct answer to this question. 
In order to tackle this issue, we compare metrics versus human assessments in terms of precision and recall over statistically significant improvements within all system pairs in the test beds. First, Table 3 shows the amount of significant improvements over human judgements according to the Wilcoxon statistical significant test (α ≤0.025). For instance, the testbed CE2004 consists of 10 systems, i.e. 45 system pairs; from these, in 40 cases (rightmost column) one of the systems significantly improves the other. Now we would like to know, for every metric, if the pairs which are significantly different according to human judges are also the pairs which are significantly different according to the metric. Based on these data, we define two metametrics: Significant Improvement Precision (SIP) and Significant Improvement Recall (SIR). SIP 309 Systems System pairs Sig. imp. CE2004 10 45 40 AE2004 5 10 8 CE2005 5 10 4 AE2005 5 10 6 Total 25 75 58 Table 3: System pairs with a significant difference according to human judgements (Wilcoxon test) (precision) represents the reliability of improvements detected by metrics. SIR (recall) represents to what extent the metric is able to cover the significant improvements detected by humans. Let Ih be the set of significant improvements detected by human assessors and Im the set detected by the metric m. Then: SIP = |Ih ∩Im| |Im| SIR = |Ih ∩Im| |Ih| Figure 3 shows the SIR and SIP values obtained for each metric. Linguistic metrics achieve higher precision values but at the cost of an important recall decrease. Given that linguistic metrics require matching translation with references at additional linguistic levels, the significant improvements detected are more reliable (higher precision or SIP), but at the cost of recall over real significant improvements (lower SIR). This result supports the behaviour predicted in (Gim´enez and M`arquez, 2009). Although linguistic metrics were motivated by the idea of modeling linguistic variability, the practical effect is that current linguistic metrics introduce additional restrictions (such as dependency tree overlap, for instance) for accepting automatic translations. Then they reward precision at the cost of recall in the evaluation process, and this explains the high correlation with human judgements at system level with respect to segment level. All n-gram based metrics achieve SIP and SIR values between 0.8 and 0.9. This result suggests that n-gram based metrics are reasonably reliable for this purpose. Note that the combined metric, ULC (the circle in the figure), achieves results comparable to n-gram based metrics with this test3. That is, combining linguistic and ngram based metrics preserves the good behavior of n-gram based metrics in this test. 3Notice that we just have 75 significant improvement samples, so small differences in SIP or SIR have no relevance 5.2 Reliability of High and Low Metric Scores The issue tackled in this section is to what extent a very low or high score according to the metric is reliable for detecting extreme cases (very good or very bad translations). In particular, note that detecting wrong translations is crucial in order to analyze the system drawbacks. In order to define an accuracy measure for the reliability of very low/high metric scores, it is necessary to define quality thresholds for both the human assessments and metric scales. Defining thresholds for manual scores is immediate (e.g., lower than 4/10). 
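Before turning to the problem of metric scales, the SIP and SIR measures defined above can be made concrete with a short sketch. It approximates the significance test with scipy's two-sided Wilcoxon signed-rank test, assumes that higher scores mean better quality, and the per-segment score dictionaries it takes as input are hypothetical.

from itertools import combinations
from scipy.stats import wilcoxon

def significant_improvements(scores, alpha=0.025):
    # scores: dict system -> list of per-segment scores (human or metric).
    # Returns ordered pairs (winner, loser) with a significant difference.
    imps = set()
    for a, b in combinations(scores, 2):
        diffs = [x - y for x, y in zip(scores[a], scores[b])]
        if all(d == 0 for d in diffs):
            continue
        _, p = wilcoxon(scores[a], scores[b])
        if p <= alpha:
            imps.add((a, b) if sum(diffs) > 0 else (b, a))
    return imps

def sip_sir(human_scores, metric_scores):
    I_h = significant_improvements(human_scores)
    I_m = significant_improvements(metric_scores)
    sip = len(I_h & I_m) / len(I_m) if I_m else 0.0    # precision of metric-detected improvements
    sir = len(I_h & I_m) / len(I_h) if I_h else 0.0    # recall of human-detected improvements
    return sip, sir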
However, each automatic evaluation metric has its own scale properties. In order to solve scaling problems we will focus on equivalent rank positions: we associate the ith translation according to the metric ranking with the quality value manually assigned to the ith translation in the manual ranking. Being Qh(t) and Qm(t) the human and metric assessed quality for the translation t, and being rankh(t) and rankm(t) the rank of the translation t according to humans and the metric, the normalized metric assessed quality is: QNm(t) = Qh(t′)| (rankh(t′) = rankm(t)) In order to analyze the reliability of metrics when identifying wrong or high quality translations, we look for contradictory results between the metric and the assessments. In other words, we look for metric errors in which the quality estimated by the metric is low (QNm(t) ≤3) but the quality assigned by assessors is high (Qh(t) ≥5) or viceversa (QNm(t) ≥7 and Qh(t) ≤4). The vertical axis in Figure 4 represents the ratio of errors in the set of low scored translations according to a given metric. The horizontal axis represents the ratio of errors over the set of high scored translations. The first observation is that all metrics are less reliable when they assign low scores (which corresponds with the situation A described in Section 4.2). For instance, the best metric erroneously assigns a low score in more than 20% of the cases. In general, the linguistic metrics do not improve the ability to capture wrong translations (horizontal axis in the figure). However, again, the combining metric ULC achieves the same reliability as the best n-gram based metric. 310 In order to check the robustness of these results, we computed the correlation of individual metric failures between test beds, obtaining 0.67 Pearson for the lowest correlated test bed pair (AE2004 and CE2005) and 0.88 for the highest correlated pair (AE2004 and CE2004). Figure 4: Counter sample ratio for high vs low metric scored translations 5.2.1 Analysis of Evaluation Samples In order to shed some light on the reasons for the automatic evaluation failures when assigning low scores, we have manually analyzed cases in which a metric score is low but the quality according to humans is high (QNm ≤3 and Qh ≥7). We have studied 100 sentence evaluation cases from representatives of each metric family including: 1PER, BLEU, DP-Or-⋆, GTM (e = 2), METEOR and ROUGEL. The evaluation cases have been extracted from the four test beds. We have identified four main (non exclusive) failure causes: Format issues, e.g. “US ” vs “United States”). Elements such as abbreviations, acronyms or numbers which do not match the manual translation. Pseudo-synonym terms, e.g. “US Scheduled the Release” vs. “US set to Release”). ) In most of these cases, synonymy can only be identified from the discourse context. Therefore, terminological resources (e.g., WordNet) are not enough to tackle this problem. Non relevant information omissions, e.g. “Thank you” vs. “Thank you very much” or “dollar” vs. “US dollar”)). The translation system obviates some information which, in context, is not considered crucial by the human assessors. This effect is specially important in short sentences. Incorrect structures that change the meaning while maintaining the same idea (e.g., “Bush Praises NASA ’s Mars Mission” vs “ Bush praises nasa of Mars mission” ). 
Note that all of these kinds of failure - except formatting issues - require deep linguistic processing while n-gram overlap or even synonyms extracted from a standard ontology are not enough to deal with them. This conclusion motivates the incorporation of linguistic processing into automatic evaluation metrics. 5.3 Ability to Deal with Translations that are Dissimilar to References. The results presented in Section 5.2 indicate that a high score in metrics tends to be highly related to truly good translations. This is due to the fact that a high word overlapping with human references is a reliable evidence of quality. However, in some cases the translations to be evaluated are not so similar to human references. An example of this appears in the test bed NIST05AE which includes a human-aided system, LinearB (Callison-Burch, 2005). This system produces correct translations whose words do not necessarily overlap with references. On the other hand, a statistics based system tends to produce incorrect translations with a high level of lexical overlapping with the set of human references. This case was reported by Callison-Burch et al. (2006) and later studied by Gim´enez and M`arquez (2007). They found out that lexical metrics fail to produce reliable evaluation scores. They favor systems which share the expected reference sublanguage (e.g., statistical) and penalize those which do not (e.g., LinearB). We can find in our test bed many instances in which the statistical systems obtain a metric score similar to the assisted system while achieving a lower mark according to human assessors. For instance, for the following translations, ROUGEL assigns a slightly higher score to the output of a statistical system which contains a lot of grammatical and syntactical failures. Human assisted system: The Chinese President made unprecedented criticism of the leaders of Hong Kong after political failings in the former British colony on Monday . Human assessment=8.5. Statistical system: Chinese President Hu Jintao today unprecedented criticism to the leaders of Hong Kong wake political and financial failure in the former British colony. Human assessment=3. 311 Figure 5: Maximum translation quality decreasing over similarly scored translation pairs. In order to check the metric resistance to be cheated by translations with high lexical overlapping, we estimate the quality decrease that we could cause if we optimized the human-aided translations according to the automatic metric. For this, we consider in each translation case c, the worse automatic translation t that equals or improves the human-aided translation th according to the automatic metric m. Formally the averaged quality decrease is: Quality decrease(m) = Avgc(maxt(Qh(th) −Qh(t)|Qm(th) ≤Qm(t))) Figure 5 illustrates the results obtained. All metrics are suitable to be cheated, assigning similar or higher scores to worse translations. However, linguistic metrics are more resistant. In addition, the combined metric ULC obtains the best results, better than both linguistic and n-gram based metrics. Our conclusion is that including higher linguistic levels in metrics is relevant to prevent ungrammatical n-gram matching to achieve similar scores than grammatical constructions. 
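The averaged quality decrease just defined can be computed directly from the metric and human scores. The following is a minimal sketch; the dictionaries mapping each case to its candidate translations and to its human-aided output, and the per-translation score tables, are illustrative names rather than the original experimental data structures.

def quality_decrease(cases, assisted, metric_q, human_q):
    # cases: dict case id -> list of candidate translation ids (all systems)
    # assisted[c]: the human-aided translation of case c
    # metric_q[t], human_q[t]: metric / human score of translation t
    drops = []
    for c, candidates in cases.items():
        th = assisted[c]
        # translations the metric scores at least as high as the assisted output
        rivals = [t for t in candidates if metric_q[t] >= metric_q[th]]
        # worst human-assessed rival the metric would let pass (>= 0, since th is a rival)
        drops.append(max(human_q[th] - human_q[t] for t in rivals))
    return sum(drops) / len(drops)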
5.4 The Oracle System Test In order to obtain additional evidence about the usefulness of combining evaluation metrics at different processing levels, let us consider the following situation: given a set of reference translations we want to train a combined system that takes the most appropriate translation approach for each text segment. We consider the set of translations system presented in each competition as the translation approaches pool. Then, the upper bound on the quality of the combined system is given by the Metric OST maxOST 6.72 ULC 5.79 ROUGEW 5.71 DP-Or-⋆ 5.70 CP-Oc-⋆ 5.70 NIST 5.70 randOST 5.20 minOST 3.67 Table 4: Metrics ranked according to the Oracle System Test predictive power of the employed automatic evaluation metric. This upper bound is obtained by selecting the highest scored translation t according to a specific metric m for each translation case c. The Oracle System Test (OST) consists of computing the averaged human assessed quality Qh of the selected translations according to human assessors across all cases. Formally: OST(m) = Avgc(Qh(Argmaxt(Qm(t))|t ∈c)) We use the sum of adequacy and fluency, both in a 1-5 scale, as a global quality measure. Thus, OST scores are in a 2-10 range. In summary, the OST represents the best combined system that could be trained according to a specific automatic evaluation metric. Table 4 shows OST values obtained for the best metrics. In the table we have also included a random, a maximum (always pick the best translation according to humans) and a minimum (always pick the worse translation according to human) OST for all 4. The most remarkable result in Table 4 is that metrics are closer to the random baseline than to the upperbound (maximum OST). This result confirms the idea that an improvement on metric reliability could contribute considerably to the systems optimization process. However, the key point is that the combined metric, ULC, improves all the others (5.79 vs. 5.71), indicating the importance of combining n-gram and linguistic features. 6 Conclusions Our experiments show that, on one hand, traditional n-gram based metrics are more or equally 4In all our experiments, the meta-metric values are computed over each test bed independently before averaging in order to assign equal relevance to the four possible contexts (test beds) 312 reliable for estimating the translation quality at the segment level, for predicting significant improvement between systems and for detecting poor and excellent translations. On the other hand, linguistically motivated metrics improve n-gram metrics in two ways: (i) they achieve higher correlation with human judgements at system level and (ii) they are more resistant to reward poor translations with high word overlapping with references. The underlying phenomenon is that, rather than managing the linguistics variability, linguistic based metrics introduce additional restrictions for assigning high scores. This effect decreases the recall over significant system improvements achieved by n-gram based metrics and does not solve the problem of detecting wrong translations. Linguistic metrics, however, are more difficult to cheat. In general, the greatest pitfall of metrics is the low reliability of low metric values. Our qualitative analysis of evaluated sentences has shown that deeper linguistic techniques are necessary to overcome the important surface differences between acceptable automatic translations and human references. 
But our key finding is that combining both kinds of metrics gives top performance according to every meta-evaluation criteria. In addition, our Combined System Test shows that, when training a combined translation system, using metrics at several linguistic processing levels improves substantially the use of individual metrics. In summary, our results motivate: (i) working on new linguistic metrics for overcoming the barrier of linguistic variability and (ii) performing new metric combining schemes based on linear regression over human judgements (Kulesza and Shieber, 2004), training models over human/machine discrimination (Albrecht and Hwa, 2007) or non parametric methods based on reference to reference distances (Amig´o et al., 2005). Acknowledgments This work has been partially supported by the Spanish Government, project INES/Text-Mess. We are indebted to the three ACL anonymous reviewers which provided detailed suggestions to improve our work. References Joshua Albrecht and Rebecca Hwa. 2007. Regression for Sentence-Level MT Evaluation with Pseudo References. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 296–303. Enrique Amig´o, Julio Gonzalo, Anselmo Pe nas, and Felisa Verdejo. 2005. QARLA: a Framework for the Evaluation of Automatic Summarization. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 280–289. Enrique Amig´o, Jes´us Gim´enez, Julio Gonzalo, and Llu´ıs M`arquez. 2006. MT Evaluation: HumanLike vs. Human Acceptable. In Proceedings of the Joint 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL), pages 17–24. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of BLEU in Machine Translation Research. In Proceedings of 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL). Chris Callison-Burch. 2005. Linear B system description for the 2005 NIST MT evaluation exercise. In Proceedings of the NIST 2005 Machine Translation Evaluation Workshop. Christopher Culy and Susanne Z. Riehemann. 2003. The Limits of N-gram Translation Evaluation Metrics. In Proceedings of MT-SUMMIT IX, pages 1–8. George Doddington. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram CoOccurrence Statistics. In Proceedings of the 2nd International Conference on Human Language Technology, pages 138–145. Jes´us Gim´enez and Llu´ıs M`arquez. 2007. Linguistic Features for Automatic Evaluation of Heterogeneous MT Systems. In Proceedings of the ACL Workshop on Statistical Machine Translation, pages 256–264. Jes´us Gim´enez and Llu´ıs M`arquez. 2008a. Heterogeneous Automatic MT Evaluation Through NonParametric Metric Combinations. In Proceedings of the Third International Joint Conference on Natural Language Processing (IJCNLP), pages 319–326. Jes´us Gim´enez and Llu´ıs M`arquez. 2008b. On the Robustness of Linguistic Features for Automatic MT Evaluation. (Under submission). 313 Jes´us Gim´enez and Llu´ıs M`arquez. 2009. On the Robustness of Syntactic and Semantic Features for Automatic MT Evaluation. 
In Proceedings of the 4th Workshop on Statistical Machine Translation (EACL 2009). Jes´us Gim´enez. 2007. IQMT v 2.0. Technical Manual (LSI-07-29-R). Technical report, TALP Research Center. LSI Department. http://www.lsi. upc.edu/˜nlp/IQMT/IQMT.v2.1.pdf. Alex Kulesza and Stuart M. Shieber. 2004. A learning approach to improving sentence-level MT evaluation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI), pages 75–84. Audrey Le and Mark Przybocki. 2005. NIST 2005 machine translation evaluation official results. In Official release of automatic evaluation scores for all submissions, August. Chin-Yew Lin and Franz Josef Och. 2004. Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statics. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL). Ding Liu and Daniel Gildea. 2005. Syntactic Features for Evaluation of Machine Translation. In Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, pages 25–32. Dennis Mehay and Chris Brew. 2007. BLEUATRE: Flattening Syntactic Dependencies for MT Evaluation. In Proceedings of the 11th Conference on Theoretical and Methodological Issues in Machine Translation (TMI). I. Dan Melamed, Ryan Green, and Joseph P. Turian. 2003. Precision and Recall of Machine Translation. In Proceedings of the Joint Conference on Human Language Technology and the North American Chapter of the Association for Computational Linguistics (HLT-NAACL). Sonja Nießen, Franz Josef Och, Gregor Leusch, and Hermann Ney. 2000. An Evaluation Tool for Machine Translation: Fast Evaluation for MT Research. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC). Karolina Owczarzak, Declan Groves, Josef Van Genabith, and Andy Way. 2006. Contextual BitextDerived Paraphrases in Automatic MT Evaluation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (AMTA), pages 148–155. Karolina Owczarzak, Josef van Genabith, and Andy Way. 2007. Labelled Dependencies in Machine Translation Evaluation. In Proceedings of the ACL Workshop on Statistical Machine Translation, pages 104–111. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation, RC22176. Technical report, IBM T.J. Watson Research Center. Maja Popovic and Hermann Ney. 2007. Word Error Rates: Decomposition over POS classes and Applications for Error Analysis. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 48–55, Prague, Czech Republic, June. Association for Computational Linguistics. Florence Reeder, Keith Miller, Jennifer Doyon, and John White. 2001. The Naming of Things and the Confusion of Tongues: an MT Metric. In Proceedings of the Workshop on MT Evaluation ”Who did what to whom?” at Machine Translation Summit VIII, pages 55–59. Christoph Tillmann, Stefan Vogel, Hermann Ney, A. Zubiaga, and H. Sawaf. 1997. Accelerated DP based Search for Statistical Translation. In Proceedings of European Conference on Speech Communication and Technology. 314
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 315–323, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Syntax-Driven Bracketing Model for Phrase-Based Translation Deyi Xiong, Min Zhang, Aiti Aw and Haizhou Li Human Language Technology Institute for Infocomm Research 1 Fusionopolis Way, #21-01 South Connexis, Singapore 138632 {dyxiong, mzhang, aaiti, hli}@i2r.a-star.edu.sg Abstract Syntactic analysis influences the way in which the source sentence is translated. Previous efforts add syntactic constraints to phrase-based translation by directly rewarding/punishing a hypothesis whenever it matches/violates source-side constituents. We present a new model that automatically learns syntactic constraints, including but not limited to constituent matching/violation, from training corpus. The model brackets a source phrase as to whether it satisfies the learnt syntactic constraints. The bracketed phrases are then translated as a whole unit by the decoder. Experimental results and analysis show that the new model outperforms other previous methods and achieves a substantial improvement over the baseline which is not syntactically informed. 1 Introduction The phrase-based approach is widely adopted in statistical machine translation (SMT). It segments a source sentence into a sequence of phrases, then translates and reorder these phrases in the target. In such a process, original phrase-based decoding (Koehn et al., 2003) does not take advantage of any linguistic analysis, which, however, is broadly used in rule-based approaches. Since it is not linguistically motivated, original phrasebased decoding might produce ungrammatical or even wrong translations. Consider the following Chinese fragment with its parse tree: Src: [把[[7月11日]NP [设立[为[航海节]NP ]PP ]VP ]IP ]VP Ref: established July 11 as Sailing Festival day Output: [to/把[⟨[set up/设立[for/为navigation/航海]] on July 11/7月11日⟩knots/节]] The output is generated from a phrase-based system which does not involve any syntactic analysis. Here we use “[]” (straight orientation) and “⟨⟩” (inverted orientation) to denote the common structure of the source fragment and its translation found by the decoder. We can observe that the decoder inadequately breaks up the second NP phrase and translates the two words “航海” and “节” separately. However, the parse tree of the source fragment constrains the phrase “航海节” to be translated as a unit. Without considering syntactic constraints from the parse tree, the decoder makes wrong decisions not only on phrase movement but also on the lexical selection for the multi-meaning word “节”1. To avert such errors, the decoder can fully respect linguistic structures by only allowing syntactic constituent translations and reorderings. This, unfortunately, significantly jeopardizes performance (Koehn et al., 2003; Xiong et al., 2008) because by integrating syntactic constraint into decoding as a hard constraint, it simply prohibits any other useful non-syntactic translations which violate constituent boundaries. To better leverage syntactic constraint yet still allow non-syntactic translations, Chiang (2005) introduces a count for each hypothesis and accumulates it whenever the hypothesis exactly matches syntactic boundaries on the source side. On the contrary, Marton and Resnik (2008) and Cherry (2008) accumulate a count whenever hypotheses violate constituent boundaries. 
These constituent matching/violation counts are used as a feature in the decoder’s log-linear model and their weights are tuned via minimal error rate training (MERT) (Och, 2003). In this way, syntactic constraint is integrated into decoding as a soft constraint to enable the decoder to reward hypotheses that respect syntactic analyses or to pe1This word can be translated into “section”, “festival”, and “knot” in different contexts. 315 nalize hypotheses that violate syntactic structures. Although experiments show that this constituent matching/violation counting feature achieves significant improvements on various language-pairs, one issue is that matching syntactic analysis can not always guarantee a good translation, and violating syntactic structure does not always induce a bad translation. Marton and Resnik (2008) find that some constituency types favor matching the source parse while others encourage violations. Therefore it is necessary to integrate more syntactic constraints into phrase translation, not just the constraint of constituent matching/violation. The other issue is that during decoding we are more concerned with the question of phrase cohesion, i.e. whether the current phrase can be translated as a unit or not within particular syntactic contexts (Fox, 2002)2, than that of constituent matching/violation. Phrase cohesion is one of the main reasons that we introduce syntactic constraints (Cherry, 2008). If a source phrase remains contiguous after translation, we refer this type of phrase bracketable, otherwise unbracketable. It is more desirable to translate a bracketable phrase than an unbracketable one. In this paper, we propose a syntax-driven bracketing (SDB) model to predict whether a phrase (a sequence of contiguous words) is bracketable or not using rich syntactic constraints. We parse the source language sentences in the word-aligned training corpus. According to the word alignments, we define bracketable and unbracketable instances. For each of these instances, we automatically extract relevant syntactic features from the source parse tree as bracketing evidences. Then we tune the weights of these features using a maximum entropy (ME) trainer. In this way, we build two bracketing models: 1) a unary SDB model (UniSDB) which predicts whether an independent phrase is bracketable or not; and 2) a binary SDB model(BiSDB) which predicts whether two neighboring phrases are bracketable. Similar to previous methods, our SDB model is integrated into the decoder’s log-linear model as a feature so that we can inherit the idea of soft constraints. In contrast to the constituent matching/violation counting (CMVC) (Chiang, 2005; Marton and Resnik, 2008; Cherry, 2008), our SDB model has 2Here we expand the definition of phrase to include both syntactic and non-syntactic phrases. the following advantages • The SDB model automatically learns syntactic constraints from training data while the CMVC uses manually defined syntactic constraints: constituency matching/violation. In our SDB model, each learned syntactic feature from bracketing instances can be considered as a syntactic constraint. Therefore we can use thousands of syntactic constraints to guide phrase translation. • The SDB model maintains and protects the strength of the phrase-based approach in a better way than the CMVC does. 
It is able to reward non-syntactic translations by assigning an adequate probability to them if these translations are appropriate to particular syntactic contexts on the source side, rather than always punish them. We test our SDB model against the baseline which doest not use any syntactic constraints on Chinese-to-English translation. To compare with the CMVC, we also conduct experiments using (Marton and Resnik, 2008)’s XP+. The XP+ accumulates a count for each hypothesis whenever it violates the boundaries of a constituent with a label from {NP, VP, CP, IP, PP, ADVP, QP, LCP, DNP}. The XP+ is the best feature among all features that Marton and Resnik use for Chinese-toEnglish translation. Our experimental results display that our SDB model achieves a substantial improvement over the baseline and significantly outperforms XP+ according to the BLEU metric (Papineni et al., 2002). In addition, our analysis shows further evidences of the performance gain from a different perspective than that of BLEU. The paper proceeds as follows. In section 2 we describe how to learn bracketing instances from a training corpus. In section 3 we elaborate the syntax-driven bracketing model, including feature generation and the integration of the SDB model into phrase-based SMT. In section 4 and 5, we present our experiments and analysis. And we finally conclude in section 6. 2 The Acquisition of Bracketing Instances In this section, we formally define the bracketing instance, comprising two types namely binary bracketing instance and unary bracketing instance. 316 We present an algorithm to automatically extract these bracketing instances from word-aligned bilingual corpus where the source language sentences are parsed. Let c and e be the source sentence and the target sentence, W be the word alignment between them, T be the parse tree of c. We define a binary bracketing instance as a tuple ⟨b, τ(ci..j), τ(cj+1..k), τ(ci..k)⟩where b ∈ {bracketable, unbracketable}, ci..j and cj+1..k are two neighboring source phrases and τ(T, s) (τ(s) for short) is a subtree function which returns the minimal subtree covering the source sequence s from the source parse tree T. Note that τ(ci..k) includes both τ(ci..j) and τ(cj+1..k). For the two neighboring source phrases, the following conditions are satisfied: ∃eu..v, ep..q ∈e s.t. ∀(m, n) ∈W, i ≤m ≤j ↔u ≤n ≤v (1) ∀(m, n) ∈W, j + 1 ≤m ≤k ↔p ≤n ≤q (2) The above (1) means that there exists a target phrase eu..v aligned to ci..j and (2) denotes a target phrase ep..q aligned to cj+1..k. If eu..v and ep..q are neighboring to each other or all words between the two phrases are aligned to null, we set b = bracketable, otherwise b = unbracketable. From a binary bracketing instance, we derive a unary bracketing instance ⟨b, τ(ci..k)⟩, ignoring the subtrees τ(ci..j) and τ(cj+1..k). Let n be the number of words of c. If we extract all potential bracketing instances, there will be o(n2) unary instances and o(n3) binary instances. To keep the number of bracketing instances tractable, we only record 4 representative bracketing instances for each index j: 1) the bracketable instance with the minimal τ(ci..k), 2) the bracketable instance with the maximal τ(ci..k), 3) the unbracketable instance with the minimal τ(ci..k), and 4) the unbracketable instance with the maximal τ(ci..k). Figure 1 shows the algorithm to extract bracketing instances. 
Line 3-11 find all potential bracketing instances for each (i, j, k) ∈c but only keep 4 bracketing instances for each index j: two minimal and two maximal instances. This algorithm learns binary bracketing instances, from which we can derive unary bracketing instances. 1: Input: sentence pair (c, e), the parse tree T of c and the word alignment W between c and e 2: ℜ:= ∅ 3: for each (i, j, k) ∈c do 4: if There exist a target phrase eu..v aligned to ci..j and ep..q aligned to cj+1..k then 5: Get τ(ci..j), τ(cj+1..k), and τ(ci..k) 6: Determine b according to the relationship between eu..v and ep..q 7: if τ(ci..k) is currently maximal or minimal then 8: Update bracketing instances for index j 9: end if 10: end if 11: end for 12: for each j ∈c do 13: ℜ:= ℜ∪{bracketing instances from j} 14: end for 15: Output: bracketing instances ℜ Figure 1: Bracketing Instances Extraction Algorithm. 3 The Syntax-Driven Bracketing Model 3.1 The Model Our interest is to automatically detect phrase bracketing using rich contextual information. We consider this task as a binary-class classification problem: whether the current source phrase s is bracketable (b) within particular syntactic contexts (τ(s)). If two neighboring sub-phrases s1 and s2 are given, we can use more inner syntactic contexts to complete this binary classification task. We construct the syntax-driven bracketing model within the maximum entropy framework. A unary SDB model is defined as: PUniSDB(b|τ(s), T) = exp(P i θihi(b, τ(s), T) P b exp(P i θihi(b, τ(s), T) (3) where hi ∈{0, 1} is a binary feature function which we will describe in the next subsection, and θi is the weight of hi. Similarly, a binary SDB model is defined as: PBiSDB(b|τ(s1), τ(s2), τ(s), T) = exp(P i θihi(b, τ(s1), τ(s2), τ(s), T) P b exp(P i θihi(b, τ(s1), τ(s2), τ(s), T) (4) The most important advantage of ME-based SDB model is its capacity of incorporating more fine-grained contextual features besides the binary feature that detects constituent boundary violation or matching. By employing these features, we can investigate the value of various syntactic constraints in phrase translation. 317 j i n g f a n g p o l i c e y i f e n g s u o b l o c k l e b a o z h a b o m b x i a n c h a n g s c e n e N N N N N P V P A S V V A D N N A D V P V P N P I P s s 1 s 2 Figure 2: Illustration of syntax-driven features used in SDB. Here we only show the features for the source phrase s. The triangle, rounded rectangle and rectangle denote the rule feature, path feature and constituent boundary matching feature respectively. 3.2 Syntax-Driven Features Let s be the source phrase in question, s1 and s2 be the two neighboring sub-phrases. σ(.) is the root node of τ(.). The SDB model exploits various syntactic features as follows. • Rule Features (RF) We use the CFG rules of σ(s), σ(s1) and σ(s2) as features. These features capture syntactic “horizontal context” which demonstrates the expansion trend of the source phrase s, s1 and s2 on the parse tree. In figure 2, the CFG rule “ADVP→AD”, “VP→VV AS NP”, and “VP→ADVP VP” are used as features for s1, s2 and s respectively. • Path Features (PF) The tree path σ(s1)..σ(s) connecting σ(s1) and σ(s), σ(s2)..σ(s) connecting σ(s2) and σ(s), and σ(s)..ρ connecting σ(s) and the root node ρ of the whole parse tree are used as features. These features provide syntactic “vertical context” which shows the generation history of the source phrases on the parse tree. 
( a ) ( b ) ( c ) Figure 3: Three scenarios of the relationship between phrase boundaries and constituent boundaries. The gray circles are constituent boundaries while the black circles are phrase boundaries. In figure 2, the path features are “ADVP VP”, “VP VP” and “VP IP” for s1, s2 and s respectively. • Constituent Boundary Matching Features (CBMF) These features are to capture the relationship between a source phrase s and τ(s) or τ(s)’s subtrees. There are three different scenarios3: 1) exact match, where s exactly matches the boundaries of τ(s) (figure 3(a)), 2) inside match, where s exactly spans a sequence of τ(s)’s subtrees (figure 3(b)), and 3) crossing, where s crosses the boundaries of one or two subtrees of τ(s) (figure 3(c)). In the case of 1) or 2), we set the value of this feature to σ(s)-M or σ(s)-I respectively. When s crosses the boundaries of the subconstituent ϵl on s’s left, we set the value to σ(ϵl)-LC; If s crosses the boundaries of the sub-constituent ϵr on s’s right, we set the value to σ(ϵr)-RC; If both, we set the value to σ(ϵl)-LC-σ(ϵr)-RC. Let’s revisit the Figure 2. The source phrase s1 exactly matches the constituent ADVP, therefore CBMF is “ADVP-M”. The source phrase s2 exactly spans two sub-trees VV and AS of VP, therefore CBMF is “VP-I”. Finally, the source phrase s cross boundaries of the lower VP on the right, therefore CBMF is “VP-RC”. 3.3 The Integration of the SDB Model into Phrase-Based SMT We integrate the SDB model into phrase-based SMT to help decoder perform syntax-driven phrase translation. In particular, we add a 3The three scenarios that we define here are similar to those in (L¨u et al., 2002). 318 new feature into the log-linear translation model: PSDB(b|T, τ(.)). This feature is computed by the SDB model described in equation (3) or equation (4), which estimates a probability that a source span is to be translated as a unit within particular syntactic contexts. If a source span can be translated as a unit, the feature will give a higher probability even though this span violates boundaries of a constituent. Otherwise, a lower probability is given. Through this additional feature, we want the decoder to prefer hypotheses that translate source spans which can be translated as a unit, and avoids translating those which are discontinuous after translation. The weight of this new feature is tuned via MERT, which measures the extent to which this feature should be trusted. In this paper, we implement the SDB model in a state-of-the-art phrase-based system which adapts a binary bracketing transduction grammar (BTG) (Wu, 1997) to phrase translation and reordering, described in (Xiong et al., 2006). Whenever a BTG merging rule (s →[s1 s2] or s →⟨s1 s2⟩) is used, the SDB model gives a probability to the span s covered by the rule, which estimates the extent to which the span is bracketable. For the unary SDB model, we only consider the features from τ(s). For the binary SDB model, we use all features from τ(s1), τ(s2) and τ(s) since the binary SDB model is naturally suitable to the binary BTG rules. The SDB model, however, is not only limited to phrase-based SMT using BTG rules. Since it is applied on a source span each time, any other hierarchical phrase-based or syntax-based system that translates source spans recursively or linearly, can adopt the SDB model. 4 Experiments We carried out the MT experiments on Chineseto-English translation, using (Xiong et al., 2006)’s system as our baseline system. 
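Before the experimental details, the sketch below illustrates two pieces of the model just described: the constituent boundary matching feature of Section 3.2 and the maximum-entropy probability of equation (3). It is a hedged illustration rather than the authors' implementation (which trains weights with an off-the-shelf ME toolkit); the Node class, the feature-string format, and the weight representation are assumptions made for this sketch, and rule and path features would be extracted analogously by reading off the CFG rule at σ(s) and the label path from σ(s) to the root.

import math

class Node:
    # Minimal parse-tree node: a label, an inclusive source span [start, end],
    # and an ordered list of children (empty for leaves).
    def __init__(self, label, start, end, children=()):
        self.label, self.start, self.end = label, start, end
        self.children = list(children)

def tau(root, i, j):
    # tau(c_i..j): the minimal subtree of the source parse covering span [i, j].
    node = root
    while True:
        tighter = [c for c in node.children if c.start <= i and j <= c.end]
        if not tighter:
            return node
        node = tighter[0]

def cbmf(root, i, j):
    # Constituent Boundary Matching Feature for the span s = c_i..j.
    t = tau(root, i, j)
    if (t.start, t.end) == (i, j):
        return t.label + '-M'                              # exact match
    inside = [c for c in t.children if i <= c.start and c.end <= j]
    if inside and inside[0].start == i and inside[-1].end == j:
        return t.label + '-I'                              # spans a sequence of subtrees
    left = next((c for c in t.children if c.start < i <= c.end), None)
    right = next((c for c in t.children if c.start <= j < c.end), None)
    parts = ([left.label + '-LC'] if left else []) + ([right.label + '-RC'] if right else [])
    return '-'.join(parts) if parts else t.label + '-I'    # defensive default

def p_sdb(features, weights):
    # Equation (3): a maximum-entropy distribution over b in {bracketable, unbracketable},
    # with binary features represented as strings and weights keyed by (b, feature).
    scores = {b: math.exp(sum(weights.get((b, f), 0.0) for f in features))
              for b in ('bracketable', 'unbracketable')}
    z = sum(scores.values())
    return {b: s / z for b, s in scores.items()}

# e.g. features = [cbmf(tree, i, j)] plus the rule and path feature strings for the span;
# p_sdb(features, trained_weights)['bracketable'] would be the span's P_SDB value.

At decoding time, as Section 3.3 describes, this probability is attached to the span covered by a BTG merging rule and enters the decoder's log-linear model as one more feature with a MERT-tuned weight.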
We modified the baseline decoder to incorporate our SDB models as descried in section 3.3. In order to compare with Marton and Resnik’s approach, we also adapted the baseline decoder to their XP+ feature. 4.1 Experimental Setup In order to obtain syntactic trees for SDB models and XP+, we parsed source sentences using a lexicalized PCFG parser (Xiong et al., 2005). The parser was trained on the Penn Chinese Treebank with an F1 score of 79.4%. All translation models were trained on the FBIS corpus. We removed 15,250 sentences, for which the Chinese parser failed to produce syntactic parse trees. To obtain word-level alignments, we ran GIZA++ (Och and Ney, 2000) on the remaining corpus in both directions, and applied the “grow-diag-final” refinement rule (Koehn et al., 2005) to produce the final many-to-many word alignments. We built our four-gram language model using Xinhua section of the English Gigaword corpus (181.1M words) with the SRILM toolkit (Stolcke, 2002). For the efficiency of MERT, we built our development set (580 sentences) using sentences not exceeding 50 characters from the NIST MT-02 set. We evaluated all models on the NIST MT-05 set using case-sensitive BLEU-4. Statistical significance in BLEU score differences was tested by paired bootstrap re-sampling (Koehn, 2004). 4.2 SDB Training We extracted 6.55M bracketing instances from our training corpus using the algorithm shown in figure 1, which contains 4.67M bracketable instances and 1.89M unbracketable instances. From extracted bracketing instances we generated syntaxdriven features, which include 73,480 rule features, 153,614 path features and 336 constituent boundary matching features. To tune weights of features, we ran the MaxEnt toolkit (Zhang, 2004) with iteration number being set to 100 and Gaussian prior to 1 to avoid overfitting. 4.3 Results We ran the MERT module with our decoders to tune the feature weights. The values are shown in Table 1. The PSDB receives the largest feature weight, 0.29 for UniSDB and 0.38 for BiSDB, indicating that the SDB models exert a nontrivial impact on decoder. In Table 2, we present our results. Like (Marton and Resnik, 2008), we find that the XP+ feature obtains a significant improvement of 1.08 BLEU over the baseline. However, using all syntax-driven features described in section 3.2, our SDB models achieve larger improvements of up to 1.67 BLEU. The binary SDB (BiSDB) model statistically significantly outperforms Marton and Resnik’s XP+ by an absolute improvement of 0.59 (relatively 2%). It is also marginally better than the unary SDB model. 319 Features System P(c|e) P(e|c) Pw(c|e) Pw(e|c) Plm(e) Pr(e) Word Phr. XP+ PSDB Baseline 0.041 0.030 0.006 0.065 0.20 0.35 0.19 -0.12 — — XP+ 0.002 0.049 0.046 0.044 0.17 0.29 0.16 0.12 -0.12 — UniSDB 0.023 0.051 0.055 0.012 0.21 0.20 0.12 0.04 — 0.29 BiSDB 0.016 0.032 0.027 0.013 0.13 0.23 0.08 0.09 — 0.38 Table 1: Feature weights obtained by MERT on the development set. The first 4 features are the phrase translation probabilities in both directions and the lexical translation probabilities in both directions. Plm = language model; Pr = MaxEnt-based reordering model; Word = word bonus; Phr = phrase bonus. BLEU-n n-gram Precision System 4 1 2 3 4 5 6 7 8 Baseline 0.2612 0.71 0.36 0.18 0.10 0.054 0.030 0.016 0.009 XP+ 0.2720** 0.72 0.37 0.19 0.11 0.060 0.035 0.021 0.012 UniSDB 0.2762**+ 0.72 0.37 0.20 0.11 0.062 0.035 0.020 0.011 BiSDB 0.2779**++ 0.72 0.37 0.20 0.11 0.065 0.038 0.022 0.014 Table 2: Results on the test set. 
**: significantly better than baseline (p < 0.01). + or ++: significantly better than Marton and Resnik’s XP+ (p < 0.05 or p < 0.01, respectively). 5 Analysis In this section, we present analysis to perceive the influence mechanism of the SDB model on phrase translation by studying the effects of syntax-driven features and differences of 1-best translation outputs. 5.1 Effects of Syntax-Driven Features We conducted further experiments using individual syntax-driven features and their combinations. Table 3 shows the results, from which we have the following key observations. • The constituent boundary matching feature (CBMF) is a very important feature, which by itself achieves significant improvement over the baseline (up to 1.13 BLEU). Both our CBMF and Marton and Resnik’s XP+ feature focus on the relationship between a source phrase and a constituent. Their significant contribution to the improvement implies that this relationship is an important syntactic constraint for phrase translation. • Adding more features, such as path feature and rule feature, achieves further improvements. This demonstrates the advantage of using more syntactic constraints in the SDB model, compared with Marton and Resnik’s XP+. BLEU-4 Features UniSDB BiSDB PF + RF 0.2555 0.2644*@@ PF 0.2596 0.2671**@@ CBMF 0.2678** 0.2725**@ RF + CBMF 0.2737** 0.2780**++@@ PF + CBMF 0.2755**+ 0.2782**++@− RF + PF + CBMF 0.2762**+ 0.2779**++ Table 3: Results of different feature sets. * or **: significantly better than baseline (p < 0.05 or p < 0.01, respectively). + or ++: significantly better than XP+ (p < 0.05 or p < 0.01, respectively). @−: almost significantly better than its UniSDB counterpart (p < 0.075). @ or @@: significantly better than its UniSDB counterpart (p < 0.05 or p < 0.01, respectively). • In most cases, the binary SDB is constantly significantly better than the unary SDB, suggesting that inner contexts are useful in predicting phrase bracketing. 5.2 Beyond BLEU We want to further study the happenings after we integrate the constraint feature (our SDB model and Marton and Resnik’s XP+) into the log-linear translation model. In particular, we want to investigate: to what extent syntactic constraints change translation outputs? And in what direction the changes take place? Since BLEU is not sufficient 320 System CCM Rate (%) Baseline 43.5 XP+ 74.5 BiSDB 72.4 Table 4: Consistent constituent matching rates reported on 1-best translation outputs. to provide such insights, we introduce a new statistical metric which measures the proportion of syntactic constituents 4 whose boundaries are consistently matched by decoder during translation. This proportion, which we call consistent constituent matching (CCM) rate , reflects the extent to which the translation output respects the source parse tree. In order to calculate this rate, we output translation results as well as phrase alignments found by decoders. Then for each multi-branch constituent cj i spanning from i to j on the source side, we check the following conditions. • If its boundaries i and j are aligned to phrase segmentation boundaries found by decoder. • If all target phrases inside cj i’s target span 5 are aligned to the source phrases within cj i and not to the phrases outside cj i. If both conditions are satisfied, the constituent cj i is consistently matched by decoder. Table 4 shows the consistent constituent matching rates. 
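The two conditions above can be checked directly from the decoder's phrase alignment. The following Python sketch computes the CCM rate; it assumes the decoder output provides, per sentence, a list of ((source start, source end), (target start, target end)) phrase pairs and that multi-branch constituents are given as source spans, and it adopts one natural reading of "target phrases inside the target span" (full containment). All names are illustrative, not taken from any toolkit.

def constituent_target_span(c, phrase_alignment):
    # Target span of constituent c = (i, j): min/max over the target spans of the
    # decoder's source phrases segmented inside [i, j] (cf. footnote 5).
    i, j = c
    covered = [(u, v) for (s1, s2), (u, v) in phrase_alignment if i <= s1 and s2 <= j]
    if not covered:
        return None
    return min(u for u, _ in covered), max(v for _, v in covered)

def consistently_matched(c, phrase_alignment):
    i, j = c
    # Condition 1: i starts some decoder phrase and j ends some decoder phrase.
    starts = {s1 for (s1, s2), tgt in phrase_alignment}
    ends = {s2 for (s1, s2), tgt in phrase_alignment}
    if i not in starts or j not in ends:
        return False
    span = constituent_target_span(c, phrase_alignment)
    if span is None:
        return False
    lo, hi = span
    # Condition 2: every target phrase lying inside [lo, hi] must come from a source
    # phrase inside [i, j], not from a phrase outside it.
    for (s1, s2), (u, v) in phrase_alignment:
        if lo <= u and v <= hi and not (i <= s1 and s2 <= j):
            return False
    return True

def ccm_rate(sentences):
    # sentences: iterable of (multi_branch_constituent_spans, phrase_alignment) pairs.
    matched = total = 0
    for constituents, pa in sentences:
        for c in constituents:
            total += 1
            matched += consistently_matched(c, pa)
    return matched / total if total else 0.0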
Without using any source-side syntactic information, the baseline obtains a low CCM rate of 43.53%, indicating that the baseline decoder violates the source parse tree more than it respects the source structure. The translation output described in section 1 is actually generated by the baseline decoder, where the second NP phrase boundaries are violated. By integrating syntactic constraints into decoding, we can see that both Marton and Resnik’s XP+ and our SDB model achieve a significantly higher constituent matching rate, suggesting that they are more likely to respect the source structure. The examples in Table 5 show that the decoder is able to generate better translations if it is 4We only consider multi-branch constituents. 5Given a phrase alignment P = {cg f ↔eq p}, if the segmentation within cj i defined by P is cj i = cj1 i1 ...cjk ik , and cjr ir ↔evr ur ∈P, 1 ≤r ≤k, we define the target span of cj i as a pair where the first element is min(eu1...euk) and the second element is max(ev1...evk), similar to (Fox, 2002). CCM Rates (%) System <6 6-10 11-15 16-20 >20 XP+ 75.2 70.9 71.0 76.2 82.2 BiSDB 69.3 74.7 74.2 80.0 85.6 Table 6: Consistent constituent matching rates for structures with different spans. faithful to the source parse tree by using syntactic constraints. We further conducted a deep comparison of translation outputs of BiSDB vs. XP+ with regard to constituent matching and violation. We found two significant differences that may explain why our BiSDB outperforms XP+. First, although the overall CCM rate of XP+ is higher than that of BiSDB, BiSDB obtains higher CCM rates for long-span structures than XP+ does, which are shown in Table 6. Generally speaking, violations of long-span constituents have a more negative impact on performance than short-span violations if these violations are toxic. This explains why BiSDB achieves relatively higher precision improvements for higher n-grams over XP+, as shown in Table 3. Second, compared with XP+ that only punishes constituent boundary violations, our SDB model is able to encourage violations if these violations are done on bracketable phrases. We observed in many cases that by violating constituent boundaries BiSDB produces better translations than XP+ does, which on the contrary matches these boundaries. Still consider the example shown in section 1. The following translations are found by XP+ and BiSDB respectively. XP+: [to/把⟨[set up/设立[for the/为[navigation/航海section/节]]] on July 11/7月11日⟩] BiSDB: [to/把⟨[[set up/设立a/为] [marine/航海 festival/节]] on July 11/7月11日⟩] XP+ here matches all constituent boundaries while BiSDB violates the PP constituent to translate the non-syntactic phrase “设立为”. Table 7 shows more examples. From these examples, we clearly see that appropriate violations are helpful and even necessary for generating better translations. By allowing appropriate violations to translate nonsyntactic phrases according to particular syntactic contexts, our SDB model better inherits the strength of phrase-based approach than XP+. 
321 Src: [[为[印度洋灾区民众]NP ]PP [奉献[自己]NP [一份爱心]NP ]VP ]VP Ref: show their loving hearts to people in the Indian Ocean disaster areas Baseline: ⟨love/爱心[for the/为⟨[people/民众[to/奉献[own/自己a report/一份]]]⟩⟨in/灾区the Indian Ocean/印 度洋⟩]⟩ XP+: ⟨[contribute/奉献[its/自己[part/一份love/爱心]]] [for/为⟨the people/民众⟨in/灾区the Indian Ocean/印 度洋⟩⟩]⟩ BiSDB: ⟨[[[contribute/奉献its/自己] part/一份] love/爱心] [for/为⟨the people/民众⟨in/灾区the Indian Ocean印 度洋⟩⟩]⟩ Src: [五角大厦[已]ADVP [派遣[[二十架]QP 飞机]NP [至南亚]PP]VP]IP [,]PU [其中包括...]IP Ref: The Pentagon has dispatched 20 airplanes to South Asia, including... Baseline: [[The Pentagon/五角大厦has sent/已派遣] [⟨[to/至[[South Asia/南亚,/,] including/其中包括]] [20/二 十plane/架飞机]⟩]] XP+: [The Pentagon/五角大厦[has/已[sent/派遣[[20/二十planes/架飞机] [to/至South Asia/南亚]]]]] [,/, [including/其中包括...]] BiSDB: [The Pentagon/五角大厦[has sent/已派遣[[20/二十planes/架飞机] [to/至South Asia/南亚]]] [,/,[including/其中包括...]] Table 5: Translation examples showing that both XP+ and BiSDB produce better translations than the baseline, which inappropriately violates constituent boundaries (within underlined phrases). Src: [[在[[[美国国务院与鲍尔]NP [短暂]ADJP [会谈]NP]NP 后]LCP]PP 表示]VP Ref: said after a brief discussion with Powell at the US State Department XP+: [⟨after/后⟨⟨[a brief/短暂meeting/会谈] [with/与Powell/鲍尔]⟩[in/在the US State Department/美国国 务院]⟩said/表示] BiSDB: ⟨said after/后表示⟨[a brief/短暂meeting/会谈] ⟨with Powell/与鲍尔[at/在the State Department of the United States/美国国务院]⟩⟩⟩ Src: [向[[建立[未来民主政治]NP]VP]IP]PP [迈出了[关键性的一步]NP]VP Ref: took a key step towards building future democratic politics XP+: ⟨[a/了[key/关键性step/的一步]] ⟨forward/迈出[to/向[a/建立[future/未来political democracy/民主政 治]]]⟩⟩ BiSDB: ⟨[made a/迈出了[key/关键性step/的一步]] [towards establishing a/向建立⟨democratic politics/民主政 治in the future/未来⟩]⟩ Table 7: Translation examples showing that BiSDB produces better translations than XP+ via appropriate violations of constituent boundaries (within double-underlined phrases). 6 Conclusion In this paper, we presented a syntax-driven bracketing model that automatically learns bracketing knowledge from training corpus. With this knowledge, the model is able to predict whether source phrases can be translated together, regardless of matching or crossing syntactic constituents. We integrate this model into phrase-based SMT to increase its capacity of linguistically motivated translation without undermining its strengths. Experiments show that our model achieves substantial improvements over baseline and significantly outperforms (Marton and Resnik, 2008)’s XP+. Compared with previous constituency feature, our SDB model is capable of incorporating more syntactic constraints, and rewarding necessary violations of the source parse tree. Marton and Resnik (2008) find that their constituent constraints are sensitive to language pairs. In the future work, we will use other language pairs to test our models so that we could know whether our method is language-independent. References Colin Cherry. 2008. Cohesive Phrase-based Decoding for Statistical Machine Translation. In Proceedings of ACL. David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Proceedings of ACL, pages 263–270. David Chiang, Yuval Marton and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features. In Proceedings of EMNLP. Heidi J. Fox 2002. Phrasal Cohesion and Statistical Machine Translation. In Proceedings of EMNLP, pages 304–311. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. 
In Proceedings of HLT-NAACL. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of EMNLP. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In International Workshop on Spoken Language Translation. Yajuan Lü, Sheng Li, Tiezhun Zhao and Muyun Yang. 2002. Learning Chinese Bracketing Knowledge Based on a Bilingual Language Model. In Proceedings of COLING. Yuval Marton and Philip Resnik. 2008. Soft Syntactic Constraints for Hierarchical Phrase-Based Translation. In Proceedings of ACL. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of ACL 2000. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL 2003. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL. Andreas Stolcke. 2002. SRILM - an Extensible Language Modeling Toolkit. In Proceedings of International Conference on Spoken Language Processing, volume 2, pages 901-904. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-403. Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin, Yueliang Qian. 2005. Parsing the Penn Chinese Treebank with Semantic Knowledge. In Proceedings of IJCNLP, Jeju Island, Korea. Deyi Xiong, Qun Liu and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of ACL-COLING 2006. Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2008. Linguistically Annotated BTG for Statistical Machine Translation. In Proceedings of COLING 2008. Le Zhang. 2004. Maximum Entropy Modeling Toolkit for Python and C++. Available at http://homepages.inf.ed.ac.uk/s0450736/maxent_toolkit.html.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 324–332, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Topological Ordering of Function Words in Hierarchical Phrase-based Translation Hendra Setiawan1 and Min-Yen Kan2 and Haizhou Li3 and Philip Resnik1 1University of Maryland Institute for Advanced Computer Studies 2School of Computing, National University of Singapore 3Human Language Technology, Institute for Infocomm Research, Singapore {hendra,resnik}@umiacs.umd.edu, [email protected], [email protected] Abstract Hierarchical phrase-based models are attractive because they provide a consistent framework within which to characterize both local and long-distance reorderings, but they also make it difcult to distinguish many implausible reorderings from those that are linguistically plausible. Rather than appealing to annotationdriven syntactic modeling, we address this problem by observing the inuential role of function words in determining syntactic structure, and introducing soft constraints on function word relationships as part of a standard log-linear hierarchical phrase-based model. Experimentation on Chinese-English and Arabic-English translation demonstrates that the approach yields signicant gains in performance. 1 Introduction Hierarchical phrase-based models (Chiang, 2005; Chiang, 2007) offer a number of attractive benets in statistical machine translation (SMT), while maintaining the strengths of phrase-based systems (Koehn et al., 2003). The most important of these is the ability to model long-distance reordering efciently. To model such a reordering, a hierarchical phrase-based system demands no additional parameters, since long and short distance reorderings are modeled identically using synchronous context free grammar (SCFG) rules. The same rule, depending on its topological ordering – i.e. its position in the hierarchical structure – can affect both short and long spans of text. Interestingly, hierarchical phrase-based models provide this benet without making any linguistic commitments beyond the structure of the model. However, the system's lack of linguistic commitment is also responsible for one of its greatest drawbacks. In the absence of linguistic knowledge, the system models linguistic structure using an SCFG that contains only one type of nonterminal symbol1. As a result, the system is susceptible to the overgeneration problem: the grammar may suggest more reordering choices than appropriate, and many of those choices lead to ungrammatical translations. Chiang (2005) hypothesized that incorrect reordering choices would often correspond to hierarchical phrases that violate syntactic boundaries in the source language, and he explored the use of a “constituent feature” intended to reward the application of hierarchical phrases which respect source language syntactic categories. Although this did not yield signicant improvements, Marton and Resnik (2008) and Chiang et al. (2008) extended this approach by introducing soft syntactic constraints similar to the constituent feature, but more ne-grained and sensitive to distinctions among syntactic categories; these led to substantial improvements in performance. Zollman et al. (2006) took a complementary approach, constraining the application of hierarchical rules to respect syntactic boundaries in the target language syntax. 
Whether the focus is on constraints from the source language or the target language, the main ingredient in both previous approaches is the idea of constraining the spans of hierarchical phrases to respect syntactic boundaries. In this paper, we pursue a different approach to improving reordering choices in a hierarchical phrase-based model. Instead of biasing the model toward hierarchical phrases whose spans respect syntactic boundaries, we focus on the topological ordering of phrases in the hierarchical structure. We conjecture that since incorrect reordering choices correspond to incorrect topological orderings, boosting the probability of correct topo1In practice, one additional nonterminal symbol is used in “glue rules”. This is not relevant in the present discussion. 324 logical ordering choices should improve the system. Although related to previous proposals (correct topological orderings lead to correct spans and vice versa), our proposal incorporates broader context and is structurally more aware, since we look at the topological ordering of a phrase relative to other phrases, rather than modeling additional properties of a phrase in isolation. In addition, our proposal requires no monolingual parsing or linguistically informed syntactic modeling for either the source or target language. The key to our approach is the observation that we can approximate the topological ordering of hierarchical phrases via the topological ordering of function words. We introduce a statistical reordering model that we call the pairwise dominance model, which characterizes reorderings of phrases around a pair of function words. In modeling function words, our model can be viewed as a successor to the function words-centric reordering model (Setiawan et al., 2007), expanding on the previous approach by modeling pairs of function words rather than individual function words in isolation. The rest of the paper is organized as follows. In Section 2, we briey review hierarchical phrasebased models. In Section 3, we rst describe the overgeneration problem in more detail with a concrete example, and then motivate our idea of using the topological ordering of function words to address the problem. In Section 4, we develop our idea by introducing the pairwise dominance model, expressing function word relationships in terms of what we call the the dominance predicate. In Section 5, we describe an algorithm to estimate the parameters of the dominance predicate from parallel text. In Sections 6 and 7, we describe our experiments, and in Section 8, we analyze the output of our system and lay out a possible future direction. Section 9 discusses the relation of our approach to prior work and Section 10 wraps up with our conclusions. 2 Hierarchical Phrase-based System Formally, a hierarchical phrase-based SMT system is based on a weighted synchronous context free grammar (SCFG) with one type of nonterminal symbol. Synchronous rules in hierarchical phrase-based models take the following form: X →⟨γ, α, ∼⟩ (1) where X is the nonterminal symbol and γ and α are strings that contain the combination of lexical items and nonterminals in the source and target languages, respectively. The ∼symbol indicates that nonterminals in γ and α are synchronized through co-indexation; i.e., nonterminals with the same index are aligned. Nonterminal correspondences are strictly one-to-one, and in practice the number of nonterminals on the right hand side is constrained to at most two, which must be separated by lexical items. 
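As a concrete illustration of the rule form X → ⟨γ, α, ∼⟩ and of the practical restrictions just mentioned, the Python sketch below represents a synchronous rule and checks that its nonterminals are co-indexed one-to-one, that there are at most two of them, and that adjacent source-side nonterminals are separated by lexical items. The representation and the example rule are assumptions made for illustration, not the grammar format of any particular decoder.

class SCFGRule:
    # gamma and alpha are token lists for the source and target right-hand sides;
    # a nonterminal is written ('X', index), a lexical item as a plain string, and
    # equal indices encode the one-to-one co-indexation (the ~ relation).
    def __init__(self, gamma, alpha):
        self.gamma, self.alpha = gamma, alpha

    @staticmethod
    def _nonterminals(side):
        return [tok for tok in side if isinstance(tok, tuple)]

    def is_valid(self):
        src = self._nonterminals(self.gamma)
        tgt = self._nonterminals(self.alpha)
        src_idx = [i for _, i in src]
        # one-to-one co-indexation: same index multiset on both sides, no repeated index
        if sorted(src_idx) != sorted(i for _, i in tgt) or len(set(src_idx)) != len(src_idx):
            return False
        # at most two nonterminals on the right-hand side
        if len(src) > 2:
            return False
        # source-side nonterminals must be separated by lexical items
        for a, b in zip(self.gamma, self.gamma[1:]):
            if isinstance(a, tuple) and isinstance(b, tuple):
                return False
        return True

# A made-up rule X -> < X1 de X2 , X2 of X1 >, written with explicit indices:
rule = SCFGRule([('X', 1), 'de', ('X', 2)], [('X', 2), 'of', ('X', 1)])
assert rule.is_valid()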
Each rule is associated with a score that is computed via the following log linear formula: w(X →⟨γ, α, ∼⟩) = Y i fλi i (2) where fi is a feature describing one particular aspect of the rule and λi is the corresponding weight of that feature. Given ˜e and ˜f as the source and target phrases associated with the rule, typical features used are rule's translation probability Ptrans( ˜f|˜e) and its inverse Ptrans(˜e| ˜f), the lexical probability Plex( ˜f|˜e) and its inverse Plex(˜e| ˜f). Systems generally also employ a word penalty, a phrase penalty, and target language model feature. (See (Chiang, 2005) for more detailed discussion.) Our pairwise dominance model will be expressed as an additional rule-level feature in the model. Translation of a source sentence e using hierarchical phrase-based models is formulated as a search for the most probable derivation D∗whose source side is equal to e: D∗= argmax P(D), where source(D)=e. D = Xi, i ∈1...|D| is a set of rules following a certain topological ordering, indicated here by the use of the superscript. 3 Overgeneration and Topological Ordering of Function Words The use of only one type of nonterminal allows a exible permutation of the topological ordering of the same set of rules, resulting in a huge number of possible derivations from a given source sentence. In that respect, the overgeneration problem is not new to SMT: Bracketing Transduction Grammar (BTG) (Wu, 1997) uses a single type of nonterminal and is subject to overgeneration problems, as well.2 2Note, however, that overgeneration in BTG can be viewed as a feature, not a bug, since the formalism was origi325 The problem may be less severe in hierarchical phrase-based MT than in BTG, since lexical items on the rules' right hand sides often limit the span of nonterminals. Nonetheless overgeneration of reorderings is still problematic, as we illustrate using the hypothetical Chinese-to-English example in Fig. 1. Suppose we want to translate the Chinese sentence in Fig. 1 into English using the following set of rules: 1. Xa →⟨ž Z X1, computers and X1⟩ 2. Xb →⟨X1 4 X2, X1 are X2⟩ 3. Xc →⟨Cå , cell phones ⟩ 4. Xd →⟨X1 { Ò , inventions of X1⟩ 5. Xe →⟨ÞÇ- , the last century ⟩ Co-indexation of nonterminals on the right hand side is indicated by subscripts, and for our examples the label of the nonterminal on the left hand side is used as the rule's unique identier. To correctly translate the sentence, a hierarchical phrase-based system needs to model the subject noun phrase, object noun phrase and copula constructions; these are captured by rules Xa, Xd and Xb respectively, so this set of rules represents a hierarchical phrase-based system that can be used to correctly translate the Chinese sentence. Note that the Chinese word order is correctly preserved in the subject (Xa) as well as copula constructions (Xb), and correctly inverted in the object construction (Xd). However, although it can generate the correct translation in Fig. 2, the grammar has no mechanism to prevent the generation of an incorrect translation like the one illustrated in Fig. 3. If we contrast the topological ordering of the rules in Fig. 2 and Fig. 3, we observe that the difference is small but quite signicant. Using precede symbol (≺) to indicate the rst operand immediately dominates the second operand in the hierarchical structure, the topological orderings in Fig. 2 and Fig. 3 are Xa ≺Xb ≺Xc ≺Xd ≺Xe and Xd ≺Xa ≺Xb ≺Xc ≺Xe, respectively. The only difference is the topological ordering of Xd: in Fig. 
2, it appears below most of the other hierarchical phrases, while in Fig. 3, it appears above all the other hierarchical phrases. nally introduced for bilingual analysis rather than generation of translations. Modeling the topological ordering of hierarchical phrases is computationally prohibitive, since there are literally millions of hierarchical rules in the system's automatically-learned grammar and millions of possible ways to order their application. To avoid this computational problem and still model the topological ordering, we propose to use the topological ordering of function words as a practical approximation. This is motivated by the fact that function words tend to carry crucial syntactic information in sentences, serving as the “glue” for content-bearing phrases. Moreover, the positional relationships between function words and content phrases tends to be xed (e.g., in English, prepositions invariably precede their object noun phrase), at least for the languages we have worked with thus far. In the Chinese sentence above, there are three function words involved: the conjunction Z (and), the copula 4 (are), and the noun phrase marker { (of).3 Using the function words as approximate representations of the rules in which they appear, the topological ordering of hierarchical phrases in Fig. 2 is Z(and) ≺4(are) ≺{(of), while that in Fig. 3 is {(of) ≺Z(and) ≺4(are).4 We can distinguish the correct and incorrect reordering choices by looking at this simple information. In the correct reordering choice, {(of) appears at the lower level of the hierarchy while in the incorrect one, {(of) appears at the highest level of the hierarchy. 4 Pairwise Dominance Model Our example suggests that we may be able to improve the translation model's sensitivity to correct versus incorrect reordering choices by modeling the topological ordering of function words. We do so by introducing a predicate capturing the dominance relationship in a derivation between pairs of neighboring function words.5 Let us dene a predicate d(Y ′, Y ′′) that takes two function words as input and outputs one of 3We use the term “noun phrase marker” here in a general sense, meaning that in this example it helps tell us that the phrase is part of an NP, not as a technical linguistic term. It serves in other grammatical roles, as well. Disambiguating the syntactic roles of function words might be a particularly useful thing to do in the model we are proposing; this is a question for future research. 4Note that for expository purposes, we designed our simple grammar to ensure that these function words appear in separate rules. 5Two function words are considered neighbors iff no other function word appears between them in the source sentence. 326 ž Z Cå 4 Ò { ÞÇ- ? XXXXX z      9 ? ? ? ? are computers and cell phones inventions of the last century Figure 1: A running example of Chinese-to-English translation. 
Xa⇒⟨ž Z Xb, computers and Xb⟩ ⇒⟨ž Z Xc 4 Xd, computers and Xc are Xd⟩ ⇒⟨ž Z Cå 4 Xd, computers and cell phones are Xd⟩ ⇒⟨ž Z Cå 4 Xe { Ò , computers and cell phones are inventions of Xe⟩ ⇒⟨ž Z Cå 4 ÞÇ- { Ò , computers and cell phones are inventions of the last century⟩ Figure 2: The derivation that leads to the correct translation Xd⇒⟨Xa { Ò , inventions of Xa⟩ ⇒⟨ž Z Xb { Ò , inventions of computers and Xb⟩ ⇒⟨ž Z Xc 4 Xe { Ò , inventions of computers and Xc are Xe⟩ ⇒⟨ž Z Cå 4 Xe { Ò , inventions of computers and cell phones are Xe⟩ ⇒⟨ž Z Cå 4 ÞÇ- { Ò , inventions of computers and cell phones are the last century⟩ Figure 3: The derivation that leads to the incorrect translation four values: {leftFirst, rightFirst, dontCare, neither}, where Y ′ appears to the left of Y ′′ in the source sentence. The value leftFirst indicates that in the derivation's topological ordering, Y ′ precedes Y ′′ (i.e. Y ′ dominates Y ′′ in the hierarchical structure), while rightFirst indicates that Y ′′ dominates Y ′. In Fig. 2, d(Y ′, Y ′′) = leftFirst for Y ′ = the copula 4 (are) and Y ′′ = the noun phrase marker { (of). The dontCare and neither values capture two additional relationships: dontCare indicates that the topological ordering of the function words is exible, and neither indicates that the topological ordering of the function words is disjoint. The former is useful in cases where the hierarchical phrases suggest the same kind of reordering, and therefore restricting their topological ordering is not necessary. This is illustrated in Fig. 2 by the pair Z(and) and the copula 4(are), where putting either one above the other does not change the nal word order. The latter is useful in cases where the two function words do not share a same parent. Formally, this model requires several changes in the design of the hierarchical phrase-based system. 1. To facilitate topological ordering of function words, the hierarchical phrases must be subcategorized with function words. Taking Xb in Fig. 2 as a case in point, subcategorization using function words would yield:6 Xb(4 ≺{) →Xc 4 Xd({) (3) The subcategorization (indicated by the information in parentheses following the nonterminal) propagates the function word 4(are) of Xb to the higher level structure together with the function word {(of) of Xd. This propagation process generalizes to other rules by maintaining the ordering of the function words according to their appearance in the source sentence. Note that the subcategorized nonterminals often resemble genuine syntactic categories, for instance X({) can frequently be interpreted as a noun phrase. 2. To facilitate the computation of the dominance relationship, the coindexing in synchronized rules (indicated by the ∼symbol in Eq. 1) must be expanded to include information not only about the nonterminal correspondences but also about the alignment of the lexical items. For example, adding lexical alignment information to rule Xd would yield: Xd →⟨X1{2Ò3, inventions3 of2 X1⟩ (4) 6The target language side is concealed for clarity. 327 The computation of the dominance relationship using this alignment information will be discussed in detail in the next section. Again taking Xb in Fig. 2 as a case in point, the dominance feature takes the following form: fdom(Xb) ≈dom(d(4, {)|4, {)) (5) dom(d(YL, YR)|YL, YR)) (6) where the probability of 4 ≺{ is estimated according to the probability of d(4, {). In practice, both 4(are) and {(of) may appear together in one same rule. 
In such a case, a dominance score is not calculated since the topological ordering of the two function words is unambiguous. Hence, in our implementation, a dominance score is only calculated at the points where the topological ordering of the hierarchical phrases needs to be resolved, i.e. the two function words always come from two different hierarchical phrases. 5 Parameter Estimation Learning the dominance model involves extracting d values for every pair of neighboring function words in the training bitext. Such statistics are not directly observable in parallel corpora, so estimation is needed. Our estimation method is based on two facts: (1) the topological ordering of hierarchical phrases is tightly coupled with the span of the hierarchical phrases, and (2) the span of a hierarchical phrase at a higher level is always a superset of the span of all other hierarchical phrases at the lower level of its substructure. Thus, to establish soft estimates of dominance counts, we utilize alignment information available in the rule together with the consistent alignment heuristic (Och and Ney, 2004) traditionally used to guess phrase alignments. Specically, we dene the span of a function word as a maximal, consistent alignment in the source language that either starts from or ends with the function word. (Requiring that spans be maximal ensures their uniqueness.) We will refer to such spans as Maximal Consistent Alignments (MCA). Note that each function word has two such Maximal Consistent Alignments: one that ends with the function word (MCAR)and another that starts from the function word (MCAL). Y ′ Y ′′ leftrightdontneiFirst First Care ther Z (and) 4 (are) 0.11 0.16 0.68 0.05 4 (are) { (of) 0.57 0.15 0.06 0.22 Table 1: The distribution of the dominance values of the function words involved in Fig. 1. The value with the highest probability is in bold. Given two function words Y ′ and Y ′′, with Y ′ preceding Y ′′, we dene the value of d by examining the MCAs of the two function words. d(Y ′, Y ′′) =            leftFirst, Y ′ ̸∈MCAR(Y ′′) ∧Y ′′ ∈MCAL(Y ′) rightFirst, Y ′ ∈MCAR(Y ′′) ∧Y ′′ ̸∈MCAL(Y ′) dontCare, Y ′ ∈MCAR(Y ′′) ∧Y ′′ ∈MCAL(Y ′) neither, Y ′ ̸∈MCAR(Y ′′) ∧Y ′′ ̸∈MCAL(Y ′) (6) Fig. 4a illustrates the leftFirst dominance value where the intersection of the MCAs contains only the second function word ({(of)). Fig. 4b illustrates the dontCare value, where the intersection contains both function words. Similarly, rightFirst and neither are represented by an intersection that contains only Y ′, or by an empty intersection, respectively. Once all the d values are counted, the pairwise dominance model of neighboring function words can be estimated simply from counts using maximum likelihood. Table 1 illustrates estimated dominance values that correctly resolve the topological ordering for our running example. 6 Experimental Setup We tested the effect of introducing the pairwise dominance model into hierarchical phrase-based translation on Chinese-to-English and Arabic-toEnglish translation tasks, thus studying its effect in two languages where the use of function words differs signicantly. Following Setiawan et al. (2007), we identify function words as the N most frequent words in the corpus, rather than identifying them according to linguistic criteria; this approximation removes the need for any additional language-specic resources. 
We report results for N = 32, 64, 128, 256, 512, 1024, 2048.7 For 7We observe that even N = 2048 represents less than 1.5% and 0.8% of the words in the Chinese and Arabic vocabularies, respectively. The validity of the frequency-based strategy, relative to linguistically-dened function words, is discussed in Section 8 328 n a n b j j j z j z j the last century of innovations are cell phones and computers ž Z C å 4 Þ  {  Ò j z j z j j j the last century of innovations are cell phones and computers ž Z C å 4 Þ  {  Ò Figure 4: Illustrations for: a) the leftFirst value, and b) the dontCare value. Thickly bordered boxes are MCAs of the function words while solid circles are the alignment points of the function words. The gray boxes are the intersections of the two MCAs. all experiments, we report performance using the BLEU score (Papineni et al., 2002), and we assess statistical signicance using the standard bootstrapping approach introduced by (Koehn, 2004). Chinese-to-English experiments. We trained the system on the NIST MT06 Eval corpus excluding the UN data (approximately 900K sentence pairs). For the language model, we used a 5gram model with modied Kneser-Ney smoothing (Kneser and Ney, 1995) trained on the English side of our training data as well as portions of the Gigaword v2 English corpus. We used the NIST MT03 test set as the development set for optimizing interpolation weights using minimum error rate training (MERT; (Och and Ney, 2002)). We carried out evaluation of the systems on the NIST 2006 evaluation test (MT06) and the NIST 2008 evaluation test (MT08). We segmented Chinese as a preprocessing step using the Harbin segmenter (Zhao et al., 2001). Arabic-to-English experiments. We trained the system on a subset of 950K sentence pairs from the NIST MT08 training data, selected by “subsampling” from the full training data using a method proposed by Kishore Papineni (personal communication). The subsampling algorithm selects sentence pairs from the training data in a way that seeks reasonable representation for all ngrams appearing in the test set. For the language model, we used a 5-gram model trained on the English portion of the whole training data plus portions of the Gigaword v2 corpus. We used the NIST MT03 test set as the development set for optimizing the interpolation weights using MERT. We carried out the evaluation of the systems on the NIST 2006 evaluation set (MT06) and the NIST 2008 evaluation set (MT08). Arabic source text was preprocessed by separating clitics, the deniteness marker, and the future tense marker from their stems. 7 Experimental Results Chinese-to-English experiments. Table 2 summarizes the results of our Chinese-to-English experiments. These results conrm that the pairwise dominance model can signicantly increase performance as measured by the BLEU score, with a consistent pattern of results across the MT06 and MT08 test sets. Modeling N = 32 drops the performance marginally below baseline, suggesting that perhaps there are not enough words for the pairwise dominance model to work with. Doubling the number of words (N = 64) produces a small gain, and dening the pairwise dominance model using N = 128 most frequent words produces a statistically signicant 1-point gain over the baseline (p < 0.01). Larger values of N yield statistically signicant performance above the baseline, but without further improvements over N = 128. Arabic-to-English experiments. Table 3 summarizes the results of our Arabic-to-English experiments. 
This set of experiments shows a pattern consistent with what we observed in Chineseto-English translation, again generally consistent across MT06 and MT08 test sets although modeling a small number of lexical items (N = 32) brings a marginal improvement over the baseline. In addition, we again nd that the pairwise dominance model with N = 128 produces the most signicant gain over the baseline in the MT06, although, interestingly, modeling a much larger number of lexical items (N = 2048) yields the strongest improvement for the MT08 test set. 329 MT06 MT08 baseline 30.58 24.08 +dom(N = 32) 30.43 23.91 +dom(N = 64) 30.96 24.45 +dom(N = 128) 31.59 24.91 +dom(N = 256) 31.24 24.26 +dom(N = 512) 31.33 24.39 +dom(N = 1024) 31.22 24.79 +dom(N = 2048) 30.75 23.92 Table 2: Experimental results on Chinese-toEnglish translation with the pairwise dominance model (dom) of different N. The baseline (the rst line) is the original hierarchical phrase-based system. Statistically signicant results (p < 0.01) over the baseline are in bold. MT06 MT08 baseline 41.56 40.06 +dom(N = 32) 41.66 40.26 +dom(N = 64) 42.03 40.73 +dom(N = 128) 42.66 41.08 +dom(N = 256) 42.28 40.69 +dom(N = 512) 41.97 40.95 +dom(N = 1024) 42.05 40.55 +dom(N = 2048) 42.48 41.47 Table 3: Experimental results on Arabic-toEnglish translation with the pairwise dominance model (dom) of different N. The baseline (the rst line) is the original hierarchical phrase-based system. Statistically signicant results over the baseline (p < 0.01) are in bold. 8 Discussion and Future Work The results in both sets of experiments show consistently that we have achieved a signicant gains by modeling the topological ordering of function words. When we visually inspect and compare the outputs of our system with those of the baseline, we observe that improved BLEU score often corresponds to visible improvements in the subjective translation quality. For example, the translations for the Chinese sentence “<1 ‰2 :3 ‹4 ó5 ›6 8ñ7 8 9 À10 õ11 È12 ?13”, taken from Chinese MT06 test set, are as follows (co-indexing subscripts represent reconstructed word alignments): • baseline: “military1 intelligence2 under observation8 in5 u.s.6 air raids7 :3 iran4 to9 how11 long12 ?13 ” • +dom(N=128): “ military1 survey2 :3 how11 long12 iran4 under8 air strikes7 of the u.s6 can9 hold out10 ?13 ” In addition to some lexical translation errors (e.g. ›6 should be translated to U.S. Army), the baseline system also makes mistakes in reordering. The most obvious, perhaps, is its failure to capture the wh-movement involving the interrogative word õ11 (how); this should move to the beginning of the translated clause, consistent with English wh-fronting as opposed to Chinese wh in situ. The pairwise dominance model helps, since the dominance value between the interrogative word and its previous function word, the modal verb 9(can) in the baseline system's output, is neither, rather than rightFirst as in the better translation. The fact that performance tends to be best using a frequency threshold of N = 128 strikes us as intuitively sensible, given what we know about word frequency rankings.8 In English, for example, the most frequent 128 words include virtually all common conjunctions, determiners, prepositions, auxiliaries, and complementizers – the crucial elements of “syntactic glue” that characterize the types of linguistic phrases and the ordering relationships between them – and a very small proportion of content words. 
Using Adam Kilgarriff's lemmatized frequency list from the British National Corpus, http://www.kilgarriff.co.uk/bnc-readme.html, the most frequent 128 words in English are heavily dominated by determiners, “functional” adverbs like not and when, “particle” adverbs like up, prepositions, pronouns, and conjunctions, with some arguably “functional” auxiliary and light verbs like be, have, do, give, make, take. Content words are generally limited to a small number of frequent verbs like think and want and a very small handful of frequent nouns. In contrast, ranks 129-256 are heavily dominated by the traditional content-word categories, i.e. nouns, verbs, adjectives and adverbs, with a small number of left-over function words such as less frequent conjunctions while, when, and although. Consistent with these observations for English, the empirical results for Chinese suggest that our 8In fact, we initially simply chose N = 128 for our experimentation, and then did runs with alternative N to conrm our intuitions. 330 approximation of function words using word frequency is reasonable. Using a list of approximately 900 linguistically identied function words in Chinese extracted from (Howard, 2002), we observe that that the performance drops when increasing N above 128 corresponds to a large increase in the number of non-function words used in the model. For example, with N = 2048, the proportion of non-function words is 88%, compared to 60% when N = 128.9 One natural extension of this work, therefore, would be to tighten up our characterization of function words, whether statistically, distributionally, or simply using manually created resources that exist for many languages. As a rst step, we did a version of the Chinese-English experiment using the list of approximately 900 genuine function words, testing on the Chinese MT06 set. Perhaps surprisingly, translation performance, 30.90 BLEU, was around the level we obtained when using frequency to approximate function words at N = 64. However, we observe that many of the words in the linguistically motivated function word list are quite infrequent; this suggests that data sparseness may be an additional factor worth investigating. Finally, although we believe there are strong motivations for focusing on the role of function words in reordering, there may well be value in extending the dominance model to include content categories. Verbs and many nouns have subcategorization properties that may inuence phrase ordering, for example, and this may turn out to explain the increase in Arabic-English performance for N = 2048 using the MT08 test set. More generally, the approach we are taking can be viewed as a way of selectively lexicalizing the automatically extracted grammar, and there is a large range of potentially interesting choices in how such lexicalization could be done. 9 Related Work In the introduction, we discussed Chiang's (2005) constituency feature, related ideas explored by Marton and Resnik (2008) and Chiang et al. (2008), and the target-side variation investigated by Zollman et al. (2006). These methods differ from each other mainly in terms of the specic lin9We plan to do corresponding experimentation and analysis for Arabic once we identify a suitable list of manually identied function words. guistic knowledge being used and on which side the constraints are applied. Shen et al. (2008) proposed to use linguistic knowledge expressed in terms of a dependency grammar, instead of a syntactic constituency grammar. Villar et al. 
(2008) attempted to use syntactic constituency on both the source and target languages in the same spirit as the constituency feature, along with some simple patternbased heuristics – an approach also investigated by Iglesias et al. (2009). Aiming at improving the selection of derivations, Zhou et al. (2008) proposed prior derivation models utilizing syntactic annotation of the source language, which can be seen as smoothing the probabilities of hierarchical phrase features. A key point is that the model we have introduced in this paper does not require the linguistic supervision needed in most of this prior work. We estimate the parameters of our model from parallel text without any linguistic annotation. That said, we would emphasize that our approach is, in fact, motivated in linguistic terms by the role of function words in natural language syntax. 10 Conclusion We have presented a pairwise dominance model to address reordering issues that are not handled particularly well by standard hierarchical phrasebased modeling. In particular, the minimal linguistic commitment in hierarchical phrase-based models renders them susceptible to overgeneration of reordering choices. Our proposal handles the overgeneration problem by identifying hierarchical phrases with function words and by using function word relationships to incorporate soft constraints on topological orderings. Our experimental results demonstrate that introducing the pairwise dominance model into hierarchical phrase-based modeling improves performance signicantly in large-scale Chinese-to-English and Arabic-to-English translation tasks. Acknowledgments This research was supported in part by the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR001106-2-001. Any opinions, ndings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reect the view of the sponsors. 331 References David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 224–233, Honolulu, Hawaii, October. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 263–270, Ann Arbor, Michigan, June. Association for Computational Linguistics. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Jiaying Howard. 2002. A Student Handbook for Chinese Function Words. The Chinese University Press. Gonzalo Iglesias, Adria de Gispert, Eduardo R. Banga, and William Byrne. 2009. Rule ltering by pattern for efcient hierarchical translation. In Proceedings of the 12th Conference of the European Chapter of the Association of Computational Linguistics (to appear). R. Kneser and H. Ney. 1995. Improved backingoff for m-gram language modeling. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing95, pages 181–184, Detroit, MI, May. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127–133, Edmonton, Alberta, Canada, May. Association for Computational Linguistics. Philipp Koehn. 2004. 
Statistical signicance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Yuval Marton and Philip Resnik. 2008. Soft syntactic constraints for hierarchical phrased-based translation. In Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1003– 1011, Columbus, Ohio, June. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 295–302, Philadelphia, Pennsylvania, USA, July. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Hendra Setiawan, Min-Yen Kan, and Haizhou Li. 2007. Ordering phrases with function words. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712– 719, Prague, Czech Republic, June. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of The 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 577–585, Columbus, Ohio, June. David Vilar, Daniel Stein, and Hermann Ney. 2008. Analysing soft syntax features and heuristics for hierarchical phrase based machine translation. International Workshop on Spoken Language Translation 2008, pages 190–197, October. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404, Sep. Tiejun Zhao, Yajuan Lv, Jianmin Yao, Hao Yu, Muyun Yang, and Fang Liu. 2001. Increasing accuracy of chinese segmentation with strategy of multi-step processing. Journal of Chinese Information Processing (Chinese Version), 1:13–18. Bowen Zhou, Bing Xiang, Xiaodan Zhu, and Yuqing Gao. 2008. Prior derivation models for formally syntax-based translation using linguistically syntactic parsing and tree kernels. In Proceedings of the ACL-08: HLT Second Workshop on Syntax and Structure in Statistical Translation (SSST-2), pages 19–27, Columbus, Ohio, June. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings on the Workshop on Statistical Machine Translation, pages 138–141, New York City, June. 332
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 333–341, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Phrase-Based Statistical Machine Translation as a Traveling Salesman Problem Mikhail Zaslavskiy∗ Marc Dymetman Nicola Cancedda Mines ParisTech, Institut Curie Xerox Research Centre Europe 77305 Fontainebleau, France 38240 Meylan, France [email protected] {marc.dymetman,nicola.cancedda}@xrce.xerox.com Abstract An efficient decoding algorithm is a crucial element of any statistical machine translation system. Some researchers have noted certain similarities between SMT decoding and the famous Traveling Salesman Problem; in particular (Knight, 1999) has shown that any TSP instance can be mapped to a sub-case of a word-based SMT model, demonstrating NP-hardness of the decoding task. In this paper, we focus on the reverse mapping, showing that any phrase-based SMT decoding problem can be directly reformulated as a TSP. The transformation is very natural, deepens our understanding of the decoding problem, and allows direct use of any of the powerful existing TSP solvers for SMT decoding. We test our approach on three datasets, and compare a TSP-based decoder to the popular beam-search algorithm. In all cases, our method provides competitive or better performance. 1 Introduction Phrase-based systems (Koehn et al., 2003) are probably the most widespread class of Statistical Machine Translation systems, and arguably one of the most successful. They use aligned sequences of words, called biphrases, as building blocks for translations, and score alternative candidate translations for the same source sentence based on a log-linear model of the conditional probability of target sentences given the source sentence: p(T, a|S) = 1 ZS exp X k λkhk(S, a, T) (1) where the hk are features, that is, functions of the source string S, of the target string T, and of the ∗This work was conducted during an internship at XRCE. alignment a, where the alignment is a representation of the sequence of biphrases that where used in order to build T from S; The λk’s are weights and ZS is a normalization factor that guarantees that p is a proper conditional probability distribution over the pairs (T, A). Some features are local, i.e. decompose over biphrases and can be precomputed and stored in advance. These typically include forward and reverse phrase conditional probability features log p(˜t|˜s) as well as log p(˜s|˜t), where ˜s is the source side of the biphrase and ˜t the target side, and the so-called “phrase penalty” and “word penalty” features, which count the number of phrases and words in the alignment. Other features are non-local, i.e. depend on the order in which biphrases appear in the alignment. Typical non-local features include one or more n-gram language models as well as a distortion feature, measuring by how much the order of biphrases in the candidate translation deviates from their order in the source sentence. Given such a model, where the λi’s have been tuned on a development set in order to minimize some error rate (see e.g. (Lopez, 2008)), together with a library of biphrases extracted from some large training corpus, a decoder implements the actual search among alternative translations: (a∗, T ∗) = arg max (a,T) P(T, a|S). (2) The decoding problem (2) is a discrete optimization problem. Usually, it is very hard to find the exact optimum and, therefore, an approximate solution is used. 
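To make the model of Eq. 1 concrete, the following sketch scores one candidate (alignment, translation) pair of a fixed source sentence under a log-linear model with local and non-local features; the Biphrase structure, the feature names, the lm_logprob interface, and the simplified biphrase-level distortion term are illustrative assumptions rather than the actual feature set of any particular decoder.

```python
# Minimal sketch of the (unnormalized) log-linear score of Eq. 1 for one
# candidate pair (alignment a, translation T) of a fixed source sentence S.
# Biphrase fields, feature names, and the lm_logprob interface are assumptions.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Biphrase:
    src: str                        # source side (s~)
    tgt: str                        # target side (t~)
    local_scores: Dict[str, float]  # precomputed local features, e.g. log p(t~|s~)


def loglinear_score(alignment: List[Biphrase],
                    src_positions: List[int],
                    weights: Dict[str, float],
                    lm_logprob) -> float:
    total = 0.0
    # Local features decompose over biphrases and can be precomputed and stored.
    for bp in alignment:
        for name, value in bp.local_scores.items():
            total += weights.get(name, 0.0) * value
    # Non-local features: an n-gram LM over the concatenated target side ...
    total += weights.get("lm", 0.0) * lm_logprob(" ".join(bp.tgt for bp in alignment))
    # ... and a (simplified, biphrase-level) distortion penalty measuring how far
    # the order of consumption deviates from the source order.
    distortion = sum(abs(src_positions[k + 1] - src_positions[k] - 1)
                     for k in range(len(src_positions) - 1))
    total -= weights.get("distortion", 0.0) * distortion
    return total
# Decoding (Eq. 2) is the argmax of this score over all candidate pairs (a, T),
# which is the combinatorial problem the paper reformulates as a TSP.
```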
Currently, most decoders are based on some variant of a heuristic left-to-right search, that is, they attempt to build a candidate translation (a, T) incrementally, from left to right, extending the current partial translation at each step with a new biphrase, and computing a score composed of two contributions: one for the known elements of the partial translation so far, and one a heuristic 333 estimate of the remaining cost for completing the translation. The variant which is mostly used is a form of beam-search, where several partial candidates are maintained in parallel, and candidates for which the current score is too low are pruned in favor of candidates that are more promising. We will see in the next section that some characteristics of beam-search make it a suboptimal choice for phrase-based decoding, and we will propose an alternative. This alternative is based on the observation that phrase-based decoding can be very naturally cast as a Traveling Salesman Problem (TSP), one of the best studied problems in combinatorial optimization. We will show that this formulation is not only a powerful conceptual device for reasoning on decoding, but is also practically convenient: in the same amount of time, off-the-shelf TSP solvers can find higher scoring solutions than the state-of-the art beam-search decoder implemented in Moses (Hoang and Koehn, 2008). 2 Related work Beam-search decoding In beam-search decoding, candidate translation prefixes are iteratively extended with new phrases. In its most widespread variant, stack decoding, prefixes obtained by consuming the same number of source words, no matter which, are grouped together in the same stack1 and compete against one another. Threshold and histogram pruning are applied: the former consists in dropping all prefixes having a score lesser than the best score by more than some fixed amount (a parameter of the algorithm), the latter consists in dropping all prefixes below a certain rank. While quite successful in practice, stack decoding presents some shortcomings. A first one is that prefixes obtained by translating different subsets of source words compete against one another. In one early formulation of stack decoding for SMT (Germann et al., 2001), the authors indeed proposed to lazily create one stack for each subset of source words, but acknowledged issues with the potential combinatorial explosion in the number of stacks. This problem is reduced by the use of heuristics for estimating the cost of translating the remaining part of the source sentence. How1While commonly adopted in the speech and SMT communities, this is a bit of a misnomer, since the used data structures are priority queues, not stacks. ever, this solution is only partially satisfactory. On the one hand, heuristics should be computationally light, much lighter than computing the actual best score itself, while, on the other hand, the heuristics should be tight, as otherwise pruning errors will ensue. There is no clear criterion to guide in this trade-off. Even when good heuristics are available, the decoder will show a bias towards putting at the beginning the translation of a certain portion of the source, either because this portion is less ambiguous (i.e. its translation has larger conditional probability) or because the associated heuristics is less tight, hence more optimistic. 
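The stack-decoding loop just described, with threshold and histogram pruning, can be sketched as follows; the hypothesis objects (carrying a partial score and the number of consumed source words), the expand function, and the future_cost heuristic are hypothetical placeholders, not Moses' actual data structures.

```python
# Schematic stack decoding with threshold and histogram pruning. Hypotheses are
# assumed to expose `h.score` (partial score) and `h.n_covered` (consumed source
# words); `expand` and `future_cost` are hypothetical placeholders for biphrase
# expansion and the heuristic estimate of the remaining translation cost.
def stack_decode(initial_hyp, n_source_words, expand, future_cost,
                 beam_width=100, threshold=1.0):
    # One "stack" (really a scored pool) per number of consumed source words.
    stacks = [[] for _ in range(n_source_words + 1)]
    stacks[0].append(initial_hyp)
    for i in range(n_source_words):
        # Prune the current stack before expanding it.
        pool = sorted(stacks[i], key=lambda h: h.score + future_cost(h),
                      reverse=True)
        if pool:
            best = pool[0].score + future_cost(pool[0])
            # Threshold pruning drops hypotheses too far below the best one;
            # histogram pruning keeps at most `beam_width` of them.
            pool = [h for h in pool
                    if h.score + future_cost(h) >= best - threshold]
        stacks[i] = pool[:beam_width]
        for hyp in stacks[i]:
            for new_hyp in expand(hyp):
                stacks[new_hyp.n_covered].append(new_hyp)
    completed = stacks[n_source_words]
    return max(completed, key=lambda h: h.score) if completed else None
```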
Finally, since the translation is built left-to-right the decoder cannot optimize the search by taking advantage of highly unambiguous and informative portions that should be best translated far from the beginning. All these reasons motivate considering alternative decoding strategies. Word-based SMT and the TSP As already mentioned, the similarity between SMT decoding and TSP was recognized in (Knight, 1999), who focussed on showing that any TSP can be reformulated as a sub-class of the SMT decoding problem, proving that SMT decoding is NP-hard. Following this work, the existence of many efficient TSP algorithms then inspired certain adaptations of the underlying techniques to SMT decoding for word-based models. Thus, (Germann et al., 2001) adapt a TSP subtour elimination strategy to an IBM-4 model, using generic Integer Programming techniques. The paper comes close to a TSP formulation of decoding with IBM-4 models, but does not pursue this route to the end, stating that “It is difficult to convert decoding into straight TSP, but a wide range of combinatorial optimization problems (including TSP) can be expressed in the more general framework of linear integer programming”. By employing generic IP techniques, it is however impossible to rely on the variety of more efficient both exact and approximate approaches which have been designed specifically for the TSP. In (Tillmann and Ney, 2003) and (Tillmann, 2006), the authors modify a certain Dynamic Programming technique used for TSP for use with an IBM4 word-based model and a phrase-based model respectively. However, to our knowledge, none of these works has proposed a direct reformulation of these SMT models as TSP instances. We believe we are the first to do so, working in our case 334 with the mainstream phrase-based SMT models, and therefore making it possible to directly apply existing TSP solvers to SMT. 3 The Traveling Salesman Problem and its variants In this paper the Traveling Salesman Problem appears in four variants: STSP. The most standard, and most studied, variant is the Symmetric TSP: we are given a nondirected graph G on N nodes, where the edges carry real-valued costs. The STSP problem consists in finding a tour of minimal total cost, where a tour (also called Hamiltonian Circuit) is a “circular” sequence of nodes visiting each node of the graph exactly once; ATSP. The Asymmetric TSP, or ATSP, is a variant where the underlying graph G is directed and where, for i and j two nodes of the graph, the edges (i,j) and (j,i) may carry different costs. SGTSP. The Symmetric Generalized TSP, or SGTSP: given a non-oriented graph G of |G| nodes with edges carrying real-valued costs, given a partition of these |G| nodes into m non-empty, disjoint, subsets (called clusters), find a circular sequence of m nodes of minimal total cost, where each cluster is visited exactly once. AGTSP. The Asymmetric Generalized TSP, or AGTSP: similar to the SGTSP, but G is now a directed graph. The STSP is often simply denoted TSP in the literature, and is known to be NP-hard (Applegate et al., 2007); however there has been enormous interest in developing efficient solvers for it, both exact and approximate. Most of existing algorithms are designed for STSP, but ATSP, SGTSP and AGTSP may be reduced to STSP, and therefore solved by STSP algorithms. 3.1 Reductions AGTSP→ATSP→STSP The transformation of the AGTSP into the ATSP, introduced by (Noon and Bean, 1993)), is illustrated in Figure (1). In this diagram, we assume that Y1, . . . 
, YK are the nodes of a given cluster, while X and Z are arbitrary nodes belonging to other clusters. In the transformed graph, we introduce edges between the Yi’s in order to form a cycle as shown in the figure, where each edge has a large negative cost −K. We leave alone the incoming edge to Yi from X, but the outgoing edge Figure 1: AGTSP→ATSP. from Yi to X has its origin changed to Yi−1. A feasible tour in the original AGTSP problem passing through X, Yi, Z will then be “encoded” as a tour of the transformed graph that first traverses X , then traverses Yi, . . . , YK, . . . , Yi−1, then traverses Z (this encoding will have the same cost as the original cost, minus (k −1)K). Crucially, if K is large enough, then the solver for the transformed ATSP graph will tend to traverse as many K edges as possible, meaning that it will traverse exactly k −1 such edges in the cluster, that is, it will produce an encoding of some feasible tour of the AGTSP problem. As for the transformation ATSP→STSP, several variants are described in the literature, e.g. (Applegate et al., 2007, p. 126); the one we use is from (Wikipedia, 2009) (not illustrated here for lack of space). 3.2 TSP algorithms TSP is one of the most studied problems in combinatorial optimization, and even a brief review of existing approaches would take too much place. Interested readers may consult (Applegate et al., 2007; Gutin, 2003) for good introductions. One of the best existing TSP solvers is implemented in the open source Concorde package (Applegate et al., 2005). Concorde includes the fastest exact algorithm and one of the most efficient implementations of the Lin-Kernighan (LK) heuristic for finding an approximate solution. LK works by generating an initial random feasible solution for the TSP problem, and then repeatedly identifying an ordered subset of k edges in the current tour and an ordered subset of k edges not included in the tour such that when they are swapped the objective function is improved. This is somewhat 335 reminiscent of the Greedy decoding of (Germann et al., 2001), but in LK several transformations can be applied simultaneously, so that the risk of being stuck in a local optimum is reduced (Applegate et al., 2007, chapter 15). As will be shown in the next section, phrasebased SMT decoding can be directly reformulated as an AGTSP. Here we use Concorde through first transforming AGTSP into STSP, but it might also be interesting in the future to use algorithms specifically designed for AGTSP, which could improve efficiency further (see Conclusion). 4 Phrase-based Decoding as TSP In this section we reformulate the SMT decoding problem as an AGTSP. We will illustrate the approach through a simple example: translating the French sentence “cette traduction automatique est curieuse” into English. We assume that the relevant biphrases for translating the sentence are as follows: ID source target h cette this t traduction translation ht cette traduction this translation mt traduction automatique machine translation a automatique automatic m automatique machine i est is s curieuse strange c curieuse curious Under this model, we can produce, among others, the following translations: h · mt · i · s this machine translation is strange h · c · t · i · a this curious translation is automatic ht · s · i · a this translation strange is automatic where we have indicated on the left the ordered sequence of biphrases that leads to each translation. We now formulate decoding as an AGTSP, in the following way. 
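Before spelling out that formulation in detail, here is a hedged sketch of the Noon-Bean cluster-to-cycle reduction from Section 3.1, which is what makes the resulting AGTSP solvable with standard ATSP/STSP machinery; the dict-of-dicts cost representation, the use of math.inf for forbidden edges, and the treatment of singleton clusters are assumptions made for illustration.

```python
# Sketch of the Noon-Bean AGTSP -> ATSP transformation of Section 3.1.
# `cost[u][v]` holds the asymmetric AGTSP cost between nodes of different
# clusters; `clusters` is a list of node lists; K is a large positive constant.
import math

def agtsp_to_atsp(cost, clusters, K):
    nodes = [v for cluster in clusters for v in cluster]
    cluster_of = {v: c for c in clusters for v in c}
    new_cost = {u: {v: math.inf for v in nodes if v != u} for u in nodes}
    for cluster in clusters:
        k = len(cluster)
        for i, y in enumerate(cluster):
            nxt = cluster[(i + 1) % k]
            if k > 1:
                # Cheap intra-cluster edge (cost -K): forces the tour to
                # traverse all copies of the cluster consecutively.
                new_cost[y][nxt] = -K
            for x in nodes:
                if cluster_of[x] is cluster:
                    continue
                new_cost[x][y] = cost[x][y]    # incoming edges are kept as-is
                new_cost[y][x] = cost[nxt][x]  # the edge leaving y is re-rooted:
                                               # it carries the cost of the
                                               # original edge leaving nxt
    return new_cost
```

A tour of the transformed graph that enters a cluster at some node, walks the cheap internal cycle, and exits from the predecessor of the entry node encodes a feasible AGTSP tour with the same cost up to the constant (k-1)K, as described above.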
The graph nodes are all the possible pairs (w, b), where w is a source word in the source sentence s and b is a biphrase containing this source word. The graph clusters are the subsets of the graph nodes that share a common source word w. The costs of a transition between nodes M and N of the graph are defined as follows: (a) If M is of the form (w, b) and N of the form (w′, b), in which b is a single biphrase, and w and w′ are consecutive words in b, then the transition cost is 0: once we commit to using the first word of b, there is no additional cost for traversing the other source words covered by b. (b) If M = (w, b), where w is the rightmost source word in the biphrase b, and N = (w′, b′), where w′ ̸= w is the leftmost source word in b′, then the transition cost corresponds to the cost of selecting b′ just after b; this will correspond to “consuming” the source side of b′ after having consumed the source side of b (whatever their relative positions in the source sentence), and to producing the target side of b′ directly after the target side of b; the transition cost is then the addition of several contributions (weighted by their respective λ (not shown), as in equation 1): • The cost associated with the features local to b in the biphrase library; • The “distortion” cost of consuming the source word w′ just after the source word w: |pos(w′) −pos(w) −1|, where pos(w) and pos(w′) are the positions of w and w′ in the source sentence. • The language model cost of producing the target words of b′ right after the target words of b; with a bigram language model, this cost can be precomputed directly from b and b′. This restriction to bigram models will be removed in Section 4.1. (c) In all other cases, the transition cost is infinite, or, in other words, there is no edge in the graph between M and N. A special cluster containing a single node (denoted by $-$$ in the figures), and corresponding to special beginning-of-sentence symbols must also be included: the corresponding edges and weights can be worked out easily. Figures 2 and 3 give some illustrations of what we have just described. 4.1 From Bigram to N-gram LM Successful phrase-based systems typically employ language models of order higher than two. However, our models so far have the following important “Markovian” property: the cost of a path is additive relative to the costs of transitions. For example, in the example of Figure 3, the cost of this · machine translation · is · strange, can only take into account the conditional probability of the word strange relative to the word is, but not relative to the words translation and is. If we want to extend the power of the model to general n-gram language models, and in particular to the 3-gram 336 Figure 2: Transition graph for the source sentence cette traduction automatique est curieuse. Only edges entering or exiting the node traduction −mt are shown. The only successor to [traduction − mt] is [automatique −mt], and [cette −ht] is not a predecessor of [traduction −mt]. Figure 3: A GTSP tours is illustrated, corresponding to the displayed output. case (on which we concentrate here, but the techniques can be easily extended to the general case), the following approach can be applied. Compiling Out for Trigram models This approach consists in “compiling out” all biphrases with a target side of only one word. We replace each biphrase b with single-word target side by “extended” biphrases b1, . . . 
, br, which are “concatenations” of b and some other biphrase b′ in the library.2 To give an example, consider that we: (1) remove from the biphrase library the biphrase i, which has a single word target, and (2) add to the library the extended biphrases mti, ti, si, . . ., that is, all the extended biphrases consisting of the concatenation of a biphrase in the library with i, then it is clear that these extended biphrases will provide enough context to compute a trigram probability for the target word produced immediately next (in the examples, for the words strange, 2In the figures, such “concatenations” are denoted by [b′ · b] ; they are interpreted as encapsulations of first consuming the source side of b′, whether or not this source side precedes the source side of b in the source sentence, producing the target side of b′, consuming the source side of b, and producing the target side of b immediately after that of b′. Figure 4: Compiling-out of biphrase i: (est,is). automatic and automatic respectively). If we do that exhaustively for all biphrases (relevant for the source sentence at hand) that, like i, have a singleword target, we will obtain a representation that allows a trigram language model to be computed at each point. The situation becomes clearer by looking at Figure 4, where we have only eliminated the biphrase i, and only shown some of the extended biphrases that now encapsulate i, and where we show one valid circuit. Note that we are now able to associate with the edge connecting the two nodes (est, mti) and (curieuse, s) a trigram cost because mti provides a large enough target context. While this exhaustive “compiling out” method works in principle, it has a serious defect: if for the sentence to be translated, there are m relevant biphrases, among which k have single-word targets, then we will create on the order of km extended biphrases, which may represent a significant overhead for the TSP solver, as soon as k is large relative to m, which is typically the case. The problem becomes even worse if we extend the compiling-out method to n-gram language models with n > 3. In the Future Work section below, we describe a powerful approach for circumventing this problem, but with which we have not experimented yet. 5 Experiments 5.1 Monolingual word re-ordering In the first series of experiments we consider the artificial task of reconstructing the original word order of a given English sentence. First, we randomly permute words in the sentence, and then we try to reconstruct the original order by max337 10 0 10 2 10 4 −0.8 −0.6 −0.4 −0.2 0 0.2 Time (sec) Decoder score BEAM−SEARCH TSP 10 0 10 2 10 4 −0.4 −0.3 −0.2 −0.1 0 0.1 Time (sec) Decoder score BEAM−SEARCH TSP (a) (b) (c) (d) Figure 5: (a), (b): LM and BLEU scores as functions of time for a bigram LM; (c), (d): the same for a trigram LM. The x axis corresponds to the cumulative time for processing the test set; for (a) and (c), the y axis corresponds to the mean difference (over all sentences) between the lm score of the output and the lm score of the reference normalized by the sentence length N: (LM(ref)-LM(true))/N. The solid line with star marks corresponds to using beam-search with different pruning thresholds, which result in different processing times and performances. The cross corresponds to using the exact-TSP decoder (in this case the time to the optimal solution is not under the user’s control). imizing the LM score over all possible permutations. 
The reconstruction procedure may be seen as a translation problem from “Bad English” to “Good English”. Usually the LM score is used as one component of a more complex decoder score which also includes biphrase and distortion scores. But in this particular “translation task” from bad to good English, we consider that all “biphrases” are of the form e −e, where e is an English word, and we do not take into account any distortion: we only consider the quality of the permutation as it is measured by the LM component. Since for each “source word” e, there is exactly one possible “biphrase” e −e each cluster of the Generalized TSP representation of the decoding problem contains exactly one node; in other terms, the Generalized TSP in this situation is simply a standard TSP. Since the decoding phase is then equivalent to a word reordering, the LM score may be used to compare the performance of different decoding algorithms. Here, we compare three different algorithms: classical beamsearch (Moses); a decoder based on an exact TSP solver (Concorde); a decoder based on an approximate TSP solver (Lin-Kernighan as implemented in the Concorde solver) 3. In the Beam-search and the LK-based TSP solver we can control the trade-off between approximation quality and running time. To measure re-ordering quality, we use two scores. The first one is just the “internal” LM score; since all three algorithms attempt to maximize this score, a natural evaluation procedure is to plot its value versus the elapsed time. The sec3Both TSP decoders may be used with/or without a distortion limit; in our experiments we do not use this parameter. ond score is BLEU (Papineni et al., 2001), computed between the reconstructed and the original sentences, which allows us to check how well the quality of reconstruction correlates with the internal score. The training dataset for learning the LM consists of 50000 sentences from NewsCommentary corpus (Callison-Burch et al., 2008), the test dataset for word reordering consists of 170 sentences, the average length of test sentences is equal to 17 words. Bigram based reordering. First we consider a bigram Language Model and the algorithms try to find the re-ordering that maximizes the LM score. The TSP solver used here is exact, that is, it actually finds the optimal tour. Figures 5(a,b) present the performance of the TSP and Beamsearch based methods. Trigram based reordering. Then we consider a trigram based Language Model and the algorithms again try to maximize the LM score. The trigram model used is a variant of the exhaustive compiling-out procedure described in Section 4.1. Again, we use an exact TSP solver. Looking at Figure 5a, we see a somewhat surprising fact: the cross and some star points have positive y coordinates! This means that, when using a bigram language model, it is often possible to reorder the words of a randomly permuted reference sentence in such a way that the LM score of the reordered sentence is larger than the LM of the reference. A second notable point is that the increase in the LM-score of the beam-search with time is steady but very slow, and never reaches the level of performance obtained with the exact-TSP procedure, even when increasing the time by sev338 eral orders of magnitude. Also to be noted is that the solution obtained by the exact-TSP is provably the optimum, which is almost never the case of the beam-search procedure. 
In Figure 5b, we report the BLEU score of the reordered sentences in the test set relative to the original reference sentences. Here we see that the exact-TSP outputs are closer to the references in terms of BLEU than the beam-search solutions. Although the TSP output does not recover the reference sentences (it produces sentences with a slightly higher LM score than the references), it does reconstruct the references better than the beam-search. The experiments with trigram language models (Figures 5(c,d)) show similar trends to those with bigrams. 5.2 Translation experiments with a bigram language model In this section we consider two real translation tasks, namely, translation from English to French, trained on Europarl (Koehn et al., 2003) and translation from German to Spanish training on the NewsCommentary corpus. For Europarl, the training set includes 2.81 million sentences, and the test set 500. For NewsCommentary the training set is smaller: around 63k sentences, with a test set of 500 sentences. Figure 6 presents Decoder and Bleu scores as functions of time for the two corpuses. Since in the real translation task, the size of the TSP graph is much larger than in the artificial reordering task (in our experiments the median size of the TSP graph was around 400 nodes, sometimes growing up to 2000 nodes), directly applying the exact TSP solver would take too long; instead we use the approximate LK algorithm and compare it to Beam-Search. The efficiency of the LK algorithm can be significantly increased by using a good initialization. To compare the quality of the LK and Beam-Search methods we take a rough initial solution produced by the Beam-Search algorithm using a small value for the stack size and then use it as initial point, both for the LK algorithm and for further Beam-Search optimization (where as before we vary the Beam-Search thresholds in order to trade quality for time). In the case of the Europarl corpus, we observe that LK outperforms Beam-Search in terms of the Decoder score as well as in terms of the BLEU score. Note that the difference between the two algorithms increases steeply at the beginning, which means that we can significantly increase the quality of the Beam-Search solution by using the LK algorithm at a very small price. In addition, it is important to note that the BLEU scores obtained in these experiments correspond to feature weights, in the log-linear model (1), that have been optimized for the Moses decoder, but not for the TSP decoder: optimizing these parameters relatively to the TSP decoder could improve its BLEU scores still further. On the News corpus, again, LK outperforms Beam-Search in terms of the Decoder score. The situation with the BLEU score is more confuse. Both algorithms do not show any clear score improvement with increasing running time which suggests that the decoder’s objective function is not very well correlated with the BLEU score on this corpus. 6 Future Work In section 4.1, we described a general “compiling out” method for extending our TSP representation to handling trigram and N-gram language models, but we noted that the method may lead to combinatorial explosion of the TSP graph. While this problem was manageable for the artificial monolingual word re-ordering (which had only one possible translation for each source word), it becomes unwieldy for the real translation experiments, which is why in this paper we only considered bigram LMs for these experiments. 
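As a concrete illustration of the Section 5.1 setup, the sketch below builds the cost matrix of the plain TSP that encodes bigram-LM word re-ordering (each word is its own cluster, so the generalized TSP collapses to a standard one); the bigram_logprob interface and the beginning-of-sentence symbol are assumptions.

```python
# Sketch of the Section 5.1 reduction: bigram-LM word re-ordering as a plain TSP.
# Node 0 stands for the beginning-of-sentence symbol and node i > 0 for the i-th
# word of the permuted input; `bigram_logprob(prev_word, word)` is an assumed
# interface to the language model.
def reordering_tsp_costs(words, bigram_logprob, bos="<s>"):
    n = len(words)
    labels = [bos] + list(words)
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(1, n + 1):
            if i != j:
                # Minimizing the tour cost is the same as maximizing the bigram
                # LM score of the word order induced by the tour.
                cost[i][j] = -bigram_logprob(labels[i], labels[j])
    return cost
# A tour 0 -> pi(1) -> ... -> pi(n) -> 0 encodes the re-ordered sentence
# words[pi(1)-1] ... words[pi(n)-1]; edges returning to node 0 keep cost 0, so an
# exact or LK-style solver recovers the ordering with the highest LM score.
```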
However, we know how to handle this problem in principle, and we now describe a method that we plan to experiment with in the future. To avoid the large number of artificial biphrases as in 4.1, we perform an adaptive selection. Let us suppose that (w, b) is a SMT decoding graph node, where b is a biphrase containing only one word on the target side. On the first step, when we evaluate the traveling cost from (w, b) to (w′, b′), we take the language model component equal to min b′′̸=b′,b −log p(b′.v|b.e, b′′.e), where b′.v represents the first word of the b′ target side, b.e is the only word of the b target side, and b′′.e is the last word of the b′′ target size. This procedure underestimates the total cost of tour passing through biphrases that have a single-word target. Therefore if the optimal tour passes only through biphrases with more than one 339 10 3 10 4 10 5 −273 −272.5 −272 −271.5 −271 Time (sec) Decoder score BEAM−SEARCH TSP (LK) 10 3 10 4 10 5 0.18 0.185 0.19 Time (sec) BLEU score BEAM−SEARCH TSP (LK) 10 3 10 4 −414 −413.8 −413.6 −413.4 −413.2 −413 Time (sec) Decoder score TSP (LK) BEAM−SEARCH 10 3 10 4 0.242 0.243 0.244 0.245 Time (sec) BLEU score TSP (LK) BEAM−SEARCH (a) (b) (c) (d) Figure 6: (a), (b): Europarl corpus, translation from English to French; (c),(d): NewsCommentary corpus, translation from German to Spanish. Average value of the decoder and the BLEU scores (over 500 test sentences) as a function of time. The trade-off quality/time in the case of LK is controlled by the number of iterations, and each point corresponds to a particular number of iterations, in our experiments LK was run with a number of iterations varying between 2k and 170k. The same trade-off in the case of Beam-Search is controlled by varying the beam thresholds. word on their target side, then we are sure that this tour is also optimal in terms of the tri-gram language model. Otherwise, if the optimal tour passes through (w, b), where b is a biphrase having a single-word target, we add only the extended biphrases related to b as we described in section 4.1, and then we recompute the optimal tour. Iterating this procedure provably converges to an optimal solution. This powerful method, which was proposed in (Kam and Kopec, 1996; Popat et al., 2001) in the context of a finite-state model (but not of TSP), can be easily extended to N-gram situations, and typically converges in a small number of iterations. 7 Conclusion The main contribution of this paper has been to propose a transformation for an arbitrary phrasebased SMT decoding instance into a TSP instance. While certain similarities of SMT decoding and TSP were already pointed out in (Knight, 1999), where it was shown that any Traveling Salesman Problem may be reformulated as an instance of a (simplistic) SMT decoding task, and while certain techniques used for TSP were then adapted to word-based SMT decoding (Germann et al., 2001; Tillmann and Ney, 2003; Tillmann, 2006), we are not aware of any previous work that shows that SMT decoding can be directly reformulated as a TSP. Beside the general interest of this transformation for understanding decoding, it also opens the door to direct application of the variety of existing TSP algorithms to SMT. Our experiments on synthetic and real data show that fast TSP algorithms can handle selection and reordering in SMT comparably or better than the state-of-theart beam-search strategy, converging on solutions with higher objective function in a shorter time. 
The proposed method proceeds by first constructing an AGTSP instance from the decoding problem, and then converting this instance first into ATSP and finally into STSP. At this point, a direct application of the well known STSP solver Concorde (with Lin-Kernighan heuristic) already gives good results. We believe however that there might exist even more efficient alternatives. Instead of converting the AGTSP instance into a STSP instance, it might prove better to use directly algorithms expressly designed for ATSP or AGTSP. For instance, some of the algorithms tested in the context of the DIMACS implementation challenge for ATSP (Johnson et al., 2002) might well prove superior. There is also active research around AGTSP algorithms. Recently new effective methods based on a “memetic” strategy (Buriol et al., 2004; Gutin et al., 2008) have been put forward. These methods combined with our proposed formulation provide ready-to-use SMT decoders, which it will be interesting to compare. Acknowledgments Thanks to Vassilina Nikoulina for her advice about running Moses on the test datasets. 340 References David L. Applegate, Robert E. Bixby, Vasek Chvatal, and William J. Cook. 2005. Concorde tsp solver. http://www.tsp.gatech.edu/ concorde.html. David L. Applegate, Robert E. Bixby, Vasek Chvatal, and William J. Cook. 2007. The Traveling Salesman Problem: A Computational Study (Princeton Series in Applied Mathematics). Princeton University Press, January. Luciana Buriol, Paulo M. Franc¸a, and Pablo Moscato. 2004. A new memetic algorithm for the asymmetric traveling salesman problem. Journal of Heuristics, 10(5):483–506. Chris Callison-Burch, Philipp Koehn, Christof Monz, Josh Schroeder, and Cameron Shaw Fordyce, editors. 2008. Proceedings of the Third Workshop on SMT. ACL, Columbus, Ohio, June. Ulrich Germann, Michael Jahr, Kevin Knight, and Daniel Marcu. 2001. Fast decoding and optimal decoding for machine translation. In In Proceedings of ACL 39, pages 228–235. Gregory Gutin, Daniel Karapetyan, and Krasnogor Natalio. 2008. Memetic algorithm for the generalized asymmetric traveling salesman problem. In NICSO 2007, pages 199–210. Springer Berlin. G. Gutin. 2003. Travelling salesman and related problems. In Handbook of Graph Theory. Hieu Hoang and Philipp Koehn. 2008. Design of the Moses decoder for statistical machine translation. In ACL 2008 Software workshop, pages 58–65, Columbus, Ohio, June. ACL. D.S. Johnson, G. Gutin, L.A. McGeoch, A. Yeo, W. Zhang, and A. Zverovich. 2002. Experimental analysis of heuristics for the atsp. In The Travelling Salesman Problem and Its Variations, pages 445–487. Anthony C. Kam and Gary E. Kopec. 1996. Document image decoding by heuristic search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18:945–950. Kevin Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, 25:607–615. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL 2003, pages 48–54, Morristown, NJ, USA. Association for Computational Linguistics. Adam Lopez. 2008. Statistical machine translation. ACM Comput. Surv., 40(3):1–49. C. Noon and J.C. Bean. 1993. An efficient transformation of the generalized traveling salesman problem. INFOR, pages 39–44. Kishore Papineni, Salim Roukos, Todd Ward, and Wei J. Zhu. 2001. BLEU: a Method for Automatic Evaluation of Machine Translation. IBM Research Report, RC22176. Kris Popat, Daniel H. Greene, Justin K. Romberg, and Dan S. Bloomberg. 2001. 
Adding linguistic constraints to document image decoding: Comparing the iterated complete path and stack algorithms. Christoph Tillmann and Hermann Ney. 2003. Word reordering and a dynamic programming beam search algorithm for statistical machine translation. Comput. Linguist., 29(1):97–133. Christoph Tillmann. 2006. Efficient Dynamic Programming Search Algorithms For Phrase-Based SMT. In Workshop On Computationally Hard Problems And Joint Inference In Speech And Language Processing. Wikipedia. 2009. Travelling Salesman Problem — Wikipedia, The Free Encyclopedia. [Online; accessed 5-May-2009]. 341
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 342–350, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Concise Integer Linear Programming Formulations for Dependency Parsing Andr´e F. T. Martins∗† Noah A. Smith∗Eric P. Xing∗ ∗School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA †Instituto de Telecomunicac¸˜oes, Instituto Superior T´ecnico, Lisboa, Portugal {afm,nasmith,epxing}@cs.cmu.edu Abstract We formulate the problem of nonprojective dependency parsing as a polynomial-sized integer linear program. Our formulation is able to handle non-local output features in an efficient manner; not only is it compatible with prior knowledge encoded as hard constraints, it can also learn soft constraints from data. In particular, our model is able to learn correlations among neighboring arcs (siblings and grandparents), word valency, and tendencies toward nearlyprojective parses. The model parameters are learned in a max-margin framework by employing a linear programming relaxation. We evaluate the performance of our parser on data in several natural languages, achieving improvements over existing state-of-the-art methods. 1 Introduction Much attention has recently been devoted to integer linear programming (ILP) formulations of NLP problems, with interesting results in applications like semantic role labeling (Roth and Yih, 2005; Punyakanok et al., 2004), dependency parsing (Riedel and Clarke, 2006), word alignment for machine translation (Lacoste-Julien et al., 2006), summarization (Clarke and Lapata, 2008), and coreference resolution (Denis and Baldridge, 2007), among others. In general, the rationale for the development of ILP formulations is to incorporate non-local features or global constraints, which are often difficult to handle with traditional algorithms. ILP formulations focus more on the modeling of problems, rather than algorithm design. While solving an ILP is NP-hard in general, fast solvers are available today that make it a practical solution for many NLP problems. This paper presents new, concise ILP formulations for projective and non-projective dependency parsing. We believe that our formulations can pave the way for efficient exploitation of global features and constraints in parsing applications, leading to more powerful models. Riedel and Clarke (2006) cast dependency parsing as an ILP, but efficient formulations remain an open problem. Our formulations offer the following comparative advantages: • The numbers of variables and constraints are polynomial in the sentence length, as opposed to requiring exponentially many constraints, eliminating the need for incremental procedures like the cutting-plane algorithm; • LP relaxations permit fast online discriminative training of the constrained model; • Soft constraints may be automatically learned from data. In particular, our formulations handle higher-order arc interactions (like siblings and grandparents), model word valency, and can learn to favor nearly-projective parses. We evaluate the performance of the new parsers on standard parsing tasks in seven languages. The techniques that we present are also compatible with scenarios where expert knowledge is available, for example in the form of hard or soft firstorder logic constraints (Richardson and Domingos, 2006; Chang et al., 2008). 2 Dependency Parsing 2.1 Preliminaries A dependency tree is a lightweight syntactic representation that attempts to capture functional relationships between words. 
Lately, this formalism has been used as an alternative to phrase-based parsing for a variety of tasks, ranging from machine translation (Ding and Palmer, 2005) to relation extraction (Culotta and Sorensen, 2004) and question answering (Wang et al., 2007). Let us first describe formally the set of legal dependency parse trees. Consider a sentence x = 342 ⟨w0, . . . , wn⟩, where wi denotes the word at the ith position, and w0 = $ is a wall symbol. We form the (complete1) directed graph D = ⟨V, A⟩, with vertices in V = {0, . . . , n} (the i-th vertex corresponding to the i-th word) and arcs in A = V 2. Using terminology from graph theory, we say that B ⊆A is an r-arborescence2 of the directed graph D if ⟨V, B⟩is a (directed) tree rooted at r. We define the set of legal dependency parse trees of x (denoted Y(x)) as the set of 0-arborescences of D, i.e., we admit each arborescence as a potential dependency tree. Let y ∈Y(x) be a legal dependency tree for x; if the arc a = ⟨i, j⟩∈y, we refer to i as the parent of j (denoted i = π(j)) and j as a child of i. We also say that a is projective (in the sense of Kahane et al., 1998) if any vertex k in the span of a is reachable from i (in other words, if for any k satisfying min(i, j) < k < max(i, j), there is a directed path in y from i to k). A dependency tree is called projective if it only contains projective arcs. Fig. 1 illustrates this concept.3 The formulation to be introduced in §3 makes use of the notion of the incidence vector associated with a dependency tree y ∈Y(x). This is the binary vector z ≜⟨za⟩a∈A with each component defined as za = I(a ∈y) (here, I(.) denotes the indicator function). Considering simultaneously all incidence vectors of legal dependency trees and taking the convex hull, we obtain a polyhedron that we call the arborescence polytope, denoted by Z(x). Each vertex of Z(x) can be identified with a dependency tree in Y(x). The Minkowski-Weyl theorem (Rockafellar, 1970) ensures that Z(x) has a representation of the form Z(x) = {z ∈R|A| | Az ≤b}, for some p-by-|A| matrix A and some vector b in Rp. However, it is not easy to obtain a compact representation (where p grows polynomially with the number of words n). In §3, we will provide a compact representation of an outer polytope ¯Z(x) ⊇Z(x) whose integer vertices correspond to dependency trees. Hence, the problem of finding the dependency tree that maximizes some linear function of the inci1The general case where A ⊆V 2 is also of interest; it arises whenever a constraint or a lexicon forbids some arcs from appearing in dependency tree. It may also arise as a consequence of a first-stage pruning step where some candidate arcs are eliminated; this will be further discussed in §4. 2Or “directed spanning tree with designated root r.” 3In this paper, we consider unlabeled dependency parsing, where only the backbone structure (i.e., the arcs without the labels depicted in Fig. 1) is to be predicted. Figure 1: A projective dependency graph. Figure 2: Non-projective dependency graph. those that assume each dependency decision is independent modulo the global structural constraint that dependency graphs must be trees. Such models are commonly referred to as edge-factored since their parameters factor relative to individual edges of the graph (Paskin, 2001; McDonald et al., 2005a). Edge-factored models have many computational benefits, most notably that inference for nonprojective dependency graphs can be achieved in polynomial time (McDonald et al., 2005b). 
To motivate these algorithms, we show that they can be used in many important learning and inference problems including min-risk decoding, training globally normalized log-linear models, syntactic language modeling, and unsupervised learning via the E previously been k implementations. We then switc non-local inform bouring parse dec ity constraints we nian graph proble lem is intractabl parse decisions, and Pereira (2006 neighbourhoods tion to modeling sequence of thes exact non-project for any model ass by the edge-facto 1.1 Related Wo There has been e pendency parsing ner, 1996; Paskin 2003; Nivre and 2005a) and nonand Nilsson, 200 ald et al., 2005b classified into tw egory are those m inference, typica shift-reduce pars sumoto, 2003; N Nilsson, 2005). that employ exha ally by making st is the case for ed McDonald et al., Recently there ha tive methods that tion, including bo ald and Pereira, 2 teger linear progr or branch-and-bo For grammar b work on empirica ing systems, not of Wang and Har note include the w showing that the $ Figure 1: A projective dependency parse (top), and a nonprojective dependency parse (bottom) for two English sentences; examples from McDonald and Satta (2007). dence vectors can be cast as an ILP. A similar idea was applied to word alignment by Lacoste-Julien et al. (2006), where permutations (rather than arborescences) were the combinatorial structure being requiring representation. Letting X denote the set of possible sentences, define Y ≜S x∈X Y(x). Given a labeled dataset L ≜⟨⟨x1, y1⟩, . . . , ⟨xm, ym⟩⟩∈(X × Y)m, we aim to learn a parser, i.e., a function h : X →Y that given x ∈X outputs a legal dependency parse y ∈Y(x). The fact that there are exponentially many candidates in Y(x) makes dependency parsing a structured classification problem. 2.2 Arc Factorization and Locality There has been much recent work on dependency parsing using graph-based, transition-based, and hybrid methods; see Nivre and McDonald (2008) for an overview. Typical graph-based methods consider linear classifiers of the form hw(x) = argmaxy∈Y w⊤f(x, y), (1) where f(x, y) is a vector of features and w is the corresponding weight vector. One wants hw to have small expected loss; the typical loss function is the Hamming loss, ℓ(y′; y) ≜|{⟨i, j⟩∈ y′ : ⟨i, j⟩/∈y}|. Tractability is usually ensured by strong factorization assumptions, like the one underlying the arc-factored model (Eisner, 1996; McDonald et al., 2005), which forbids any feature that depends on two or more arcs. This induces a decomposition of the feature vector f(x, y) as: f(x, y) = P a∈y fa(x). (2) Under this decomposition, each arc receives a score; parsing amounts to choosing the configuration that maximizes the overall score, which, as 343 shown by McDonald et al. (2005), is an instance of the maximal arborescence problem. Combinatorial algorithms (Chu and Liu, 1965; Edmonds, 1967) can solve this problem in cubic time.4 If the dependency parse trees are restricted to be projective, cubic-time algorithms are available via dynamic programming (Eisner, 1996). While in the projective case, the arc-factored assumption can be weakened in certain ways while maintaining polynomial parser runtime (Eisner and Satta, 1999), the same does not happen in the nonprojective case, where finding the highest-scoring tree becomes NP-hard (McDonald and Satta, 2007). 
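To make the arc-factored decomposition of Eqs. 1-2 concrete, here is a minimal sketch of scoring a candidate tree as a sum of arc scores, together with the Hamming loss; the arc_features extractor is a hypothetical placeholder.

```python
# Minimal sketch of arc-factored scoring (Eqs. 1-2) and the Hamming loss.
# `arc_features(x, head, mod)` is a hypothetical feature extractor returning a
# dict of feature names to values for the single arc <head, mod>.
def arc_score(x, head, mod, w, arc_features):
    return sum(w.get(name, 0.0) * value
               for name, value in arc_features(x, head, mod).items())

def tree_score(x, tree, w, arc_features):
    """`tree` is a set of arcs (head, modifier); the score decomposes over arcs."""
    return sum(arc_score(x, h, m, w, arc_features) for (h, m) in tree)

def hamming_loss(predicted, gold):
    """Number of predicted arcs that are not present in the gold tree."""
    return len(set(predicted) - set(gold))
# Maximizing tree_score over all 0-arborescences is the maximal arborescence
# problem, solvable exactly in the arc-factored case by the Chu-Liu-Edmonds
# algorithm, as discussed above.
```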
Approximate algorithms have been employed to handle models that are not arc-factored (although features are still fairly local): McDonald and Pereira (2006) adopted an approximation based on O(n3) projective parsing followed by a hillclimbing algorithm to rearrange arcs, and Smith and Eisner (2008) proposed an algorithm based on loopy belief propagation. 3 Dependency Parsing as an ILP Our approach will build a graph-based parser without the drawback of a restriction to local features. By formulating inference as an ILP, nonlocal features can be easily accommodated in our model; furthermore, by using a relaxation technique we can still make learning tractable. The impact of LP-relaxed inference in the learning problem was studied elsewhere (Martins et al., 2009). A linear program (LP) is an optimization problem of the form minx∈Rd c⊤x s.t. Ax ≤b. (3) If the problem is feasible, the optimum is attained at a vertex of the polyhedron that defines the constraint space. If we add the constraint x ∈Zd, then the above is called an integer linear program (ILP). For some special parameter settings—e.g., when b is an integer vector and A is totally unimodular5—all vertices of the constraining polyhedron are integer points; in these cases, the integer constraint may be suppressed and (3) is guaranteed to have integer solutions (Schrijver, 2003). Of course, this need not happen: solving a general ILP is an NP-complete problem. Despite this 4There is also a quadratic algorithm due to Tarjan (1977). 5A matrix is called totally unimodular if the determinants of each square submatrix belong to {0, 1, −1}. fact, fast solvers are available today that make this a practical solution for many problems. Their performance depends on the dimensions and degree of sparsity of the constraint matrix A. Riedel and Clarke (2006) proposed an ILP formulation for dependency parsing which refines the arc-factored model by imposing linguistically motivated “hard” constraints that forbid some arc configurations. Their formulation includes an exponential number of constraints—one for each possible cycle. Since it is intractable to throw in all constraints at once, they propose a cuttingplane algorithm, where the cycle constraints are only invoked when violated by the current solution. The resulting algorithm is still slow, and an arc-factored model is used as a surrogate during training (i.e., the hard constraints are only used at test time), which implies a discrepancy between the model that is optimized and the one that is actually going to be used. Here, we propose ILP formulations that eliminate the need for cycle constraints; in fact, they require only a polynomial number of constraints. Not only does our model allow expert knowledge to be injected in the form of constraints, it is also capable of learning soft versions of those constraints from data; indeed, it can handle features that are not arc-factored (correlating, for example, siblings and grandparents, modeling valency, or preferring nearly projective parses). While, as pointed out by McDonald and Satta (2007), the inclusion of these features makes inference NPhard, by relaxing the integer constraints we obtain approximate algorithms that are very efficient and competitive with state-of-the-art methods. In this paper, we focus on unlabeled dependency parsing, for clarity of exposition. 
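For readers who want to see the generic problem of Eq. 3 in executable form, here is a tiny sketch that solves a small LP with an off-the-shelf solver; the numbers are arbitrary, and scipy's linprog handles only the continuous relaxation, so the integer-constrained version would require a dedicated ILP/MIP solver.

```python
# A tiny instance of the LP in Eq. 3: minimize c^T x subject to A x <= b, x >= 0.
# The numbers are arbitrary; dropping the integrality requirement is exactly the
# LP relaxation discussed in the text.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])           # maximize x1 + 2*x2 == minimize -(x1 + 2*x2)
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([4.0, 3.0])

result = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(result.x, result.fun)          # -> [1. 3.] -7.0, a vertex of the polyhedron
```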
If it is extended to labeled parsing (a straightforward extension), our formulation fully subsumes that of Riedel and Clarke (2006), since it allows using the same hard constraints and features while keeping the ILP polynomial in size. 3.1 The Arborescence Polytope We start by describing our constraint space. Our formulations rely on a concise polyhedral representation of the set of candidate dependency parse trees, as sketched in §2.1. This will be accomplished by drawing an analogy with a network flow problem. Let D = ⟨V, A⟩be the complete directed graph 344 associated with a sentence x ∈X, as stated in §2. A subgraph y = ⟨V, B⟩is a legal dependency tree (i.e., y ∈Y(x)) if and only if the following conditions are met: 1. Each vertex in V \ {0} must have exactly one incoming arc in B, 2. 0 has no incoming arcs in B, 3. B does not contain cycles. For each vertex v ∈V , let δ−(v) ≜{⟨i, j⟩∈ A | j = v} denote its set of incoming arcs, and δ+(v) ≜{⟨i, j⟩∈A | i = v} denote its set of outgoing arcs. The two first conditions can be easily expressed by linear constraints on the incidence vector z: P a∈δ−(j) za = 1, j ∈V \ {0} (4) P a∈δ−(0) za = 0 (5) Condition 3 is somewhat harder to express. Rather than adding exponentially many constraints, one for each potential cycle (like Riedel and Clarke, 2006), we equivalently replace condition 3 by 3′. B is connected. Note that conditions 1-2-3 are equivalent to 1-23′, in the sense that both define the same set Y(x). However, as we will see, the latter set of conditions is more convenient. Connectedness of graphs can be imposed via flow constraints (by requiring that, for any v ∈V \ {0}, there is a directed path in B connecting 0 to v). We adapt the single commodity flow formulation for the (undirected) minimum spanning tree problem, due to Magnanti and Wolsey (1994), that requires O(n2) variables and constraints. Under this model, the root node must send one unit of flow to every other node. By making use of extra variables, φ ≜⟨φa⟩a∈A, to denote the flow of commodities through each arc, we are led to the following constraints in addition to Eqs. 4–5 (we denote U ≜[0, 1], and B ≜{0, 1} = U ∩Z): • Root sends flow n: P a∈δ+(0) φa = n (6) • Each node consumes one unit of flow: X a∈δ−(j) φa − X a∈δ+(j) φa = 1, j ∈V \ {0} (7) • Flow is zero on disabled arcs: φa ≤nza, a ∈A (8) • Each arc indicator lies in the unit interval: za ∈U, a ∈A. (9) These constraints project an outer bound of the arborescence polytope, i.e., ¯Z(x) ≜ {z ∈R|A| | (z, φ) satisfy (4–9)} ⊇ Z(x). (10) Furthermore, the integer points of ¯Z(x) are precisely the incidence vectors of dependency trees in Y(x); these are obtained by replacing Eq. 9 by za ∈B, a ∈A. (11) 3.2 Arc-Factored Model Given our polyhedral representation of (an outer bound of) the arborescence polytope, we can now formulate dependency parsing with an arcfactored model as an ILP. By storing the arclocal feature vectors into the columns of a matrix F(x) ≜[fa(x)]a∈A, and defining the score vector s ≜F(x)⊤w (each entry is an arc score) the inference problem can be written as max y∈Y(x) w⊤f(x, y) = max z∈Z(x) w⊤F(x)z = max z,φ s⊤z s.t. A " z φ # ≤b z ∈B (12) where A is a sparse constraint matrix (with O(|A|) non-zero elements), and b is the constraint vector; A and b encode the constraints (4–9). This is an ILP with O(|A|) variables and constraints (hence, quadratic in n); if we drop the integer constraint the problem becomes the LP relaxation. 
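As an illustration of how Eq. 12 could be handed to an off-the-shelf solver, the sketch below assembles constraints (4)-(9) as dense matrices and solves the LP relaxation of the arc-factored model with scipy; the arc_scores interface is an assumption, and a production implementation would exploit the sparsity of the constraint matrix and use a proper ILP solver rather than dense arrays.

```python
# Sketch of the LP relaxation of Eq. 12 under the single-commodity flow
# constraints (4)-(9). `arc_scores` maps each candidate arc <head, modifier>
# to its score s_a; the dense construction is a simplification for illustration.
import numpy as np
from scipy.optimize import linprog

def arc_factored_lp(n, arc_scores):
    # Arcs <i, j> with j in 1..n and i != j; vertex 0 is the root wall symbol,
    # so constraint (5) (no incoming arcs at 0) holds by construction.
    arcs = [(i, j) for i in range(n + 1) for j in range(1, n + 1) if i != j]
    idx = {a: k for k, a in enumerate(arcs)}
    m = len(arcs)
    # Variable vector: [z_1..z_m, phi_1..phi_m].
    c = np.zeros(2 * m)
    for a, k in idx.items():
        c[k] = -arc_scores.get(a, 0.0)       # maximize s^T z == minimize -s^T z
    A_eq, b_eq = [], []
    for j in range(1, n + 1):                # (4): one incoming arc per word
        row = np.zeros(2 * m)
        for i in range(n + 1):
            if i != j:
                row[idx[(i, j)]] = 1.0
        A_eq.append(row); b_eq.append(1.0)
    row = np.zeros(2 * m)                    # (6): root sends n units of flow
    for j in range(1, n + 1):
        row[m + idx[(0, j)]] = 1.0
    A_eq.append(row); b_eq.append(float(n))
    for j in range(1, n + 1):                # (7): each word consumes one unit
        row = np.zeros(2 * m)
        for i in range(n + 1):
            if i != j:
                row[m + idx[(i, j)]] = 1.0   # incoming flow
        for k in range(1, n + 1):
            if k != j:
                row[m + idx[(j, k)]] = -1.0  # outgoing flow
        A_eq.append(row); b_eq.append(1.0)
    A_ub, b_ub = [], []
    for a, k in idx.items():                 # (8): phi_a <= n * z_a
        row = np.zeros(2 * m)
        row[m + k] = 1.0
        row[k] = -float(n)
        A_ub.append(row); b_ub.append(0.0)
    bounds = [(0.0, 1.0)] * m + [(0.0, float(n))] * m   # (9) plus flow bounds
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    z = res.x[:m]
    return {a: z[idx[a]] for a in arcs}      # possibly fractional arc values
```

Adding the integrality requirement of Eq. 11 to these variables turns the relaxation back into the exact ILP of Eq. 12.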
As is, this formulation is no more attractive than solving the problem with the existing combinatorial algorithms discussed in §2.2; however, we can now start adding non-local features to build a more powerful model. 3.3 Sibling and Grandparent Features To cope with higher-order features of the form fa1,...,aK(x) (i.e., features whose values depend on the simultaneous inclusion of arcs a1, . . . , aK on 345 a candidate dependency tree), we employ a linearization trick (Boros and Hammer, 2002), defining extra variables za1...aK ≜za1 ∧. . .∧zaK. This logical relation can be expressed by the following O(K) agreement constraints:6 za1...aK ≤ zai, i = 1, . . . , K za1...aK ≥ PK i=1 zai −K + 1. (13) As shown by McDonald and Pereira (2006) and Carreras (2007), the inclusion of features that correlate sibling and grandparent arcs may be highly beneficial, even if doing so requires resorting to approximate algorithms.7 Define Rsibl ≜ {⟨i, j, k⟩| ⟨i, j⟩∈A, ⟨i, k⟩∈A} and Rgrand ≜ {⟨i, j, k⟩| ⟨i, j⟩∈A, ⟨j, k⟩∈A}. To include such features in our formulation, we need to add extra variables zsibl ≜⟨zr⟩r∈Rsibl and zgrand ≜ ⟨zr⟩r∈Rgrand that indicate the presence of sibling and grandparent arcs. Observe that these indicator variables are conjunctions of arc indicator variables, i.e., zsibl ijk = zij ∧zik and zgrand ijk = zij ∧zjk. Hence, these features can be handled in our formulation by adding the following O(|A| · |V |) variables and constraints: zsibl ijk ≤zij, zsibl ijk ≤zik, zsibl ijk ≥zij + zik −1 (14) for all triples ⟨i, j, k⟩∈Rsibl, and zgrand ijk ≤zij, zgrand ijk ≤zjk, zgrand ijk ≥zij+zjk−1 (15) for all triples ⟨i, j, k⟩∈Rgrand. Let R ≜A ∪ Rsibl ∪Rgrand; by redefining z ≜⟨zr⟩r∈R and F(x) ≜[fr(x)]r∈R, we may express our inference problem as in Eq. 12, with O(|A| · |V |) variables and constraints. Notice that the strategy just described to handle sibling features is not fully compatible with the features proposed by Eisner (1996) for projective parsing, as the latter correlate only consecutive siblings and are also able to place special features on the first child of a given word. The ability to handle such “ordered” features is intimately associated with Eisner’s dynamic programming parsing algorithm and with the Markovian assumptions made explicitly by his generative model. We next show how similar features 6Actually, any logical condition can be encoded with linear constraints involving binary variables; see e.g. Clarke and Lapata (2008) for an overview. 7By sibling features we mean features that depend on pairs of sibling arcs (i.e., of the form ⟨i, j⟩and ⟨i, k⟩); by grandparent features we mean features that depend on pairs of grandparent arcs (of the form ⟨i, j⟩and ⟨j, k⟩). can be incorporated in our model by adding “dynamic” constraints to our ILP. Define: znext sibl ijk ≜      1 if ⟨i, j⟩and ⟨i, k⟩are consecutive siblings, 0 otherwise, zfirst child ij ≜ ( 1 if j is the first child of i, 0 otherwise. Suppose (without loss of generality) that i < j < k ≤n. We could naively compose the constraints (14) with additional linear constraints that encode the logical relation znext sibl ijk = zsibl ijk ∧V j<l<k ¬zil, but this would yield a constraint matrix with O(n4) non-zero elements. Instead, we define auxiliary variables βjk and γij: βjk = ( 1, if ∃l s.t. π(l) = π(j) < j < l < k 0, otherwise, γij = ( 1, if ∃k s.t. i < k < j and ⟨i, k⟩∈y 0, otherwise. 
(16) Then, we have that znext sibl ijk = zsibl ijk ∧(¬βjk) and zfirst child ij = zij ∧(¬γij), which can be encoded via znext sibl ijk ≤ zsibl ijk zfirst child ij ≤ zij znext sibl ijk ≤ 1 −βjk zfirst child ij ≤ 1 −γij znext sibl ijk ≥ zsibl ijk −βjk zfirst child ij ≥ zij −γij The following “dynamic” constraints encode the logical relations for the auxiliary variables (16): βj(j+1) = 0 γi(i+1) = 0 βj(k+1) ≥ βjk γi(j+1) ≥ γij βj(k+1) ≥ X i<j zsibl ijk γi(j+1) ≥ zij βj(k+1) ≤ βjk + X i<j zsibl ijk γi(j+1) ≤ γij + zij Auxiliary variables and constraints are defined analogously for the case n ≥i > j > k. This results in a sparser constraint matrix, with only O(n3) non-zero elements. 3.4 Valency Features A crucial fact about dependency grammars is that words have preferences about the number and arrangement of arguments and modifiers they accept. Therefore, it is desirable to include features 346 that indicate, for a candidate arborescence, how many outgoing arcs depart from each vertex; denote these quantities by vi ≜P a∈δ+(i) za, for each i ∈V . We call vi the valency of the ith vertex. We add valency indicators zval ik ≜I(vi = k) for i ∈V and k = 0, . . . , n −1. This way, we are able to penalize candidate dependency trees that assign unusual valencies to some of their vertices, by specifying a individual cost for each possible value of valency. The following O(|V |2) constraints encode the agreement between valency indicators and the other variables: Pn−1 k=0 kzval ik = P a∈δ+(i) za, i ∈V (17) Pn−1 k=0 zval ik = 1, i ∈V zval ik ≥ 0, i ∈V, k ∈{0, . . . , n −1} 3.5 Projectivity Features For most languages, dependency parse trees tend to be nearly projective (cf. Buchholz and Marsi, 2006). We wish to make our model capable of learning to prefer “nearly” projective parses whenever that behavior is observed in the data. The multicommodity directed flow model of Magnanti and Wolsey (1994) is a refinement of the model described in §3.1 which offers a compact and elegant way to indicate nonprojective arcs, requiring O(n3) variables and constraints. In this model, every node k ̸= 0 defines a commodity: one unit of commodity k originates at the root node and must be delivered to node k; the variable φk ij denotes the flow of commodity k in arc ⟨i, j⟩. We first replace (4–9) by (18–22): • The root sends one unit of commodity to each node: X a∈δ−(0) φk a − X a∈δ+(0) φk a = −1, k ∈V \ {0} (18) • Any node consumes its own commodity and no other: X a∈δ−(j) φk a − X a∈δ+(j) φk a = δk j , j, k ∈V \ {0} (19) where δk j ≜I(j = k) is the Kronecker delta. • Disabled arcs do not carry any flow: φk a ≤za, a ∈A, k ∈V (20) • There are exactly n enabled arcs: P a∈A za = n (21) • All variables lie in the unit interval: za ∈U, φk a ∈U, a ∈A, k ∈V (22) We next define auxiliary variables ψjk that indicate if there is a path from j to k. Since each vertex except the root has only one incoming arc, the following linear equalities are enough to describe these new variables: ψjk = P a∈δ−(j) φk a, j, k ∈V \ {0} ψ0k = 1, k ∈V \ {0}. (23) Now, define indicators znp ≜⟨znp a ⟩a∈A, where znp a ≜I(a ∈y and a is nonprojective). From the definition of projective arcs in §2.1, we have that znp a = 1 if and only if the arc is active (za = 1) and there is some vertex k in the span of a = ⟨i, j⟩such that ψik = 0. 
We are led to the following O(|A| · |V |) constraints for ⟨i, j⟩∈A: znp ij ≤ zij znp ij ≥ zij −ψik, min(i, j) ≤k ≤max(i, j) znp ij ≤ −Pmax(i,j)−1 k=min(i,j)+1 ψik + |j −i| −1 There are other ways to introduce nonprojectivity indicators and alternative definitions of “nonprojective arc.” For example, by using dynamic constraints of the same kind as those in §3.3, we can indicate arcs that “cross” other arcs with O(n3) variables and constraints, and a cubic number of non-zero elements in the constraint matrix (omitted for space). 3.6 Projective Parsing It would be straightforward to adapt the constraints in §3.5 to allow only projective parse trees: simply force znp a = 0 for any a ∈A. But there are more efficient ways of accomplish this. While it is difficult to impose projectivity constraints or cycle constraints individually, there is a simpler way of imposing both. Consider 3 (or 3′) from §3.1. Proposition 1 Replace condition 3 (or 3′) with 3′′. If ⟨i, j⟩∈B, then, for any k = 1, . . . , n such that k ̸= j, the parent of k must satisfy (defining i′ ≜min(i, j) and j′ ≜max(i, j)):      i′ ≤π(k) ≤j′, if i′ < k < j′, π(k) < i′ ∨π(k) > j′, if k < i′ or k > j′ or k = i. 347 Then, Y(x) will be redefined as the set of projective dependency parse trees. We omit the proof for space. Conditions 1, 2, and 3′′ can be encoded with O(n2) constraints. 4 Experiments We report experiments on seven languages, six (Danish, Dutch, Portuguese, Slovene, Swedish and Turkish) from the CoNLL-X shared task (Buchholz and Marsi, 2006), and one (English) from the CoNLL-2008 shared task (Surdeanu et al., 2008).8 All experiments are evaluated using the unlabeled attachment score (UAS), using the default settings.9 We used the same arc-factored features as McDonald et al. (2005) (included in the MSTParser toolkit10); for the higher-order models described in §3.3–3.5, we employed simple higher order features that look at the word, part-of-speech tag, and (if available) morphological information of the words being correlated through the indicator variables. For scalability (and noting that some of the models require O(|V | · |A|) constraints and variables, which, when A = V 2, grows cubically with the number of words), we first prune the base graph by running a simple algorithm that ranks the k-best candidate parents for each word in the sentence (we set k = 10); this reduces the number of candidate arcs to |A| = kn.11 This strategy is similar to the one employed by Carreras et al. (2008) to prune the search space of the actual parser. The ranker is a local model trained using a max-margin criterion; it is arc-factored and not subject to any structural constraints, so it is very fast. The actual parser was trained via the online structured passive-aggressive algorithm of Crammer et al. (2006); it differs from the 1-best MIRA algorithm of McDonald et al. (2005) by solving a sequence of loss-augmented inference problems.12 The number of iterations was set to 10. The results are summarized in Table 1; for the sake of comparison, we reproduced three strong 8We used the provided train/test splits except for English, for which we tested on the development partition. For training, sentences longer than 80 words were discarded. For testing, all sentences were kept (the longest one has length 118). 9http://nextens.uvt.nl/∼conll/software.html 10http://sourceforge.net/projects/mstparser 11Note that, unlike reranking approaches, there are still exponentially many candidate parse trees after pruning. 
The oracle constrained to pick parents from these lists achieves > 98% in every case. 12The loss-augmented inference problem can also be expressed as an LP for Hamming loss functions that factor over arcs; we refer to Martins et al. (2009) for further details. baselines, all of them state-of-the-art parsers based on non-arc-factored models: the second order model of McDonald and Pereira (2006), the hybrid model of Nivre and McDonald (2008), which combines a (labeled) transition-based and a graphbased parser, and a refinement of the latter, due to Martins et al. (2008), which attempts to approximate non-local features.13 We did not reproduce the model of Riedel and Clarke (2006) since the latter is tailored for labeled dependency parsing; however, experiments reported in that paper for Dutch (and extended to other languages in the CoNLL-X task) suggest that their model performs worse than our three baselines. By looking at the middle four columns, we can see that adding non-arc-factored features makes the models more accurate, for all languages. With the exception of Portuguese, the best results are achieved with the full set of features. We can also observe that, for some languages, the valency features do not seem to help. Merely modeling the number of dependents of a word may not be as valuable as knowing what kinds of dependents they are (for example, distinguishing among arguments and adjuncts). Comparing with the baselines, we observe that our full model outperforms that of McDonald and Pereira (2006), and is in line with the most accurate dependency parsers (Nivre and McDonald, 2008; Martins et al., 2008), obtained by combining transition-based and graph-based parsers.14 Notice that our model, compared with these hybrid parsers, has the advantage of not requiring an ensemble configuration (eliminating, for example, the need to tune two parsers). Unlike the ensembles, it directly handles non-local output features by optimizing a single global objective. Perhaps more importantly, it makes it possible to exploit expert knowledge through the form of hard global constraints. Although not pursued here, the same kind of constraints employed by Riedel and Clarke (2006) can straightforwardly fit into our model, after extending it to perform labeled dependency parsing. We believe that a careful design of fea13Unlike our model, the hybrid models used here as baselines make use of the dependency labels at training time; indeed, the transition-based parser is trained to predict a labeled dependency parse tree, and the graph-based parser use these predicted labels as input features. Our model ignores this information at training time; therefore, this comparison is slightly unfair to us. 14See also Zhang and Clark (2008) for a different approach that combines transition-based and graph-based methods. 348 [MP06] [NM08] [MDSX08] ARC-FACTORED +SIBL/GRANDP. +VALENCY +PROJ. (FULL) FULL, RELAXED DANISH 90.60 91.30 91.54 89.80 91.06 90.98 91.18 91.04 (-0.14) DUTCH 84.11 84.19 84.79 83.55 84.65 84.93 85.57 85.41 (-0.16) PORTUGUESE 91.40 91.81 92.11 90.66 92.11 92.01 91.42 91.44 (+0.02) SLOVENE 83.67 85.09 85.13 83.93 85.13 85.45 85.61 85.41 (-0.20) SWEDISH 89.05 90.54 90.50 89.09 90.50 90.34 90.60 90.52 (-0.08) TURKISH 75.30 75.68 76.36 75.16 76.20 76.08 76.34 76.32 (-0.02) ENGLISH 90.85 – – 90.15 91.13 91.12 91.16 91.14 (-0.02) Table 1: Results for nonprojective dependency parsing (unlabeled attachment scores). 
The three baselines are the second order model of McDonald and Pereira (2006) and the hybrid models of Nivre and McDonald (2008) and Martins et al. (2008). The four middle columns show the performance of our model using exact (ILP) inference at test time, for increasing sets of features (see §3.2–§3.5). The rightmost column shows the results obtained with the full set of features using relaxed LP inference followed by projection onto the feasible set. Differences are with respect to exact inference for the same set of features. Bold indicates the best result for a language. As for overall performance, both the exact and relaxed full model outperform the arcfactored model and the second order model of McDonald and Pereira (2006) with statistical significance (p < 0.01) according to Dan Bikel’s randomized method (http://www.cis.upenn.edu/∼dbikel/software.html). tures and constraints can lead to further improvements on accuracy. We now turn to a different issue: scalability. In previous work (Martins et al., 2009), we showed that training the model via LP-relaxed inference (as we do here) makes it learn to avoid fractional solutions; as a consequence, ILP solvers will converge faster to the optimum (on average). Yet, it is known from worst case complexity theory that solving a general ILP is NP-hard; hence, these solvers may not scale well with the sentence length. Merely considering the LP-relaxed version of the problem at test time is unsatisfactory, as it may lead to a fractional solution (i.e., a solution whose components indexed by arcs, ˜z = ⟨za⟩a∈A, are not all integer), which does not correspond to a valid dependency tree. We propose the following approximate algorithm to obtain an actual parse: first, solve the LP relaxation (which can be done in polynomial time with interior-point methods); then, if the solution is fractional, project it onto the feasible set Y(x). Fortunately, the Euclidean projection can be computed in a straightforward way by finding a maximal arborescence in the directed graph whose weights are defined by ˜z (we omit the proof for space); as we saw in §2.2, the ChuLiu-Edmonds algorithm can do this in polynomial time. The overall parsing runtime becomes polynomial with respect to the length of the sentence. The last column of Table 1 compares the accuracy of this approximate method with the exact one. We observe that there is not a substantial drop in accuracy; on the other hand, we observed a considerable speed-up with respect to exact inference, particularly for long sentences. The average runtime (across all languages) is 0.632 seconds per sentence, which is in line with existing higher-order parsers and is much faster than the runtimes reported by Riedel and Clarke (2006). 5 Conclusions We presented new dependency parsers based on concise ILP formulations. We have shown how non-local output features can be incorporated, while keeping only a polynomial number of constraints. These features can act as soft constraints whose penalty values are automatically learned from data; in addition, our model is also compatible with expert knowledge in the form of hard constraints. Learning through a max-margin framework is made effective by the means of a LPrelaxation. Experimental results on seven languages show that our rich-featured parsers outperform arc-factored and approximate higher-order parsers, and are in line with stacked parsers, having with respect to the latter the advantage of not requiring an ensemble configuration. 
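To make the approximate inference procedure of §4 concrete (solve the LP relaxation and, if the solution is fractional, project it onto the set of trees by taking the maximum-weight arborescence under the fractional arc weights), here is a small sketch. It is our own illustration rather than the authors' code: the fractional vector z_tilde is invented, and brute-force enumeration over head assignments stands in for the Chu-Liu-Edmonds algorithm used in the paper, so it is only feasible for toy sentence lengths.

```python
# Projection step: given a possibly fractional arc vector z_tilde for a
# 3-word toy sentence, recover a valid dependency tree by taking the
# maximum-weight arborescence.  Brute-force enumeration replaces
# Chu-Liu-Edmonds here and only works for tiny n.
from itertools import product

n = 3
z_tilde = {(0, 1): 0.6, (2, 1): 0.4, (3, 1): 0.0,      # keys: (head, modifier)
           (0, 2): 0.5, (1, 2): 0.5, (3, 2): 0.0,
           (0, 3): 0.0, (1, 3): 0.3, (2, 3): 0.7}      # each word's arcs sum to 1

def is_tree(heads):                      # heads[j] = head of word j (1-based)
    for j in range(1, n + 1):            # every word must reach the root 0
        seen, k = set(), j
        while k != 0:
            if k in seen:
                return False
            seen.add(k)
            k = heads[k]
    return True

best, best_score = None, float("-inf")
for assignment in product(range(0, n + 1), repeat=n):
    heads = {j: assignment[j - 1] for j in range(1, n + 1)}
    if any(heads[j] == j for j in heads) or not is_tree(heads):
        continue
    score = sum(z_tilde[(heads[j], j)] for j in heads)
    if score > best_score:
        best, best_score = heads, score

print(best, best_score)
```

On this toy vector the best tree has weight 1.8; two trees tie, and the enumeration simply keeps the first one it encounters.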
Acknowledgments The authors thank the reviewers for their comments. Martins was supported by a grant from FCT/ICTI through the CMU-Portugal Program, and also by Priberam Inform´atica. Smith was supported by NSF IIS-0836431 and an IBM Faculty Award. Xing was supported by NSF DBI0546594, DBI-0640543, IIS-0713379, and an Alfred Sloan Foundation Fellowship in Computer Science. 349 References E. Boros and P.L. Hammer. 2002. Pseudo-Boolean optimization. Discrete Applied Mathematics, 123(1– 3):155–225. S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. of CoNLL. X. Carreras, M. Collins, and T. Koo. 2008. TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proc. of CoNLL. X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proc. of CoNLL. M. Chang, L. Ratinov, and D. Roth. 2008. Constraints as prior knowledge. In ICML Workshop on Prior Knowledge for Text and Language Processing. Y. J. Chu and T. H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396– 1400. J. Clarke and M. Lapata. 2008. Global inference for sentence compression an integer linear programming approach. JAIR, 31:399–429. K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online passive-aggressive algorithms. JMLR, 7:551–585. A. Culotta and J. Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. of ACL. P. Denis and J. Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proc. of HLT-NAACL. Y. Ding and M. Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammar. In Proc. of ACL. J. Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233–240. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proc. of ACL. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING. S. Kahane, A. Nasr, and O. Rambow. 1998. Pseudoprojectivity: a polynomially parsable non-projective dependency grammar. In Proc. of COLING-ACL. S. Lacoste-Julien, B. Taskar, D. Klein, and M. I. Jordan. 2006. Word alignment via quadratic assignment. In Proc. of HLT-NAACL. T. L. Magnanti and L. A. Wolsey. 1994. Optimal Trees. Technical Report 290-94, Massachusetts Institute of Technology, Operations Research Center. A. F. T. Martins, D. Das, N. A. Smith, and E. P. Xing. 2008. Stacking dependency parsers. In Proc. of EMNLP. A. F. T. Martins, N. A. Smith, and E. P. Xing. 2009. Polyhedral outer approximations with application to natural language parsing. In Proc. of ICML. R. T. McDonald and F. C. N. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of EACL. R. McDonald and G. Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proc. of IWPT. R. T. McDonald, F. Pereira, K. Ribarov, and J. Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. of HLT-EMNLP. J. Nivre and R. McDonald. 2008. Integrating graphbased and transition-based dependency parsers. In Proc. of ACL-HLT. V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proc. of COLING. M. Richardson and P. Domingos. 2006. Markov logic networks. Machine Learning, 62(1):107–136. S. Riedel and J. Clarke. 2006. 
Incremental integer linear programming for non-projective dependency parsing. In Proc. of EMNLP. R. T. Rockafellar. 1970. Convex Analysis. Princeton University Press. D. Roth and W. T. Yih. 2005. Integer linear programming inference for conditional random fields. In ICML. A. Schrijver. 2003. Combinatorial Optimization: Polyhedra and Efficiency, volume 24 of Algorithms and Combinatorics. Springer. D. A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP. M. Surdeanu, R. Johansson, A. Meyers, L. M`arquez, and J. Nivre. 2008. The conll-2008 shared task on joint parsing of syntactic and semantic dependencies. Proc. of CoNLL. R. E. Tarjan. 1977. Finding optimum branchings. Networks, 7(1):25–36. M. Wang, N. A. Smith, and T. Mitamura. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proceedings of EMNLP-CoNLL. Y. Zhang and S. Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proc. of EMNLP. 350
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 28–36, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Unsupervised Argument Identification for Semantic Role Labeling Omri Abend1 Roi Reichart2 Ari Rappoport1 1Institute of Computer Science , 2ICNC Hebrew University of Jerusalem {omria01|roiri|arir}@cs.huji.ac.il Abstract The task of Semantic Role Labeling (SRL) is often divided into two sub-tasks: verb argument identification, and argument classification. Current SRL algorithms show lower results on the identification sub-task. Moreover, most SRL algorithms are supervised, relying on large amounts of manually created data. In this paper we present an unsupervised algorithm for identifying verb arguments, where the only type of annotation required is POS tagging. The algorithm makes use of a fully unsupervised syntactic parser, using its output in order to detect clauses and gather candidate argument collocation statistics. We evaluate our algorithm on PropBank10, achieving a precision of 56%, as opposed to 47% of a strong baseline. We also obtain an 8% increase in precision for a Spanish corpus. This is the first paper that tackles unsupervised verb argument identification without using manually encoded rules or extensive lexical or syntactic resources. 1 Introduction Semantic Role Labeling (SRL) is a major NLP task, providing a shallow sentence-level semantic analysis. SRL aims at identifying the relations between the predicates (usually, verbs) in the sentence and their associated arguments. The SRL task is often viewed as consisting of two parts: argument identification (ARGID) and argument classification. The former aims at identifying the arguments of a given predicate present in the sentence, while the latter determines the type of relation that holds between the identified arguments and their corresponding predicates. The division into two sub-tasks is justified by the fact that they are best addressed using different feature sets (Pradhan et al., 2005). Performance in the ARGID stage is a serious bottleneck for general SRL performance, since only about 81% of the arguments are identified, while about 95% of the identified arguments are labeled correctly (M`arquez et al., 2008). SRL is a complex task, which is reflected by the algorithms used to address it. A standard SRL algorithm requires thousands to dozens of thousands sentences annotated with POS tags, syntactic annotation and SRL annotation. Current algorithms show impressive results but only for languages and domains where plenty of annotated data is available, e.g., English newspaper texts (see Section 2). Results are markedly lower when testing is on a domain wider than the training one, even in English (see the WSJ-Brown results in (Pradhan et al., 2008)). Only a small number of works that do not require manually labeled SRL training data have been done (Swier and Stevenson, 2004; Swier and Stevenson, 2005; Grenager and Manning, 2006). These papers have replaced this data with the VerbNet (Kipper et al., 2000) lexical resource or a set of manually written rules and supervised parsers. A potential answer to the SRL training data bottleneck are unsupervised SRL models that require little to no manual effort for their training. Their output can be used either by itself, or as training material for modern supervised SRL algorithms. In this paper we present an algorithm for unsupervised argument identification. 
The only type of annotation required by our algorithm is POS tag28 ging, which needs relatively little manual effort. The algorithm consists of two stages. As preprocessing, we use a fully unsupervised parser to parse each sentence. Initially, the set of possible arguments for a given verb consists of all the constituents in the parse tree that do not contain that predicate. The first stage of the algorithm attempts to detect the minimal clause in the sentence that contains the predicate in question. Using this information, it further reduces the possible arguments only to those contained in the minimal clause, and further prunes them according to their position in the parse tree. In the second stage we use pointwise mutual information to estimate the collocation strength between the arguments and the predicate, and use it to filter out instances of weakly collocating predicate argument pairs. We use two measures to evaluate the performance of our algorithm, precision and F-score. Precision reflects the algorithm’s applicability for creating training data to be used by supervised SRL models, while the standard SRL F-score measures the model’s performance when used by itself. The first stage of our algorithm is shown to outperform a strong baseline both in terms of Fscore and of precision. The second stage is shown to increase precision while maintaining a reasonable recall. We evaluated our model on sections 2-21 of Propbank. As is customary in unsupervised parsing work (e.g. (Seginer, 2007)), we bounded sentence length by 10 (excluding punctuation). Our first stage obtained a precision of 52.8%, which is more than 6% improvement over the baseline. Our second stage improved precision to nearly 56%, a 9.3% improvement over the baseline. In addition, we carried out experiments on Spanish (on sentences of length bounded by 15, excluding punctuation), achieving an increase of over 7.5% in precision over the baseline. Our algorithm increases F–score as well, showing an 1.8% improvement over the baseline in English and a 2.2% improvement in Spanish. Section 2 reviews related work. In Section 3 we detail our algorithm. Sections 4 and 5 describe the experimental setup and results. 2 Related Work The advance of machine learning based approaches in this field owes to the usage of large scale annotated corpora. English is the most studied language, using the FrameNet (FN) (Baker et al., 1998) and PropBank (PB) (Palmer et al., 2005) resources. PB is a corpus well suited for evaluation, since it annotates every non-auxiliary verb in a real corpus (the WSJ sections of the Penn Treebank). PB is a standard corpus for SRL evaluation and was used in the CoNLL SRL shared tasks of 2004 (Carreras and M`arquez, 2004) and 2005 (Carreras and M`arquez, 2005). Most work on SRL has been supervised, requiring dozens of thousands of SRL annotated training sentences. In addition, most models assume that a syntactic representation of the sentence is given, commonly in the form of a parse tree, a dependency structure or a shallow parse. Obtaining these is quite costly in terms of required human annotation. The first work to tackle SRL as an independent task is (Gildea and Jurafsky, 2002), which presented a supervised model trained and evaluated on FrameNet. The CoNLL shared tasks of 2004 and 2005 were devoted to SRL, and studied the influence of different syntactic annotations and domain changes on SRL results. 
Computational Linguistics has recently published a special issue on the task (M`arquez et al., 2008), which presents state-of-the-art results and surveys the latest achievements and challenges in the field. Most approaches to the task use a multi-level approach, separating the task to an ARGID and an argument classification sub-tasks. They then use the unlabeled argument structure (without the semantic roles) as training data for the ARGID stage and the entire data (perhaps with other features) for the classification stage. Better performance is achieved on the classification, where stateof-the-art supervised approaches achieve about 81% F-score on the in-domain identification task, of which about 95% are later labeled correctly (M`arquez et al., 2008). There have been several exceptions to the standard architecture described in the last paragraph. One suggestion poses the problem of SRL as a sequential tagging of words, training an SVM classifier to determine for each word whether it is inside, outside or in the beginning of an argument (Hacioglu and Ward, 2003). Other works have integrated argument classification and identification into one step (Collobert and Weston, 2007), while others went further and combined the former two along with parsing into a single model (Musillo 29 and Merlo, 2006). Work on less supervised methods has been scarce. Swier and Stevenson (2004) and Swier and Stevenson (2005) presented the first model that does not use an SRL annotated corpus. However, they utilize the extensive verb lexicon VerbNet, which lists the possible argument structures allowable for each verb, and supervised syntactic tools. Using VerbNet along with the output of a rule-based chunker (in 2004) and a supervised syntactic parser (in 2005), they spot instances in the corpus that are very similar to the syntactic patterns listed in VerbNet. They then use these as seed for a bootstrapping algorithm, which consequently identifies the verb arguments in the corpus and assigns their semantic roles. Another less supervised work is that of (Grenager and Manning, 2006), which presents a Bayesian network model for the argument structure of a sentence. They use EM to learn the model’s parameters from unannotated data, and use this model to tag a test corpus. However, ARGID was not the task of that work, which dealt solely with argument classification. ARGID was performed by manually-created rules, requiring a supervised or manual syntactic annotation of the corpus to be annotated. The three works above are relevant but incomparable to our work, due to the extensive amount of supervision (namely, VerbNet and a rule-based or supervised syntactic system) they used, both in detecting the syntactic structure and in detecting the arguments. Work has been carried out in a few other languages besides English. Chinese has been studied in (Xue, 2008). Experiments on Catalan and Spanish were done in SemEval 2007 (M`arquez et al., 2007) with two participating systems. Attempts to compile corpora for German (Burdchardt et al., 2006) and Arabic (Diab et al., 2008) are also underway. The small number of languages for which extensive SRL annotated data exists reflects the considerable human effort required for such endeavors. Some SRL works have tried to use unannotated data to improve the performance of a base supervised model. 
Methods used include bootstrapping approaches (Gildea and Jurafsky, 2002; Kate and Mooney, 2007), where large unannotated corpora were tagged with SRL annotation, later to be used to retrain the SRL model. Another approach used similarity measures either between verbs (Gordon and Swanson, 2007) or between nouns (Gildea and Jurafsky, 2002) to overcome lexical sparsity. These measures were estimated using statistics gathered from corpora augmenting the model’s training data, and were then utilized to generalize across similar verbs or similar arguments. Attempts to substitute full constituency parsing by other sources of syntactic information have been carried out in the SRL community. Suggestions include posing SRL as a sequence labeling problem (M`arquez et al., 2005) or as an edge tagging problem in a dependency representation (Hacioglu, 2004). Punyakanok et al. (2008) provide a detailed comparison between the impact of using shallow vs. full constituency syntactic information in an English SRL system. Their results clearly demonstrate the advantage of using full annotation. The identification of arguments has also been carried out in the context of automatic subcategorization frame acquisition. Notable examples include (Manning, 1993; Briscoe and Carroll, 1997; Korhonen, 2002) who all used statistical hypothesis testing to filter a parser’s output for arguments, with the goal of compiling verb subcategorization lexicons. However, these works differ from ours as they attempt to characterize the behavior of a verb type, by collecting statistics from various instances of that verb, and not to determine which are the arguments of specific verb instances. The algorithm presented in this paper performs unsupervised clause detection as an intermediate step towards argument identification. Supervised clause detection was also tackled as a separate task, notably in the CoNLL 2001 shared task (Tjong Kim Sang and D`ejean, 2001). Clause information has been applied to accelerating a syntactic parser (Glaysher and Moldovan, 2006). 3 Algorithm In this section we describe our algorithm. It consists of two stages, each of which reduces the set of argument candidates, which a-priori contains all consecutive sequences of words that do not contain the predicate in question. 3.1 Algorithm overview As pre-processing, we use an unsupervised parser that generates an unlabeled parse tree for each sen30 tence (Seginer, 2007). This parser is unique in that it is able to induce a bracketing (unlabeled parsing) from raw text (without even using POS tags) achieving state-of-the-art results. Since our algorithm uses millions to tens of millions sentences, we must use very fast tools. The parser’s high speed (thousands of words per second) enables us to process these large amounts of data. The only type of supervised annotation we use is POS tagging. We use the taggers MXPOST (Ratnaparkhi, 1996) for English and TreeTagger (Schmid, 1994) for Spanish, to obtain POS tags for our model. The first stage of our algorithm uses linguistically motivated considerations to reduce the set of possible arguments. It does so by confining the set of argument candidates only to those constituents which obey the following two restrictions. First, they should be contained in the minimal clause containing the predicate. Second, they should be k-th degree cousins of the predicate in the parse tree. We propose a novel algorithm for clause detection and use its output to determine which of the constituents obey these two restrictions. 
The second stage of the algorithm uses pointwise mutual information to rule out constituents that appear to be weakly collocating with the predicate in question. Since a predicate greatly restricts the type of arguments with which it may appear (this is often referred to as “selectional restrictions”), we expect it to have certain characteristic arguments with which it is likely to collocate. 3.2 Clause detection stage The main idea behind this stage is the observation that most of the arguments of a predicate are contained within the minimal clause that contains the predicate. We tested this on our development data – section 24 of the WSJ PTB, where we saw that 86% of the arguments that are also constituents (in the gold standard parse) were indeed contained in that minimal clause (as defined by the tree label types in the gold standard parse that denote a clause, e.g., S, SBAR). Since we are not provided with clause annotation (or any label), we attempted to detect them in an unsupervised manner. Our algorithm attempts to find sub-trees within the parse tree, whose structure resembles the structure of a full sentence. This approximates the notion of a clause. L L DT The NNS materials L L IN in L DT each NN set L VBP reach L L IN about CD 90 NNS students L L L L L VBP L L VBP L Figure 1: An example of an unlabeled POS tagged parse tree. The middle tree is the ST of ‘reach’ with the root as the encoded ancestor. The bottom one is the ST with its parent as the encoded ancestor. Statistics gathering. In order to detect which of the verb’s ancestors is the minimal clause, we score each of the ancestors and select the one that maximizes the score. We represent each ancestor using its Spinal Tree (ST). The ST of a given verb’s ancestor is obtained by replacing all the constituents that do not contain the verb by a leaf having a label. This effectively encodes all the kth degree cousins of the verb (for every k). The leaf labels are either the word’s POS in case the constituent is a leaf, or the generic label “L” denoting a non-leaf. See Figure 1 for an example. In this stage we collect statistics of the occurrences of STs in a large corpus. For every ST in the corpus, we count the number of times it occurs in a form we consider to be a clause (positive examples), and the number of times it appears in other forms (negative examples). Positive examples are divided into two main types. First, when the ST encodes the root ancestor (as in the middle tree of Figure 1); second, when the ancestor complies to a clause lexicosyntactic pattern. In many languages there is a small set of lexico-syntactic patterns that mark a clause, e.g. the English ‘that’, the German ‘dass’ and the Spanish ‘que’. The patterns which were used in our experiments are shown in Figure 2. For each verb instance, we traverse over its an31 English TO + VB. The constituent starts with “to” followed by a verb in infinitive form. WP. The constituent is preceded by a Wh-pronoun. That. The constituent is preceded by a “that” marked by an “IN” POS tag indicating that it is a subordinating conjunction. Spanish CQUE. The constituent is preceded by a word with the POS “CQUE” which denotes the word “que” as a conjunction. INT. The constituent is preceded by a word with the POS “INT” which denotes an interrogative pronoun. CSUB. The constituent is preceded by a word with one of the POSs “CSUBF”, “CSUBI” or “CSUBX”, which denote a subordinating conjunction. 
Figure 2: The set of lexico-syntactic patterns that mark clauses which were used by our model. cestors from top to bottom. For each of them we update the following counters: sentence(ST) for the root ancestor’s ST, patterni(ST) for the ones complying to the i-th lexico-syntactic pattern and negative(ST) for the other ancestors1. Clause detection. At test time, when detecting the minimal clause of a verb instance, we use the statistics collected in the previous stage. Denote the ancestors of the verb with A1 . . . Am. For each of them, we calculate clause(STAj) and total(STAj). clause(STAj) is the sum of sentence(STAj) and patterni(STAj) if this ancestor complies to the i-th pattern (if there is no such pattern, clause(STAj) is equal to sentence(STAj)). total(STAj) is the sum of clause(STAj) and negative(STAj). The selected ancestor is given by: (1) Amax = argmaxAj clause(STAj ) total(STAj ) An ST whose total(ST) is less than a small threshold2 is not considered a candidate to be the minimal clause, since its statistics may be unreliable. In case of a tie, we choose the lowest constituent that obtained the maximal score. 1If while traversing the tree, we encounter an ancestor whose first word is preceded by a coordinating conjunction (marked by the POS tag “CC”), we refrain from performing any additional counter updates. Structures containing coordinating conjunctions tend not to obey our lexico-syntactic rules. 2We used 4 per million sentences, derived from development data. If there is only one verb in the sentence3 or if clause(STAj) = 0 for every 1 ≤j ≤m, we choose the top level constituent by default to be the minimal clause containing the verb. Otherwise, the minimal clause is defined to be the yield of the selected ancestor. Argument identification. For each predicate in the corpus, its argument candidates are now defined to be the constituents contained in the minimal clause containing the predicate. However, these constituents may be (and are) nested within each other, violating a major restriction on SRL arguments. Hence we now prune our set, by keeping only the siblings of all of the verb’s ancestors, as is common in supervised SRL (Xue and Palmer, 2004). 3.3 Using collocations We use the following observation to filter out some superfluous argument candidates: since the arguments of a predicate many times bear a semantic connection with that predicate, they consequently tend to collocate with it. We collect collocation statistics from a large corpus, which we annotate with parse trees and POS tags. We mark arguments using the argument detection algorithm described in the previous two sections, and extract all (predicate, argument) pairs appearing in the corpus. Recall that for each sentence, the arguments are a subset of the constituents in the parse tree. We use two representations of an argument: one is the POS tag sequence of the terminals contained in the argument, the other is its head word4. The predicate is represented as the conjunction of its lemma with its POS tag. Denote the number of times a predicate x appeared with an argument y by nxy. Denote the total number of (predicate, argument) pairs by N. Using these notations, we define the following quantities: nx = Σynxy, ny = Σxnxy, p(x) = nx N , p(y) = ny N and p(x, y) = nxy N . The pointwise mutual information of x and y is then given by: 3In this case, every argument in the sentence must be related to that verb. 4Since we do not have syntactic labels, we use an approximate notion. 
For English we use the Bikel parser default head word rules (Bikel, 2004). For Spanish, we use the leftmost word. 32 (2) PMI(x, y) = log p(x,y) p(x)·p(y) = log nxy (nx·ny)/N PMI effectively measures the ratio between the number of times x and y appeared together and the number of times they were expected to appear, had they been independent. At test time, when an (x, y) pair is observed, we check if PMI(x, y), computed on the large corpus, is lower than a threshold α for either of x’s representations. If this holds, for at least one representation, we prune all instances of that (x, y) pair. The parameter α may be selected differently for each of the argument representations. In order to avoid using unreliable statistics, we apply this for a given pair only if nx·ny N > r, for some parameter r. That is, we consider PMI(x, y) to be reliable, only if the denominator in equation (2) is sufficiently large. 4 Experimental Setup Corpora. We used the PropBank corpus for development and for evaluation on English. Section 24 was used for the development of our model, and sections 2 to 21 were used as our test data. The free parameters of the collocation extraction phase were tuned on the development data. Following the unsupervised parsing literature, multiple brackets and brackets covering a single word are omitted. We exclude punctuation according to the scheme of (Klein, 2005). As is customary in unsupervised parsing (e.g. (Seginer, 2007)), we bounded the lengths of the sentences in the corpus to be at most 10 (excluding punctuation). This results in 207 sentences in the development data, containing a total of 132 different verbs and 173 verb instances (of the non-auxiliary verbs in the SRL task, see ‘evaluation’ below) having 403 arguments. The test data has 6007 sentences containing 1008 different verbs and 5130 verb instances (as above) having 12436 arguments. Our algorithm requires large amounts of data to gather argument structure and collocation patterns. For the statistics gathering phase of the clause detection algorithm, we used 4.5M sentences of the NANC (Graff, 1995) corpus, bounding their length in the same manner. In order to extract collocations, we used 2M sentences from the British National Corpus (Burnard, 2000) and about 29M sentences from the Dmoz corpus (Gabrilovich and Markovitch, 2005). Dmoz is a web corpus obtained by crawling and cleaning the URLs in the Open Directory Project (dmoz.org). All of the above corpora were parsed using Seginer’s parser and POS-tagged by MXPOST (Ratnaparkhi, 1996). For our experiments on Spanish, we used 3.3M sentences of length at most 15 (excluding punctuation) extracted from the Spanish Wikipedia. Here we chose to bound the length by 15 due to the smaller size of the available test corpus. The same data was used both for the first and the second stages. Our development and test data were taken from the training data released for the SemEval 2007 task on semantic annotation of Spanish (M`arquez et al., 2007). This data consisted of 1048 sentences of length up to 15, from which 200 were randomly selected as our development data and 848 as our test data. The development data included 313 verb instances while the test data included 1279. All corpora were parsed using the Seginer parser and tagged by the “TreeTagger” (Schmid, 1994). Baselines. Since this is the first paper, to our knowledge, which addresses the problem of unsupervised argument identification, we do not have any previous results to compare to. 
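To make the collocation filter of §3.3 concrete, the sketch below (ours, not the authors' code) implements the test around Eq. 2 for a single argument representation, the head word; the paper applies the same test to the POS-sequence representation as well and prunes a pair if either score falls below its threshold. The counts, thresholds and the treatment of unseen pairs are invented for the example.

```python
# Collocation filter around Eq. 2: prune a (predicate, argument) pair when
# its PMI falls below alpha, applying the test only when the expected count
# n_x * n_y / N exceeds r (i.e. when the statistic is considered reliable).
import math
from collections import Counter

def build_counts(pairs):
    """pairs: list of (predicate, argument) tokens extracted from the corpus."""
    n_xy = Counter(pairs)
    n_x = Counter(x for x, _ in pairs)
    n_y = Counter(y for _, y in pairs)
    return n_xy, n_x, n_y, len(pairs)

def keep_pair(x, y, n_xy, n_x, n_y, N, alpha, r):
    expected = n_x[x] * n_y[y] / N
    if expected <= r:                       # too sparse: do not filter
        return True
    if n_xy[(x, y)] == 0:                   # unseen pair: treat PMI as -inf (our choice)
        return False
    return math.log(n_xy[(x, y)] / expected) >= alpha

# invented toy counts and thresholds
pairs = ([("reach/VBP", "students")] * 8 + [("reach/VBP", "the")] * 2 +
         [("see/VBP", "the")] * 40 + [("see/VBP", "students")] * 1)
n_xy, n_x, n_y, N = build_counts(pairs)
print(keep_pair("reach/VBP", "students", n_xy, n_x, n_y, N, alpha=0.0, r=1.0))  # True (kept)
print(keep_pair("reach/VBP", "the", n_xy, n_x, n_y, N, alpha=0.0, r=1.0))       # False (pruned)
```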
We instead compare to a baseline which marks all k-th degree cousins of the predicate (for every k) as arguments (this is the second pruning we use in the clause detection stage). We name this baseline the ALL COUSINS baseline. We note that a random baseline would score very poorly since any sequence of terminals which does not contain the predicate is a possible candidate. Therefore, beating this random baseline is trivial. Evaluation. Evaluation is carried out using standard SRL evaluation software5. The algorithm is provided with a list of predicates, whose arguments it needs to annotate. For the task addressed in this paper, non-consecutive parts of arguments are treated as full arguments. A match is considered each time an argument in the gold standard data matches a marked argument in our model’s output. An unmatched argument is an argument which appears in the gold standard data, and fails to appear in our model’s output, and an excessive argument is an argument which appears in our model’s output but does not appear in the gold standard. Precision and recall are defined accordingly. We report an F-score as well (the harmonic mean of precision and recall). We do not attempt 5http://www.lsi.upc.edu/∼srlconll/soft.html#software. 33 to identify multi-word verbs, and therefore do not report the model’s performance in identifying verb boundaries. Since our model detects clauses as an intermediate product, we provide a separate evaluation of this task for the English corpus. We show results on our development data. We use the standard parsing F-score evaluation measure. As a gold standard in this evaluation, we mark for each of the verbs in our development data the minimal clause containing it. A minimal clause is the lowest ancestor of the verb in the parse tree that has a syntactic label of a clause according to the gold standard parse of the PTB. A verb is any terminal marked by one of the POS tags of type verb according to the gold standard POS tags of the PTB. 5 Results Our results are shown in Table 1. The left section presents results on English and the right section presents results on Spanish. The top line lists results of the clause detection stage alone. The next two lines list results of the full algorithm (clause detection + collocations) in two different settings of the collocation stage. The bottom line presents the performance of the ALL COUSINS baseline. In the “Collocation Maximum Precision” setting the parameters of the collocation stage (α and r) were generally tuned such that maximal precision is achieved while preserving a minimal recall level (40% for English, 20% for Spanish on the development data). In the “Collocation Maximum Fscore” the collocation parameters were generally tuned such that the maximum possible F-score for the collocation algorithm is achieved. The best or close to best F-score is achieved when using the clause detection algorithm alone (59.14% for English, 23.34% for Spanish). Note that for both English and Spanish F-score improvements are achieved via a precision improvement that is more significant than the recall degradation. F-score maximization would be the aim of a system that uses the output of our unsupervised ARGID by itself. The “Collocation Maximum Precision” achieves the best precision level (55.97% for English, 21.8% for Spanish) but at the expense of the largest recall loss. Still, it maintains a reasonable level of recall. 
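The ALL COUSINS baseline used throughout this section, like the ancestor-sibling pruning at the end of the clause-detection stage, reduces to a simple walk up the parse tree. The sketch below is our own illustration: trees are nested Python lists, and the bracketing of the Figure 1 sentence is invented for the example rather than taken from the unsupervised parser's actual output.

```python
# Collect all k-th degree cousins of the predicate (for every k): walk up the
# unlabeled parse tree from the predicate leaf and gather the siblings of
# every node on that path.  Trees are nested lists; leaves are strings.

def path_to(tree, target):
    """Return the list of subtrees from the root down to the target leaf."""
    if tree == target:
        return [tree]
    if isinstance(tree, list):
        for child in tree:
            sub = path_to(child, target)
            if sub is not None:
                return [tree] + sub
    return None

def cousins(tree, predicate):
    path = path_to(tree, predicate)
    candidates = []
    for parent, child in zip(path, path[1:]):
        candidates.extend(c for c in parent if c is not child)   # siblings at each level
    return candidates

# "The materials in each set reach about 90 students" (illustrative bracketing)
tree = [[["The", "materials"], ["in", ["each", "set"]]],
        ["reach", [["about", "90"], "students"]]]
print(cousins(tree, "reach"))   # the candidate constituents flanking "reach"
```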
The “Collocation Maximum F-score” is an example of a model that provides a precision improvement (over both the baseline and the clause detection stage) with a relatively small recall degradation. In the Spanish experiments its F-score (23.87%) is even a bit higher than that of the clause detection stage (23.34%). The full two–stage algorithm (clause detection + collocations) should thus be used when we intend to use the model’s output as training data for supervised SRL engines or supervised ARGID algorithms. In our algorithm, the initial set of potential arguments consists of constituents in the Seginer parser’s parse tree. Consequently the fraction of arguments that are also constituents (81.87% for English and 51.83% for Spanish) poses an upper bound on our algorithm’s recall. Note that the recall of the ALL COUSINS baseline is 74.27% (45.75%) for English (Spanish). This score emphasizes the baseline’s strength, and justifies the restriction that the arguments should be k-th cousins of the predicate. The difference between these bounds for the two languages provides a partial explanation for the corresponding gap in the algorithm’s performance. Figure 3 shows the precision of the collocation model (on development data) as a function of the amount of data it was given. We can see that the algorithm reaches saturation at about 5M sentences. It achieves this precision while maintaining a reasonable recall (an average recall of 43.1% after saturation). The parameters of the collocation model were separately tuned for each corpus size, and the graph displays the maximum which was obtained for each of the corpus sizes. To better understand our model’s performance, we performed experiments on the English corpus to test how well its first stage detects clauses. Clause detection is used by our algorithm as a step towards argument identification, but it can be of potential benefit for other purposes as well (see Section 2). The results are 23.88% recall and 40% precision. As in the ARGID task, a random selection of arguments would have yielded an extremely poor result. 6 Conclusion In this work we presented the first algorithm for argument identification that uses neither supervised syntactic annotation nor SRL tagged data. We have experimented on two languages: English and Spanish. The straightforward adaptability of un34 English (Test Data) Spanish (Test Data) Precision Recall F1 Precision Recall F1 Clause Detection 52.84 67.14 59.14 18.00 33.19 23.34 Collocation Maximum F–score 54.11 63.53 58.44 20.22 29.13 23.87 Collocation Maximum Precision 55.97 40.02 46.67 21.80 18.47 20.00 ALL COUSINS baseline 46.71 74.27 57.35 14.16 45.75 21.62 Table 1: Precision, Recall and F1 score for the different stages of our algorithm. Results are given for English (PTB, sentences length bounded by 10, left part of the table) and Spanish (SemEval 2007 Spanish SRL task, right part of the table). The results of the collocation (second) stage are given in two configurations, Collocation Maximum F-score and Collocation Maximum Precision (see text). The upper bounds on Recall, obtained by taking all arguments output by our unsupervised parser, are 81.87% for English and 51.83% for Spanish. 0 2 4 6 8 10 42 44 46 48 50 52 Number of Sentences (Millions) Precision Second Stage First Stage Baseline Figure 3: The performance of the second stage on English (squares) vs. corpus size. The precision of the baseline (triangles) and of the first stage (circles) is displayed for reference. 
The graph indicates the maximum precision obtained for each corpus size. The graph reaches saturation at about 5M sentences. The average recall of the sampled points from there on is 43.1%. Experiments were performed on the English development data. supervised models to different languages is one of their most appealing characteristics. The recent availability of unsupervised syntactic parsers has offered an opportunity to conduct research on SRL, without reliance on supervised syntactic annotation. This work is the first to address the application of unsupervised parses to an SRL related task. Our model displayed an increase in precision of 9% in English and 8% in Spanish over a strong baseline. Precision is of particular interest in this context, as instances tagged by high quality annotation could be later used as training data for supervised SRL algorithms. In terms of F–score, our model showed an increase of 1.8% in English and of 2.2% in Spanish over the baseline. Although the quality of unsupervised parses is currently low (compared to that of supervised approaches), using great amounts of data in identifying recurring structures may reduce noise and in addition address sparsity. The techniques presented in this paper are based on this observation, using around 35M sentences in total for English and 3.3M sentences for Spanish. As this is the first work which addressed unsupervised ARGID, many questions remain to be explored. Interesting issues to address include assessing the utility of the proposed methods when supervised parses are given, comparing our model to systems with no access to unsupervised parses and conducting evaluation using more relaxed measures. Unsupervised methods for syntactic tasks have matured substantially in the last few years. Notable examples are (Clark, 2003) for unsupervised POS tagging and (Smith and Eisner, 2006) for unsupervised dependency parsing. Adapting our algorithm to use the output of these models, either to reduce the little supervision our algorithm requires (POS tagging) or to provide complementary syntactic information, is an interesting challenge for future work. References Collin F. Baker, Charles J. Fillmore and John B. Lowe, 1998. The Berkeley FrameNet Project. ACLCOLING ’98. Daniel M. Bikel, 2004. Intricacies of Collins’ Parsing Model. Computational Linguistics, 30(4):479–511. Ted Briscoe, John Carroll, 1997. Automatic Extraction of Subcategorization from Corpora. Applied NLP 1997. Aljoscha Burchardt, Katrin Erk, Anette Frank, Andrea Kowalski, Sebastian Pad and Manfred Pinkal, 2006 The SALSA Corpus: a German Corpus Resource for Lexical Semantics. LREC ’06. Lou Burnard, 2000. User Reference Guide for the British National Corpus. Technical report, Oxford University. Xavier Carreras and Llu`ıs M`arquez, 2004. Introduction to the CoNLL–2004 Shared Task: Semantic Role Labeling. CoNLL ’04. 35 Xavier Carreras and Llu`ıs M`arquez, 2005. Introduction to the CoNLL–2005 Shared Task: Semantic Role Labeling. CoNLL ’05. Alexander Clark, 2003. Combining Distributional and Morphological Information for Part of Speech Induction. EACL ’03. Ronan Collobert and Jason Weston, 2007. Fast Semantic Extraction Using a Novel Neural Network Architecture. ACL ’07. Mona Diab, Aous Mansouri, Martha Palmer, Olga Babko-Malaya, Wajdi Zaghouani, Ann Bies and Mohammed Maamouri, 2008. A pilot Arabic PropBank. LREC ’08. Evgeniy Gabrilovich and Shaul Markovitch, 2005. Feature Generation for Text Categorization using World Knowledge. IJCAI ’05. 
Daniel Gildea and Daniel Jurafsky, 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245–288. Elliot Glaysher and Dan Moldovan, 2006. Speeding Up Full Syntactic Parsing by Leveraging Partial Parsing Decisions. COLING/ACL ’06 poster session. Andrew Gordon and Reid Swanson, 2007. Generalizing Semantic Role Annotations across Syntactically Similar Verbs. ACL ’07. David Graff, 1995. North American News Text Corpus. Linguistic Data Consortium. LDC95T21. Trond Grenager and Christopher D. Manning, 2006. Unsupervised Discovery of a Statistical Verb Lexicon. EMNLP ’06. Kadri Hacioglu, 2004. Semantic Role Labeling using Dependency Trees. COLING ’04. Kadri Hacioglu and Wayne Ward, 2003. Target Word Detection and Semantic Role Chunking using Support Vector Machines. HLT-NAACL ’03. Rohit J. Kate and Raymond J. Mooney, 2007. SemiSupervised Learning for Semantic Parsing using Support Vector Machines. HLT–NAACL ’07. Karin Kipper, Hoa Trang Dang and Martha Palmer, 2000. Class-Based Construction of a Verb Lexicon. AAAI ’00. Dan Klein, 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford University. Anna Korhonen, 2002. Subcategorization Acquisition. Ph.D. thesis, University of Cambridge. Christopher D. Manning, 1993. Automatic Acquisition of a Large Subcategorization Dictionary. ACL ’93. Llu`ıs M`arquez, Xavier Carreras, Kenneth C. Littkowski and Suzanne Stevenson, 2008. Semantic Role Labeling: An introdution to the Special Issue. Computational Linguistics, 34(2):145–159 Llu`ıs M`arquez, Jesus Gim`enez Pere Comas and Neus Catal`a, 2005. Semantic Role Labeling as Sequential Tagging. CoNLL ’05. Llu`ıs M`arquez, Lluis Villarejo, M. A. Mart`ı and Mariona Taul`e, 2007. SemEval–2007 Task 09: Multilevel Semantic Annotation of Catalan and Spanish. The 4th international workshop on Semantic Evaluations (SemEval ’07). Gabriele Musillo and Paula Merlo, 2006. Accurate Parsing of the proposition bank. HLT-NAACL ’06. Martha Palmer, Daniel Gildea and Paul Kingsbury, 2005. The Proposition Bank: A Corpus Annotated with Semantic Roles. Computational Linguistics, 31(1):71–106. Sameer Pradhan, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin and Daniel Jurafsky, 2005. Support Vector Learning for Semantic Argument Classification. Machine Learning, 60(1):11– 39. Sameer Pradhan, Wayne Ward, James H. Martin, 2008. Towards Robust Semantic Role Labeling. Computational Linguistics, 34(2):289–310. Adwait Ratnaparkhi, 1996. Maximum Entropy PartOf-Speech Tagger. EMNLP ’96. Helmut Schmid, 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees International Conference on New Methods in Language Processing. Yoav Seginer, 2007. Fast Unsupervised Incremental Parsing. ACL ’07. Noah A. Smith and Jason Eisner, 2006. Annealing Structural Bias in Multilingual Weighted Grammar Induction. ACL ’06. Robert S. Swier and Suzanne Stevenson, 2004. Unsupervised Semantic Role Labeling. EMNLP ’04. Robert S. Swier and Suzanne Stevenson, 2005. Exploiting a Verb Lexicon in Automatic Semantic Role Labelling. EMNLP ’05. Erik F. Tjong Kim Sang and Herv´e D´ejean, 2001. Introduction to the CoNLL-2001 Shared Task: Clause Identification. CoNLL ’01. Nianwen Xue and Martha Palmer, 2004. Calibrating Features for Semantic Role Labeling. EMNLP ’04. Nianwen Xue, 2008. Labeling Chinese Predicates with Semantic Roles. Computational Linguistics, 34(2):225–255. 36
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 351–359, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Non-Projective Dependency Parsing in Expected Linear Time Joakim Nivre Uppsala University, Department of Linguistics and Philology, SE-75126 Uppsala V¨axj¨o University, School of Mathematics and Systems Engineering, SE-35195 V¨axj¨o E-mail: [email protected] Abstract We present a novel transition system for dependency parsing, which constructs arcs only between adjacent words but can parse arbitrary non-projective trees by swapping the order of words in the input. Adding the swapping operation changes the time complexity for deterministic parsing from linear to quadratic in the worst case, but empirical estimates based on treebank data show that the expected running time is in fact linear for the range of data attested in the corpora. Evaluation on data from five languages shows state-of-the-art accuracy, with especially good results for the labeled exact match score. 1 Introduction Syntactic parsing using dependency structures has become a standard technique in natural language processing with many different parsing models, in particular data-driven models that can be trained on syntactically annotated corpora (Yamada and Matsumoto, 2003; Nivre et al., 2004; McDonald et al., 2005a; Attardi, 2006; Titov and Henderson, 2007). A hallmark of many of these models is that they can be implemented very efficiently. Thus, transition-based parsers normally run in linear or quadratic time, using greedy deterministic search or fixed-width beam search (Nivre et al., 2004; Attardi, 2006; Johansson and Nugues, 2007; Titov and Henderson, 2007), and graph-based models support exact inference in at most cubic time, which is efficient enough to make global discriminative training practically feasible (McDonald et al., 2005a; McDonald et al., 2005b). However, one problem that still has not found a satisfactory solution in data-driven dependency parsing is the treatment of discontinuous syntactic constructions, usually modeled by non-projective dependency trees, as illustrated in Figure 1. In a projective dependency tree, the yield of every subtree is a contiguous substring of the sentence. This is not the case for the tree in Figure 1, where the subtrees rooted at node 2 (hearing) and node 4 (scheduled) both have discontinuous yields. Allowing non-projective trees generally makes parsing computationally harder. Exact inference for parsing models that allow non-projective trees is NP hard, except under very restricted independence assumptions (Neuhaus and Br¨oker, 1997; McDonald and Pereira, 2006; McDonald and Satta, 2007). There is recent work on algorithms that can cope with important subsets of all nonprojective trees in polynomial time (Kuhlmann and Satta, 2009; G´omez-Rodr´ıguez et al., 2009), but the time complexity is at best O(n6), which can be problematic in practical applications. Even the best algorithms for deterministic parsing run in quadratic time, rather than linear (Nivre, 2008a), unless restricted to a subset of non-projective structures as in Attardi (2006) and Nivre (2007). But allowing non-projective dependency trees also makes parsing empirically harder, because it requires that we model relations between nonadjacent structures over potentially unbounded distances, which often has a negative impact on parsing accuracy. 
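As a concrete aside, the projectivity condition just invoked — that the yield of every subtree is a contiguous substring of the sentence — can be tested directly from a vector of head indices. The sketch below is our own illustration (the head-vector encoding and function names are not from the paper); applied to the tree in Figure 1 it reproduces the non-projectivity noted above.

```python
# Minimal sketch (not from the paper): check projectivity of a dependency
# tree given as a head vector, where heads[i] is the parent of word i
# (words are 1-indexed, 0 is the artificial root, heads[0] is unused).

def yield_of(node, heads):
    """Return the set of word positions in the subtree rooted at `node`."""
    members = {node}
    changed = True
    while changed:
        changed = False
        for i in range(1, len(heads)):
            if heads[i] in members and i not in members:
                members.add(i)
                changed = True
    return members

def is_projective(heads):
    """A tree is projective iff the yield of every subtree is contiguous."""
    for node in range(1, len(heads)):
        y = yield_of(node, heads)
        if max(y) - min(y) + 1 != len(y):   # a gap in the yield
            return False
    return True

# Heads for the sentence in Figure 1 (A hearing is scheduled on the issue today .)
heads = [None, 2, 3, 0, 3, 2, 7, 5, 4, 3]
print(is_projective(heads))   # False: 'hearing' and 'scheduled' have gapped yields
```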
On the other hand, it is hardly possible to ignore non-projective structures completely, given that 25% or more of the sentences in some languages cannot be given a linguistically adequate analysis without invoking non-projective structures (Nivre, 2006; Kuhlmann and Nivre, 2006; Havelka, 2007). Current approaches to data-driven dependency parsing typically use one of two strategies to deal with non-projective trees (unless they ignore them completely). Either they employ a non-standard parsing algorithm that can combine non-adjacent substructures (McDonald et al., 2005b; Attardi, 2006; Nivre, 2007), or they try to recover non351 ROOT0 A1   ? DET hearing2   ? SBJ is3   ? ROOT scheduled4   ? VG on5   ? NMOD the6   ? DET issue7   ? PC today8   ? ADV .9 ?   P Figure 1: Dependency tree for an English sentence (non-projective). projective dependencies by post-processing the output of a strictly projective parser (Nivre and Nilsson, 2005; Hall and Nov´ak, 2005; McDonald and Pereira, 2006). In this paper, we will adopt a different strategy, suggested in recent work by Nivre (2008b) and Titov et al. (2009), and propose an algorithm that only combines adjacent substructures but derives non-projective trees by reordering the input words. The rest of the paper is structured as follows. In Section 2, we define the formal representations needed and introduce the framework of transitionbased dependency parsing. In Section 3, we first define a minimal transition system and explain how it can be used to perform projective dependency parsing in linear time; we then extend the system with a single transition for swapping the order of words in the input and demonstrate that the extended system can be used to parse unrestricted dependency trees with a time complexity that is quadratic in the worst case but still linear in the best case. In Section 4, we present experiments indicating that the expected running time of the new system on naturally occurring data is in fact linear and that the system achieves state-ofthe-art parsing accuracy. We discuss related work in Section 5 and conclude in Section 6. 2 Background Notions 2.1 Dependency Graphs and Trees Given a set L of dependency labels, a dependency graph for a sentence x = w1, . . . , wn is a directed graph G = (Vx, A), where 1. Vx = {0, 1, . . . , n} is a set of nodes, 2. A ⊆Vx × L × Vx is a set of labeled arcs. The set Vx of nodes is the set of positive integers up to and including n, each corresponding to the linear position of a word in the sentence, plus an extra artificial root node 0. The set A of arcs is a set of triples (i, l, j), where i and j are nodes and l is a label. For a dependency graph G = (Vx, A) to be well-formed, we in addition require that it is a tree rooted at the node 0, as illustrated in Figure 1. 2.2 Transition Systems Following Nivre (2008a), we define a transition system for dependency parsing as a quadruple S = (C, T, cs, Ct), where 1. C is a set of configurations, 2. T is a set of transitions, each of which is a (partial) function t : C →C, 3. cs is an initialization function, mapping a sentence x = w1, . . . , wn to a configuration c ∈C, 4. Ct ⊆C is a set of terminal configurations. In this paper, we take the set C of configurations to be the set of all triples c = (Σ, B, A) such that Σ and B are disjoint sublists of the nodes Vx of some sentence x, and A is a set of dependency arcs over Vx (and some label set L); we take the initial configuration for a sentence x = w1, . . . , wn to be cs(x) = ([0], [1, . . . 
, n], { }); and we take the set Ct of terminal configurations to be the set of all configurations of the form c = ([0], [ ], A) (for any arc set A). The set T of transitions will be discussed in detail in Sections 3.1–3.2. We will refer to the list Σ as the stack and the list B as the buffer, and we will use the variables σ and β for arbitrary sublists of Σ and B, respectively. For reasons of perspicuity, we will write Σ with its head (top) to the right and B with its head to the left. Thus, c = ([σ|i], [j|β], A) is a configuration with the node i on top of the stack Σ and the node j as the first node in the buffer B. Given a transition system S = (C, T, cs, Ct), a transition sequence for a sentence x is a sequence C0,m = (c0, c1, . . . , cm) of configurations, such that 1. c0 = cs(x), 2. cm ∈Ct, 3. for every i (1 ≤i ≤m), ci = t(ci−1) for some t ∈T. 352 Transition Condition LEFT-ARCl ([σ|i, j], B, A) ⇒([σ|j], B, A∪{(j, l, i)}) i ̸= 0 RIGHT-ARCl ([σ|i, j], B, A) ⇒([σ|i], B, A∪{(i, l, j)}) SHIFT (σ, [i|β], A) ⇒([σ|i], β, A) SWAP ([σ|i, j], β, A) ⇒([σ|j], [i|β], A) 0 < i < j Figure 2: Transitions for dependency parsing; Tp = {LEFT-ARCl, RIGHT-ARCl, SHIFT}; Tu = Tp ∪{SWAP}. The parse assigned to S by C0,m is the dependency graph Gcm = (Vx, Acm), where Acm is the set of arcs in cm. A transition system S is sound for a class G of dependency graphs iff, for every sentence x and transition sequence C0,m for x in S, Gcm ∈G. S is complete for G iff, for every sentence x and dependency graph G for x in G, there is a transition sequence C0,m for x in S such that Gcm = G. 2.3 Deterministic Transition-Based Parsing An oracle for a transition system S is a function o : C →T. Ideally, o should always return the optimal transition t for a given configuration c, but all we require formally is that it respects the preconditions of transitions in T. That is, if o(c) = t then t is permissible in c. Given an oracle o, deterministic transition-based parsing can be achieved by the following simple algorithm: PARSE(o, x) 1 c ←cs(x) 2 while c ̸∈Ct 3 do t ←o(c); c ←t(c) 4 return Gc Starting in the initial configuration cs(x), the parser repeatedly calls the oracle function o for the current configuration c and updates c according to the oracle transition t. The iteration stops when a terminal configuration is reached. It is easy to see that, provided that there is at least one transition sequence in S for every sentence, the parser constructs exactly one transition sequence C0,m for a sentence x and returns the parse defined by the terminal configuration cm, i.e., Gcm = (Vx, Acm). Assuming that the calls o(c) and t(c) can both be performed in constant time, the worst-case time complexity of a deterministic parser based on a transition system S is given by an upper bound on the length of transition sequences in S. When building practical parsing systems, the oracle can be approximated by a classifier trained on treebank data, a technique that has been used successfully in a number of systems (Yamada and Matsumoto, 2003; Nivre et al., 2004; Attardi, 2006). This is also the approach we will take in the experimental evaluation in Section 4. 3 Transitions for Dependency Parsing Having defined the set of configurations, including initial and terminal configurations, we will now focus on the transition set T required for dependency parsing. The total set of transitions that will be considered is given in Figure 2, but we will start in Section 3.1 with the subset Tp (p for projective) consisting of the first three. 
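To make the abstract machinery concrete, the configurations of Section 2.2 and the PARSE loop above can be rendered directly in code. The sketch below is our own illustration (the class and function names are ours, not the authors' implementation); the individual transition functions are filled in after they are introduced in Sections 3.1–3.2.

```python
# Our own rendering of the configurations and the PARSE(o, x) loop.

from dataclasses import dataclass, field

@dataclass
class Configuration:
    stack: list            # Sigma, with its head (top) at the right end
    buffer: list           # B, with its head at the left end
    arcs: set = field(default_factory=set)   # set of (head, label, dependent) triples

def initial(n):
    """c_s(x) = ([0], [1, ..., n], {}) for a sentence of n words."""
    return Configuration([0], list(range(1, n + 1)), set())

def is_terminal(c):
    """C_t: the stack contains only the root and the buffer is empty."""
    return c.stack == [0] and not c.buffer

def parse(oracle, n):
    """Deterministic transition-based parsing: repeatedly apply the
    transition predicted by the oracle until a terminal configuration."""
    c = initial(n)
    while not is_terminal(c):
        transition = oracle(c)   # at test time, a trained classifier
        c = transition(c)
    return c.arcs                # the arc set defines the dependency tree
```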
In Section 3.2, we will add the fourth transition (SWAP) to get the full transition set Tu (u for unrestricted). 3.1 Projective Dependency Parsing The minimal transition set Tp for projective dependency parsing contains three transitions: 1. LEFT-ARCl updates a configuration with i, j on top of the stack by adding (j, l, i) to A and replacing i, j on the stack by j alone. It is permissible as long as i is distinct from 0. 2. RIGHT-ARCl updates a configuration with i, j on top of the stack by adding (i, l, j) to A and replacing i, j on the stack by i alone. 3. SHIFT updates a configuration with i as the first node of the buffer by removing i from the buffer and pushing it onto the stack. The system Sp = (C, Tp, cs, Ct) is sound and complete for the set of projective dependency trees (over some label set L) and has been used, in slightly different variants, by a number of transition-based dependency parsers (Yamada and Matsumoto, 2003; Nivre, 2004; Attardi, 2006; 353 Transition Stack (Σ) Buffer (B) Added Arc [ROOT0] [A1, . . . , .9] SHIFT [ROOT0, A1] [hearing2, . . . , .9] SHIFT [ROOT0, A1, hearing2] [is3, . . . , .9] LADET [ROOT0, hearing2] [is3, . . . , .9] (2, DET, 1) SHIFT [ROOT0, hearing2, is3] [scheduled4, . . . , .9] SHIFT [ROOT0, . . . , is3, scheduled4] [on5, . . . , .9] SHIFT [ROOT0, . . . , scheduled4, on5] [the6, . . . , .9] SWAP [ROOT0, . . . , is3, on5] [scheduled4, . . . , .9] SWAP [ROOT0, hearing2, on5] [is3, . . . , .9] SHIFT [ROOT0, . . . , on5, is3] [scheduled4, . . . , .9] SHIFT [ROOT0, . . . , is3, scheduled4] [the6, . . . , .9] SHIFT [ROOT0, . . . , scheduled4, the6] [issue7, . . . , .9] SWAP [ROOT0, . . . , is3, the6] [scheduled4, . . . , .9] SWAP [ROOT0, . . . , on5, the6] [is3, . . . , .9] SHIFT [ROOT0, . . . , the6, is3] [scheduled4, . . . , .9] SHIFT [ROOT0, . . . , is3, scheduled4] [issue7, . . . , .9] SHIFT [ROOT0, . . . , scheduled4, issue7] [today8, .9] SWAP [ROOT0, . . . , is3, issue7] [scheduled4, . . . , .9] SWAP [ROOT0, . . . , the6, issue7] [is3, . . . , .9] LADET [ROOT0, . . . , on5, issue7] [is3, . . . , .9] (7, DET, 6) RAPC [ROOT0, hearing2, on5] [is3, . . . , .9] (5, PC, 7) RANMOD [ROOT0, hearing2] [is3, . . . , .9] (2, NMOD, 5) SHIFT [ROOT0, . . . , hearing2, is3] [scheduled4, . . . , .9] LASBJ [ROOT0, is3] [scheduled4, . . . , .9] (3, SBJ, 2) SHIFT [ROOT0, is3, scheduled4] [today8, .9] SHIFT [ROOT0, . . . , scheduled4, today8] [.9] RAADV [ROOT0, is3, scheduled4] [.9] (4, ADV, 8) RAVG [ROOT0, is3] [.9] (3, VG, 4) SHIFT [ROOT0, is3, .9] [ ] RAP [ROOT0, is3] [ ] (3, P, 9) RAROOT [ROOT0] [ ] (0, ROOT, 3) Figure 3: Transition sequence for parsing the sentence in Figure 1 (LA = LEFT-ARC, RA = REFT-ARC). Nivre, 2008a). For proofs of soundness and completeness, see Nivre (2008a). As noted in section 2, the worst-case time complexity of a deterministic transition-based parser is given by an upper bound on the length of transition sequences. In Sp, the number of transitions for a sentence x = w1, . . . , wn is always exactly 2n, since a terminal configuration can only be reached after n SHIFT transitions (moving nodes 1, . . . , n from B to Σ) and n applications of LEFT-ARCl or RIGHT-ARCl (removing the same nodes from Σ). Hence, the complexity of deterministic parsing is O(n) in the worst case (as well as in the best case). 3.2 Unrestricted Dependency Parsing We now consider what happens when we add the fourth transition from Figure 2 to get the extended transition set Tu. 
The SWAP transition updates a configuration with stack [σ|i, j] by moving the node i back to the buffer. This has the effect that the order of the nodes i and j in the appended list Σ+B is reversed compared to the original word order in the sentence. It is important to note that SWAP is only permissible when the two nodes on top of the stack are in the original word order, which prevents the same two nodes from being swapped more than once, and when the leftmost node i is distinct from the root node 0. Note also that SWAP moves the node i back to the buffer, so that LEFT-ARCl, RIGHT-ARCl or SWAP can subsequently apply with the node j on top of the stack. The fact that we can swap the order of nodes, implicitly representing subtrees, means that we can construct non-projective trees by applying 354 o(c) =          LEFT-ARCl if c = ([σ|i, j], B, Ac), (j, l, i)∈A and Ai ⊆Ac RIGHT-ARCl if c = ([σ|i, j], B, Ac), (i, l, j)∈A and Aj ⊆Ac SWAP if c = ([σ|i, j], B, Ac) and j <G i SHIFT otherwise Figure 4: Oracle function for Su = (C, Tu, cs, Ct) with target tree G = (Vx, A). We use the notation Ai to denote the subset of A that only contains the outgoing arcs of the node i. LEFT-ARCl or RIGHT-ARCl to subtrees whose yields are not adjacent according to the original word order. This is illustrated in Figure 3, which shows the transition sequence needed to parse the example in Figure 1. For readability, we represent both the stack Σ and the buffer B as lists of tokens, indexed by position, rather than abstract nodes. The last column records the arc that is added to the arc set A in a given transition (if any). Given the simplicity of the extension, it is rather remarkable that the system Su = (C, Tu, cs, Ct) is sound and complete for the set of all dependency trees (over some label set L), including all non-projective trees. The soundness part is trivial, since any terminating transition sequence will have to move all the nodes 1, . . . , n from B to Σ (using SHIFT) and then remove them from Σ (using LEFT-ARCl or RIGHT-ARCl), which will produce a tree with root 0. For completeness, we note first that projectivity is not a property of a dependency tree in itself, but of the tree in combination with a word order, and that a tree can always be made projective by reordering the nodes. For instance, let x be a sentence with dependency tree G = (Vx, A), and let <G be the total order on Vx defined by an inorder traversal of G that respects the local ordering of a node and its children given by the original word order. Regardless of whether G is projective with respect to x, it must by necessity be projective with respect to <G. We call <G the projective order corresponding to x and G and use it as our canonical way of finding a node order that makes the tree projective. By way of illustration, the projective order for the sentence and tree in Figure 1 is: A1 <G hearing2 <G on5 <G the6 <G issue7 <G is3 <G scheduled4 <G today8 <G .9. If the words of a sentence x with dependency tree G are already in projective order, this means that G is projective with respect to x and that we can parse the sentence using only transitions in Tp, because nodes can be pushed onto the stack in projective order using only the SHIFT transition. If the words are not in projective order, we can use a combination of SHIFT and SWAP transitions to ensure that nodes are still pushed onto the stack in projective order. 
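Before making this sorting strategy precise, it may help to see the four transitions of Figure 2 written out explicitly. The following continues the Configuration sketch above and is again our own rendering rather than the authors' code, with the preconditions expressed as assertions.

```python
# The transitions of Figure 2 as functions on configurations (our encoding:
# stack top at the right end, buffer head at the left end).

def left_arc(label):
    def t(c):
        i, j = c.stack[-2], c.stack[-1]
        assert i != 0                      # the root may not become a dependent
        return Configuration(c.stack[:-2] + [j], c.buffer,
                             c.arcs | {(j, label, i)})
    return t

def right_arc(label):
    def t(c):
        i, j = c.stack[-2], c.stack[-1]
        return Configuration(c.stack[:-1], c.buffer,
                             c.arcs | {(i, label, j)})
    return t

def shift(c):
    i, rest = c.buffer[0], c.buffer[1:]
    return Configuration(c.stack + [i], rest, c.arcs)

def swap(c):
    i, j = c.stack[-2], c.stack[-1]
    assert 0 < i < j                       # only nodes still in original word order
    return Configuration(c.stack[:-2] + [j], [i] + c.buffer, c.arcs)
```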
More precisely, if the next node in the projective order is the kth node in the buffer, we perform k SHIFT transitions, to get this node onto the stack, followed by k−1 SWAP transitions, to move the preceding k −1 nodes back to the buffer.1 In this way, the parser can effectively sort the input nodes into projective order on the stack, repeatedly extracting the minimal element of <G from the buffer, and build a tree that is projective with respect to the sorted order. Since any input can be sorted using SHIFT and SWAP, and any projective tree can be built using SHIFT, LEFT-ARCl and RIGHT-ARCl, the system Su is complete for the set of all dependency trees. In Figure 4, we define an oracle function o for the system Su, which implements this “sort and parse” strategy and predicts the optimal transition t out of the current configuration c, given the target dependency tree G = (Vx, A) and the projective order <G. The oracle predicts LEFT-ARCl or RIGHT-ARCl if the two top nodes on the stack should be connected by an arc and if the dependent node of this arc is already connected to all its dependents; it predicts SWAP if the two top nodes are not in projective order; and it predicts SHIFT otherwise. This is the oracle that has been used to generate training data for classifiers in the experimental evaluation in Section 4. Let us now consider the time complexity of the extended system Su = (C, Tu, cs, Ct) and let us begin by observing that 2n is still a lower bound on the number of transitions required to reach a terminal configuration. A sequence of 2n transi1This can be seen in Figure 3, where transitions 4–8, 9– 13, and 14–18 are the transitions needed to make sure that on5, the6 and issue7 are processed on the stack before is3 and scheduled4. 355 Figure 5: Abstract running time during training (black) and parsing (white) for Arabic (1460/146 sentences) and Danish (5190/322 sentences). tions occurs when no SWAP transitions are performed, in which case the behavior of the system is identical to the simpler system Sp. This is important, because it means that the best-case complexity of the deterministic parser is still O(n) and that the we can expect to observe the best case for all sentences with projective dependency trees. The exact number of additional transitions needed to reach a terminal configuration is determined by the number of SWAP transitions. Since SWAP moves one node from Σ to B, there will be one additional SHIFT for every SWAP, which means that the total number of transitions is 2n + 2k, where k is the number of SWAP transitions. Given the condition that SWAP can only apply in a configuration c = ([σ|i, j], B, A) if 0 < i < j, the number of SWAP transitions is bounded by n(n−1) 2 , which means that 2n + n(n −1) = n + n2 is an upper bound on the number of transitions in a terminating sequence. Hence, the worst-case complexity of the deterministic parser is O(n2). The running time of a deterministic transitionbased parser using the system Su is O(n) in the best case and O(n2) in the worst case. But what about the average case? Empirical studies, based on data from a wide range of languages, have shown that dependency trees tend to be projective and that most non-projective trees only contain a small number of discontinuities (Nivre, 2006; Kuhlmann and Nivre, 2006; Havelka, 2007). This should mean that the expected number of swaps per sentence is small, and that the running time is linear on average for the range of inputs that occur in natural languages. 
This is a hypothesis that will be tested experimentally in the next section. 4 Experiments Our experiments are based on five data sets from the CoNLL-X shared task: Arabic, Czech, Danish, Slovene, and Turkish (Buchholz and Marsi, 2006). These languages have been selected because the data come from genuine dependency treebanks, whereas all the other data sets are based on some kind of conversion from another type of representation, which could potentially distort the distribution of different types of structures in the data. 4.1 Running Time In section 3.2, we hypothesized that the expected running time of a deterministic parser using the transition system Su would be linear, rather than quadratic. To test this hypothesis, we examine how the number of transitions varies as a function of sentence length. We call this the abstract running time, since it abstracts over the actual time needed to compute each oracle prediction and transition, which is normally constant but dependent on the type of classifier used. We first measured the abstract running time on the training sets, using the oracle to derive the transition sequence for every sentence, to see how many transitions are required in the ideal case. We then performed the same measurement on the test sets, using classifiers trained on the oracle transition sequences from the training sets (as described below in Section 4.2), to see whether the trained parsers deviate from the ideal case. The result for Arabic and Danish can be seen 356 Arabic Czech Danish Slovene Turkish System AS EM AS EM AS EM AS EM AS EM Su 67.1 (9.1) 11.6 82.4 (73.8) 35.3 84.2 (22.5) 26.7 75.2 (23.0) 29.9 64.9 (11.8) 21.5 Sp 67.3 (18.2) 11.6 80.9 (3.7) 31.2 84.6 (0.0) 27.0 74.2 (3.4) 29.9 65.3 (6.6) 21.0 Spp 67.2 (18.2) 11.6 82.1 (60.7) 34.0 84.7 (22.5) 28.9 74.8 (20.7) 26.9 65.5 (11.8) 20.7 Malt-06 66.7 (18.2) 11.0 78.4 (57.9) 27.4 84.8 (27.5) 26.7 70.3 (20.7) 19.7 65.7 (9.2) 19.3 MST-06 66.9 (0.0) 10.3 80.2 (61.7) 29.9 84.8 (62.5) 25.5 73.4 (26.4) 20.9 63.2 (11.8) 20.2 MSTMalt 68.6 (9.4) 11.0 82.3 (69.2) 31.2 86.7 (60.0) 29.8 75.9 (27.6) 26.6 66.3 (9.2) 18.6 Table 1: Labeled accuracy; AS = attachment score (non-projective arcs in brackets); EM = exact match. in Figure 5, where black dots represent training sentences (parsed with the oracle) and white dots represent test sentences (parsed with a classifier). For Arabic there is a very clear linear relationship in both cases with very few outliers. Fitting the data with a linear function using the least squares method gives us m = 2.06n (R2 = 0.97) for the training data and m = 2.02n (R2 = 0.98) for the test data, where m is the number of transitions in parsing a sentence of length n. For Danish, there is clearly more variation, especially for the training data, but the least-squares approximation still explains most of the variance, with m = 2.22n (R2 = 0.85) for the training data and m = 2.07n (R2 = 0.96) for the test data. For both languages, we thus see that the classifier-based parsers have a lower mean number of transitions and less variance than the oracle parsers. And in both cases, the expected number of transitions is only marginally greater than the 2n of the strictly projective transition system Sp. We have chosen to display results for Arabic and Danish because they are the two extremes in our sample. Arabic has the smallest variance and the smallest linear coefficients, and Danish has the largest variance and the largest coefficients. 
The remaining three languages all lie somewhere in the middle, with Czech being closer to Arabic and Slovene closer to Danish. Together, the evidence from all five languages strongly corroborates the hypothesis that the expected running time for the system Su is linear in sentence length for naturally occurring data. 4.2 Parsing Accuracy In order to assess the parsing accuracy that can be achieved with the new transition system, we trained a deterministic parser using the new transition system Su for each of the five languages. For comparison, we also trained two parsers using Sp, one that is strictly projective and one that uses the pseudo-projective parsing technique to recover non-projective dependencies in a post-processing step (Nivre and Nilsson, 2005). We will refer to the latter system as Spp. All systems use SVM classifiers with a polynomial kernel to approximate the oracle function, with features and parameters taken from Nivre et al. (2006), which was the best performing transition-based system in the CoNLL-X shared task.2 Table 1 shows the labeled parsing accuracy of the parsers measured in two ways: attachment score (AS) is the percentage of tokens with the correct head and dependency label; exact match (EM) is the percentage of sentences with a completely correct labeled dependency tree. The score in brackets is the attachment score for the (small) subset of tokens that are connected to their head by a non-projective arc in the gold standard parse. For comparison, the table also includes results for the two best performing systems in the original CoNLL-X shared task, Malt-06 (Nivre et al., 2006) and MST-06 (McDonald et al., 2006), as well as the integrated system MSTMalt, which is a graph-based parser guided by the predictions of a transition-based parser and currently has the best reported results on the CoNLL-X data sets (Nivre and McDonald, 2008). Looking first at the overall attachment score, we see that Su gives a substantial improvement over Sp (and outperforms Spp) for Czech and Slovene, where the scores achieved are rivaled only by the combo system MSTMalt. For these languages, there is no statistical difference between Su and MSTMalt, which are both significantly better than all the other parsers, except Spp for Czech (McNemar’s test, α = .05). This is accompanied by an improvement on non-projective arcs, where 2Complete information about experimental settings can be found at http://stp.lingfil.uu.se/∼nivre/exp/. 357 Su outperforms all other systems for Czech and is second only to the two MST parsers (MST-06 and MSTMalt) for Slovene. It is worth noting that the percentage of non-projective arcs is higher for Czech (1.9%) and Slovene (1.9%) than for any of the other languages. For the other three languages, Su has a drop in overall attachment score compared to Sp, but none of these differences is statistically significant. In fact, the only significant differences in attachment score here are the positive differences between MSTMalt and all other systems for Arabic and Danish, and the negative difference between MST-06 and all other systems for Turkish. The attachment scores for non-projective arcs are generally very low for these languages, except for the two MST parsers on Danish, but Su performs at least as well as Spp on Danish and Turkish. 
(The results for Arabic are not very meaningful, given that there are only eleven non-projective arcs in the entire test set, of which the (pseudo-)projective parsers found two and Su one, while MSTMalt and MST-06 found none at all.) Considering the exact match scores, finally, it is very interesting to see that Su almost consistently outperforms all other parsers, including the combo system MSTMalt, and sometimes by a fairly wide margin (Czech, Slovene). The difference is statistically significant with respect to all other systems except MSTMalt for Slovene, all except MSTMalt and Spp for Czech, and with respect to MSTMalt for Turkish. For Arabic and Danish, there are no significant differences in the exact match scores. We conclude that Su may increase the probability of finding a completely correct analysis, which is sometimes reflected also in the overall attachment score, and we conjecture that the strength of the positive effect is dependent on the frequency of non-projective arcs in the language. 5 Related Work Processing non-projective trees by swapping the order of words has recently been proposed by both Nivre (2008b) and Titov et al. (2009), but these systems cannot handle unrestricted non-projective trees. It is worth pointing out that, although the system described in Nivre (2008b) uses four transitions bearing the same names as the transitions of Su, the two systems are not equivalent. In particular, the system of Nivre (2008b) is sound but not complete for the class of all dependency trees. There are also affinities to the system of Attardi (2006), which combines non-adjacent nodes on the stack instead of swapping nodes and is equivalent to a restricted version of our system, where no more than two consecutive SWAP transitions are permitted. This restriction preserves linear worstcase complexity at the expense of completeness. Finally, the algorithm first described by Covington (2001) and used for data-driven parsing by Nivre (2007), is complete but has quadratic complexity even in the best case. 6 Conclusion We have presented a novel transition system for dependency parsing that can handle unrestricted non-projective trees. The system reuses standard techniques for building projective trees by combining adjacent nodes (representing subtrees with adjacent yields), but adds a simple mechanism for swapping the order of nodes on the stack, which gives a system that is sound and complete for the set of all dependency trees over a given label set but behaves exactly like the standard system for the subset of projective trees. As a result, the time complexity of deterministic parsing is O(n2) in the worst case, which is rare, but O(n) in the best case, which is common, and experimental results on data from five languages support the conclusion that expected running time is linear in the length of the sentence. Experimental results also show that parsing accuracy is competitive, especially for languages like Czech and Slovene where nonprojective dependency structures are common, and especially with respect to the exact match score, where it has the best reported results for four out of five languages. Finally, the simplicity of the system makes it very easy to implement. Future research will include an in-depth error analysis to find out why the system works better for some languages than others and why the exact match score improves even when the attachment score goes down. 
In addition, we want to explore alternative oracle functions, which try to minimize the number of swaps by allowing the stack to be temporarily “unsorted”. Acknowledgments Thanks to Johan Hall and Jens Nilsson for help with implementation and evaluation, and to Marco Kuhlmann and three anonymous reviewers for useful comments. 358 References Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of CoNLL, pages 166–170. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL, pages 149–164. Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95– 102. Carlos G´omez-Rodr´ıguez, David Weir, and John Carroll. 2009. Parsing mildly non-projective dependency structures. In Proceedings of EACL, pages 291–299. Keith Hall and Vaclav Nov´ak. 2005. Corrective modeling for non-projective dependency parsing. In Proceedings of IWPT, pages 42–52. Jiri Havelka. 2007. Beyond projectivity: Multilingual evaluation of constraints and measures on nonprojective structures. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 608–615. Richard Johansson and Pierre Nugues. 2007. Incremental dependency parsing using online learning. In Proceedings of the Shared Task of EMNLP-CoNLL, pages 1134–1138. Marco Kuhlmann and Joakim Nivre. 2006. Mildly non-projective dependency structures. In Proceedings of the COLING/ACL Main Conference Poster Sessions, pages 507–514. Marco Kuhlmann and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proceedings of EACL, pages 478–486. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proceedings of IWPT, pages 122–131. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91– 98. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP, pages 523–530. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual dependency analysis with a two-stage discriminative parser. In Proceedings of CoNLL, pages 216–220. Peter Neuhaus and Norbert Br¨oker. 1997. The complexity of recognition of linguistically adequate dependency grammars. In Proceedings of ACL/EACL, pages 337–343. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950–958. Joakim Nivre and Jens Nilsson. 2005. Pseudoprojective dependency parsing. In Proceedings of ACL, pages 99–106. Joakim Nivre, Johan Hall, and Jens Nilsson. 2004. Memory-based dependency parsing. In Proceedings of CoNLL, pages 49–56. Joakim Nivre, Johan Hall, Jens Nilsson, G¨ulsen Eryi˘git, and Svetoslav Marinov. 2006. Labeled pseudo-projective dependency parsing with support vector machines. In Proceedings of CoNLL, pages 221–225. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together (ACL), pages 50–57. Joakim Nivre. 2006. Constraints on non-projective dependency graphs. 
In Proceedings of EACL, pages 73–80. Joakim Nivre. 2007. Incremental non-projective dependency parsing. In Proceedings of NAACL HLT, pages 396–403. Joakim Nivre. 2008a. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34:513–553. Joakim Nivre. 2008b. Sorting out dependency parsing. In Proceedings of the 6th International Conference on Natural Language Processing (GoTAL), pages 16–27. Ivan Titov and James Henderson. 2007. A latent variable model for generative dependency parsing. In Proceedings of IWPT, pages 144–155. Ivan Titov, James Henderson, Paola Merlo, and Gabriele Musillo. 2009. Online graph planarization for synchronous parsing of semantic and syntactic dependencies. In Proceedings of IJCAI. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195–206. 359
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 360–368, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Semi-supervised Learning of Dependency Parsers using Generalized Expectation Criteria Gregory Druck Dept. of Computer Science University of Massachusetts Amherst, MA 01003 [email protected] Gideon Mann Google, Inc. 76 9th Ave. New York, NY 10011 [email protected] Andrew McCallum Dept. of Computer Science University of Massachusetts Amherst, MA 01003 [email protected] Abstract In this paper, we propose a novel method for semi-supervised learning of nonprojective log-linear dependency parsers using directly expressed linguistic prior knowledge (e.g. a noun’s parent is often a verb). Model parameters are estimated using a generalized expectation (GE) objective function that penalizes the mismatch between model predictions and linguistic expectation constraints. In a comparison with two prominent “unsupervised” learning methods that require indirect biasing toward the correct syntactic structure, we show that GE can attain better accuracy with as few as 20 intuitive constraints. We also present positive experimental results on longer sentences in multiple languages. 1 Introduction Early approaches to parsing assumed a grammar provided by human experts (Quirk et al., 1985). Later approaches avoided grammar writing by learning the grammar from sentences explicitly annotated with their syntactic structure (Black et al., 1992). While such supervised approaches have yielded accurate parsers (Charniak, 2001), the syntactic annotation of corpora such as the Penn Treebank is extremely costly, and consequently there are few treebanks of comparable size. As a result, there has been recent interest in unsupervised parsing. However, in order to attain reasonable accuracy, these methods have to be carefully biased towards the desired syntactic structure. This weak supervision has been encoded using priors and initializations (Klein and Manning, 2004; Smith, 2006), specialized models (Klein and Manning, 2004; Seginer, 2007; Bod, 2006), and implicit negative evidence (Smith, 2006). These indirect methods for leveraging prior knowledge can be cumbersome and unintuitive for a non-machine-learning expert. This paper proposes a method for directly guiding the learning of dependency parsers with naturally encoded linguistic insights. Generalized expectation (GE) (Mann and McCallum, 2008; Druck et al., 2008) is a recently proposed framework for incorporating prior knowledge into the learning of conditional random fields (CRFs) (Lafferty et al., 2001). GE criteria express a preference on the value of a model expectation. For example, we know that “in English, when a determiner is directly to the left of a noun, the noun is usually the parent of the determiner”. With GE we may add a term to the objective function that encourages a feature-rich CRF to match this expectation on unlabeled data, and in the process learn about related features. In this paper we use a non-projective dependency tree CRF (Smith and Smith, 2007). While a complete exploration of linguistic prior knowledge for dependency parsing is beyond the scope of this paper, we provide several promising demonstrations of the proposed method. On the English WSJ10 data set, GE training outperforms two prominent unsupervised methods using only 20 constraints either elicited from a human or provided by an “oracle” simulating a human. 
We also present experiments on longer sentences in Dutch, Spanish, and Turkish in which we obtain accuracy comparable to supervised learning with tens to hundreds of complete parsed sentences. 2 Related Work This work is closely related to the prototypedriven grammar induction method of Haghighi and Klein (2006), which uses prototype phrases to guide the EM algorithm in learning a PCFG. Direct comparison with this method is not possible because we are interested in dependency syntax rather than phrase structure syntax. However, the approach we advocate has several significant 360 advantages. GE is more general than prototypedriven learning because GE constraints can be uncertain. Additionally prototype-driven grammar induction needs to be used in conjunction with other unsupervised methods (distributional similarity and CCM (Klein and Manning, 2004)) to attain reasonable accuracy, and is only evaluated on length 10 or less sentences with no lexical information. In contrast, GE uses only the provided constraints and unparsed sentences, and is used to train a feature-rich discriminative model. Conventional semi-supervised learning requires parsed sentences. Kate and Mooney (2007) and McClosky et al. (2006) both use modified forms of self-training to bootstrap parsers from limited labeled data. Wang et al. (2008) combine a structured loss on parsed sentences with a least squares loss on unlabeled sentences. Koo et al. (2008) use a large unlabeled corpus to estimate cluster features which help the parser generalize with fewer examples. Smith and Eisner (2007) apply entropy regularization to dependency parsing. The above methods can be applied to small seed corpora, but McDonald1 has criticized such methods as working from an unrealistic premise, as a significant amount of the effort required to build a treebank comes in the first 100 sentences (both because of the time it takes to create an appropriate rubric and to train annotators). There are also a number of methods for unsupervised learning of dependency parsers. Klein and Manning (2004) use a carefully initialized and structured generative model (DMV) in conjunction with the EM algorithm to get the first positive results on unsupervised dependency parsing. As empirical evidence of the sensitivity of DMV to initialization, Smith (2006) (pg. 37) uses three different initializations, and only one, the method of Klein and Manning (2004), gives accuracy higher than 31% on the WSJ10 corpus (see Section 5). This initialization encodes the prior knowledge that long distance attachments are unlikely. Smith and Eisner (2005) develop contrastive estimation (CE), in which the model is encouraged to move probability mass away from implicit negative examples defined using a carefully chosen neighborhood function. For instance, Smith (2006) (pg. 82) uses eight different neighborhood functions to estimate parameters for the DMV model. The best performing neighborhood 1R. McDonald, personal communication, 2007 function DEL1ORTRANS1 provides accuracy of 57.6% on WSJ10 (see Section 5). Another neighborhood, DEL1ORTRANS2, provides accuracy of 51.2%. The remaining six neighborhood functions provide accuracy below 50%. This demonstrates that constructing an appropriate neighborhood function can be delicate and challenging. Smith and Eisner (2006) propose structural annealing (SA), in which a strong bias for local dependency attachments is enforced early in learning, and then gradually relaxed. This method is sensitive to the annealing schedule. 
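To give a concrete sense of the shape of this objective, a single GE term under the squared-difference score of Equation 1 is simply a penalty on the gap between the target and model expectations. The sketch below is our own illustration with hypothetical numbers, not the authors' implementation.

```python
# A single GE term with the squared-difference score of Equation 1.
# `model_expectation` stands for the constraint function's expectation
# under the current model, averaged over the unlabeled data.

def ge_term(target_expectation, model_expectation):
    """S_sq = -(G_target - G_model)^2; maximal (zero) when they match."""
    return -(target_expectation - model_expectation) ** 2

def ge_objective(terms, log_prior):
    """O(lambda) = p(lambda) + sum of GE terms, where `terms` is a list
    of (target, model) expectation pairs."""
    return log_prior + sum(ge_term(t, m) for t, m in terms)

# Hypothetical example: the "a noun to the right of a determiner is its
# parent" constraint with target 0.9, currently satisfied at 0.62.
print(ge_term(0.9, 0.62))   # approximately -0.0784; training pushes this toward 0
```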
Smith (2006) (pg. 136) use 10 annealing schedules in conjunction with three initializers. The best performing combination attains accuracy of 66.7% on WSJ10, but the worst attains accuracy of 32.5%. Finally, Seginer (2007) and Bod (2006) approach unsupervised parsing by constructing novel syntactic models. The development and tuning of the above methods constitute the encoding of prior domain knowledge about the desired syntactic structure. In contrast, our framework provides a straightforward and explicit method for incorporating prior knowledge. Ganchev et al. (2009) propose a related method that uses posterior constrained EM to learn a projective target language parser using only a source language parser and word alignments. 3 Generalized Expectation Criteria Generalized expectation criteria (Mann and McCallum, 2008; Druck et al., 2008) are terms in a parameter estimation objective function that express a preference on the value of a model expectation. Let x represent input variables (i.e. a sentence) and y represent output variables (i.e. a parse tree). A generalized expectation term G(λ) is defined by a constraint function G(y, x) that returns a non-negative real value given input and output variables, an empirical distribution ˜p(x) over input variables (i.e. unlabeled data), a model distribution pλ(y|x), and a score function S: G(λ) = S(E˜p(x)[Epλ(y|x)[G(y, x)]]). In this paper, we use a score function that is the squared difference of the model expectation of G and some target expectation ˜G: Ssq = −( ˜G −E˜p(x)[Epλ(y|x)[G(y, x)]])2 (1) We can incorporate prior knowledge into the training of pλ(y|x) by specifying the from of the constraint function G and the target expectation ˜G. 361 Importantly, G does not need to match a particular feature in the underlying model. The complete objective function2 includes multiple GE terms and a prior on parameters3, p(λ) O(λ; D) = p(λ) + X G G(λ) GE has been applied to logistic regression models (Mann and McCallum, 2007; Druck et al., 2008) and linear chain CRFs (Mann and McCallum, 2008). In the following sections we apply GE to non-projective CRF dependency parsing. 3.1 GE in General CRFs We first consider an arbitrarily structured conditional random field (Lafferty et al., 2001) pλ(y|x). We describe the CRF for non-projective dependency parsing in Section 3.2. The probability of an output y conditioned on an input x is pλ(y|x) = 1 Zx exp  X j λjFj(y, x)  , where Fj are feature functions over the cliques of the graphical model and Z(x) is a normalizing constant that ensures pλ(y|x) sums to 1. We are interested in the expectation of constraint function G(x, y) under this model. We abbreviate this model expectation as: Gλ = E˜p(x)[Epλ(y|x)[G(y, x)]] It can be shown that partial derivative of G(λ) using Ssq4 with respect to model parameter λj is ∂ ∂λj G(λ) = 2( ˜G −Gλ) (2)  E˜p(x) h Epλ(y|x) [G(y, x)Fj(y, x)] −Epλ(y|x) [G(y, x)] Epλ(y|x) [Fj(y, x)] i . Equation 2 has an intuitive interpretation. The first term (on the first line) is the difference between the model and target expectations. The second term 2In general, the objective function could also include the likelihood of available labeled data, but throughout this paper we assume we have no parsed sentences. 3Throughout this paper we use a Gaussian prior on parameters with σ2 = 10. 4In previous work, S was the KL-divergence from the target expectation. 
The partial derivative of the KL divergence score function includes the same covariance term as above but substitutes a different multiplicative term: ˜G/Gλ. (the rest of the equation) is the predicted covariance between the constraint function G and the model feature function Fj. Therefore, if the constraint is not satisfied, GE updates parameters for features that the model predicts are related to the constraint function. If there are constraint functions G for all model feature functions Fj, and the target expectations ˜G are estimated from labeled data, then the globally optimal parameter setting under the GE objective function is equivalent to the maximum likelihood solution. However, GE does not require such a one-to-one correspondence between constraint functions and model feature functions. This allows bootstrapping of feature-rich models with a small number of prior expectation constraints. 3.2 Non-Projective Dependency Tree CRFs We now define a CRF pλ(y|x) for unlabeled, nonprojective5 dependency parsing. The tree y is represented as a vector of the same length as the sentence, where yi is the index of the parent of word i. The probability of a tree y given sentence x is pλ(y|x) = 1 Zx exp  n X i=1 X j λjfj(xi, xyi, x)  , where fj are edge-factored feature functions that consider the child input (word, tag, or other feature), the parent input, and the rest of the sentence. This factorization implies that dependency decisions are independent conditioned on the input sentence x if y is a tree. Computing Zx and the edge expectations needed for partial derivatives requires summing over all possible trees for x. By relating the sum of the scores of all possible trees to counting the number of spanning trees in a graph, it can be shown that Zx is the determinant of the Kirchoff matrix K, which is constructed using the scores of possible edges. (McDonald and Satta, 2007; Smith and Smith, 2007). Computing the determinant takes O(n3) time, where n is the length of the sentence. To compute the marginal probability of a particular edge k →i (i.e. yi =k), the score of any edge k′ →i such that k′ ̸= k is set to 0. The determinant of the resulting modified Kirchoff matrix Kk→i is then the sum of the scores of all trees that include the edge k →i. The 5Note that we could instead define a CRF for projective dependency parse trees and use a variant of the inside outside algorithm for inference. We choose non-projective because it is the more general case. 362 marginal p(yi =k|x; θ) can be computed by dividing this score by Zx (McDonald and Satta, 2007). Computing all edge expectations with this algorithm takes O(n5) time. Smith and Smith (2007) describe a more efficient algorithm that can compute all edge expectations in O(n3) time using the inverse of the Kirchoff matrix K−1. 3.3 GE for Non-Projective Dependency Tree CRFs While in general constraint functions G may consider multiple edges, in this paper we use edge-factored constraint functions. In this case Epλ(y|x)[G(y, x)]Epλ(y|x)[Fj(y, x)], the second term of the covariance in Equation 2, can be computed using the edge marginal distributions pλ(yi|x). The first term of the covariance Epλ(y|x)[G(y, x)Fj(y, x)] is more difficult to compute because it requires the marginal probability of two edges pλ(yi, yi′|x). It is important to note that the model pλ is still edge-factored. 
The sum of the scores of all trees that contain edges k →i and k′ →i′ can be computed by setting the scores of edges j →i such that j ̸= k and j′ →i′ such that j′ ̸= k′ to 0, and computing the determinant of the resulting modified Kirchoff matrix Kk→i,k′→i′. There are O(n4) pairs of possible edges, and the determinant computation takes time O(n3), so this naive algorithm takes O(n7) time. An improved algorithm computes, for each possible edge k →i, a modified Kirchoff matrix Kk→i that requires the presence of that edge. Then, the method of Smith and Smith (2007) can be used to compute the probability of every possible edge conditioned on the presence of k →i, pλ(yi′ = k′|yi = k, x), using K−1 k→i. Multiplying this probability by pλ(yi=k|x) yields the desired two edge marginal. Because this algorithm pulls the O(n3) matrix operation out of the inner loop over edges, the run time is reduced to O(n5). If it were possible to perform only one O(n3) matrix operation per sentence, then the gradient computation would take only O(n4) time, the time required to consider all pairs of edges. Unfortunately, there is no straightforward generalization of the method of Smith and Smith (2007) to the two edge marginal problem. Specifically, Laplace expansion generalizes to second-order matrix minors, but it is not clear how to compute secondorder cofactors from the inverse Kirchoff matrix alone (c.f. (Smith and Smith, 2007)). Consequently, we also propose an approximation that can be used to speed up GE training at the expense of a less accurate covariance computation. We consider different cases of the edges k →i, and k′ →i′. • pλ(yi=k, yi′=k′|x)=0 when i=i′ and k̸=k′ (different parent for the same word), or when i=k′ and k=i′ (cycle), because these pairs of edges break the tree constraint. • pλ(yi=k, yi′ =k′|x)=pλ(yi=k|x) when i= i′, k=k′. • pλ(yi =k, yi′ =k′|x)≈pλ(yi =k|x)pλ(yi′ = k′|x) when i ̸= i′ and i ̸= k′ or i′ ̸= k (different words, do not create a cycle). This approximation assumes that pairs of edges that do not fall into one of the above cases are conditionally independent given x. This is not true because there are partial trees in which k →i and k′ →i′ can appear separately, but not together (for example if i = k′ and the partial tree contains i′ →k). Using this approximation, the covariance for one sentence is approximately equal to n X i Epλ(yi|x)[fj(xi, xyi, x)g(xi, xyi, x)] − n X i Epλ(yi|x)[fj(xi, xyi, x)]Epλ(yi|x)[g(xi, xyi, x)] − n X i,k pλ(yi=k|x)pλ(yk=i|x)fj(xi, xk, x)g(xk, xi, x). Intuitively, the first and second terms compute a covariance over possible parents for a single word, and the third term accounts for cycles. Computing the above takes O(n3) time, the time required to compute single edge marginals. In this paper, we use the O(n5) exact method, though we find that the accuracy attained by approximate training is usually within 5% of the exact method. If G is not edge-factored, then we need to compute a marginal over three or more edges, making exact training intractable. An appealing alternative to a similar approximation to the above would use loopy belief propagation to efficiently approximate the marginals (Smith and Eisner, 2008). In this paper g is binary and normalized by its total count in the corpus. The expectation of g is then the probability that it indicates a true edge. 363 4 Linguistic Prior Knowledge Training parsers using GE with the aid of linguists is an exciting direction for future work. 
In this paper, we use constraints derived from several basic types of linguistic knowledge. One simple form of linguistic knowledge is the set of possible parent tags for a given child tag. This type of constraint was used in the development of a rule-based dependency parser (Debusmann et al., 2004). Additional information can be obtained from small grammar fragments. Haghighi and Klein (2006) provide a list of prototype phrase structure rules that can be augmented with dependencies and used to define constraints involving parent and child tags, surrounding or interposing tags, direction, and distance. Finally there are well known hypotheses about the direction and distance of attachments that can be used to define constraints. Eisner and Smith (2005) use the fact that short attachments are more common to improve unsupervised parsing accuracy. 4.1 “Oracle” constraints For some experiments that follow we use “oracle” constraints that are estimated from labeled data. This involves choosing feature templates (motivated by the linguistic knowledge described above) and estimating target expectations. Oracle methods used in this paper consider three simple statistics of candidate constraint functions: count ˜c(g), edge count ˜cedge(g), and edge probability ˜p(edge|g). Let D be the labeled corpus. ˜c(g) = X x∈D X i X j g(xi, xj, x) ˜cedge(g) = X (x,y)∈D X i g(xi, xyi, x) ˜p(edge|g) = ˜cedge(g) ˜c(g) Constraint functions are selected according to some combination of the above statistics. In some cases we additionally prune the candidate set by considering only certain templates. To compute the target expectation, we simply use bin(˜p(edge|g)), where bin returns the closest value in the set {0, 0.1, 0.25, 0.5, 0.75, 1}. This can be viewed as specifying that g is very indicative of edge, somewhat indicative of edge, etc. 5 Experimental Comparison with Unsupervised Learning In this section we compare GE training with methods for unsupervised parsing. We use the WSJ10 corpus (as processed by Smith (2006)), which is comprised of English sentences of ten words or fewer (after stripping punctuation) from the WSJ portion of the Penn Treebank. As in previous work sentences contain only part-of-speech tags. We compare GE and supervised training of an edge-factored CRF with unsupervised learning of a DMV model (Klein and Manning, 2004) using EM and contrastive estimation (CE) (Smith and Eisner, 2005). We also report the accuracy of an attach-right baseline6. Finally, we report the accuracy of a constraint baseline that assigns a score to each possible edge that is the sum of the target expectations for all constraints on that edge. Possible edges without constraints receive a score of 0. These scores are used as input to the maximum spanning tree algorithm, which returns the best tree. Note that this is a strong baseline because it can handle uncertain constraints, and the tree constraint imposed by the MST algorithm helps information propagate across edges. We note that there are considerable differences between the DMV and CRF models. The DMV model is more expressive than the CRF because it can model the arity of a head as well as sibling relationships. Because these features consider multiple edges, including them in the CRF model would make exact inference intractable (McDonald and Satta, 2007). However, the CRF may consider the distance between head and child, whereas DMV does not model distance. The CRF also models non-projective trees, which when evaluating on English is likely a disadvantage. 
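The oracle statistics and target binning of Section 4.1 are straightforward to compute from a parsed corpus. The following is our own sketch; the corpus encoding (a root token at position 0 and gold head indices per word) and the predicate signature are assumptions made for illustration.

```python
# Oracle constraint statistics: ~c(g), ~c_edge(g), ~p(edge|g), and binning.
# `g(child_token, parent_token, sentence)` is a binary edge predicate and
# `corpus` is a list of (sentence, heads) pairs with heads[i] the gold
# parent position of word i; sentence[0] is an artificial root token.

BINS = [0.0, 0.1, 0.25, 0.5, 0.75, 1.0]

def constraint_stats(g, corpus):
    total, on_edges = 0, 0
    for sentence, heads in corpus:
        n = len(sentence)
        for i in range(1, n):                     # candidate child
            for j in range(0, n):                 # candidate parent (0 = root)
                if i != j and g(sentence[i], sentence[j], sentence):
                    total += 1                    # ~c(g)
                    if heads[i] == j:
                        on_edges += 1             # ~c_edge(g)
    p_edge = on_edges / total if total else 0.0   # ~p(edge | g)
    return total, on_edges, p_edge

def bin_target(p_edge):
    """Round the oracle edge probability to the closest allowed target."""
    return min(BINS, key=lambda b: abs(b - p_edge))
```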
Consequently, we experiment with two sets of features for the CRF model. The first, restricted set includes features that consider the head and child tags of the dependency conjoined with the direction of the attachment, (parent-POS,childPOS,direction). With this feature set, the CRF model is less expressive than DMV. The second full set includes standard features for edgefactored dependency parsers (McDonald et al., 2005), though still unlexicalized. The CRF cannot consider valency even with the full feature set, but this is balanced by the ability to use distance. 6The reported accuracies with the DMV model and the attach-right baseline are taken from (Smith, 2006). 364 feature ex. feature ex. MD →VB 1.00 NNS ←VBD 0.75 POS ←NN 0.75 PRP ←VBD 0.75 JJ ←NNS 0.75 VBD →TO 1.00 NNP ←POS 0.75 VBD →VBN 0.75 ROOT →MD 0.75 NNS ←VBP 0.75 ROOT →VBD 1.00 PRP ←VBP 0.75 ROOT →VBP 0.75 VBP →VBN 0.75 ROOT →VBZ 0.75 PRP ←VBZ 0.75 TO →VB 1.00 NN ←VBZ 0.75 VBN →IN 0.75 VBZ →VBN 0.75 Table 1: 20 constraints that give 61.3% accuracy on WSJ10. Tags are grouped according to heads, and are in the order they appear in the sentence, with the arrow pointing from head to modifier. We generate constraints in two ways. First, we use oracle constraints of the form (parentPOS,child-POS,direction) such that ˜c(g) ≥200. We choose constraints in descending order of ˜p(edge|g). The first 20 constraints selected using this method are displayed in Table 1. Although the reader can verify that the constraints in Table 1 are reasonable, we additionally experiment with human-provided constraints. We use the prototype phrase-structure constraints provided by Haghighi and Klein (2006), and with the aid of head-finding rules, extract 14 (parent-pos,child-pos,direction) constraints.7 We then estimated target expectations for these constraints using our prior knowledge, without looking at the training data. We also created a second constraint set with an additional six constraints for tag pairs that were previously underrepresented. 5.1 Results We present results varying the number of constraints in Figures 1 and 2. Figure 1 compares supervised and GE training of the CRF model, as well as the feature constraint baseline. First we note that GE training using the full feature set substantially outperforms the restricted feature set, despite the fact that the same set of constraints is used for both experiments. This result demonstrates GE’s ability to learn about related but nonconstrained features. GE training also outperforms the baseline8. We compare GE training of the CRF model 7Because the CFG rules in (Haghighi and Klein, 2006) are “flattened” and in some cases do not generate appropriate dependency constraints, we only used a subset. 8The baseline eventually matches the accuracy of the restricted CRF but this is understandable because GE’s ability to bootstrap is greatly reduced with the restricted feature set. with unsupervised learning of the DMV model in Figure 29. Despite the fact that the restricted CRF is less expressive than DMV, GE training of this model outperforms EM with 30 constraints and CE with 50 constraints. GE training of the full CRF outperforms EM with 10 constraints and CE with 20 constraints (those displayed in Table 1). GE training of the full CRF with the set of 14 constraints from (Haghighi and Klein, 2006), gives accuracy of 53.8%, which is above the interpolated oracle constraints curve (43.5% accuracy with 10 constraints, 61.3% accuracy with 20 constraints). 
With the 6 additional constraints, we obtain accuracy of 57.7% and match CE. Recall that CE, EM, and the DMV model incorporate prior knowledge indirectly, and that the reported results are heavily-tuned ideal cases (see Section 2). In contrast, GE provides a method to directly encode intuitive linguistic insights. Finally, note that structural annealing (Smith and Eisner, 2006) provides 66.7% accuracy on WSJ10 when choosing the best performing annealing schedule (Smith, 2006). As noted in Section 2 other annealing schedules provide accuracy as low as 32.5%. GE training of the full CRF attains accuracy of 67.0% with 30 constraints. 6 Experimental Comparison with Supervised Training on Long Sentences Unsupervised parsing methods are typically evaluated on short sentences, as in Section 5. In this section we show that GE can be used to train parsers for longer sentences that provide comparable accuracy to supervised training with tens to hundreds of parsed sentences. We use the standard train/test splits of the Spanish, Dutch, and Turkish data from the 2006 CoNLL Shared Task. We also use standard edge-factored feature templates (McDonald et al., 2005)10. We experiment with versions of the dat9Klein and Manning (2004) report 43.2% accuracy for DMV with EM on WSJ10. When jointly modeling constituency and dependencies, Klein and Manning (2004) report accuracy of 47.5%. Seginer (2007) and Bod (2006) propose unsupervised phrase structure parsing methods that give better unlabeled F-scores than DMV with EM, but they do not report directed dependency accuracy. 10Typical feature processing uses only supported features, or those features that occur on at least one true edge in the training data. Because we assume that the data is unlabeled, we instead use features on all possible edges. This generates tens of millions features, so we prune those features that occur fewer than 10 total times, as in (Smith and Eisner, 2007). 365 10 20 30 40 50 60 10 20 30 40 50 60 70 80 90 number of constraints accuracy constraint baseline CRF restricted supervised CRF supervised CRF restricted GE CRF GE CRF GE human Figure 1: Comparison of the constraint baseline and both GE and supervised training of the restricted and full CRF. Note that supervised training uses 5,301 parsed sentences. GE with human provided constraints closely matches the oracle results. 10 20 30 40 50 60 10 20 30 40 50 60 70 80 number of constraints accuracy attach right baseline DMV EM DMV CE CRF restricted GE CRF GE CRF GE human Figure 2: Comparison of GE training of the restricted and full CRFs with unsupervised learning of DMV. GE training of the full CRF outperforms CE with just 20 constraints. GE also matches CE with 20 human provided constraints. sets in which we remove sentences that are longer than 20 words and 60 words. For these experiments, we use an oracle constraint selection method motivated by the linguistic prior knowledge described in Section 4. The first set of constraints specify the most frequent head tag, attachment direction, and distance combinations for each child tag. Specifically, we select oracle constraints of the type (parent-CPOS,child-CPOS,direction,distance)11. We add constraints for every g such that ˜cedge(g) > 100 for max length 60 data sets, and ˜cedge(g)>10 times for max length 20 data sets. In some cases, the possible parent constraints described above will not be enough to provide high accuracy, because they do not consider other tags in the sentence (McDonald et al., 2005). 
Consequently, we experiment with adding an additional 25 sequence constraints (for what are often called “between” and “surrounding” features). The oracle feature selection method aims to choose such constraints that help to reduce uncertainty in the possible parents constraint set. Consequently, we consider sequence features gs with ˜p(edge|gs = 1) ≥0.75, and whose corresponding (parent-CPOS,child-CPOS,direction,distance) constraint g, has edge probability ˜p(edge|g) ≤ 0.25. Among these candidates, we sort by ˜c(gs =1), and select the top 25. We compare with the constraint baseline described in Section 5. Additionally, we report 11For these experiments we use coarse-grained part-ofspeech tags in constraints. the number of parsed sentences required for supervised CRF training (averaged over 5 random splits) to match the accuracy of GE training using the possible parents + sequence constraint set. The results are provided in Table 2. We first observe that GE always beats the baseline, especially on parent decisions for which there are no constraints (not reported in Table 2, but for example 53.8% vs. 20.5% on Turkish 20). Second, we note that accuracy is always improved by adding sequence constraints. Importantly, we observe that GE gives comparable performance to supervised training with tens or hundreds of parsed sentences. These parsed sentences provide a tremendous amount of information to the model, as for example in 20 Spanish length ≤60 sentences, a total of 1,630,466 features are observed, 330,856 of them unique. In contrast, the constraint-based methods are provided at most a few hundred constraints. When comparing the human costs of parsing sentences and specifying constraints, remember that parsing sentences requires the development of detailed annotation guidelines, which can be extremely time-consuming (see also the discussion is Section 2). Finally, we experiment with iteratively adding constraints. We sort constraints with ˜c(g) > 50 by ˜p(edge|g), and ensure that 50% are (parent-CPOS,child-CPOS,direction,distance) constraints and 50% are sequence constraints. For lack of space, we only show the results for Spanish 60. In Figure 3, we see that GE beats the baseline more soundly than above, and that 366 possible parent constraints + sequence constraints complete trees baseline GE baseline GE dutch 20 69.5 70.7 69.8 71.8 80-160 dutch 60 66.5 69.3 66.7 69.8 40-80 spanish 20 70.0 73.2 71.2 75.8 40-80 spanish 60 62.1 66.2 62.7 66.9 20-40 turkish 20 66.3 71.8 67.1 72.9 80-160 turkish 60 62.1 65.5 62.3 66.6 20-40 Table 2: Experiments on Dutch, Spanish, and Turkish with maximum sentence lengths of 20 and 60. Observe that GE outperforms the baseline, adding sequence constraints improves accuracy, and accuracy with GE training is comparable to supervised training with tens to hundreds of parsed sentences. parent tag true predicted det. 0.005 0.005 adv. 0.018 0.013 conj. 0.012 0.001 pron. 0.011 0.009 verb 0.355 0.405 adj. 0.067 0.075 punc. 0.031 0.013 noun 0.276 0.272 prep. 0.181 0.165 direction true predicted right 0.621 0.598 left 0.339 0.362 distance true predicted 1 0.495 0.564 2 0.194 0.206 3 0.066 0.050 4 0.042 0.037 5 0.028 0.031 6-10 0.069 0.033 > 10 0.066 0.039 feature (distance) false pos. occ. verb →punc. (>10) 1183 noun →prep. (1) 1139 adj. →prep. (1) 855 verb →verb (6-10) 756 verb →verb (>10) 569 noun ←punc. (1) 512 verb ←punc. (2) 509 prep. ←punc. (1) 476 verb →punc. (4) 427 verb →prep. 
(1) 422 Table 3: Error analysis for GE training with possible parent + sequence constraints on Spanish 60 data. On the left, the predicted and true distribution over parent coarse part-of-speech tags. In the middle, the predicted and true distributions over attachment directions and distances. On the right, common features on false positive edges. 100 200 300 400 500 600 700 800 25 30 35 40 45 50 55 60 65 70 75 number of constraints accuracy Spanish (maximum length 60) constraint baseline GE Figure 3: Comparing GE training of a CRF and constraint baseline while increasing the number of oracle constraints. adding constraints continues to increase accuracy. 7 Error Analysis In this section, we analyze the errors of the model learned with the possible parent + sequence constraints on the Spanish 60 data. In Table 3, we present four types of analysis. First, we present the predicted and true distributions over coarsegrained parent part of speech tags. We can see that verb is being predicted as a parent tag more often then it should be, while most other tags are predicted less often than they should be. Next, we show the predicted and true distributions over attachment direction and distance. From this we see that the model is often incorrectly predicting left attachments, and is predicting too many short attachments. Finally, we show the most common parent-child tag with direction and distance features that occur on false positive edges. From this table, we see that many errors concern the attachments of punctuation. The second line indicates a prepositional phrase attachment ambiguity. This analysis could also be performed by a linguist by looking at predicted trees for selected sentences. Once errors are identified, GE constraints could be added to address these problems. 8 Conclusions In this paper, we developed a novel method for the semi-supervised learning of a non-projective CRF dependency parser that directly uses linguistic prior knowledge as a training signal. It is our hope that this method will permit more effective leveraging of linguistic insight and resources and enable the construction of parsers in languages and domains where treebanks are not available. Acknowledgments We thank Ryan McDonald, Keith Hall, John Hale, Xiaoyun Wu, and David Smith for helpful discussions. This work was completed in part while Gregory Druck was an intern at Google. This work was supported in part by the Center for Intelligent Information Retrieval, The Central Intelligence Agency, the National Security Agency and National Science Foundation under NSF grant #IIS-0326249, and by the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8750-07-D-0185/0004. Any opinions, findings and conclusions or recommendations expressed in this material are the author’s and do not necessarily reflect those of the sponsor. 367 References E. Black, J. Lafferty, and S. Roukos. 1992. Development and evaluation of a broad-coverage probabilistic grammar of english language computer manuals. In ACL, pages 185– 192. Rens Bod. 2006. An all-subtrees approach to unsupervised parsing. In ACL, pages 865–872. E. Charniak. 2001. Immediate-head parsing for language models. In ACL. R. Debusmann, D. Duchier, A. Koller, M. Kuhlmann, G. Smolka, and S. Thater. 2004. A relational syntaxsemantics interface based on dependency grammar. In COLING. G. Druck, G. S. Mann, and A. McCallum. 2008. Learning from labeled features using generalized expectation criteria. In SIGIR. J. Eisner and N.A. Smith. 2005. 
Parsing with soft and hard constraints on dependency length. In IWPT. Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In ACL. A. Haghighi and D. Klein. 2006. Prototype-driven grammar induction. In COLING. R. J. Kate and R. J. Mooney. 2007. Semi-supervised learning for semantic parsing using support vector machines. In HLT-NAACL (Short Papers). D. Klein and C. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In ACL. T. Koo, X. Carreras, and M. Collins. 2008. Simple semisupervised dependency parsing. In ACL. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML. G. Mann and A. McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In ICML. G. Mann and A. McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In ACL. D. McClosky, E. Charniak, and M. Johnson. 2006. Effective self-training for parsing. In HLT-NAACL. Ryan McDonald and Giorgio Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proc. of IWPT, pages 121–132. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In ACL, pages 91–98. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman. Yoav Seginer. 2007. Fast unsupervised incremental parsing. In ACL, pages 384–391, Prague, Czech Republic. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: training log-linear models on unlabeled data. In ACL, pages 354–362. Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In COLING-ACL, pages 569–576. David A. Smith and Jason Eisner. 2007. Bootstrapping feature-rich dependency parsers with entropic priors. In EMNLP-CoNLL, pages 667–677. David A. Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In EMNLP. David A. Smith and Noah A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In EMNLPCoNLL, pages 132–140. Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text. Ph.D. thesis, Johns Hopkins University. Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for dependency parsing. In ACL, pages 532–540. 368
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 369–377, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Dependency Grammar Induction via Bitext Projection Constraints Kuzman Ganchev and Jennifer Gillenwater and Ben Taskar Department of Computer and Information Science University of Pennsylvania, Philadelphia PA, USA {kuzman,jengi,taskar}@seas.upenn.edu Abstract Broad-coverage annotated treebanks necessary to train parsers do not exist for many resource-poor languages. The wide availability of parallel text and accurate parsers in English has opened up the possibility of grammar induction through partial transfer across bitext. We consider generative and discriminative models for dependency grammar induction that use word-level alignments and a source language parser (English) to constrain the space of possible target trees. Unlike previous approaches, our framework does not require full projected parses, allowing partial, approximate transfer through linear expectation constraints on the space of distributions over trees. We consider several types of constraints that range from generic dependency conservation to language-specific annotation rules for auxiliary verb analysis. We evaluate our approach on Bulgarian and Spanish CoNLL shared task data and show that we consistently outperform unsupervised methods and can outperform supervised learning for limited training data. 1 Introduction For English and a handful of other languages, there are large, well-annotated corpora with a variety of linguistic information ranging from named entity to discourse structure. Unfortunately, for the vast majority of languages very few linguistic resources are available. This situation is likely to persist because of the expense of creating annotated corpora that require linguistic expertise (Abeillé, 2003). On the other hand, parallel corpora between many resource-poor languages and resource-rich languages are ample, motivating recent interest in transferring linguistic resources from one language to another via parallel text. For example, several early works (Yarowsky and Ngai, 2001; Yarowsky et al., 2001; Merlo et al., 2002) demonstrate transfer of shallow processing tools such as part-of-speech taggers and noun-phrase chunkers by using word-level alignment models (Brown et al., 1994; Och and Ney, 2000). Alshawi et al. (2000) and Hwa et al. (2005) explore transfer of deeper syntactic structure: dependency grammars. Dependency and constituency grammar formalisms have long coexisted and competed in linguistics, especially beyond English (Mel’ˇcuk, 1988). Recently, dependency parsing has gained popularity as a simpler, computationally more efficient alternative to constituency parsing and has spurred several supervised learning approaches (Eisner, 1996; Yamada and Matsumoto, 2003a; Nivre and Nilsson, 2005; McDonald et al., 2005) as well as unsupervised induction (Klein and Manning, 2004; Smith and Eisner, 2006). Dependency representation has been used for language modeling, textual entailment and machine translation (Haghighi et al., 2005; Chelba et al., 1997; Quirk et al., 2005; Shen et al., 2008), to name a few tasks. Dependency grammars are arguably more robust to transfer since syntactic relations between aligned words of parallel sentences are better conserved in translation than phrase structure (Fox, 2002; Hwa et al., 2005). 
Nevertheless, several challenges to accurate training and evaluation from aligned bitext remain: (1) partial word alignment due to non-literal or distant translation; (2) errors in word alignments and source language parses, (3) grammatical annotation choices that differ across languages and linguistic theories (e.g., how to analyze auxiliary verbs, conjunctions). In this paper, we present a flexible learning 369 framework for transferring dependency grammars via bitext using the posterior regularization framework (Graça et al., 2008). In particular, we address challenges (1) and (2) by avoiding commitment to an entire projected parse tree in the target language during training. Instead, we explore formulations of both generative and discriminative probabilistic models where projected syntactic relations are constrained to hold approximately and only in expectation. Finally, we address challenge (3) by introducing a very small number of language-specific constraints that disambiguate arbitrary annotation choices. We evaluate our approach by transferring from an English parser trained on the Penn treebank to Bulgarian and Spanish. We evaluate our results on the Bulgarian and Spanish corpora from the CoNLL X shared task. We see that our transfer approach consistently outperforms unsupervised methods and, given just a few (2 to 7) languagespecific constraints, performs comparably to a supervised parser trained on a very limited corpus (30 - 140 training sentences). 2 Approach At a high level our approach is illustrated in Figure 1(a). A parallel corpus is word-level aligned using an alignment toolkit (Graça et al., 2009) and the source (English) is parsed using a dependency parser (McDonald et al., 2005). Figure 1(b) shows an aligned sentence pair example where dependencies are perfectly conserved across the alignment. An edge from English parent p to child c is called conserved if word p aligns to word p′ in the second language, c aligns to c′ in the second language, and p′ is the parent of c′. Note that we are not restricting ourselves to one-to-one alignments here; p, c, p′, and c′ can all also align to other words. After filtering to identify well-behaved sentences and high confidence projected dependencies, we learn a probabilistic parsing model using the posterior regularization framework (Graça et al., 2008). We estimate both generative and discriminative models by constraining the posterior distribution over possible target parses to approximately respect projected dependencies and other rules which we describe below. In our experiments we evaluate the learned models on dependency treebanks (Nivre et al., 2007). Unfortunately the sentence in Figure 1(b) is highly unusual in its amount of dependency conservation. To get a feel for the typical case, we used off-the-shelf parsers (McDonald et al., 2005) for English, Spanish and Bulgarian on two bitexts (Koehn, 2005; Tiedemann, 2007) and compared several measures of dependency conservation. For the English-Bulgarian corpus, we observed that 71.9% of the edges we projected were edges in the corpus, and we projected on average 2.7 edges per sentence (out of 5.3 tokens on average). For Spanish, we saw conservation of 64.4% and an average of 5.9 projected edges per sentence (out of 11.5 tokens on average). As these numbers illustrate, directly transferring information one dependency edge at a time is unfortunately error prone for two reasons. First, parser and word alignment errors cause much of the transferred information to be wrong. 
We deal with this problem by constraining groups of edges rather than a single edge. For example, in some sentence pair we might find 10 edges that have both end points aligned and can be transferred. Rather than requiring our target language parse to contain each of the 10 edges, we require that the expected number of edges from this set is at least 10η, where η is a strength parameter. This gives the parser freedom to have some uncertainty about which edges to include, or alternatively to choose to exclude some of the transferred edges. A more serious problem for transferring parse information across languages is structural differences and grammar annotation choices between the two languages, for example dealing with auxiliary verbs and reflexive constructions. Hwa et al. (2005) also note these problems and solve them by introducing dozens of rules to transform the transferred parse trees. We discuss these differences in detail in the experimental section and use our framework to introduce a very small number of rules to cover the most common structural differences.

Figure 1: (a) Overview of our grammar induction approach via bitext: the source (English) is parsed and word-aligned with target; after filtering, projected dependencies define constraints over target parse tree space, providing weak supervision for learning a target grammar. (b) An example word-aligned sentence pair with perfectly projected dependencies.

3 Parsing Models

We explored two parsing models: a generative model used by several authors for unsupervised induction and a discriminative model used for fully supervised training. The discriminative parser is based on the edge-factored model and features of the MSTParser (McDonald et al., 2005). The parsing model defines a conditional distribution $p_\theta(\mathbf{z} \mid x)$ over each projective parse tree $\mathbf{z}$ for a particular sentence x, parameterized by a vector θ. The probability of any particular parse is

$p_\theta(\mathbf{z} \mid x) \propto \prod_{z \in \mathbf{z}} e^{\theta \cdot \phi(z, x)},$ (1)

where z is a directed edge contained in the parse tree $\mathbf{z}$ and φ is a feature function. In the fully supervised experiments we run for comparison, parameter estimation is performed by stochastic gradient ascent on the conditional likelihood function, similar to maximum entropy models or conditional random fields. One needs to be able to compute expectations of the features φ(z, x) under the distribution $p_\theta(\mathbf{z} \mid x)$. A version of the inside-outside algorithm (Lee and Choi, 1997) performs this computation. Viterbi decoding is done using Eisner's algorithm (Eisner, 1996). We also used a generative model based on the dependency model with valence (Klein and Manning, 2004). Under this model, the probability of a particular parse $\mathbf{z}$ and a sentence with part of speech tags $\mathbf{x}$ is given by

$p_\theta(\mathbf{z}, x) = p_{root}(r(x)) \cdot \Big( \prod_{z \in \mathbf{z}} p_{\neg stop}(z_p, z_d, v_z)\, p_{child}(z_p, z_d, z_c) \Big) \cdot \Big( \prod_{x \in \mathbf{x}} p_{stop}(x, \mathrm{left}, v_l)\, p_{stop}(x, \mathrm{right}, v_r) \Big)$ (2)

where r(x) is the part of speech tag of the root of the parse tree $\mathbf{z}$, z is an edge from parent $z_p$ to child $z_c$ in direction $z_d$, either left or right, and $v_z$ indicates valency—false if $z_p$ has no other children further from it in direction $z_d$ than $z_c$, true otherwise. The valencies $v_r$/$v_l$ are marked as true if x has any children on the left/right in $\mathbf{z}$, false otherwise.

4 Posterior Regularization

Graça et al. (2008) introduce an estimation framework that incorporates side-information into unsupervised problems in the form of linear constraints on posterior expectations.
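To make the notion of projected and conserved edges concrete before the constraints are formalized below, here is a small sketch that projects source-side dependencies through a word alignment and measures what proportion of the projected edges a candidate target parse retains. The data encoding (head arrays indexed from 1 with 0 for the root, alignments as sets of token-index pairs) and the toy sentence are our own illustrative choices, not the paper's implementation.

```python
# Sketch: projecting dependencies through a word alignment and measuring the
# proportion of projected ("conserved") edges that a candidate target parse
# keeps.  Heads are 1-based with 0 denoting the root; the alignment is a set
# of (source_token, target_token) pairs and may be many-to-many.

def projected_edges(source_heads, alignment):
    """C_x: directed target-side edges (head, child) obtained by projecting
    every source edge whose two endpoints are both aligned."""
    edges = set()
    for child_s, head_s in enumerate(source_heads, start=1):
        if head_s == 0:
            continue  # do not project the root attachment
        for hs, ht in alignment:
            for cs, ct in alignment:
                if hs == head_s and cs == child_s:
                    edges.add((ht, ct))
    return edges

def conserved_proportion(target_heads, projected):
    """f(x, z): fraction of projected edges that appear in the target parse."""
    if not projected:
        return 0.0
    present = sum(1 for (h, c) in projected if target_heads[c - 1] == h)
    return present / len(projected)

if __name__ == "__main__":
    # Source (English): "John saw Mary" -> John and Mary both attach to "saw".
    source_heads = [2, 0, 2]
    # Toy target with the same word order, aligned one-to-one.
    alignment = {(1, 1), (2, 2), (3, 3)}
    C = projected_edges(source_heads, alignment)
    good_parse = [2, 0, 2]
    bad_parse = [0, 1, 1]
    print(conserved_proportion(good_parse, C))   # 1.0
    print(conserved_proportion(bad_parse, C))    # 0.0
```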
In grammar transfer, our basic constraint is of the form: the expected proportion of conserved edges in a sentence pair is at least η (the exact proportion we used was 0.9, which was determined using unlabeled data as described in Section 5). Specifically, let $C_x$ be the set of directed edges projected from English for a given sentence x; then, given a parse $\mathbf{z}$, the proportion of conserved edges is

$f(x, \mathbf{z}) = \frac{1}{|C_x|} \sum_{z \in \mathbf{z}} \mathbf{1}(z \in C_x)$

and the expected proportion of conserved edges under distribution $p(\mathbf{z} \mid x)$ is

$E_p[f(x, \mathbf{z})] = \frac{1}{|C_x|} \sum_{z \in C_x} p(z \mid x).$

The posterior regularization framework (Graça et al., 2008) was originally defined for generative unsupervised learning. The standard objective is to minimize the negative marginal log-likelihood of the data, $\widehat{E}[-\log p_\theta(x)] = \widehat{E}[-\log \sum_{\mathbf{z}} p_\theta(\mathbf{z}, x)]$, over the parameters θ (we use $\widehat{E}$ to denote expectation over the sample sentences x). We typically also add a standard regularization term on θ, resulting from a parameter prior $-\log p(\theta) = R(\theta)$, where p(θ) is Gaussian for the MSTParser models and Dirichlet for the valence model. To introduce supervision into the model, we define a set $Q_x$ of distributions over the hidden variables $\mathbf{z}$ satisfying the desired posterior constraints in terms of linear equalities or inequalities on feature expectations (we use inequalities in this paper):

$Q_x = \{q(\mathbf{z}) : E[f(x, \mathbf{z})] \le b\}.$

Basic Uni-gram Features: xi-word, xi-pos; xi-word; xi-pos; xj-word, xj-pos; xj-word; xj-pos
Basic Bi-gram Features: xi-word, xi-pos, xj-word, xj-pos; xi-pos, xj-word, xj-pos; xi-word, xj-word, xj-pos; xi-word, xi-pos, xj-pos; xi-word, xi-pos, xj-word; xi-word, xj-word; xi-pos, xj-pos
In Between POS Features: xi-pos, b-pos, xj-pos
Surrounding Word POS Features: xi-pos, xi-pos+1, xj-pos-1, xj-pos; xi-pos-1, xi-pos, xj-pos-1, xj-pos; xi-pos, xi-pos+1, xj-pos, xj-pos+1; xi-pos-1, xi-pos, xj-pos, xj-pos+1

Table 1: Features used by the MSTParser. For each edge (i, j), xi-word is the parent word and xj-word is the child word, analogously for POS tags. The +1 and -1 denote preceding and following tokens in the sentence, while b denotes tokens between xi and xj.
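Read as code, each row of Table 1 is a template instantiated for a candidate edge (i, j) of a tagged sentence. The sketch below enumerates these features for one edge; the string encoding of the feature names and the NULL padding at sentence boundaries are our own choices, not MSTParser's internal representation.

```python
# Sketch: instantiating the MSTParser feature templates of Table 1 for a
# candidate edge (i, j) over a POS-tagged sentence (0-based indices).

def edge_features(words, tags, i, j):
    """i = parent index, j = child index."""
    def tag(k):
        return tags[k] if 0 <= k < len(tags) else "NULL"

    feats = []
    # Basic uni-gram features
    feats += [f"p-w-t:{words[i]}_{tags[i]}", f"p-w:{words[i]}", f"p-t:{tags[i]}",
              f"c-w-t:{words[j]}_{tags[j]}", f"c-w:{words[j]}", f"c-t:{tags[j]}"]
    # Basic bi-gram features
    feats += [f"pwpt-cwct:{words[i]}_{tags[i]}_{words[j]}_{tags[j]}",
              f"pt-cwct:{tags[i]}_{words[j]}_{tags[j]}",
              f"pw-cwct:{words[i]}_{words[j]}_{tags[j]}",
              f"pwpt-ct:{words[i]}_{tags[i]}_{tags[j]}",
              f"pwpt-cw:{words[i]}_{tags[i]}_{words[j]}",
              f"pw-cw:{words[i]}_{words[j]}",
              f"pt-ct:{tags[i]}_{tags[j]}"]
    # In-between POS features: one feature per token between parent and child
    lo, hi = sorted((i, j))
    for b in range(lo + 1, hi):
        feats.append(f"between:{tags[i]}_{tags[b]}_{tags[j]}")
    # Surrounding word POS features
    feats += [f"surr:{tags[i]}_{tag(i+1)}_{tag(j-1)}_{tags[j]}",
              f"surr:{tag(i-1)}_{tags[i]}_{tag(j-1)}_{tags[j]}",
              f"surr:{tags[i]}_{tag(i+1)}_{tags[j]}_{tag(j+1)}",
              f"surr:{tag(i-1)}_{tags[i]}_{tags[j]}_{tag(j+1)}"]
    return feats

if __name__ == "__main__":
    words = ["Ms.", "Haag", "plays", "Elianti"]
    tags = ["NNP", "NNP", "VBZ", "NNP"]
    for f in edge_features(words, tags, i=2, j=3):   # plays -> Elianti
        print(f)
```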
The new posteriors q(z) are used to compute sufficient statistics for this instance and hence to update the model’s parameters in the M-step for either the generative or discriminative setting. The optimization problem in Equation 3 can be efficiently solved in its dual formulation: arg min λ≥0 b⊤λ+log X z pθ(z | x) exp {−λ⊤f(x, z)}. (4) Given λ, the primal solution is given by: q(z) = pθ(z | x) exp{−λ⊤f(x, z)}/Z, where Z is a normalization constant. There is one dual variable per expectation constraint, and we can optimize them by projected gradient descent, similar to log-linear model estimation. The gradient with respect to λ is given by: b −Eq[f(x, z)], so it involves computing expectations under the distribution q(z). This remains tractable as long as features factor by edge, f(x, z) = P z∈z f(x, z), because that ensures that q(z) will have the same form as pθ(z | x). Furthermore, since the constraints are per instance, we can use incremental or online version of EM (Neal and Hinton, 1998), where we update parameters θ after posterior-constrained E-step on each instance x. 5 Experiments We conducted experiments on two languages: Bulgarian and Spanish, using each of the parsing models. The Bulgarian experiments transfer a parser from English to Bulgarian, using the OpenSubtitles corpus (Tiedemann, 2007). The Spanish experiments transfer from English to Spanish using the Spanish portion of the Europarl corpus (Koehn, 2005). For both corpora, we performed word alignments with the open source PostCAT (Graça et al., 2009) toolkit. We used the Tokyo tagger (Tsuruoka and Tsujii, 2005) to POS tag the English tokens, and generated parses using the first-order model of McDonald et al. (2005) with projective decoding, trained on sections 2-21 of the Penn treebank with dependencies extracted using the head rules of Yamada and Matsumoto (2003b). For Bulgarian we trained the Stanford POS tagger (Toutanova et al., 2003) on the Bul372 Discriminative model Generative model Bulgarian Spanish Bulgarian Spanish no rules 2 rules 7 rules no rules 3 rules no rules 2 rules 7 rules no rules 3 rules Baseline 63.8 72.1 72.6 67.6 69.0 66.5 69.1 71.0 68.2 71.3 Post.Reg. 66.9 77.5 78.3 70.6 72.3 67.8 70.7 70.8 69.5 72.8 Table 2: Comparison between transferring a single tree of edges and transferring all possible projected edges. The transfer models were trained on 10k sentences of length up to 20, all models tested on CoNLL train sentences of up to 10 words. Punctuation was stripped at train time. gtreebank corpus from CoNLL X. The Spanish Europarl data was POS tagged with the FreeLing language analyzer (Atserias et al., 2006). The discriminative model used the same features as MSTParser, summarized in Table 1. In order to evaluate our method, we a baseline inspired by Hwa et al. (2005). The baseline constructs a full parse tree from the incomplete and possibly conflicting transferred edges using a simple random process. We start with no edges and try to add edges one at a time verifying at each step that it is possible to complete the tree. We first try to add the transferred edges in random order, then for each orphan node we try all possible parents (both in random order). We then use this full labeling as supervision for a parser. Note that this baseline is very similar to the first iteration of our model, since for a large corpus the different random choices made in different sentences tend to smooth each other out. 
We also tried to create rules for the adoption of orphans, but the simple rules we tried added bias and performed worse than the baseline we report. Table 2 shows attachment accuracy of our method and the baseline for both language pairs under several conditions. By attachment accuracy we mean the fraction of words assigned the correct parent. The experimental details are described in this section. Link-left baselines for these corpora are much lower: 33.8% and 27.9% for Bulgarian and Spanish respectively. 5.1 Preprocessing Preliminary experiments showed that our word alignments were not always appropriate for syntactic transfer, even when they were correct for translation. For example, the English “bike/V” could be translated in French as “aller/V en vélo/N”, where the word “bike” would be aligned with “vélo”. While this captures some of the semantic shared information in the two languages, we have no expectation that the noun “vélo” will have a similar syntactic behavior to the verb “bike”. To prevent such false transfer, we filter out alignments between incompatible POS tags. In both language pairs, filtering out noun-verb alignments gave the biggest improvement. Both corpora also contain sentence fragments, either because of question responses or fragmented speech in movie subtitles or because of voting announcements and similar formulaic sentences in the parliamentary proceedings. We overcome this problem by filtering out sentences that do not have a verb as the English root or for which the English root is not aligned to a verb in the target language. For the subtitles corpus we also remove sentences that end in an ellipsis or contain more than one comma. Finally, following (Klein and Manning, 2004) we strip out punctuation from the sentences. For the discriminative model this did not affect results significantly but improved them slightly in most cases. We found that the generative model gets confused by punctuation and tends to predict that periods at the end of sentences are the parents of words in the sentence. Our basic model uses constraints of the form: the expected proportion of conserved edges in a sentence pair is at least η = 90%.1 5.2 No Language-Specific Rules We call the generic model described above “norules” to distinguish it from the language-specific constraints we introduce in the sequel. The no rules columns of Table 2 summarize the performance in this basic setting. Discriminative models outperform the generative models in the majority of cases. The left panel of Table 3 shows the most common errors by child POS tag, as well as by true parent and guessed parent POS tag. Figure 2 shows that the discriminative model continues to improve with more transfer-type data 1We chose η in the following way: we split the unlabeled parallel text into two portions. We trained a models with different η on one portion and ran it on the other portion. We chose the model with the highest fraction of conserved constraints on the second portion. 373 0.52 0.54 0.56 0.58 0.6 0.62 0.64 0.66 0.68 0.1 1 10 accuracy (%) training data size (thousands of sentences) our method baseline Figure 2: Learning curve of the discriminative no-rules transfer model on Bulgarian bitext, testing on CoNLL train sentences of up to 10 words. Figure 3: A Spanish example where an auxiliary verb dominates the main verb. up to at least 40 thousand sentences. 
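Before turning to annotation-guideline constraints, the per-sentence projection step of Equation 3 can be illustrated with a toy example in which the candidate parses are enumerated explicitly. The real system never enumerates parses (it works with edge-factored distributions and inside-outside computations), and the numbers below are invented, so this should be read only as a sketch of the projection idea.

```python
import math

# Toy projection step (Equation 3): project the model posterior p(z | x),
# here given explicitly over a handful of candidate parses, onto the set
# {q : E_q[f] >= eta}, where f is the proportion of conserved edges.
# The dual has a single multiplier; its sign is flipped relative to
# Equation 4 because our constraint is a lower bound on E_q[f].

def project(p, f, eta, max_lambda=50.0, iters=60):
    """Return q(z) proportional to p(z) * exp(lambda * f(z)) with the smallest
    lambda >= 0 such that E_q[f] >= eta (lambda = 0 if p already satisfies the
    constraint; lambda = max_lambda if it cannot be met)."""
    def q_and_expectation(lam):
        weights = [pi * math.exp(lam * fi) for pi, fi in zip(p, f)]
        total = sum(weights)
        q = [w / total for w in weights]
        return q, sum(qi * fi for qi, fi in zip(q, f))

    q, e = q_and_expectation(0.0)
    if e >= eta:
        return q
    lo, hi = 0.0, max_lambda
    for _ in range(iters):                 # bisection: E_q[f] is monotone in lambda
        mid = (lo + hi) / 2
        q, e = q_and_expectation(mid)
        if e >= eta:
            hi = mid
        else:
            lo = mid
    return q_and_expectation(hi)[0]

if __name__ == "__main__":
    # Three candidate parses with posterior p and conserved-edge proportion f.
    p = [0.6, 0.3, 0.1]
    f = [0.2, 0.8, 1.0]
    q = project(p, f, eta=0.9)
    print([round(qi, 3) for qi in q])      # mass shifts toward high-f parses
```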
5.3 Annotation guidelines and constraints Using the straightforward approach outlined above is a dramatic improvement over the standard link-left baseline (and the unsupervised generative model as we discuss below), however it doesn’t have any information about the annotation guidelines used for the testing corpus. For example, the Bulgarian corpus has an unusual treatment of nonfinite clauses. Figure 4 shows an example. We see that the “da” is the parent of both the verb and its object, which is different than the treatment in the English corpus. We propose to deal with these annotation dissimilarities by creating very simple rules. For Spanish, we have three rules. The first rule sets main verbs to dominate auxiliary verbs. Specifically, whenever an auxiliary precedes a main verb the main verb becomes its parent and adopts its children; if there is only one main verb it becomes the root of the sentence; main verbs also become Figure 4: An example where transfer fails because of different handling of reflexives and nonfinite clauses. The alignment links provide correct glosses for Bulgarian words. “Bh” is a past tense marker while “se” is a reflexive marker. parents of pronouns, adverbs, and common nouns that directly preceed auxiliary verbs. By adopting children we mean that we change the parent of transferred edges to be the adopting node. The second Spanish rule states that the first element of an adjective-noun or noun-adjective pair dominates the second; the first element also adopts the children of the second element. The third and final Spanish rule sets all prepositions to be children of the first main verb in the sentence, unless the preposition is a “de” located between two noun phrases. In this later case, we set the closest noun in the first of the two noun phrases as the preposition’s parent. For Bulgarian the first rule is that “da” should dominate all words until the next verb and adopt their noun, preposition, particle and adverb children. The second rule is that auxiliary verbs should dominate main verbs and adopt their children. We have a list of 12 Bulgarian auxiliary verbs. The “seven rules” experiments add rules for 5 more words similar to the rule for “da”, specifically “qe”, “li”, “kakvo”, “ne”, “za”. Table 3 compares the errors for different linguistic rules. When we train using the “da” rule and the rules for auxiliary verbs, the model learns that main verbs attach to auxiliary verbs and that “da” dominates its nonfinite clause. This causes an improvement in the attachment of verbs, and also drastically reduces words being attached to verbs instead of particles. The latter is expected because “da” is analyzed as a particle in the Bulgarian POS tagset. We see an improvement in root/verb confusions since “da” is sometimes errenously attached to a the following verb rather than being the root of the sentence. The rightmost panel of Table 3 shows similar analysis when we also use the rules for the five other closed-class words. We see an improvement in attachments in all categories, but no qualitative change is visible. The reason for this is probably that these words are relatively rare, but by encouraging the model to add an edge, it also rules out incorrect edges that would cross it. Consequently we are seeing improvements not only directly from the constraints we enforce but also indirectly as types of edges that tend to get ruled out. 
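As an illustration of how such annotation rules can be realized, the following sketch implements a simplified version of the first Spanish rule: a main verb becomes the parent of an immediately preceding auxiliary and adopts the auxiliary's children. The coarse tag names, the edge-set representation, and the restriction to adjacent auxiliary–main pairs (the full rule also covers roots and preceding pronouns, adverbs, and common nouns) are our own simplifications of the rule as described.

```python
# Simplified sketch of the first Spanish rule from Section 5.3: when an
# auxiliary verb immediately precedes a main verb, make the main verb the
# auxiliary's head and let it "adopt" the auxiliary's children, i.e. re-point
# projected edges whose head was the auxiliary.

def apply_aux_rule(tags, edges):
    """tags: coarse POS per token (1-based positions); edges: set of projected
    (head, child) pairs.  Returns a rewritten edge set."""
    edges = set(edges)
    for pos in range(1, len(tags)):
        if tags[pos - 1] == "aux-verb" and tags[pos] == "main-verb":
            aux, main = pos, pos + 1
            rewritten = set()
            for head, child in edges:
                if child == aux:
                    continue                      # drop old attachments of the auxiliary
                if head == aux and child != main:
                    rewritten.add((main, child))  # main verb adopts the children
                else:
                    rewritten.add((head, child))
            rewritten.add((main, aux))            # main verb dominates the auxiliary
            edges = rewritten
    return edges

if __name__ == "__main__":
    #            1          2           3            4
    tags = ["pronoun", "aux-verb", "main-verb", "noun"]
    projected = {(2, 1), (2, 4)}      # both originally attached to the auxiliary
    print(sorted(apply_aux_rule(tags, projected)))
    # -> [(3, 1), (3, 2), (3, 4)]
```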
5.4 Generative parser The generative model we use is a state of the art model for unsupervised parsing and is our only 374 No Rules Two Rules Seven Rules child POS parent POS acc(%) errors errors V 65.2 2237 T/V 2175 N 73.8 1938 V/V 1305 P 58.5 1705 N/V 1112 R 70.3 961 root/V 555 child POS parent POS acc(%) errors errors N 78.7 1572 N/V 938 P 70.2 1224 V/V 734 V 84.4 1002 V/N 529 R 79.3 670 N/N 376 child POS parent POS acc(%) errors errors N 79.3 1532 N/V 1116 P 75.7 998 V/V 560 R 69.3 993 V/N 507 V 86.2 889 N/N 450 Table 3: Top 4 discriminative parser errors by child POS tag and true/guess parent POS tag in the Bulgarian CoNLL train data of length up to 10. Training with no language-specific rules (left); two rules (center); and seven rules (right). POS meanings: V verb, N noun, P pronoun, R preposition, T particle. Accuracies are by child or parent truth/guess POS tag. 0.6 0.65 0.7 0.75 20 40 60 80 100 120 140 accuracy (%) supervised training data size supervised no rules two rules seven rules 0.65 0.7 0.75 0.8 20 40 60 80 100 120 140 accuracy (%) supervised training data size supervised no rules three rules 0.65 0.7 0.75 0.8 20 40 60 80 100 120 140 accuracy (%) supervised training data size supervised no rules two rules seven rules 0.65 0.7 0.75 0.8 20 40 60 80 100 120 140 accuracy (%) supervised training data size supervised no rules three rules Figure 5: Comparison to parsers with supervised estimation and transfer. Top: Generative. Bottom: Discriminative. Left: Bulgarian. Right: Spanish. The transfer models were trained on 10k sentences all of length at most 20, all models tested on CoNLL train sentences of up to 10 words. The x-axis shows the number of examples used to train the supervised model. Boxes show first and third quartile, whiskers extend to max and min, with the line passing through the median. Supervised experiments used 30 random samples from CoNLL train. fully unsupervised baseline. As smoothing we add a very small backoff probability of 4.5 × 10−5 to each learned paramter. Unfortunately, we found generative model performance was disappointing overall. The maximum unsupervised accuracy it achieved on the Bulgarian data is 47.6% with initialization from Klein and Manning (2004) and this result is not stable. Changing the initialization parameters, training sample, or maximum sentence length used for training drastically affected the results, even for samples with several thousand sentences. When we use the transferred information to constrain the learning, EM stabilizes and achieves much better performance. Even setting all parameters equal at the outset does not prevent the model from learning the dependency structure of the aligned language. The top panels in Figure 5 show the results in this setting. We see that performance is still always below the accuracy achieved by supervised training on 20 annotated sentences. However, the improvement in stability makes the algorithm much more usable. As we shall see below, the discriminative parser performs even better than the generative model. 5.5 Discriminative parser We trained our discriminative parser for 100 iterations of online EM with a Gaussian prior variance of 100. Results for the discriminative parser are shown in the bottom panels of Figure 5. The supervised experiments are given to provide context for the accuracies. 
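The overall training procedure can likewise be sketched in miniature: an online, posterior-constrained E-step followed by a parameter update, in the spirit of the incremental EM mentioned in Section 4. Everything below (the enumerated candidate parses, feature vectors, learning rate, grid search over the dual multiplier, and the omission of the regularizer R(θ)) is an invented toy standing in for the edge-factored dynamic programs the parser actually uses.

```python
import math

ETA = 0.9            # required expected proportion of conserved edges
LEARNING_RATE = 0.5

# One "sentence": three candidate parses, each with a tiny feature vector phi
# and its conserved-edge proportion f.
CANDIDATES = [
    {"phi": [1.0, 0.0], "f": 0.2},
    {"phi": [0.0, 1.0], "f": 0.8},
    {"phi": [1.0, 1.0], "f": 1.0},
]

def posterior(theta, cands):
    scores = [math.exp(sum(t * p for t, p in zip(theta, c["phi"]))) for c in cands]
    total = sum(scores)
    return [s / total for s in scores]

def project(p, cands, eta, grid=(0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0)):
    """Crude constrained E-step: smallest lambda on a grid such that
    q(z) ~ p(z) * exp(lambda * f(z)) satisfies E_q[f] >= eta."""
    q = p
    for lam in grid:
        w = [pi * math.exp(lam * c["f"]) for pi, c in zip(p, cands)]
        q = [wi / sum(w) for wi in w]
        if sum(qi * c["f"] for qi, c in zip(q, cands)) >= eta:
            return q
    return q  # constraint unreachable: return the most tilted q on the grid

def expected_features(dist, cands):
    dim = len(cands[0]["phi"])
    return [sum(d * c["phi"][k] for d, c in zip(dist, cands)) for k in range(dim)]

theta = [0.0, 0.0]
for step in range(50):
    p = posterior(theta, CANDIDATES)
    q = project(p, CANDIDATES, ETA)                     # constrained E-step
    e_q = expected_features(q, CANDIDATES)
    e_p = expected_features(p, CANDIDATES)
    # M-step-like update: move p_theta toward the projected posterior q.
    theta = [t + LEARNING_RATE * (a - b) for t, a, b in zip(theta, e_q, e_p)]

print([round(t, 2) for t in theta])
print([round(pi, 2) for pi in posterior(theta, CANDIDATES)])
```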
For Bulgarian, we see that without any hints about the annotation guidelines, the transfer system performs better than an unsu375 pervised parser, comparable to a supervised parser trained on 10 sentences. However, if we specify just the two rules for “da” and verb conjugations performance jumps to that of training on 6070 fully labeled sentences. If we have just a little more prior knowledge about how closed-class words are handled, performance jumps above 140 fully labeled sentence equivalent. We observed another desirable property of the discriminative model. While the generative model can get confused and perform poorly when the training data contains very long sentences, the discriminative parser does not appear to have this drawback. In fact we observed that as the maximum training sentence length increased, the parsing performance also improved. 6 Related Work Our work most closely relates to Hwa et al. (2005), who proposed to learn generative dependency grammars using Collins’ parser (Collins, 1999) by constructing full target parses via projected dependencies and completion/transformation rules. Hwa et al. (2005) found that transferring dependencies directly was not sufficient to get a parser with reasonable performance, even when both the source language parses and the word alignments are performed by hand. They adjusted for this by introducing on the order of one or two dozen language-specific transformation rules to complete target parses for unaligned words and to account for diverging annotation rules. Transferring from English to Spanish in this way, they achieve 72.1% and transferring to Chinese they achieve 53.9%. Our learning method is very closely related to the work of (Mann and McCallum, 2007; Mann and McCallum, 2008) who concurrently developed the idea of using penalties based on posterior expectations of features not necessarily in the model in order to guide learning. They call their method generalized expectation constraints or alternatively expectation regularization. In this volume (Druck et al., 2009) use this framework to train a dependency parser based on constraints stated as corpus-wide expected values of linguistic rules. The rules select a class of edges (e.g. auxiliary verb to main verb) and require that the expectation of these be close to some value. The main difference between this work and theirs is the source of the information (a linguistic informant vs. cross-lingual projection). Also, we define our regularization with respect to inequality constraints (the model is not penalized for exceeding the required model expectations), while they require moments to be close to an estimated value. We suspect that the two learning methods could perform comparably when they exploit similar information. 7 Conclusion In this paper, we proposed a novel and effective learning scheme for transferring dependency parses across bitext. By enforcing projected dependency constraints approximately and in expectation, our framework allows robust learning from noisy partially supervised target sentences, instead of committing to entire parses. We show that discriminative training generally outperforms generative approaches even in this very weakly supervised setting. By adding easily specified languagespecific constraints, our models begin to rival strong supervised baselines for small amounts of data. 
Our framework can handle a wide range of constraints and we are currently exploring richer syntactic constraints that involve conservation of multiple edge constructions as well as constraints on conservation of surface length of dependencies. Acknowledgments This work was partially supported by an Integrative Graduate Education and Research Traineeship grant from National Science Foundation (NSFIGERT 0504487), by ARO MURI SUBTLE W911NF-07-1-0216 and by the European Projects AsIsKnown (FP6-028044) and LTfLL (FP7-212578). References A. Abeill´e. 2003. Treebanks: Building and Using Parsed Corpora. Springer. H. Alshawi, S. Bangalore, and S. Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1). J. Atserias, B. Casas, E. Comelles, M. Gonz´alez, L. Padr´o, and M. Padr´o. 2006. Freeling 1.3: Syntactic and semantic services in an open-source nlp library. In Proc. LREC, Genoa, Italy. 376 P. F. Brown, S. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1994. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. C. Chelba, D. Engle, F. Jelinek, V. Jimenez, S. Khudanpur, L. Mangu, H. Printz, E. Ristad, R. Rosenfeld, A. Stolcke, and D. Wu. 1997. Structure and performance of a dependency language model. In Proc. Eurospeech. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. G. Druck, G. Mann, and A. McCallum. 2009. Semisupervised learning of dependency parsers using generalized expectation criteria. In Proc. ACL. J. Eisner. 1996. Three new probabilistic models for dependency parsing: an exploration. In Proc. CoLing. H. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. EMNLP, pages 304–311. K. Ganchev, J. Graca, J. Blitzer, and B. Taskar. 2008. Multi-view learning over structured and nonidentical outputs. In Proc. UAI. J. Grac¸a, K. Ganchev, and B. Taskar. 2008. Expectation maximization and posterior constraints. In Proc. NIPS. J. Grac¸a, K. Ganchev, and B. Taskar. 2009. Postcat - posterior constrained alignment toolkit. In The Third Machine Translation Marathon. A. Haghighi, A. Ng, and C. Manning. 2005. Robust textual inference via graph matching. In Proc. EMNLP. R. Hwa, P. Resnik, A. Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11:11–311. D. Klein and C. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proc. of ACL. P. Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit. S. Lee and K. Choi. 1997. Reestimation and bestfirst parsing algorithm for probabilistic dependency grammar. In In WVLC-5, pages 41–55. G. Mann and A. McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proc. ICML. G. Mann and A. McCallum. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proc. ACL, pages 870 – 878. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proc. ACL, pages 91–98. I. Mel’ˇcuk. 1988. Dependency syntax: theory and practice. SUNY. inci. P. Merlo, S. Stevenson, V. Tsang, and G. Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. ACL. R. M. Neal and G. E. Hinton. 1998. 
A new view of the EM algorithm that justifies incremental, sparse and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. Kluwer. J. Nivre and J. Nilsson. 2005. Pseudo-projective dependency parsing. In Proc. ACL. J. Nivre, J. Hall, S. K¨ubler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proc. EMNLP-CoNLL. F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. ACL. C. Quirk, A. Menezes, and C. Cherry. 2005. Dependency treelet translation: syntactically informed phrasal smt. In Proc. ACL. L. Shen, J. Xu, and R. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL. N. Smith and J. Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proc. ACL. J. Tiedemann. 2007. Building a multilingual parallel subtitle corpus. In Proc. CLIN. K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc. HLT-NAACL. Y. Tsuruoka and J. Tsujii. 2005. Bidirectional inference with the easiest-first strategy for tagging sequence data. In Proc. HLT/EMNLP. H. Yamada and Y. Matsumoto. 2003a. Statistical dependency analysis with support vector machines. In Proc. IWPT, pages 195–206. H. Yamada and Y. Matsumoto. 2003b. Statistical dependency analysis with support vector machines. In Proc. IWPT. D. Yarowsky and G. Ngai. 2001. Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora. In Proc. NAACL. D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proc. HLT. 377
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 378–386, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Cross-Domain Dependency Parsing Using a Deep Linguistic Grammar Yi Zhang LT-Lab, DFKI GmbH and Dept of Computational Linguistics Saarland University D-66123 Saarbr¨ucken, Germany [email protected] Rui Wang Dept of Computational Linguistics Saarland University 66123 Saarbr¨ucken, Germany [email protected] Abstract Pure statistical parsing systems achieves high in-domain accuracy but performs poorly out-domain. In this paper, we propose two different approaches to produce syntactic dependency structures using a large-scale hand-crafted HPSG grammar. The dependency backbone of an HPSG analysis is used to provide general linguistic insights which, when combined with state-of-the-art statistical dependency parsing models, achieves performance improvements on out-domain tests.† 1 Introduction Syntactic dependency parsing is attracting more and more research focus in recent years, partially due to its theory-neutral representation, but also thanks to its wide deployment in various NLP tasks (machine translation, textual entailment recognition, question answering, information extraction, etc.). In combination with machine learning methods, several statistical dependency parsing models have reached comparable high parsing accuracy (McDonald et al., 2005b; Nivre et al., 2007b). In the meantime, successful continuation of CoNLL Shared Tasks since 2006 (Buchholz and Marsi, 2006; Nivre et al., 2007a; Surdeanu et al., 2008) have witnessed how easy it has become to train a statistical syntactic dependency parser provided that there is annotated treebank. While the dissemination continues towards various languages, several issues arise with such purely data-driven approaches. One common observation is that statistical parser performance drops significantly when tested on a dataset different from the training set. For instance, when using †The first author thanks the German Excellence Cluster of Multimodal Computing and Interaction for the support of the work. The second author is funded by the PIRE PhD scholarship program. the Wall Street Journal (WSJ) sections of the Penn Treebank (Marcus et al., 1993) as training set, tests on BROWN Sections typically result in a 6-8% drop in labeled attachment scores, although the average sentence length is much shorter in BROWN than that in WSJ. The common interpretation is that the test set is heterogeneous to the training set, hence in a different “domain” (in a loose sense). The typical cause of this is that the model overfits the training domain. The concerns over random choice of training corpus leading to linguistically inadequate parsing systems increase over time. While the statistical revolution in the field of computational linguistics gaining high publicity, the conventional symbolic grammar-based parsing approaches have undergone a quiet period of development during the past decade, and reemerged very recently with several large scale grammar-driven parsing systems, benefiting from the combination of well-established linguistic theories and data-driven stochastic models. The obvious advantage of such systems over pure statistical parsers is their usage of hand-coded linguistic knowledge irrespective of the training data. A common problem with grammar-based parser is the lack of robustness. Also it is difficult to derive grammar compatible annotations to train the statistical components. 
2 Parser Domain Adaptation In recent years, two statistical dependency parsing systems, MaltParser (Nivre et al., 2007b) and MSTParser (McDonald et al., 2005b), representing different threads of research in data-driven machine learning approaches have obtained high publicity, for their state-of-the-art performances in open competitions such as CoNLL Shared Tasks. MaltParser follows the transition-based approach, where parsing is done through a series of actions deterministically predicted by an oracle model. MSTParser, on the other hand, follows 378 the graph-based approach where the best parse tree is acquired by searching for a spanning tree which maximizes the score on either a partially or a fully connected graph with all words in the sentence as nodes (Eisner, 1996; McDonald et al., 2005b). As reported in various evaluation competitions, the two systems achieved comparable performances. More recently, approaches of combining these two parsers achieved even better dependency accuracy (Nivre and McDonald, 2008). Granted for the differences between their approaches, both systems heavily rely on machine learning methods to estimate the parsing model from an annotated corpus as training set. Due to the heavy cost of developing high quality large scale syntactically annotated corpora, even for a resource-rich language like English, only very few of them meets the criteria for training a general purpose statistical parsing model. For instance, the text style of WSJ is newswire, and most of the sentences are statements. Being lack of non-statements in the training data could cause problems, when the testing data contain many interrogative or imperative sentences as in the BROWN corpus. Therefore, the unbalanced distribution of linguistic phenomena in the training data leads to inadequate parser output structures. Also, the financial domain specific terminology seen in WSJ can skew the interpretation of daily life sentences seen in BROWN. There has been a substantial amount of work on parser adaptation, especially from WSJ to BROWN. Gildea (2001) compared results from different combinations of the training and testing data to demonstrate that the size of the feature model can be reduced via excluding “domain-dependent” features, while the performance could still be preserved. Furthermore, he also pointed out that if the additional training data is heterogeneous from the original one, the parser will not obtain a substantially better performance. Bacchiani et al. (2006) generalized the previous approaches using a maximum a posteriori (MAP) framework and proposed both supervised and unsupervised adaptation of statistical parsers. McClosky et al. (2006) and McClosky et al. (2008) have shown that out-domain parser performance can be improved with selftraining on a large amount of unlabeled data. Most of these approaches focused on the machine learning perspective instead of the linguistic knowledge embraced in the parsers. Little study has been reported on approaches of incorporating linguistic features to make the parser less dependent on the nature of training and testing dataset, without resorting to huge amount of unlabeled out-domain data. In addition, most of the previous work have been focusing on constituent-based parsing, while the domain adaptation of the dependency parsing has not been fully explored. Taking a different approach towards parsing, grammar-based parsers appear to have much linguistic knowledge encoded within the grammars. 
In recent years, several of these linguistically motivated grammar-driven parsing systems achieved high accuracy which are comparable to the treebank-based statistical parsers. Notably are the constraint-based linguistic frameworks with mathematical rigor, and provide grammatical analyses for a large variety of phenomena. For instance, the Head-Driven Phrase Structure Grammar (Pollard and Sag, 1994) has been successfully applied in several parsing systems for more than a dozen of languages. Some of these grammars, such as the English Resource Grammar (ERG; Flickinger (2002)), have undergone over decades of continuous development, and provide precise linguistic analyses for a broad range of phenomena. These linguistic knowledge are encoded in highly generalized form according to linguists’ reflection for the target languages, and tend to be largely independent from any specific domain. The main issue of parsing with precision grammars is that broad coverage and high precision on linguistic phenomena do not directly guarantee robustness of the parser with noisy real world texts. Also, the detailed linguistic analysis is not always of the highest interest to all NLP applications. It is not always straightforward to scale down the detailed analyses embraced by deep grammars to a shallower representation which is more accessible for specific NLP tasks. On the other hand, since the dependency representation is relatively theory-neutral, it is possible to convert from other frameworks into its backbone representation in dependencies. For HPSG, this is further assisted by the clear marking of head daughters in headed phrases. Although the statistical components of the grammar-driven parser might be still biased by the training domain, the hand-coded grammar rules guarantee the basic linguistic constraints to be met. This not to say that domain adaptation is 379 HPSG DB Extraction HPSG DB Feature Models MSTParser Feature Model MaltParser Feature Model Section 3.1 Section 3.3 McDonald et al., 2005 Nivre et al., 2007 Nivre and McDonald, 2008 Section 4.2 Section 4.3 Figure 1: Different dependency parsing models and their combinations. DB stands for dependency backbone. not an issue for grammar-based parsing systems, but the built-in linguistic knowledge can be explored to reduce the performance drop in pure statistical approaches. 3 Dependency Parsing with HPSG In this section, we explore two possible applications of the HPSG parsing onto the syntactic dependency parsing task. One is to extract dependency backbone from the HPSG analyses of the sentences and directly convert them into the target representation; the other way is to encode the HPSG outputs as additional features into the existing statistical dependency parsing models. In the previous work, Nivre and McDonald (2008) have integrated MSTParser and MaltParser by feeding one parser’s output as features into the other. The relationships between our work and their work are roughly shown in Figure 1. 3.1 Extracting Dependency Backbone from HPSG Derivation Tree Given a sentence, each parse produced by the parser is represented by a typed feature structure, which recursively embeds smaller feature structures for lower level phrases or words. For the purpose of dependency backbone extraction, we only look at the derivation tree which corresponds to the constituent tree of an HPSG analysis, with all non-terminal nodes labeled by the names of the grammar rules applied. Figure 2 shows an example. 
Note that all grammar rules in ERG are either unary or binary, giving us relatively deep trees when compared with annotations such as Penn Treebank. Conceptually, this conversion is similar to the conversions from deeper structures to GR reprsentations reported by Clark and Curran (2007) and Miyao et al. (2007). np_title_cmpnd ms_n2 proper_np subjh generic_proper_ne Haag play_v1 hcomp proper_np generic_proper_ne Elianti. plays Ms. Figure 2: An example of an HPSG derivation tree with ERG Ms. Haag plays Elianti. hcomp np_title_cmpnd subjh Figure 3: An HPSG dependency backbone structure The dependency backbone extraction works by first identifying the head daughter for each binary grammar rule, and then propagating the head word of the head daughter upwards to their parents, and finally creating a dependency relation, labeled with the HPSG rule name of the parent node, from the head word of the parent to the head word of the non-head daughter. See Figure 3 for an example of such an extracted backbone. For the experiments in this paper, we used July08 version of the ERG, which contains in total 185 grammar rules (morphological rules are not counted). Among them, 61 are unary rules, and 124 are binary. Many of the binary rules are clearly marked as headed phrases. The grammar also indicates whether the head is on the left (head-initial) or on the right (head-final). However, there are still quite a few binary rules which are not marked as headed-phrases (according to the linguistic theory), e.g. rules to handle coordinations, appositions, compound nouns, etc. For these rules, we refer to the conversion of the Penn Treebank into dependency structures used in the CoNLL 2008 Shared Task, and mark the heads of these rules in a way that will arrive at a compatible dependency backbone. For instance, the left most daughters of coordination rules are marked as heads. In combination with the right-branching analysis of coordination in ERG, this leads to the same dependency attachment in the CoNLL syntax. Eventually, 37 binary rules are marked with a head daughter on the left, and 86 with a head daughter on the right. Although the extracted dependency is similar to 380 the CoNLL shared task dependency structures, minor systematic differences still exist for some phenomena. For example, the possessive “’s” is annotated to be governed by its preceding word in CoNLL dependency; while in HPSG, it is treated as the head of a “specifier-head” construction, hence governing the preceding word in the dependency backbone. With several simple tree rewriting rules, we are able to fix the most frequent inconsistencies. With the rule-based backbone extraction and repair, we can finally turn our HPSG parser outputs into dependency structures1. The unlabeled attachment agreement between the HPSG backbone and CoNLL dependency annotation will be shown in Section 4.2. 3.2 Robust Parsing with HPSG As mentioned in Section 2, one pitfall of using a precision-oriented grammar in parsing is its lack of robustness. Even with a large scale broad coverage grammar like ERG, using our settings we only achieved 75% of sentential coverage2. Given that the grammar has never been fine-tuned for the financial domain, such coverage is very encouraging. But still, the remaining unparsed sentences comprise a big coverage gap. Different strategies can be taken here. One can either keep the high precision by only looking at full parses from the HPSG parser, of which the analyses are completely admitted by grammar constraints. 
Or one can trade precision for extra robustness by looking at the most probable incomplete analysis. Several partial parsing strategies have been proposed (Kasper et al., 1999; Zhang and Kordoni, 2008) as the robust fallbacks for the parser when no available analysis can be derived. In our experiment, we select the sequence of most likely fragment analyses according to their local disambiguation scores as the partial parse. When combined with the dependency backbone extraction, partial parses generate disjoint tree fragments. We simply attach all fragments onto the virtual root node. 1It is also possible map from HPSG rule names (together with the part-of-speech of head and dependent) to CoNLL dependency labels. This remains to be explored in the future. 2More recent study shows that with carefully designed retokenization and preprocessing rules, over 80% sentential coverage can be achieved on the WSJ sections of the Penn Treebank data using the same version of ERG. The numbers reported in this paper are based on a simpler preprocessor, using rather strict time/memory limits for the parser. Hence the coverage number reported here should not be taken as an absolute measure of grammar performance. 3.3 Using Feature-Based Models Besides directly using the dependency backbone of the HPSG output, we could also use it for building feature-based models of statistical dependency parsers. Since we focus on the domain adaptation issue, we incorporate a less domain dependent language resource (i.e. the HPSG parsing outputs using ERG) into the features models of statistical parsers. As mordern grammar-based parsers has achieved high runtime efficency (with our HPSG parser parsing at an average speed of ∼3 sentences per second), this adds up to an acceptable overhead. 3.3.1 Feature Model with MSTParser As mentioned before, MSTParser is a graphbased statistical dependency parser, whose learning procedure can be viewed as the assignment of different weights to all kinds of dependency arcs. Therefore, the feature model focuses on each kind of head-child pair in the dependency tree, and mainly contains four categories of features (Mcdonald et al., 2005a): basic uni-gram features, basic bi-gram features, in-between POS features, and surrounding POS features. It is emphasized by the authors that the last two categories contribute a large improvement to the performance and bring the parser to the state-of-the-art accuracy. Therefore, we extend this feature set by adding four more feature categories, which are similar to the original ones, but the dependency relation was replaced by the dependency backbone of the HPSG outputs. The extended feature set is shown in Table 1. 3.3.2 Feature Model with MaltParser MaltParser is another trend of dependency parser, which is based on transitions. The learning procedure is to train a statistical model, which can help the parser to decide which operation to take at each parsing status. The basic data structures are a stack, where the constructed dependency graph is stored, and an input queue, where the unprocessed data are put. Therefore, the feature model focuses on the tokens close to the top of the stack and also the head of the queue. Provided with the original features used in MaltParser, we add extra ones about the top token in the stack and the head token of the queue derived from the HPSG dependency backbone. The extended feature set is shown in Table 2 (the new features are listed separately). 
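Tables 1 and 2, shown next, give the exact templates. As an informal illustration of how the backbone is folded into the two feature models, the sketch below builds MSTParser-style features (head and child word/POS, with the head position taken from the HPSG backbone) and MaltParser-style features (backbone head word, head POS and HPSG rule of the stack-top and queue-front tokens). The token fields and feature-string format are assumptions made for this example, not the parsers' internal representations.

```python
# Illustrative sketch only: turning the HPSG dependency backbone into extra
# features for the two statistical parsers.  Token fields and feature-name
# strings are assumptions; the templates actually used are in Tables 1 and 2.

def mst_extra_features(tokens, i, backbone_head):
    """Extra MSTParser-style features for token i (in the spirit of Table 1):
    h = HPSG-backbone head of the current token, c = the current token."""
    c = tokens[i]
    h = tokens[backbone_head[i]]
    return [
        f"h-w={h['form']}|h-p={h['pos']}",                                   # uni-gram
        f"c-w={c['form']}|c-p={c['pos']}",                                   # uni-gram
        f"h-w={h['form']}|h-p={h['pos']}|c-w={c['form']}|c-p={c['pos']}",    # bi-gram
    ]

def malt_extra_features(stack, queue, tokens, backbone_head, backbone_rule):
    """Extra MaltParser-style features (in the spirit of Table 2): backbone head
    word, head POS and HPSG rule for the stack top s[0] and queue front i[0]."""
    feats = []
    for name, idx in (("s[0]", stack[-1] if stack else None),
                      ("i[0]", queue[0] if queue else None)):
        if idx is None:
            continue
        hh = tokens[backbone_head[idx]]
        feats += [f"{name}-hh-w={hh['form']}",
                  f"{name}-hh-p={hh['pos']}",
                  f"{name}-hr={backbone_rule[idx]}"]
    return feats

# Toy sentence "Ms. Haag plays Elianti" with a virtual root at index 0.
tokens = [{"form": "<root>", "pos": "ROOT"},
          {"form": "Ms.", "pos": "NNP"}, {"form": "Haag", "pos": "NNP"},
          {"form": "plays", "pos": "VBZ"}, {"form": "Elianti", "pos": "NNP"}]
backbone_head = {1: 2, 2: 3, 3: 0, 4: 3}          # from the extracted backbone
backbone_rule = {1: "np_title_cmpnd", 2: "subjh", 3: "root", 4: "hcomp"}

print(mst_extra_features(tokens, 2, backbone_head))
print(malt_extra_features(stack=[2], queue=[3, 4], tokens=tokens,
                          backbone_head=backbone_head,
                          backbone_rule=backbone_rule))
```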
381 Uni-gram Features: h-w,h-p; h-w; h-p; c-w,c-p; c-w; c-p Bi-gram Features: h-w,h-p,c-w,c-p; h-p,c-w,c-p; h-w,c-w,c-p; h-w,h-p,c-p; h-w,h-p,c-w; h-w,c-w; h-p,c-p POS Features of words in between: h-p,b-p,c-p POS Features of words surround: h-p,h-p+1,c-p-1,c-p; h-p-1,h-p,c-p-1,c-p; h-p,h-p+1,c-p,c-p+1; h-p-1,h-p,c-p,c-p+1 Table 1: The Extra Feature Set for MSTParser. h: the HPSG head of the current token; c: the current token; b: each token in between; -1/+1: the previous/next token; w: word form; p: POS POS Features: s[0]-p; s[1]-p; i[0]-p; i[1]-p; i[2]-p; i[3]-p Word Form Features: s[0]-h-w; s[0]-w; i[0]-w; i[1]-w Dependency Features: s[0]-lmc-d; s[0]-d; s[0]-rmc-d; i[0]-lmc-d New Features: s[0]-hh-w; s[0]-hh-p; s[0]-hr; i[0]-hh-w; i[0]-hh-p; i[0]-hr Table 2: The Extended Feature Set for MaltParser. s[0]/s[1]: the first and second token on the top of the stack; i[0]/i[1]/i[2]/i[3]: front tokens in the input queue; h: head of the token; hh: HPSG DB head of the token; w: word form; p: POS; d: dependency relation; hr: HPSG rule; lmc/rmc: left-/right-most child With the extra features, we hope that the training of the statistical model will not overfit the indomain data, but be able to deal with domain independent linguistic phenomena as well. 4 Experiment Results & Error Analyses To evaluate the performance of our different dependency parsing models, we tested our approaches on several dependency treebanks for English in a similar spirit to the CoNLL 2006-2008 Shared Tasks. In this section, we will first describe the datasets, then present the results. An error analysis is also carried out to show both pros and cons of different models. 4.1 Datasets In previous years of CoNLL Shared Tasks, several datasets have been created for the purpose of dependency parser evaluation. Most of them are converted automatically from existing treebanks in various forms. Our experiments adhere to the CoNLL 2008 dependency syntax (Yamada et al. 2003, Johansson et al. 2007) which was used to convert Penn-Treebank constituent trees into single-head, single-root, traceless and nonprojective dependencies. WSJ This dataset comprises of three portions. The larger part is converted from the Penn Treebank Wall Street Journal Sections #2–#21, and is used for training statistical dependency parsing models; the smaller part, which covers sentences from Section #23, is used for testing. Brown This dataset contains a subset of converted sentences from BROWN sections of the Penn Treebank. It is used for the out-domain test. PChemtb This dataset was extracted from the PennBioIE CYP corpus, containing 195 sentences from biomedical domain. The same dataset has been used for the domain adaptation track of the CoNLL 2007 Shared Task. Although the original annotation scheme is similar to the Penn Treebank, the dependency extraction setting is slightly different to the CoNLLWSJ dependencies (e.g. the coordinations). Childes This is another out-domain test set from the children language component of the TalkBank, containing dialogs between parents and children. This is the other datasets used in the domain adaptation track of the CoNLL 2007 Shared Task. The dataset is annotated with unlabeled dependencies. As have been reported by others, several systematic differences in the original CHILDES annotation scheme has led to the poor system performances on this track of the Shared Task in 2007. Two main differences concern a) root attachments, and b) coordinations. 
With several simple heuristics, we change the annotation scheme of the original dataset to match the Penn Treebank-based datasets. The new dataset is referred to as CHILDES*.
4.2 HPSG Backbone as Dependency Parser
First we test the agreement between the HPSG dependency backbone and the CoNLL dependency annotation. While approximating a target dependency structure with rule-based conversion is not the main focus of this work, the agreement between the two representations indicates how similar and consistent they are, and gives a rough impression of whether the feature-based models can benefit from the HPSG backbone.
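The agreement is reported below (Table 3) as unlabeled attachment score (UAS), i.e. the fraction of non-punctuation tokens whose backbone head matches the CoNLL head. A minimal sketch of that computation, with an assumed data format:

```python
# Minimal sketch of the unlabeled attachment score (UAS) reported in Table 3:
# the fraction of non-punctuation tokens whose predicted head matches the gold
# head.  The list-of-lists data format is an illustrative assumption.

def uas(gold_heads, pred_heads, is_punct):
    """gold_heads / pred_heads: per-sentence lists of head indices;
    is_punct: parallel lists of booleans marking punctuation tokens."""
    correct = total = 0
    for gold, pred, punct in zip(gold_heads, pred_heads, is_punct):
        for g, p, pu in zip(gold, pred, punct):
            if pu:                      # punctuation excluded from the evaluation
                continue
            total += 1
            correct += (g == p)
    return correct / total if total else 0.0

# One toy sentence: gold heads, backbone-predicted heads, punctuation flags.
gold = [[2, 3, 0, 3, 3]]
pred = [[2, 3, 0, 3, 3]]
punct = [[False, False, False, False, True]]
print(f"UAS = {uas(gold, pred, punct):.2%}")    # 100.00% on this toy example
```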
Most notable is that the dependency backbone achieved over 80% UAS on BROWN, which is close to the performance of state-of-the-art statistical dependency parsing systems trained on WSJ (see Table 5 and Table 4). The performance difference across data sets correlates to varying levels of difficulties in linguists’ view. Our error analysis does confirm that frequent errors occur in WSJ test with financial terminology missing from the grammar lexicon. The relative performance difference between the WSJ and BROWN test is contrary to the results observed for statistical parsers trained on WSJ. To further investigate the effect of HPSG parse disambiguation model on the dependency backbone accuracy, we used a set of 222 sentences from section of WSJ which have been parsed with ERG and manually disambiguated. Comparing to the WSJ-P result in Table 3, we improved the agreement with CoNLL dependency by another 8% (an upper-bound in case of a perfect disambiguation model). 4.3 Statistical Dependency Parsing with HPSG Features Similar evaluations were carried out for the statistical parsers using extra HPSG dependency backbone as features. It should be noted that the performance comparison between MSTParser and MaltParser is not the aim of this experiment, and the difference might be introduced by the specific settings we use for each parser. Instead, performance variance using different feature models is the main subject. Also, performance drop on out-domain tests shows how domain dependent the feature models are. For MaltParser, we use Arc-Eager algo383 rithm, and polynomial kernel with d = 2. For MSTParser, we use 1st order features and a projective decoder (Eisner, 1996). When incorporating HPSG features, two settings are used. The PARTIAL model is derived by robust-parsing the entire training data set and extract features from every sentence to train a unified model. When testing, the PARTIAL model is used alone to determine the dependency structures of the input sentences. The FULL model, on the other hand is only trained on the full parsed subset of sentences, and only used to predict dependency structures for sentences that the grammar parses. For the unparsed sentences, the original models without HPSG features are used. Parser performances are measured using both labeled and unlabeled attachment scores (LAS/UAS). For unlabeled CHILDES* data, only UAS numbers are reported. Table 4 and 5 summarize results for MSTParser and MaltParser, respectively. With both parsers, we see slight performance drops with both HPSG feature models on indomain tests (WSJ), compared with the original models. However, on out-domain tests, full-parse HPSG feature models consistently outperform the original models for both parsers. The difference is even larger when only the HPSG fully parsed subsets of the test sets are concerned. When we look at the performance difference between in-domain and out-domain tests for each feature model, we observe that the drop is significantly smaller for the extended models with HPSG features. We should note that we have not done any feature selection for our HPSG feature models. Nor have we used the best known configurations of the existing parsers (e.g. second order features in MSTParser). Admittedly the results on PCHEMTB are lower than the best reported results in CoNLL 2007 Shared Task, we shall note that we are not using any in-domain unlabeled data. 
Also, the poor performance of the HPSG parser on this dataset indicates that the parser performance drop is more related to domain-specific phenomena and not general linguistic knowledge. Nevertheless, the drops when compared to in-domain tests are constantly decreased with the help of HPSG analyses features. With the results on BROWN, the performance of our HPSG feature models will rank 2nd on the out-domain test for the CoNLL 2008 Shared Task. Unlike the observations in Section 4.2, the partial parsing mode does not work well as a fallback in the feature models. In most cases, its performances are between the original models and the full-parse HPSG feature models. The partial parsing features obscure the linguistic certainty of grammatical structures produced in the full model. When used as features, such uncertainty leads to further confusion. Practically, falling back to the original models works better when HPSG full parse is not available. 4.4 Error Analyses Qualitative error analysis is also performed. Since our work focuses on the domain adaptation, we manually compare the outputs of the original statistical models, the dependency backbone, and the feature-based models on the out-domain data, i.e. the BROWN data set (both labeled and unlabeled results) and the CHILDES* data set (only unlabeled results). For the dependency attachment (i.e. unlabeled dependency relation), fine-grained HPSG features do help the parser to deal with colloquial sentences, such as “What’s wrong with you?”. The original parser wrongly takes “what” as the root of the dependency tree and “’s” is attached to “what”. The dependency backbone correctly finds out the root, and thus guide the extended model to make the right prediction. A correct structure of “..., were now neither active nor really relaxed.” is also predicted by our model, while the original model wrongly attaches “really” to “nor” and “relaxed” to “were”. The rich linguistic knowledge from the HPSG outputs also shows its usefulness. For example, in a sentence from the CHILDES* data, “Did you put dolly’s shoes on?”, the verb phrase “put on” can be captured by the HPSG backbone, while the original model attaches “on” to the adjacent token “shoes”. For the dependency labels, the most difficulty comes from the prepositions. For example, “Scotty drove home alone in the Plymouth”, all the systems get the head of “in” correct, which is “drove”. However, none of the dependency labels is correct. The original model predicts the “DIR” relation, the extended feature-based model says “TMP”, but the gold standard annotation is “LOC”. This is because the HPSG dependency backbone knows that “in the Plymouth” is an adjunct of “drove”, but whether it is a temporal or 384 Original PARTIAL FULL LAS% UAS% LAS% UAS% LAS% UAS% WSJ 87.38 90.35 87.06 90.03 86.87 89.91 BROWN 80.46 (-6.92) 86.26 (-4.09) 80.55 (-6.51) 86.17 (-3.86) 80.92 (-5.95) 86.58 (-3.33) PCHEMTB 53.37 (-33.8) 62.11 (-28.24) 54.69 (-32.37) 64.09 (-25.94) 56.45 (-30.42) 65.77 (-24.14) CHILDES* – 72.17 (-18.18) – 74.91 (-15.12) – 75.64 (-14.27) WSJ-P 87.86 90.88 87.78 90.85 87.12 90.25 BROWN-P 81.58 (-6.28) 87.41 (-3.47) 81.92 (-5.86) 87.51 (-3.34) 82.14 (-4.98) 87.80 (-2.45) PCHEMTB-P 56.32 (-31.54) 65.26 (-25.63) 59.36 (-28.42) 69.20 (-21.65) 60.69 (-26.43) 70.45 (-19.80) CHILDES*-P – 72.88 (-18.00) – 76.02 (-14.83) – 76.76 (-13.49) Table 4: Performance of the MSTParser with different feature models. Numbers in parentheses are performance drops in out-domain tests, comparing to in-domain results. 
The upper part represents the results on the complete data sets, and the lower part is on the fully parsed subsets, indicated by “-P”. Original PARTIAL FULL LAS% UAS% LAS% UAS% LAS% UAS% WSJ 86.47 88.97 85.39 88.10 85.66 88.40 BROWN 79.41 (-7.06) 84.75 (-4.22) 79.10 (-6.29) 84.58 (-3.52) 79.56 (-6.10) 85.24 (-3.16) PCHEMTB 61.05 (-25.42) 71.32 (-17.65) 61.01 (-24.38) 70.99 (-17.11) 60.93 (-24.73) 70.89 (-17.51) CHILDES* – 74.97 (-14.00) – 75.64 (-12.46) – 76.18 (-12.22) WSJ-P 86.99 89.58 86.09 88.83 85.82 88.76 BROWN-P 80.43 (-6.56) 85.78 (-3.80) 80.46 (-5.63) 85.94 (-2.89) 80.62 (-5.20) 86.38 (-2.38) PCHEMTB-P 63.33 (-23.66) 73.54 (-16.04) 63.27 (-22.82) 73.31 (-15.52) 63.16 (-22.66) 73.06 (-15.70) CHILDES*-P – 75.95 (-13.63) – 77.05 (-11.78) – 77.30 (-11.46) Table 5: Performance of the MaltParser with different feature models. locative expression cannot be easily predicted at the pure syntactic level. This also suggests a joint learning of syntactic and semantic dependencies, as proposed in the CoNLL 2008 Shared Task. Instances of wrong HPSG analyses have also been observed as one source of errors. For most of the cases, a correct reading exists, but not picked by our parse selection model. This happens more often with the WSJ test set, partially contributing to the low performance. 5 Conclusion & Future Work Similar to our work, Sagae et al. (2007) also considered the combination of dependency parsing with an HPSG parser, although their work was to use statistical dependency parser outputs as soft constraints to improve the HPSG parsing. Nevertheless, a similar backbone extraction algorithm was used to map between different representations. Similar work also exists in the constituentbased approaches, where CFG backbones were used to improve the efficiency and robustness of HPSG parsers (Matsuzaki et al., 2007; Zhang and Kordoni, 2008). In this paper, we restricted our investigation on the syntactic evaluation using labeled/unlabeled attachment scores. Recent discussions in the parsing community about meaningful crossframework evaluation metrics have suggested to use measures that are semantically informed. In this spirit, Zhang et al. (2008) showed that the semantic outputs of the same HPSG parser helps in the semantic role labeling task. Consistent with the results reported in this paper, more improvement was achieved on the out-domain tests in their work as well. Although the experiments presented in this paper were carried out on a HPSG grammar for English, the method can be easily adapted to work with other grammar frameworks (e.g. LFG, CCG, TAG, etc.), as well as on langugages other than English. We chose to use a hand-crafted grammar, so that the effect of training corpus on the deep parser is minimized (with the exception of the lexical coverage and disambiguation model). As mentioned in Section 4.4, the performance of our HPSG parse selection model varies across different domains. This indicates that, although the deep grammar embraces domain independent linguistic knowledge, the lexical coverage and the disambiguation process among permissible readings is still domain dependent. With the mapping between HPSG analyses and their dependency backbones, one can potentially use existing dependency treebanks to help overcome the insufficient data problem for deep parse selection models. 385 References Michiel Bacchiani, Michael Riley, Brian Roark, and Richard Sproat. 2006. Map adaptation of stochastic grammars. Computer speech and language, 20(1):41–68. 
Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-X), New York City, USA. Stephen Clark and James Curran. 2007. Formalismindependent parser evaluation with ccg and depbank. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 248–255, Prague, Czech Republic. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340–345, Copenhagen, Denmark. Dan Flickinger. 2002. On building a more efficient grammar by exploiting types. In Stephan Oepen, Dan Flickinger, Jun’ichi Tsujii, and Hans Uszkoreit, editors, Collaborative Language Engineering, pages 1–17. CSLI Publications. Daniel Gildea. 2001. Corpus variation and parser performance. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 167–202, Pittsburgh, USA. Walter Kasper, Bernd Kiefer, Hans-Ulrich Krieger, C.J. Rupp, and Karsten Worm. 1999. Charting the depths of robust speech processing. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL 1999), pages 405–412, Maryland, USA. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19(2):313–330. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2007. Efficient HPSG parsing with supertagging and CFGfiltering. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), pages 1671–1676, Hyderabad, India. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 337–344, Sydney, Australia. David McClosky, Eugene Charniak, and Mark Johnson. 2008. When is self-training effective for parsing? In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 561–568, Manchester, UK. Ryan Mcdonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 91–98, Ann Arbor, Michigan. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005b. Non-Projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of HLTEMNLP 2005, pages 523–530, Vancouver, Canada. Yusuke Miyao, Kenji Sagae, and Jun’ichi Tsujii. 2007. Towards framework-independent evaluation of deep linguistic parsers. In Proceedings of the GEAF07 Workshop, pages 238–258, Stanford, CA. Joakim Nivre and Ryan McDonald. 2008. Integrating graphbased and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950–958, Columbus, Ohio, June. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007a. The CoNLL 2007 shared task on dependency parsing. In Proceedings of EMNLP-CoNLL 2007, pages 915–932, Prague, Czech Republic. Joakim Nivre, Jens Nilsson, Johan Hall, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007b. Maltparser: A languageindependent system for data-driven dependency parsing. 
Natural Language Engineering, 13(1):1–41. Stephan Oepen, Helge Dyvik, Jan Tore Lønning, Erik Velldal, Dorothee Beermann, John Carroll, Dan Flickinger, Lars Hellan, Janne Bondi Johannessen, Paul Meurer, Torbjørn Nordg˚ard, and Victoria Ros´en. 2004. Som ˚a kapp-ete med trollet? Towards MRS-Based Norwegian– English Machine Translation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, Baltimore, USA. Carl J. Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago, USA. Kenji Sagae, Yusuke Miyao, and Jun’ichi Tsujii. 2007. Hpsg parsing with shallow dependency constraints. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 624–631, Prague, Czech Republic. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the 12th Conference on Computational Natural Language Learning (CoNLL2008), Manchester, UK. Kristina Toutanova, Christoper D. Manning, Stuart M. Shieber, Dan Flickinger, and Stephan Oepen. 2002. Parse ranking for a rich HPSG grammar. In Proceedings of the 1st Workshop on Treebanks and Linguistic Theories (TLT 2002), pages 253–263, Sozopol, Bulgaria. Yi Zhang and Valia Kordoni. 2008. Robust Parsing with a Large HPSG Grammar. In Proceedings of the Sixth International Language Resources and Evaluation (LREC’08), Marrakech, Morocco. Yi Zhang, Rui Wang, and Hans Uszkoreit. 2008. Hybrid Learning of Dependency Structures from Heterogeneous Linguistic Resources. In Proceedings of the Twelfth Conference on Computational Natural Language Learning (CoNLL 2008), pages 198–202, Manchester, UK. 386
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 387–395, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Chinese-English Organization Name Translation System Using Heuristic Web Mining and Asymmetric Alignment Fan Yang, Jun Zhao, Kang Liu National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China {fyang,jzhao,kliu}@nlpr.ia.ac.cn Abstract In this paper, we propose a novel system for translating organization names from Chinese to English with the assistance of web resources. Firstly, we adopt a chunkingbased segmentation method to improve the segmentation of Chinese organization names which is plagued by the OOV problem. Then a heuristic query construction method is employed to construct an efficient query which can be used to search the bilingual Web pages containing translation equivalents. Finally, we align the Chinese organization name with English sentences using the asymmetric alignment method to find the best English fragment as the translation equivalent. The experimental results show that the proposed method outperforms the baseline statistical machine translation system by 30.42%. 1 Introduction The task of Named Entity (NE) translation is to translate a named entity from the source language to the target language, which plays an important role in machine translation and cross-language information retrieval (CLIR). The organization name (ON) translation is the most difficult subtask in NE translation. The structure of ON is complex and usually nested, including person name, location name and sub-ON etc. For example, the organization name “北京诺基亚通 信有限公司(Beijing Nokia Communication Ltd.)” contains a company name (诺基亚/Nokia) and a location name (北京/Beijing). Therefore, the translation of organization names should combine transliteration and translation together. Many previous researchers have tried to solve ON translation problem by building a statistical model or with the assistance of web resources. The performance of ON translation using web knowledge is determined by the solution of the following two problems:  The efficiency of web page searching: how can we find the web pages which contain the translation equivalent when the amount of the returned web pages is limited?  The reliability of the extraction method: how reliably can we extract the translation equivalent from the web pages that we obtained in the searching phase? For solving these two problems, we propose a Chinese-English organization name translation system using heuristic web mining and asymmetric alignment, which has three innovations. 1) Chunking-based segmentation: A Chinese ON is a character sequences, we need to segment it before translation. But the OOV words always make the ON segmentation much more difficult. We adopt a new two-phase method here. First, the Chinese ON is chunked and each chunk is classified into four types. Then, different types of chunks are segmented separately using different strategies. Through chunking the Chinese ON first, the OOVs can be partitioned into one chunk which will not be segmented in the next phase. In this way, the performance of segmentation is improved. 2) Heuristic Query construction: We need to obtain the bilingual web pages that contain both the input Chinese ON and its translation equivalent. 
But in most cases, if we just send the Chinese ON to the search engine, we will always get the Chinese monolingual web pages which don’t contain any English word sequences, let alone the English translation equivalent. So we propose a heuristic query construction method to generate an efficient bilingual query. Some words in the Chinese ON are selected and their translations are added into the query. These English words will act as clues for searching 387 bilingual web pages. The selection of the Chinese words to be translated will take into consideration both the translation confidence of the words and the information contents that they contain for the whole ON. 3) Asymmetric alignment: When we extract the translation equivalent from the web pages, the traditional method should recognize the named entities in the target language sentence first, and then the extracted NEs will be aligned with the source ON. However, the named entity recognition (NER) will always introduce some mistakes. In order to avoid NER mistakes, we propose an asymmetric alignment method which align the Chinese ON with an English sentence directly and then extract the English fragment with the largest alignment score as the equivalent. The asymmetric alignment method can avoid the influence of improper results of NER and generate an explicit matching between the source and the target phrases which can guarantee the precision of alignment. In order to illustrate the above ideas clearly, we give an example of translating the Chinese ON “中国华融资产管理公司 (China Huarong Asset Management Corporation)”. Step1: We first chunk the ON, where “LC”, “NC”, “MC” and “KC” are the four types of chunks defined in Section 4.2. 中国(China)/LC 华融(Huarong)/NC 资产管理 (asset management)/MC 公司(corporation)/KC Step2: We segment the ON based on the chunking results. 中国(china) 华融(Huarong) 资产(asset) 管理(management) 公司(corporation) If we do not chunk the ON first, the OOV word “华融(Huarong)” may be segmented as “华 融”. This result will certainly lead to translation errors. Step 3: Query construction: We select the words “资产” and “管理” to translate and a bilingual query is constructed as: “ 中国华融资产管理公司” + asset + management If we don’t add some English words into the query, we may not obtain the web pages which contain the English phrase “China Huarong Asset Management Corporation”. In that case, we can not extract the translation equivalent. Step 4: Asymmetric Alignment: We extract a sentence “…President of China Huarong Asset Management Corporation…” from the returned snippets. Then the best fragment of the sentence “China Huarong Asset Management Corporation” will be extracted as the translation equivalent. We don’t need to implement English NER process which may make mistakes. The remainder of the paper is structured as follows. Section 2 reviews the related works. In Section 3, we present the framework of our system. We discuss the details of the ON chunking in Section 4. In Section 5, we introduce the approach of heuristic query construction. In section 6, we will analyze the asymmetric alignment method. The experiments are reported in Section 7. The last section gives the conclusion and future work. 2 Related Work In the past few years, researchers have proposed many approaches for organization translation. There are three main types of methods. The first type of methods translates ONs by building a statistical translation model. 
The model can be built on the granularity of word [Stalls et al., 1998], phrase [Min Zhang et al., 2005] or structure [Yufeng Chen et al., 2007]. The second type of methods finds the translation equivalent based on the results of alignment from the source ON to the target ON [Huang et al., 2003; Feng et al., 2004; Lee et al., 2006]. The ONs are extracted from two corpora. The corpora can be parallel corpora [Moore et al., 2003] or contentaligned corpora [Kumano et al., 2004]. The third type of methods introduces the web resources into ON translation. [Al-Onaizan et al., 2002] uses the web knowledge to assist NE translation and [Huang et al., 2004; Zhang et al., 2005; Chen et al., 2006] extracts the translation equivalents from web pages directly. The above three types of methods have their advantages and shortcomings. The statistical translation model can give an output for any input. But the performance is not good enough on complex ONs. The method of extracting translation equivalents from bilingual corpora can obtain high-quality translation equivalents. But the quantity of the results depends heavily on the amount and coverage of the corpora. So this kind of method is fit for building a reliable ON dictionary. In the third type of method, with the assistance of web pages, the task of ON translation can be viewed as a two-stage process. Firstly, the web pages that may contain the target translation are found through a search engine. Then the translation equivalent will be extracted from the web pages based on the alignment score with the original ON. This method will not 388 depend on the quantity and quality of the corpora and can be used for translating complex ONs. 3 The Framework of Our System The Framework of our ON translation system shown in Figure 1 has four modules. Figure 1. System framework 1) Chunking-based ON Segmentation Module: The input of this module is a Chinese ON. The Chunking model will partition the ON into chunks, and label each chunk using one of four classes. Then, different segmentation strategies will be executed for different types of chunks. 2) Statistical Organization Translation Module: The input of the module is a word set in which the words are selected from the Chinese ON. The module will output the translation of these words. 3) Web Retrieval Module: When input a Chinese ON, this module generates a query which contains both the ON and some words’ translation output from the translation module. Then we can obtain the snippets that may contain the translation of the ON from the search engine. The English sentences will be extracted from these snippets. 4) NE Alignment Module: In this module, the asymmetric alignment method is employed to align the Chinese ON with these English sentences obtained in Web retrieval module. The best part of the English sentences will be extracted as the translation equivalent. 4 The Chunking-based Segmentation for Chinese ONs In this section, we will illustrate a chunkingbased Chinese ON segmentation method, which can efficiently deal with the ONs containing OOVs. 4.1 The Problems in ON Segmentation The performance of the statistical ON translation model is dependent on the precision of the Chinese ON segmentation to some extent. When Chinese words are aligned with English words, the mistakes made in Chinese segmentation may result in wrong alignment results. We also need correct segmentation results when decoding. 
But Chinese ONs usually contain some OOVs that are hard to segment, especially the ONs containing names of people or brand names. To solve this problem, we try to chunk Chinese ONs firstly and the OOVs will be partitioned into one chunk. Then the segmentation will be executed for every chunk except the chunks containing OOVs. 4.2 Four Types of Chunks We define the following four types of chunks for Chinese ONs:  Location Chunk (LC): LC contains the location information of an ON.  Name Chunk (NC): NC contains the name or brand information of an ON. In most cases, Name chunks should be transliterated.  Modification Chunk (MC): MC contains the modification information of an ON.  Key word Chunk (KC): KC contains the type information of an ON. The following is an example of an ON containing these four types of chunks. 北京(Beijing)/LC 百富勤(Peregrine)/NC 投资咨询(investment consulting)/MC 有限公司 (co.)/KC In the above example, the OOV “ 百富勤 (Peregrine)” is partitioned into name chunk. Then the name chunk will not be segmented. 4.3 The CRFs Model for Chunking Considered as a discriminative probabilistic model for sequence joint labeling and with the advantage of flexible feature fusion ability, Conditional Random Fields (CRFs) [J.Lafferty et al., 2001] is believed to be one of the best probabilistic models for sequence labeling tasks. So the CRFs model is employed for chunking. We select 6 types of features which are proved to be efficient for chunking through experiments. The templates of features are shown in Table 1, 389 Description Features current/previous/success character C0、C-1、C1 whether the characters is a word W(C-2C-1C0)、W(C0C1C2)、 W(C-1C0C1) whether the characters is a location name L(C-2C-1C0)、L(C0C1C2)、 L(C-1C0C1) whether the characters is an ON suffix SK(C-2C-1C0)、SK(C0C1C2)、 SK(C-1C0C1) whether the characters is a location suffix SL(C-2C-1C0)、SL(C0C1C2)、 SL(C-1C0C1) relative position in the sentence POS(C0) Table 1. Features used in CRFs model where Ci denotes a Chinese character, i denotes the position relative to the current character. We also use bigram and unigram features but only show trigram templates in Table 1. 5 Heuristic Query Construction In order to use the web information to assist Chinese-English ON translation, we must firstly retrieve the bilingual web pages effectively. So we should develop a method to construct efficient queries which are used to obtain web pages through the search engine. 5.1 The Limitation of Monolingual Query We expect to find the web pages where the Chinese ON and its translation equivalent cooccur. If we just use a Chinese ON as the query, we will always obtain the monolingual web pages only containing the Chinese ON. In order to solve the problem, some words in the Chinese ON can be translated into English, and the English words will be added into the query as the clues to search the bilingual web pages. 5.2 The Strategy of Query Construction We use the metric of precision here to evaluate the possibility in which the translation equivalent is contained in the snippets returned by the search engine. That means, on the condition that we obtain a fixed number of snippets, the more the snippets which contain the translation equivalent are obtained, the higher the precision is. There are two factors to be considered. The first is how efficient the added English words can improve the precision. The second is how to avoid adding wrong translations which may bring down the precision. 
The first factor means that we should select the most informative words in the Chinese ON; the second means that we should also take the confidence of the SMT model into account. For example:
天津/LC 本田/NC 摩托车/MC 有限公司/KC (Tianjin Honda Motor Co., Ltd.)
There are three strategies for constructing queries:
Q1. “天津本田摩托车有限公司” Honda
Q2. “天津本田摩托车有限公司” Ltd.
Q3. “天津本田摩托车有限公司” Motor Tianjin
In the first strategy, we translate the word “本田(Honda)”, which is the most informative word in the ON. But its translation confidence is very low, which means that the statistical model usually translates it incorrectly, and such mistakes will mislead the search engine. In the second strategy, we translate the word with the largest translation confidence. Unfortunately, the word is so common that it gives no help in filtering out useless web pages. In the third strategy, the words which have both sufficient translation confidence and sufficient information content are selected.
5.3 Heuristically Selecting the Words to be Translated
Mutual information is used to evaluate the importance of the words in a Chinese ON. We calculate the mutual information at the granularity of words in formula (1) and of chunks in formula (2), and combine the two in formula (3):

MIW(x, Y) = \sum_{y \in Y} \log \frac{p(x, y)}{p(x)\, p(y)}    (1)

MIC(c, Y) = \sum_{y \in Y} \log \frac{p(y, c)}{p(y)\, p(c)}    (2)

IC(x, Y) = \alpha \, MIW(x, Y) + (1 - \alpha) \, MIC(c_x, Y)    (3)

Here, MIW(x, Y) denotes the mutual information of word x with the ON Y, i.e., the sum of the mutual information of x with every word in Y; MIC(c, Y) is defined analogously for chunks, and c_x denotes the label of the chunk containing x. We should also consider the risk of obtaining wrong translation results. The name chunk usually has the largest mutual information, but name chunks generally need to be transliterated, and transliteration is often more difficult than translation by lexicon. We therefore set a threshold Tc on translation confidence and select only the words whose translation confidence exceeds Tc, taking them in decreasing order of mutual information.
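As an informal sketch of this selection heuristic, the code below computes the interpolated score of formula (3) for each word, drops words whose translation confidence falls below Tc, and returns the rest in decreasing order of that score. The probability tables, confidence scores and parameter values are toy placeholders, not estimates from the paper's corpora.

```python
# Sketch of the word-selection heuristic in formulas (1)-(3).  All numeric
# tables below are toy placeholders chosen only so the example runs.
from collections import defaultdict
from math import log

def mi_word(x, Y, p_joint, p):
    """MIW(x, Y) = sum over y in Y of log p(x, y) / (p(x) p(y))."""
    return sum(log(p_joint[(x, y)] / (p[x] * p[y])) for y in Y)

def mi_chunk(c, Y, p_joint, p):
    """MIC(c, Y) = sum over y in Y of log p(y, c) / (p(y) p(c))."""
    return sum(log(p_joint[(y, c)] / (p[y] * p[c])) for y in Y)

def select_words(words, chunk_of, p_joint_w, p_joint_c, p, conf,
                 alpha=0.7, tc=0.05):          # default values for illustration
    scored = []
    for x in words:
        if conf[x] < tc:                       # too risky to translate
            continue
        ic = (alpha * mi_word(x, words, p_joint_w, p)
              + (1 - alpha) * mi_chunk(chunk_of[x], words, p_joint_c, p))
        scored.append((ic, x))
    return [x for _, x in sorted(scored, reverse=True)]

words = ["天津", "本田", "摩托车", "有限公司"]
chunk_of = {"天津": "LC", "本田": "NC", "摩托车": "MC", "有限公司": "KC"}
p = defaultdict(lambda: 0.05)                  # toy marginal probabilities
p_joint_w = defaultdict(lambda: 0.004)         # toy word-word joint probabilities
p_joint_c = defaultdict(lambda: 0.006)         # toy word-chunk joint probabilities
conf = {"天津": 0.6, "本田": 0.01, "摩托车": 0.5, "有限公司": 0.8}

print(select_words(words, chunk_of, p_joint_w, p_joint_c, p, conf))
# 本田 is filtered out by the confidence threshold (0.01 < Tc).
```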
The alignment method is so called “asymmetric” for that it aligns a phrase with a sentence, in other words, the alignment is conducted between two objects with different granularities. The NER process is not necessary for that we align the Chinese ON with English sentences directly. [Wai Lam et al., 2007] proposed a method which uses the KM algorithm to find the optimal explicit matching between a Chinese ON and a given English ON. KM algorithm [Kuhn, 1955] is a traditional graphic algorithm for finding the maximum matching in bipartite weighted graph. In this paper, the KM algorithm is extended to be an asymmetric alignment method. So we can obtain an explicit matching between a Chinese ON and a fragment of English sentence. A Chinese NE CO={CW1, CW2, …, CWn} is a sequence of Chinese words CWi and the English sentence ES={EW1, EW2, …, EWm} is a sequence of English words EWi. Our goal is to find a fragment EWi,i+n={EWi, …, EWi+n} in ES, which has the highest alignment score with CO. Through executing the extended KM algorithm, we can obtain an explicit matching L. For any CWi, we can get its corresponding English word EWj, written as L(CWi)=EWj and vice versa. We find the optimal matching L between two phrases, and calculate the alignment score based on L. An example of the asymmetric alignment will be given in Fig2. Fig2. An example of asymmetric alignment In Fig2, the Chinese ON “中国农业银行” is aligned to an English sentence “… the Agriculture Bank of China is the four…”. The stop words in parentheses are deleted for they have no meaning in Chinese. In step 1, the English fragment contained in the square brackets is aligned with the Chinese ON. We can obtain an explicit matching L1, shown by arrows, and an alignment score. In step 2, the square brackets move right by one word, we can obtain a new matching L2 and its corresponding alignment score, and so on. When we have calculated every consequent fragment in English sentence, we can find the best fragment “the Agriculture Bank of China” according to the alignment score as the translation equivalent. The algorithm is shown in Fig3. Where, m is the number of words in an English sentence and n is the number of words in a Chinese ON. KM algorithm will generate an equivalent sub-graph by setting a value to each vertex. The edge whose weight is equal to the summation of the values of its two vertexes will be added into the sub-graph. Then the Hungary algorithm will be executed in the equivalent sub-graph to find the optimal matching. We find the optimal matching between CW1,n and EW1,n first. Then we move the window right and find the optimal matching between CW1,n and EW2,n+1. The process will continue until the window arrives at the right most of the … [(The) Agriculture Bank (of) China] (is) (the) four 中国 农业 银行 (The) Agriculture [Bank (of) China] (is) (the) four]… 中国 农业 银行 Step 1: Step 2: 391 English sentence. When the window moves right, we only need to find a new matching for the new added English vertex EWend and the Chinese vertex Cdrop which has been matched with EWstart in the last step. In the Hungary algorithm, the matching is added through finding an augmenting path. So we only need to find one augmenting path each time. The time complexity of finding an augmenting path is O(n3). So the whole complexity of asymmetric alignment is O(m*n3). Algorithm: Asymmetric Alignment Algorithm Input: A segmented Chinese ON CO and an English sentence ES. Output: an English fragment EWk,k+n 1. Let start=1, end=n, L0=null 2. 
Using KM algorithm to find the optimal matching between two phrases CW1,n and EWstart,end based on the previous matching Lstart1. We obtain a matching Lstart and calculate the alignment score Sstart based on Lstart. 3. CWdrop = L(EWstart) L(CWdrop)=null. 4. If (end==m) go to 7, else start=start+1, end=end+1. 5. Calculate the feasible vertex labeling for the vertexes CWdrop and EWend 6. Go to 2. 7. The fragment EWk,k+n-1 which has the highest alignment score will be returned. Fig3. The asymmetric alignment algorithm 6.3 Obtain the Translation Equivalent For each English sentence, we can obtain a fragment ESi,i+n which has the highest alignment score. We will also take into consideration the frequency information of the fragment and its distance away from the Chinese ON. We use formula (4) to obtain a final score for each translation candidate ETi and select the largest one as translation result. ( )= + log( +1)+ log(1 / +1) i i i i S ET SA C D α β γ (4) Where Ci denotes the frequency of ETi, and Di denotes the nearest distance between ETi and the Chinese ON. 7 Experiments We carried out experiments to investigate the performance improvement of ON translation under the assistance of web knowledge. 7.1 Experimental Data Our experiment data are extracted from LDC2005T34. There are two corpora, ldc_propernames_org_ce_v1.beta (Indus_corpus for short) and ldc_propernames_indu stry_ce_v1.beta (Org_corpus for short). Some pre-process will be executed to filter out some noisy translation pairs. For example, the translation pairs involving other languages such as Japanese and Korean will be filtered out. There are 65,835 translation pairs that we used as the training corpus and the chunk labels are added manually. We randomly select 250 translation pairs from the Org_corpus and 253 translation pairs from the Indus_corpus. Altogether, there are 503 translation pairs as the testing set. 7.2 The Effect of Chunking-based Segmentation upon ON Translation In order to evaluate the influence of segmentation results upon the statistical ON translation system, we compare the results of two translation models. One model uses chunking-based segmentation results as input, while the other uses traditional segmentation results. To train the CRFs-chunking model, we randomly selected 59,200 pairs of equivalent translations from Indus_corpus and org_corpus. We tested the performance on the set which contains 6,635 Chinese ONs and the results are shown as Table-2. For constructing a statistical ON translation model, we use GIZA++1 to align the Chinese NEs and the English NEs in the training set. Then the phrase-based machine translation system MOSES2 is adopted to translate the 503 Chinese NEs in testing set into English. Precision Recall F-measure LC 0.8083 0.7973 0.8028 NC 0.8962 0.8747 0.8853 MC 0.9104 0.9073 0.9088 KC 0.9844 0.9821 0.9833 All 0.9437 0.9372 0.9404 Table 2. The test results of CRFs-chunking model We have two metrics to evaluate the translation results. The first metric L1 is used to evaluate whether the translation result is exactly the same as the answer. The second metric L2 is used to evaluate whether the translation result contains almost the same words as the answer, 1 http://www.fjoch.com/GIZA++.html 2 http://www.statmt.org/moses/ 392 without considering the order of words. The results are shown in Table-3: chunking-based segmentation traditional segmentation L1 21.47% 18.29% L2 40.76% 36.78% Table 3. 
Comparison of segmentation influence From the above experimental data, we can see that the chunking-based segmentation improves L1 precision from 18.29% to 21.47% and L2 precision from 36.78% to 40.76% in comparison with the traditional segmentation method. Because the segmentation results will be used in alignment, the errors will affect the computation of alignment probability. The chunking based segmentation can generate better segmentation results; therefore better alignment probabilities can be obtained. 7.3 The Efficiency of Query Construction The heuristic query construction method aims to improve the efficiency of Web searching. The performance of searching for translation equivalents mostly depends on how to construct the query. To test its validity, we design four kinds of queries and evaluate their ability using the metric of average precision in formula 5 and macro average precision (MAP) in formula 6, 1 1 P r N i i i H A verage ecision N S = = ∑ (5) where Hi is the count of snippets that contain at least one equivalent for the ith query. And Si is the total number of snippets we got for the ith query, 1 = 1 1 ( ) 1 j i H N j j i M A P R i N H = = ∑ ∑ (6) where R(i) is the order of snippet where the ith equivalent occurs. We construct four kinds of queries for the 503 Chinese ONs in testing set as follows: Q1: only the Chinese ON. Q2: the Chinese ON and the results of the statistical translation model. Q3: the Chinese ON and some parts’ translation selected by the heuristic query construction method. Q4: the Chinese ON and its correct English translation equivalent. We obtain at most 100 snippets from Google for every query. Sometimes there are not enough snippets as we expect. We set α in formula 4 at 0.7,and the threshold of translation confidence at 0.05. The results are shown as Table 4. Average precision MAP Q1 0.031 0.0527 Q2 0.187 0.2061 Q3 0.265 0.3129 Q4 1.000 1.0000 Table 4. Comparison of four types query Here we can see that, the result of Q4 is the upper bound of the performance, and the Q1 is the lower bound of the performance. We concentrate on the comparison between Q2 and Q3. Q2 contains the translations of every word in a Chinese ON, while Q3 just contains the translations of the words we select using the heuristic method. Q2 may give more information to search engine about which web pages we expect to obtain, but it also brings in translation mistakes that may mislead the search engine. The results show that Q3 is better than Q2, which proves that a careful clue selection is needed. 7.4 The Effect of Asymmetric Alignment Algorithm The asymmetric alignment method can avoid the mistakes made in the NER process and give an explicit alignment matching. We will compare the asymmetric alignment algorithm with the traditional alignment method on performance. We adopt two methods to align the Chinese NE with the English sentences. The first method has two phases, the English ONs are extracted from English sentences firstly, and then the English ONs are aligned with the Chinese ON. Lastly, the English ON with the highest alignment score will be selected as the translation equivalent. We use the software Lingpipe3 to recognize NEs in the English sentences. The alignment probability can be calculated as formula 7: ( , ) ( | ) i j i j Score C E p e c = ∑∑ (7) The second method is our asymmetric alignment algorithm. Our method is different from the one in [Wai Lam et al., 2007] which segmented a Chinese ON using an English ON as suggestion. 
We segment the Chinese ON using the chunking-based segmentation method. The English sentences extracted from snippets will be preprocessed. Some stop words will be deleted, such as “the”, “of”, “on” etc. To execute the extended KM algorithm for finding the best alignment matching, we must assure that the vertex number in each side of the bipartite is the 3 http://www.alias-i.com/lingpipe/ 393 same. So we will execute a phrase combination process before alignment, which combines some frequently occurring consequent English words into single vertex, such as “limited company” etc. The combination is based on the phrase pair table which is generated from phrase-based SMT system. The results are shown in Table 5: Asymmetric Alignment Traditional method Statistical model Top1 48.71% 36.18% 18.29% Top5 53.68% 46.12% -- Table 5. Comparison the precision of alignment method From the results (column 1 and column 2) we can see that, the Asymmetric alignment method outperforms the traditional alignment method. Our method can overcome the mistakes introduced in the NER process. On the other hand, in our asymmetric alignment method, there are two main reasons which may result in mistakes, one is that the correct equivalent doesn’t occur in the snippet; the other is that some English ONs can’t be aligned to the Chinese ON word by word. 7.5 Comparison between Statistical ON Translation Model and Our Method Compared with the statistical ON translation model, we can see that the performance is improved from 18.29% to 48.71% (the bold data shown in column 1 and column 3 of Table 5) by using our Chinese-English ON translation system. Transforming the translation problem into the problem of searching for the correct translation equivalent in web pages has three advantages. First, word order determination is difficult in statistical machine translation (SMT), while search engines are insensitive to this problem. Second, SMT often loses some function word such as “the”, “a”, “of”, etc, while our method can avoid this problem because such words are stop words in search engines. Third, SMT often makes mistakes in the selection of synonyms. This problem can be solved by the fuzzy matching of search engines. In summary, web assistant method makes Chinese ON translation easier than traditional SMT method. 8 Conclusion In this paper, we present a new approach which translates the Chinese ON into English with the assistance of web resources. We first adopt the chunking-based segmentation method to improve the ON segmentation. Then a heuristic query construction method is employed to construct a query which can search translation equivalent more efficiently. At last, the asymmetric alignment method aligns the Chinese ON with English sentences directly. The performance of ON translation is improved from 18.29% to 48.71%. It proves that our system can work well on the Chinese-English ON translation task. In the future, we will try to apply this method in mining the NE translation equivalents from monolingual web pages. In addition, the asymmetric alignment algorithm also has some space to be improved. Acknowledgement The work is supported by the National High Technology Development 863 Program of China under Grants no. 2006AA01Z144, and the National Natural Science Foundation of China under Grants no. 60673042 and 60875041. References Yaser Al-Onaizan and Kevin Knight. 2002. Translating named entities using monolingual and bilingual resources. In Proc of ACL-2002. Yufeng Chen, Chenqing Zong. 2007. 
A StructureBased Model for Chinese Organization Name Translation. In Proc. of ACM Transactions on Asian Language Information Processing (TALIP) Donghui Feng, Yajuan Lv, Ming Zhou. 2004. A new approach for English-Chinese named entity alignment. In Proc. of EMNLP 2004. Fei Huang, Stephan Vogal. 2002. Improved named entity translation and bilingual named entity extraction. In Proc. of the 4th IEEE International Conference on Multimodal Interface. Fei Huang, Stephan Vogal, Alex Waibel. 2003. Automatic extraction of named entity translingual equivalence based on multi-feature cost minimization. In Proc. of the 2003 Annual Conference of the ACL, Workshop on Multilingual and Mixed-language Named Entity Recognition Masaaki Nagata, Teruka Saito, and Kenji Suzuki. 2001. Using the Web as a Bilingual Dictionary. In Proc. of ACL 2001 Workshop on Data-driven Methods in Machine Translation. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL 2005. Conrad Chen, Hsin-His Chen. 2006. A High-Accurate Chinese-English NE Backward Translation System Combining Both Lexical Information and Web Statistics. In Proc. of ACL 2006. 394 Wai Lam, Shing-Kit Chan. 2007. Named Entity Translation Matching and Learning: With Application for Mining Unseen Translations. In Proc. of ACM Transactions on Information Systems. Chun-Jen Lee, Jason S. Chang, Jyh-Shing R. Jang. 2006. Alignment of bilingual named entities in parallel corpora using statistical models and multiple knowledge sources. In Proc. of ACM Transactions on Asian Language Information Processing (TALIP). Kuhn, H. 1955. The Hungarian method for the assignment problem. Naval Rese. Logist. Quart 2,83-97. Min Zhang., Haizhou Li, Su Jian, Hendra Setiawan. 2005. A phrase-based context-dependent joint probability model for named entity translation. In Proc. of the 2nd International Joint Conference on Natural Language Processing(IJCNLP) Ying Zhang, Fei Huang, Stephan Vogel. 2005. Mining translations of OOV terms from the web through cross-lingual query expansion. In Proc. of the 28th ACM SIGIR. Bonnie Glover Stalls and Kevin Knight. 1998. Translating names and technical terms in Arabic text. In Proc. of the COLING/ACL Workshop on Computational Approaches to Semitic Language. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML-2001. Tadashi Kumano, Hideki Kashioka, Hideki Tanaka and Takahiro Fukusima. 2004. Acquiring bilingual named entity translations from content-aligned corpora. In Proc. IJCNLP-04. Robert C. Moore. 2003. Learning translation of named-entity phrases from parallel corpora. In Proc. of 10th conference of the European chapter of ACL. 395
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 396–404, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Reducing semantic drift with bagging and distributional similarity Tara McIntosh and James R. Curran School of Information Technologies University of Sydney NSW 2006, Australia {tara,james}@it.usyd.edu.au Abstract Iterative bootstrapping algorithms are typically compared using a single set of handpicked seeds. However, we demonstrate that performance varies greatly depending on these seeds, and favourable seeds for one algorithm can perform very poorly with others, making comparisons unreliable. We exploit this wide variation with bagging, sampling from automatically extracted seeds to reduce semantic drift. However, semantic drift still occurs in later iterations. We propose an integrated distributional similarity filter to identify and censor potential semantic drifts, ensuring over 10% higher precision when extracting large semantic lexicons. 1 Introduction Iterative bootstrapping algorithms have been proposed to extract semantic lexicons for NLP tasks with limited linguistic resources. Bootstrapping was initially proposed by Riloff and Jones (1999), and has since been successfully applied to extracting general semantic lexicons (Riloff and Jones, 1999; Thelen and Riloff, 2002), biomedical entities (Yu and Agichtein, 2003), facts (Pas¸ca et al., 2006), and coreference data (Yang and Su, 2007). Bootstrapping approaches are attractive because they are domain and language independent, require minimal linguistic pre-processing and can be applied to raw text, and are efficient enough for tera-scale extraction (Pas¸ca et al., 2006). Bootstrapping is minimally supervised, as it is initialised with a small number of seed instances of the information to extract. For semantic lexicons, these seeds are terms from the category of interest. The seeds identify contextual patterns that express a particular semantic category, which in turn recognise new terms (Riloff and Jones, 1999). Unfortunately, semantic drift often occurs when ambiguous or erroneous terms and/or patterns are introduced into and then dominate the iterative process (Curran et al., 2007). Bootstrapping algorithms are typically compared using only a single set of hand-picked seeds. We first show that different seeds cause these algorithms to generate diverse lexicons which vary greatly in precision. This makes evaluation unreliable – seeds which perform well on one algorithm can perform surprisingly poorly on another. In fact, random gold-standard seeds often outperform seeds carefully chosen by domain experts. Our second contribution exploits this diversity we have identified. We present an unsupervised bagging algorithm which samples from the extracted lexicon rather than relying on existing gazetteers or hand-selected seeds. Each sample is then fed back as seeds to the bootstrapper and the results combined using voting. This both improves the precision of the lexicon and the robustness of the algorithms to the choice of initial seeds. Unfortunately, semantic drift still dominates in later iterations, since erroneous extracted terms and/or patterns eventually shift the category’s direction. Our third contribution focuses on detecting and censoring the terms introduced by semantic drift. We integrate a distributional similarity filter directly into WMEB (McIntosh and Curran, 2008). 
This filter judges whether a new term is more similar to the earlier or most recently extracted terms, a sign of potential semantic drift. We demonstrate these methods for extracting biomedical semantic lexicons using two bootstrapping algorithms. Our unsupervised bagging approach outperforms carefully hand-picked seeds by ∼10% in later iterations. Our distributional similarity filter gives a similar performance improvement. This allows us to produce large lexicons accurately and efficiently for domain-specific language processing. 396 2 Background Hearst (1992) exploited patterns for information extraction, to acquire is-a relations using manually devised patterns like such Z as X and/or Y where X and Y are hyponyms of Z. Riloff and Jones (1999) extended this with an automated bootstrapping algorithm, Multi-level Bootstrapping (MLB), which iteratively extracts semantic lexicons from text. In MLB, bootstrapping alternates between two stages: pattern extraction and selection, and term extraction and selection. MB is seeded with a small set of user selected seed terms. These seeds are used to identify contextual patterns they appear in, which in turn identify new lexicon entries. This process is repeated with the new lexicon terms identifying new patterns. In each iteration, the topn candidates are selected, based on a metric scoring their membership in the category and suitability for extracting additional terms and patterns. Bootstrapping eventually extracts polysemous terms and patterns which weakly constrain the semantic class, causing the lexicon’s meaning to shift, called semantic drift by Curran et al. (2007). For example, female firstnames may drift into flowers when Iris and Rose are extracted. Many variations on bootstrapping have been developed to reduce semantic drift.1 One approach is to extract multiple semantic categories simultaneously, where the individual bootstrapping instances compete with one another in an attempt to actively direct the categories away from each other. Multi-category algorithms outperform MLB (Thelen and Riloff, 2002), and we focus on these algorithms in our experiments. In BASILISK, MEB, and WMEB, each competing category iterates simultaneously between the term and pattern extraction and selection stages. These algorithms differ in how terms and patterns selected by multiple categories are handled, and their scoring metrics. In BASILISK (Thelen and Riloff, 2002), candidate terms are ranked highly if they have strong evidence for a category and little or no evidence for other categories. This typically favours less frequent terms, as they will match far fewer patterns and are thus more likely to belong to one category. Patterns are selected similarly, however patterns may also be selected by different categories in later iterations. Curran et al. (2007) introduced Mutual Exclu1Komachi et al. (2008) used graph-based algorithms to reduce semantic drift for Word Sense Disambiguation. sion Bootstrapping (MEB) which forces stricter boundaries between the competing categories than BASILISK. In MEB, the key assumptions are that terms only belong to a category and that patterns only extract terms of a single category. Semantic drift is reduced by eliminating patterns that collide with multiple categories in an iteration and by ignoring colliding candidate terms (for the current iteration). This excludes generic patterns that can occur frequently with multiple categories, and reduces the chance of assigning ambiguous terms to their less dominant sense. 
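To make the mutual exclusion step concrete, a minimal, self-contained sketch of one such iteration is given below. This is an illustration only, not the original implementation: the corpus is reduced to (pattern, term) pairs, plain co-occurrence counts stand in for the scoring metrics actually used by MLB, BASILISK and MEB, and all function and variable names are invented here.

from collections import Counter

def meb_iteration(lexicons, corpus_pairs, k=5, n=5):
    """lexicons: {category: set of terms}; corpus_pairs: list of (pattern, term) tuples."""
    # 1. Each category proposes its top-k patterns, scored here simply by how
    #    many of its current lexicon terms the pattern co-occurs with.
    proposed = {}
    for cat, lex in lexicons.items():
        counts = Counter(p for p, t in corpus_pairs if t in lex)
        proposed[cat] = [p for p, _ in counts.most_common(k)]

    # 2. Mutual exclusion: discard patterns proposed by more than one category.
    pattern_votes = Counter(p for pats in proposed.values() for p in pats)
    selected = {cat: [p for p in pats if pattern_votes[p] == 1]
                for cat, pats in proposed.items()}

    # 3. Extract candidate terms with the surviving patterns.
    candidates = {}
    for cat, pats in selected.items():
        candidates[cat] = Counter(t for p, t in corpus_pairs
                                  if p in pats and t not in lexicons[cat])

    # 4. Ignore terms that collide across categories in this iteration, then
    #    grow each lexicon by its n best remaining candidates.
    term_votes = Counter(t for c in candidates.values() for t in c)
    for cat, counts in candidates.items():
        kept = [t for t in counts if term_votes[t] == 1]
        for t in sorted(kept, key=lambda term: -counts[term])[:n]:
            lexicons[cat].add(t)
    return selected

BASILISK, MEB and WMEB differ mainly in the scoring used in steps 1 and 4 and in how colliding candidates are treated; the next subsection describes the weighted variant, WMEB.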
2.1 Weighted MEB The scoring of candidate terms and patterns in MEB is na¨ıve. Candidates which 1) match the most input instances; and 2) have the potential to generate the most new candidates, are preferred (Curran et al., 2007). This second criterion aims to increase recall. However, the selected instances are highly likely to introduce drift. Our Weighted MEB algorithm (McIntosh and Curran, 2008), extends MEB by incorporating term and pattern weighting, and a cumulative pattern pool. WMEB uses the χ2 statistic to identify patterns and terms that are strongly associated with the growing lexicon terms and their patterns respectively. The terms and patterns are then ranked first by the number of input instances they match (as in MEB), but then by their weighted score. In MEB and BASILISK2, the top-k patterns for each iteration are used to extract new candidate terms. As the lexicons grow, general patterns can drift into the top-k and as a result the earlier precise patterns lose their extracting influence. In WMEB, the pattern pool accumulates all top-k patterns from previous iterations, to ensure previous patterns can contribute. 2.2 Distributional Similarity Distributional similarity has been used to extract semantic lexicons (Grefenstette, 1994), based on the distributional hypothesis that semantically similar words appear in similar contexts (Harris, 1954). Words are represented by context vectors, and words are considered similar if their context vectors are similar. Patterns and distributional methods have been combined previously. Pantel and Ravichandran 2In BASILISK, k is increased by one in each iteration, to ensure at least one new pattern is introduced. 397 TYPE (#) MEDLINE Terms 1 347 002 Contexts 4 090 412 5-grams 72 796 760 Unfiltered tokens 6 642 802 776 Table 1: Filtered 5-gram dataset statistics. (2004) used lexical-syntactic patterns to label clusters of distributionally similar terms. Mirkin et al. (2006) used 11 patterns, and the distributional similarity score of each pair of terms, to construct features for lexical entailment. Pas¸ca et al. (2006) used distributional similarity to find similar terms for verifying the names in date-of-birth facts for their tera-scale bootstrapping system. 2.3 Selecting seeds For the majority of bootstrapping tasks, there is little or no guidance on how to select seeds which will generate the most accurate lexicons. Most previous works used seeds selected based on a user’s or domain expert’s intuition (Curran et al., 2007), which may then have to meet a frequency criterion (Riloff et al., 2003). Eisner and Karakos (2005) focus on this issue by considering an approach called strapping for word sense disambiguation. In strapping, semisupervised bootstrapping instances are used to train a meta-classifier, which given a bootstrapping instance can predict the usefulness (fertility) of its seeds. The most fertile seeds can then be used in place of hand-picked seeds. The design of a strapping algorithm is more complex than that of a supervised learner (Eisner and Karakos, 2005), and it is unclear how well strapping will generalise to other bootstrapping tasks. In our work, we build upon bootstrapping using unsupervised approaches. 3 Experimental setup In our experiments we consider the task of extracting biomedical semantic lexicons from raw text using BASILISK and WMEB. 3.1 Data We compared the performance of BASILISK and WMEB using 5-grams (t1, t2, t3, t4, t5) from raw MEDLINE abstracts3. 
In our experiments, the candidate terms are the middle tokens (t3), and the patterns are a tuple of the surrounding tokens (t1, 3The set contains all MEDLINE abstracts available up to Oct 2007 (16 140 000 abstracts). CAT DESCRIPTION ANTI Antibodies: Immunoglobulin molecules that react with a specific antigen that induced its synthesis MAb IgG IgM rituximab infliximab (κ1:0.89, κ2:1.0) CELL Cells: A morphological or functional form of a cell RBC HUVEC BAEC VSMC SMC (κ1:0.91, κ2:1.0) CLNE Cell lines: A population of cells that are totally derived from a single common ancestor cell PC12 CHO HeLa Jurkat COS (κ1:0.93, κ2: 1.0) DISE Diseases: A definite pathological process that affects humans, animals and or plants asthma hepatitis tuberculosis HIV malaria (κ1:0.98, κ2:1.0) DRUG Drugs: A pharmaceutical preparation acetylcholine carbachol heparin penicillin tetracyclin (κ1:0.86, κ2:0.99) FUNC Molecular functions and processes kinase ligase acetyltransferase helicase binding (κ1:0.87, κ2:0.99) MUTN Mutations: Gene and protein mutations, and mutants Leiden C677T C282Y 35delG null (κ1:0.89, κ2:1.0) PROT Proteins and genes p53 actin collagen albumin IL-6 (κ1:0.99, κ2:1.0) SIGN Signs and symptoms of diseases anemia hypertension hyperglycemia fever cough (κ1:0.96, κ2:0.99) TUMR Tumors: Types of tumors lymphoma sarcoma melanoma neuroblastoma osteosarcoma (κ1:0.89, κ2:0.95) Table 2: The MEDLINE semantic categories. t2, t4, t5). Unlike Riloff and Jones (1999) and Yangarber (2003), we do not use syntactic knowledge, as we aim to take a language independent approach. The 5-grams were extracted from the MEDLINE abstracts following McIntosh and Curran (2008). The abstracts were tokenised and split into sentences using bio-specific NLP tools (Grover et al., 2006). The 5-grams were filtered to remove patterns appearing with less than 7 terms4. The statistics of the resulting dataset are shown in Table 1. 3.2 Semantic Categories The semantic categories we extract from MEDLINE are shown in Table 2. These are a subset of the TREC Genomics 2007 entities (Hersh et al., 2007). Categories which are predominately multiterm entities, e.g. Pathways and Toxicities, were excluded.5 Genes and Proteins were merged into PROT as they have a high degree of metonymy, particularly out of context. The Cell or Tissue Type category was split into two fine grained classes, CELL and CLNE (cell line). 4This frequency was selected as it resulted in the largest number of patterns and terms loadable by BASILISK 5Note that polysemous terms in these categories may be correctly extracted by another category. For example, all Pathways also belong to FUNC. 398 The five hand-picked seeds used for each category are shown in italics in Table 2. These were carefully chosen based on the evaluators’ intuition, and are as unambiguous as possible with respect to the other categories. We also utilised terms in stop categories which are known to cause semantic drift in specific classes. These extra categories bound the lexical space and reduce ambiguity (Yangarber, 2003; Curran et al., 2007). We used four stop categories introduced in McIntosh and Curran (2008): AMINO ACID, ANIMAL, BODY and ORGANISM. 3.3 Lexicon evaluation The evaluation involves manually inspecting each extracted term and judging whether it was a member of the semantic class. This manual evaluation is extremely time consuming and is necessary due to the limited coverage of biomedical resources. 
To make later evaluations more efficient, all evaluators’ decisions for each category are cached. Unfamiliar terms were checked using online resources including MEDLINE, Medical Subject Headings (MeSH), Wikipedia. Each ambiguous term was counted as correct if it was classified into one of its correct categories, such as lymphoma which is a TUMR and DISE. If a term was unambiguously part of a multi-word term we considered it correct. Abbreviations, acronyms and typographical variations were included. We also considered obvious spelling mistakes to be correct, such as nuetrophils instead of neutrophils (a type of CELL). Non-specific modifiers are marked as incorrect, for example, gastrointestinal may be incorrectly extracted for TUMR, as part of the entity gastrointestinal carcinoma. However, the modifier may also be used for DISE (gastrointestinal infection) and CELL. The terms were evaluated by two domain experts. Inter-annotator agreement was measured on the top-100 terms extracted by BASILISK and WMEB with the hand-picked seeds for each category. All disagreements were discussed, and the kappa scores, before (κ1) and after (κ2) the discussions, are shown in Table 2. Each score is above 0.8 which reflects an agreement strength of “almost perfect” (Landis and Koch, 1977). For comparing the accuracy of the systems we evaluated the precision of samples of the lexicons extracted for each category. We report average precision over the 10 semantic categories on the 1-200, 401-600 and 801-1000 term samples, and over the first 1000 terms. In each algorithm, each category is initialised with 5 seed terms, and the number of patterns, k, is set to 5. In each iteration, 5 lexicon terms are extracted by each category. Each algorithm is run for 200 iterations. 4 Seed diversity The first step in bootstrapping is to select a set of seeds by hand. These hand-picked seeds are typically chosen by a domain expert who selects a reasonably unambiguous representative sample of the category with high coverage by introspection. To improve the seeds, the frequency of the potential seeds in the corpora is often considered, on the assumption that highly frequent seeds are better (Thelen and Riloff, 2002). Unfortunately, these seeds may be too general and extract many nonspecific patterns. Another approach is to identify seeds using hyponym patterns like, * is a [NAMED ENTITY] (Meij and Katrenko, 2007). This leads us to our first investigation of seed variability and the methodology used to compare bootstrapping algorithms. Typically algorithms are compared using one set of hand-picked seeds for each category (Pennacchiotti and Pantel, 2006; McIntosh and Curran, 2008). This approach does not provide a fair comparison or any detailed analysis of the algorithms under investigation. As we shall see, it is possible that the seeds achieve the maximum precision for one algorithm and the minimum for another, and thus the single comparison is inappropriate. Even evaluating on multiple categories does not ensure the robustness of the evaluation. Secondly, it provides no insight into the sensitivity of an algorithm to different seeds. 4.1 Analysis with random gold seeds Our initial analysis investigated the sensitivity and variability of the lexicons generated using different seeds. We instantiated each algorithm 10 times with different random gold seeds (Sgold) for each category. We randomly sample Sgold from two sets of correct terms extracted from the evaluation cache. 
UNION: the correct terms extracted by BASILISK and WMEB; and UNIQUE: the correct terms uniquely identified by only one algorithm. The degree of ambiguity of each seed is unknown and term frequency is not considered during the random selection. Firstly, we investigated the variability of the 399 50 60 70 80 90 50 60 70 80 90 100 BASILISK (precision) WMEB (precision) Hand-picked Average Figure 1: Performance relationship between WMEB and BASILISK on Sgold UNION extracted lexicons using UNION. Each extracted lexicon was compared with the other 9 lexicons for each category and the term overlap calculated. For the top 100 terms, BASILISK had an overlap of 18% and WMEB 44%. For the top 500 terms, BASILISK had an overlap of 39% and WMEB 47%. Clearly BASILISK is far more sensitive to the choice of seeds – this also makes the cache a lot less valuable for the manual evaluation of BASILISK. These results match our annotators’ intuition that BASILISK retrieved far more of the esoteric, rare and misspelt results. The overlap between algorithms was even worse: 6.3% for the top 100 terms and 9.1% for the top 500 terms. The plot in Figure 1 shows the variation in precision between WMEB and BASILISK with the 10 seed sets from UNION. Precision is measured on the first 100 terms and averaged over the 10 categories. The Shand is marked with a square, as well as each algorithms’ average precision with 1 standard deviation (S.D.) error bars. The axes start at 50% precision. Visually, the scatter is quite obvious and the S.D. quite large. Note that on our Shand evaluation, BASILISK performed significantly better than average. We applied a linear regression analysis to identify any correlation between the algorithm’s performances. The resulting regression line is shown in Figure 1. The regression analysis identified no correlation between WMEB and BASILISK (R2 = 0.13). It is almost impossible to predict the performance of an algorithm with a given set of seeds from another’s performance, and thus comparisons using only one seed set are unreliable. Table 3 summarises the results on Sgold, including the minimum and maximum averages over the 10 categories. At only 100 terms, lexicon Sgold Shand Avg. Min. Max. S.D. UNION BASILISK 80.5 68.3 58.3 78.8 7.31 WMEB 88.1 87.1 79.3 93.5 5.97 UNIQUE BASILISK 80.5 67.1 56.7 83.5 9.75 WMEB 88.1 91.6 82.4 95.4 3.71 Table 3: Variation in precision with random gold seed sets variations are already obvious. As noted above, Shand on BASILISK performed better than average, whereas WMEB Sgold UNIQUE performed significantly better on average than Shand. This clearly indicates the difficulty of picking the best seeds for an algorithm, and that comparing algorithms with only one set has the potential to penalise an algorithm. These results do show that WMEB is significantly better than BASILISK. In the UNIQUE experiments, we hypothesized that each algorithm would perform well on its own set, but BASILISK performs significantly worse than WMEB, with a S.D. greater than 9.7. BASILISK’s poor performance may be a direct result of it preferring low frequency terms, which are unlikely to be good seeds. These experiments have identified previously unreported performance variations of these systems and their sensitivity to different seeds. The standard evaluation paradigm, using one set of hand-picked seeds over a few categories, does not provide a robust and informative basis for comparing bootstrapping algorithms. 
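The overlap figures reported in this analysis are straightforward to compute; the sketch below shows one way to do so and is purely illustrative (the function name and the assumption that each lexicon is a ranked list of terms are ours, not the authors').

from itertools import combinations

def average_overlap(lexicons, cutoff=100):
    """Average pairwise term overlap between lexicons bootstrapped from
    different random seed sets, restricted to the top `cutoff` terms."""
    tops = [set(lex[:cutoff]) for lex in lexicons]
    overlaps = [len(a & b) / float(cutoff) for a, b in combinations(tops, 2)]
    return sum(overlaps) / len(overlaps)

Low overlap between runs, such as BASILISK's 18% at 100 terms, is exactly what makes comparisons based on a single seed set unreliable.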
5 Supervised Bagging While the wide variation we reported in the previous section is an impediment to reliable evaluation, it presents an opportunity to improve the performance of bootstrapping algorithms. In the next section, we present a novel unsupervised bagging approach to reducing semantic drift. In this section, we consider the standard bagging approach introduced by Breiman (1996). Bagging was used by Ng and Cardie (2003) to create committees of classifiers for labelling unseen data for retraining. Here, a bootstrapping algorithm is instantiated n = 50 times with random seed sets selected from the UNION evaluation cache. This generates n new lexicons L1, L2, . . . , Ln for each category. The next phase involves aggregating the predictions in L1−n to form the final lexicon for each category, using a weighted voting function. 400 1-200 401-600 801-1000 1-1000 Shand BASILISK 76.3 67.8 58.3 66.7 WMEB 90.3 82.3 62.0 78.6 Sgold BAG BASILISK 84.2 80.2 58.2 78.2 WMEB 95.1 79.7 65.0 78.6 Table 4: Bagging with 50 gold seed sets Our weighting function is based on two related hypotheses of terms in highly accurate lexicons: 1) the more category lexicons in L1−n a term appears in, the more likely the term is a member of the category; 2) terms ranked higher in lexicons are more reliable category members. Firstly, we rank the aggregated terms by the number of lexicons they appear in, and to break ties, we take the term that was extracted in the earliest iteration across the lexicons. 5.1 Supervised results Table 4 compares the average precisions of the lexicons for BASILISK and WMEB using just the hand-picked seeds (Shand) and 50 sample supervised bagging (Sgold BAG). Bagging with samples from Sgold successfully increased the performance of both BASILISK and WMEB in the top 200 terms. While the improvement continued for BASILISK in later sections, it had a more variable effect for WMEB. Overall, BASILISK gets the greater improvement in performance (a 12% gain), almost reaching the performance of WMEB across the top 1000 terms, while WMEB’s performance is the same for both Shand and Sgold BAG. We believe the greater variability in BASILISK meant it benefited from bagging with gold seeds. 6 Unsupervised bagging A significant problem for supervised bagging approaches is that they require a larger set of goldstandard seed terms to sample from – either an existing gazetteer or a large hand-picked set. In our case, we used the evaluation cache which took considerable time to accumulate. This saddles the major application of bootstrapping, the quick construction of accurate semantic lexicons, with a chicken-and-egg problem. However, we propose a novel solution – sampling from the terms extracted with the handpicked seeds (Lhand). WMEB already has very high precision for the top extracted terms (88.1% BAGGING 1-200 401-600 801-1000 1-1000 Top-100 BASILISK 72.3 63.5 58.8 65.1 WMEB 90.2 78.5 66.3 78.5 Top-200 BASILISK 70.7 60.7 45.5 59.8 WMEB 91.0 78.4 62.2 77.0 Top-500 BASILISK 63.5 60.5 45.4 56.3 WMEB 92.5 80.9 59.1 77.2 PDF-500 BASILISK 69.6 68.3 49.6 62.3 WMEB 92.9 80.7 72.1 81.0 Table 5: Bagging with 50 unsupervised seed sets for the top 100 terms) and may provide an acceptable source of seed terms. This approach now only requires the original 50 hand-picked seed terms across the 10 categories, rather than the 2100 terms used above. The process now uses two rounds of bootstrapping: first to create Lhand to sample from and then another round with the 50 sets of randomly unsupervised seeds, Srand. 
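In both the supervised and the unsupervised setting, the sampled lexicons are combined with the vote-based ranking described in Section 5. A minimal sketch of that aggregation is given below; it assumes each input lexicon is an ordered list of extracted terms, and the names are ours rather than the authors'.

def aggregate_lexicons(lexicons):
    """Rank terms by how many sampled lexicons they appear in; break ties by
    the earliest iteration (rank) at which a term was extracted."""
    votes, best_rank = {}, {}
    for lex in lexicons:
        for rank, term in enumerate(lex):
            votes[term] = votes.get(term, 0) + 1
            best_rank[term] = min(best_rank.get(term, rank), rank)
    return sorted(votes, key=lambda t: (-votes[t], best_rank[t]))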
The next decision is how to sample Srand from Lhand. One approach is to use uniform random sampling from restricted sections of Lhand. We performed random sampling from the top 100, 200 and 500 terms of Lhand. The seeds from the smaller samples will have higher precision, but less diversity. In a truly unsupervised approach, it is impossible to know if and when semantic drift occurs, and thus using arbitrary cut-offs can reduce the diversity of the selected seeds. To increase diversity, we also sampled from the top n=500 terms using a probability density function (PDF) with rejection sampling, where r is the rank of the term in Lhand:

PDF(r) = (Σ_{i=r}^{n} i^{-1}) / (Σ_{i=1}^{n} Σ_{j=i}^{n} j^{-1})    (1)

6.1 Unsupervised results
Table 5 shows the average precision of the lexicons after bagging on the unsupervised seeds, sampled from the top 100–500 terms of Lhand. Using the top 100 seed sample is much less effective than Sgold BAG for BASILISK, but nearly as effective for WMEB. As the sample size increases, WMEB steadily improves with the increasing variability; however, BASILISK is more effective when the more precise seeds are sampled from higher-ranking terms in the lexicons. Sampling with PDF-500 results in more accurate lexicons over the first 1000 terms than the other sampling methods for WMEB.

Figure 2: Semantic drift in CELL (n=20, m=20); drift score plotted against the number of terms extracted, for correct and incorrect terms.

In particular, WMEB is more accurate with the unsupervised seeds than with Sgold and Shand (81.0% vs 78.6% and 78.6%). WMEB benefits from the larger variability introduced by the more diverse sets of seeds, and the greater variability available out-weighs the potential noise from incorrect seeds. The PDF-500 distribution allows some variability whilst still preferring the most reliable unsupervised seeds. In the critical later iterations, WMEB PDF-500 improves over supervised bagging (Sgold BAG) by 7% and over the original hand-picked seeds (Shand) by 10%.

7 Detecting semantic drift
As shown above, semantic drift still dominates the later iterations of bootstrapping even after bagging. In this section, we propose distributional similarity measurements over the extracted lexicon to detect semantic drift during the bootstrapping process. Our hypothesis is that semantic drift has occurred when a candidate term is more similar to recently added terms than to the seed and high-precision terms added in the earlier iterations. Given a growing lexicon of size N, LN, let L1...n correspond to the first n terms extracted into L, and L(N−m)...N correspond to the last m terms added to LN; we experiment with a range of values for both n and m. In an iteration, let t be the next candidate term to be added to the lexicon. We calculate the average distributional similarity (sim) of t with all terms in L1...n and with those in L(N−m)...N, and call the ratio of the two the drift for term t:

drift(t, n, m) = sim(L1...n, t) / sim(L(N−m)...N, t)    (2)

Smaller values of drift(t, n, m) correspond to the current term moving further away from the first terms. A drift(t, n, m) of 0.2 corresponds to a 20% difference in average similarity between L1...n and L(N−m)...N for term t. Drift can be used as a post-processing step to filter terms that are a possible consequence of drift. However, our main proposal is to incorporate the drift measure directly within the WMEB bootstrapping algorithm, to detect and then prevent drift occurring.
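As a concrete reading of Equation 2, the sketch below computes the drift ratio for a candidate term. It is illustrative only: sim is assumed to be any pairwise distributional similarity function (the implementation described later uses t-test weights and the weighted Jaccard measure over context vectors), the defaults n=100 and m=5 are the best-performing values reported below, and the zero-similarity special case follows the selection rule given in the next paragraph.

def drift(term, lexicon, sim, n=100, m=5):
    """drift(t, n, m): average similarity to the first n lexicon terms divided
    by average similarity to the last m terms."""
    head, tail = lexicon[:n], lexicon[-m:]
    sim_head = sum(sim(term, x) for x in head) / len(head)
    sim_tail = sum(sim(term, x) for x in tail) / len(tail)
    if sim_tail == 0.0:
        # No similarity to the recent terms: keep the candidate only if it
        # still resembles at least one of the earliest, high-precision terms.
        return float('inf') if sim_head > 0.0 else 0.0
    return sim_head / sim_tail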
In each iteration, the set of candidate terms to be added to the lexicon are scored and ranked for their suitability. We now additionally determine the drift of each candidate term before it is added to the lexicon. If the term’s drift is below a specified threshold, it is discarded from the extraction process. If the term has zero similarity with the last m terms, but is similar to at least one of the first n terms, the term is selected. Preventing the drifted term from entering the lexicon during the bootstrapping process, has a flow on effect as it will not be able to extract additional divergent patterns which would lead to accelerated drift. For calculating drift we use the distributional similarity approach described in Curran (2004). We extracted window-based features from the filtered 5-grams to form context vectors for each term. We used the standard t-test weight and weighted Jaccard measure functions (Curran, 2004). This system produces a distributional score for each pair of terms presented by the bootstrapping system. 7.1 Drift detection results To evaluate our semantic drift detection we incorporate our process in WMEB. Candidate terms are still weighted in WMEB using the χ2 statistic as described in (McIntosh and Curran, 2008). Many of the MEDLINE categories suffer from semantic drift in WMEB in the later stages. Figure 2 shows the distribution of correct and incorrect terms appearing in the CELL lexicon extracted using Shand with the term’s ranks plotted against their drift scores. Firstly, it is evident that incorrect terms begin to dominate in later iterations. Encouragingly, there is a trend where low values of drift correspond to incorrect terms being added. Drift also occurs in ANTI and MUTN, with an average precision at 8011000 terms of 41.5% and 33.0% respectively. We utilise drift in two ways with WMEB; as a post-processing filter (WMEB+POST) and internally during the term selection phase (WMEB+DIST). Table 6 shows the performance 402 1-200 401-600 801-1000 1000 WMEB 90.3 82.3 62.0 78.6 WMEB+POST n:20 m:5 90.3 82.3 62.1 78.6 n:20 m:20 90.3 81.5 62.0 76.9 n:100 m:5 90.2 82.3 62.1 78.6 n:100 m:20 90.3 82.1 62.1 78.1 WMEB+DIST n:20 m:5 90.8 79.7 72.1 80.2 n:20 m:20 90.6 80.1 76.3 81.4 n:100 m:5 90.5 82.0 79.3 82.8 n:100 m:20 90.5 81.5 77.5 81.9 Table 6: Semantic drift detection results of drift detection with WMEB, using Shand. We use a drift threshold of 0.2 which was selected empirically. A higher value substantially reduced the lexicons’ size, while a lower value resulted in little improvements. We experimented with various sizes of initial terms L1...n (n=20, n=100) and L(N−m)...N (m=5, m=20). There is little performance variation observed in the various WMEB+POST experiments. Overall, WMEB+POST was outperformed slightly by WMEB. The post-filtering removed many incorrect terms, but did not address the underlying drift problem. This only allowed additional incorrect terms to enter the top 1000, resulting in no appreciable difference. Slight variations in precision are obtained using WMEB+DIST in the first 600 terms, but noticeable gains are achieved in the 801-1000 range. This is not surprising as drift in many categories does not start until later (cf. Figure 2). With respect to the drift parameters n and m, we found values of n below 20 to be inadequate. We experimented initially with n=5 terms, but this is equivalent to comparing the new candidate terms to the initial seeds. Setting m to 5 was also less useful than a larger sample, unless n was also large. 
The best performance gain of 4.2% overall for 1000 terms and 17.3% at 801-1000 terms was obtained using n=100 and m=5. In different phases of WMEB+DIST we reduce semantic drift significantly. In particular, at 801-1000, ANTI increase by 46% to 87.5% and MUTN by 59% to 92.0%. For our final experiments, we report the performance of our best performing WMEB+DIST system (n=100 m=5) using the 10 random GOLD seed sets from section 4.1, in Table 7. On average WMEB+DIST performs above WMEB, especially in the later iterations where the difference is 6.3%. Shand Avg. Min. Max. S.D. 1-200 WMEB 90.3 82.2 73.3 91.5 6.43 WMEB+DIST 90.7 84.8 78.0 91.0 4.61 401-600 WMEB 82.3 66.8 61.4 74.5 4.67 WMEB+DIST 82.0 73.1 65.2 79.3 4.52 Table 7: Final accuracy with drift detection 8 Conclusion In this paper, we have proposed unsupervised bagging and integrated distributional similarity to minimise the problem of semantic drift in iterative bootstrapping algorithms, particularly when extracting large semantic lexicons. There are a number of avenues that require further examination. Firstly, we would like to take our two-round unsupervised bagging further by performing another iteration of sampling and then bootstrapping, to see if we can get a further improvement. Secondly, we also intend to experiment with machine learning methods for identifying the correct cutoff for the drift score. Finally, we intend to combine the bagging and distributional approaches to further improve the lexicons. Our initial analysis demonstrated that the output and accuracy of bootstrapping systems can be very sensitive to the choice of seed terms and therefore robust evaluation requires results averaged across randomised seed sets. We exploited this variability to create both supervised and unsupervised bagging algorithms. The latter requires no more seeds than the original algorithm but performs significantly better and more reliably in later iterations. Finally, we incorporated distributional similarity measurements directly into WMEB which detect and censor terms which could lead to semantic drift. This approach significantly outperformed standard WMEB, with a 17.3% improvement over the last 200 terms extracted (801-1000). The result is an efficient, reliable and accurate system for extracting large-scale semantic lexicons. Acknowledgments We would like to thank Dr Cassie Thornley, our second evaluator who also helped with the evaluation guidelines; and the anonymous reviewers for their helpful feedback. This work was supported by the CSIRO ICT Centre and the Australian Research Council under Discovery project DP0665973. 403 References Leo Breiman. 1996. Bagging predictors. Machine Learning, 26(2):123–140. James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with mutual exclusion bootstrapping. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics, pages 172–180, Melbourne, Australia. James R. Curran. 2004. From Distributional to Semantic Similarity. Ph.D. thesis, University of Edinburgh. Jason Eisner and Damianos Karakos. 2005. Bootstrapping without the boot. In Proceedings of the Conference on Human Language Technology and Conference on Empirical Methods in Natural Language Processing, pages 395– 402, Vancouver, British Columbia, Canada. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, USA. Claire Grover, Michael Matthews, and Richard Tobin. 2006. 
Tools to address the interdependence between tokenisation and standoff annotation. In Proceedings of the Multi-dimensional Markup in Natural Language Processing Workshop, Trento, Italy. Zellig Harris. 1954. Distributional structure. Word, 10(2/3):146–162. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics, pages 539–545, Nantes, France. William Hersh, Aaron M. Cohen, Lynn Ruslen, and Phoebe M. Roberts. 2007. TREC 2007 Genomics Track Overview. In Proceedings of the 16th Text REtrieval Conference, Gaithersburg, MD, USA. Mamoru Komachi, Taku Kudo, Masashi Shimbo, and Yuji Matsumoto. 2008. Graph-based analysis of semantic drift in Espresso-like bootstrapping algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1011–1020, Honolulu, USA. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement in categorical data. Biometrics, 33(1):159–174. Tara McIntosh and James R. Curran. 2008. Weighted mutual exclusion bootstrapping for domain independent lexicon and template acquisition. In Proceedings of the Australasian Language Technology Association Workshop, pages 97–105, Hobart, Australia. Edgar Meij and Sophia Katrenko. 2007. Bootstrapping language associated with biomedical entities. The AID group at TREC Genomics 2007. In Proceedings of The 16th Text REtrieval Conference, Gaithersburg, MD, USA. Shachar Mirkin, Ido Dagan, and Maayan Geffet. 2006. Integrating pattern-based and distributional similarity methods for lexical entailment acquistion. In Proceedings of the 21st International Conference on Computational Linguisitics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 579–586, Sydney, Australia. Vincent Ng and Claire Cardie. 2003. Weakly supervised natural language learning without redundant views. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 94–101, Edmonton, USA. Marius Pas¸ca, Dekang Lin, Jeffrey Bigham, Andrei Lifchits, and Alpa Jain. 2006. Names and similarities on the web: Fact extraction in the fast lane. In Proceedings of the 21st International Conference on Computational Linguisitics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 809–816, Sydney, Australia. Patrick Pantel and Deepak Ravichandran. 2004. Automatically labelling semantic classes. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 321–328, Boston, MA, USA. Marco Pennacchiotti and Patrick Pantel. 2006. A bootstrapping algorithm for automatically harvesting semantic relations. In Proceedings of Inference in Computational Semantics (ICoS-06), pages 87–96, Buxton, England. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence and the 11th Innovative Applications of Artificial Intelligence Conference, pages 474–479, Orlando, FL, USA. Ellen Riloff, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the Seventh Conference on Natural Language Learning (CoNLL-2003), pages 25–32. Michael Thelen and Ellen Riloff. 2002. 
A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 214–221, Philadelphia, USA. Xiaofeng Yang and Jian Su. 2007. Coreference resolution using semantic relatedness information from automatically discovered patterns. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 528–535, Prague, Czech Republic. Roman Yangarber. 2003. Counter-training in discovery of semantic patterns. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 343–350, Sapporo, Japan. Hong Yu and Eugene Agichtein. 2003. Extracting synonymous gene and protein terms from biological literature. Bioinformatics, 19(1):i340–i349. 404
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 405–413, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Jointly Identifying Temporal Relations with Markov Logic Katsumasa Yoshikawa NAIST, Japan [email protected] Sebastian Riedel University of Tokyo, Japan [email protected] Masayuki Asahara NAIST, Japan [email protected] Yuji Matsumoto NAIST, Japan [email protected] Abstract Recent work on temporal relation identification has focused on three types of relations between events: temporal relations between an event and a time expression, between a pair of events and between an event and the document creation time. These types of relations have mostly been identified in isolation by event pairwise comparison. However, this approach neglects logical constraints between temporal relations of different types that we believe to be helpful. We therefore propose a Markov Logic model that jointly identifies relations of all three relation types simultaneously. By evaluating our model on the TempEval data we show that this approach leads to about 2% higher accuracy for all three types of relations —and to the best results for the task when compared to those of other machine learning based systems. 1 Introduction Temporal relation identification (or temporal ordering) involves the prediction of temporal order between events and/or time expressions mentioned in text, as well as the relation between events in a document and the time at which the document was created. With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), a set of documents annotated with temporal information, it became possible to apply machine learning to temporal ordering (Boguraev and Ando, 2005; Mani et al., 2006). These tasks have been regarded as essential for complete document understanding and are useful for a wide range of NLP applications such as question answering and machine translation. Most of these approaches follow a simple schema: they learn classifiers that predict the temporal order of a given event pair based on a set of the pair’s of features. This approach is local in the sense that only a single temporal relation is considered at a time. Learning to predict temporal relations in this isolated manner has at least two advantages over any approach that considers several temporal relations jointly. First, it allows us to use off-the-shelf machine learning software that, up until now, has been mostly focused on the case of local classifiers. Second, it is computationally very efficient both in terms of training and testing. However, the local approach has a inherent drawback: it can lead to solutions that violate logical constraints we know to hold for any sets of temporal relations. For example, by classifying temporal relations in isolation we may predict that event A happened before, and event B after, the time of document creation, but also that event A happened after event B—a clear contradiction in terms of temporal logic. In order to repair the contradictions that the local classifier predicts, Chambers and Jurafsky (2008) proposed a global framework based on Integer Linear Programming (ILP). They showed that large improvements can be achieved by explicitly incorporating temporal constraints. The approach we propose in this paper is similar in spirit to that of Chambers and Jurafsky: we seek to improve the accuracy of temporal relation identification by predicting relations in a more global manner. 
However, while they focused only on the temporal relations between events mentioned in a document, we also jointly predict the temporal order between events and time expressions, and between events and the document creation time. Our work also differs in another important aspect from the approach of Chambers and Jurafsky. Instead of combining the output of a set of local classifiers using ILP, we approach the problem of joint temporal relation identification using Markov Logic (Richardson and Domingos, 2006). In this 405 framework global correlations can be readily captured through the addition of weighted first order logic formulae. Using Markov Logic instead of an ILP-based approach has at least two advantages. First, it allows us to easily capture non-deterministic (soft) rules that tend to hold between temporal relations but do not have to. 1 For example, if event A happens before B, and B overlaps with C, then there is a good chance that A also happens before C, but this is not guaranteed. Second, the amount of engineering required to build our system is similar to the efforts required for using an off-the-shelf classifier: we only need to define features (in terms of formulae) and provide input data in the correct format. 2 In particular, we do not need to manually construct ILPs for each document we encounter. Moreover, we can exploit and compare advanced methods of global inference and learning, as long as they are implemented in our Markov Logic interpreter of choice. Hence, in our future work we can focus entirely on temporal relations, as opposed to inference or learning techniques for machine learning. We evaluate our approach using the data of the “TempEval” challenge held at the SemEval 2007 Workshop (Verhagen et al., 2007). This challenge involved three tasks corresponding to three types of temporal relations: between events and time expressions in a sentence (Task A), between events of a document and the document creation time (Task B), and between events in two consecutive sentences (Task C). Our findings show that by incorporating global constraints that hold between temporal relations predicted in Tasks A, B and C, the accuracy for all three tasks can be improved significantly. In comparison to other participants of the “TempEval” challenge our approach is very competitive: for two out of the three tasks we achieve the best results reported so far, by a margin of at least 2%. 3 Only for Task B we were unable to reach the performance of a rule-based entry to the challenge. However, we do perform better than all pure machine 1It is clearly possible to incorporate weighted constraints into ILPs, but how to learn the corresponding weights is not obvious. 2This is not to say that picking the right formulae in Markov Logic, or features for local classification, is always easy. 3To be slightly more precise: for Task C we achieve this margin only for “strict” scoring—see sections 5 and 6 for more details. learning-based entries. The remainder of this paper is organized as follows: Section 2 describes temporal relation identification including TempEval; Section 3 introduces Markov Logic; Section 4 explains our proposed Markov Logic Network; Section 5 presents the setup of our experiments; Section 6 shows and discusses the results of our experiments; and in Section 7 we conclude and present ideas for future research. 
2 Temporal Relation Identification Temporal relation identification aims to predict the temporal order of events and/or time expressions in documents, as well as their relations to the document creation time (DCT). For example, consider the following (slightly simplified) sentence of Section 1 in this paper. With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), machine learning approaches to temporal ordering became possible. Here we have to predict that the “Machine learning becoming possible” event happened AFTER the “introduction of the TimeBank corpus” event, and that it has a temporal OVERLAP with the year 2003. Moreover, we need to determine that both events happened BEFORE the time this paper was created. Most previous work on temporal relation identification (Boguraev and Ando, 2005; Mani et al., 2006; Chambers and Jurafsky, 2008) is based on the TimeBank corpus. The temporal relations in the Timebank corpus are divided into 11 classes; 10 of them are defined by the following 5 relations and their inverse: BEFORE, IBEFORE (immediately before), BEGINS, ENDS, INCLUDES; the remaining one is SIMULTANEOUS. In order to drive forward research on temporal relation identification, the SemEval 2007 shared task (Verhagen et al., 2007) (TempEval) included the following three tasks. TASK A Temporal relations between events and time expressions that occur within the same sentence. TASK B Temporal relations between the Document Creation Time (DCT) and events. TASK C Temporal relations between the main events of adjacent sentences.4 4The main event of a sentence is expressed by its syntactically dominant verb. 406 To simplify matters, in the TempEval data, the classes of temporal relations were reduced from the original 11 to 6: BEFORE, OVERLAP, AFTER, BEFORE-OR-OVERLAP, OVERLAP-OR-AFTER, and VAGUE. In this work we are focusing on the three tasks of TempEval, and our running hypothesis is that they should be tackled jointly. That is, instead of learning separate probabilistic models for each task, we want to learn a single one for all three tasks. This allows us to incorporate rules of temporal consistency that should hold across tasks. For example, if an event X happens before DCT, and another event Y after DCT, then surely X should have happened before Y. We illustrate this type of transition rule in Figure 1. Note that the correct temporal ordering of events and time expressions can be controversial. For instance, consider the example sentence again. Here one could argue that “the introduction of the TimeBank” may OVERLAP with “Machine learning becoming possible” because “introduction” can be understood as a process that is not finished with the release of the data but also includes later advertisements and announcements. This is reflected by the low inter-annotator agreement score of 72% on Tasks A and B, and 68% on Task C. 3 Markov Logic It has long been clear that local classification alone cannot adequately solve all prediction problems we encounter in practice.5 This observation motivated a field within machine learning, often referred to as Statistical Relational Learning (SRL), which focuses on the incorporation of global correlations that hold between statistical variables (Getoor and Taskar, 2007). One particular SRL framework that has recently gained momentum as a platform for global learning and inference in AI is Markov Logic (Richardson and Domingos, 2006), a combination of firstorder logic and Markov Networks. 
It can be understood as a formalism that extends first-order logic to allow formulae that can be violated with some penalty. From an alternative point of view, it is an expressive template language that uses first order logic formulae to instantiate Markov Networks of repetitive structure. From a wide range of SRL languages we chose Markov Logic because it supports discriminative 5It can, however, solve a large number of problems surprisingly well. Figure 1: Example of Transition Rule 1 training (as opposed to generative SRL languages such as PRM (Koller, 1999)). Moreover, several Markov Logic software libraries exist and are freely available (as opposed to other discriminative frameworks such as Relational Markov Networks (Taskar et al., 2002)). In the following we will explain Markov Logic by example. One usually starts out with a set of predicates that model the decisions we need to make. For simplicity, let us assume that we only predict two types of decisions: whether an event e happens before the document creation time (DCT), and whether, for a pair of events e1 and e2, e1 happens before e2. Here the first type of decision can be modeled through a unary predicate beforeDCT(e), while the latter type can be represented by a binary predicate before(e1, e2). Both predicates will be referred to as hidden because we do not know their extensions at test time. We also introduce a set of observed predicates, representing information that is available at test time. For example, in our case we could introduce a predicate futureTense(e) which indicates that e is an event described in the future tense. With our predicates defined, we can now go on to incorporate our intuition about the task using weighted first-order logic formulae. For example, it seems reasonable to assume that futureTense (e) ⇒¬beforeDCT (e) (1) often, but not always, holds. Our remaining uncertainty with regard to this formula is captured by a weight w we associate with it. Generally we can say that the larger this weight is, the more likely/often the formula holds in the solutions described by our model. Note, however, that we do not need to manually pick these weights; instead they are learned from the given training corpus. The intuition behind the previous formula can also be captured using a local classifier.6 However, 6Consider a log-linear binary classifier with a “past-tense” 407 Markov Logic also allows us to say more: beforeDCT (e1) ∧¬beforeDCT (e2) ⇒before (e1, e2) (2) In this case, we made a statement about more global properties of a temporal ordering that cannot be captured with local classifiers. This formula is also an example of the transition rules as seen in Figure 2. This type of rule forms the core idea of our joint approach. A Markov Logic Network (MLN) M is a set of pairs (φ, w) where φ is a first order formula and w is a real number (the formula’s weight). It defines a probability distribution over sets of ground atoms, or so-called possible worlds, as follows: p (y) = 1 Z exp   ∑ (φ,w)∈M w ∑ c∈Cφ fφ c (y)   (3) Here each c is a binding of free variables in φ to constants in our domain. Each fφ c is a binary feature function that returns 1 if in the possible world y the ground formula we get by replacing the free variables in φ with the constants in c is true, and 0 otherwise. Cφ is the set of all bindings for the free variables in φ. Z is a normalisation constant. 
Note that this distribution corresponds to a Markov Network (the so-called Ground Markov Network) where nodes represent ground atoms and factors represent ground formulae. Designing formulae is only one part of the game. In practice, we also need to choose a training regime (in order to learn the weights of the formulae we added to the MLN) and a search/inference method that picks the most likely set of ground atoms (temporal relations in our case) given our trained MLN and a set of observations. However, implementations of these methods are often already provided in existing Markov Logic interpreters such as Alchemy 7 and Markov thebeast. 8 4 Proposed Markov Logic Network As stated before, our aim is to jointly tackle Tasks A, B and C of the TempEval challenge. In this section we introduce the Markov Logic Network we designed for this goal. We have three hidden predicates, corresponding to Tasks A, B, and C: relE2T(e, t, r) represents the temporal relation of class r between an event e feature: here for every event e the decision “e happens before DCT” becomes more likely with a higher weight for this feature. 7http://alchemy.cs.washington.edu/ 8http://code.google.com/p/thebeast/ Figure 2: Example of Transition Rule 2 and a time expression t; relDCT(e, r) denotes the temporal relation r between an event e and DCT; relE2E(e1, e2, r) represents the relation r between two events of the adjacent sentences, e1 and e2. Our observed predicates reflect information we were given (such as the words of a sentence), and additional information we extracted from the corpus (such as POS tags and parse trees). Note that the TempEval data also contained temporal relations that were not supposed to be predicted. These relations are represented using two observed predicates: relT2T(t1, t2, r) for the relation r between two time expressions t1 and t2; dctOrder(t, r) for the relation r between a time expression t and a fixed DCT. An illustration of all “temporal” predicates, both hidden and observed, can be seen in Figure 3. 4.1 Local Formula Our MLN is composed of several weighted formulae that we divide into two classes. The first class contains local formulae for the Tasks A, B and C. We say that a formula is local if it only considers the hidden temporal relation of a single event-event, event-time or event-DCT pair. The formulae in the second class are global: they involve two or more temporal relations at the same time, and consider Tasks A, B and C simultaneously. The local formulae are based on features employed in previous work (Cheng et al., 2007; Bethard and Martin, 2007) and are listed in Table 1. What follows is a simple example in order to illustrate how we implement each feature as a formula (or set of formulae). Consider the tense-feature for Task C. For this feature we first introduce a predicate tense(e, t) that denotes the tense t for an event e. Then we 408 Figure 3: Predicates for Joint Formulae; observed predicates are indicated with dashed lines. 
Table 1: Local Features Feature A B C EVENT-word X X EVENT-POS X X EVENT-stem X X EVENT-aspect X X X EVENT-tense X X X EVENT-class X X X EVENT-polarity X X TIMEX3-word X TIMEX3-POS X TIMEX3-value X TIMEX3-type X TIMEX3-DCT order X X positional order X in/outside X unigram(word) X X unigram(POS) X X bigram(POS) X trigram(POS) X X Dependency-Word X X X Dependency-POS X X add a set of formulae such as tense(e1, past) ∧tense(e2, future) ⇒relE2E(e1, e2, before) (4) for all possible combinations of tenses and temporal relations.9 4.2 Global Formula Our global formulae are designed to enforce consistency between the three hidden predicates (and the two observed temporal predicates we mentioned earlier). They are based on the transition 9This type of “template-based” formulae generation can be performed automatically by the Markov Logic Engine. rules we mentioned in Section 3. Table 2 shows the set of formula templates we use to generate the global formulae. Here each template produces several instantiations, one for each assignment of temporal relation classes to the variables R1, R2, etc. One example of a template instantiation is the following formula. dctOrder(t1, before) ∧relDCT(e1, after) ⇒relE2T(e1, t1, after) (5a) This formula is an expansion of the formula template in the second row of Table 2. Note that it utilizes the results of Task B to solve Task A. Formula 5a should always hold,10 and hence we could easily implement it as a hard constraint in an ILP-based framework. However, some transition rules are less determinstic and should rather be taken as “rules of thumb”. For example, formula 5b is a rule which we expect to hold often, but not always. dctOrder(t1, before) ∧relDCT(e1, overlap) ⇒relE2T(e1, t1, after) (5b) Fortunately, this type of soft rule poses no problem for Markov Logic: after training, Formula 5b will simply have a lower weight than Formula 5a. By contrast, in a “Local Classifier + ILP”-based approach as followed by Chambers and Jurafsky (2008) it is less clear how to proceed in the case of soft rules. Surely it is possible to incorporate weighted constraints into ILPs, but how to learn the corresponding weights is not obvious. 5 Experimental Setup With our experiments we want to answer two questions: (1) does jointly tackling Tasks A, B, and C help to increase overall accuracy of temporal relation identification? (2) How does our approach compare to state-of-the-art results? In the following we will present the experimental set-up we chose to answer these questions. In our experiments we use the test and training sets provided by the TempEval shared task. We further split the original training data into a training and a development set, used for optimizing parameters and formulae. For brevity we will refer to the training, development and test set as TRAIN, DEV and TEST, respectively. The numbers of temporal relations in TRAIN, DEV, and TEST are summarized in Table 3. 10However, due to inconsistent annotations one will find violations of this rule in the TempEval data. 
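Each template in Table 2 (below) expands into one weighted formula per assignment of relation classes to the placeholders R1, R2, and so on. The following hedged Python sketch illustrates this expansion for the B→A template behind Formulae (5a) and (5b); the formula-string syntax is illustrative and is not the input format of any particular Markov Logic engine.

```python
# Hedged sketch of template instantiation: the B -> A template
#   dctOrder(t, R1) ^ relDCT(e, R2) => relE2T(e, t, R3)
# is expanded over assignments of the six reduced TempEval classes.
from itertools import product

CLASSES = ["before", "overlap", "after",
           "before-or-overlap", "overlap-or-after", "vague"]

def expand_b_to_a_template():
    for r1, r2, r3 in product(CLASSES, repeat=3):
        yield f"dctOrder(t, {r1}) ^ relDCT(e, {r2}) => relE2T(e, t, {r3})"

formulas = list(expand_b_to_a_template())
print(len(formulas))   # 6^3 = 216 weighted formulae from this single template

# Formula (5a) is one such instantiation; its learned weight should end up higher
# than that of softer "rules of thumb" such as Formula (5b).
```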
409 Table 2: Joint Formulae for Global Model Task Formula A →B dctOrder(t, R1) ∧ relE2T(e, t, R2) ⇒relDCT(e, R3) B →A dctOrder(t, R1) ∧ relDCT(e, R2) ⇒relE2T(e, t, R3) B →C relDCT(e1, R1) ∧ relDCT(e2, R2) ⇒relE2E(e1, e2, R3) C →B relDCT(e1, R1) ∧ relE2E(e1, e2, R2) ⇒relDCT(e2, R3) A →C relE2T(e1, t1, R1) ∧ relT2T(t1, t2, R2) ∧ relE2T(e2, t2, R3) ⇒relE2E(e1, e2, R4) C →A relE2T(e2, t2, R1) ∧ relT2T(t1, t2, R2) ∧ relE2E(e1, e2, R3) ⇒relE2T(e1, t1, R4) Table 3: Numbers of Labeled Relations for All Tasks TRAIN DEV TEST TOTAL Task A 1359 131 169 1659 Task B 2330 227 331 2888 Task C 1597 147 258 2002 For feature generation we use the following tools. 11 POS tagging is performed with TnT ver2.2;12 for our dependency-based features we use MaltParser 1.0.0.13 For inference in our models we use Cutting Plane Inference (Riedel, 2008) with ILP as a base solver. This type of inference is exact and often very fast because it avoids instantiation of the complete Markov Network. For learning we apply one-best MIRA (Crammer and Singer, 2003) with Cutting Plane Inference to find the current model guess. Both training and inference algorithms are provided by Markov thebeast, a Markov Logic interpreter tailored for NLP applications. Note that there are several ways to manually optimize the set of formulae to use. One way is to pick a task and then choose formulae that increase the accuracy for this task on DEV. However, our primary goal is to improve the performance of all the tasks together. Hence we choose formulae with respect to the total score over all three tasks. We will refer to this type of optimization as “averaged optimization”. The total scores of the all three tasks are defined as follows: Ca + Cb + Cc Ga + Gb + Gc where Ca, Cb, and Cc are the number of the correctly identified labels in each task, and Ga, Gb, and Gc are the numbers of gold labels of each task. Our system necessarily outputs one label to one relational link to identify. Therefore, for all our re11Since the TempEval trial has no restriction on preprocessing such as syntactic parsing, most participants used some sort of parsers. 12http://www.coli.uni-saarland.de/ ˜thorsten/tnt/ 13http://w3.msi.vxu.se/˜nivre/research/ MaltParser.html sults, precision, recall, and F-measure are the exact same value. For evaluation, TempEval proposed the two scoring schemes: “strict” and “relaxed”. For strict scoring we give full credit if the relations match, and no credit if they do not match. On the other hand, relaxed scoring gives credit for a relation according to Table 4. For example, if a system picks the relation “AFTER” that should have been “BEFORE” according to the gold label, it gets neither “strict” nor “relaxed” credit. But if the system assigns “B-O (BEFORE-OR-OVERLAP)” to the relation, it gets a 0.5 “relaxed” score (and still no “strict” score). 6 Results In the following we will first present our comparison of the local and global model. We will then go on to put our results into context and compare them to the state-of-the-art. 6.1 Impact of Global Formulae First, let us show the results on TEST in Table 5. You will find two columns, “Global” and “Local”, showing scores achieved with and without joint formulae, respectively. Clearly, the global models scores are higher than the local scores for all three tasks. This is also reflected by the last row of Table 5. Here we see that we have improved the averaged performance across the three tasks by approximately 2.5% (ρ < 0.01, McNemar’s test 2tailed). 
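The two quantities behind these numbers, the averaged score over Tasks A, B and C defined in Section 5 and the relaxed credit of Table 4, are simple to restate in code. The sketch below is illustrative only; the per-task correct counts in the example are reconstructed approximately from the strict Global accuracies of Table 5 and the gold counts of Table 3, rather than being reported in the paper.

```python
# Hedged sketch of the evaluation quantities.
# (1) Averaged score: (Ca + Cb + Cc) / (Ga + Gb + Gc) over Tasks A, B, C.
# (2) Relaxed credit between gold and system labels; the matrix copies Table 4
#     (the table is symmetric, so row/column orientation does not matter).
RELAXED = {
    "B":   {"B": 1,    "O": 0,    "A": 0,    "B-O": 0.5,  "O-A": 0,    "V": 0.33},
    "O":   {"B": 0,    "O": 1,    "A": 0,    "B-O": 0.5,  "O-A": 0.5,  "V": 0.33},
    "A":   {"B": 0,    "O": 0,    "A": 1,    "B-O": 0,    "O-A": 0.5,  "V": 0.33},
    "B-O": {"B": 0.5,  "O": 0.5,  "A": 0,    "B-O": 1,    "O-A": 0.5,  "V": 0.67},
    "O-A": {"B": 0,    "O": 0.5,  "A": 0.5,  "B-O": 0.5,  "O-A": 1,    "V": 0.67},
    "V":   {"B": 0.33, "O": 0.33, "A": 0.33, "B-O": 0.67, "O-A": 0.67, "V": 1},
}

def averaged_score(correct, gold):
    """correct/gold map task -> counts; with one label per link, P = R = F."""
    return sum(correct[t] for t in "ABC") / sum(gold[t] for t in "ABC")

def relaxed_score(gold_labels, system_labels):
    return sum(RELAXED[g][s] for g, s in zip(gold_labels, system_labels)) / len(gold_labels)

# Counts reconstructed from Tables 3 and 5 (strict, Global model):
print(averaged_score({"A": 109, "B": 251, "C": 146},
                     {"A": 169, "B": 331, "C": 258}))   # ~0.668, matching the "All" row
# Example from Section 5: gold BEFORE, system BEFORE-OR-OVERLAP earns 0.5 credit.
print(relaxed_score(["B"], ["B-O"]))   # 0.5
```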
Note that with 3.5% the improvements are particularly large for Task C. The TempEval test set is relatively small (see Table 3). Hence it is not clear how well our results would generalize in practice. To overcome this issue, we also evaluated the local and global model using 10-fold cross validation on the training data (TRAIN + DEV). The corresponding results can be seen in Table 6. Note that the general picture remains: performance for all tasks is improved, and the averaged score is improved only slightly less than for the TEST results. However, this time the score increase for Task B is lower than before. We 410 Table 4: Evaluation Weights for Relaxed Scoring B O A B-O O-A V B 1 0 0 0.5 0 0.33 O 0 1 0 0.5 0.5 0.33 A 0 0 1 0 0.5 0.33 B-O 0.5 0.5 0 1 0.5 0.67 O-A 0 0.5 0.5 0.5 1 0.67 V 0.33 0.33 0.33 0.67 0.67 1 B: BEFORE O: OVERLAP A: AFTER B-O: BEFORE-OR-OVERLAP O-A: OVERLAP-OR-AFTER V: VAGUE Table 5: Results on TEST Set Local Global task strict relaxed strict relaxed Task A 0.621 0.669 0.645 0.687 Task B 0.737 0.753 0.758 0.777 Task C 0.531 0.599 0.566 0.632 All 0.641 0.682 0.668 0.708 Table 6: Results with 10-fold Cross Validation Local Global task strict relaxed strict relaxed Task A 0.613 0.645 0.662 0.691 Task B 0.789 0.810 0.799 0.819 Task C 0.533 0.608 0.552 0.623 All 0.667 0.707 0.689 0.727 see that this is compensated by much higher scores for Task A and C. Again, the improvements for all three tasks are statistically significant (ρ < 10−8, McNemar’s test, 2-tailed). To summarize, we have shown that by tightly connecting tasks A, B and C, we can improve temporal relation identification significantly. But are we just improving a weak baseline, or can joint modelling help to reach or improve the state-of-theart results? We will try to answer this question in the next section. 6.2 Comparison to the State-of-the-art In order to put our results into context, Table 7 shows them along those of other TempEval participants. In the first row, TempEval Best gives the best scores of TempEval for each task. Note that all but the strict scores of Task C are achieved by WVALI (Puscasu, 2007), a hybrid system that combines machine learning and hand-coded rules. In the second row we see the TempEval average scores of all six participants in TempEval. The third row shows the results of CU-TMP (Bethard and Martin, 2007), an SVM-based system that achieved the second highest scores in TempEval for all three tasks. CU-TMP is of interest because it is the best pure Machine-Learning-based approach so far. The scores of our local and global model come in the fourth and fifth row, respectively. The last row in the table shows task-adjusted scores. Here we essentially designed and applied three global MLNs, each one tailored and optimized for a different task. Note that the task-adjusted scores are always about 1% higher than those of the single global model. Let us discuss the results of Table 7 in detail. We see that for task A, our global model improves an already strong local model to reach the best results both for strict scores (with a 3% points margin) and relaxed scores (with a 5% points margin). For Task C we see a similar picture: here adding global constraints helped to reach the best strict scores, again by a wide margin. We also achieve competitive relaxed scores which are in close range to the TempEval best results. Only for task B our results cannot reach the best TempEval scores. 
While we perform slightly better than the second-best system (CU-TMP), and hence report the best scores among all pure MachineLearning based approaches, we cannot quite compete with WVALI. 6.3 Discussion Let us discuss some further characteristics and advantages of our approach. First, notice that global formulae not only improve strict but also relaxed scores for all tasks. This suggests that we produce more ambiguous labels (such as BEFOREOR-OVERLAP) in cases where the local model has been overconfident (and wrongly chose BEFORE or OVERLAP), and hence make less “fatal errors”. Intuitively this makes sense: global consistency is easier to achieve if our labels remain ambiguous. For example, a solution that labels every relation as VAGUE is globally consistent (but not very informative). Secondly, one could argue that our solution to joint temporal relation identification is too complicated. Instead of performing global inference, one could simply arrange local classifiers for the tasks into a pipeline. In fact, this has been done by Bethard and Martin (2007): they first solve task B and then use this information as features for Tasks A and C. While they do report improvements (0.7% 411 Table 7: Comparison with Other Systems Task A Task B Task C strict relaxed strict relaxed strict relaxed TempEval Best 0.62 0.64 0.80 0.81 0.55 0.64 TempEval Average 0.56 0.59 0.74 0.75 0.51 0.58 CU-TMP 0.61 0.63 0.75 0.76 0.54 0.58 Local Model 0.62 0.67 0.74 0.75 0.53 0.60 Global Model 0.65 0.69 0.76 0.78 0.57 0.63 Global Model (Task-Adjusted) (0.66) (0.70) (0.76) (0.79) (0.58) (0.64) on Task A, and about 0.5% on Task C), generally these improvements do not seem as significant as ours. What is more, by design their approach can not improve the first stage (Task B) of the pipeline. On the same note, we also argue that our approach does not require more implementation efforts than a pipeline. Essentially we only have to provide features (in the form of formulae) to the Markov Logic Engine, just as we have to provide for a SVM or MaxEnt classifier. Finally, it became more clear to us that there are problems inherent to this task and dataset that we cannot (or only partially) solve using global methods. First, there are inconsistencies in the training data (as reflected by the low inter-annotator agreement) that often mislead the learner—this problem applies to learning of local and global formulae/features alike. Second, the training data is relatively small. Obviously, this makes learning of reliable parameters more difficult, particularly when data is as noisy as in our case. Third, the temporal relations in the TempEval dataset only directly connect a small subset of events. This makes global formulae less effective.14 7 Conclusion In this paper we presented a novel approach to temporal relation identification. Instead of using local classifiers to predict temporal order in a pairwise fashion, our approach uses Markov Logic to incorporate both local features and global transition rules between temporal relations. We have focused on transition rules between temporal relations of the three TempEval subtasks: temporal ordering of events, of events and time expressions, and of events and the document creation time. Our results have shown that global transition rules lead to significantly higher accuracy for all three tasks. Moreover, our global Markov Logic 14See (Chambers and Jurafsky, 2008) for a detailed discussion of this problem, and a possible solution for it. 
model achieves the highest scores reported so far for two of three tasks, and very competitive results for the remaining one. While temporal transition rules can also be captured with an Integer Linear Programming approach (Chambers and Jurafsky, 2008), Markov Logic has at least two advantages. First, handling of “rules of thumb” between less specific temporal relations (such as OVERLAP or VAGUE) is straightforward—we simply let the Markov Logic Engine learn weights for these rules. Second, there is less engineering overhead for us to perform, because we do not need to generate ILPs for each document. However, potential for further improvements through global approaches seems to be limited by the sparseness and inconsistency of the data. To overcome this problem, we are planning to use external or untagged data along with methods for unsupervised learning in Markov Logic (Poon and Domingos, 2008). Furthermore, TempEval-2 15 is planned for 2010 and it has challenging temporal ordering tasks in five languages. So, we would like to investigate the utility of global formulae for multilingual temporal ordering. Here we expect that while lexical and syntax-based features may be quite language dependent, global transition rules should hold across languages. Acknowledgements This work is partly supported by the Integrated Database Project, Ministry of Education, Culture, Sports, Science and Technology of Japan. References Steven Bethard and James H. Martin. 2007. Cu-tmp: Temporal relation classification using syntactic and semantic features. In Proceedings of the 4th International Workshop on SemEval-2007., pages 129–132. 15http://www.timeml.org/tempeval2/ 412 Branimir Boguraev and Rie Kubota Ando. 2005. Timeml-compliant text analysis for temporal reasoning. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 997– 1003. Nathanael Chambers and Daniel Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 698–706, Honolulu, Hawaii, October. Association for Computational Linguistics. Yuchang Cheng, Masayuki Asahara, and Yuji Matsumoto. 2007. Naist.japan: Temporal relation identification using dependency parsed tree. In Proceedings of the 4th International Workshop on SemEval2007., pages 245–248. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. Lise Getoor and Ben Taskar. 2007. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). The MIT Press. Daphne Koller, 1999. Probabilistic Relational Models, pages 3–13. Springer, Berlin/Heidelberg, Germany. Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 753–760, Morristown, NJ, USA. Association for Computational Linguistics. Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with Markov Logic. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 650–659, Honolulu, Hawaii, October. Association for Computational Linguistics. Georgiana Puscasu. 2007. Wvali: Temporal relation identification by syntactico-semantic analysis. 
In Proceedings of the 4th International Workshop on SemEval-2007., pages 484–487. James Pustejovsky, Jose Castano, Robert Ingria, Reser Sauri, Robert Gaizauskas, Andrea Setzer, and Graham Katz. 2003. The timebank corpus. In Proceedings of Corpus Linguistics 2003, pages 647–656. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. In Machine Learning. Sebastian Riedel. 2008. Improving the accuracy and efficiency of map inference for markov logic. In Proceedings of UAI 2008. Ben Taskar, Abbeel Pieter, and Daphne Koller. 2002. Discriminative probabilistic models for relational data. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 485–492, San Francisco, CA. Morgan Kaufmann. Marc Verhagen, Robert Gaizaukas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval temporal relation identification. In Proceedings of the 4th International Workshop on SemEval-2007., pages 75–80. 413
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 414–422, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Profile Based Cross-Document Coreference Using Kernelized Fuzzy Relational Clustering Jian Huang† Sarah M. Taylor‡ Jonathan L. Smith‡ Konstantinos A. Fotiadis‡ C. Lee Giles† †College of Information Sciences and Technology Pennsylvania State University, University Park, PA 16802, USA {jhuang, giles}@ist.psu.edu ‡Advanced Technology Office, Lockheed Martin IS&GS, Arlington, VA 22203, USA {sarah.m.taylor, jonathan.l.smith, konstantinos.a.fotiadis}@lmco.com Abstract Coreferencing entities across documents in a large corpus enables advanced document understanding tasks such as question answering. This paper presents a novel cross document coreference approach that leverages the profiles of entities which are constructed by using information extraction tools and reconciled by using a within-document coreference module. We propose to match the profiles by using a learned ensemble distance function comprised of a suite of similarity specialists. We develop a kernelized soft relational clustering algorithm that makes use of the learned distance function to partition the entities into fuzzy sets of identities. We compare the kernelized clustering method with a popular fuzzy relation clustering algorithm (FRC) and show 5% improvement in coreference performance. Evaluation of our proposed methods on a large benchmark disambiguation collection shows that they compare favorably with the top runs in the SemEval evaluation. 1 Introduction A named entity that represents a person, an organization or a geo-location may appear within and across documents in different forms. Cross document coreference (CDC) is the task of consolidating named entities that appear in multiple documents according to their real referents. CDC is a stepping stone for achieving intelligent information access to vast and heterogeneous text corpora, which includes advanced NLP techniques such as document summarization and question answering. A related and well studied task is within document coreference (WDC), which limits the scope of disambiguation to within the boundary of a document. When namesakes appear in an article, the author can explicitly help to disambiguate, using titles and suffixes (as in the example, “George Bush Sr. ... the younger Bush”) besides other means. Cross document coreference, on the other hand, is a more challenging task because these linguistics cues and sentence structures no longer apply, given the wide variety of context and styles in different documents. Cross document coreference research has recently become more popular due to the increasing interests in the web person search task (Artiles et al., 2007). Here, a search query for a person name is entered into a search engine and the desired outputs are documents clustered according to the identities of the entities in question. In our work, we propose to drill down to the subdocument mention level and construct an entity profile with the support of information extraction tools and reconciled with WDC methods. Hence our IE based approach has access to accurate information such as a person’s mentions and geolocations for disambiguation. Simple IR based CDC approaches (e.g. (Gooi and Allan, 2004)), on the other hand, may simply use all the terms and this can be detrimental to accuracy. For example, a biography of John F. 
Kennedy is likely to mention members of his family with related positions, besides references to other political figures. Even with careful word selection, these textual features can still confuse the disambiguation system about the true identity of the person. We propose to handle the CDC task using a novel kernelized fuzzy relational clustering algorithm, which allows probabilistic cluster membership assignment. This not only addresses the intrinsic uncertainty nature of the CDC problem, but also yields additional performance improvement. We propose to use a specialist ensemble 414 learning approach to aggregate the diverse set of similarities in comparing attributes and relationships in entity profiles. Our approach is first fully described in Section 2. The effectiveness of the proposed method is demonstrated using real world benchmark test sets in Section 3. We review related work in cross document coreference and conclude in Section 5. 2 Methods 2.1 Document Level and Profile Based CDC We make distinctions between document level and profile based cross document coreference. Document level CDC makes a simplifying assumption that a named entity (and its variants) in a document has one underlying real identity. The assumption is generally acceptable but may be violated when a document refers to namesakes at the same time (e.g. George W. Bush and George H. W. Bush referred to as George or President Bush). Furthermore, the context surrounding the person NE President Clinton can be counterproductive for disambiguating the NE Senator Clinton, with both entities likely to appear in a document at the same time. The simplified document level CDC has nevertheless been used in the WePS evaluation (Artiles et al., 2007), called the web people task. In this work, we advocate profile based disambiguation that aims to leverage the advances in NLP techniques. Rather than treating a document as simply a bag of words, an information extraction tool first extracts NE’s and their relationships. For the NE’s of interest (i.e. persons in this work), a within-document coreference (WDC) module then links the entities deemed as referring to the same underlying identity into a WDC chain. This process includes both anaphora resolution (resolving ‘He’ and its antecedent ‘President Clinton’) and entity tracking (resolving ‘Bill’ and ‘President Clinton’). Let E = {e1, ..., eN} denote the set of N chained entities (each corresponding to a WDC chain), provided as input to the CDC system. We intentionally do not distinguish which document each ej belongs to, as profile based CDC can potentially rectify WDC errors by leveraging information across document boundaries. Each ei is represented as a profile which contains the NE, its attributes and associated relationships, i.e. ej =< ej,1, ..., ej,L > (ej,l can be a textual attribute or a pointer to another entity). The profile based CDC method generates a partition of E, represented by a partition matrix U (where uij denotes the membership of an entity ej to the ith identity cluster). Therefore, the chained entities placed in a name cluster are deemed as coreferent. Profile based CDC addresses a finer grained coreference problem in the mention level, enabled by the recent advances in IE and WDC techniques. In addition, profile based CDC facilitates user information consumption with structured information and short summary passages. Next, we focus on the relational clustering algorithm that lies at the core of the profile based CDC system. 
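For concreteness, the chained-entity profiles that serve as input to both of these components can be pictured as a simple structure of mentions, typed attributes and relationships. This is only an illustrative sketch; the field names are assumptions for exposition and do not reflect AeroText's actual output schema.

```python
# Hedged sketch of a chained-entity profile e_j = <e_j,1, ..., e_j,L>:
# the named entity plus the attributes and relationships gathered for its WDC chain.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class EntityProfile:
    doc_id: str
    mentions: List[str]                                   # e.g. ["President Clinton", "Bill", "he"]
    attributes: Dict[str, str] = field(default_factory=dict)
    relationships: Dict[str, List[str]] = field(default_factory=dict)

profile = EntityProfile(
    doc_id="doc42",
    mentions=["President Clinton", "Bill", "he"],
    attributes={"gender": "male", "last_name": "Clinton"},
    relationships={"employment": ["President"], "family": ["Hillary Clinton"]},
)
```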
We then turn our attention to the specialist learning algorithm for the distance function used in clustering, capable of leveraging the available training data. 2.2 CDC Using Fuzzy Relational Clustering 2.2.1 Preliminaries Traditionally, hard clustering algorithms (where uij ∈{0, 1}) such as complete linkage hierarchical agglomerative clustering (Mann and Yarowsky, 2003) have been applied to the disambiguation problem. In this work, we propose to use fuzzy clustering methods (relaxing the membership condition to uij ∈[0, 1]) as a better way of handling uncertainty in cross document coreference. First, consider the following motivating example, Example. The named entity President Bush is extracted from the sentence “President Bush addressed the nation from the Oval Office Monday.” • Without additional cues, a hard clustering algorithm has to arbitrarily assign the mention “President Bush” to either the NE “George W. Bush” or “George H. W. Bush”. • A soft clustering algorithm, on the other hand, can assign equal probability to the two identities, indicating low entropy or high uncertainty in the solution. Additionally, the soft clustering algorithm can assign lower probability to the identity “Governor Jeb Bush”, reflecting a less likely (though not impossible) coreference decision. We first formalize the cross document coreference problem as a soft clustering problem, which minimizes the following objective function: JC(E) = C P i=1 N P j=1 um ij d2(ej, vi) (1) s.t. C P i=1 uij = 1 and N P j=1 uij > 0, uij ∈[0, 1] 415 where vi is a virtual (implicit) prototype of the i-th cluster (ej, vi ∈D) and m controls the fuzziness of the solution (m > 1; the solution approaches hard clustering as m approaches 1). We will further explain the generic distance function d : D × D →R in the next subsection. The goal of the optimization is to minimize the sum of deviations of patterns to the cluster prototypes. The clustering solution is a fuzzy partition Pθ = {Ci}, where ej ∈Ci if and only if uij > θ. We note from the outset that the optimization functional has the same form as the classical Fuzzy C-Means (FCM) algorithm (Bezdek, 1981), but major differences exist. FCM, as most object clustering algorithms, deals with object data represented in a vectorial form. In our case, the data is purely relational and only the mutual relationships between entities can be determined. To be exact, we can define the similarity/dissimilarity between a pair of attributes or relationships of the same type l between entities ej and ek as s(l)(ej, ek). For instance, the similarity between the occupations ‘President’ and ‘Commander in Chief’ can be computed using the JC semantic distance (Jiang and Conrath, 1997) with WordNet; the similarity of co-occurrence with other people can be measured by the Jaccard coefficient. In the next section, we propose to compute the relation strength r(·, ·) from the component similarities using aggregation weights learned from training data. Hence the N chained entities to be clustered can be represented as relational data using an n×n matrix R, where rj,k = r(ej, ek). The Any Relation Clustering Algorithm (ARCA) (Corsini et al., 2005; Cimino et al., 2006) represents relational data as object data using their mutual relation strength and uses FCM for clustering. We adopt this approach to transform (objectify) a relational pattern ej into an N dimensional vector rj (i.e. the j-th row in the matrix R) using a mapping Θ : D →RN. 
In other words, each chained entity is represented as a vector of its relation strengths with all the entities. Fuzzy clusters can then be obtained by grouping closely related patterns using object clustering algorithm. Furthermore, it is well known that FCM is a spherical clustering algorithm and thus is not generally applicable to relational data which may yield relational clusters of arbitrary and complicated shapes. Also, the distance in the transformed space may be non-Euclidean, rendering many clustering algorithms ineffective (many FCM extensions theoretically require the underlying distance to satisfy certain metric properties). In this work, we propose kernelized ARCA (called KARC) which uses a kernelinduced metric to handle the objectified relational data, as we introduce next. 2.2.2 Kernelized Fuzzy Clustering Kernelization (Sch¨olkopf and Smola, 2002) is a machine learning technique to transform patterns in the data space to a high-dimensional feature space so that the structure of the data can be more easily and adequately discovered. Specifically, a nonlinear transformation Φ maps data in RN to H of possibly infinite dimensions (Hilbert space). The key idea is the kernel trick – without explicitly specifying Φ and H, the inner product in H can be computed by evaluating a kernel function K in the data space, i.e. < Φ(ri), Φ(rj) >= K(ri, rj) (one of the most frequently used kernel functions is the Gaussian RBF kernel: K(rj, rk) = exp(−λ∥rj −rk∥2)). This technique has been successfully applied to SVMs to classify nonlinearly separable data (Vapnik, 1995). Kernelization preserves the simplicity in the formalism of the underlying clustering algorithm, meanwhile it yields highly nonlinear boundaries so that spherical clustering algorithms can apply (e.g. (Zhang and Chen, 2003) developed a kernelized object clustering algorithm based on FCM). Let wi denote the objectified virtual cluster vi, i.e. wi = Θ(vi). Using the kernel trick, the squared distance between Φ(rj) and Φ(wi) in the feature space H can be computed as: ∥Φ(rj) −Φ(wi)∥2 H (2) = < Φ(rj) −Φ(wi), Φ(rj) −Φ(wi) > = < Φ(rj), Φ(rj) > −2 < Φ(rj), Φ(wi) > + < Φ(wi), Φ(wi) > = 2 −2K(rj, wi) (3) assuming K(r, r) = 1. The KARC algorithm defines the generic distance d as d2(ej, vi) def = ∥Φ(rj) −Φ(wi)∥2 H = ∥Φ(Θ(ej)) −Φ(Θ(vi))∥2 H (we also use d2 ji as a notational shorthand). Using Lagrange Multiplier as in FCM, the optimal solution for Equation (1) is: uij =      " C P h=1 µ d2 ji d2 jh ¶1/(m−1)#−1 , (d2 ji ̸= 0) 1 , (d2 ji = 0) (4) 416 Φ(wi) = N P k=1 um ikΦ(rk) N P k=1 um ik (5) Since Φ is an implicit mapping, Eq. (5) can not be explicitly evaluated. On the other hand, plugging Eq. (5) into Eq. (3), d2 ji can be explicitly represented by using the kernel matrix, d2 ji = 2 −2 · N P k=1 um ikK(rj, rk) N P k=1 um ik (6) With the derivation, the kernelized fuzzy clustering algorithm KARC works as follows. The chained entities E are first objectified into the relation strength matrix R using SEG, the details of which are described in the following section. The Gram matrix K is then computed based on the relation strength vectors using the kernel function. For a given number of clusters C, the initialization step is done by randomly picking C patterns as cluster centers, equivalently, C indices {n1, .., nC} are randomly picked from {1, .., N}. D0 is initialized by setting d2 ji = 2 −2K(rj, rni). 
KARC alternately updates the membership matrix U and the kernel distance matrix D until convergence or running more than maxIter iterations (Algorithm 1). Finally, the soft partition is generated based on the membership matrix U, which is the desired cross document coreference result. Algorithm 1 KARC Alternating Optimization Input: Gram matrix K; #Clusters C; threshold θ initialize D0 t ←0 repeat t ←t + 1 // 1– Update membership matrix Ut: uij = (d2 ji)− 1 m−1 PC h=1 (d2 jh)− 1 m−1 // 2– Update kernel distance matrix Dt: d2 ji = 2 −2 · NP k=1 um ikKjk NP k=1 um ik until (t > maxIter) or (t > 1 and |Ut −Ut−1| < ϵ) Pθ ←Generate soft partition(Ut, θ) Output: Fuzzy partition Pθ 2.2.3 Cluster Validation In the CDC setting, the number of true underlying identities may vary depending on the entities’ level of ambiguity (e.g. name frequency). Selecting the optimal number of clusters is in general a hard research question in clustering1. We adopt the Xie-Beni Index (XBI) (Xie and Beni, 1991) as in ARCA, which is one of the most popular cluster validities for fuzzy clustering algorithms. XieBeni Index (XBI) measures the goodness of clustering using the ratio of the intra-cluster variation and the inter-cluster separation. We measure the kernelized XBI (KXBI) in the feature space as, KXBI = C P i=1 N P j=1 um ij ∥Φ(rj) −Φ(wi)∥2 H N · min 1≤i<j≤C ∥Φ(wi) −Φ(wj)∥2 H where the nominator is readily computed using D and the inter-cluster separation in the denominator can be evaluated using the similar kernel trick above (details omitted). Note that KXBI is only defined for C > 1. Thus we pick the C that corresponds to the first minimum of KXBI, and then compare its objective function value JC with the cluster variance (J1 for C = 1). The optimal C is chosen from the minimum of the two2. 2.3 Specialist Ensemble Learning of Relation Strengths between Entities One remaining element in the overall CDC approach is how the relation strength rj,k between two entities is computed. In (Cohen et al., 2003), a binary SVM model is trained and its confidence in predicting the non-coreferent class is used as the distance metric. In our case of using information extraction results for disambiguation, however, only some of the similarity features are present based on the available relationships in two profiles. In this work, we propose to treat each similarity function as a specialist that specializes in computing the similarity of a particular type of relationship. Indeed, the similarity function between a pair of attributes or relationships may in itself be a sophisticated component algorithm. We utilize the specialist ensemble learning framework (Freund et al., 1997) to combine these component 1In particular, clustering algorithms that regularize the optimization with cluster size are not applicable in our case. 2In practice, the entities to be disambiguated tend to be dominated by several major identities. Hence performance generally does not vary much in the range of large C values. 417 similarities into the relation strength for clustering. Here, a specialist is awakened for prediction only when the same type of relationships are present in both chained entities. A specialist can choose not to make a prediction if it is not confident enough for an instance. These aspects contrast with the traditional insomniac ensemble learning methods, where each component learner is always available for prediction (Freund et al., 1997). 
Also, specialists have different weights (in addition to their prediction) on the final relation strength, e.g. a match in a family relationship is considered more important than in a co-occurrence relationship. Algorithm 2 SEG (Freund et al., 1997) Input: Initial weight distribution p1; learning rate η > 0; training set {< st, yt >} 1: for t=1 to T do 2: Predict using: ˜yt = P i∈Et pt ist i P i∈Et pt i (7) 3: Observe the true label yt and incur square loss L(˜yt, yt) = (˜yt −yt)2 4: Update weight distribution: for i ∈Et pt+1 i = pt ie−2ηxt i(˜yt−yt) P j∈Et pt je−2ηxt i(˜yt−yt) · X j∈Et pt j (8) Otherwise: pt+1 i = pt i 5: end for Output: Model p The ensemble relation strength model is learned as follows. Given training data, the set of chained entities Etrain is extracted as described earlier. For a pair of entities ej and ek, a similarity vector s is computed using the component similarity functions for the respective attributes and relationships, and the true label is defined as y = I{ej and ek are coreferent}. The instances are subsampled to yield a balanced pairwise training set {< st, yt >}. We adopt the Specialist Exponentiated Gradient (SEG) (Freund et al., 1997) algorithm to learn the mixing weights of the specialists’ prediction (Algorithm 2) in an online manner. In each training iteration, an instance < st, yt > is presented to the learner (with Et denoting the set of indices of awake specialists in st). The SEG algorithm first predicts the value ˜yt based on the awake specialists’ decisions. The true value yt is then revealed and the learner incurs a square loss between the predicted and the true values. The current weight distribution p is updated to minimize square loss: awake specialists are promoted or demoted in their weights according to the difference between the predicted and the true value. The learning iterations can run a few passes till convergence, and the model is learned in linear time with respect to T and is thus very efficient. In prediction time, let E(jk) denote the set of active specialists for the pair of entities ej and ek, and s(jk) denote the computed similarity vector. The predicted relation strength rj,k is, rj,k = P i∈E(jk) pis(jk) i P i∈E(jk) pi (9) 2.4 Remarks Before we conclude this section, we make several comments on using fuzzy clustering for cross document coreference. First, instead of conducting CDC for all entities concurrently (which can be computationally intensive with a large corpus), chained entities are first distributed into nonoverlapping blocks. Clustering is performed for each block which is a drastically smaller problem space, while entities from different blocks are unlikely to be coreferent. Our CDC system uses phonetic blocking on the full name, so that name variations arising from translation, transliteration and abbreviation can be accommodated. Additional link constraints checking is also implemented to improve scalability though these are not the main focus of the paper. There are several additional benefits in using a fuzzy clustering method besides the capability of probabilistic membership assignments in the CDC solution. In the clustered web search context, splitting a true identity into two clusters is perceived as a more severe error than putting irrelevant records in a cluster, as it is more difficult for the user to collect records in different clusters (to reconstruct the real underlying identity) than to prune away noisy records. 
While there is no universal way to handle this with hard clustering, soft clustering algorithms can more easily avoid the false negatives by allowing records to probabilistically appear in different clusters (subject to the sum of 1) using a more lenient threshold. Also, while there is no real prototypical elements in relational clustering, soft relational clustering 418 methods can naturally rank the profiles within a cluster according to their membership levels, which is an additional advantage for enhancing user consumption of the disambiguation results. 3 Experiments In this section, we first formally define the evaluation metrics, followed by the introduction to the benchmark test sets and the system’s performance. 3.1 Evaluation Metrics We benchmarked our method using the standard purity and inverse purity clustering metrics as in the WePS evaluation. Let a set of clusters P = {Ci} denote the system’s partition as aforementioned and a set of categories Q = {Dj} be the gold standard. The precision of a cluster Ci with respect to a category Dj is defined as, Precision(Ci, Dj) = |Ci ∩Dj| |Ci| Purity is in turn defined as the weighted average of the maximum precision achieved by the clusters on one of the categories, Purity(P, Q) = C X i=1 |Ci| n max j Precision(Ci, Dj) where n = P |Ci|. Hence purity penalizes putting noise chained entities in a cluster. Trivially, the maximum purity (i.e. 1) can be achieved by making one cluster per chained entity (referred to as the one-in-one baseline). Reversing the role of clusters and categories, Inverse purity(P, Q) def = Purity(Q, P). Inverse Purity penalizes splitting chained entities belonging to the same category into different clusters. The maximum inverse purity can be similarly achieved by putting all entities into one cluster (all-in-one baseline). Purity and inverse purity are similar to the precision and recall measures commonly used in IR. The F score, F = 1/(α 1 Purity + (1 − α) 1 InversePurity), is used in performance evaluation. α = 0.2 is used to give more weight to inverse purity, with the justification for the web person search mentioned earlier. 3.2 Dataset We evaluate our methods using the benchmark test collection from the ACL SemEval-2007 web person search task (WePS) (Artiles et al., 2007). The test collection consists of three sets of 10 different names, sampled from ambiguous names from English Wikipedia (famous people), participants of the ACL 2006 conference (computer scientists) and common names from the US Census data, respectively. For each name, the top 100 documents retrieved from the Yahoo! Search API were annotated, yielding on average 45 real world identities per set and about 3k documents in total. As we note in the beginning of Section 2, the human markup for the entities corresponding to the search queries is on the document level. The profile-based CDC approach, however, is to merge the mention-level entities. In our evaluation, we adopt the document label (and the person search query) to annotate the entity profiles that corresponds to the person name search query. Despite the difference, the results of the one-in-one and all-in-one baselines are almost identical to those reported in the WePS evaluation (F = 0.52, 0.58 respectively). Hence the performance reported here is comparable to the official evaluation results (Artiles et al., 2007). 3.3 Information Extraction and Similarities We use an information extraction tool AeroText (Taylor, 2004) to construct the entity profiles. 
AeroText extracts two types of information for an entity. First, the attribute information about the person named entity includes first/middle/last names, gender, mention, etc. In addition, AeroText extracts relationship information between named entities, such as Family, List, Employment, Ownership, Citizen-ResidentReligion-Ethnicity and so on, as specified in the ACE evaluation. AeroText resolves the references of entities within a document and produces the entity profiles, used as input to the CDC system. Note that alternative IE or WDC tools, as well as additional attributes or relationships, can be readily used in the CDC methods we proposed. A suite of similarity functions is designed to determine if the attributes relationships in a pair of entity profiles match or not: Text similarity. To decide whether two names in the co-occurrence or family relationship match, we use the SoftTFIDF measure (Cohen et al., 2003), which is a hybrid matching scheme that combines the token-based TFIDF with the JaroWinkler string distance metric. This permits inexact matching of named entities due to name 419 variations, typos, etc. Semantic similarity. Text or syntactic similarity is not always sufficient for matching relationships. WordNet and the information theoretic semantic distance (Jiang and Conrath, 1997) are used to measure the semantic similarity between concepts in relationships such as mention, employment, ownership, etc. Other rule-based similarity. Several other cases require special treatment. For example, the employment relationships of Senator and D-N.Y. should match based on domain knowledge. Also, we design dictionary-based similarity functions to handle nicknames (Bill and William), acronyms (COLING for International Conference on Computational Linguistics), and geo-locations. 3.4 Evaluation Results From the WePS training data, we generated a training set of around 32k pairwise instances as previously stated in Section 2.3. We then used the SEG algorithm to learn the weight distribution model. We tuned the parameters in the KARC algorithm using the training set with discrete grid search and chose m = 1.6 and θ = 0.3. The RBF kernel (Gaussian) is used with γ = 0.015. Table 1: Cross document coreference performance (I. Purity denotes inverse purity). Method Purity I. Purity F KARC-S 0.657 0.795 0.740 KARC-H 0.662 0.762 0.710 FRC 0.484 0.840 0.697 One-in-one 1.000 0.482 0.524 All-in-one 0.279 1.000 0.571 The macro-averaged cross document coreference on the WePS test sets are reported in Table 1. The F score of our CDC system (KARCS) is 0.740, comparable to the test results of the first tier systems in the official evaluation. The two baselines are also included. Since different feature sets, NLP tools, etc are used in different benchmarked systems, we are also interested in comparing the proposed algorithm with different soft relational clustering variants. First, we ‘harden’ the fuzzy partition produced by KARC by allowing an entity to appear in the cluster with highest membership value (KARC-H). Purity improves because of the removal of noise entities, though at the sacrifice of inverse purity and the Table 2: Cross document coreference performance on subsets (I. Purity denotes inverse purity). Test set Identity Purity I. Purity F Wikipedia 56.5 0.666 0.752 0.717 ACL-06 31.0 0.783 0.771 0.773 US Census 50.3 0.554 0.889 0.754 F score deteriorates. 
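For reference, the KARC updates behind these numbers (Algorithm 1 with Equations (4) and (6)) can be written in a few lines of NumPy. The sketch below is hedged: it uses the settings reported above (m = 1.6, θ = 0.3, Gaussian kernel with γ = 0.015), floors zero distances at a small epsilon in place of the exact d² = 0 branch of Equation (4), and omits the KXBI-based selection of the number of clusters.

```python
# Hedged NumPy sketch of Algorithm 1 (KARC alternating optimization).
# K is the N x N Gram matrix over relation-strength vectors; memberships follow
# Eq. (4) and kernel distances to the implicit prototypes follow Eq. (6).
import numpy as np

def karc(K, C, m=1.6, theta=0.3, max_iter=100, eps=1e-4, seed=0):
    N = K.shape[0]
    rng = np.random.default_rng(seed)
    centers = rng.choice(N, size=C, replace=False)
    D2 = 2.0 - 2.0 * K[:, centers].T              # C x N init: d^2_ji = 2 - 2 K(r_j, r_ni)
    U_prev = None
    for _ in range(max_iter):
        # Eq. (4): u_ij proportional to (d^2_ji)^(-1/(m-1)); zero distances floored.
        inv = np.power(np.maximum(D2, 1e-12), -1.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)   # each column sums to 1
        # Eq. (6): squared kernel distance to the implicit cluster prototypes.
        Um = U ** m
        D2 = 2.0 - 2.0 * (Um @ K) / Um.sum(axis=1, keepdims=True)
        if U_prev is not None and np.abs(U - U_prev).max() < eps:
            break
        U_prev = U
    # Soft partition P_theta: entity j is placed in cluster i iff u_ij > theta.
    partition = [list(np.flatnonzero(U[i] > theta)) for i in range(C)]
    return U, partition

# Toy relation-strength matrix for four chained entities (two likely identities).
R = np.array([[1.0, .9, .1, .1], [.9, 1.0, .2, .1],
              [.1, .2, 1.0, .8], [.1, .1, .8, 1.0]])
K = np.exp(-0.015 * ((R[:, None, :] - R[None, :, :]) ** 2).sum(-1))   # RBF, gamma = 0.015
U, partition = karc(K, C=2)
```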
We also implement a popular fuzzy relational clustering algorithm called FRC (Dave and Sen, 2002), whose optimization functional directly minimizes with respect to the relation matrix. With the same feature sets and distance function, KARC-S outperforms FRC in F score by about 5%. Because the test set is very ambiguous (on average only two documents per real world entity), the baselines have relatively high F score as observed in the WePS evaluation (Artiles et al., 2007). Table 2 further analyzes KARCS’s result on the three subsets Wikipedia, ACL06 and US Census. The F score is higher in the less ambiguous (the average number of identities) dataset and lower in the more ambiguous one, with a spread of 6%. We study how the cross document coreference performance changes as we vary the fuzziness in the solution (controlled by m). In Figure 1, as m increases from 1.4 to 1.9, purity improves by 10% to 0.67, which indicates that more correct coreference decisions (true positives) can be made in a softer configuration. The complimentary is true for inverse purity, though to a lesser extent. In this case, more false negatives, corresponding to the entities of different coreferents incorrectly 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 1.4 1.5 1.6 1.7 1.8 1.9 m KARC performance with different m purity inverse purity F Figure 1: Purity, inverse purity and F score with different fuzzifiers m. 420 0.6 0.65 0.7 0.75 0.8 0.85 0.1 0.2 0.3 0.4 0.5 0.6 θ KARC performance with different θ purity inverse purity F Figure 2: CDC performance with different θ. linked, are made in a softer partition. The F score peaks at 0.74 (m = 1.6) and then slightly decreases, as the gain in purity is outweighed by the loss in inverse purity. Figure 2 evaluates the impact of the different settings of θ (the threshold of including a chained entity in the fuzzy cluster) on the coreference performance. We observe that as we increase θ, purity improves indicating less ‘noise’ entities are included in the solution. On the other hand, inverse purity decreases meaning more coreferent entities are not linked due to the stricter threshold. Overall, the changes in the two metrics offset each other and the F score is relatively stable across a broad range of θ settings. 4 Related Work The original work in (Bagga and Baldwin, 1998) proposed a CDC system by first performing WDC and then disambiguating based on the summary sentences of the chains. This is similar to ours in that mentions rather than documents are clustered, leveraging the advances in state-of-the-art WDC methods developed in NLP, e.g. (Ng and Cardie, 2001; Yang et al., 2008). On the other hand, our work goes beyond the simple bag-of-word features and vector space model in (Bagga and Baldwin, 1998; Gooi and Allan, 2004) with IE results. (Wan et al., 2005) describes a person resolution system WebHawk that clusters web pages using some extracted personal information including person name, title, organization, email and phone number, besides lexical features. (Mann and Yarowsky, 2003) extracts biographical information, which is relatively scarce in web data, for disambiguation. With the support of state-of-the-art information extraction tools, the profiles of entities in this work covers a broader range of relational information. (Niu et al., 2004) also leveraged IE support, but their approach was evaluated on a small artificial corpus. Also, the pairwise distance model is insomniac (i.e. 
all similarity specialists are awake for prediction) and our work extends this with a specialist learning framework. Prior work has largely relied on using hierarchical clustering methods for CDC, with the threshold for stopping the merging set using the training data, e.g. (Mann and Yarowsky, 2003; Chen and Martin, 2007; Baron and Freedman, 2008). The fuzzy relational clustering method proposed in this paper we believe better addresses the uncertainty aspect of the CDC problem. There are also orthogonal research directions for the CDC problem. (Li et al., 2004) solved the CDC problem by adopting a probabilistic view on how documents are generated and how names are sprinkled into them. (Bunescu and Pasca, 2006) showed that external information from Wikipedia can improve the disambiguation performance. 5 Conclusions We have presented a profile-based Cross Document Coreference (CDC) approach based on a novel fuzzy relational clustering algorithm KARC. In contrast to traditional hard clustering methods, KARC produces fuzzy sets of identities which better reflect the intrinsic uncertainty of the CDC problem. Kernelization, as used in KARC, enables the optimization of clustering that is spherical in nature to apply to relational data that tend to have complicated shapes. KARC partitions named entities based on their profiles constructed by an information extraction tool. To match the profiles, a specialist ensemble algorithm predicts the pairwise distance by aggregating the similarities of the attributes and relationships in the profiles. We evaluated the proposed methods with experiments on a large benchmark collection and demonstrate that the proposed methods compare favorably with the top runs in the SemEval evaluation. The focus of this work is on the novel learning and clustering methods for coreference. Future research directions include developing rich feature sets and using corpus level or external information. We believe that such efforts can further improve cross document coreference performance. 421 References Javier Artiles, Julio Gonzalo, and Satoshi Sekine. 2007. The SemEval-2007 WePS evaluation: Establishing a benchmark for the web people search task. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval2007), pages 64–69. Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In Proceedings of 36th International Conference On Computational Linguistics (ACL) and 17th international conference on Computational linguistics (COLING), pages 79–85. Alex Baron and Marjorie Freedman. 2008. Who is who and what is what: Experiments in crossdocument co-reference. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 274–283. J. C. Bezdek. 1981. Pattern Recognition with Fuzzy Objective Function Algoritms. Plenum Press, NY. Razvan Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 9–16. Ying Chen and James Martin. 2007. Towards robust unsupervised personal name disambiguation. In Proc. of 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Mario G. C. A. Cimino, Beatrice Lazzerini, and Francesco Marcelloni. 2006. A novel approach to fuzzy clustering based on a dissimilarity relation extracted from data using a TS system. 
Pattern Recognition, 39(11):2077–2091. William W. Cohen, Pradeep Ravikumar, and Stephen E. Fienberg. 2003. A comparison of string distance metrics for name-matching tasks. In Proceedings of IJCAI Workshop on Information Integration on the Web. Paolo Corsini, Beatrice Lazzerini, and Francesco Marcelloni. 2005. A new fuzzy relational clustering algorithm based on the fuzzy c-means algorithm. Soft Computing, 9(6):439 – 447. Rajesh N. Dave and Sumit Sen. 2002. Robust fuzzy clustering of relational data. IEEE Transactions on Fuzzy Systems, 10(6):713–727. Yoav Freund, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. 1997. Using and combining predictors that specialize. In Proceedings of the twenty-ninth annual ACM symposium on Theory of computing (STOC), pages 334–343. Chung H. Gooi and James Allan. 2004. Crossdocument coreference on a large scale corpus. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 9–16. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of International Conference Research on Computational Linguistics. Xin Li, Paul Morie, and Dan Roth. 2004. Robust reading: Identification and tracing of ambiguous names. In Proceedings of the Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 17–24. Gideon S. Mann and David Yarowsky. 2003. Unsupervised personal name disambiguation. In Conference on Computational Natural Language Learning (CoNLL), pages 33–40. Vincent Ng and Claire Cardie. 2001. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 104–111. Cheng Niu, Wei Li, and Rohini K. Srihari. 2004. Weakly supervised learning for cross-document person name disambiguation supported by information extraction. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL), pages 597–604. Bernhard Sch¨olkopf and Alex Smola. 2002. Learning with Kernels. MIT Press, Cambridge, MA. Sarah M. Taylor. 2004. Information extraction tools: Deciphering human language. IT Professional, 6(6):28 – 34. Vladimir Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York. Xiaojun Wan, Jianfeng Gao, Mu Li, and Binggong Ding. 2005. Person resolution in person search results: WebHawk. In Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM), pages 163–170. Xuanli Lisa Xie and Gerardo Beni. 1991. A validity measure for fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(8):841 – 847. Xiaofeng Yang, Jian Su, Jun Lang, Chew L. Tan, Ting Liu, and Sheng Li. 2008. An entitymention model for coreference resolution with inductive logic programming. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL), pages 843–851. Dao-Qiang Zhang and Song-Can Chen. 2003. Clustering incomplete data using kernel-based fuzzy c-means algorithm. Neural Processing Letters, 18(3):155 – 162. 422
2009
47
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 423–431, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Who, What, When, Where, Why? Comparing Multiple Approaches to the Cross-Lingual 5W Task Kristen Parton*, Kathleen R. McKeown*, Bob Coyne*, Mona T. Diab*, Ralph Grishman†, Dilek Hakkani-Tür‡, Mary Harper§, Heng Ji•, Wei Yun Ma*, Adam Meyers†, Sara Stolbach*, Ang Sun†, Gokhan Tur˚, Wei Xu† and Sibel Yaman‡ *Columbia University New York, NY, USA {kristen, kathy, coyne, mdiab, ma, sara}@cs.columbia.edu †New York University New York, NY, USA {grishman, meyers, asun, xuwei} @cs.nyu.edu ‡International Computer Science Institute Berkeley, CA, USA {dilek, sibel} @icsi.berkeley.edu §Human Lang. Tech. Ctr. of Excellence, Johns Hopkins and U. of Maryland, College Park [email protected] •City University of New York New York, NY, USA [email protected] ˚SRI International Palo Alto, CA, USA [email protected] Abstract Cross-lingual tasks are especially difficult due to the compounding effect of errors in language processing and errors in machine translation (MT). In this paper, we present an error analysis of a new cross-lingual task: the 5W task, a sentence-level understanding task which seeks to return the English 5W's (Who, What, When, Where and Why) corresponding to a Chinese sentence. We analyze systems that we developed, identifying specific problems in language processing and MT that cause errors. The best cross-lingual 5W system was still 19% worse than the best monolingual 5W system, which shows that MT significantly degrades sentence-level understanding. Neither source-language nor targetlanguage analysis was able to circumvent problems in MT, although each approach had advantages relative to the other. A detailed error analysis across multiple systems suggests directions for future research on the problem. 1 Introduction In our increasingly global world, it is ever more likely for a mono-lingual speaker to require information that is only available in a foreign language document. Cross-lingual applications address this need by presenting information in the speaker’s language even when it originally appeared in some other language, using machine translation (MT) in the process. In this paper, we present an evaluation and error analysis of a cross-lingual application that we developed for a government-sponsored evaluation, the 5W task. The 5W task seeks to summarize the information in a natural language sentence by distilling it into the answers to the 5W questions: Who, What, When, Where and Why. To solve this problem, a number of different problems in NLP must be addressed: predicate identification, argument extraction, attachment disambiguation, location and time expression recognition, and (partial) semantic role labeling. In this paper, we address the cross-lingual 5W task: given a source-language sentence, return the 5W’s translated (comprehensibly) into the target language. Success in this task requires a synergy of successful MT and answer selection. The questions we address in this paper are: • How much does machine translation (MT) degrade the performance of cross-lingual 5W systems, as compared to monolingual performance? • Is it better to do source-language analysis and then translate, or do target-language analysis on MT? • Which specific problems in language processing and/or MT cause errors in 5W answers? 
In this evaluation, we compare several different approaches to the cross-lingual 5W task, two that work on the target language (English) and one that works in the source language (Chinese). 423 A central question for many cross-lingual applications is whether to process in the source language and then translate the result, or translate documents first and then process the translation. Depending on how errorful the translation is, results may be more accurate if models are developed for the source language. However, if there are more resources in the target language, then the translate-then-process approach may be more appropriate. We present a detailed analysis, both quantitative and qualitative, of how the approaches differ in performance. We also compare system performance on human translation (which we term reference translations) and MT of the same data in order to determine how much MT degrades system performance. Finally, we do an in-depth analysis of the errors in our 5W approaches, both on the NLP side and the MT side. Our results provide explanations for why different approaches succeed, along with indications of where future effort should be spent. 2 Prior Work The cross-lingual 5W task is closely related to cross-lingual information retrieval and crosslingual question answering (Wang and Oard 2006; Mitamura et al. 2008). In these tasks, a system is presented a query or question in the target language and asked to return documents or answers from a corpus in the source language. Although MT may be used in solving this task, it is only used by the algorithms – the final evaluation is done in the source language. However, in many real-life situations, such as global business, international tourism, or intelligence work, users may not be able to read the source language. In these cases, users must rely on MT to understand the system response. (Parton et al. 2008) examine the case of “translingual” information retrieval, where evaluation is done on translated results in the target language. In cross-lingual information extraction (Sudo et al. 2004) the evaluation is also done on MT, but the goal is to learn knowledge from a large corpus, rather than analyzing individual sentences. The 5W task is also closely related to Semantic Role Labeling (SRL), which aims to efficiently and effectively derive semantic information from text. SRL identifies predicates and their arguments in a sentence, and assigns roles to each argument. For example, in the sentence “I baked a cake yesterday.”, the predicate “baked” has three arguments. “I” is the subject of the predicate, “a cake” is the object and “yesterday” is a temporal argument. Since the release of large data resources annotated with relevant levels of semantic information, such as the FrameNet (Baker et al., 1998) and PropBank corpora (Kingsbury and Palmer, 2003), efficient approaches to SRL have been developed (Carreras and Marquez, 2005). Most approaches to the problem of SRL follow the Gildea and Jurafsky (2002) model. First, for a given predicate, the SRL system identifies its arguments' boundaries. Second, the Argument types are classified depending on an adopted lexical resource such as PropBank or FrameNet. Both steps are based on supervised learning over labeled gold standard data. A final step uses heuristics to resolve inconsistencies when applying both steps simultaneously to the test data. Since many of the SRL resources are English, most of the SRL systems to date have been for English. 
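To make the predicate-argument structure in the "I baked a cake yesterday." example concrete, the sketch below shows one way such SRL output could be represented; the PropBank-style role labels and the data classes are illustrative assumptions, not the output format of any system discussed in this paper.

```python
# Illustrative only: a PropBank-style predicate-argument structure for the
# example sentence "I baked a cake yesterday." The labels are assumed for
# illustration, not produced by any of the systems evaluated here.
from dataclasses import dataclass
from typing import List

@dataclass
class Argument:
    text: str
    role: str   # e.g. "ARG0" (agent), "ARG1" (patient), "ARGM-TMP" (temporal)

@dataclass
class Predicate:
    text: str
    arguments: List[Argument]

baked = Predicate(
    text="baked",
    arguments=[
        Argument("I", "ARG0"),              # logical subject
        Argument("a cake", "ARG1"),         # logical object
        Argument("yesterday", "ARGM-TMP"),  # temporal adjunct
    ],
)

def argument_of(pred: Predicate, role: str) -> str:
    """Return the first argument carrying the given role, or the empty string."""
    return next((a.text for a in pred.arguments if a.role == role), "")

print(argument_of(baked, "ARG0"))      # "I"
print(argument_of(baked, "ARGM-TMP"))  # "yesterday"
```

Under such a representation, the 5W mapping defined later in Table 1 roughly amounts to reading Who, When, Where and Why off the ARG0, ARGM-TMP, ARGM-LOC and ARGM-CAU slots of a chosen top-level predicate, with What covering the predicate and its logical object.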
There has been work in other languages such as German and Chinese (Erk 2006; Sun 2004; Xue and Palmer 2005). The systems for the other languages follow the successful models devised for English, e.g. (Gildea and Palmer, 2002; Chen and Rambow, 2003; Moschitti, 2004; Xue and Palmer, 2004; Haghighi et al., 2005). 3 The Chinese-English 5W Task 3.1 5W Task Description We participated in the 5W task as part of the DARPA GALE (Global Autonomous Language Exploitation) project. The goal is to identify the 5W’s (Who, What, When, Where and Why) for a complete sentence. The motivation for the 5W task is that, as their origin in journalism suggests, the 5W’s cover the key information nuggets in a sentence. If a system can isolate these pieces of information successfully, then it can produce a précis of the basic meaning of the sentence. Note that this task differs from QA tasks, where “Who” and “What” usually refer to definition type questions. In this task, the 5W’s refer to semantic roles within a sentence, as defined in Table 1. In order to get all 5W’s for a sentence correct, a system must identify a top-level predicate, extract the correct arguments, and resolve attachment ambiguity. In the case of multiple top-level predicates, any of the top-level predicates may be chosen. In the case of passive verbs, the Who is the agent (often expressed as a “by clause”, or not stated), and the What should include the syntactic subject. 424 Answers are judged Correct1 if they identify a correct null argument or correctly extract an argument that is present in the sentence. Answers are not penalized for including extra text, such as prepositional phrases or subordinate clauses, unless the extra text includes text from another answer or text from another top-level predicate. In sentence 2a in Table 2, returning “bought and cooked” for the What would be Incorrect. Similarly, returning “bought the fish at the market” for the What would also be Incorrect, since it contains the Where. Answers may also be judged Partial, meaning that only part of the answer was returned. For example, if the What contains the predicate but not the logical object, it is Partial. Since each sentence may have multiple correct sets of 5W’s, it is not straightforward to produce a gold-standard corpus for automatic evaluation. One would have to specify answers for each possible top-level predicate, as well as which parts of the sentence are optional and which are not allowed. This also makes creating training data for system development problematic. For example, in Table 2, the sentence in 2a and 2b is the same, but there are two possible sets of correct answers. Since we could not rely on a goldstandard corpus, we used manual annotation to judge our 5W system, described in section 5. 3.2 The Cross-Lingual 5W Task In the cross-lingual 5W task, a system is given a sentence in the source language and asked to produce the 5W’s in the target language. In this task, both machine translation (MT) and 5W extraction must succeed in order to produce correct answers. One motivation behind the cross-lingual 5W task is MT evaluation. Unlike word- or phrase-overlap measures such as BLEU, the 5W evaluation takes into account “concept” or “nugget” translation. Of course, only the top-level predicate and arguments are evaluated, so it is not a complete evaluation. But it seeks to get at the understandability of the MT output, rather than just n-gram overlap. Translation exacerbates the problem of automatically evaluating 5W systems. 
Since translation introduces paraphrase, rewording and sentence restructuring, the 5W’s may change from one translation of a sentence to another translation of the same sentence. In some cases, roles may swap. For example, in Table 2, sentences 1a and 1b could be valid translations of the same 1 The specific guidelines for determining correctness were formulated by BAE. Chinese sentence. They contain the same information, but the 5W answers are different. Also, translations may produce answers that are textually similar to correct answers, but actually differ in meaning. These differences complicate processing in the source followed by translation. Example: On Tuesday, President Obama met with French President Sarkozy in Paris to discuss the economic crisis. W Definition Example answer WHO Logical subject of the top-level predicate in WHAT, or null. President Obama WHAT One of the top-level predicates in the sentence, and the predicate’s logical object. met with French President Sarkozy WHEN ARGM-TMP of the top-level predicate in WHAT, or null. On Tuesday WHERE ARGM-LOC of the top-level predicate in WHAT, or null. in Paris WHY ARGM-CAU of the top-level predicate in WHAT, or null. to discuss the economic crisis Table 1. Definition of the 5W task, and 5W answers from the example sentence above. 4 5W System We developed a 5W combination system that was based on five other 5W systems. We selected four of these different systems for evaluation: the final combined system (which was our submission for the official evaluation), two systems that did analysis in the target-language (English), and one system that did analysis in the source language (Chinese). In this section, we describe the individual systems that we evaluated, the combination strategy, the parsers that we tuned for the task, and the MT systems. Sentence WHO WHAT 1a Mary bought a cake from Peter. Mary bought a cake 1b Peter sold Mary a cake. Peter sold Mary 2a I bought the fish at the market yesterday and cooked it today. I bought the fish [WHEN: yesterday] 2b I bought the fish at the market yesterday and cooked it today. I cooked it [WHEN: today] Table 2. Example 5W answers. 425 4.1 Latent Annotation Parser For this work, we have re-implemented and enhanced the Berkeley parser (Petrov and Klein 2007) in several ways: (1) developed a new method to handle rare words in English and Chinese; (2) developed a new model of unknown Chinese words based on characters in the word; (3) increased robustness by adding adaptive modification of pruning thresholds and smoothing of word emission probabilities. While the enhancements to the parser are important for robustness and accuracy, it is even more important to train grammars matched to the conditions of use. For example, parsing a Chinese sentence containing full-width punctuation with a parser trained on half-width punctuation reduces accuracy by over 9% absolute F. In English, parsing accuracy is seriously compromised by training a grammar with punctuation and case to process sentences without them. We developed grammars for English and Chinese trained specifically for each genre by subsampling from available treebanks (for English, WSJ, BN, Brown, Fisher, and Switchboard; for Chinese, CTB5) and transforming them for a particular genre (e.g., for informal speech, we replaced symbolic expressions with verbal forms and remove punctuation and case) and by utilizing a large amount of genre-matched self-labeled training parses. 
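As a rough illustration of the genre-matching transformation just described (verbalizing symbolic expressions, then removing punctuation and case for informal speech), a hypothetical preprocessing step might look like the following; the substitution table is invented for illustration and is not the rule set actually used to build the grammars.

```python
import re

# Hypothetical sketch of the genre-matching transformation described above:
# verbalize a few symbolic expressions, strip punctuation, and lowercase, so
# that treebank text better resembles informal speech transcripts. The
# substitution table is illustrative only.
SYMBOL_TO_WORDS = {
    "%": " percent",
    "&": " and ",
}

def to_informal_speech_style(text: str) -> str:
    for symbol, words in SYMBOL_TO_WORDS.items():
        text = text.replace(symbol, words)
    text = re.sub(r"[^\w\s]", " ", text)      # remove punctuation
    text = re.sub(r"\s+", " ", text).strip()  # collapse whitespace
    return text.lower()                       # remove case

print(to_informal_speech_style("Profits rose 5% at Acme, Inc."))
# -> "profits rose 5 percent at acme inc"
```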
Given these genre-specific parses, we extracted chunks and POS tags by script. We also trained grammars with a subset of function tags annotated in the treebank that indicate case role information (e.g., SBJ, OBJ, LOC, MNR) in order to produce function tags. 4.2 Individual 5W Systems The English systems were developed for the monolingual 5W task and not modified to handle MT. They used hand-crafted rules on the output of the latent annotation parser to extract the 5Ws. English-function used the function tags from the parser to map parser constituents to the 5Ws. First the Who, When, Where and Why were extracted, and then the remaining pieces of the sentence were returned as the What. The goal was to make sure to return a complete What answer and avoid missing the object. English-LF, on the other hand, used a system developed over a period of eight years (Meyers et al. 2001) to map from the parser’s syntactic constituents into logical grammatical relations (GLARF), and then extracted the 5Ws from the logical form. As a back-up, it also extracted GLARF relations from another English-treebank trained parser, the Charniak parser (Charniak 2001). After the parses were both converted to the 5Ws, they were then merged, favoring the system that: recognized the passive, filled more 5W slots or produced shorter 5W slots (providing that the WHAT slot consisted of more than just the verb). A third back-up method extracted 5Ws from part-of-speech tag patterns. Unlike English-function, English-LF explicitly tried to extract the shortest What possible, provided there was a verb and a possible object, in order to avoid multiple predicates or other 5W answers. Chinese-align uses the latent annotation parser (trained for Chinese) to parse the Chinese sentences. A dependency tree converter (Johansson and Nuges 2007) was applied to the constituent-based parse trees to obtain the dependency relations and determine top-level predicates. A set of hand-crafted dependency rules based on observation of Chinese OntoNotes were used to map from the Chinese function tags into Chinese 5Ws. Finally, Chinese-align used the alignments of three separate MT systems to translate the 5Ws: a phrase-based system, a hierarchical phrase-based system, and a syntax augmented hierarchical phrase-based system. Chinese-align faced a number of problems in using the alignments, including the fact that the best MT did not always have the best alignment. Since the predicate is essential, it tried to detect when verbs were deleted in MT, and back-off to a different MT system. It also used strategies for finding and correcting noisy alignments, and for filtering When/Where answers from Who and What. 4.3 Hybrid System A merging algorithm was learned based on a development test set. The algorithm selected all 5W’s from a single system, rather than trying to merge W’s from different systems, since the predicates may vary across systems. For each document genre (described in section 5.4), we ranked the systems by performance on the development data. We also experimented with a variety of features (for instance, does “What” include a verb). The best-performing features were used in combination with the ranked list of priority systems to create a rule-based merger. 4.4 MT Systems The MT Combination system used by both of the English 5W systems combined up to nine separate MT systems. System weights for combination were optimized together with the language 426 model score and word penalty for a combination of BLEU and TER (2*(1-BLEU) + TER). 
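Written out explicitly, that tuning objective is simply the following (lower is better, since BLEU rewards higher scores and TER penalizes them); the example values are illustrative, and the actual weight-optimization procedure is not shown.

```python
# The tuning objective quoted above, written out explicitly. Both metrics are
# taken to be in [0, 1]; BLEU is better when higher, TER when lower, so the
# combined score is minimized. BLEU and TER values would come from standard
# MT evaluation tooling, which is not reproduced here.
def combined_error(bleu_score: float, ter_score: float) -> float:
    """2*(1-BLEU) + TER, as used to optimize system-combination weights."""
    return 2.0 * (1.0 - bleu_score) + ter_score

# Illustrative values (BLEU roughly at the formal-text level reported below,
# TER chosen arbitrarily for the example):
print(combined_error(0.352, 0.55))  # 1.846
```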
Rescoring was applied after system combination using large language models and lexical trigger models. Of the nine systems, six were phrasedbased systems (one of these used chunk-level reordering of the Chinese, one used word sense disambiguation, and one used unsupervised Chinese word segmentation), two were hierarchical phrase-based systems, one was a string-todependency system, one was syntax-augmented, and one was a combination of two other systems. Bleu scores on the government supplied test set in December 2008 were 35.2 for formal text, 29.2 for informal text, 33.2 for formal speech, and 27.6 for informal speech. More details may be found in (Matusov et al. 2009). 5 Methods 5.1 5W Systems For the purposes of this evaluation2, we compared the output of 4 systems: English-Function, English-LF, Chinese-align, and the combined system. Each English system was also run on reference translations of the Chinese sentence. So for each sentence in the evaluation corpus, there were 6 systems that each provided 5Ws. 5.2 5W Answer Annotation For each 5W output, annotators were presented with the reference translation, the MT version, and the 5W answers. The 5W system names were hidden from the annotators. Annotators had to select “Correct”, “Partial” or “Incorrect” for each W. For answers that were Partial or Incorrect, annotators had to further specify the source of the error based on several categories (described in section 6). All three annotators were native English speakers who were not system developers for any of the 5W systems that were being evaluated (to avoid biased grading, or assigning more blame to the MT system). None of the annotators knew Chinese, so all of the judgments were based on the reference translations. After one round of annotation, we measured inter-annotator agreement on the Correct, Partial, or Incorrect judgment only. The kappa value was 0.42, which was lower than we expected. Another surprise was that the agreement was lower 2 Note that an official evaluation was also performed by DARPA and BAE. This evaluation provides more finegrained detail on error types and gives results for the different approaches. for When, Where and Why (κ=0.31) than for Who or What (κ=0.48). We found that, in cases where a system would get both Who and What wrong, it was often ambiguous how the remaining W’s should be graded. Consider the sentence: “He went to the store yesterday and cooked lasagna today.” A system might return erroneous Who and What answers, and return Where as “to the store” and When as “today.” Since Where and When apply to different predicates, they cannot both be correct. In order to be consistent, if a system returned erroneous Who and What answers, we decided to mark the When, Where and Why answers Incorrect by default. We added clarifications to the guidelines and discussed areas of confusion, and then the annotators reviewed and updated their judgments. After this round of annotating, κ=0.83 on the Correct, Partial, Incorrect judgments. The remaining disagreements were genuinely ambiguous cases, where a sentence could be interpreted multiple ways, or the MT could be understood in various ways. There was higher agreement on 5W’s answers from the reference text compared to MT text, since MT is inherently harder to judge and some annotators were more flexible than others in grading garbled MT. 
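For reference, agreement on the three-way Correct/Partial/Incorrect judgments can be computed as Cohen's kappa in the usual way; the paper does not spell out the exact variant used, so the sketch below should be read as an assumed, standard formulation applied to toy judgments.

```python
from collections import Counter

# Illustrative computation of Cohen's kappa over Correct/Partial/Incorrect
# judgments from two annotators. This is an assumed, standard formulation;
# the paper reports the kappa values but not the exact computation.
LABELS = ("Correct", "Partial", "Incorrect")

def cohens_kappa(ann1, ann2):
    assert len(ann1) == len(ann2) and ann1
    n = len(ann1)
    observed = sum(x == y for x, y in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    expected = sum((c1[label] / n) * (c2[label] / n) for label in LABELS)
    return (observed - expected) / (1.0 - expected)

# Toy judgments for ten 5W answers (not the actual annotation data):
a1 = ["Correct"] * 6 + ["Partial"] * 2 + ["Incorrect"] * 2
a2 = ["Correct"] * 5 + ["Partial", "Partial", "Partial", "Incorrect", "Incorrect"]
print(round(cohens_kappa(a1, a2), 2))  # 0.83 for this toy data
```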
5.3 5W Error Annotation In addition to judging the system answers by the task guidelines, annotators were asked to provide reason(s) an answer was wrong by selecting from a list of predefined errors. Annotators were asked to use their best judgment to “assign blame” to the 5W system, the MT, or both. There were six types of system errors and four types of MT errors, and the annotator could select any number of errors. (Errors are described further in section 6.) For instance, if the translation was correct, but the 5W system still failed, the blame would be assigned to the system. If the 5W system picked an incorrectly translated argument (e.g., “baked a moon” instead of “baked a cake”), then the error would be assigned to the MT system. Annotators could also assign blame to both systems, to indicate that they both made mistakes. Since this annotation task was a 10-way selection, with multiple selections possible, there were some disagreements. However, if categorized broadly into 5W System errors only, MT errors only, and both 5W System and MT errors, then the annotators had a substantial level of agreement (κ=0.75 for error type, on sentences where both annotators indicated an error). 427 5.4 5 W Corpus The full evaluation corpus is 350 documents, roughly evenly divided between four genres: formal text (newswire), informal text (blogs and newsgroups), formal speech (broadcast news) and informal speech (broadcast conversation). For this analysis, we randomly sampled documents to judge from each of the genres. There were 50 documents (249 sentences) that were judged by a single annotator. A subset of that set, with 22 documents and 103 sentences, was judged by two annotators. In comparing the results from one annotator to the results from both annotators, we found substantial agreement. Therefore, we present results from the single annotator so we can do a more in-depth analysis. Since each sentence had 5W’s, and there were 6 systems that were compared, there were 7,500 single-annotator judgments over 249 sentences. 6 Results Figure 1 shows the cross-lingual performance (on MT) of all the systems for each 5W. The best monolingual performance (on human translations) is shown as a dashed line (% Correct only). If a system returned Incorrect answers for Who and What, then the other answers were marked Incorrect (as explained in section 5.2). For the last 3W’s, the majority of errors were due to this (details in Figure 1), so our error analysis focuses on the Who and What questions. 6.1 Monolingual 5W Performance To establish a monolingual baseline, the English 5W system was run on reference (human) translations of the Chinese text. For each partial or incorrect answer, annotators could select one or more of these reasons: • Wrong predicate or multiple predicates. • Answer contained another 5W answer. • Passive handled wrong (WHO/WHAT). • Answer missed. • Argument attached to wrong predicate. Figure 1 shows the performance of the best monolingual system for each 5W as a dashed line. The What question was the hardest, since it requires two pieces of information (the predicate and object). The When, Where and Why questions were easier, since they were null most of the time. (In English OntoNotes 2.0, 38% of sentences have a When, 15% of sentences have a Where, and only 2.6% of sentences have a Why.) The most common monolingual system error on these three questions was a missed answer, accounting for all of the Where errors, all but one Why error and 71% of the When errors. 
The remaining When errors usually occurred when the system assumed the wrong sense for adverbs (such as “then” or “just”). Missing Other 5W Wrong/Multiple Predicates Wrong REF-func 37 29 22 7 REF-LF 54 20 17 13 MT-func 18 18 18 8 MT-LF 26 19 10 11 Chinese 23 17 14 8 Hybrid 13 17 15 12 Table 3. Percentages of Who/What errors attributed to each system error type. The top half of Table 3 shows the reasons attributed to the Who/What errors for the reference corpus. Since English-LF preferred shorter answers, it frequently missed answers or parts of Figure 1. System performance on each 5W. “Partial” indicates that part of the answer was missing. Dashed lines show the performance of the best monolingual system (% Correct on human translations). For the last 3W’s, the percent of answers that were Incorrect “by default” were: 30%, 24%, 27% and 22%, respectively, and 8% for the best monolingual system 60 60 56 66 36 40 38 42 56 59 59 64 63 70 66 73 68 75 71 78 19 20 19 14 0 10 20 30 40 50 60 70 80 90 100 Engfunc Eng-LF Chinese Hybrid Engfunc Eng-LF Chinese Hybrid Engfunc Eng-LF Chinese Hybrid Engfunc Eng-LF Chinese Hybrid Engfunc Eng-LF Chinese Hybrid WHO WHAT WHEN WHERE WHY Partial Correct 90 75 81 83 90 Best monolingual 428 answers. English-LF also had more Partial answers on the What question: 66% Correct and 12% Partial, versus 75% Correct and 1% Partial for English-function. On the other hand, Englishfunction was more likely to return answers that contained incorrect extra information, such as another 5W or a second predicate. 6.2 Effect of MT on 5W Performance The cross-lingual 5W task requires that systems return intelligible responses that are semantically equivalent to the source sentence (or, in the case of this evaluation, equivalent to the reference). As can be seen in Figure 1, MT degrades the performance of the 5W systems significantly, for all question types, and for all systems. Averaged over all questions, the best monolingual system does 19% better than the best cross-lingual system. Surprisingly, even though English-function outperformed English-LF on the reference data, English-LF does consistently better on MT. This is likely due to its use of multiple back-off methods when the parser failed. 6.3 Source-Language vs. Target-Language The Chinese system did slightly worse than either English system overall, but in the formal text genre, it outperformed both English systems. Although the accuracies for the Chinese and English systems are similar, the answers vary a lot. Nearly half (48%) of the answers can be answered correctly by both the English system and the Chinese system. But 22% of the time, the English system returned the correct answer when the Chinese system did not. Conversely, 10% of the answers were returned correctly by the Chinese system and not the English systems. The hybrid system described in section 4.2 attempts to exploit these complementary advantages. After running the hybrid system, 61% of the answers were from English-LF, 25% from English-function, 7% from Chinese-align, and the remaining 7% were from the other Chinese methods (not evaluated here). The hybrid did better than its parent systems on all 5Ws, and the numbers above indicate that further improvement is possible with a better combination strategy. 6.4 Cross-Lingual 5W Error Analysis For each Partial or Incorrect answer, annotators were asked to select system errors, translation errors, or both. (Further analysis is necessary to distinguish between ASR errors and MT errors.) 
The translation errors considered were: • Word/phrase deleted. • Word/phrase mistranslated. • Word order mixed up. • MT unreadable. Table 4 shows the translation reasons attributed to the Who/What errors. For all systems, the errors were almost evenly divided between system-only, MT-only and both, although the Chinese system had a higher percentage of systemonly errors. The hybrid system was able to overcome many system errors (for example, in Table 2, only 13% of the errors are due to missing answers), but still suffered from MT errors. Table 4. Percentages of Who/What errors by each system attributed to each translation error type. Mistranslation was the biggest translation problem for all the systems. Consider the first example in Figure 3. Both English systems correctly extracted the Who and the When, but for Mistranslation Deletion Word Order Unreadable MT-func 34 18 24 18 MT-LF 29 22 21 14 Chinese 32 17 9 13 Hybrid 35 19 27 18 MT: After several rounds of reminded, I was a little bit Ref: After several hints, it began to come back to me. 经过几番提醒,我回忆起来了一点点。 MT: The Guizhou province, within a certain bank robber, under the watchful eyes of a weak woman, and, with a knife stabbed the woman. Ref: I saw that in a bank in Guizhou Province, robbers seized a vulnerable young woman in front of a group of onlookers and stabbed the woman with a knife. 看到贵州省某银行内,劫匪在众目睽睽之下,抢夺一个弱女子,并且,用刀刺伤该女子。 MT: Woke up after it was discovered that the property is not more than eleven people do not even said that the memory of the receipt of the country into the country. Ref: Well, after waking up, he found everything was completely changed. Apart from having additional eleven grandchildren, even the motherland as he recalled has changed from a socialist country to a capitalist country. 那么醒来之后却发现物是人非,多了十一个孙子不说,连祖国也从记忆当中的社会主义国家变成了资本主义国家 Figure 3 Example sentences that presented problems for the 5W systems. 429 What they returned “was a little bit.” This is the correct predicate for the sentence, but it does not match the meaning of the reference. The Chinese 5W system was able to select a better translation, and instead returned “remember a little bit.” Garbled word order was chosen for 21-24% of the target-language system Who/What errors, but only 9% of the source-language system Who/What errors. The source-language word order problems tended to be local, within-phrase errors (e.g., “the dispute over frozen funds” was translated as “the freezing of disputes”). The target-language system word order problems were often long-distance problems. For example, the second sentence in Figure 3 has many phrases in common with the reference translation, but the overall sentence makes no sense. The watchful eyes actually belong to a “group of onlookers” (deleted). Ideally, the robber would have “stabbed the woman” “with a knife,” rather than vice versa. Long-distance phrase movement is a common problem in Chinese-English MT, and many MT systems try to handle it (e.g., Wang et al. 2007). By doing analysis in the source language, the Chinese 5W system is often able to avoid this problem – for example, it successfully returned “robbers” “grabbed a weak woman” for the Who/What of this sentence. Although we expected that the Chinese system would have fewer problems with MT deletion, since it could choose from three different MT versions, MT deletion was a problem for all systems. In looking more closely at the deletions, we noticed that over half of deletions were verbs that were completely missing from the translated sentence. 
Since MT systems are tuned for wordbased overlap measures (such as BLEU), verb deletion is penalized equally as, for example, determiner deletion. Intuitively, a verb deletion destroys the central meaning of a sentence, while a determiner is rarely necessary for comprehension. Other kinds of deletions included noun phrases, pronouns, named entities, negations and longer connecting phrases. Deletion also affected When and Where. Deleting particles such as “in” and “when” that indicate a location or temporal argument caused the English systems to miss the argument. Word order problems in MT also caused attachment ambiguity in When and Where. The “unreadable” category was an option of last resort for very difficult MT sentences. The third sentence in Figure 3 is an example where ASR and MT errors compounded to create an unparseable sentence. 7 Conclusions In our evaluation of various 5W systems, we discovered several characteristics of the task. The What answer was the hardest for all systems, since it is difficult to include enough information to cover the top-level predicate and object, without getting penalized for including too much. The challenge in the When, Where and Why questions is due to sparsity – these responses occur in much fewer sentences than Who and What, so systems most often missed these answers. Since this was a new task, this first evaluation showed clear issues on the language analysis side that can be improved in the future. The best cross-lingual 5W system was still 19% worse than the best monolingual 5W system, which shows that MT significantly degrades sentence-level understanding. A serious problem in MT for systems was deletion. Chinese constituents that were never translated caused serious problems, even when individual systems had strategies to recover. When the verb was deleted, no top level predicate could be found and then all 5Ws were wrong. One of our main research questions was whether to extract or translate first. We hypothesized that doing source-language analysis would be more accurate, given the noise in Chinese MT, but the systems performed about the same. This is probably because the English tools (logical form extraction and parser) were more mature and accurate than the Chinese tools. Although neither source-language nor targetlanguage analysis was able to circumvent problems in MT, each approach had advantages relative to the other, since they did well on different sets of sentences. For example, Chinese-align had fewer problems with word order, and most of those were due to local word-order problems. Since the source-language and target-language systems made different kinds of mistakes, we were able to build a hybrid system that used the relative advantages of each system to outperform all systems. The different types of mistakes made by each system suggest features that can be used to improve the combination system in the future. Acknowledgments This work was supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors. 430 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In COLING-ACL '98: Proceedings of the Conference, held at the University of Montréal, pages 86–90. Xavier Carreras and Lluís Màrquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. 
In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152–164. Eugene Charniak. 2001. Immediate-head parsing for language models. In Proceedings of the 39th Annual Meeting on Association For Computational Linguistics (Toulouse, France, July 06 - 11, 2001). John Chen and Owen Rambow. 2003. Use of deep linguistic features for the recognition and labeling of semantic arguments. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan. Katrin Erk and Sebastian Pado. 2006. Shalmaneser – a toolchain for shallow semantic parsing. Proceedings of LREC. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of the 40th Annual Conference of the Association for Computational Linguistics (ACL-02), Philadelphia, PA, USA. Mary Harper and Zhongqiang Huang. 2009. Chinese Statistical Parsing, chapter to appear. Aria Haghighi, Kristina Toutanova, and Christopher Manning. 2005. A joint model for semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 173–176. Paul Kingsbury and Martha Palmer. 2003. Propbank: the next level of treebank. In Proceedings of Treebanks and Lexical Theories. Evgeny Matusov, Gregor Leusch, & Hermann Ney: Learning to combine machine translation systems. In: Cyril Goutte, Nicola Cancedda, Marc Dymetman, & George Foster (eds.) Learning machine translation. (Cambridge, Mass.: The MIT Press, 2009); pp.257-276. Adam Meyers, Ralph Grishman, Michiko Kosaka and Shubin Zhao. 2001. Covering Treebanks with GLARF. In Proceedings of the ACL 2001 Workshop on Sharing Tools and Resources. Annual Meeting of the ACL. Association for Computational Linguistics, Morristown, NJ, 51-58. Teruko Mitamura, Eric Nyberg, Hideki Shima, Tsuneaki Kato, Tatsunori Mori, Chin-Yew Lin, Ruihua Song, Chuan-Jie Lin, Tetsuya Sakai, Donghong Ji, and Noriko Kando. 2008. Overview of the NTCIR-7 ACLIA Tasks: Advanced CrossLingual Information Access. In Proceedings of the Seventh NTCIR Workshop Meeting. Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting syntactic and shallow semantic kernels for question answer classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 776–783. Kristen Parton, Kathleen R. McKeown, James Allan, and Enrique Henestroza. Simultaneous multilingual search for translingual information retrieval. In Proceedings of ACM 17th Conference on Information and Knowledge Management (CIKM), Napa Valley, CA, 2008. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2007). Sudo, K., Sekine, S., and Grishman, R. 2004. Crosslingual information extraction system evaluation. In Proceedings of the 20th international Conference on Computational Linguistics. Honglin Sun and Daniel Jurafsky. 2004. Shallow Semantic Parsing of Chinese. In Proceedings of NAACL-HLT. Cynthia A. Thompson, Roger Levy, and Christopher Manning. 2003. A generative model for semantic role labeling. In 14th European Conference on Machine Learning. Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. 
In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 88–94, Barcelona, Spain, July. Association for Computational Linguistics. Xue, Nianwen and Martha Palmer. 2005. Automatic semantic role labeling for Chinese verbs. InProceedings of the Nineteenth International Joint Conference on Artificial Intelligence, pages 1160-1165. Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese Syntactic Reordering for Statistical Machine Translation. Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 737-745. Jianqiang Wang and Douglas W. Oard, 2006. "Combining Bidirectional Translation and Synonymy for Cross-Language Information Retrieval," in 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 202-209. 431
2009
48
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 432–440, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Bilingual Co-Training for Monolingual Hyponymy-Relation Acquisition Jong-Hoon Oh, Kiyotaka Uchimoto, and Kentaro Torisawa Language Infrastructure Group, MASTAR Project, National Institute of Information and Communications Technology (NICT) 3-5 Hikaridai Seika-cho, Soraku-gun, Kyoto 619-0289 Japan {rovellia,uchimoto,torisawa}@nict.go.jp Abstract This paper proposes a novel framework called bilingual co-training for a largescale, accurate acquisition method for monolingual semantic knowledge. In this framework, we combine the independent processes of monolingual semanticknowledge acquisition for two languages using bilingual resources to boost performance. We apply this framework to largescale hyponymy-relation acquisition from Wikipedia. Experimental results show that our approach improved the F-measure by 3.6–10.3%. We also show that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance. 1 Motivation Acquiring and accumulating semantic knowledge are crucial steps for developing high-level NLP applications such as question answering, although it remains difficult to acquire a large amount of highly accurate semantic knowledge. This paper proposes a novel framework for a large-scale, accurate acquisition method for monolingual semantic knowledge, especially for semantic relations between nominals such as hyponymy and meronymy. We call the framework bilingual cotraining. The acquisition of semantic relations between nominals can be seen as a classification task of semantic relations – to determine whether two nominals hold a particular semantic relation (Girju et al., 2007). Supervised learning methods, which have often been applied to this classification task, have shown promising results. In those methods, however, a large amount of training data is usually required to obtain high performance, and the high costs of preparing training data have always been a bottleneck. Our research on bilingual co-training sprang from a very simple idea: perhaps training data in a language can be enlarged without much cost if we translate training data in another language and add the translation to the training data in the original language. We also noticed that it may be possible to further enlarge the training data by translating the reliable part of the classification results in another language. Since the learning settings (feature sets, feature values, training data, corpora, and so on) are usually different in two languages, the reliable part in one language may be overlapped by an unreliable part in another language. Adding the translated part of the classification results to the training data will improve the classification results in the unreliable part. This process can also be repeated by swapping the languages, as illustrated in Figure 1. Actually, this is nothing other than a bilingual version of co-training (Blum and Mitchell, 1998). 
Language 1 Language 2 Iteration Manually Prepared Training Data for Language 1 Classifier Classifier Training Training Enlarged Training Data for Language 1 Enlarged Training Data for Language 2 Manually Prepared Training Data for Language 2 Classifier Classifier Further Enlarged Training Data for Language 1 Further Enlarged Training Data for Language 2 Translate reliable parts of classification results Training Training Training Training ….. ….. Translate reliable parts of classification results Figure 1: Concept of bilingual co-training Let us show an example in our current task: hyponymy-relation acquisition from Wikipedia. Our original approach for this task was super432 vised learning based on the approach proposed by Sumida et al. (2008), which was only applied for Japanese and achieved around 80% in F-measure. In their approach, a common substring in a hypernym and a hyponym is assumed to be one strong clue for recognizing that the two words constitute a hyponymy relation. For example, recognizing a proper hyponymy relation between two Japanese words,  (kouso meaning enzyme) and   (kasuibunkaikouso meaning hydrolase), is relatively easy because they share a common suffix: kouso. On the other hand, judging whether their English translations (enzyme and hydrolase) have a hyponymy relation is probably more difficult since they do not share any substrings. A classifier for Japanese will regard the hyponymy relation as valid with high confidence, while a classifier for English may not be so positive. In this case, we can compensate for the weak part of the English classifier by adding the English translation of the Japanese hyponymy relation, which was recognized with high confidence, to the English training data. In addition, if we repeat this process by swapping English and Japanese, further improvement may be possible. Furthermore, the reliable parts that are automatically produced by a classifier can be larger than manually tailored training data. If this is the case, the effect of adding the translation to the training data can be quite large, and the same level of effect may not be achievable by a reasonable amount of labor for preparing the training data. This is the whole idea. Through a series of experiments, this paper shows that the above idea is valid at least for one task: large-scale monolingual hyponymy-relation acquisition from English and Japanese Wikipedia. Experimental results showed that our method based on bilingual co-training improved the performance of monolingual hyponymy-relation acquisition about 3.6–10.3% in the F-measure. Bilingual co-training also enables us to build classifiers for two languages in tandem with the same combined amount of data as would be required for training a single classifier in isolation while achieving superior performance. People probably expect that a key factor in the success of this bilingual co-training is how to translate the training data. We actually did translation by a simple look-up procedure in the existing translation dictionaries without any machine translation systems or disambiguation processes. Despite this simple approach, we obtained consistent improvement in our task using various translation dictionaries. This paper is organized as follows. Section 2 presents bilingual co-training, and Section 3 precisely describes our system. Section 4 describes our experiments and presents results. Section 5 discusses related work. Conclusions are drawn and future work is mentioned in Section 6. 
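The shared-suffix clue in the kouso / kasuibunkaikouso example above can be approximated by a simple string test, shown here on the romanized forms given in the text; the actual classifiers instead rely on morphological analysis to identify head morphemes and words (Section 3.2), so this is only a rough, illustrative stand-in.

```python
# A rough approximation of the shared-head clue discussed above: does the
# hyponym candidate end with the hypernym candidate's (crudely guessed) head
# word? The real feature extraction uses morphological analyzers rather than
# this simple suffix test.
def shares_head_suffix(hypernym: str, hyponym: str) -> bool:
    hyper_head = hypernym.split()[-1].lower()   # crude head-word guess
    hypo = hyponym.lower()
    return hypo.endswith(hyper_head) and hypo != hyper_head

print(shares_head_suffix("kouso", "kasuibunkaikouso"))  # True  (enzyme / hydrolase, Japanese)
print(shares_head_suffix("enzyme", "hydrolase"))        # False (no shared substring)
print(shares_head_suffix("tiger", "Siberian tiger"))    # True
```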
2 Bilingual Co-Training Let S and T be two different languages, and let CL be a set of class labels to be obtained as a result of learning/classification. To simplify the discussion, we assume that a class label is binary; i.e., the classification results are “yes” or “no.” Thus, CL = {yes, no}. Also, we denote the set of all nonnegative real numbers by R+. Assume X = XS ∪XT is a set of instances in languages S and T to be classified. In the context of a hyponymy-relation acquisition task, the instances are pairs of nominals. Then we assume that classifier c assigns class label cl in CL and confidence value r for assigning the label, i.e., c(x) = (x, cl, r), where x ∈X, cl ∈CL, and r ∈R+. Note that we used support vector machines (SVMs) in our experiments and (the absolute value of) the distance between a sample and the hyperplane determined by the SVMs was used as confidence value r. The training data are denoted by L ⊂X×CL, and we denote the learning by function LEARN; if classifier c is trained by training data L, then c = LEARN(L). Particularly, we denote the training sets for S and T that are manually prepared by LS and LT , respectively. Also, bilingual instance dictionary DBI is defined as the translation pairs of instances in XS and XT . Thus, DBI = {(s, t)} ⊂XS × XT . In the case of hyponymy-relation acquisition in English and Japanese, (s, t) ∈DBI could be (s=(enzyme, hydrolase), t=( (meaning enzyme),   (meaning hydrolase))). Our bilingual co-training is given in Figure 2. In the initial stage, c0 S and c0 T are learned with manually labeled instances LS and LT (lines 2–5). Then ci S and ci T are applied to classify instances in XS and XT (lines 6–7). Denote CRi S as a set of the classification results of ci S on instances XS that is not in Li S and is registered in DBI. Lines 10–18 describe a way of selecting from CRi S newly la433 1: i = 0 2: L0 S = LS; L0 T = LT 3: repeat 4: ci S := LEARN(Li S) 5: ci T := LEARN(Li T ) 6: CRi S := {ci S(xS)|xS ∈XS, ∀cl (xS, cl) /∈Li S, ∃xT (xS, xT ) ∈DBI} 7: CRi T := {ci T (xT )|xT ∈XT , ∀cl (xT , cl) /∈Li T , ∃xS (xS, xT ) ∈DBI} 8: L(i+1) S := Li S 9: L(i+1) T := Li T 10: for each (xS, clS, rS) ∈TopN(CRi S) do 11: for each xT such that (xS, xT ) ∈DBI and (xT , clT , rT ) ∈CRi T do 12: if rS > θ then 13: if rT < θ or clS = clT then 14: L(i+1) T := L(i+1) T ∪{(xT , clS)} 15: end if 16: end if 17: end for 18: end for 19: for each (xT , clT , rT ) ∈TopN(CRi T ) do 20: for each xS such that (xS, xT ) ∈DBI and (xS, clS, rS) ∈CRi S do 21: if rT > θ then 22: if rS < θ or clS = clT then 23: L(i+1) S := L(i+1) S ∪{(xS, clT )} 24: end if 25: end if 26: end for 27: end for 28: i = i + 1 29: until a fixed number of iterations is reached Figure 2: Pseudo-code of bilingual co-training beled instances to be added to a new training set in T. TopN(CRi S) is a set of ci S(x), whose rS is top-N highest in CRi S. (In our experiments, N = 900.) During the selection, ci S acts as a teacher and ci T as a student. The teacher instructs his student in the class label of xT , which is actually a translation of xS by bilingual instance dictionary DBI, through clS only if he can do it with a certain level of confidence, say rS > θ, and if one of two other condition meets (rT < θ or clS = clT ). clS = clT is a condition to avoid problems, especially when the student also has a certain level of confidence in his opinion on a class label but disagrees with the teacher: rT > θ and clS ̸= clT . In that case, the teacher does nothing and ignores the instance. 
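The procedure in Figure 2 can be rendered compactly as follows. This is an illustrative re-statement, not the authors' implementation: LEARN, classification, and the bilingual instance dictionary are abstracted behind plain callables and dictionaries (one translation per instance is assumed for simplicity), rather than the TinySVM-based components and Wikipedia-derived dictionary used in the actual system.

```python
# Simplified, illustrative rendering of the Figure 2 pseudo-code.
# Labels are "yes"/"no"; r is a non-negative confidence (in the paper, the
# absolute distance of an instance from the SVM hyperplane).
def bilingual_co_training(L_S, L_T, X_S, X_T, D_BI, learn, classify,
                          theta=1.0, top_n=900, iterations=100):
    """L_S, L_T: dicts instance -> label (training data for languages S, T).
    X_S, X_T: sets of unlabeled instances. D_BI: set of (x_S, x_T) pairs.
    learn(L) -> classifier; classify(c, x) -> (label, confidence)."""
    s2t = {s: t for s, t in D_BI}
    t2s = {t: s for s, t in D_BI}

    for _ in range(iterations):
        c_S, c_T = learn(L_S), learn(L_T)
        # Classify instances that are still unlabeled and have a translation.
        CR_S = {x: classify(c_S, x) for x in X_S if x not in L_S and x in s2t}
        CR_T = {x: classify(c_T, x) for x in X_T if x not in L_T and x in t2s}

        def teach(CR_teacher, CR_student, teacher_to_student, L_student):
            # Take the teacher's top-N most confident decisions ...
            top = sorted(CR_teacher.items(), key=lambda kv: kv[1][1],
                         reverse=True)[:top_n]
            for x_teacher, (cl_teacher, r_teacher) in top:
                x_student = teacher_to_student[x_teacher]
                if x_student not in CR_student or r_teacher <= theta:
                    continue
                cl_student, r_student = CR_student[x_student]
                # ... and add the translated label unless the student is both
                # confident (r >= theta) and in disagreement with the teacher.
                if r_student < theta or cl_student == cl_teacher:
                    L_student[x_student] = cl_teacher

        teach(CR_S, CR_T, s2t, L_T)   # S teaches T
        teach(CR_T, CR_S, t2s, L_S)   # T teaches S
    return learn(L_S), learn(L_T)
```

Here learn and classify are the only language-specific pieces; in the paper they are SVM classifiers whose confidence r is the distance to the separating hyperplane, and the default parameter values above (theta = 1, top_n = 900, 100 iterations) are the settings reported in Sections 2 and 4.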
Condition rT < θ enables the teacher to instruct his student in the class label of xT in spite of their disagreement in a class label. If every condition is satisfied, (xT , clS) is added to existing labeled instances L(i+1) T . The roles are reversed in lines 19–27 so that ci T becomes a teacher and ci S a student. Similar to co-training (Blum and Mitchell, 1998), one classifier seeks another’s opinion to select new labeled instances. One main difference between co-training and bilingual co-training is the space of instances: co-training is based on different features of the same instances, and bilingual co-training is based on different spaces of instances divided by languages. Since some of the instances in different spaces are connected by a bilingual instance dictionary, they seem to be in the same space. Another big difference lies in the role of the two classifiers. The two classifiers in co-training work on the same task, but those in bilingual co-training do the same type of task rather than the same task. 3 Acquisition of Hyponymy Relations from Wikipedia Our system, which acquires hyponymy relations from Wikipedia based on bilingual co-training, is described in Figure 3. The following three main parts are described in this section: candidate extraction, hyponymy-relation classification, and bilingual instance dictionary construction. Classifier in E Classifier in J Labeled instances Labeled instances Wikipedia Articles in E Wikipedia Articles in J Candidates in J Candidates in E Acquisition of translation dictionary Bilingual Co-Training Unlabeled instances in J Unlabeled instances in E Bilingual instance dictionary Newly labeled instances for E Newly labeled instances for J Translation dictionary Hyponymy-relation candidate extraction Hyponymy-relation candidate extraction Figure 3: System architecture 3.1 Candidate Extraction We follow Sumida et al. (2008) to extract hyponymy-relation candidates from English and Japanese Wikipedia. A layout structure is chosen 434 (a) Layout structure of article TIGER Range Siberian tiger Bengal tiger Subspecies Taxonomy Tiger Malayan tiger (b) Tree structure of Figure 4(a) Figure 4: Wikipedia article and its layout structure as a source of hyponymy relations because it can provide a huge amount of them (Sumida et al., 2008; Sumida and Torisawa, 2008)1, and recognition of the layout structure is easy regardless of languages. Every English and Japanese Wikipedia article was transformed into a tree structure like Figure 4, where layout items title, (sub)section headings, and list items in an article were used as nodes in a tree structure. Sumida et al. (2008) found that some pairs consisting of a node and one of its descendants constituted a proper hyponymy relation (e.g., (TIGER, SIBERIAN TIGER)), and this could be a knowledge source of hyponymy relation acquisition. A hyponymy-relation candidate is then extracted from the tree structure by regarding a node as a hypernym candidate and all its subordinate nodes as hyponym candidates of the hypernym candidate (e.g., (TIGER, TAXONOMY) and (TIGER, SIBERIAN TIGER) from Figure 4). 39 M English hyponymy-relation candidates and 10 M Japanese ones were extracted from Wikipedia. These candidates are classified into proper hyponymy relations and others by using the classifiers described below. 3.2 Hyponymy-Relation Classification We use SVMs (Vapnik, 1995) as classifiers for the classification of the hyponymy relations on the hyponymy-relation candidates. 
Let hyper be a hypernym candidate, hypo be a hyper’s hyponym candidate, and (hyper, hypo) be a hyponymyrelation candidate. The lexical, structure-based, and infobox-based features of (hyper, hypo) in Table 1 are used for building English and Japanese classifiers. Note that SF3–SF5 and IF were not 1Sumida et al. (2008) reported that they obtained 171 K, 420 K, and 1.48 M hyponymy relations from a definition sentence, a category system, and a layout structure in Japanese Wikipedia, respectively. used in Sumida et al. (2008) but LF1–LF5 and SF1–SF2 are the same as their feature set. Let us provide an overview of the feature sets used in Sumida et al. (2008). See Sumida et al. (2008) for more details. Lexical features LF1–LF5 are used to recognize the lexical evidence encoded in hyper and hypo for hyponymy relations. For example, (hyper,hypo) is often a proper hyponymy relation if hyper and hypo share the same head morpheme or word. In LF1 and LF2, such information is provided along with the words/morphemes and the parts of speech of hyper and hypo, which can be multiword/morpheme nouns. TagChunk (Daum´e III et al., 2005) for English and MeCab (MeCab, 2008) for Japanese were used to provide the lexical features. Several simple lexical patterns2 were also applied to hyponymy-relation candidates. For example, “List of artists” is converted into “artists” by lexical pattern “list of X.” Hyponymy-relation candidates whose hypernym candidate matches such a lexical pattern are likely to be valid (e.g., (List of artists, Leonardo da Vinci)). We use LF4 for dealing with these cases. If a typical or frequently used section heading in a Wikipedia article, such as “History” or “References,” is used as a hyponym candidate in a hyponymy-relation candidate, the hyponymy-relation candidate is usually not a hyponymy relation. LF5 is used to recognize these hyponymy-relation candidates. Structure-based features are related to the tree structure of Wikipedia articles from which hyponymy-relation candidate (hyper,hypo) is extracted. SF1 provides the distance between hyper and hypo in the tree structure. SF2 represents the type of layout items from which hyper and hypo are originated. These are the feature sets used in Sumida et al. (2008). We also added some new items to the above feature sets. SF3 represents the types of tree nodes including root, leaf, and others. For example, (hyper,hypo) is seldom a hyponymy relation if hyper is from a root node (or title) and hypo is from a hyper’s child node (or section headings). SF4 and SF5 represent the structural contexts of hyper and hypo in a tree structure. They can provide evidence related to similar hyponymyrelation candidates in the structural contexts. An infobox-based feature, IF, is based on a 2We used the same Japanese lexical patterns in Sumida et al. (2008) to build English lexical patterns with them. 
435 Type Description Example LF1 Morphemes/words hyper: tiger∗, hypo: Siberian, hypo: tiger∗ LF2 POS of morphemes/words hyper: NN∗, hypo: NP, hypo: NN∗ LF3 hyper and hypo, themselves hyper: Tiger, hypo: Siberian tiger LF4 Used lexical patterns hyper: “List of X”, hypo: “Notable X” LF5 Typical section headings hyper: History, hypo: Reference SF1 Distance between hyper and hypo 3 SF2 Type of layout items hyper: title, hypo: bulleted list SF3 Type of tree nodes hyper: root node, hypo: leaf node SF4 LF1 and LF3 of hypo’s parent node LF3:Subspecies SF5 LF1 and LF3 of hyper’s child node LF3: Taxonomy IF Semantic properties of hyper and hypo hyper: (taxobox,species), hypo: (taxobox,name) Table 1: Feature type and its value. ∗in LF1 and LF2 represent the head morpheme/word and its POS. Except those in LF4 and LF5, examples are derived from (TIGER, SIBERIAN TIGER) in Figure 4. Wikipedia infobox, a special kind of template, that describes a tabular summary of an article subject expressed by attribute-value pairs. An attribute type coupled with the infobox name to which it belongs provides the semantic properties of its value that enable us to easily understand what the attribute value means (Auer and Lehmann, 2007; Wu and Weld, 2007). For example, infobox template City Japan in Wikipedia article Kyoto contains several attribute-value pairs such as “Mayor=Daisaku Kadokawa” as attribute=its value. What Daisaku Kadokawa, the attribute value of mayor in the example, represents is hard to understand alone if we lack knowledge, but its attribute type, mayor, gives a clue–Daisaku Kadokawa is a mayor related to Kyoto. These semantic properties enable us to discover semantic evidence for hyponymy relations. We extract triples (infobox name, attribute type, attribute value) from the Wikipedia infoboxes and encode such information related to hyper and hypo in our feature set IF.3 3.3 Bilingual Instance Dictionary Construction Multilingual versions of Wikipedia articles are connected by cross-language links and usually have titles that are bilinguals of each other (Erdmann et al., 2008). English and Japanese articles connected by a cross-language link are extracted from Wikipedia, and their titles are regarded as translation pairs4. The translation pairs between 3We obtained 1.6 M object-attribute-value triples in Japanese and 5.9 M in English. 4197 K translation pairs were extracted. English and Japanese terms are used for building bilingual instance dictionary DBI for hyponymyrelation acquisition, where DBI is composed of translation pairs between English and Japanese hyponymy-relation candidates5. 4 Experiments We used the MAY 2008 version of English Wikipedia and the JUNE 2008 version of Japanese Wikipedia for our experiments. 24,000 hyponymy-relation candidates, randomly selected in both languages, were manually checked to build training, development, and test sets6. Around 8,000 hyponymy relations were found in the manually checked data for both languages7. 20,000 of the manually checked data were used as a training set for training the initial classifier. The rest were equally divided into development and test sets. The development set was used to select the optimal parameters in bilingual co-training and the test set was used to evaluate our system. We used TinySVM (TinySVM, 2002) with a polynomial kernel of degree 2 as a classifier. The maximum iteration number in the bilingual cotraining was set as 100. Two parameters, θ and TopN, were selected through experiments on the development set. 
θ = 1 and TopN=900 showed 5We also used redirection links in English and Japanese Wikipedia for recognizing the variations of terms when we built a bilingual instance dictionary with Wikipedia crosslanguage links. 6It took about two or three months to check them in each language. 7Regarding a hyponymy relation as a positive sample and the others as a negative sample for training SVMs, “positive sample:negative sample” was about 8,000:16,000=1:2 436 the best performance and were used as the optimal parameter in the following experiments. We conducted three experiments to show effects of bilingual co-training, training data size, and bilingual instance dictionaries. In the first two experiments, we experimented with a bilingual instance dictionary derived from Wikipedia crosslanguage links. Comparison among systems based on three different bilingual instance dictionaries is shown in the third experiment. Precision (P), recall (R), and F1-measure (F1), as in Eq (1), were used as the evaluation measures, where Rel represents a set of manually checked hyponymy relations and HRbyS represents a set of hyponymy-relation candidates classified as hyponymy relations by the system: P = |Rel ∩HRbyS|/|HRbyS| (1) R = |Rel ∩HRbyS|/|Rel| F1 = 2 × (P × R)/(P + R) 4.1 Effect of Bilingual Co-Training ENGLISH JAPANESE P R F1 P R F1 SYT 78.5 63.8 70.4 75.0 77.4 76.1 INIT 77.9 67.4 72.2 74.5 78.5 76.6 TRAN 76.8 70.3 73.4 76.7 79.3 78.0 BICO 78.0 83.7 80.7 78.3 85.2 81.6 Table 2: Performance of different systems (%) Table 2 shows the comparison results of the four systems. SYT represents the Sumida et al. (2008) system that we implemented and tested with the same data as ours. INIT is a system based on initial classifier c0 in bilingual co-training. We translated training data in one language by using our bilingual instance dictionary and added the translation to the existing training data in the other language like bilingual co-training did. The size of the English and Japanese training data reached 20,729 and 20,486. We trained initial classifier c0 with the new training data. TRAN is a system based on the classifier. BICO is a system based on bilingual co-training. For Japanese, SYT showed worse performance than that reported in Sumida et al. (2008), probably due to the difference in training data size (ours is 20,000 and Sumida et al. (2008) was 29,900). The size of the test data was also different – ours is 2,000 and Sumida et al. (2008) was 1,000. Comparison between INIT and SYT shows the effect of SF3–SF5 and IF, newly introduced feature types, in hyponymy-relation classification. INIT consistently outperformed SYT, although the difference was merely around 0.5–1.8% in F1. BICO showed significant performance improvement (around 3.6–10.3% in F1) over SYT, INIT, and TRAN regardless of the language. Comparison between TRAN and BICO showed that bilingual co-training is useful for enlarging the training data and that the performance gain by bilingual co-training cannot be achieved by simply translating the existing training data. 81 79 77 75 73 60 55 50 45 40 35 30 25 20 F1 Training Data (103) English Japanese Figure 5: F1 curves based on the increase of training data size during bilingual co-training Figure 5 shows F1 curves based on the size of the training data including those manually tailored and automatically obtained through bilingual co-training. The curve starts from 20,000 and ends around 55,000 in Japanese and 62,000 in English. 
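For reference, the measures in Eq. (1) reduce to simple set operations. A minimal sketch, assuming hyponymy relations are represented as (hypernym, hyponym) pairs collected into Python sets:

```python
def evaluate(rel, hr_by_s):
    """Precision, recall, and F1 as defined in Eq. (1).

    rel      -- manually checked hyponymy relations (gold set)
    hr_by_s  -- candidates the system classified as hyponymy relations
    """
    tp = len(rel & hr_by_s)
    p = tp / len(hr_by_s) if hr_by_s else 0.0
    r = tp / len(rel) if rel else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1
```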
As the training data size increases, the F1 curves tend to go upward in both languages. This indicates that the two classifiers cooperate well to boost their performance through bilingual cotraining. We recognized 5.4 M English and 2.41 M Japanese hyponymy relations from the classification results of BICO on all hyponymy-relation candidates in both languages. 4.2 Effect of Training Data Size We performed two tests to investigate the effect of the training data size on bilingual co-training. The first test posed the following question: “If we build 2n training samples by hand and the building cost is the same in both languages, which is better from the monolingual aspects: 2n monolingual training samples or n bilingual training samples?” Table 3 and Figure 6 show the results. 437 In INIT-E and INIT-J, a classifier in each language, which was trained with 2n monolingual training samples, did not learn through bilingual co-training. In BICO-E and BICO-J, bilingual cotraining was applied to the initial classifiers trained with n training samples in both languages. As shown in Table 3, BICO, with half the size of the training samples used in INIT, always performed better than INIT in both languages. This indicates that bilingual co-training enables us to build classifiers for two languages in tandem with the same combined amount of data as required for training a single classifier in isolation while achieving superior performance. 81 79 77 75 73 71 69 67 65 20000 15000 10000 7500 5000 2500 F1 Training Data Size INIT-E INIT-J BICO-E BICO-J Figure 6: F1 based on training data size: with/without bilingual co-training n 2n n INIT-E INIT-J BICO-E BICO-J 2500 67.3 72.3 70.5 73.0 5000 69.2 74.3 74.6 76.9 10000 72.2 76.6 76.9 78.6 Table 3: F1 based on training data size: with/without bilingual co-training (%) The second test asked: “Can we always improve performance through bilingual co-training with one strong and one weak classifier?” If the answer is yes, then we can apply our framework to acquisition of hyponymy-relations in other languages, i.e., German and French, without much effort for preparing a large amount of training data, because our strong classifier in English or Japanese can boost the performance of a weak classifier in other languages. To answer the question, we tested the performance of classifiers by using all training data (20,000) for a strong classifier and by changing the training data size of the other from 1,000 to 15,000 ({1,000, 5,000, 10,000, 15,000}) for a weak classifier. INIT-E BICO-E INIT-J BICO-J 1,000 72.2 79.6 64.0 72.7 5,000 72.2 79.6 73.1 75.3 10,000 72.2 79.8 74.3 79.0 15,000 72.2 80.4 77.0 80.1 Table 4: F1 based on training data size: when English classifier is strong one INIT-E BICO-E INIT-J BICO-J 1,000 60.3 69.7 76.6 79.3 5,000 67.3 74.6 76.6 79.6 10,000 69.2 77.7 76.6 80.1 15,000 71.0 79.3 76.6 80.6 Table 5: F1 based on training data size: when Japanese classifier is strong one Tables 4 and 5 show the results, where “INIT” represents a system based on the initial classifier in each language and “BICO” represents a system based on bilingual co-training. The results were encouraging because the classifiers showed better performance than their initial ones in every setting. In other words, a strong classifier always taught a weak classifier well, and the strong one also got help from the weak one, regardless of the size of the training data with which the weaker one learned. 
The test showed that bilingual co-training can work well if we have one strong classifier. 4.3 Effect of Bilingual Instance Dictionaries We tested our method with different bilingual instance dictionaries to investigate their effect. We built bilingual instance dictionaries based on different translation dictionaries whose translation entries came from different domains (i.e., general domain, technical domain, and Wikipedia) and had a different degree of translation ambiguity. In Table 6, D1 and D2 correspond to systems based on a bilingual instance dictionary derived from two handcrafted translation dictionaries, EDICT (Breen, 2008) (a general-domain dictionary) and “The Japan Science and Technology Agency Dictionary,” (a translation dictionary for technical terms) respectively. D3, which is the same as BICO in Table 2, is based on a bilingual 438 instance dictionary derived from Wikipedia. ENTRY represents the number of translation dictionary entries used for building a bilingual instance dictionary. E2J (or J2E) represents the average translation ambiguities of English (or Japanese) terms in the entries. To show the effect of these translation ambiguities, we used each dictionary under two different conditions, α=5 and ALL. α=5 represents the condition where only translation entries with less than five translation ambiguities are used; ALL represents no restriction on translation ambiguities. DIC F1 DIC STATISTICS TYPE E J ENTRY E2J J2E D1 α=5 76.5 78.4 588K 1.80 1.77 D1 ALL 75.0 77.2 990K 7.17 2.52 D2 α=5 76.9 78.5 667K 1.89 1.55 D2 ALL 77.0 77.9 750K 3.05 1.71 D3 α=5 80.7 81.6 197K 1.03 1.02 D3 ALL 80.7 81.6 197K 1.03 1.02 Table 6: Effect of different bilingual instance dictionaries The results showed that D3 was the best and that the performances of the others were similar to each other. The differences in the F1 scores between α=5 and ALL were relatively small within the same system triggered by translation ambiguities. The performance gap between D3 and the other systems might explain the fact that both hyponymy-relation candidates and the translation dictionary used in D3 were extracted from the same dataset (i.e., Wikipedia), and thus the bilingual instance dictionary built with the translation dictionary in D3 had better coverage of the Wikipedia entries consisting of hyponymyrelation candidates than the other bilingual instance dictionaries. Although D1 and D2 showed lower performance than D3, the experimental results showed that bilingual co-training was always effective no matter which dictionary was used (Note that F1 of INIT in Table 2 was 72.2 in English and 76.6 in Japanese.) 5 Related Work Li and Li (2002) proposed bilingual bootstrapping for word translation disambiguation. Similar to bilingual co-training, classifiers for two languages cooperated in learning with bilingual resources in bilingual bootstrapping. However, the two classifiers in bilingual bootstrapping were for a bilingual task but did different tasks from the monolingual viewpoint. A classifier in each language is for word sense disambiguation, where a class label (or word sense) is different based on the languages. On the contrary, classifiers in bilingual co-training cooperate in doing the same type of tasks. Bilingual resources have been used for monolingual tasks including verb classification and noun phrase semantic interpolation (Merlo et al., 2002; Girju, 2006). However, unlike ours, their focus was limited to bilingual features for one monolingual classifier based on supervised learning. 
Recently, there has been increased interest in semantic relation acquisition from corpora. Some regarded Wikipedia as the corpora and applied hand-crafted or machine-learned rules to acquire semantic relations (Herbelot and Copestake, 2006; Kazama and Torisawa, 2007; Ruiz-casado et al., 2005; Nastase and Strube, 2008; Sumida et al., 2008; Suchanek et al., 2007). Several researchers who participated in SemEval-07 (Girju et al., 2007) proposed methods for the classification of semantic relations between simple nominals in English sentences. However, the previous work seldom considered the bilingual aspect of semantic relations in the acquisition of monolingual semantic relations. 6 Conclusion We proposed a bilingual co-training approach and applied it to hyponymy-relation acquisition from Wikipedia. Experiments showed that bilingual co-training is effective for improving the performance of classifiers in both languages. We further showed that bilingual co-training enables us to build classifiers for two languages in tandem, outperforming classifiers trained individually for each language while requiring no more training data in total than a single classifier trained in isolation. We showed that bilingual co-training is also helpful for boosting the performance of a weak classifier in one language with the help of a strong classifier in the other language without lowering the performance of either classifier. This indicates that the framework can reduce the cost of preparing training data in new languages with the help of our English and Japanese strong classifiers. Our future work focuses on this issue. 439 References S¨oren Auer and Jens Lehmann. 2007. What have Innsbruck and Leipzig in common? Extracting semantics from wiki content. In Proc. of the 4th European Semantic Web Conference (ESWC 2007), pages 503–517. Springer. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT’ 98: Proceedings of the eleventh annual conference on Computational learning theory, pages 92–100. Jim Breen. 2008. EDICT Japanese/English dictionary file, The Electronic Dictionary Research and Development Group, Monash University. Hal Daum´e III, John Langford, and Daniel Marcu. 2005. Search-based structured prediction as classification. In Proc. of NIPS Workshop on Advances in Structured Learning for Text and Speech Processing, Whistler, Canada. Maike Erdmann, Kotaro Nakayama, Takahiro Hara, and Shojiro Nishio. 2008. A bilingual dictionary extracted from the Wikipedia link structure. In Proc. of DASFAA, pages 686–689. Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic relations between nominals. In Proc. of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13–18. Roxana Girju. 2006. Out-of-context noun phrase semantic interpretation with cross-linguistic evidence. In CIKM ’06: Proceedings of the 15th ACM international conference on Information and knowledge management, pages 268–276. Aurelie Herbelot and Ann Copestake. 2006. Acquiring ontological relationships from Wikipedia using RMRS. In Proc. of the ISWC 2006 Workshop on Web Content Mining with Human Language Technologies. Jun’ichi Kazama and Kentaro Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In Proc. of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 698–707. Cong Li and Hang Li. 
2002. Word translation disambiguation using bilingual bootstrapping. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics, pages 343–351. MeCab. 2008. MeCab: Yet another part-of-speech and morphological analyzer. http://mecab. sourceforge.net/. Paola Merlo, Suzanne Stevenson, Vivian Tsang, and Gianluca Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics, pages 207–214. Vivi Nastase and Michael Strube. 2008. Decoding Wikipedia categories for knowledge acquisition. In Proc. of AAAI 08, pages 1219–1224. Maria Ruiz-casado, Enrique Alfonseca, and Pablo Castells. 2005. Automatic extraction of semantic relationships for Wordnet by means of pattern learning from Wikipedia. In Proc. of NLDB, pages 67– 79. Springer Verlag. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A Core of Semantic Knowledge. In Proc. of the 16th international conference on World Wide Web, pages 697–706. Asuka Sumida and Kentaro Torisawa. 2008. Hacking Wikipedia for hyponymy relation acquisition. In Proc. of the Third International Joint Conference on Natural Language Processing (IJCNLP), pages 883–888, January. Asuka Sumida, Naoki Yoshinaga, and Kentaro Torisawa. 2008. Boosting precision and recall of hyponymy relation acquisition from hierarchical layouts in Wikipedia. In Proceedings of the 6th International Conference on Language Resources and Evaluation. TinySVM. 2002. http://chasen.org/˜taku/ software/TinySVM. Vladimir N. Vapnik. 1995. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA. Fei Wu and Daniel S. Weld. 2007. Autonomously semantifying Wikipedia. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41– 50. 440
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 37–45, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Brutus: A Semantic Role Labeling System Incorporating CCG, CFG, and Dependency Features Stephen A. Boxwell, Dennis Mehay, and Chris Brew Department of Linguistics The Ohio State University {boxwe11,mehay,cbrew}@1ing.ohio-state.edu Abstract We describe a semantic role labeling system that makes primary use of CCG-based features. Most previously developed systems are CFG-based and make extensive use of a treepath feature, which suffers from data sparsity due to its use of explicit tree configurations. CCG affords ways to augment treepathbased features to overcome these data sparsity issues. By adding features over CCG wordword dependencies and lexicalized verbal subcategorization frames (“supertags”), we can obtain an F-score that is substantially better than a previous CCG-based SRL system and competitive with the current state of the art. A manual error analysis reveals that parser errors account for many of the errors of our system. This analysis also suggests that simultaneous incremental parsing and semantic role labeling may lead to performance gains in both tasks. 1 Introduction Semantic Role Labeling (SRL) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence. The task is difficult because the relationship between syntactic relations like “subject” and “object” do not always correspond to semantic relations like “agent” and “patient”. An effective semantic role labeling system must recognize the differences between different configurations: (a) [The man]Arg0 opened [the door]Arg1 [for him]Arg3 [today]ArgM−T MP . (b) [The door]Arg1 opened. (c) [The door]Arg1 was opened by [a man]Arg0. We use Propbank (Palmer et al., 2005), a corpus of newswire text annotated with verb predicate semantic role information that is widely used in the SRL literature (M`arquez et al., 2008). Rather than describe semantic roles in terms of “agent” or “patient”, Propbank defines semantic roles on a verb-by-verb basis. For example, the verb open encodes the OPENER as Arg0, the OPENEE as Arg1, and the beneficiary of the OPENING action as Arg3. Propbank also defines a set of adjunct roles, denoted by the letter M instead of a number. For example, ArgM-TMP denotes a temporal role, like “today”. By using verb-specific roles, Propbank avoids specific claims about parallels between the roles of different verbs. We follow the approach in (Punyakanok et al., 2008) in framing the SRL problem as a two-stage pipeline: identification followed by labeling. During identification, every word in the sentence is labeled either as bearing some (as yet undetermined) semantic role or not . This is done for each verb. Next, during labeling, the precise verb-specific roles for each word are determined. In contrast to the approach in (Punyakanok et al., 2008), which tags constituents directly, we tag headwords and then associate them with a constituent, as in a previous CCG-based approach (Gildea and Hockenmaier, 2003). Another difference is our choice of parsers. 
Brutus uses the CCG parser of (Clark and Curran, 2007, henceforth the C&C parser), Charniak’s parser (Charniak, 2001) for additional CFG-based features, and MALT parser (Nivre et al., 2007) for dependency features, while (Punyakanok et al., 2008) use results from an ensemble of parses from Charniak’s Parser and a Collins parser (Collins, 2003; Bikel, 2004). Finally, the system described in (Punyakanok et al., 2008) uses a joint inference model to resolve discrepancies between multiple automatic parses. We do not employ a similar strategy due to the differing notions of constituency represented in our parsers (CCG having a much more fluid notion of constituency and the MALT parser using a different approach entirely). For the identification and labeling steps, we train a maximum entropy classifier (Berger et al., 1996) over sections 02-21 of a version of the CCGbank corpus (Hockenmaier and Steedman, 2007) that has been augmented by projecting the Propbank semantic annotations (Boxwell and White, 2008). We evaluate our SRL system’s argument predictions at the word string level, making our results directly comparable for each argument labeling.1 In the following, we briefly introduce the CCG grammatical formalism and motivate its use in SRL (Sections 2–3). Our main contribution is to demonstrate that CCG — arguably a more expressive and lin1This is guaranteed by our string-to-string mapping from the original Propbank to the CCGbank. 37 guistically appealing syntactic framework than vanilla CFGs — is a viable basis for the SRL task. This is supported by our experimental results, the setup and details of which we give in Sections 4–10. In particular, using CCG enables us to map semantic roles directly onto verbal categories, an innovation of our approach that leads to performance gains (Section 7). We conclude with an error analysis (Section 11), which motivates our discussion of future research for computational semantics with CCG (Section 12). 2 Combinatory Categorial Grammar Combinatory Categorial Grammar (Steedman, 2000) is a grammatical framework that describes syntactic structure in terms of the combinatory potential of the lexical (word-level) items. Rather than using standard part-of-speech tags and grammatical rules, CCG encodes much of the combinatory potential of each word by assigning a syntactically informative category. For example, the verb loves has the category (s\np)/np, which could be read “the kind of word that would be a sentence if it could combine with a noun phrase on the right and a noun phrase on the left”. Further, CCG has the advantage of a transparent interface between the way the words combine and their dependencies with other words. Word-word dependencies in the CCGbank are encoded using predicate-argument (PARG) relations. PARG relations are defined by the functor word, the argument word, the category of the functor word and which argument slot of the functor category is being filled. For example, in the sentence John loves Mary (figure 1), there are two slots on the verbal category to be filled by NP arguments. The first argument (the subject) fills slot 1. This can be encoded as <loves,john,(s\np)/np,1>, indicating the head of the functor, the head of the argument, the functor category and the argument slot. The second argument (the direct object) fills slot 2. This can be encoded as <loves,mary,(s\np)/np,2>. 
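The encoding just described maps naturally onto a small record type; the sketch below is an illustration of that four-field representation (not CCGbank's or the C&C parser's actual data structures).

```python
from collections import namedtuple

# A predicate-argument (PARG) relation: the functor's head word, the
# argument's head word, the functor's CCG category, and which argument
# slot of that category is filled.
PargRel = namedtuple("PargRel", "functor argument category slot")

deps = [
    PargRel("loves", "john", r"(s\np)/np", 1),   # subject fills slot 1
    PargRel("loves", "mary", r"(s\np)/np", 2),   # object fills slot 2
]

def relation_between(deps, functor, argument):
    """Return the PARG relation linking two words, if any."""
    for d in deps:
        if d.functor == functor and d.argument == argument:
            return d
    return None
```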
One of the potential advantages to using CCGbank-style PARG relations is that they uniformly encode both local and long-range dependencies — e.g., the noun phrase the Mary that John loves expresses the same set of two dependencies. We will show this to be a valuable tool for semantic role prediction. 3 Potential Advantages to using CCG There are many potential advantages to using the CCG formalism in SRL. One is the uniformity with which CCG can express equivalence classes of local and longrange (including unbounded) dependencies. CFGbased approaches often rely on examining potentially long sequences of categories (or treepaths) between the verb and the target word. Because there are a number of different treepaths that correspond to a single relation (figure 2), this approach can suffer from data sparsity. CCG, however, can encode all treepath-distinct expressions of a single grammatical relation into a single predicate-argument relationship (figure 3). This feature has been shown (Gildea and Hockenmaier, 2003) to be an effective substitute for treepath-based features. But while predicate-argument-based features are very effective, they are still vulnerable both to parser errors and to cases where the semantics of a sentence do not correspond directly to syntactic dependencies. To counteract this, we use both kinds of features with the expectation that the treepath feature will provide low-level detail to compensate for missed, incorrect or syntactically impossible dependencies. Another advantage of a CCG-based approach (and lexicalist approaches in general) is the ability to encode verb-specific argument mappings. An argument mapping is a link between the CCG category and the semantic roles that are likely to go with each of its arguments. The projection of argument mappings onto CCG verbal categories is explored in (Boxwell and White, 2008). We describe this feature in more detail in section 7. 4 Identification and Labeling Models As in previous approaches to SRL, Brutus uses a twostage pipeline of maximum entropy classifiers. In addition, we train an argument mapping classifier (described in more detail below) whose predictions are used as features for the labeling model. The same features are extracted for both treebank and automatic parses. Automatic parses were generated using the C&C CCG parser (Clark and Curran, 2007) with its derivation output format converted to resemble that of the CCGbank. This involved following the derivational bracketings of the C&C parser’s output and reconstructing the backpointers to the lexical heads using an in-house implementation of the basic CCG combinatory operations. All classifiers were trained to 500 iterations of L-BFGS training — a quasi-Newton method from the numerical optimization literature (Liu and Nocedal, 1989) — using Zhang Le’s maxent toolkit.2 To prevent overfitting we used Gaussian priors with global variances of 1 and 5 for the identifier and labeler, respectively.3 The Gaussian priors were determined empirically by testing on the development set. Both the identifier and the labeler use the following features: (1) Words. Words drawn from a 3 word window around the target word,4 with each word associated with a binary indicator feature. (2) Part of Speech. Part of Speech tags drawn from a 3 word window around the target word, 2Available for download at http://homepages. inf.ed.ac.uk/s0450736/maxent_toolkit. html. 3Gaussian priors achieve a smoothing effect (to prevent overfitting) by penalizing very large feature weights. 
4The size of the window was determined experimentally on the development set – we use the same window sizes throughout. 38 John loves Mary np (s[dcl]\np)/np np > s[dcl]\np < s[dcl] Figure 1: This sentence has two dependencies: <loves,mary,(s\np)/np,2> and <loves,john,(s\np)/np,1> Saaa ! ! ! NP Robin VP bb b " " " V fixed NP @@ Det the N car NP HHH    Det the N HHH    N car RC HHH    Rel that S ZZ   NP Robin VP V fixed Figure 2: The semantic relation (Arg1) between ‘car’ and ‘fixed’ in both phrases is the same, but the treepaths — traced with arrows above — are different: (V>VP<NP<N and V>VP>S>RC>N<N, respectively). Robin fixed the car np (s\np)/np np/n n > np > s\np < s the car that Robin fixed np/n n (np\np)/(s/np) np (s\np)/np >T s/(s\np) > >B np s/np > np\np < np Figure 3: CCG word-word dependencies are passed up through subordinate clauses, encoding the relation between car and fixed the same in both cases: (s\np)/np.2.→(Gildea and Hockenmaier, 2003) with each associated with a binary indicator feature. (3) CCG Categories. CCG categories drawn from a 3 word window around the target word, with each associated with a binary indicator feature. (4) Predicate. The lemma of the predicate we are tagging. E.g. fix is the lemma of fixed. (5) Result Category Detail. The grammatical feature on the category of the predicate (indicating declarative, passive, progressive, etc). This can be read off the verb category: declarative for eats: (s[dcl]\np)/np or progressive for running: s[ng]\np. (6) Before/After. A binary indicator variable indicating whether the target word is before or after the verb. (7) Treepath. The sequence of CCG categories representing the path through the derivation from the predicate to the target word. For the relationship between fixed and car in the first sentence of figure 3, the treepath is (s[dcl]\np)/np>s[dcl]\np<np<n, with > and < indicating movement up and down the tree, respectively. (8) Short Treepath. Similar to the above treepath feature, except the path stops at the highest node under the least common subsumer that is headed by the target word (this is the constituent that the role would be marked on if we identified this terminal as a role-bearing word). Again, for the relationship between fixed and car in the first sentence of figure 3, the short treepath is (s[dcl]\np)/np>s[dcl]\np<np. (9) NP Modified. A binary indicator feature indicating whether the target word is modified by an NP modifier.5 5This is easily read off of the CCG PARG relationships. 39 (10) Subcategorization. A sequence of the categories that the verb combines with in the CCG derivation tree. For the first sentence in figure 3, the correct subcategorization would be np,np. Notice that this is not necessarily a restatement of the verbal category – in the second sentence of figure 3, the correct subcategorization is s/(s\np),(np\np)/(s[dcl]/np),np. (11) PARG feature. We follow a previous CCGbased approach (Gildea and Hockenmaier, 2003) in using a feature to describe the PARG relationship between the two words, if one exists. If there is a dependency in the PARG structure between the two words, then this feature is defined as the conjunction of (1) the category of the functor, (2) the argument slot that is being filled in the functor category, and (3) an indication as to whether the functor (→) or the argument (←) is the lexical head. For example, to indicate the relationship between car and fixed in both sentences of figure 3, the feature is (s\np)/np.2.→. 
The labeler uses all of the previous features, plus the following: (12) Headship. A binary indicator feature as to whether the functor or the argument is the lexical head of the dependency between the two words, if one exists. (13) Predicate and Before/After. The conjunction of two earlier features: the predicate lemma and the Before/After feature. (14) Rel Clause. Whether the path from predicate to target word passes through a relative clause (e.g., marked by the word ‘that’ or any other word with a relativizer category). (15) PP features. When the target word is a preposition, we define binary indicator features for the word, POS, and CCG category of the head of the topmost NP in the prepositional phrase headed by a preposition (a.k.a. the ‘lexical head’ of the PP). So, if on heads the phrase ‘on the third Friday’, then we extract features relating to Friday for the preposition on. This is null when the target word is not a preposition. (16) Argument Mappings. If there is a PARG relation between the predicate and the target word, the argument mapping is the most likely predicted role to go with that argument. These mappings are predicted using a separate classifier that is trained primarily on lexical information of the verb, its immediate string-level context, and its observed arguments in the training data. This feature is null when there is no PARG relation between the predicate and the target word. The Argument Mapping feature can be viewed as a simple prediction about some of the non-modifier semantic roles that a verb is likely to express. We use this information as a feature and not a hard constraint to allow other features to overrule the recommendation made by the argument mapping classifier. The features used in the argument mapping classifier are described in detail in section 7. 5 CFG based Features In addition to CCG-based features, features can be drawn from a traditional CFG-style approach when they are available. Our motivation for this is twofold. First, others (Punyakanok et al., 2008, e.g.), have found that different parsers have different error patterns, and so using multiple parsers can yield complementary sources of correct information. Second, we noticed that, although the CCG-based system performed well on head word labeling, performance dropped when projecting these labels to the constituent level (see sections 8 and 9 for more). This may have to do with the fact that CCG is not centered around a constituencybased analysis, as well as with inconsistencies between CCG and Penn Treebank-style bracketings (the latter being what was annotated in the original Propbank). Penn Treebank-derived features are used in the identifier, labeler, and argument mapping classifiers. For automatic parses, we use Charniak’s parser (Charniak, 2001). For gold-standard parses, we remove functional tag and trace information from the Penn Treebank parses before we extract features over them, so as to simulate the conditions of an automatic parse. The Penn Treebank features are as follows: (17) CFG Treepath. A sequence of traditional CFG-style categories representing the path from the verb to the target word. (18) CFG Short Treepath. Analogous to the CCGbased short treepath feature. (19) CFG Subcategorization. Analogous to the CCG-based subcategorization feature. (20) CFG Least Common Subsumer. The category of the root of the smallest tree that dominates both the verb and the target word. 
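To illustrate the treepath-style features (7, 8, 17, 18), the sketch below computes a CFG treepath between two leaves of an NLTK tree. It is a simplified stand-in for the actual feature extractors, not Brutus's code; the CCG variant is analogous, with derivation categories in place of CFG node labels.

```python
from nltk import Tree

def cfg_treepath(tree, pred_leaf, target_leaf):
    """Node labels from the predicate's preterminal up to the least
    common subsumer and down to the target's preterminal
    ('>' marks a step up the tree, '<' a step down)."""
    p = tree.leaf_treeposition(pred_leaf)[:-1]      # drop the leaf itself
    t = tree.leaf_treeposition(target_leaf)[:-1]
    k = 0                                           # length of shared prefix
    while k < min(len(p), len(t)) and p[k] == t[k]:
        k += 1
    up = [tree[p[:i]].label() for i in range(len(p), k - 1, -1)]
    down = [tree[t[:i]].label() for i in range(k + 1, len(t) + 1)]
    return ">".join(up) + "<" + "<".join(down)

sent = Tree.fromstring("(S (NP Robin) (VP (V fixed) (NP (Det the) (N car))))")
print(cfg_treepath(sent, 1, 3))   # V>VP<NP<N  (from 'fixed' to 'car', as in figure 2)
```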
6 Dependency Parser Features Finally, several features can be extracted from a dependency representation of the same sentence. Automatic dependency relations were produced by the MALT parser. We incorporate MALT into our collection of parses because it provides detailed information on the exact syntactic relations between word pairs (subject, object, adverb, etc) that is not found in other automatic parsers. The features used from the dependency parses are listed below: 40 (21) DEP-Exists A binary indicator feature showing whether or not there is a dependency between the target word and the predicate. (22) DEP-Type If there is a dependency between the target word and the predicate, what type of dependency it is (SUBJ, OBJ, etc). 7 Argument Mapping Model An innovation in our approach is to use a separate classifier to predict an argument mapping feature. An argument mapping is a mapping from the syntactic arguments of a verbal category to the semantic arguments that should correspond to them (Boxwell and White, 2008). In order to generate examples of the argument mapping for training purposes, it is necessary to employ the PARG relations for a given sentence to identify the headwords of each of the verbal arguments. That is, we use the PARG relations to identify the headwords of each of the constituents that are arguments of the verb. Next, the appropriate semantic role that corresponds to that headword (given by Propbank) is identified. This is done by climbing the CCG derivation tree towards the root until we find a semantic role corresponding to the verb in question — i.e., by finding the point where the constituent headed by the verbal category combines with the constituent headed by the argument in question. These semantic roles are then marked on the corresponding syntactic argument of the verb. As an example, consider the sentence The boy loves a girl. (figure 4). By examining the arguments that the verbal category combines with in the treebank, we can identify the corresponding semantic role for each argument that is marked on the verbal category. We then use these tags to train the Argument Mapping model, which will predict likely argument mappings for verbal categories based on their local surroundings and the headwords of their arguments, similar to the supertagging approaches used to label the informative syntactic categories of the verbs (Bangalore and Joshi, 1999; Clark, 2002), except tagging “one level above” the syntax. The Argument Mapping Predictor uses the following features: (23) Predicate. The lemma of the predicate, as before. (24) Words. Words drawn from a 5 word window around the target word, with each word associated with a binary indicator feature, as before. (25) Parts of Speech. Part of Speech tags drawn from a 5 word window around the target word, with each tag associated with a binary indicator feature, as before. (26) CCG Categories. CCG categories drawn from a 5 word window around the target word, with each category associated with a binary indicator feature, as before. the boy loves a girl np/n n (s[dcl]\npArg0)/npArg1 np/n n > > np −Arg0 np −Arg1 > s[dcl]\np < s[dcl] Figure 4: By looking at the constituents that the verb combines with, we can identify the semantic roles corresponding to the arguments marked on the verbal category. (27) Argument Data. The word, POS, and CCG category, and treepath of the headwords of each of the verbal arguments (i.e., PARG dependents), each encoded as a separate binary indicator feature. (28) Number of arguments. 
The number of arguments marked on the verb. (29) Words of Arguments. The head words of each of the verb’s arguments. (30) Subcategorization. The CCG categories that combine with this verb. This includes syntactic adjuncts as well as arguments. (31) CFG-Sisters. The POS categories of the sisters of this predicate in the CFG representation. (32) DEP-dependencies. The individual dependency types of each of the dependencies relating to the verb (SBJ, OBJ, ADV, etc) taken from the dependency parse. We also incorporate a single feature representing the entire set of dependency types associated with this verb into a single feature, representing the set of dependencies as a whole. Given these features with gold standard parses, our argument mapping model can predict entire argument mappings with an accuracy rate of 87.96% on the test set, and 87.70% on the development set. We found the features generated by this model to be very useful for semantic role prediction, as they enable us to make decisions about entire sets of semantic roles associated with individual lemmas, rather than choosing them independently of each other. 8 Enabling Cross-System Comparison The Brutus system is designed to label headwords of semantic roles, rather than entire constituents. However, because most SRL systems are designed to label constituents rather than headwords, it is necessary to project the roles up the derivation to the correct constituent in order to make a meaningful comparison of the system’s performance. This introduces the potential for further error, so we report results on the accuracy of headwords as well as the correct string of words. We deterministically move the role to the highest constituent in the derivation that is headed by the 41 a man with glasses spoke np/n n (np\np)/np np s\np > > np np\np < np −speak.Arg0 < s Figure 5: The role is moved towards the root until the original node is no longer the head of the marked constituent. P R F G&H (treebank) 67.5% 60.0% 63.5% Brutus (treebank) 88.18% 85.00% 86.56% G&H (automatic) 55.7% 49.5% 52.4% Brutus (automatic) 76.06% 70.15% 72.99% Table 1: Accuracy of semantic role prediction using only CCG based features. originally tagged terminal. In most cases, this corresponds to the node immediately dominated by the lowest common subsuming node of the the target word and the verb (figure 5). In some cases, the highest constituent that is headed by the target word is not immediately dominated by the lowest common subsuming node (figure 6). 9 Results Using a version of Brutus incorporating only the CCGbased features described above, we achieve better results than a previous CCG based system (Gildea and Hockenmaier, 2003, henceforth G&H). This could be due to a number of factors, including the fact that our system employs a different CCG parser, uses a more complete mapping of the Propbank onto the CCGbank, uses a different machine learning approach,6 and has a richer feature set. The results for constituent tagging accuracy are shown in table 1. As expected, by incorporating Penn Treebank-based features and dependency features, we obtain better results than with the CCG-only system. The results for gold standard parses are comparable to the winning system of the CoNLL 2005 shared task on semantic role labeling (Punyakanok et al., 2008). Other systems (Toutanova et al., 2008; Surdeanu et al., 2007; Johansson and Nugues, 2008) have also achieved comparable results – we compare our system to (Punyakanok et al., 2008) due to the similarities in our approaches. 
The performance of the full system is shown in table 2. Table 3 shows the ability of the system to predict the correct headwords of semantic roles. This is a necessary condition for correctness of the full constituent, but not a sufficient one. In parser evaluation, Carroll, Minnen, and Briscoe (Carroll et al., 2003) have argued 6G&H use a generative model with a back-off lattice, whereas we use a maximum entropy classifier. P R F P. et al (treebank) 86.22% 87.40% 86.81% Brutus (treebank) 88.29% 86.39% 87.33% P. et al (automatic) 77.09% 75.51% 76.29% Brutus (automatic) 76.73% 70.45% 73.45% Table 2: Accuracy of semantic role prediction using CCG, CFG, and MALT based features. P R F Headword (treebank) 88.94% 86.98% 87.95% Boundary (treebank) 88.29% 86.39% 87.33% Headword (automatic) 82.36% 75.97% 79.04% Boundary (automatic) 76.33% 70.59% 73.35% Table 3: Accuracy of the system for labeling semantic roles on both constituent boundaries and headwords. Headwords are easier to predict than boundaries, reflecting CCG’s focus on word-word relations rather than constituency. for dependencies as a more appropriate means of evaluation, reflecting the focus on headwords from constituent boundaries. We argue that, especially in the heavily lexicalized CCG framework, headword evaluation is more appropriate, reflecting the emphasis on headword combinatorics in the CCG formalism. 10 The Contribution of the New Features Two features which are less frequently used in SRL research play a major role in the Brutus system: The PARG feature (Gildea and Hockenmaier, 2003) and the argument mapping feature. Removing them has a strong effect on accuracy when labeling treebank parses, as shown in our feature ablation results in table 4. We do not report results including the Argument Mapping feature but not the PARG feature, because some predicate-argument relation information is assumed in generating the Argument Mapping feature. P R F +PARG +AM 88.77% 86.15% 87.44% +PARG -AM 88.42% 85.78% 87.08% -PARG -AM 87.92% 84.65% 86.26% Table 4: The effects of removing key features from the system on gold standard parses. The same is true for automatic parses, as shown in table 5. 11 Error Analysis Many of the errors made by the Brutus system can be traced directly to erroneous parses, either in the automatic or treebank parse. In some cases, PP attachment 42 with even brief exposures causing symptoms (((vp\vp)/vp[ng])/np n/n n/n n (s[ng]\np)/np np > > n s[ng]\np > n np −cause.Arg0 > (vp\vp)/vp[ng] > vp\vp Figure 6: In this case, with is the head of with even brief exposures, so the role is correctly marked on even brief exposures (based on wsj 0003.2). P R F +PARG +AM 74.14% 62.09% 67.58% +PARG -AM 70.02% 64.68% 67.25% -PARG -AM 73.90% 61.15% 66.93% Table 5: The effects of removing key features from the system on automatic parses. ambiguities cause a role to be marked too high in the derivation. In the sentence the company stopped using asbestos in 1956 (figure 7), the correct Arg1 of stopped is using asbestos. However, because in 1956 is erroneously modifying the verb using rather than the verb stopped in the treebank parse, the system trusts the syntactic analysis and places Arg1 of stopped on using asbestos in 1956. This particular problem is caused by an annotation error in the original Penn Treebank that was carried through in the conversion to CCGbank. Another common error deals with genitive constructions. Consider the phrase a form of asbestos used to make filters. 
By CCG combinatorics, the relative clause could either attach to asbestos or to a form of asbestos. The gold standard CCG parse attaches the relative clause to a form of asbestos (figure 8). Propbank agrees with this analysis, assigning Arg1 of use to the constituent a form of asbestos. The automatic parser, however, attaches the relative clause low – to asbestos (figure 9). When the system is given the automatically generated parse, it incorrectly assigns the semantic role to asbestos. In cases where the parser attaches the relative clause correctly, the system is much more likely to assign the role correctly. Problems with relative clause attachment to genitives are not limited to automatic parses – errors in goldstandard treebank parses cause similar problems when Treebank parses disagree with Propbank annotator intuitions. In the phrase a group of workers exposed to asbestos (figure 10), the gold standard CCG parse attaches the relative clause to workers. Propbank, however, annotates a group of workers as Arg1 of exposed, rather than following the parse and assigning the role only to workers. The system again follows the parse and incorrectly assigns the role to workers instead of a group of workers. Interestingly, the C&C parser opts for high attachment in this instance, resulting in the a form of asbestos used to make filters np (np\np)/np np np\np > np\np < np −Arg1 < np Figure 8: CCGbank gold-standard parse of a relative clause attachment. The system correctly identifies a form of asbestos as Arg1 of used. (wsj 0003.1) a form of asbestos used to make filters np (np\np)/np np −Arg1 np\np < np > np\np < np Figure 9: Automatic parse of the noun phrase in figure 8. Incorrect relative clause attachment causes the misidentification of asbestos as a semantic role bearing unit. (wsj 0003.1) correct prediction of a group of workers as Arg1 of exposed in the automatic parse. 12 Future Work As described in the error analysis section, a large number of errors in the system are attributable to errors in the CCG derivation, either in the gold standard or in automatically generated parses. Potential future work may focus on developing an improved CCG parser using the revised (syntactic) adjunct-argument distinctions (guided by the Propbank annotation) described in (Boxwell and White, 2008). This resource, together with the reasonable accuracy (≈90%) with which argument mappings can be predicted, suggests the possibility of an integrated, simultaneous syntactic-semantic parsing process, similar to that of (Musillo and Merlo, 2006; Merlo and Musillo, 2008). We expect this would improve the reliability and accuracy of both the syntactic and semantic analysis components. 13 Acknowledgments This research was funded by NSF grant IIS-0347799. We are deeply indebted to Julia Hockenmaier for the 43 the company stopped using asbestos in 1956 np ((s[dcl]\np)/(s[ng]\np)) (s[ng]\np)/np np (s\np)\(s\np) > s[ng]\np < s[ng]\np −stop.Arg1 > s[dcl]\np < s[dcl] Figure 7: An example of how incorrect PP attachment can cause an incorrect labeling. Stop.Arg1 should cover using asbestos rather than using asbestos in 1956. This sentence is based on wsj 0003.3, with the structure simplified for clarity. a group of workers exposed to asbestos np (np\np)/np np −exposed.Arg1 np\np < np > np\np < np Figure 10: Propbank annotates a group of workers as Arg1 of exposed, while CCGbank attaches the relative clause low. The system incorrectly labels workers as a role bearing unit. 
(Gold standard – wsj 0003.1) use of her PARG generation tool. References Srinivas Bangalore and Aravind Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2):237–265. Adam L. Berger, S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. D.M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4):479–511. Stephen A. Boxwell and Michael White. 2008. Projecting propbank roles onto the ccgbank. In Proceedings of the Sixth International Language Resources and Evaluation Conference (LREC-08), Marrakech, Morocco. J. Carroll, G. Minnen, and T. Briscoe. 2003. Parser evaluation. Treebanks: Building and Using Parsed Corpora, pages 299–316. E. Charniak. 2001. Immediate-head parsing for language models. In Proc. ACL-01, volume 39, pages 116–123. Stephen Clark and James R. Curran. 2007. Widecoverage Efficient Statistical Parsing with CCG and Log-linear Models. Computational Linguistics, 33(4):493–552. Stephen Clark. 2002. Supertagging for combinatory categorial grammar. In Proceedings of the 6th International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+6), pages 19–24, Venice, Italy. M. Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. Daniel Gildea and Julia Hockenmaier. 2003. Identifying semantic roles using Combinatory Categorial Grammar. In Proc. EMNLP-03. Julia Hockenmaier and Mark Steedman. 2007. CCGbank: A Corpus of CCG Derivations and Dependency Structures Extracted from the Penn Treebank. Computational Linguistics, 33(3):355–396. R. Johansson and P. Nugues. 2008. Dependencybased syntactic–semantic analysis with PropBank and NomBank. Proceedings of CoNLL–2008. D C Liu and Jorge Nocedal. 1989. On the limited memory method for large scale optimization. Mathematical Programming B, 45(3). Llu´ıs M`arquez, Xavier Carreras, Kenneth C. Litowski, and Suzanne Stevenson. 2008. Semantic Role Labeling: An Introduction to the Special Issue. Computational Linguistics, 34(2):145–159. Paola Merlo and Gabrile Musillo. 2008. Semantic parsing for high-precision semantic role labelling. In Proceedings of CONLL-08, Manchester, UK. Gabriele Musillo and Paola Merlo. 2006. Robust parsing of the proposition bank. In Proceedings of the EACL 2006 Workshop ROMAND, Trento. J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. K¨ubler, S. Marinov, and E. Marsi. 2007. MaltParser: A language-independent system for datadriven dependency parsing. Natural Language Engineering, 13(02):95–135. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. 44 Vasin Punyakanok, Dan Roth, and Wen tau Yih. 2008. The Importance of Syntactic Parsing and Inference in Semantic Role Labeling. Computational Linguistics, 34(2):257–287. Mark Steedman. 2000. The Syntactic Process. MIT Press. M. Surdeanu, L. M`arquez, X. Carreras, and P. Comas. 2007. Combination strategies for semantic role labeling. Journal of Artificial Intelligence Research, 29:105–151. K. Toutanova, A. Haghighi, and C.D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2):161–191. 45
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 441–449, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Automatic Set Instance Extraction using the Web Richard C. Wang Language Technologies Institute Carnegie Mellon University [email protected] William W. Cohen Machine Learning Department Carnegie Mellon University [email protected] Abstract An important and well-studied problem is the production of semantic lexicons from a large corpus. In this paper, we present a system named ASIA (Automatic Set Instance Acquirer), which takes in the name of a semantic class as input (e.g., “car makers”) and automatically outputs its instances (e.g., “ford”, “nissan”, “toyota”). ASIA is based on recent advances in webbased set expansion - the problem of finding all instances of a set given a small number of “seed” instances. This approach effectively exploits web resources and can be easily adapted to different languages. In brief, we use languagedependent hyponym patterns to find a noisy set of initial seeds, and then use a state-of-the-art language-independent set expansion system to expand these seeds. The proposed approach matches or outperforms prior systems on several Englishlanguage benchmarks. It also shows excellent performance on three dozen additional benchmark problems from English, Chinese and Japanese, thus demonstrating language-independence. 1 Introduction An important and well-studied problem is the production of semantic lexicons for classes of interest; that is, the generation of all instances of a set (e.g., “apple”, “orange”, “banana”) given a name of that set (e.g., “fruits”). This task is often addressed by linguistically analyzing very large collections of text (Hearst, 1992; Kozareva et al., 2008; Etzioni et al., 2005; Pantel and Ravichandran, 2004; Pasca, 2004), often using hand-constructed or machine-learned shallow linguistic patterns to detect hyponym instances. A hyponym is a word or phrase whose semantic range Figure 1: Examples of SEAL’s input and output. English entities are reality TV shows, Chinese entities are popular Taiwanese foods, and Japanese entities are famous cartoon characters. is included within that of another word. For example, x is a hyponym of y if x is a (kind of) y. The opposite of hyponym is hypernym. In this paper, we evaluate a novel approach to this problem, embodied in a system called ASIA1 (Automatic Set Instance Acquirer). ASIA takes a semantic class name as input (e.g., “car makers”) and automatically outputs instances (e.g., “ford”, “nissan”, “toyota”). Unlike prior methods, ASIA makes heavy use of tools for web-based set expansion. Set expansion is the task of finding all instances of a set given a small number of example (seed) instances. ASIA uses SEAL (Wang and Cohen, 2007), a language-independent web-based system that performed extremely well on a large number of benchmark sets – given three correct seeds, SEAL obtained average MAP scores in the high 90’s for 36 benchmark problems, including a dozen test problems each for English, Chinese and Japanese. SEAL works well in part because it can efficiently find and process many semi-structured web documents containing instances of the set being expanded. Figure 1 shows some examples of SEAL’s input and output. SEAL has been recently extended to be robust to errors in its initial set of seeds (Wang et al., 1http://rcwang.com/asia 441 2008), and to use bootstrapping to iteratively improve its performance (Wang and Cohen, 2008). 
These extensions allow ASIA to extract instances of sets from the Web, as follows. First, given a semantic class name (e.g., “fruits”), ASIA uses a small set of language-dependent hyponym patterns (e.g., “fruits such as ”) to find a large but noisy set of seed instances. Second, ASIA uses the extended version of SEAL to expand the noisy set of seeds. ASIA’s approach is motivated by the conjecture that for many natural classes, the amount of information available in semi-structured documents on the Web is much larger than the amount of information available in free-text documents; hence, it is natural to attempt to augment search for set instances in free-text with semi-structured document analysis. We show that ASIA performs extremely well experimentally. On the 36 benchmarks used in (Wang and Cohen, 2007), which are relatively small closed sets (e.g., countries, constellations, NBA teams), ASIA has excellent performance for both recall and precision. On four additional English-language benchmark problems (US states, countries, singers, and common fish), we compare to recent work by Kozareva, Riloff, and Hovy (Kozareva et al., 2008), and show comparable or better performance on each of these benchmarks; this is notable because ASIA requires less information than the work of Kozareva et al (their system requires a concept name and a seed). We also compare ASIA on twelve additional benchmarks to the extended Wordnet 2.1 produced by Snow et al (Snow et al., 2006), and show that for these twelve sets, ASIA produces more than five times as many set instances with much higher precision (98% versus 70%). Another advantage of ASIA’s approach is that it is nearly language-independent: since the underlying set-expansion tools are language-independent, all that is needed to support a new target language is a new set of hyponym patterns for that language. In this paper, we present experimental results for Chinese and Japanese, as well as English, to demonstrate this language-independence. We present related work in Section 2, and explain our proposed approach for ASIA in Section 3. Section 4 presents the details of our experiments, as well as the experimental results. A comparison of results are illustrated in Section 5, and the paper concludes in Section 6. 2 Related Work There has been a significant amount of research done in the area of semantic class learning (aka lexical acquisition, lexicon induction, hyponym extraction, or open-domain information extraction). However, to the best of our knowledge, there is not a system that can perform set instance extraction in multiple languages given only the name of the set. Hearst (Hearst, 1992) presented an approach that utilizes hyponym patterns for extracting candidate instances given the name of a semantic set. The approach presented in Section 3.1 is based on this work, except that we extended it to two other languages: Chinese and Japanese. Pantel et al (Pantel and Ravichandran, 2004) presented an algorithm for automatically inducing names for semantic classes and for finding their instances by using “concept signatures” (statistics on co-occuring instances). Pasca (Pasca, 2004) presented a method for acquiring named entities in arbitrary categories using lexico-syntactic extraction patterns. Etzioni et al (Etzioni et al., 2005) presented the KnowItAll system that also utilizes hyponym patterns to extract class instances from the Web. 
All the systems mentioned rely on either a English part-of-speech tagger, a parser, or both, and hence are language-dependent. Kozareva et al (Kozareva et al., 2008) illustrated an approach that uses a single hyponym pattern combined with graph structures to learn semantic class from the Web. Section 5.1 shows that our approach is competitive experimentally; however, their system requires more information, as it uses the name of the semantic set and a seed instance. Pasca (Pas¸ca, 2007b; Pas¸ca, 2007a) illustrated a set expansion approach that extracts instances from Web search queries given a set of input seed instances. This approach is similar in flavor to SEAL but, addresses a different task from that addressed here: for ASIA the user provides no seeds, but instead provides the name of the set being expanded. We compare to Pasca’s system in Section 5.2. Snow et al (Snow et al., 2006) use known hypernym/hyponym pairs to generate training data for a machine-learning system, which then learns many lexico-syntactic patterns. The patterns learned are based on English-language dependency parsing. We compare to Snow et al’s results in Section 5.3. 442 3 Proposed Approach ASIA is composed of three main components: the Noisy Instance Provider, the Noisy Instance Expander, and the Bootstrapper. Given a semantic class name, the Provider extracts a initial set of noisy candidate instances using hand-coded patterns, and ranks the instances by using a simple ranking model. The Expander expands and ranks the instances using evidence from semistructured web documents, such that irrelevant ones are ranked lower in the list. The Bootstrapper enhances the quality and completeness of the ranked list by using an unsupervised iterative technique. Note that the Expander and Bootstrapper rely on SEAL to accomplish their goals. In this section, we first describe the Noisy Instance Provider, then we briefly introduce SEAL, followed by the Noisy Instance Expander, and finally, the Bootstrapper. 3.1 Noisy Instance Provider Noisy Instance Provider extracts candidate instances from free text (i.e., web snippets) using the methods presented in Hearst’s early work (Hearst, 1992). Hearst exploited several patterns for identifying hyponymy relation (e.g., such author as Shakespeare) that many current state-ofthe-art systems (Kozareva et al., 2008; Pantel and Ravichandran, 2004; Etzioni et al., 2005; Pasca, 2004) are using. However, unlike all of those systems, ASIA does not use any NLP tool (e.g., partsof-speech tagger, parser) or rely on capitalization for extracting candidates (since we wanted ASIA to be as language-independent as possible). This leads to sets of instances that are noisy; however, we will show that set expansion and re-ranking can improve the initial sets dramatically. Below, we will refer to the initial set of noisy instances extracted by the Provider as the initial set. In more detail, the Provider first constructs a few queries of hyponym phrase by using a semantic class name and a set of pre-defined hyponym patterns. For every query, the Provider retrieves a hundred snippets from Yahoo!, and splits each snippet into multiple excerpts (a snippet often contains multiple continuous excerpts from its web page). For each excerpt, the Provider extracts all chunks of characters that would then be used as candidate instances. Here, we define a chunk as a sequence of characters bounded by punctuation marks or the beginning and end of an excerpt. 
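A minimal sketch of this first step, in Python, assuming the snippets have already been retrieved from the search engine (the punctuation set and the excerpt below are illustrative assumptions, and <C> is the class-name placeholder used in the hyponym patterns of Figure 2):

```python
import re

# Illustrative chunk boundaries; the paper only states that chunks are bounded
# by punctuation marks or the start/end of an excerpt.
PUNCT = re.compile(r'[.,;:!?()\[\]"\u3001\u3002]')

def build_queries(class_name, patterns):
    """Instantiate hyponym patterns, e.g. '<C> such as' -> 'fruits such as'."""
    return [p.replace('<C>', class_name) for p in patterns]

def extract_chunks(excerpt):
    """Split an excerpt into chunks bounded by punctuation or excerpt edges."""
    return [c.strip() for c in PUNCT.split(excerpt) if c.strip()]

# A hypothetical excerpt returned for the query 'fruits such as':
print(build_queries('fruits', ['<C> such as']))
print(extract_chunks('fruits such as apples, oranges and pears.'))
# ['fruits such as apples', 'oranges and pears']
```

The resulting chunks are deliberately noisy; the ranking model given next and the later expansion steps are what clean them up.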
Figure 2: Hyponym patterns in English, Chinese, and Japanese. In each pattern, <C> is a placeholder for the semantic class name and <I> is a placeholder for its instances. Lastly, the Provider ranks each candidate instance x based on its weight assigned by the simple ranking model presented below: weight(x) = sf (x, S) |S| × ef (x, E) |E| × wcf (x, E) |C| where S is the set of snippets, E is the set of excerpts, and C is the set of chunks. sf (x, S) is the snippet frequency of x (i.e., the number of snippets containing x) and ef (x, E) is the excerpt frequency of x. Furthermore, wcf (x, E) is the weighted chunk frequency of x, which is defined as follows: wcf (x, E) = X e∈E X x∈e 1 dist(x, e) + 1 where dist(x, e) is the number of characters between x and the hyponym phrase in excerpt e. This model weights every occurrence of x based on the assumption that chunks closer to a hyponym phrase are usually more important than those further away. It also heavily rewards frequency, as our assumption is that the most common instances will be more useful as seeds for SEAL. Figure 2 shows the hyponym patterns we use for English, Chinese, and Japanese. There are two types of hyponym patterns: The first type are the ones that require the class name C to precede its instance I (e.g., C such as I), and the second type are the opposite ones (e.g., I and other C). In order to reduce irrelevant chunks, when excerpts were extracted, the Provider drops all characters preceding the hyponym phrase in excerpts that contain the first type, and also drops all characters following the hyponym phrase in excerpts that contain the second type. For some semantic class names (e.g., “cmu buildings”), there are no web 443 documents containing any of the hyponym-phrase queries that were constructed using the name. In this case, the Provider turns to a back-off strategy which simply treats the semantic class name as the hyponym phrase and extracts/ranks all chunks cooccurring with the class name in the excerpts. 3.2 Set Expander - SEAL In this paper, we rely on a set expansion system named SEAL (Wang and Cohen, 2007), which stands for Set Expander for Any Language. The system accepts as input a few seeds of some target set S (e.g., “fruits”) and automatically finds other probable instances (e.g., “apple”, “banana”) of S in web documents. As its name implies, SEAL is independent of document languages: both the written (e.g., English) and the markup language (e.g., HTML). SEAL is a research system that has shown good performance in published results (Wang and Cohen, 2007; Wang et al., 2008; Wang and Cohen, 2008). Figure 1 shows some examples of SEAL’s input and output. In more detail, SEAL contains three major components: the Fetcher, Extractor, and Ranker. The Fetcher is responsible for fetching web documents, and the URLs of the documents come from top results retrieved from the search engine using the concatenation of all seeds as the query. This ensures that every fetched web page contains all seeds. The Extractor automatically constructs “wrappers” (i.e. page-specific extraction rules) for each page that contains the seeds. Every wrapper comprises two character strings that specify the left and right contexts necessary for extracting candidate instances. These contextual strings are maximally-long contexts that bracket at least one occurrence of every seed string on a page. All other candidate instances bracketed by these contextual strings derived from a particular page are extracted from the same page. 
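A much simplified sketch of this wrapper induction is given below; unlike SEAL's trie-based search for maximally-long contexts, it compares only the first occurrence of each seed, so it illustrates the idea rather than reproducing the actual Extractor (the page and seeds are hypothetical):

```python
import re

def common_prefix(strings):
    """Longest common prefix of a list of strings."""
    if not strings:
        return ''
    lo, hi = min(strings), max(strings)
    i = 0
    while i < len(lo) and lo[i] == hi[i]:
        i += 1
    return lo[:i]

def common_suffix(strings):
    return common_prefix([s[::-1] for s in strings])[::-1]

def induce_wrapper(page, seeds):
    """Left/right contexts shared by (the first occurrence of) every seed."""
    lefts, rights = [], []
    for seed in seeds:
        pos = page.find(seed)
        if pos < 0:
            return None                    # SEAL only keeps pages with all seeds
        lefts.append(page[:pos])
        rights.append(page[pos + len(seed):])
    return common_suffix(lefts), common_prefix(rights)

def apply_wrapper(page, wrapper):
    left, right = wrapper
    return re.findall(re.escape(left) + r'(.+?)' + re.escape(right), page)

page = '<li>ford</li><li>nissan</li><li>toyota</li><li>honda</li>'
wrapper = induce_wrapper(page, ['ford', 'nissan', 'honda'])
print(wrapper)                              # ('<li>', '</li>')
print(apply_wrapper(page, wrapper))         # ['ford', 'nissan', 'toyota', 'honda']
```

SEAL derives such wrappers for every fetched page containing all seeds, and the candidates they extract feed the Ranker described next.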
After the candidates are extracted, the Ranker constructs a graph that models all the relations between documents, wrappers, and candidate instances. Figure 3 shows an example graph where each node di represents a document, wi a wrapper, and mi a candidate instance. The Ranker performs Random Walk with Restart (Tong et al., 2006) on this graph (where the initial “restart” set is the set of seeds) until all node weights converge, and then ranks nodes by their final score; thus nodes are weighted higher if they are connected to many Figure 3: An example graph constructed by SEAL. Every edge from node x to y actually has an inverse relation edge from node y to x that is not shown here (e.g., m1 is extracted by w1). seed nodes by many short, low fan-out paths. The final expanded set contains all candidate instance nodes, ranked by their weights in the graph. 3.3 Noisy Instance Expander Wang (Wang et al., 2008) illustrated that it is feasible to perform set expansion on noisy input seeds. The paper showed that the noisy output of any Question Answering system for list questions can be improved by using a noise-resistant version of SEAL (An example of a list question is “Who were the husbands of Heddy Lamar?”). Since the initial set of candidate instances obtained using Hearst’s method are noisy, the Expander expands them by performing multiple iterations of set expansion using the noise-resistant SEAL. For every iteration, the Expander performs set expansion on a static collection of web pages. This collection is pre-fetched by querying Google and Yahoo! using the input class name and words such as “list”, “names”, “famous”, and “common” for discovering web pages that might contain lists of the input class. In the first iteration, the Expander expands instances with scores of at least k in the initial set. In every upcoming iteration, it expands instances obtained in the last iteration that have scores of at least k and that also exist in the initial set. We have determined k to be 0.4 based on our development set2. This process repeats until the set of seeds for ith iteration is identical to that of (i −1)th iteration. There are several differences between the original SEAL and the noise-resistant SEAL. The most important difference is the Extractor. In the origi2A collection of closed-set lists such as planets, Nobel prizes, and continents in English, Chinese and Japanese 444 nal SEAL, the Extractor requires the longest common contexts to bracket at least one instance of every seed per web page. However, when seeds are noisy, such common contexts usually do not exist. The Extractor in noise-resistant SEAL solves this problem by requiring the contexts to bracket at least one instance of a minimum of two seeds, rather than every seed. This is implemented using a trie-based method described briefly in the original SEAL paper (Wang and Cohen, 2007). In this paper, the Expander utilizes a slightly-modified version of the Extractor, which requires the contexts to bracket as many seed instances as possible. This idea is based on the assumption that irrelevant instances usually do not have common contexts; whereas relevant ones do. 3.4 Bootstrapper Bootstrapping (Etzioni et al., 2005; Kozareva, 2006; Nadeau et al., 2006) is an unsupervised iterative process in which a system continuously consumes its own outputs to improve its own performance. Wang (Wang and Cohen, 2008) showed that it is feasible to bootstrap the results of set expansion to improve the quality of a list. 
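The Random Walk with Restart scoring used by the Ranker (Section 3.2), and repeated by the Bootstrapper on its growing graph, can be sketched as follows; the tiny example graph and the restart probability are illustrative assumptions:

```python
def random_walk_with_restart(edges, seeds, restart=0.15, iters=100):
    """Score graph nodes by a random walk restarting at the seed nodes.
    edges maps a node to its neighbours; documents, wrappers and candidate
    instances all appear as nodes.  The restart value is an assumption."""
    nodes = set(edges)
    for nbrs in edges.values():
        nodes.update(nbrs)
    adj = {n: set() for n in nodes}
    for u, nbrs in edges.items():        # every edge also has an inverse edge,
        for v in nbrs:                   # as noted for the graph of Figure 3
            adj[u].add(v)
            adj[v].add(u)
    restart_vec = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    score = dict(restart_vec)
    for _ in range(iters):               # fixed sweeps stand in for convergence
        new = {n: restart * restart_vec[n] for n in nodes}
        for u in nodes:
            if not adj[u]:
                continue
            share = (1.0 - restart) * score[u] / len(adj[u])
            for v in adj[u]:
                new[v] += share
        score = new
    return score

# Hypothetical graph: two documents, two wrappers, four candidate mentions.
edges = {'d1': ['w1'], 'd2': ['w2'],
         'w1': ['ford', 'nissan', 'toyota'],
         'w2': ['ford', 'honda']}
scores = random_walk_with_restart(edges, seeds={'ford', 'nissan'})
candidates = [n for n in scores if n not in edges]     # instance nodes only
print(sorted(candidates, key=scores.get, reverse=True))
```

In the bootstrapping approach of (Wang and Cohen, 2008), which ASIA builds on, the graph is grown from iteration to iteration and the walk is rerun to obtain the final ranking.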
The paper introduces an iterative version of SEAL called iSEAL, which expands a list in multiple iterations. In each iteration, iSEAL expands a few candidates extracted in previous iterations and aggregates statistics. The Bootstrapper utilizes iSEAL to further improve the quality of the list returned by the Expander. In every iteration, the Bootstrapper retrieves 25 web pages by using the concatenation of three seeds as query to each of Google and Yahoo!. In the first iteration, the Bootstrapper expands randomly-selected instances returned by the Expander that exist in the initial set. In every upcoming iteration, the Bootstrapper expands randomlyselected unsupervised instances obtained in the last iteration that also exist in the initial set. This process terminates when all possible seed combinations have been consumed or five iterations3 have been reached, whichever comes first. Notice that from iteration to iteration, statistics are aggregated by growing the graph described in Section 3.2. We perform Random Walk with Restart (Tong et al., 2006) on this graph to determine the final ranking of the extracted instances. 3To keep the overall runtime minimal. 4 Experiments 4.1 Datasets We evaluated our approach using the evaluation set presented in (Wang and Cohen, 2007), which contains 36 manually constructed lists across three different languages: English, Chinese, and Japanese (12 lists per language). Each list contains all instances of a particular semantic class in a certain language, and each instance contains a set of synonyms (e.g., USA, America). There are a total of 2515 instances, with an average of 70 instances per semantic class. Figure 4 shows the datasets and their corresponding semantic class names that we use in our experiments. 4.2 Evaluation Metric Since the output of ASIA is a ranked list of extracted instances, we choose mean average precision (MAP) as our evaluation metric. MAP is commonly used in the field of Information Retrieval for evaluating ranked lists because it is sensitive to the entire ranking and it contains both recall and precision-oriented aspects. The MAP for multiple ranked lists is simply the mean value of average precisions calculated separately for each ranked list. We define the average precision of a single ranked list as: AvgPrec(L) = |L| X r=1 Prec(r) × isFresh(r) Total # of Correct Instances where L is a ranked list of extracted instances, r is the rank ranging from 1 to |L|, Prec(r) is the precision at rank r. isFresh(r) is a binary function for ensuring that, if a list contains multiple synonyms of the same instance, we do not evaluate that instance more than once. More specifically, the function returns 1 if a) the synonym at r is correct, and b) it is the highest-ranked synonym of its instance in the list; it returns 0 otherwise. 4.3 Experimental Results For each semantic class in our dataset, the Provider first produces a noisy list of candidate instances, using its corresponding class name shown in Figure 4. This list is then expanded by the Expander and further improved by the Bootstrapper. We present our experimental results in Table 1. As illustrated, although the Provider performs badly, the Expander substantially improves the 445 Figure 4: The 36 datasets and their semantic class names used as inputs to ASIA in our experiments. English Dataset NP Chinese Dataset NP Japanese Dataset NP NP NP +NE NP NP +NE NP NP +NE # NP +BS +NE +BS # NP +BS +NE +BS # NP +BS +NE +BS 1. 0.22 0.83 0.82 0.87 13. 0.09 0.75 0.80 0.80 25. 
0.20 0.63 0.71 0.76 2. 0.31 1.00 1.00 1.00 14. 0.08 0.99 0.80 0.89 26. 0.20 0.40 0.90 0.96 3. 0.54 0.99 0.99 0.98 15. 0.29 0.66 0.84 0.91 27. 0.16 0.96 0.97 0.96 4. 0.48 1.00 1.00 1.00 *16. 0.09 0.00 0.93 0.93 *28. 0.01 0.00 0.80 0.87 5. 0.54 1.00 1.00 1.00 17. 0.21 0.00 1.00 1.00 29. 0.09 0.00 0.95 0.95 6. 0.64 0.98 1.00 1.00 *18. 0.00 0.00 0.19 0.23 *30. 0.02 0.00 0.73 0.73 7. 0.32 0.82 0.98 0.97 19. 0.11 0.90 0.68 0.89 31. 0.20 0.49 0.83 0.89 8. 0.41 1.00 1.00 1.00 20. 0.18 0.00 0.94 0.97 32. 0.09 0.00 0.88 0.88 9. 0.81 1.00 1.00 1.00 21. 0.64 1.00 1.00 1.00 33. 0.07 0.00 0.95 1.00 *10. 0.00 0.00 0.00 0.00 22. 0.08 0.00 0.67 0.80 34. 0.04 0.32 0.98 0.97 11. 0.11 0.62 0.51 0.76 23. 0.47 1.00 1.00 1.00 35. 0.15 1.00 1.00 1.00 12. 0.01 0.00 0.30 0.30 24. 0.60 1.00 1.00 1.00 36. 0.20 0.90 1.00 1.00 Avg. 0.37 0.77 0.80 0.82 Avg. 0.24 0.52 0.82 0.87 Avg. 0.12 0.39 0.89 0.91 Table 1: Performance of set instance extraction for each dataset measured in MAP. NP is the Noisy Instance Provider, NE is the Noisy Instance Expander, and BS is the Bootstrapper. quality of the initial list, and the Bootstrapper then enhances it further more. On average, the Expander improves the performance of the Provider from 37% to 80% for English, 24% to 82% for Chinese, and 12% to 89% for Japanese. The Bootstrapper then further improves the performance of the Expander to 82%, 87% and 91% respectively. In addition, the results illustrate that the Bootstrapper is also effective even without the Expander; it directly improves the performance of the Provider from 37% to 77% for English, 24% to 52% for Chinese, and 12% to 39% for Japanese. The simple back-off strategy seems to be effective as well. There are five datasets (marked with * in Table 1) of which their hyponym phrases return zero web documents. For those datasets, ASIA automatically uses the back-off strategy described in Section 3.1. Considering only those five datasets, the Expander, on average, improves the performance of the Provider from 2% to 53% and the Bootstrapper then improves it to 55%. 5 Comparison to Prior Work We compare ASIA’s performance to the results of three previously published work. We use the best-configured ASIA (NP+NE+BS) for all comparisons, and we present the comparison results in this section. 5.1 (Kozareva et al., 2008) Table 2 shows a comparison of our extraction performance to that of Kozareva (Kozareva et al., 2008). They report results on four tasks: US states, countries, singers, and common fish. We evaluated our results manually. The results indicate that ASIA outperforms theirs for all four datasets that they reported. Note that the input to their system is a semantic class name plus one seed instance; whereas, the input to ASIA is only the class name. In terms of system runtime, for each semantic class, Kozareva et al reported that their extraction process usually finished overnight; however, ASIA usually finished within a minute. 446 N Kozareva ASIA N Kozareva ASIA US States Countries 25 1.00 1.00 50 1.00 1.00 50 1.00 1.00 100 1.00 1.00 64 0.78 0.78 150 1.00 1.00 200 0.90 0.93 300 0.61 0.67 323 0.57 0.62 Singers Common Fish 10 1.00 1.00 10 1.00 1.00 25 1.00 1.00 25 1.00 1.00 50 0.97 1.00 50 1.00 1.00 75 0.96 1.00 75 0.93 1.00 100 0.96 1.00 100 0.84 1.00 150 0.95 0.97 116 0.80 1.00 180 0.91 0.96 Table 2: Set instance extraction performance compared to Kozareva et al. We report our precision for all semantic classes and at the same ranks reported in their work. 
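For completeness, the MAP metric of Section 4.2, including the isFresh handling of synonyms, can be sketched as follows; counting only fresh hits inside Prec(r) is one reading of the formula, and the gold list below is a hypothetical illustration:

```python
def average_precision(ranked, gold):
    """ranked: system output, best first.  gold: list of synonym sets, one per
    gold instance; an instance is credited only at its highest-ranked synonym
    (the isFresh condition), and the denominator is the number of gold instances."""
    fresh_hits, ap, credited = 0, 0.0, set()
    for r, item in enumerate(ranked, start=1):
        for i, synonyms in enumerate(gold):
            if item in synonyms and i not in credited:
                credited.add(i)
                fresh_hits += 1
                ap += fresh_hits / r        # Prec(r) * isFresh(r)
                break
    return ap / len(gold)

def mean_average_precision(runs):
    """runs: (ranked_list, gold) pairs, one per semantic class."""
    return sum(average_precision(r, g) for r, g in runs) / len(runs)

# Hypothetical gold standard in which 'USA' and 'America' are synonyms:
gold = [{'USA', 'America'}, {'Japan'}, {'France'}]
print(average_precision(['USA', 'Spain', 'America', 'Japan'], gold))   # 0.5
```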
5.2 (Pas¸ca, 2007b) We compare ASIA to Pasca (Pas¸ca, 2007b) and present comparison results in Table 3. There are ten semantic classes in his evaluation dataset, and the input to his system for each class is a set of seed entities rather than a class name. We evaluate every instance manually for each class. The results show that, on average, ASIA performs better. However, we should emphasize that for the three classes: movie, person, and video game, ASIA did not initially converge to the correct instance list given the most natural concept name. Given “movies”, ASIA returns as instances strings like “comedy”, “action”, “drama”, and other kinds of movies. Given “video games”, it returns “PSP”, “Xbox”, “Wii”, etc. Given “people”, it returns “musicians”, “artists”, “politicians”, etc. We addressed this problem by simply re-running ASIA with a more specific class name (i.e., the first one returned); however, the result suggests that future work is needed to support automatic construction of hypernym hierarchy using semi-structured web documents. 5.3 (Snow et al., 2006) Snow (Snow et al., 2006) has extended the WordNet 2.1 by adding thousands of entries (synsets) at a relatively high precision. They have made several versions of extended WordNet available4. For comparison purposes, we selected the version (+30K) that achieved the best F-score in their experiments. 4http://ai.stanford.edu/˜rion/swn/ Precision @ Target Class System 25 50 100 150 250 Cities Pasca 1.00 0.96 0.88 0.84 0.75 ASIA 1.00 1.00 0.97 0.98 0.96 Countries Pasca 1.00 0.98 0.95 0.82 0.60 ASIA 1.00 1.00 1.00 1.00 0.79 Drugs Pasca 1.00 1.00 0.96 0.92 0.75 ASIA 1.00 1.00 1.00 1.00 0.98 Food Pasca 0.88 0.86 0.82 0.78 0.62 ASIA 1.00 1.00 0.93 0.95 0.90 Locations Pasca 1.00 1.00 1.00 1.00 1.00 ASIA 1.00 1.00 1.00 1.00 1.00 Newspapers Pasca 0.96 0.98 0.93 0.86 0.54 ASIA 1.00 1.00 0.98 0.99 0.85 Universities Pasca 1.00 1.00 1.00 1.00 0.99 ASIA 1.00 1.00 1.00 1.00 1.00 Movies Pasca 0.92 0.90 0.88 0.84 0.79 Comedy Movies ASIA 1.00 1.00 1.00 1.00 1.00 People Pasca 1.00 1.00 1.00 1.00 1.00 Jazz Musicians ASIA 1.00 1.00 1.00 0.94 0.88 Video Games Pasca 1.00 1.00 0.99 0.98 0.98 PSP Games ASIA 1.00 1.00 1.00 0.99 0.97 Pasca 0.98 0.97 0.94 0.90 0.80 Average ASIA 1.00 1.00 0.99 0.98 0.93 Table 3: Set instance extraction performance compared to Pasca. We report our precision for all semantic classes and at the same ranks reported in his work. For the experimental comparison, we focused on leaf semantic classes from the extended WordNet that have many hypernyms, so that a meaningful comparison could be made: specifically, we selected nouns that have at least three hypernyms, such that the hypernyms are the leaf nodes in the hypernym hierarchy of WordNet. Of these, 210 were extended by Snow. Preliminary experiments showed that (as in the experiments with Pasca’s classes above) ASIA did not always converge to the intended meaning; to avoid this problem, we instituted a second filter, and discarded ASIA’s results if the intersection of hypernyms from ASIA and WordNet constituted less than 50% of those in WordNet. About 50 of the 210 nouns passed this filter. Finally, we manually evaluated precision and recall of a randomly selected set of twelve of these 50 nouns. We present the results in Table 4. We used a fixed cut-off score5 of 0.3 to truncate the ranked list produced by ASIA, so that we can compute precision. 
Since only a few of these twelve nouns are closed sets, we cannot generally compute recall; instead, we define relative recall to be the ratio of correct instances to the union of correct instances from both systems. As shown in the results, ASIA has much higher precision, and much higher relative recall. When we evaluated Snow’s extended WordNet, we assumed all instances that 5Determined from our development set. 447 Snow’s Wordnet (+30k) Relative ASIA Relative Class Name # Right # Wrong Prec. Recall # Right # Wrong Prec. Recall Film Directors 4 4 0.50 0.01 457 0 1.00 1.00 Manias 11 0 1.00 0.09 120 0 1.00 1.00 Canadian Provinces 10 82 0.11 1.00 10 3 0.77 1.00 Signs of the Zodiac 12 10 0.55 1.00 12 0 1.00 1.00 Roman Emperors 44 4 0.92 0.47 90 0 1.00 0.96 Academic Departments 20 0 1.00 0.67 27 0 1.00 0.90 Choreographers 23 10 0.70 0.14 156 0 1.00 0.94 Elected Officials 5 102 0.05 0.31 12 0 1.00 0.75 Double Stars 11 1 0.92 0.46 20 0 1.00 0.83 South American Countries 12 1 0.92 1.00 12 0 1.00 1.00 Prizefighters 16 4 0.80 0.23 63 1 0.98 0.89 Newspapers 20 0 1.00 0.23 71 0 1.00 0.81 Average 15.7 18.2 0.70 0.47 87.5 0.3 0.98 0.92 Table 4: Set instance extraction performance compared to Snow et al. Figure 5: Examples of ASIA’s input and output. Input class for Chinese is “holidays” and for Japanese is “dramas”. were in the original WordNet are correct. The three incorrect instances of Canadian provinces from ASIA are actually the three Canadian territories. 6 Conclusions In this paper, we have shown that ASIA, a SEALbased system, extracts set instances with high precision and recall in multiple languages given only the set name. It obtains a high MAP score (87%) averaged over 36 benchmark problems in three languages (Chinese, Japanese, and English). Figure 5 shows some real examples of ASIA’s input and output in those three languages. ASIA’s approach is based on web-based set expansion using semi-structured documents, and is motivated by the conjecture that for many natural classes, the amount of information available in semi-structured documents on the Web is much larger than the amount of information available in free-text documents. This conjecture is given some support by our experiments: for instance, ASIA finds 457 instances of the set “film director” with perfect precision, whereas Snow et al’s state-of-the-art methods for extraction from free text extract only four correct instances, with only 50% precision. ASIA’s approach is also quite languageindependent. By adding a few simple hyponym patterns, we can easily extend the system to support other languages. We have also shown that Hearst’s method works not only for English, but also for other languages such as Chinese and Japanese. We note that the ability to construct semantic lexicons in diverse languages has obvious applications in machine translation. We have also illustrated that ASIA outperforms three other English systems (Kozareva et al., 2008; Pas¸ca, 2007b; Snow et al., 2006), even though many of these use more input than just a semantic class name. In addition, ASIA is also quite efficient, requiring only a few minutes of computation and couple hundreds of web pages per problem. In the future, we plan to investigate the possibility of constructing hypernym hierarchy automatically using semi-structured documents. We also plan to explore whether lexicons can be constructed using only the back-off method for hyponym extraction, to make ASIA completely language independent. 
We also wish to explore whether performance can be improved by simultaneously finding class instances in multiple languages (e.g., Chinese and English) while learning translations between the extracted instances. 7 Acknowledgments This work was supported by the Google Research Awards program. 448 References Oren Etzioni, Michael J. Cafarella, Doug Downey, Ana-Maria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artif. Intell., 165(1):91–134. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In In Proceedings of the 14th International Conference on Computational Linguistics, pages 539–545. Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of ACL-08: HLT, pages 1048–1056, Columbus, Ohio, June. Association for Computational Linguistics. Zornitsa Kozareva. 2006. Bootstrapping named entity recognition with automatically generated gazetteer lists. In EACL. The Association for Computer Linguistics. David Nadeau, Peter D. Turney, and Stan Matwin. 2006. Unsupervised named-entity recognition: Generating gazetteers and resolving ambiguity. In Luc Lamontagne and Mario Marchand, editors, Canadian Conference on AI, volume 4013 of Lecture Notes in Computer Science, pages 266–277. Springer. Marius Pas¸ca. 2007a. Organizing and searching the world wide web of facts – step two: harnessing the wisdom of the crowds. In WWW ’07: Proceedings of the 16th international conference on World Wide Web, pages 101–110, New York, NY, USA. ACM. Marius Pas¸ca. 2007b. Weakly-supervised discovery of named entities using web search queries. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 683–690, New York, NY, USA. ACM. Patrick Pantel and Deepak Ravichandran. 2004. Automatically labeling semantic classes. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 321–328, Boston, Massachusetts, USA, May 2 May 7. Association for Computational Linguistics. Marius Pasca. 2004. Acquisition of categorized named entities for web search. In CIKM ’04: Proceedings of the thirteenth ACM international conference on Information and knowledge management, pages 137–145, New York, NY, USA. ACM. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In ACL ’06: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, pages 801– 808, Morristown, NJ, USA. Association for Computational Linguistics. Hanghang Tong, Christos Faloutsos, and Jia-Yu Pan. 2006. Fast random walk with restart and its applications. In ICDM, pages 613–622. IEEE Computer Society. Richard C. Wang and William W. Cohen. 2007. Language-independent set expansion of named entities using the web. In ICDM, pages 342–350. IEEE Computer Society. Richard C. Wang and William W. Cohen. 2008. Iterative set expansion of named entities using the web. In ICDM, pages 1091–1096. IEEE Computer Society. Richard C. Wang, Nico Schlaefer, William W. Cohen, and Eric Nyberg. 2008. Automatic set expansion for list question answering. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 947–954, Honolulu, Hawaii, October. Association for Computational Linguistics. 449
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 450–458, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Extracting Lexical Reference Rules from Wikipedia Eyal Shnarch Computer Science Department Bar-Ilan University Ramat-Gan 52900, Israel [email protected] Libby Barak Dept. of Computer Science University of Toronto Toronto, Canada M5S 1A4 [email protected] Ido Dagan Computer Science Department Bar-Ilan University Ramat-Gan 52900, Israel [email protected] Abstract This paper describes the extraction from Wikipedia of lexical reference rules, identifying references to term meanings triggered by other terms. We present extraction methods geared to cover the broad range of the lexical reference relation and analyze them extensively. Most extraction methods yield high precision levels, and our rule-base is shown to perform better than other automatically constructed baselines in a couple of lexical expansion and matching tasks. Our rule-base yields comparable performance to WordNet while providing largely complementary information. 1 Introduction A most common need in applied semantic inference is to infer the meaning of a target term from other terms in a text. For example, a Question Answering system may infer the answer to a question regarding luxury cars from a text mentioning Bentley, which provides a concrete reference to the sought meaning. Aiming to capture such lexical inferences we followed (Glickman et al., 2006), which coined the term lexical reference (LR) to denote references in text to the specific meaning of a target term. They further analyzed the dataset of the First Recognizing Textual Entailment Challenge (Dagan et al., 2006), which includes examples drawn from seven different application scenarios. It was found that an entailing text indeed includes a concrete reference to practically every term in the entailed (inferred) sentence. The lexical reference relation between two terms may be viewed as a lexical inference rule, denoted LHS ⇒RHS. Such rule indicates that the left-hand-side term would generate a reference, in some texts, to a possible meaning of the right hand side term, as the Bentley ⇒luxury car example. In the above example the LHS is a hyponym of the RHS. Indeed, the commonly used hyponymy, synonymy and some cases of the meronymy relations are special cases of lexical reference. However, lexical reference is a broader relation. For instance, the LR rule physician ⇒medicine may be useful to infer the topic medicine in a text categorization setting, while an information extraction system may utilize the rule Margaret Thatcher ⇒United Kingdom to infer a UK announcement from the text “Margaret Thatcher announced”. To perform such inferences, systems need large scale knowledge bases of LR rules. A prominent available resource is WordNet (Fellbaum, 1998), from which classical relations such as synonyms, hyponyms and some cases of meronyms may be used as LR rules. An extension to WordNet was presented by (Snow et al., 2006). Yet, available resources do not cover the full scope of lexical reference. This paper presents the extraction of a largescale rule base from Wikipedia designed to cover a wide scope of the lexical reference relation. As a starting point we examine the potential of definition sentences as a source for LR rules (Ide and Jean, 1993; Chodorow et al., 1985; Moldovan and Rus, 2001). 
When writing a concept definition, one aims to formulate a concise text that includes the most characteristic aspects of the defined concept. Therefore, a definition is a promising source for LR relations between the defined concept and the definition terms. In addition, we extract LR rules from Wikipedia redirect and hyperlink relations. As a guideline, we focused on developing simple extraction methods that may be applicable for other Web knowledge resources, rather than focusing on Wikipedia-specific attributes. Overall, our rule base contains about 8 million candidate lexical ref450 erence rules. 1 Extensive analysis estimated that 66% of our rules are correct, while different portions of the rule base provide varying recall-precision tradeoffs. Following further error analysis we introduce rule filtering which improves inference performance. The rule base utility was evaluated within two lexical expansion applications, yielding better results than other automatically constructed baselines and comparable results to WordNet. A combination with WordNet achieved the best performance, indicating the significant marginal contribution of our rule base. 2 Background Many works on machine readable dictionaries utilized definitions to identify semantic relations between words (Ide and Jean, 1993). Chodorow et al. (1985) observed that the head of the defining phrase is a genus term that describes the defined concept and suggested simple heuristics to find it. Other methods use a specialized parser or a set of regular expressions tuned to a particular dictionary (Wilks et al., 1996). Some works utilized Wikipedia to build an ontology. Ponzetto and Strube (2007) identified the subsumption (IS-A) relation from Wikipedia’s category tags, while in Yago (Suchanek et al., 2007) these tags, redirect links and WordNet were used to identify instances of 14 predefined specific semantic relations. These methods depend on Wikipedia’s category system. The lexical reference relation we address subsumes most relations found in these works, while our extractions are not limited to a fixed set of predefined relations. Several works examined Wikipedia texts, rather than just its structured features. Kazama and Torisawa (2007) explores the first sentence of an article and identifies the first noun phrase following the verb be as a label for the article title. We reproduce this part of their work as one of our baselines. Toral and Mu˜noz (2007) uses all nouns in the first sentence. Gabrilovich and Markovitch (2007) utilized Wikipedia-based concepts as the basis for a high-dimensional meaning representation space. Hearst (1992) utilized a list of patterns indicative for the hyponym relation in general texts. Snow et al. (2006) use syntactic path patterns as features for supervised hyponymy and synonymy 1For download see Textual Entailment Resource Pool at the ACL-wiki (http://aclweb.org/aclwiki) classifiers, whose training examples are derived automatically from WordNet. They use these classifiers to suggest extensions to the WordNet hierarchy, the largest one consisting of 400K new links. Their automatically created resource is regarded in our paper as a primary baseline for comparison. Many works addressed the more general notion of lexical associations, or association rules (e.g. (Ruge, 1992; Rapp, 2002)). For example, The Beatles, Abbey Road and Sgt. Pepper would all be considered lexically associated. 
However this is a rather loose notion, which only indicates that terms are semantically “related” and are likely to co-occur with each other. On the other hand, lexical reference is a special case of lexical association, which specifies concretely that a reference to the meaning of one term may be inferred from the other. For example, Abbey Road provides a concrete reference to The Beatles, enabling to infer a sentence like “I listened to The Beatles” from “I listened to Abbey Road”, while it does not refer specifically to Sgt. Pepper. 3 Extracting Rules from Wikipedia Our goal is to utilize the broad knowledge of Wikipedia to extract a knowledge base of lexical reference rules. Each Wikipedia article provides a definition for the concept denoted by the title of the article. As the most concise definition we take the first sentence of each article, following (Kazama and Torisawa, 2007). Our preliminary evaluations showed that taking the entire first paragraph as the definition rarely introduces new valid rules while harming extraction precision significantly. Since a concept definition usually employs more general terms than the defined concept (Ide and Jean, 1993), the concept title is more likely to refer to terms in its definition rather than vice versa. Therefore the title is taken as the LHS of the constructed rule while the extracted definition term is taken as its RHS. As Wikipedia’s titles are mostly noun phrases, the terms we extract as RHSs are the nouns and noun phrases in the definition. The remainder of this section describes our methods for extracting rules from the definition sentence and from additional Wikipedia information. Be-Comp Following the general idea in (Kazama and Torisawa, 2007), we identify the ISA pattern in the definition sentence by extracting nominal complements of the verb ‘be’, taking 451 No. Extraction Rule James Eugene ”Jim” Carrey is a Canadian-American actor and comedian 1 Be-Comp Jim Carrey ⇒Canadian-American actor 2 Be-Comp Jim Carrey ⇒actor 3 Be-Comp Jim Carrey ⇒comedian Abbey Road is an album released by The Beatles 4 All-N Abbey Road ⇒The Beatles 5 Parenthesis Graph ⇒mathematics 6 Parenthesis Graph ⇒data structure 7 Redirect CPU ⇔Central processing unit 8 Redirect Receptors IgG ⇔Antibody 9 Redirect Hypertension ⇔Elevated blood-pressure 10 Link pet ⇒Domesticated Animal 11 Link Gestaltist ⇒Gestalt psychology Table 1: Examples of rule extraction methods them as the RHS of a rule whose LHS is the article title. While Kazama and Torisawa used a chunker, we parsed the definition sentence using Minipar (Lin, 1998b). Our initial experiments showed that parse-based extraction is more accurate than chunk-based extraction. It also enables us extracting additional rules by splitting conjoined noun phrases and by taking both the head noun and the complete base noun phrase as the RHS for separate rules (examples 1–3 in Table 1). All-N The Be-Comp extraction method yields mostly hypernym relations, which do not exploit the full range of lexical references within the concept definition. Therefore, we further create rules for all head nouns and base noun phrases within the definition (example 4). An unsupervised reliability score for rules extracted by this method is investigated in Section 4.3. Title Parenthesis A common convention in Wikipedia to disambiguate ambiguous titles is adding a descriptive term in parenthesis at the end of the title, as in The Siren (Musical), The Siren (sculpture) and Siren (amphibian). 
From such titles we extract rules in which the descriptive term inside the parenthesis is the RHS and the rest of the title is the LHS (examples 5–6). Redirect As any dictionary and encyclopedia, Wikipedia contains Redirect links that direct different search queries to the same article, which has a canonical title. For instance, there are 86 different queries that redirect the user to United States (e.g. U.S.A., America, Yankee land). Redirect links are hand coded, specifying that both terms refer to the same concept. We therefore generate a bidirectional entailment rule for each redirect link (examples 7–9). Link Wikipedia texts contain hyper links to articles. For each link we generate a rule whose LHS is the linking text and RHS is the title of the linked article (examples 10–11). In this case we generate a directional rule since links do not necessarily connect semantically equivalent entities. We note that the last three extraction methods should not be considered as Wikipedia specific, since many Web-like knowledge bases contain redirects, hyper-links and disambiguation means. Wikipedia has additional structural features such as category tags, structured summary tablets for specific semantic classes, and articles containing lists which were exploited in prior work as reviewed in Section 2. As shown next, the different extraction methods yield different precision levels. This may allow an application to utilize only a portion of the rule base whose precision is above a desired level, and thus choose between several possible recallprecision tradeoffs. 4 Extraction Methods Analysis We applied our rule extraction methods over a version of Wikipedia available in a database constructed by (Zesch et al., 2007)2. The extraction yielded about 8 million rules altogether, with over 2.4 million distinct RHSs and 2.8 million distinct LHSs. As expected, the extracted rules involve mostly named entities and specific concepts, typically covered in encyclopedias. 4.1 Judging Rule Correctness Following the spirit of the fine-grained human evaluation in (Snow et al., 2006), we randomly sampled 800 rules from our rule-base and presented them to an annotator who judged them for correctness, according to the lexical reference notion specified above. In cases which were too difficult to judge the annotator was allowed to abstain, which happened for 20 rules. 66% of the remaining rules were annotated as correct. 200 rules from the sample were judged by another annotator for agreement measurement. The resulting Kappa score was 0.7 (substantial agreement (Landis and 2English version from February 2007, containing 1.6 million articles. www.ukp.tu-darmstadt.de/software/JWPL 452 Extraction Per Method Accumulated Method P Est. #Rules P %obtained Redirect 0.87 1,851,384 0.87 31 Be-Comp 0.78 1,618,913 0.82 60 Parenthesis 0.71 94,155 0.82 60 Link 0.7 485,528 0.80 68 All-N 0.49 1,580,574 0.66 100 Table 2: Manual analysis: precision and estimated number of correct rules per extraction method, and precision and % of correct rules obtained of rule-sets accumulated by method. Koch, 1997)), either when considering all the abstained rules as correct or as incorrect. The middle columns of Table 2 present, for each extraction method, the obtained percentage of correct rules (precision) and their estimated absolute number. This number is estimated by multiplying the number of annotated correct rules for the extraction method by the sampling proportion. 
In total, we estimate that our resource contains 5.6 million correct rules. For comparison, Snow’s published extension to WordNet3, which covers similar types of terms but is restricted to synonyms and hyponyms, includes 400,000 relations. The right part of Table 2 shows the performance figures for accumulated rule bases, created by adding the extraction methods one at a time in order of their precision. % obtained is the percentage of correct rules in each rule base out of the total number of correct rules extracted jointly by all methods (the union set). We can see that excluding the All-N method all extraction methods reach quite high precision levels of 0.7-0.87, with accumulated precision of 0.84. By selecting only a subset of the extraction methods, according to their precision, one can choose different recall-precision tradeoff points that suit application preferences. The less accurate All-N method may be used when high recall is important, accounting for 32% of the correct rules. An examination of the paths in All-N reveals, beyond standard hyponymy and synonymy, various semantic relations that satisfy lexical reference, such as Location, Occupation and Creation, as illustrated in Table 3. Typical relations covered by Redirect and Link rules include 3http://ai.stanford.edu/∼rion/swn/ 4As a non-comparable reference, Snow’s fine-grained evaluation showed a precision of 0.84 on 10K rules and 0.68 on 20K rules; however, they were interested only in the hyponym relation while we evaluate our rules according to the broader LR relation. synonyms (NY State Trooper ⇒New York State Police), morphological derivations (irritate ⇒irritation), different spellings or naming (Pytagoras ⇒Pythagoras) and acronyms (AIS ⇒Alarm Indication Signal). 4.2 Error Analysis We sampled 100 rules which were annotated as incorrect and examined the causes of errors. Figure 1 shows the distribution of error types. Wrong NP part - The most common error (35% of the errors) is taking an inappropriate part of a noun phrase (NP) as the rule right hand side (RHS). As described in Section 3, we create two rules from each extracted NP, by taking both the head noun and the complete base NP as RHSs. While both rules are usually correct, there are cases in which the left hand side (LHS) refers to the NP as a whole but not to part of it. For example, Margaret Thatcher refers to United Kingdom but not to Kingdom. In Section 5 we suggest a filtering method which addresses some of these errors. Future research may exploit methods for detecting multi-words expressions. All-N pattern errors 13% Transparent head 11% Wrong NP part 35% Technical errors 10% Dates and Places 5% Link errors 5% Redirect errors 5% Related but not Referring 16% Figure 1: Error analysis: type of incorrect rules Related but not Referring - Although all terms in a definition are highly related to the defined concept, not all are referred by it. For example the origin of a person (*The Beatles ⇒Liverpool5) or family ties such as ‘daughter of’ or ‘sire of’. All-N errors - Some of the articles start with a long sentence which may include information that is not directly referred by the title of the article. For instance, consider *Interstate 80 ⇒California from “Interstate 80 runs from California to New Jersey”. In Section 4.3 we further analyze this type of error and point at a possible direction for addressing it. 
Transparent head - This is the phenomenon in which the syntactic head of a noun phrase does 5The asterisk denotes an incorrect rule 453 Relation Rule Path Pattern Location Lovek ⇒Cambodia Lovek city in Cambodia Occupation Thomas H. Cormen ⇒computer science Thomas H. Cormen professor of computer science Creation Genocidal Healer ⇒James White Genocidal Healer novel by James White Origin Willem van Aelst ⇒Dutch Willem van Aelst Dutch artist Alias Dean Moriarty ⇒Benjamin Linus Dean Moriarty is an alias of Benjamin Linus on Lost. Spelling Egushawa ⇒Agushaway Egushawa, also spelled Agushaway... Table 3: All-N rules exemplifying various types of LR relations not bear its primary meaning, while it has a modifier which serves as the semantic head (Fillmore et al., 2002; Grishman et al., 1986). Since parsers identify the syntactic head, we extract an incorrect rule in such cases. For instance, deriving *Prince William ⇒member instead of Prince William ⇒ British Royal Family from “Prince William is a member of the British Royal Family”. Even though we implemented the common solution of using a list of typical transparent heads, this solution is partial since there is no closed set of such phrases. Technical errors - Technical extraction errors were mainly due to erroneous identification of the title in the definition sentence or mishandling nonEnglish texts. Dates and Places - Dates and places where a certain person was born at, lived in or worked at often appear in definitions but do not comply to the lexical reference notion (*Galileo Galilei ⇒ 15 February 1564). Link errors - These are usually the result of wrong assignment of the reference direction. Such errors mostly occur when a general term, e.g. revolution, links to a more specific albeit typical concept, e.g. French Revolution. Redirect errors - These may occur in some cases in which the extracted rule is not bidirectional. E.g. *Anti-globalization ⇒Movement of Movements is wrong but the opposite entailment direction is correct, as Movement of Movements is a popular term in Italy for Anti-globalization. 4.3 Scoring All-N Rules We observed that the likelihood of nouns mentioned in a definition to be referred by the concept title depends greatly on the syntactic path connecting them (which was exploited also in (Snow et al., 2006)). For instance, the path produced by Minipar for example 4 in Table 1 is title subj ←−album vrel −→released by−subj −→by pcomp−n −→ noun. In order to estimate the likelihood that a syntactic path indicates lexical reference we collected from Wikipedia all paths connecting a title to a noun phrase in the definition sentence. We note that since there is no available resource which covers the full breadth of lexical reference we could not obtain sufficiently broad supervised training data for learning which paths correspond to correct references. This is in contrast to (Snow et al., 2005) which focused only on hyponymy and synonymy relations and could therefore extract positive and negative examples from WordNet. We therefore propose the following unsupervised reference likelihood score for a syntactic path p within a definition, based on two counts: the number of times p connects an article title with a noun in its definition, denoted by Ct(p), and the total number of p’s occurrences in Wikipedia definitions, C(p). The score of a path is then defined as Ct(p) C(p) . 
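A minimal sketch of this path score, assuming the dependency paths have already been collected from the parsed definitions (the path string is an informal rendering, and the counts are hypothetical, chosen only to reproduce the 0.98 reported below for example 4):

```python
from collections import Counter

def path_reliability(path_occurrences):
    """path_occurrences: one (path, connects_title) pair per noun pair linked by
    a dependency path inside a definition sentence; connects_title is True when
    one endpoint is the article title.  Returns Ct(p) / C(p) for every path p."""
    c, ct = Counter(), Counter()
    for path, connects_title in path_occurrences:
        c[path] += 1
        if connects_title:
            ct[path] += 1
    return {p: ct[p] / c[p] for p in c}

# Hypothetical counts for the path of example 4 in Table 1:
p = 'subj <- album -> vrel -> released -> by -> pcomp-n'
occurrences = [(p, True)] * 98 + [(p, False)] * 2
print(path_reliability(occurrences))   # {p: 0.98}
```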
The rational for this score is that C(p) −Ct(p) corresponds to the number of times in which the path connects two nouns within the definition, none of which is the title. These instances are likely to be non-referring, since a concise definition typically does not contain terms that can be inferred from each other. Thus our score may be seen as an approximation for the probability that the two nouns connected by an arbitrary occurrence of the path would satisfy the reference relation. For instance, the path of example 4 obtained a score of 0.98. We used this score to sort the set of rules extracted by the All-N method and split the sorted list into 3 thirds: top, middle and bottom. As shown in Table 4, this obtained reasonably high precision for the top third of these rules, relative to the other two thirds. This precision difference indicates that our unsupervised path score provides useful information about rule reliability. It is worth noting that in our sample 57% of AllN errors, 62% of Related but not Referring incorrect rules and all incorrect rules of type Dates and 454 Extraction Per Method Accumulated Method P Est. #Rules P %obtained All-Ntop 0.60 684,238 0.76 83 All-Nmiddle 0.46 380,572 0.72 90 All-Nbottom 0.41 515,764 0.66 100 Table 4: Splitting All-N extraction method into 3 sub-types. These three rows replace the last row of Table 2 Places were extracted by the All-Nbottom method and thus may be identified as less reliable. However, this split was not observed to improve performance in the application oriented evaluations of Section 6. Further research is thus needed to fully exploit the potential of the syntactic path as an indicator for rule correctness. 5 Filtering Rules Following our error analysis, future research is needed for addressing each specific type of error. However, during the analysis we observed that all types of erroneous rules tend to relate terms that are rather unlikely to co-occur together. We therefore suggest, as an optional filter, to recognize such rules by their co-occurrence statistics using the common Dice coefficient: 2 · C(LHS, RHS) C(LHS) + C(RHS) where C(x) is the number of articles in Wikipedia in which all words of x appear. In order to partially overcome the Wrong NP part error, identified in Section 4.2 to be the most common error, we adjust the Dice equation for rules whose RHS is also part of a larger noun phrase (NP): 2 · (C(LHS, RHS) −C(LHS, NPRHS)) C(LHS) + C(RHS) where NPRHS is the complete NP whose part is the RHS. This adjustment counts only cooccurrences in which the LHS appears with the RHS alone and not with the larger NP. This substantially reduces the Dice score for those cases in which the LHS co-occurs mainly with the full NP. Given the Dice score rules whose score does not exceed a threshold may be filtered. For example, the incorrect rule *aerial tramway ⇒car was filtered, where the correct RHS for this LHS is the complete NP cable car. Another filtered rule is magic ⇒cryptography which is correct only for a very idiosyncratic meaning.6 We also examined another filtering score, the cosine similarity between the vectors representing the two rule sides in LSA (Latent Semantic Analysis) space (Deerwester et al., 1990). However, as the results with this filter resemble those for Dice we present results only for the simpler Dice filter. 6 Application Oriented Evaluations Our primary application oriented evaluation is within an unsupervised lexical expansion scenario applied to a text categorization data set (Section 6.1). 
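The Dice-based filter of Section 5, which appears in these evaluations as the '+ Dice filter' configuration, can be sketched as follows; the count function and the article counts stand in for precomputed Wikipedia statistics and are assumptions, while the 0.1 default follows the threshold used in the categorization experiments below:

```python
def dice_score(count, lhs, rhs, np_rhs=None):
    """Adjusted Dice coefficient for a rule LHS => RHS.
    count(*terms) returns the number of Wikipedia articles containing all words
    of every given term (a caller-supplied function, assumed precomputed).
    np_rhs: the complete noun phrase of which RHS is a part, when applicable."""
    co = count(lhs, rhs)
    if np_rhs is not None:
        co -= count(lhs, np_rhs)      # ignore co-occurrences with the full NP
    return 2.0 * co / (count(lhs) + count(rhs))

def filter_rules(rules, count, threshold=0.1):
    """Keep rules whose (adjusted) Dice score exceeds the threshold."""
    return [(lhs, rhs) for lhs, rhs, np_rhs in rules
            if dice_score(count, lhs, rhs, np_rhs) > threshold]

# Hypothetical article counts illustrating why *aerial tramway => car is dropped:
counts = {('aerial tramway',): 40, ('car',): 500000,
          ('aerial tramway', 'car'): 38, ('aerial tramway', 'cable car'): 36}
count = lambda *terms: counts.get(tuple(sorted(terms)), 0)
print(filter_rules([('aerial tramway', 'car', 'cable car')], count))   # []
```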
Additionally, we evaluate the utility of our rule base as a lexical resource for recognizing textual entailment (Section 6.2). 6.1 Unsupervised Text Categorization Our categorization setting resembles typical query expansion in information retrieval (IR), where the category name is considered as the query. The advantage of using a text categorization test set is that it includes exhaustive annotation for all documents. Typical IR datasets, on the other hand, are partially annotated through a pooling procedure. Thus, some of our valid lexical expansions might retrieve non-annotated documents that were missed by the previously pooled systems. 6.1.1 Experimental Setting Our categorization experiment follows a typical keywords-based text categorization scheme (McCallum and Nigam, 1999; Liu et al., 2004). Taking a lexical reference perspective, we assume that the characteristic expansion terms for a category should refer to the term (or terms) denoting the category name. Accordingly, we construct the category’s feature vector by taking first the category name itself, and then expanding it with all lefthand sides of lexical reference rules whose righthand side is the category name. For example, the category “Cars” is expanded by rules such as Ferrari F50 ⇒car. During classification cosine similarity is measured between the feature vector of the classified document and the expanded vectors of all categories. The document is assigned to the category which yields the highest similarity score, following a single-class classification approach (Liu et al., 2004). 6Magic was the United States codename for intelligence derived from cryptanalysis during World War II. 455 Rule Base R P F1 Baselines: No Expansion 0.19 0.54 0.28 WikiBL 0.19 0.53 0.28 Snow400K 0.19 0.54 0.28 Lin 0.25 0.39 0.30 WordNet 0.30 0.47 0.37 Extraction Methods from Wikipedia: Redirect + Be-Comp 0.22 0.55 0.31 All rules 0.31 0.38 0.34 All rules + Dice filter 0.31 0.49 0.38 Union: WordNet + WikiAll rules+Dice 0.35 0.47 0.40 Table 5: Results of different rule bases for 20 newsgroups category name expansion It should be noted that keyword-based text categorization systems employ various additional steps, such as bootstrapping, which generalize to multi-class settings and further improve performance. Our basic implementation suffices to evaluate comparatively the direct impact of different expansion resources on the initial classification. For evaluation we used the test set of the “bydate” version of the 20-News Groups collection,7 which contains 18,846 documents partitioned (nearly) evenly over the 20 categories8. 6.1.2 Baselines Results We compare the quality of our rule base expansions to 5 baselines (Table 5). The first avoids any expansion, classifying documents based on cosine similarity with category names only. As expected, it yields relatively high precision but low recall, indicating the need for lexical expansion. The second baseline is our implementation of the relevant part of the Wikipedia extraction in (Kazama and Torisawa, 2007), taking the first noun after a be verb in the definition sentence, denoted as WikiBL. This baseline does not improve performance at all over no expansion. The next two baselines employ state-of-the-art lexical resources. One uses Snow’s extension to WordNet which was mentioned earlier. This resource did not yield a noticeable improvement, ei7www.ai.mit.edu/people/jrennie/20Newsgroups. 
8The keywords used as category names are: atheism; graphic; microsoft windows; ibm,pc,hardware; mac,hardware; x11,x-windows; sale; car; motorcycle; baseball; hockey; cryptography; electronics; medicine; outer space; christian(noun & adj); gun; mideast,middle east; politics; religion ther over the No Expansion baseline or over WordNet when joined with its expansions. The second uses Lin dependency similarity, a syntacticdependency based distributional word similarity resource described in (Lin, 1998a)9. We used various thresholds on the length of the expansion list derived from this resource. The best result, reported here, provides only a minor F1 improvement over No Expansion, with modest recall increase and significant precision drop, as can be expected from such distributional method. The last baseline uses WordNet for expansion. First we expand all the senses of each category name by their derivations and synonyms. Each obtained term is then expanded by its hyponyms, or by its meronyms if it has no hyponyms. Finally, the results are further expanded by their derivations and synonyms.10 WordNet expansions improve substantially both Recall and F1 relative to No Expansion, while decreasing precision. 6.1.3 Wikipedia Results We then used for expansion different subsets of our rule base, producing alternative recallprecision tradeoffs. Table 5 presents the most interesting results. Using any subset of the rules yields better performance than any of the other automatically constructed baselines (Lin, Snow and WikiBL). Utilizing the most precise extraction methods of Redirect and Be-Comp yields the highest precision, comparable to No Expansion, but just a small recall increase. Using the entire rule base yields the highest recall, while filtering rules by the Dice coefficient (with 0.1 threshold) substantially increases precision without harming recall. With this configuration our automaticallyconstructed resource achieves comparable performance to the manually built WordNet. Finally, since a dictionary and an encyclopedia are complementary in nature, we applied the union of WordNet and the filtered Wikipedia expansions. This configuration yields the best results: it maintains WordNet’s precision and adds nearly 50% to the recall increase of WordNet over No Expansion, indicating the substantial marginal contribution of Wikipedia. Furthermore, with the fast growth of Wikipedia the recall of our resource is expected to increase while maintaining its precision. 9Downloaded from www.cs.ualberta.ca/lindek/demos.htm 10We also tried expanding by the entire hyponym hierarchy and considering only the first sense of each synset, but the method described above achieved the best performance. 456 Category Name Expanding Terms Politics opposition, coalition, whip(a) Cryptography adversary, cryptosystem, key Mac PowerBook, Radius(b), Grab(c) Religion heaven, creation, belief, missionary Medicine doctor, physician, treatment, clinical Computer Graphics radiosity(d), rendering, siggraph(e) Table 6: Some Wikipedia rules not in WordNet, which contributed to text categorization. (a) a legislator who enforce leadership desire (b) a hardware firm specializing in Macintosh equipment (c) a Macintosh screen capture software (d) an illumination algorithm (e) a computer graphics conference Configuration Accuracy Accuracy Drop WordNet + Wikipedia 60.0 % Without WordNet 57.7 % 2.3 % Without Wikipedia 58.9 % 1.1 % Table 7: RTE accuracy results for ablation tests. 
Table 6 illustrates few examples of useful rules that were found in Wikipedia but not in WordNet. We conjecture that in other application settings the rules extracted from Wikipedia might show even greater marginal contribution, particularly in specialized domains not covered well by WordNet. Another advantage of a resource based on Wikipedia is that it is available in many more languages than WordNet. 6.2 Recognizing Textual Entailment (RTE) As a second application-oriented evaluation we measured the contributions of our (filtered) Wikipedia resource and WordNet to RTE inference (Giampiccolo et al., 2007). To that end, we incorporated both resources within a typical basic RTE system architecture (Bar-Haim et al., 2008). This system determines whether a text entails another sentence based on various matching criteria that detect syntactic, logical and lexical correspondences (or mismatches). Most relevant for our evaluation, lexical matches are detected when a Wikipedia rule’s LHS appears in the text and its RHS in the hypothesis, or similarly when pairs of WordNet synonyms, hyponyms-hypernyms and derivations appear across the text and hypothesis. The system’s weights were trained on the development set of RTE-3 and tested on RTE-4 (which included this year only a test set). To measure the marginal contribution of the two resources we performed ablation tests, comparing the accuracy of the full system to that achieved when removing either resource. Table 7 presents the results, which are similar in nature to those obtained for text categorization. Wikipedia obtained a marginal contribution of 1.1%, about half of the analogous contribution of WordNet’s manuallyconstructed information. We note that for current RTE technology it is very typical to gain just a few percents in accuracy thanks to external knowledge resources, while individual resources usually contribute around 0.5–2% (Iftene and BalahurDobrescu, 2007; Dinu and Wang, 2009). Some Wikipedia rules not in WordNet which contributed to RTE inference are Jurassic Park ⇒Michael Crichton, GCC ⇒Gulf Cooperation Council. 7 Conclusions and Future Work We presented construction of a large-scale resource of lexical reference rules, as useful in applied lexical inference. Extensive rule-level analysis showed that different recall-precision tradeoffs can be obtained by utilizing different extraction methods. It also identified major reasons for errors, pointing at potential future improvements. We further suggested a filtering method which significantly improved performance. Even though the resource was constructed by quite simple extraction methods, it was proven to be beneficial within two different application setting. While being an automatically built resource, extracted from a knowledge-base created for human consumption, it showed comparable performance to WordNet, which was manually created for computational purposes. Most importantly, it also provides complementary knowledge to WordNet, with unique lexical reference rules. Future research is needed to improve resource’s precision, especially for the All-N method. As a first step, we investigated a novel unsupervised score for rules extracted from definition sentences. We also intend to consider the rule base as a directed graph and exploit the graph structure for further rule extraction and validation. Acknowledgments The authors would like to thank Idan Szpektor for valuable advices. 
This work was partially supported by the NEGEV project (www.negevinitiative.org), the PASCAL-2 Network of Excellence of the European Community FP7-ICT-20071-216886 and by the Israel Science Foundation grant 1112/08. 457 References Roy Bar-Haim, Jonathan Berant, Ido Dagan, Iddo Greental, Shachar Mirkin, Eyal Shnarch, and Idan Szpektor. 2008. Efficient semantic deduction and approximate matching over compact parse forests. In Proceedings of TAC. Martin S. Chodorow, Roy J. Byrd, and George E. Heidorn. 1985. Extracting semantic hierarchies from a large on-line dictionary. In Proceedings of ACL. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Lecture Notes in Computer Science, volume 3944, pages 177–190. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391– 407. Georgiana Dinu and Rui Wang. 2009. Inference rules for recognizing textual entailment. In Proceedings of the IWCS. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press. Charles J. Fillmore, Collin F. Baker, and Hiroaki Sato. 2002. Seeing arguments through transparent structures. In Proceedings of LREC. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipediabased explicit semantic analysis. In Proceedings of IJCAI. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of ACL-WTEP Workshop. Oren Glickman, Eyal Shnarch, and Ido Dagan. 2006. Lexical reference: a semantic matching subtask. In Proceedings of EMNLP. Ralph Grishman, Lynette Hirschman, and Ngo Thanh Nhan. 1986. Discovery procedures for sublanguage selectional patterns: Initial experiments. Computational Linguistics, 12(3):205–215. Marti Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of COLING. Nancy Ide and V´eronis Jean. 1993. Extracting knowledge bases from machine-readable dictionaries: Have we wasted our time? In Proceedings of KB & KS Workshop. Adrian Iftene and Alexandra Balahur-Dobrescu. 2007. Hypothesis transformation and semantic variability rules used in recognizing textual entailment. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. Jun’ichi Kazama and Kentaro Torisawa. 2007. Exploiting Wikipedia as external knowledge for named entity recognition. In Proceedings of EMNLPCoNLL. J. Richard Landis and Gary G. Koch. 1997. The measurements of observer agreement for categorical data. In Biometrics, pages 33:159–174. Dekang Lin. 1998a. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL. Dekang Lin. 1998b. Dependency-based evaluation of MINIPAR. In Proceedings of the Workshop on Evaluation of Parsing Systems at LREC. Bing Liu, Xiaoli Li, Wee Sun Lee, and Philip S. Yu. 2004. Text classification by labeling words. In Proceedings of AAAI. Andrew McCallum and Kamal Nigam. 1999. Text classification by bootstrapping with keywords, EM and shrinkage. In Proceedings of ACL Workshop for unsupervised Learning in NLP. Dan Moldovan and Vasile Rus. 2001. Logic form transformation of wordnet and its applicability to question answering. In Proceedings of ACL. Simone P. Ponzetto and Michael Strube. 2007. Deriving a large scale taxonomy from wikipedia. In Proceedings of AAAI. 
Reinhard Rapp. 2002. The computation of word associations: comparing syntagmatic and paradigmatic approaches. In Proceedings of COLING. Gerda Ruge. 1992. Experiment on linguistically-based term associations. Information Processing & Management, 28(3):317–332. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In NIPS. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of COLING-ACL. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge - unifying wordnet and wikipedia. In Proceedings of WWW. Antonio Toral and Rafael Mu˜noz. 2007. A proposal to automatically build and maintain gazetteers for named entity recognition by using wikipedia. In Proceedings of NAACL/HLT. Yorick A. Wilks, Brian M. Slator, and Louise M. Guthrie. 1996. Electric words: dictionaries, computers, and meanings. MIT Press, Cambridge, MA, USA. Torsten Zesch, Iryna Gurevych, and Max M¨uhlh¨auser. 2007. Analyzing and accessing wikipedia as a lexical semantic resource. In Data Structures for Linguistic Resources and Applications, pages 197–205. 458
2009
51
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 459–467, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Employing Topic Models for Pattern-based Semantic Class Discovery Huibin Zhang1* Mingjie Zhu2* Shuming Shi3 Ji-Rong Wen3 1Nankai University 2University of Science and Technology of China 3Microsoft Research Asia {v-huibzh, v-mingjz, shumings, jrwen}@microsoft.com Abstract A semantic class is a collection of items (words or phrases) which have semantically peer or sibling relationship. This paper studies the employment of topic models to automatically construct semantic classes, taking as the source data a collection of raw semantic classes (RASCs), which were extracted by applying predefined patterns to web pages. The primary requirement (and challenge) here is dealing with multi-membership: An item may belong to multiple semantic classes; and we need to discover as many as possible the different semantic classes the item belongs to. To adopt topic models, we treat RASCs as “documents”, items as “words”, and the final semantic classes as “topics”. Appropriate preprocessing and postprocessing are performed to improve results quality, to reduce computation cost, and to tackle the fixed-k constraint of a typical topic model. Experiments conducted on 40 million web pages show that our approach could yield better results than alternative approaches. 1 Introduction Semantic class construction (Lin and Pantel, 2001; Pantel and Lin, 2002; Pasca, 2004; Shinzato and Torisawa, 2005; Ohshima et al., 2006) tries to discover the peer or sibling relationship among terms or phrases by organizing them into semantic classes. For example, {red, white, black…} is a semantic class consisting of color instances. A popular way for semantic class discovery is pattern-based approach, where predefined patterns (Table 1) are applied to a  This work was performed when the authors were interns at Microsoft Research Asia collection of web pages or an online web search engine to produce some raw semantic classes (abbreviated as RASCs, Table 2). RASCs cannot be treated as the ultimate semantic classes, because they are typically noisy and incomplete, as shown in Table 2. In addition, the information of one real semantic class may be distributed in lots of RASCs (R2 and R3 in Table 2). Type Pattern SENT NP {, NP}*{,} (and|or) {other} NP TAG <UL> <LI>item</LI> … <LI>item</LI> </UL> TAG <SELECT> <OPTION>item…<OPTION>item </SELECT> * SENT: Sentence structure patterns; TAG: HTML Tag patterns Table 1. Sample patterns R1: {gold, silver, copper, coal, iron, uranium} R2: {red, yellow, color, gold, silver, copper} R3: {red, green, blue, yellow} R4: {HTML, Text, PDF, MS Word, Any file type} R5: {Today, Tomorrow, Wednesday, Thursday, Friday, Saturday, Sunday} R6: {Bush, Iraq, Photos, USA, War} Table 2. Sample raw semantic classes (RASCs) This paper aims to discover high-quality semantic classes from a large collection of noisy RASCs. The primary requirement (and challenge) here is to deal with multi-membership, i.e., one item may belong to multiple different semantic classes. For example, the term “Lincoln” can simultaneously represent a person, a place, or a car brand name. Multi-membership is more popular than at a first glance, because quite a lot of English common words have also been borrowed as company names, places, or product names. 
For a given item (as a query) which belongs to multiple semantic classes, we intend to return the semantic classes separately, rather than mixing all their items together. Existing pattern-based approaches only provide very limited support to multi-membership. For example, RASCs with the same labels (or hypernyms) are merged in (Pasca, 2004) to gen459 erate the ultimate semantic classes. This is problematic, because RASCs may not have (accurate) hypernyms with them. In this paper, we propose to use topic models to address the problem. In some topic models, a document is modeled as a mixture of hidden topics. The words of a document are generated according to the word distribution over the topics corresponding to the document (see Section 2 for details). Given a corpus, the latent topics can be obtained by a parameter estimation procedure. Topic modeling provides a formal and convenient way of dealing with multi-membership, which is our primary motivation of adopting topic models here. To employ topic models, we treat RASCs as “documents”, items as “words”, and the final semantic classes as “topics”. There are, however, several challenges in applying topic models to our problem. To begin with, the computation is intractable for processing a large collection of RASCs (our dataset for experiments contains 2.7 million unique RASCs extracted from 40 million web pages). Second, typical topic models require the number of topics (k) to be given. But it lacks an easy way of acquiring the ideal number of semantic classes from the source RASC collection. For the first challenge, we choose to apply topic models to the RASCs containing an item q, rather than the whole RASC collection. In addition, we also perform some preprocessing operations in which some items are discarded to further improve efficiency. For the second challenge, considering that most items only belong to a small number of semantic classes, we fix (for all items q) a topic number which is slightly larger than the number of classes an item could belong to. And then a postprocessing operation is performed to merge the results of topic models to generate the ultimate semantic classes. Experimental results show that, our topic model approach is able to generate higher-quality semantic classes than popular clustering algorithms (e.g., K-Medoids and DBSCAN). We make two contributions in the paper: On one hand, we find an effective way of constructing high-quality semantic classes in the patternbased category which deals with multimembership. On the other hand, we demonstrate, for the first time, that topic modeling can be utilized to help mining the peer relationship among words. In contrast, the general related relationship between words is extracted in existing topic modeling applications. Thus we expand the application scope of topic modeling. 2 Topic Models In this section we briefly introduce the two widely used topic models which are adopted in our paper. Both of them model a document as a mixture of hidden topics. The words of every document are assumed to be generated via a generative probability process. The parameters of the model are estimated from a training process over a given corpus, by maximizing the likelihood of generating the corpus. Then the model can be utilized to inference a new document. pLSI: The probabilistic Latent Semantic Indexing Model (pLSI) was introduced in Hofmann (1999), arose from Latent Semantic Indexing (Deerwester et al., 1990). 
The following process illustrates how to generate a document d in pLSI: 1. Pick a topic mixture distribution 𝑝(∙|𝑑). 2. For each word wi in d a. Pick a latent topic z with the probability 𝑝(𝑧|𝑑) for wi b. Generate wi with probability 𝑝(𝑤𝑖|𝑧) So with k latent topics, the likelihood of generating a document d is 𝑝(𝑑) = 𝑝 𝑤𝑖 𝑧 𝑝(𝑧|𝑑) 𝑧 𝑖 (2.1) LDA (Blei et al., 2003): In LDA, the topic mixture is drawn from a conjugate Dirichlet prior that remains the same for all documents (Figure 1). The generative process for each document in the corpus is, 1. Choose document length N from a Poisson distribution Poisson(𝜉). 2. Choose 𝜃 from a Dirichlet distribution with parameter α. 3. For each of the N words wi. a. Choose a topic z from a Multinomial distribution with parameter 𝜃. b. Pick a word wi from 𝑝 𝑤𝑖 𝑧, 𝛽 . So the likelihood of generating a document is 𝑝(𝑑) = 𝑝(𝜃|𝛼) 𝜃 𝑝(𝑧|𝜃)𝑝 𝑤𝑖 𝑧, 𝛽 𝑑𝜃 𝑧 𝑖 (2.2) Figure 1. Graphical model representation of LDA, from Blei et al. (2003) w θ z α β N M 460 3 Our Approach The source data of our approach is a collection (denoted as CR) of RASCs extracted via applying patterns to a large collection of web pages. Given an item as an input query, the output of our approach is one or multiple semantic classes for the item. To be applicable in real-world dataset, our approach needs to be able to process at least millions of RASCs. 3.1 Main Idea As reviewed in Section 2, topic modeling provides a formal and convenient way of grouping documents and words to topics. In order to apply topic models to our problem, we map RASCs to documents, items to words, and treat the output topics yielded from topic modeling as our semantic classes (Table 3). The motivation of utilizing topic modeling to solve our problem and building the above mapping comes from the following observations. 1) In our problem, one item may belong to multiple semantic classes; similarly in topic modeling, a word can appear in multiple topics. 2) We observe from our source data that some RASCs are comprised of items in multiple semantic classes. And at the same time, one document could be related to multiple topics in some topic models (e.g., pLSI and LDA). Topic modeling Semantic class construction word item (word or phrase) document RASC topic semantic class Table 3. The mapping from the concepts in topic modeling to those in semantic class construction Due to the above observations, we hope topic modeling can be employed to construct semantic classes from RASCs, just as it has been used in assigning documents and words to topics. There are some critical challenges and issues which should be properly addressed when topic models are adopted here. Efficiency: Our RASC collection CR contains about 2.7 million unique RASCs and 26 million (1 million unique) items. Building topic models directly for such a large dataset may be computationally intractable. To overcome this challenge, we choose to apply topic models to the RASCs containing a specific item rather than the whole RASC collection. Please keep in mind that our goal in this paper is to construct the semantic classes for an item when the item is given as a query. For one item q, we denote CR(q) to be all the RASCs in CR containing the item. We believe building a topic model over CR(q) is much more effective because it contains significantly fewer “documents”, “words”, and “topics”. To further improve efficiency, we also perform preprocessing (refer to Section 3.4 for details) before building topic models for CR(q), where some lowfrequency items are removed. 
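As a rough illustration of this per-item strategy, the sketch below assembles CR(q) by keeping only the RASCs that contain the query item q and then applying the frequency-based preprocessing mentioned above: items occurring in fewer than h RASCs are dropped, and RASCs left with fewer than two items are discarded. The names (rascs, build_cr_q), the choice to count item frequency within CR(q), and the decision to always retain q itself are illustrative assumptions of this sketch rather than part of the published system; the default h=4 anticipates the threshold later reported to work well in the experiments.

from collections import Counter

def build_cr_q(rascs, q, h=4):
    # rascs: an iterable of item sets (the raw semantic classes).
    cr_q = [r for r in rascs if q in r]
    # Frequency of an item = number of RASCs in CR(q) containing it.
    freq = Counter(item for r in cr_q for item in r)
    pruned = []
    for r in cr_q:
        kept = {item for item in r if item == q or freq[item] >= h}
        if len(kept) >= 2:        # drop RASCs left with fewer than two items
            pruned.append(kept)
    return pruned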
Determine the number of topics: Most topic models require the number of topics to be known beforehand1. However, it is not an easy task to automatically determine the exact number of semantic classes an item q should belong to. Actually the number may vary for different q. Our solution is to set (for all items q) the topic number to be a fixed value (k=5 in our experiments) which is slightly larger than the number of semantic classes most items could belong to. Then we perform postprocessing for the k topics to produce the final properly semantic classes. In summary, our approach contains three phases (Figure 2). We build topic models for every CR(q), rather than the whole collection CR. A preprocessing phase and a postprocessing phase are added before and after the topic modeling phase to improve efficiency and to overcome the fixed-k problem. The details of each phase are presented in the following subsections. Figure 2. Main phases of our approach 3.2 Adopting Topic Models For an item q, topic modeling is adopted to process the RASCs in CR(q) to generate k semantic classes. Here we use LDA as an example to 1 Although there is study of non-parametric Bayesian models (Li et al., 2007) which need no prior knowledge of topic number, the computational complexity seems to exceed our efficiency requirement and we shall leave this to future work. R580 R1 R2 CR Item q Preprocessing 𝑅400 ∗ 𝑅1 ∗ 𝑅2 ∗ T5 T1 T2 C3 C1 C2 Topic modeling Postprocessing T3 T4 CR(q) 461 illustrate the process. The case of other generative topic models (e.g., pLSI) is very similar. According to the assumption of LDA and our concept mapping in Table 3, a RASC (“document”) is viewed as a mixture of hidden semantic classes (“topics”). The generative process for a RASC R in the “corpus” CR(q) is as follows, 1) Choose a RASC size (i.e., the number of items in R): NR ~ Poisson(𝜉). 2) Choose a k-dimensional vector 𝜃𝑅 from a Dirichlet distribution with parameter 𝛼. 3) For each of the NR items an: a) Pick a semantic class 𝑧𝑛 from a multinomial distribution with parameter 𝜃𝑅. b) Pick an item an from 𝑝(𝑎𝑛|𝑧𝑛,𝛽) , where the item probabilities are parameterized by the matrix 𝛽. There are three parameters in the model: 𝜉 (a scalar), 𝛼 (a k-dimensional vector), and 𝛽 (a 𝑘× 𝑉 matrix where V is the number of distinct items in CR(q)). The parameter values can be obtained from a training (or called parameter estimation) process over CR(q), by maximizing the likelihood of generating the corpus. Once 𝛽 is determined, we are able to compute 𝑝(𝑎|𝑧, 𝛽), the probability of item a belonging to semantic class z. Therefore we can determine the members of a semantic class z by selecting those items with high 𝑝 𝑎 𝑧, 𝛽 values. The number of topics k is assumed known and fixed in LDA. As has been discussed in Section 3.1, we set a constant k value for all different CR(q). And we rely on the postprocessing phase to merge the semantic classes produced by the topic model to generate the ultimate semantic classes. When topic modeling is used in document classification, an inference procedure is required to determine the topics for a new document. Please note that inference is not needed in our problem. One natural question here is: Considering that in most topic modeling applications, the words within a resultant topic are typically semantically related but may not be in peer relationship, then what is the intuition that the resultant topics here are semantic classes rather than lists of generally related words? 
The magic lies in the “documents” we used in employing topic models. Words co-occurred in real documents tend to be semantically related; while items co-occurred in RASCs tend to be peers. Experimental results show that most items in the same output semantic class have peer relationship. It might be noteworthy to mention the exchangeability or “bag-of-words” assumption in most topic models. Although the order of words in a document may be important, standard topic models neglect the order for simplicity and other reasons2. The order of items in a RASC is clearly much weaker than the order of words in an ordinary document. In some sense, topic models are more suitable to be used here than in processing an ordinary document corpus. 3.3 Preprocessing and Postprocessing Preprocessing is applied to CR(q) before we build topic models for it. In this phase, we discard from all RASCs the items with frequency (i.e., the number of RASCs containing the item) less than a threshold h. A RASC itself is discarded from CR(q) if it contains less than two items after the item-removal operations. We choose to remove low-frequency items, because we found that low-frequency items are seldom important members of any semantic class for q. So the goal is to reduce the topic model training time (by reducing the training data) without sacrificing results quality too much. In the experiments section, we compare the approaches with and without preprocessing in terms of results quality and efficiency. Interestingly, experimental results show that, for some small threshold values, the results quality becomes higher after preprocessing is performed. We will give more discussions in Section 4. In the postprocessing phase, the output semantic classes (“topics”) of topic modeling are merged to generate the ultimate semantic classes. As indicated in Sections 3.1 and 3.2, we fix the number of topics (k=5) for different corpus CR(q) in employing topic models. For most items q, this is a larger value than the real number of semantic classes the item belongs to. As a result, one real semantic class may be divided into multiple topics. Therefore one core operation in this phase is to merge those topics into one semantic class. In addition, the items in each semantic class need to be properly ordered. Thus main operations include, 1) Merge semantic classes 2) Sort the items in each semantic class Now we illustrate how to perform the operations. Merge semantic classes: The merge process is performed by repeatedly calculating the simi 2 There are topic model extensions considering word order in documents, such as Griffiths et al. (2005). 462 larity between two semantic classes and merging the two ones with the highest similarity until the similarity is under a threshold. One simple and straightforward similarity measure is the Jaccard coefficient, 𝑠𝑖𝑚 𝐶1, 𝐶2 = 𝐶1 ∩𝐶2 𝐶1 ∪𝐶2 (3.1) where 𝐶1 ∩𝐶2 and 𝐶1 ∪𝐶2 are respectively the intersection and union of semantic classes C1 and C2. This formula might be over-simple, because the similarity between two different items is not exploited. So we propose the following measure, 𝑠𝑖𝑚 𝐶1, 𝐶2 = 𝑠𝑖𝑚(𝑎, 𝑏) 𝑏∈𝐶2 𝑎∈𝐶1 𝐶1 ∙ 𝐶2 (3.2) where |C| is the number of items in semantic class C, and sim(a,b) is the similarity between items a and b, which will be discussed shortly. In Section 4, we compare the performance of the above two formulas by experiments. Sort items: We assign an importance score to every item in a semantic class and sort them according to the importance scores. 
Intuitively, an item should get a high rank if the average similarity between the item and the other items in the semantic class is high, and if it has high similarity to the query item q. Thus we calculate the importance of item a in a semantic class C as follows, 𝑔 𝑎|𝐶 = 𝜆∙sim(a,C)+(1-𝜆) ∙sim(a,q) (3.3) where 𝜆 is a parameter in [0,1], sim(a,q) is the similarity between a and the query item q, and sim(a,C) is the similarity between a and C, calculated as, 𝑠𝑖𝑚 𝑎, 𝐶 = 𝑠𝑖𝑚(𝑎, 𝑏) 𝑏∈𝐶 𝐶 (3.4) Item similarity calculation: Formulas 3.2, 3.3, and 3.4 rely on the calculation of the similarity between two items. One simple way of estimating item similarity is to count the number of RASCs containing both of them. We extend such an idea by distinguishing the reliability of different patterns and punishing term similarity contributions from the same site. The resultant similarity formula is, 𝑠𝑖𝑚(𝑎, 𝑏) = log(1 + 𝑤(𝑃(𝐶𝑖,𝑗)) 𝑘𝑖 𝑗=1 ) 𝑚 𝑖=1 (3.5) where Ci,j is a RASC containing both a and b, P(Ci,j) is the pattern via which the RASC is extracted, and w(P) is the weight of pattern P. Assume all these RASCs belong to m sites with Ci,j extracted from a page in site i, and ki being the number of RASCs corresponding to site i. To determine the weight of every type of pattern, we randomly selected 50 RASCs for each pattern and labeled their quality. The weight of each kind of pattern is then determined by the average quality of all labeled RASCs corresponding to it. The efficiency of postprocessing is not a problem, because the time cost of postprocessing is much less than that of the topic modeling phase. 3.4 Discussion 3.4.1 Efficiency of processing popular items Our approach receives a query item q from users and returns the semantic classes containing the query. The maximal query processing time should not be larger than several seconds, because users would not like to wait more time. Although the average query processing time of our approach is much shorter than 1 second (see Table 4 in Section 4), it takes several minutes to process a popular item such as “Washington”, because it is contained in a lot of RASCs. In order to reduce the maximal online processing time, our solution is offline processing popular items and storing the resultant semantic classes on disk. The time cost of offline processing is feasible, because we spent about 15 hours on a 4core machine to complete the offline processing for all the items in our RASC collection. 3.4.2 Alternative approaches One may be able to easily think of other approaches to address our problem. Here we discuss some alternative approaches which are treated as our baseline in experiments. RASC clustering: Given a query item q, run a clustering algorithm over CR(q) and merge all RASCs in the same cluster as one semantic class. Formula 3.1 or 3.2 can be used to compute the similarity between RASCs in performing clustering. We try two clustering algorithms in experiments: K-Medoids and DBSCAN. Please note kmeans cannot be utilized here because coordinates are not available for RASCs. One drawback of RASC clustering is that it cannot deal with the case of one RASC containing the items from multiple semantic classes. Item clustering: By Formula 3.5, we are able to construct an item graph GI to record the neighbors (in terms of similarity) of each item. Given a query item q, we first retrieve its neighbors from GI, and then run a clustering algorithm over the neighbors. 
As in the case of RASC clustering, we try two clustering algorithms in experiments: K-Medoids and DBSCAN. The primary disadvantage of item clustering is that it cannot assign an item (except for the query item q) to 463 multiple semantic classes. As a result, when we input “gold” as the query, the item “silver” can only be assigned to one semantic class, although the term can simultaneously represents a color and a chemical element. 4 Experiments 4.1 Experimental Setup Datasets: By using the Open Directory Project (ODP3) URLs as seeds, we crawled about 40 million English web pages in a breadth-first way. RASCs are extracted via applying a list of sentence structure patterns and HTML tag patterns (see Table 1 for some examples). Our RASC collection CR contains about 2.7 million unique RASCs and 1 million distinct items. Query set and labeling: We have volunteers to try Google Sets4, record their queries being used, and select overall 55 queries to form our query set. For each query, the results of all approaches are mixed together and labeled by following two steps. In the first step, the standard (or ideal) semantic classes (SSCs) for the query are manually determined. For example, the ideal semantic classes for item “Georgia” may include Countries, and U.S. states. In the second step, each item is assigned a label of “Good”, “Fair”, or “Bad” with respect to each SSC. For example, “silver” is labeled “Good” with respect to “colors” and “chemical elements”. We adopt metric MnDCG (Section 4.2) as our evaluation metric. Approaches for comparison: We compare our approach with the alternative approaches discussed in Section 3.4.2. LDA: Our approach with LDA as the topic model. The implementation of LDA is based on Blei’s code of variational EM for LDA5. pLSI: Our approach with pLSI as the topic model. The implementation of pLSI is based on Schein, et al. (2002). KMedoids-RASC: The RASC clustering approach illustrated in Section 3.4.2, with the K-Medoids clustering algorithm utilized. DBSCAN-RASC: The RASC clustering approach with DBSCAN utilized. KMedoids-Item: The item clustering approach with the K-Medoids utilized. DBSCAN-Item: The item clustering approach with the DBSCAN clustering algorithm utilized. 3 http://www.dmoz.org 4 http://labs.google.com/sets 5 http://www.cs.princeton.edu/~blei/lda-c/ K-Medoids clustering needs to predefine the cluster number k. We fix the k value for all different query item q, as has been done for the topic model approach. For fair comparison, the same postprocessing is made for all the approaches. And the same preprocessing is made for all the approaches except for the item clustering ones (to which the preprocessing is not applicable). 4.2 Evaluation Methodology Each produced semantic class is an ordered list of items. A couple of metrics in the information retrieval (IR) community like Precision@10, MAP (mean average precision), and nDCG (normalized discounted cumulative gain) are available for evaluating a single ranked list of items per query (Croft et al., 2009). Among the metrics, nDCG (Jarvelin and Kekalainen, 2000) can handle our three-level judgments (“Good”, “Fair”, and “Bad”, refer to Section 4.1), 𝑛𝐷𝐶𝐺@𝑘= 𝐺 𝑖 /log(𝑖+ 1) 𝑘 𝑖=1 𝐺∗ 𝑖 /log(𝑖+ 1) 𝑘 𝑖=1 (4.1) where G(i) is the gain value assigned to the i’th item, and G*(i) is the gain value assigned to the i’th item of an ideal (or perfect) ranking list. Here we extend the IR metrics to the evaluation of multiple ordered lists per query. We use nDCG as the basic metric and extend it to MnDCG. 
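For concreteness, the small helper below computes nDCG@k as in Eq. 4.1 over a ranked list of three-level judgments, using the gain values adopted in our experiments (-1, 1, and 2 for "Bad", "Fair", and "Good"); the base of the logarithm cancels in the ratio. It assumes the ideal list is obtained by re-sorting the same judged items by gain, which is one common reading of G*(i); the MnDCG extension of Eqs. 4.2-4.3 is then built on top of this quantity.

from math import log

GAIN = {"Bad": -1.0, "Fair": 1.0, "Good": 2.0}   # gains used in the experiments

def dcg_at_k(labels, k):
    # labels: judgments of the ranked items, in the order produced by the system.
    return sum(GAIN[lab] / log(i + 2) for i, lab in enumerate(labels[:k]))

def ndcg_at_k(labels, k):
    ideal = sorted(labels, key=GAIN.get, reverse=True)   # perfect reordering
    denom = dcg_at_k(ideal, k)
    return dcg_at_k(labels, k) / denom if denom else 0.0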
Assume labelers have determined m SSCs (SSC1~SSCm, refer to Section 4.1) for query q and the weight (or importance) of SSCi is wi. Assume n semantic classes are generated by an approach and n1 of them have corresponding SSCs (i.e., no appropriate SSC can be found for the remaining n-n1 semantic classes). We define the MnDCG score of an approach (with respect to query q) as, 𝑀𝑛𝐷𝐶𝐺 𝑞 = 𝑛1 𝑛∙ 𝑤𝑖∙𝑆𝑐𝑜𝑟𝑒(SSC𝑖) 𝑚 i=1 𝑤𝑖 m i=1 (4.2) where 𝑆𝑐𝑜𝑟𝑒 𝑆𝑆𝐶𝑖 = 0 𝑖𝑓 𝑘𝑖= 0 1 𝑘𝑖 max 𝑗∈[1, 𝑘𝑖](𝑛𝐷𝐶𝐺 𝐺𝑖,𝑗 ) 𝑖𝑓 𝑘𝑖≠0 (4.3) In the above formula, nDCG(Gi,j) is the nDCG score of semantic class Gi,j; and ki denotes the number of semantic classes assigned to SSCi. For a list of queries, the MnDCG score of an algorithm is the average of all scores for the queries. The metric is designed to properly deal with the following cases, 464 i). One semantic class is wrongly split into multiple ones: Punished by dividing 𝑘𝑖 in Formula 4.3; ii). A semantic class is too noisy to be assigned to any SSC: Processed by the “n1/n” in Formula 4.2; iii). Fewer semantic classes (than the number of SSCs) are produced: Punished in Formula 4.3 by assigning a zero value. iv). Wrongly merge multiple semantic classes into one: The nDCG score of the merged one will be small because it is computed with respect to only one single SSC. The gain values of nDCG for the three relevance levels (“Bad”, “Fair”, and “Good”) are respectively -1, 1, and 2 in experiments. 4.3 Experimental Results 4.3.1 Overall performance comparison Figure 3 shows the performance comparison between the approaches listed in Section 4.1, using metrics MnDCG@n (n=1…10). Postprocessing is performed for all the approaches, where Formula 3.2 is adopted to compute the similarity between semantic classes. The results show that that the topic modeling approaches produce higher-quality semantic classes than the other approaches. It indicates that the topic mixture assumption of topic modeling can handle the multi-membership problem very well here. Among the alternative approaches, RASC clustering behaves better than item clustering. The reason might be that an item cannot belong to multiple clusters in the two item clustering approaches, while RASC clustering allows this. For the RASC clustering approaches, although one item has the chance to belong to different semantic classes, one RASC can only belong to one semantic class. Figure 3. Quality comparison (MnDCG@n) among approaches (frequency threshold h = 4 in preprocessing; k = 5 in topic models) 4.3.2 Preprocessing experiments Table 4 shows the average query processing time and results quality of the LDA approach, by varying frequency threshold h. Similar results are observed for the pLSI approach. In the table, h=1 means no preprocessing is performed. The average query processing time is calculated over all items in our dataset. As the threshold h increases, the processing time decreases as expected, because the input of topic modeling gets smaller. The second column lists the results quality (measured by MnDCG@10). Interestingly, we get the best results quality when h=4 (i.e., the items with frequency less than 4 are discarded). The reason may be that most low-frequency items are noisy ones. As a result, preprocessing can improve both results quality and processing efficiency; and h=4 seems a good choice in preprocessing for our dataset. h Avg. Query Proc. Time (seconds) Quality (MnDCG@10) 1 0.414 0.281 2 0.375 0.294 3 0.320 0.322 4 0.268 0.331 5 0.232 0.328 6 0.210 0.315 7 0.197 0.315 8 0.184 0.313 9 0.173 0.288 Table 4. 
Time complexity and quality comparison among LDA approaches of different thresholds 4.3.3 Postprocessing experiments Figure 4. Results quality comparison among topic modeling approaches with and without postprocessing (metric: MnDCG@10) The effect of postprocessing is shown in Figure 4. In the figure, NP means no postprocessing is performed. Sim1 and Sim2 respectively mean Formula 3.1 and Formula 3.2 are used in postprocessing as the similarity measure between 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 1 2 3 4 5 6 7 8 9 10 pLSI LDA KMedoids-RASC DBSCAN-RASC KMedoids-Item DBSCAN-Item n 0.27 0.28 0.29 0.3 0.31 0.32 0.33 0.34 LDA pLSI NP Sim1 Sim2 465 semantic classes. The same preprocessing (h=4) is performed in generating the data. It can be seen that postprocessing improves results quality. Sim2 achieves more performance improvement than Sim1, which demonstrates the effectiveness of the similarity measure in Formula 3.2. 4.3.4 Sample results Table 5 shows the semantic classes generated by our LDA approach for some sample queries in which the bad classes or bad members are highlighted (to save space, 10 items are listed here, and the query itself is omitted in the resultant semantic classes). Query Semantic Classes apple C1: ibm, microsoft, sony, dell, toshiba, samsung, panasonic, canon, nec, sharp … C2: peach, strawberry, cherry, orange, banana, lemon, pineapple, raspberry, pear, grape … gold C1: silver, copper, platinum, zinc, lead, iron, nickel, tin, aluminum, manganese … C2: silver, red, black, white, blue, purple, orange, pink, brown, navy … C3: silver, platinum, earrings, diamonds, rings, bracelets, necklaces, pendants, jewelry, watches … C4: silver, home, money, business, metal, furniture, shoes, gypsum, hematite, fluorite … lincoln C1: ford, mazda, toyota, dodge, nissan, honda, bmw, chrysler, mitsubishi, audi … C2: bristol, manchester, birmingham, leeds, london, cardiff, nottingham, newcastle, sheffield, southampton … C3: jefferson, jackson, washington, madison, franklin, sacramento, new york city, monroe, Louisville, marion … computer science C1: chemistry, mathematics, physics, biology, psychology, education, history, music, business, economics … Table 5. Semantic classes generated by our approach for some sample queries (topic model = LDA) 5 Related Work Several categories of work are related to ours. The first category is about set expansion (i.e., retrieving one semantic class given one term or a couple of terms). Syntactic context information is used (Hindle, 1990; Ruge, 1992; Lin, 1998) to compute term similarities, based on which similar words to a particular word can directly be returned. Google sets is an online service which, given one to five items, predicts other items in the set. Ghahramani and Heller (2005) introduce a Bayesian Sets algorithm for set expansion. Set expansion is performed by feeding queries to web search engines in Wang and Cohen (2007) and Kozareva (2008). All of the above work only yields one semantic class for a given query. Second, there are pattern-based approaches in the literature which only do limited integration of RASCs (Shinzato and Torisawa, 2004; Shinzato and Torisawa, 2005; Pasca, 2004), as discussed in the introduction section. In Shi et al. (2008), an ad-hoc approach was proposed to discover the multiple semantic classes for one item. The third category is distributional similarity approaches which provide multi-membership support (Harris, 1985; Lin and Pantel, 2001; Pantel and Lin, 2002). 
Among them, the CBC algorithm (Pantel and Lin, 2002) addresses the multi-membership problem. But it relies on term vectors and centroids which are not available in pattern-based approaches. It is therefore not clear whether it can be borrowed to deal with multi-membership here. Among the various applications of topic modeling, maybe the efforts of using topic model for Word Sense Disambiguation (WSD) are most relevant to our work. In Cai et al (2007), LDA is utilized to capture the global context information as the topic features for better performing the WSD task. In Boyd-Graber et al. (2007), Latent Dirichlet with WordNet (LDAWN) is developed for simultaneously disambiguating a corpus and learning the domains in which to consider each word. They do not generate semantic classes. 6 Conclusions We presented an approach that employs topic modeling for semantic class construction. Given an item q, we first retrieve all RASCs containing the item to form a collection CR(q). Then we perform some preprocessing to CR(q) and build a topic model for it. Finally, the output semantic classes of topic modeling are post-processed to generate the final semantic classes. For the CR(q) which contains a lot of RASCs, we perform offline processing according to the above process and store the results on disk, in order to reduce the online query processing time. We also proposed an evaluation methodology for measuring the quality of semantic classes. We show by experiments that our topic modeling approach outperforms the item clustering and RASC clustering approaches. Acknowledgments We wish to acknowledge help from Xiaokang Liu for mining RASCs from web pages, Changliang Wang and Zhongkai Fu for data process. 466 References David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993–1022. Bruce Croft, Donald Metzler, and Trevor Strohman. 2009. Search Engines: Information Retrieval in Practice. Addison Wesley. Jordan Boyd-Graber, David Blei, and Xiaojin Zhu.2007. A topic model for word sense disambiguation. In Proceedings EMNLP-CoNLL 2007, pages 1024–1033, Prague, Czech Republic, June. Association for Computational Linguistics. Jun Fu Cai, Wee Sun Lee, and Yee Whye Teh. 2007. NUS-ML: Improving word sense disambiguation using topic features. In Proceedings of the International Workshop on Semantic Evaluations, volume 4. Scott Deerwester, Susan T. Dumais, GeorgeW. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391–407. Zoubin Ghahramani and Katherine A. Heller. 2005. Bayesian Sets. In Advances in Neural Information Processing Systems (NIPS05). Thomas L. Griffiths, Mark Steyvers, David M. Blei,and Joshua B. Tenenbaum. 2005. Integrating topics and syntax. In Advances in Neural Information Processing Systems 17, pages 537–544. MIT Press Zellig Harris. Distributional Structure. The Philosophy of Linguistics. New York: Oxford University Press. 1985. Donald Hindle. 1990. Noun Classification from Predicate-Argument Structures. In Proceedings of ACL90, pages 268–275. Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR99, pages 50–57, New York, NY, USA. ACM. Kalervo Jarvelin, and Jaana Kekalainen. 2000. IR Evaluation Methods for Retrieving Highly Relevant Documents. 
In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR2000). Zornitsa Kozareva, Ellen Riloff and Eduard Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs, In Proceedings of ACL-08. Wei Li, David M. Blei, and Andrew McCallum. Nonparametric Bayes Pachinko Allocation. In Proceedings of Conference on Uncertainty in Artificial Intelligence (UAI), 2007. Dekang Lin. 1998. Automatic Retrieval and Clustering of Similar Words. In Proceedings of COLINGACL98, pages 768-774. Dekang Lin and Patrick Pantel. 2001. Induction of Semantic Classes from Natural Language Text. In Proceedings of SIGKDD01, pages 317-322. Hiroaki Ohshima, Satoshi Oyama, and Katsumi Tanaka. 2006. Searching coordinate terms with their context from the web. In WISE06, pages 40–47. Patrick Pantel and Dekang Lin. 2002. Discovering Word Senses from Text. In Proceedings of SIGKDD02. Marius Pasca. 2004. Acquisition of Categorized Named Entities for Web Search. In Proc. of 2004 CIKM. Gerda Ruge. 1992. Experiments on LinguisticallyBased Term Associations. In Information Processing & Management, 28(3), pages 317-32. Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar and David M. Pennock. 2002. Methods and metrics for cold-start recommendations. In Proceedings of SIGIR02, pages 253-260. Shuming Shi, Xiaokang Liu and Ji-Rong Wen. 2008. Pattern-based Semantic Class Discovery with Multi-Membership Support. In CIKM2008, pages 1453-1454. Keiji Shinzato and Kentaro Torisawa. 2004. Acquiring Hyponymy Relations from Web Documents. In HLT/NAACL04, pages 73–80. Keiji Shinzato and Kentaro Torisawa. 2005. A Simple WWW-based Method for Semantic Word Class Acquisition. In RANLP05. Richard C. Wang and William W. Cohen. 2007. Langusage-Independent Set Expansion of Named Entities Using the Web. In ICDM2007. 467
2009
52
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 468–476, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Paraphrase Identification as Probabilistic Quasi-Synchronous Recognition Dipanjan Das and Noah A. Smith Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA {dipanjan,nasmith}@cs.cmu.edu Abstract We present a novel approach to deciding whether two sentences hold a paraphrase relationship. We employ a generative model that generates a paraphrase of a given sentence, and we use probabilistic inference to reason about whether two sentences share the paraphrase relationship. The model cleanly incorporates both syntax and lexical semantics using quasi-synchronous dependency grammars (Smith and Eisner, 2006). Furthermore, using a product of experts (Hinton, 2002), we combine the model with a complementary logistic regression model based on state-of-the-art lexical overlap features. We evaluate our models on the task of distinguishing true paraphrase pairs from false ones on a standard corpus, giving competitive state-of-the-art performance. 1 Introduction The problem of modeling paraphrase relationships between natural language utterances (McKeown, 1979) has recently attracted interest. For computational linguists, solving this problem may shed light on how best to model the semantics of sentences. For natural language engineers, the problem bears on information management systems like abstractive summarizers that must measure semantic overlap between sentences (Barzilay and Lee, 2003), question answering modules (Marsi and Krahmer, 2005) and machine translation (Callison-Burch et al., 2006). The paraphrase identification problem asks whether two sentences have essentially the same meaning. Although paraphrase identification is defined in semantic terms, it is usually solved using statistical classifiers based on shallow lexical, n-gram, and syntactic “overlap” features. Such overlap features give the best-published classification accuracy for the paraphrase identification task (Zhang and Patrick, 2005; Finch et al., 2005; Wan et al., 2006; Corley and Mihalcea, 2005, inter alia), but do not explicitly model correspondence structure (or “alignment”) between the parts of two sentences. In this paper, we adopt a model that posits correspondence between the words in the two sentences, defining it in loose syntactic terms: if two sentences are paraphrases, we expect their dependency trees to align closely, though some divergences are also expected, with some more likely than others. Following Smith and Eisner (2006), we adopt the view that the syntactic structure of sentences paraphrasing some sentence s should be “inspired” by the structure of s. Because dependency syntax is still only a crude approximation to semantic structure, we augment the model with a lexical semantics component, based on WordNet (Miller, 1995), that models how words are probabilistically altered in generating a paraphrase. This combination of loose syntax and lexical semantics is similar to the “Jeopardy” model of Wang et al. (2007). This syntactic framework represents a major departure from useful and popular surface similarity features, and the latter are difficult to incorporate into our probabilistic model. We use a product of experts (Hinton, 2002) to bring together a logistic regression classifier built from n-gram overlap features and our syntactic model. 
This combined model leverages complementary strengths of the two approaches, outperforming a strong state-ofthe-art baseline (Wan et al., 2006). This paper is organized as follows. We introduce our probabilistic model in §2. The model makes use of three quasi-synchronous grammar models (Smith and Eisner, 2006, QG, hereafter) as components (one modeling paraphrase, one modeling not-paraphrase, and one a base grammar); these are detailed, along with latent-variable inference and discriminative training algorithms, in §3. We discuss the Microsoft Research Paraphrase Corpus, upon which we conduct experiments, in §4. In §5, we present experiments on paraphrase 468 identification with our model and make comparisons with the existing state-of-the-art. We describe the product of experts and our lexical overlap model, and discuss the results achieved in §6. We relate our approach to prior work (§7) and conclude (§8). 2 Probabilistic Model Since our task is a classification problem, we require our model to provide an estimate of the posterior probability of the relationship (i.e., “paraphrase,” denoted p, or “not paraphrase,” denoted n), given the pair of sentences.1 Here, pQ denotes model probabilities, c is a relationship class (p or n), and s1 and s2 are the two sentences. We choose the class according to: ˆc = argmax c∈{p,n} pQ(c | s1, s2) = argmax c∈{p,n} pQ(c) × pQ(s1, s2 | c) (1) We define the class-conditional probabilities of the two sentences using the following generative story. First, grammar G0 generates a sentence s. Then a class c is chosen, corresponding to a classspecific probabilistic quasi-synchronous grammar Gc. (We will discuss QG in detail in §3. For the present, consider it a specially-defined probabilistic model that generates sentences with a specific property, like “paraphrases s,” when c = p.) Given s, Gc generates the other sentence in the pair, s′. When we observe a pair of sentences s1 and s2 we do not presume to know which came first (i.e., which was s and which was s′). Both orderings are assumed to be equally probable. For class c, pQ(s1, s2 | c) = 0.5 × pQ(s1 | G0) × pQ(s2 | Gc(s1)) + 0.5 × pQ(s2 | G0) × pQ(s1 | Gc(s2))(2) where c can be p or n; Gp(s) is the QG that generates paraphrases for sentence s, while Gn(s) is the QG that generates sentences that are not paraphrases of sentence s. This latter model may seem counter-intuitive: since the vast majority of possible sentences are not paraphrases of s, why is a special grammar required? Our use of a Gn follows from the properties of the corpus currently used for learning, in which the negative examples 1Although we do not explore the idea here, the model could be adapted for other sentence-pair relationships like entailment or contradiction. were selected to have high lexical overlap. We return to this point in §4. 3 QG for Paraphrase Modeling Here, we turn to the models Gp and Gn in detail. 3.1 Background Smith and Eisner (2006) introduced the quasisynchronous grammar formalism. Here, we describe some of its salient aspects. The model arose out of the empirical observation that translated sentences have some isomorphic syntactic structure, but divergences are possible. Therefore, rather than an isomorphic structure over a pair of source and target sentences, the syntactic tree over a target sentence is modeled by a source sentencespecific grammar “inspired” by the source sentence’s tree. This is implemented by associating with each node in the target tree a subset of the nodes in the source tree. 
Since it loosely links the two sentences’ syntactic structures, QG is well suited for problems like word alignment for MT (Smith and Eisner, 2006) and question answering (Wang et al., 2007). Consider a very simple quasi-synchronous context-free dependency grammar that generates one dependent per production rule.2 Let s = ⟨s1, ..., sm⟩be the source sentence. The grammar rules will take one of the two forms: ⟨t, l⟩→⟨t, l⟩⟨t′, k⟩or ⟨t, l⟩→⟨t′, k⟩⟨t, l⟩ where t and t′ range over the vocabulary of the target language, and l and k ∈{0, ..., m} are indices in the source sentence, with 0 denoting null.3 Hard or soft constraints can be applied between l and k in a rule. These constraints imply permissible “configurations.” For example, requiring l ̸= 0 and, if k ̸= 0 then sk must be a child of sl in the source tree, we can implement a synchronous dependency grammar similar to (Melamed, 2004). Smith and Eisner (2006) used a quasisynchronous grammar to discover the correspondence between words implied by the correspondence between the trees. We follow Wang et al. (2007) in treating the correspondences as latent variables, and in using a WordNet-based lexical semantics model to generate the target words. 2Our actual model is more complicated; see §3.2. 3A more general QG could allow one-to-many alignments, replacing l and k with sets of indices. 469 3.2 Detailed Model We describe how we model pQ(t | Gp(s)) and pQ(t | Gn(s)) for source and target sentences s and t (appearing in Eq. 2 alternately as s1 and s2). A dependency tree on a sequence w = ⟨w1, ..., wk⟩is a mapping of indices of words to indices of syntactic parents, τp : {1, ..., k} → {0, ..., k}, and a mapping of indices of words to dependency relation types in L, τℓ: {1, ..., k} → L. The set of indices children of wi to its left, {j : τ w(j) = i, j < i}, is denoted λw(i), and ρw(i) is used for right children. wi has a single parent, denoted by wτp(i). Cycles are not allowed, and w0 is taken to be the dummy “wall” symbol, $, whose only child is the root word of the sentence (normally the main verb). The label for wi is denoted by τℓ(i). We denote the whole tree of a sentence w by τ w, the subtree rooted at the ith word by τ w,i. Consider two sentences: let the source sentence s contain m words and the target sentence t contain n words. Let the correspondence x : {1, ..., n} →{0, ..., m} be a mapping from indices of words in t to indices of words in s. (We require each target word to map to at most one source word, though multiple target words can map to the same source word, i.e., x(i) = x(j) while i ̸= j.) When x(i) = 0, the ith target word maps to the wall symbol, equivalently a “null” word. Each of our QGs Gp and Gn generates the alignments x, the target tree τ t, and the sentence t. Both Gp and Gn are structured in the same way, differing only in their parameters; henceforth we discuss Gp; Gn is similar. We assume that the parse trees of s and t are known.4 Therefore our model defines: pQ(t | Gp(s)) = p(τ t | Gp(τ s)) = P x p(τ t, x | Gp(τ s)) (3) Because the QG is essentially a context-free dependency grammar, we can factor it into recursive steps as follows (let i be an arbitrary index in {1, ..., n}): P(τ t,i | ti, x(i), τ s) = pval(|λt(i)|, |ρt(i)| | ti) 4In our experiments, we use the parser described by McDonald et al. (2005), trained on sections 2–21 of the WSJ Penn Treebank, transformed to dependency trees following Yamada and Matsumoto (2003). 
(The same treebank data were also to estimate many of the parameters of our model, as discussed in the text.) Though it leads to a partial “pipeline” approximation of the posterior probability p(c | s, t), we believe that the relatively high quality of English dependency parsing makes this approximation reasonable. × Y j∈λt(i)∪ρt(i) m X x(j)=0 P(τ t,j | tj, x(j), τ s) ×pkid(tj, τ t ℓ(j), x(j) | ti, x(i), τ s) (4) where pval and pkid are valence and childproduction probabilities parameterized as discussed in §3.4. Note the recursion in the secondto-last line. We next describe a dynamic programming solution for calculating p(τ t | Gp(τ s)). In §3.4 we discuss the parameterization of the model. 3.3 Dynamic Programming Let C(i, l) refer to the probability of τ t,i, assuming that the parent of ti, tτ tp(i), is aligned to sl. For leaves of τ t, the base case is: C(i, l) = pval(0, 0 | ti) × (5) Pm k=0 pkid(ti, τ t ℓ(i), k | tτ tp(i), l, τ s) where k ranges over possible values of x(i), the source-tree node to which ti is aligned. The recursive case is: C(i, l) = pval(|λt(i)|, |ρt(i)| | ti) (6) × Pm k=0 pkid(ti, τ t ℓ(i), k | tτ tp(i), l, τ s) × Q j∈λt(i)∪ρt(i) C(j, k) We assume that the wall symbols t0 and s0 are aligned, so p(τ t | Gp(τ s)) = C(r, 0), where r is the index of the root word of the target tree τ t. It is straightforward to show that this algorithm requires O(m2n) runtime and O(mn) space. 3.4 Parameterization The valency distribution pval in Eq. 4 is estimated in our model using the transformed treebank (see footnote 4). For unobserved cases, the conditional probability is estimated by backing off to the parent POS tag and child direction. We discuss next how to parameterize the probability pkid that appears in Equations 4, 5, and 6. This conditional distribution forms the core of our QGs, and we deviate from earlier research using QGs in defining pkid in a fully generative way. In addition to assuming that dependency parse trees for s and t are observable, we also assume each word wi comes with POS and named entity tags. In our experiments these were obtained automatically using MXPOST (Ratnaparkhi, 1996) and BBN’s Identifinder (Bikel et al., 1999). 470 For clarity, let j = τ t p(i) and let l = x(j). pkid(ti, τ t ℓ(i), x(i) | tj, l, τ s) = pconfig(config(ti, tj, sx(i), sl) | tj, l, τ s) (7) ×punif (x(i) | config(ti, tj, sx(i), sl)) (8) ×plab(τ t ℓ(i) | config(ti, tj, sx(i), sl)) (9) ×ppos(pos(ti) | pos(sx(i))) (10) ×pne(ne(ti) | ne(sx(i))) (11) ×plsrel(lsrel(ti) | sx(i)) (12) ×pword(ti | lsrel(ti), sx(i)) (13) We consider each of the factors above in turn. Configuration In QG, “configurations” refer to the tree relationship among source-tree nodes (above, sl and sx(i)) aligned to a pair of parentchild target-tree nodes (above, tj and ti). In deriving τ t,j, the model first chooses the configuration that will hold among ti, tj, sx(i) (which has yet to be chosen), and sl (line 7). This is defined for configuration c log-linearly by:5 pconfig(c | tj, l, τ s) = αc X c′:∃sk,config(ti,tj,sk,sl)=c′ αc′ (14) Permissible configurations in our model are shown in Table 1. These are identical to prior work (Smith and Eisner, 2006; Wang et al., 2007), except that we add a “root” configuration that aligns the target parent-child pair to null and the head word of the source sentence, respectively. Using many permissible configurations helps remove negative effects from noisy parses, which our learner treats as evidence. Fig. 
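Before turning to the factorization of pkid below, the recursion of §3.3 can be made concrete with the following minimal sketch, which computes p(τ t | Gp(τ s)) = C(r, 0) by memoizing C(i, l) as in Eqs. 5-6. The sketch compresses the parameterization: p_val(i) stands for the valence term of Eq. 4 and p_kid(i, k, l) for pkid(ti, τ t ℓ(i), k | tτ tp(i), l, τ s), both supplied as callables; these names, like target_children, are illustrative assumptions. Leaves are handled by the same code path, since the product over an empty child set is 1.

from functools import lru_cache

def tree_probability(m_source, root, target_children, p_val, p_kid):
    # Returns C(root, 0); the wall symbols t_0 and s_0 are taken to be aligned.
    @lru_cache(maxsize=None)
    def C(i, l):
        # Probability of the target subtree rooted at t_i, given that the
        # parent of t_i is aligned to source word s_l (l = 0 is the null word).
        total = 0.0
        for k in range(m_source + 1):          # candidate alignment x(i) = k
            inner = p_kid(i, k, l)
            for j in target_children(i):       # left and right dependents of t_i
                inner *= C(j, k)
            total += inner
        return p_val(i) * total
    return C(root, 0)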
1 shows some examples of major configurations that Gp discovers in the data. Source tree alignment After choosing the configuration, the specific node in τ s that ti will align to, sx(i) is drawn uniformly (line 8) from among those in the configuration selected. Dependency label, POS, and named entity class The newly generated target word’s dependency label, POS, and named entity class drawn from multinomial distributions plab, ppos, and pne that condition, respectively, on the configuration and the POS and named entity class of the aligned source-tree word sx(i) (lines 9–11). 5We use log-linear models three times: for the configuration, the lexical semantics class, and the word. Each time, we are essentially assigning one weight per outcome and renormalizing among the subset of outcomes that are possible given what has been derived so far. Configuration Description parent-child τ s p(x(i)) = x(j), appended with τ s ℓ(x(i)) child-parent x(i) = τ s p(x(j)), appended with τ s ℓ(x(j)) grandparentgrandchild τ s p(τ s p(x(i))) = x(j), appended with τ s ℓ(x(i)) siblings τ s p(x(i)) = τ s p(x(j)), x(i) ̸= x(j) same-node x(i) = x(j) c-command the parent of one source-side word is an ancestor of the other source-side word root x(j) = 0, x(i) is the root of s child-null x(i) = 0 parent-null x(j) = 0, x(i) is something other than root of s other catch-all for all other types of configurations, which are permitted Table 1: Permissible configurations. i is an index in t whose configuration is to be chosen; j = τ t p(i) is i’s parent. WordNet relation(s) The model next chooses a lexical semantics relation between sx(i) and the yet-to-be-chosen word ti (line 12). Following Wang et al. (2007),6 we employ a 14-feature loglinear model over all logically possible combinations of the 14 WordNet relations (Miller, 1995).7 Similarly to Eq. 14, we normalize this log-linear model based on the set of relations that are nonempty in WordNet for the word sx(i). Word Finally, the target word is randomly chosen from among the set of words that bear the lexical semantic relationship just chosen (line 13). This distribution is, again, defined log-linearly: pword(ti | lsrel(ti) = R, sx(i)) = αti P w′:sx(i)Rw′ αw′ (15) Here αw is the Good-Turing unigram probability estimate of a word w from the Gigaword corpus (Graff, 2003). 3.5 Base Grammar G0 In addition to the QG that generates a second sentence bearing the desired relationship (paraphrase or not) to the first sentence s, our model in §2 also requires a base grammar G0 over s. We view this grammar as a trivial special case of the same QG model already described. G0 assumes the empty source sentence consists only of 6Note that Wang et al. (2007) designed pkid as an interpolation between a log-linear lexical semantics model and a word model. Our approach is more fully generative. 7These are: identical-word, synonym, antonym (including extended and indirect antonym), hypernym, hyponym, derived form, morphological variation (e.g., plural form), verb group, entailment, entailed-by, see-also, causal relation, whether the two words are same and is a number, and no relation. 
471 (a) parent-child fill questionnaire complete questionnaire dozens wounded injured dozens (b) child-parent (c) grandparent-grandchild will chief will Secretary Liscouski quarter first first-quarter (e) same-node U.S refunding massive (f) siblings U.S treasury treasury (g) root null fell null dropped (d) c-command signatures necessary signatures needed 897,158 the twice approaching collected Figure 1: Some example configurations from Table 1 that Gp discovers in the dev. data. Directed arrows show head-modifier relationships, while dotted arrows show alignments. a single wall node. Thus every word generated under G0 aligns to null, and we can simplify the dynamic programming algorithm that scores a tree τ s under G0: C′(i) = pval(|λt(i)|, |ρt(i)| | si) ×plab(τ t ℓ(i)) × ppos(pos(ti)) × pne(ne(ti)) ×pword(ti) × Q j:τ t(j)=i C′(j) (16) where the final product is 1 when ti has no children. It should be clear that p(s | G0) = C′(0). We estimate the distributions over dependency labels, POS tags, and named entity classes using the transformed treebank (footnote 4). The distribution over words is taken from the Gigaword corpus (as in §3.4). It is important to note that G0 is designed to give a smoothed estimate of the probability of a particular parsed, named entity-tagged sentence. It is never used for parsing or for generation; it is only used as a component in the generative probability model presented in §2 (Eq. 2). 3.6 Discriminative Training Given training data D ⟨s(i) 1 , s(i) 2 , c(i)⟩ EN i=1, we train the model discriminatively by maximizing regularized conditional likelihood: max Θ N X i=1 log pQ(c(i) | s(i) 1 , s(i) 2 , Θ) | {z } Eq. 2 relates this to G{0,p,n} −C∥Θ∥2 2 (17) The parameters Θ to be learned include the class priors, the conditional distributions of the dependency labels given the various configurations, the POS tags given POS tags, the NE tags given NE tags appearing in expressions 9–11, the configuration weights appearing in Eq. 14, and the weights of the various features in the log-linear model for the lexical-semantics model. As noted, the distributions pval, the word unigram weights in Eq. 15, and the parameters of the base grammar are fixed using the treebank (see footnote 4) and the Gigaword corpus. Since there is a hidden variable (x), the objective function is non-convex. We locally optimize using the L-BFGS quasi-Newton method (Liu and Nocedal, 1989). Because many of our parameters are multinomial probabilities that are constrained to sum to one and L-BFGS is not designed to handle constraints, we treat these parameters as unnormalized weights that get renormalized (using a softmax function) before calculating the objective. 4 Data and Task In all our experiments, we have used the Microsoft Research Paraphrase Corpus (Dolan et al., 2004; Quirk et al., 2004). The corpus contains 5,801 pairs of sentences that have been marked as “equivalent” or “not equivalent.” It was constructed from thousands of news sources on the web. Dolan and Brockett (2005) remark that this corpus was created semi-automatically by first training an SVM classifier on a disjoint annotated 10,000 sentence pair dataset and then applying the SVM on an unseen 49,375 sentence pair corpus, with its output probabilities skewed towards over-identification, i.e., towards generating some false paraphrases. 5,801 out of these 49,375 pairs were randomly selected and presented to human judges for refinement into true and false paraphrases. 
3,900 of the pairs were marked as having 472 About 120 potential jurors were being asked to complete a lengthy questionnaire . The jurors were taken into the courtroom in groups of 40 and asked to fill out a questionnaire . Figure 2: Discovered alignment of Ex. 19 produced by Gp. Observe that the model aligns identical words and also “complete” and “fill” in this specific case. This kind of alignment provides an edge over a simple lexical overlap model. “mostly bidirectional entailment,” a standard definition of the paraphrase relation. Each sentence was labeled first by two judges, who averaged 83% agreement, and a third judge resolved conflicts. We use the standard data split into 4,076 (2,753 paraphrase, 1,323 not) training and 1,725 (1147 paraphrase, 578 not) test pairs. We reserved a randomly selected 1,075 training pairs for tuning.We cite some examples from the training set here: (18) Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier. With the scandal hanging over Stewart’s company, revenue in the first quarter of the year dropped 15 percent from the same period a year earlier. (19) About 120 potential jurors were being asked to complete a lengthy questionnaire. The jurors were taken into the courtroom in groups of 40 and asked to fill out a questionnaire. Ex. 18 is a true paraphrase pair. Notice the high lexical overlap between the two sentences (unigram overlap of 100% in one direction and 72% in the other). Ex. 19 is another true paraphrase pair with much lower lexical overlap (unigram overlap of 50% in one direction and 30% in the other). Notice the use of similar-meaning phrases and irrelevant modifiers that retain the same meaning in both sentences, which a lexical overlap model cannot capture easily, but a model like a QG might. Also, in both pairs, the relationship cannot be called total bidirectional equivalence because there is some extra information in one sentence which cannot be inferred from the other. Ex. 20 was labeled “not paraphrase”: (20) “There were a number of bureaucratic and administrative missed signals - there’s not one person who’s responsible here,” Gehman said. In turning down the NIMA offer, Gehman said, “there were a number of bureaucratic and administrative missed signals here. There is significant content overlap, making a decision difficult for a na¨ıve lexical overlap classifier. (In fact, pQ labels this example n while the lexical overlap models label it p.) The fact that negative examples in this corpus were selected because of their high lexical overlap is important. It means that any discriminative model is expected to learn to distinguish mere overlap from paraphrase. This seems appropriate, but it does mean that the “not paraphrase” relation ought to be denoted “not paraphrase but deceptively similar on the surface.” It is for this reason that we use a special QG for the n relation. 5 Experimental Evaluation Here we present our experimental evaluation using pQ. We trained on the training set (3,001 pairs) and tuned model metaparameters (C in Eq. 17) and the effect of different feature sets on the development set (1,075 pairs). We report accuracy on the official MSRPC test dataset. If the posterior probability pQ(p | s1, s2) is greater than 0.5, the pair is labeled “paraphrase” (as in Eq. 1). 5.1 Baseline We replicated a state-of-the-art baseline model for comparison. Wan et al. (2006) report the best published accuracy, to our knowledge, on this task, using a support vector machine. 
Our baseline is a reimplementation of Wan et al. (2006), using features calculated directly from s1 and s2 without recourse to any hidden structure: proportion of word unigram matches, proportion of lemmatized unigram matches, BLEU score (Papineni et al., 2001), BLEU score on lemmatized tokens, F measure (Turian et al., 2003), difference of sentence length, and proportion of dependency relation overlap. The SVM was trained to classify positive and negative examples of paraphrase using SVMlight (Joachims, 1999).8 Metaparameters, tuned on the development data, were the regularization constant and the degree of the polynomial kernel (chosen in [10−5, 102] and 1–5 respectively.).9 It is unsurprising that the SVM performs very well on the MSRPC because of the corpus creation process (see Sec. 4) where an SVM was applied as well, with very similar features and a skewed decision process (Dolan and Brockett, 2005). 8http://svmlight.joachims.org 9Our replication of the Wan et al. model is approximate, because we used different preprocessing tools: MXPOST for POS tagging (Ratnaparkhi, 1996), MSTParser for parsing (McDonald et al., 2005), and Dan Bikel’s interface (http://www.cis.upenn.edu/˜dbikel/ software.html#wn) to WordNet (Miller, 1995) for lemmatization information. Tuning led to C = 17 and polynomial degree 4. 473 Model Accuracy Precision Recall baselines all p 66.49 66.49 100.00 Wan et al. SVM (reported) 75.63 77.00 90.00 Wan et al. SVM (replication) 75.42 76.88 90.14 pQ lexical semantics features removed 68.64 68.84 96.51 all features 73.33 74.48 91.10 c-command disallowed (best; see text) 73.86 74.89 91.28 §6 pL 75.36 78.12 87.44 product of experts 76.06 79.57 86.05 oracles Wan et al. SVM and pL 80.17 100.00 92.07 Wan et al. SVM and pQ 83.42 100.00 96.60 pQ and pL 83.19 100.00 95.29 Table 2: Accuracy, p-class precision, and p-class recall on the test set (N = 1,725). See text for differences in implementation between Wan et al. and our replication; their reported score does not include the full test set. 5.2 Results Tab. 2 shows performance achieved by the baseline SVM and variations on pQ on the test set. We performed a few feature ablation studies, evaluating on the development data. We removed the lexical semantics component of the QG,10 and disallowed the syntactic configurations one by one, to investigate which components of pQ contributes to system performance. The lexical semantics component is critical, as seen by the drop in accuracy from the table (without this component, pQ behaves almost like the “all p” baseline). We found that the most important configurations are “parent-child,” and “child-parent” while damage from ablating other configurations is relatively small. Most interestingly, disallowing the “ccommand” configuration resulted in the best absolute accuracy, giving us the best version of pQ. The c-command configuration allows more distant nodes in a source sentence to align to parent-child pairs in a target (see Fig. 1d). Allowing this configuration guides the model in the wrong direction, thus reducing test accuracy. We tried disallowing more than one configuration at a time, without getting improvements on development data. We also tried ablating the WordNet relations, and observed that the “identical-word” feature hurt the model the most. Ablating the rest of the features did not produce considerable changes in accuracy. The development data-selected pQ achieves higher recall by 1 point than Wan et al.’s SVM, but has precision 2 points worse. 
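To make the surface-overlap statistics concrete, the following Python sketch computes n-gram precision, recall, and F of the kind used as features by the baseline above and, over both raw and lemmatized tokens, by the lexical overlap model of §6; function and feature names are illustrative, not the authors' code:

```python
from collections import Counter

def ngram_overlap_features(s1_tokens, s2_tokens, max_n=3):
    """N-gram overlap statistics (precision_n, recall_n, F_n) between two
    tokenized sentences; lemmatized variants would be computed the same way
    on lemmatized tokens. Illustrative sketch only."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    feats = {}
    for n in range(1, max_n + 1):
        g1, g2 = ngrams(s1_tokens, n), ngrams(s2_tokens, n)
        matches = sum((g1 & g2).values())                 # clipped n-gram match count
        prec = matches / sum(g1.values()) if g1 else 0.0  # matches / n-grams in s1
        rec = matches / sum(g2.values()) if g2 else 0.0   # matches / n-grams in s2
        f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        feats[f"prec_{n}"], feats[f"rec_{n}"], feats[f"F_{n}"] = prec, rec, f
    return feats
```

The unigram overlap percentages quoted for Examples 18 and 19 in §4 are statistics of exactly this kind.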
5.3 Discussion It is quite promising that a linguistically-motivated probabilistic model comes so close to a stringsimilarity baseline, without incorporating stringlocal phrases. We see several reasons to prefer 10This is accomplished by eliminating lines 12 and 13 from the definition of pkid and redefining pword to be the unigram word distribution estimated from the Gigaword corpus, as in G0, without the help of WordNet. the more intricate QG to the straightforward SVM. First, the QG discovers hidden alignments between words. Alignments have been leveraged in related tasks such as textual entailment (Giampiccolo et al., 2007); they make the model more interpretable in analyzing system output (e.g., Fig. 2). Second, the paraphrases of a sentence can be considered to be monolingual translations. We model the paraphrase problem using a direct machine translation model, thus providing a translation interpretation of the problem. This framework could be extended to permit paraphrase generation, or to exploit other linguistic annotations, such as representations of semantics (see, e.g., Qiu et al., 2006). Nonetheless, the usefulness of surface overlap features is difficult to ignore. We next provide an efficient way to combine a surface model with pQ. 6 Product of Experts Incorporating structural alignment and surface overlap features inside a single model can make exact inference infeasible. As an example, consider features like n-gram overlap percentages that provide cues of content overlap between two sentences. One intuitive way of including these features in a QG could be including these only at the root of the target tree, i.e. while calculating C(r, 0). These features have to be included in estimating pkid, which has log-linear component models (Eq. 7- 13). For these bigram or trigram overlap features, a similar log-linear model has to be normalized with a partition function, which considers the (unnormalized) scores of all possible target sentences, given the source sentence. We therefore combine pQ with a lexical overlap model that gives another posterior probability estimate pL(c | s1, s2) through a product of experts (PoE; Hinton, 2002), pJ(c | s1, s2) = pQ(c | s1, s2) × pL(c | s1, s2) X c′∈{p,n} pQ(c′ | s1, s2) × pL(c′ | s1, s2) (21) 474 Eq. 21 takes the product of the two models’ posterior probabilities, then normalizes it to sum to one. PoE models are used to efficiently combine several expert models that individually constrain different dimensions in high-dimensional data, the product therefore constraining all of the dimensions. Combining models in this way grants to each expert component model the ability to “veto” a class by giving it low probability; the most probable class is the one that is least objectionable to all experts. Probabilistic Lexical Overlap Model We devised a logistic regression (LR) model incorporating 18 simple features, computed directly from s1 and s2, without modeling any hidden correspondence. LR (like the QG) provides a probability distribution, but uses surface features (like the SVM). The features are of the form precisionn (number of n-gram matches divided by the number of n-grams in s1), recalln (number of n-gram matches divided by the number of n-grams in s2) and Fn (harmonic mean of the previous two features), where 1 ≤n ≤3. We also used lemmatized versions of these features. This model gives the posterior probability pL(c | s1, s2), where c ∈{p, n}. We estimated the model parameters analogously to Eq. 17. Performance is reported in Tab. 
2; this model is on par with the SVM, though trading recall in favor of precision. We view it as a probabilistic simulation of the SVM more suitable for combination with the QG. Training the PoE Various ways of training a PoE exist. We first trained pQ and pL separately as described, then initialized the PoE with those parameters. We then continued training, maximizing (unregularized) conditional likelihood. Experiment We used pQ with the “c-command” configuration excluded, and the LR model in the product of experts. Tab. 2 includes the final results achieved by the PoE. The PoE model outperforms all the other models, achieving an accuracy of 76.06%.11 The PoE is conservative, labeling a pair as p only if the LR and the QG give it strong p probabilities. This leads to high precision, at the expense of recall. Oracle Ensembles Tab. 2 shows the results of three different oracle ensemble systems that correctly classify a pair if either of the two individual systems in the combination is correct. Note that the combinations involving pQ achieve 83%, the 11This accuracy is significant over pQ under a paired t-test (p < 0.04), but is not significant over the SVM. human agreement level for the MSRPC. The LR and SVM are highly similar, and their oracle combination does not perform as well. 7 Related Work There is a growing body of research that uses the MSRPC (Dolan et al., 2004; Quirk et al., 2004) to build models of paraphrase. As noted, the most successful work has used edit distance (Zhang and Patrick, 2005) or bag-of-words features to measure sentence similarity, along with shallow syntactic features (Finch et al., 2005; Wan et al., 2006; Corley and Mihalcea, 2005). Qiu et al. (2006) used predicate-argument annotations. Most related to our approach, Wu (2005) used inversion transduction grammars—a synchronous context-free formalism (Wu, 1997)—for this task. Wu reported only positive-class (p) precision (not accuracy) on the test set. He obtained 76.1%, while our PoE model achieves 79.6% on that measure. Wu’s model can be understood as a strict hierarchical maximum-alignment method. In contrast, our alignments are soft (we sum over them), and we do not require strictly isomorphic syntactic structures. Most importantly, our approach is founded on a stochastic generating process and estimated discriminatively for this task, while Wu did not estimate any parameters from data at all. 8 Conclusion In this paper, we have presented a probabilistic model of paraphrase incorporating syntax, lexical semantics, and hidden loose alignments between two sentences’ trees. Though it fully defines a generative process for both sentences and their relationship, the model is discriminatively trained to maximize conditional likelihood. We have shown that this model is competitive for determining whether there exists a semantic relationship between them, and can be improved by principled combination with more standard lexical overlap approaches. Acknowledgments The authors thank the three anonymous reviewers for helpful comments and Alan Black, Frederick Crabbe, Jason Eisner, Kevin Gimpel, Rebecca Hwa, David Smith, and Mengqiu Wang for helpful discussions. This work was supported by DARPA grant NBCH-1080004. 475 References Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In Proc. of NAACL. Daniel M. Bikel, Richard L. Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning, 34(1-3):211–231. 
Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proc. of HLT-NAACL. Courtney Corley and Rada Mihalcea. 2005. Measuring the semantic similarity of texts. In Proc. of ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment. William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proc. of IWP. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In Proc. of COLING. Andrew Finch, Young Sook Hwang, and Eiichiro Sumita. 2005. Using machine translation evaluation techniques to determine sentence-level semantic equivalence. In Proc. of IWP. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proc. of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing. David Graff. 2003. English Gigaword. Linguistic Data Consortium. Geoffrey E. Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In Advances in Kernel Methods Support Vector Learning. MIT Press. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Math. Programming (Ser. B), 45(3):503–528. Erwin Marsi and Emiel Krahmer. 2005. Explorations in sentence fusion. In Proc. of EWNLG. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proc. of ACL. Kathleen R. McKeown. 1979. Paraphrasing using given and new information in a question-answer system. In Proc. of ACL. I. Dan Melamed. 2004. Statistical machine translation by parsing. In Proc. of ACL. George A. Miller. 1995. Wordnet: a lexical database for English. Commun. ACM, 38(11):39–41. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2006. Paraphrase recognition via dissimilarity significance classification. In Proc. of EMNLP. Chris Quirk, Chris Brockett, and William B. Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proc. of EMNLP. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. of EMNLP. David A. Smith and Jason Eisner. 2006. Quasisynchronous grammars: Alignment by soft projection of syntactic dependencies. In Proc. of the HLTNAACL Workshop on Statistical Machine Translation. Joseph P. Turian, Luke Shen, and I. Dan Melamed. 2003. Evaluation of machine translation and its evaluation. In Proc. of Machine Translation Summit IX. Stephen Wan, Mark Dras, Robert Dale, and C´ecile Paris. 2006. Using dependency-based features to take the “para-farce” out of paraphrase. In Proc. of ALTW. Mengqiu Wang, Noah A. Smith, and Teruko Mitamura. 2007. What is the Jeopardy model? a quasisynchronous grammar for QA. In Proc. of EMNLPCoNLL. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Comput. Linguist., 23(3). Dekai Wu. 2005. Recognizing paraphrases and textual entailment using inversion transduction grammars. In Proc. of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. 
In Proc. of IWPT. Yitao Zhang and Jon Patrick. 2005. Paraphrase identification by text canonicalization. In Proc. of ALTW.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 477–485, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty Yoshimasa Tsuruoka†‡ Jun’ichi Tsujii†‡∗ Sophia Ananiadou†‡ † School of Computer Science, University of Manchester, UK ‡ National Centre for Text Mining (NaCTeM), UK ∗Department of Computer Science, University of Tokyo, Japan {yoshimasa.tsuruoka,j.tsujii,sophia.ananiadou}@manchester.ac.uk Abstract Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework is attractive because it often requires much less training time in practice than batch training algorithms. However, L1-regularization, which is becoming popular in natural language processing because of its ability to produce compact models, cannot be efficiently applied in SGD training, due to the large dimensions of feature vectors and the fluctuations of approximate gradients. We present a simple method to solve these problems by penalizing the weights according to cumulative values for L1 penalty. We evaluate the effectiveness of our method in three applications: text chunking, named entity recognition, and part-of-speech tagging. Experimental results demonstrate that our method can produce compact and accurate models much more quickly than a state-of-the-art quasiNewton method for L1-regularized loglinear models. 1 Introduction Log-linear models (a.k.a maximum entropy models) are one of the most widely-used probabilistic models in the field of natural language processing (NLP). The applications range from simple classification tasks such as text classification and history-based tagging (Ratnaparkhi, 1996) to more complex structured prediction tasks such as partof-speech (POS) tagging (Lafferty et al., 2001), syntactic parsing (Clark and Curran, 2004) and semantic role labeling (Toutanova et al., 2005). Loglinear models have a major advantage over other discriminative machine learning models such as support vector machines—their probabilistic output allows the information on the confidence of the decision to be used by other components in the text processing pipeline. The training of log-liner models is typically performed based on the maximum likelihood criterion, which aims to obtain the weights of the features that maximize the conditional likelihood of the training data. In maximum likelihood training, regularization is normally needed to prevent the model from overfitting the training data, The two most common regularization methods are called L1 and L2 regularization. L1 regularization penalizes the weight vector for its L1-norm (i.e. the sum of the absolute values of the weights), whereas L2 regularization uses its L2-norm. There is usually not a considerable difference between the two methods in terms of the accuracy of the resulting model (Gao et al., 2007), but L1 regularization has a significant advantage in practice. Because many of the weights of the features become zero as a result of L1-regularized training, the size of the model can be much smaller than that produced by L2-regularization. Compact models require less space on memory and storage, and enable the application to start up quickly. These merits can be of vital importance when the application is deployed in resource-tight environments such as cell-phones. 
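In symbols, with C a constant controlling the strength of regularization and w the weight vector, the two penalty terms discussed above are commonly written as

\[
R_{L1}(\mathbf{w}) = C \sum_i |w_i|, \qquad R_{L2}(\mathbf{w}) = C \sum_i w_i^{2},
\]

where the L1 form is the one defined and used in the rest of this paper, and the L2 form is shown in one common parameterization only for comparison.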
A common way to train a large-scale L1regularized model is to use a quasi-Newton method. Kazama and Tsujii (2003) describe a method for training a L1-regularized log-linear model with a bound constrained version of the BFGS algorithm (Nocedal, 1980). Andrew and Gao (2007) present an algorithm called OrthantWise Limited-memory Quasi-Newton (OWLQN), which can work on the BFGS algorithm without bound constraints and achieve faster convergence. 477 An alternative approach to training a log-linear model is to use stochastic gradient descent (SGD) methods. SGD uses approximate gradients estimated from subsets of the training data and updates the weights of the features in an online fashion—the weights are updated much more frequently than batch training algorithms. This learning framework is attracting attention because it often requires much less training time in practice than batch training algorithms, especially when the training data is large and redundant. SGD was recently used for NLP tasks including machine translation (Tillmann and Zhang, 2006) and syntactic parsing (Smith and Eisner, 2008; Finkel et al., 2008). Also, SGD is very easy to implement because it does not need to use the Hessian information on the objective function. The implementation could be as simple as the perceptron algorithm. Although SGD is a very attractive learning framework, the direct application of L1 regularization in this learning framework does not result in efficient training. The first problem is the inefficiency of applying the L1 penalty to the weights of all features. In NLP applications, the dimension of the feature space tends to be very large—it can easily become several millions, so the application of L1 penalty to all features significantly slows down the weight updating process. The second problem is that the naive application of L1 penalty in SGD does not always lead to compact models, because the approximate gradient used at each update is very noisy, so the weights of the features can be easily moved away from zero by those fluctuations. In this paper, we present a simple method for solving these two problems in SGD learning. The main idea is to keep track of the total penalty and the penalty that has been applied to each weight, so that the L1 penalty is applied based on the difference between those cumulative values. That way, the application of L1 penalty is needed only for the features that are used in the current sample, and also the effect of noisy gradient is smoothed away. We evaluate the effectiveness of our method by using linear-chain conditional random fields (CRFs) and three traditional NLP tasks, namely, text chunking (shallow parsing), named entity recognition, and POS tagging. We show that our enhanced SGD learning method can produce compact and accurate models much more quickly than the OWL-QN algorithm. This paper is organized as follows. Section 2 provides a general description of log-linear models used in NLP. Section 3 describes our stochastic gradient descent method for L1-regularized loglinear models. Experimental results are presented in Section 4. Some related work is discussed in Section 5. Section 6 gives some concluding remarks. 2 Log-Linear Models In this section, we briefly describe log-linear models used in NLP tasks and L1 regularization. 
A log-linear model defines the following probabilistic distribution over possible structure y for input x: p(y|x) = 1 Z(x) exp X i wifi(y, x), where fi(y, x) is a function indicating the occurrence of feature i, wi is the weight of the feature, and Z(x) is a partition (normalization) function: Z(x) = X y exp X i wifi(y, x). If the structure is a sequence, the model is called a linear-chain CRF model, and the marginal probabilities of the features and the partition function can be efficiently computed by using the forwardbackward algorithm. The model is used for a variety of sequence labeling tasks such as POS tagging, chunking, and named entity recognition. If the structure is a tree, the model is called a tree CRF model, and the marginal probabilities can be computed by using the inside-outside algorithm. The model can be used for tasks like syntactic parsing (Finkel et al., 2008) and semantic role labeling (Cohn and Blunsom, 2005). 2.1 Training The weights of the features in a log-linear model are optimized in such a way that they maximize the regularized conditional log-likelihood of the training data: Lw = N X j=1 log p(yj|xj; w) −R(w), (1) where N is the number of training samples, yj is the correct output for input xj, and R(w) is the 478 regularization term which prevents the model from overfitting the training data. In the case of L1 regularization, the term is defined as: R(w) = C X i |wi|, where C is the meta-parameter that controls the degree of regularization, which is usually tuned by cross-validation or using the heldout data. In what follows, we denote by L(j, w) the conditional log-likelihood of each sample log p(yj|xj; w). Equation 1 is rewritten as: Lw = N X j=1 L(j, w) −C X i |wi|. (2) 3 Stochastic Gradient Descent SGD uses a small randomly-selected subset of the training samples to approximate the gradient of the objective function given by Equation 2. The number of training samples used for this approximation is called the batch size. When the batch size is N, the SGD training simply translates into gradient descent (hence is very slow to converge). By using a small batch size, one can update the parameters more frequently than gradient descent and speed up the convergence. The extreme case is a batch size of 1, and it gives the maximum frequency of updates and leads to a very simple perceptron-like algorithm, which we adopt in this work.1 Apart from using a single training sample to approximate the gradient, the optimization procedure is the same as simple gradient descent,2 so the weights of the features are updated at training sample j as follows: wk+1 = wk + ηk ∂ ∂w(L(j, w) −C N X i |wi|), where k is the iteration counter and ηk is the learning rate, which is normally designed to decrease as the iteration proceeds. The actual learning rate scheduling methods used in our experiments are described later in Section 3.3. 1In the actual implementation, we randomly shuffled the training samples at the beginning of each pass, and then picked them up sequentially. 2What we actually do here is gradient ascent, but we stick to the term “gradient descent”. 3.1 L1 regularization The update equation for the weight of each feature i is as follows: wik+1 = wik + ηk ∂ ∂wi (L(j, w) −C N |wi|). The difficulty with L1 regularization is that the last term on the right-hand side of the above equation is not differentiable when the weight is zero. 
One straightforward solution to this problem is to consider a subgradient at zero and use the following update equation: wik+1 = wik + ηk ∂L(j, w) ∂wi −C N ηksign(wk i ), where sign(x) = 1 if x > 0, sign(x) = −1 if x < 0, and sign(x) = 0 if x = 0. In this paper, we call this weight updating method “SGD-L1 (Naive)”. This naive method has two serious problems. The first problem is that, at each update, we need to perform the application of L1 penalty to all features, including the features that are not used in the current training sample. Since the dimension of the feature space can be very large, it can significantly slow down the weight update process. The second problem is that it does not produce a compact model, i.e. most of the weights of the features do not become zero as a result of training. Note that the weight of a feature does not become zero unless it happens to fall on zero exactly, which rarely happens in practice. Carpenter (2008) describes an alternative approach. The weight updating process is divided into two steps. First, the weight is updated without considering the L1 penalty term. Then, the L1 penalty is applied to the weight to the extent that it does not change its sign. In other words, the weight is clipped when it crosses zero. Their weight update procedure is as follows: w k+ 1 2 i = wk i + ηk ∂L(j, w) ∂wi w=wk , if w k+ 1 2 i > 0 then wk+1 i = max(0, w k+ 1 2 i −C N ηk), else if w k+ 1 2 i < 0 then wk+1 i = min(0, w k+ 1 2 i + C N ηk). In this paper, we call this update method “SGDL1 (Clipping)”. It should be noted that this method 479 -0.1 -0.05 0 0.05 0.1 0 1000 2000 3000 4000 5000 6000 Weight Updates Figure 1: An example of weight updates. is actually a special case of the FOLOS algorithm (Duchi and Singer, 2008) and the truncated gradient method (Langford et al., 2009). The obvious advantage of using this method is that we can expect many of the weights of the features to become zero during training. Another merit is that it allows us to perform the application of L1 penalty in a lazy fashion, so that we do not need to update the weights of the features that are not used in the current sample, which leads to much faster training when the dimension of the feature space is large. See the aforementioned papers for the details. In this paper, we call this efficient implementation “SGD-L1 (Clipping + LazyUpdate)”. 3.2 L1 regularization with cumulative penalty Unfortunately, the clipping-at-zero approach does not solve all problems. Still, we often end up with many features whose weights are not zero. Recall that the gradient used in SGD is a crude approximation to the true gradient and is very noisy. The weight of a feature is, therefore, easily moved away from zero when the feature is used in the current sample. Figure 1 gives an illustrative example in which the weight of a feature fails to become zero. The figure shows how the weight of a feature changes during training. The weight goes up sharply when it is used in the sample and then is pulled back toward zero gradually by the L1 penalty. Therefore, the weight fails to become zero if the feature is used toward the end of training, which is the case in this example. Note that the weight would become zero if the true (fluctuationless) gradient were used—at each update the weight would go up a little and be pulled back to zero straightaway. Here, we present a different strategy for applying the L1 penalty to the weights of the features. 
The key idea is to smooth out the effect of fluctuating gradients by considering the cumulative effects from L1 penalty. Let uk be the absolute value of the total L1penalty that each weight could have received up to the point. Since the absolute value of the L1 penalty does not depend on the weight and we are using the same regularization constant C for all weights, it is simply accumulated as: uk = C N k X t=1 ηt. (3) At each training sample, we update the weights of the features that are used in the sample as follows: w k+ 1 2 i = wk i + ηk ∂L(j, w) ∂wi w=wk , if w k+ 1 2 i > 0 then wk+1 i = max(0, w k+ 1 2 i −(uk + qk−1 i )), else if w k+ 1 2 i < 0 then wk+1 i = min(0, w k+ 1 2 i + (uk −qk−1 i )), where qk i is the total L1-penalty that wi has actually received up to the point: qk i = k X t=1 (wt+1 i −w t+ 1 2 i ). (4) This weight updating method penalizes the weight according to the difference between uk and qk−1 i . In effect, it forces the weight to receive the total L1 penalty that would have been applied if the weight had been updated by the true gradients, assuming that the current weight vector resides in the same orthant as the true weight vector. It should be noted that this method is basically equivalent to a “SGD-L1 (Clipping + LazyUpdate)” method if we were able to use the true gradients instead of the stochastic gradients. In this paper, we call this weight updating method “SGD-L1 (Cumulative)”. The implementation of this method is very simple. Figure 2 shows the whole SGD training algorithm with this strategy in pseudo-code. 480 1: procedure TRAIN(C) 2: u ←0 3: Initialize wi and qi with zero for all i 4: for k = 0 to MaxIterations 5: η ←LEARNINGRATE(k) 6: u ←u + ηC/N 7: Select sample j randomly 8: UPDATEWEIGHTS(j) 9: 10: procedure UPDATEWEIGHTS(j) 11: for i ∈features used in sample j 12: wi ←wi + η ∂L(j,w) ∂wi 13: APPLYPENALTY(i) 14: 15: procedure APPLYPENALTY(i) 16: z ←wi 17: if wi > 0 then 18: wi ←max(0, wi −(u + qi)) 19: else if wi < 0 then 20: wi ←min(0, wi + (u −qi)) 21: qi ←qi + (wi −z) 22: Figure 2: Stochastic gradient descent training with cumulative L1 penalty. z is a temporary variable. 3.3 Learning Rate The scheduling of learning rates often has a major impact on the convergence speed in SGD training. A typical choice of learning rate scheduling can be found in (Collins et al., 2008): ηk = η0 1 + k/N , (5) where η0 is a constant. Although this scheduling guarantees ultimate convergence, the actual speed of convergence can be poor in practice (Darken and Moody, 1990). In this work, we also tested simple exponential decay: ηk = η0α−k/N, (6) where α is a constant. In our experiments, we found this scheduling more practical than that given in Equation 5. This is mainly because exponential decay sweeps the range of learning rates more smoothly—the learning rate given in Equation 5 drops too fast at the beginning and too slowly at the end. It should be noted that exponential decay is not a good choice from a theoretical point of view, because it does not satisfy one of the necessary conditions for convergence—the sum of the learning rates must diverge to infinity (Spall, 2005). However, this is probably not a big issue for practitioners because normally the training has to be terminated at a certain number of iterations in practice.3 4 Experiments We evaluate the effectiveness our training algorithm using linear-chain CRF models and three NLP tasks: text chunking, named entity recognition, and POS tagging. 
To compare our algorithm with the state-of-theart, we present the performance of the OWL-QN algorithm on the same data. We used the publicly available OWL-QN optimizer developed by Andrew and Gao.4 The meta-parameters for learning were left unchanged from the default settings of the software: the convergence tolerance was 1e-4; and the L-BFGS memory parameter was 10. 4.1 Text Chunking The first set of experiments used the text chunking data set provided for the CoNLL 2000 shared task.5 The training data consists of 8,936 sentences in which each token is annotated with the “IOB” tags representing text chunks such as noun and verb phrases. We separated 1,000 sentences from the training data and used them as the heldout data. The test data provided by the shared task was used only for the final accuracy report. The features used in this experiment were unigrams and bigrams of neighboring words, and unigrams, bigrams and trigrams of neighboring POS tags. To avoid giving any advantage to our SGD algorithms over the OWL-QN algorithm in terms of the accuracy of the resulting model, the OWL-QN algorithm was used when tuning the regularization parameter C. The tuning was performed in such a way that it maximized the likelihood of the heldout data. The learning rate parameters for SGD were then tuned in such a way that they maximized the value of the objective function in 30 passes. We first determined η0 by testing 1.0, 0.5, 0.2, and 0.1. We then determined α by testing 0.9, 0.85, and 0.8 with the fixed η0. 3This issue could also be sidestepped by, for example, adding a small O(1/k) term to the learning rate. 4Available from the original developers’ websites: http://research.microsoft.com/en-us/people/galena/ or http://research.microsoft.com/en-us/um/people/jfgao/ 5http://www.cnts.ua.ac.be/conll2000/chunking/ 481 Passes Lw/N # Features Time (sec) F-score OWL-QN 160 -1.583 18,109 598 93.62 SGD-L1 (Naive) 30 -1.671 455,651 1,117 93.64 SGD-L1 (Clipping + Lazy-Update) 30 -1.671 87,792 144 93.65 SGD-L1 (Cumulative) 30 -1.653 28,189 149 93.68 SGD-L1 (Cumulative + Exponential-Decay) 30 -1.622 23,584 148 93.66 Table 1: CoNLL-2000 Chunking task. Training time and accuracy of the trained model on the test data. -2.4 -2.2 -2 -1.8 -1.6 0 10 20 30 40 50 Objective function Passes OWL-QN SGD-L1 (Clipping) SGD-L1 (Cumulative) SGD-L1 (Cumulative + ED) Figure 3: CoNLL 2000 chunking task: Objective 0 50000 100000 150000 200000 0 10 20 30 40 50 # Active features Passes OWL-QN SGD-L1 (Clipping) SGD-L1 (Cumulative) SGD-L1 (Cumulative + ED) Figure 4: CoNLL 2000 chunking task: Number of active features. Figures 3 and 4 show the training process of the model. Each figure contains four curves representing the results of the OWL-QN algorithm and three SGD-based algorithms. “SGD-L1 (Cumulative + ED)” represents the results of our cumulative penalty-based method that uses exponential decay (ED) for learning rate scheduling. Figure 3 shows how the value of the objective function changed as the training proceeded. SGD-based algorithms show much faster convergence than the OWL-QN algorithm. Notice also that “SGD-L1 (Cumulative)” improves the objective slightly faster than “SGD-L1 (Clipping)”. The result of “SGD-L1 (Naive)” is not shown in this figure, but the curve was almost identical to that of “SGD-L1 (Clipping)”. Figure 4 shows the numbers of active features (the features whose weight are not zero). 
It is clearly seen that the clipping-at-zero approach fails to reduce the number of active features, while our algorithms succeeded in reducing the number of active features to the same level as OWL-QN. We then trained the models using the whole training data (including the heldout data) and evaluated the accuracy of the chunker on the test data. The number of passes performed over the training data in SGD was set to 30. The results are shown in Table 1. The second column shows the number of passes performed in the training. The third column shows the final value of the objective function per sample. The fourth column shows the number of resulting active features. The fifth column show the training time. The last column shows the f-score (harmonic mean of recall and precision) of the chunking results. There was no significant difference between the models in terms of accuracy. The naive SGD training took much longer than OWL-QN because of the overhead of applying L1 penalty to all dimensions. Our SGD algorithms finished training in 150 seconds on Xeon 2.13GHz processors. The CRF++ version 0.50, a popular CRF library developed by Taku Kudo,6 is reported to take 4,021 seconds on Xeon 3.0GHz processors to train the model using a richer feature set.7 CRFsuite version 0.4, a much faster library for CRFs, is reported to take 382 seconds on Xeon 3.0GHz, using the same feature set as ours.8 Their library uses the OWL-QN algorithm for optimization. Although direct comparison of training times is not impor6http://crfpp.sourceforge.net/ 7http://www.chokkan.org/software/crfsuite/benchmark.html 8ditto 482 tant due to the differences in implementation and hardware platforms, these results demonstrate that our algorithm can actually result in a very fast implementation of a CRF trainer. 4.2 Named Entity Recognition The second set of experiments used the named entity recognition data set provided for the BioNLP/NLPBA 2004 shared task (Kim et al., 2004).9 The training data consist of 18,546 sentences in which each token is annotated with the “IOB” tags representing biomedical named entities such as the names of proteins and RNAs. The training and test data were preprocessed by the GENIA tagger,10 which provided POS tags and chunk tags. We did not use any information on the named entity tags output by the GENIA tagger. For the features, we used unigrams of neighboring chunk tags, substrings (shorter than 10 characters) of the current word, and the shape of the word (e.g. “IL-2” is converted into “AA-#”), on top of the features used in the text chunking experiments. The results are shown in Figure 5 and Table 2. The trend in the results is the same as that of the text chunking task: our SGD algorithms show much faster convergence than the OWL-QN algorithm and produce compact models. Okanohara et al. (2006) report an f-score of 71.48 on the same data, using semi-Markov CRFs. 4.3 Part-Of-Speech Tagging The third set of experiments used the POS tagging data in the Penn Treebank (Marcus et al., 1994). Following (Collins, 2002), we used sections 0-18 of the Wall Street Journal (WSJ) corpus for training, sections 19-21 for development, and sections 22-24 for final evaluation. The POS tags were extracted from the parse trees in the corpus. All experiments for this work, including the tuning of features and parameters for regularization, were carried out using the training and development sets. The test set was used only for the final accuracy report. 
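As an aside on the feature extraction just described, the word-shape feature of Section 4.2 (for example, "IL-2" becomes "AA-#") might be computed as in the sketch below; the exact character classes used by the authors are not specified, so this is an assumed reconstruction:

```python
def word_shape(word):
    """Coarse word shape, e.g. "IL-2" -> "AA-#": uppercase letters map to
    'A', lowercase to 'a', digits to '#', and other characters are kept.
    Assumed reconstruction of the shape feature described in Section 4.2."""
    out = []
    for ch in word:
        if ch.isupper():
            out.append("A")
        elif ch.islower():
            out.append("a")
        elif ch.isdigit():
            out.append("#")
        else:
            out.append(ch)
    return "".join(out)
```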
It should be noted that training a CRF-based POS tagger using the whole WSJ corpus is not a trivial task and was once even deemed impractical in previous studies. For example, Wellner and Vilain (2006) abandoned maximum likelihood train9The data is available for download at http://wwwtsujii.is.s.u-tokyo.ac.jp/GENIA/ERtask/report.html 10http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/ -3.8 -3.6 -3.4 -3.2 -3 -2.8 -2.6 -2.4 -2.2 0 10 20 30 40 50 Objective function Passes OWL-QN SGD-L1 (Clipping) SGD-L1 (Cumulative) SGD-L1 (Cumulative + ED) Figure 5: NLPBA 2004 named entity recognition task: Objective. -2.8 -2.7 -2.6 -2.5 -2.4 -2.3 -2.2 -2.1 -2 -1.9 -1.8 0 10 20 30 40 50 Objective function Passes OWL-QN SGD-L1 (Clipping) SGD-L1 (Cumulative) SGD-L1 (Cumulative + ED) Figure 6: POS tagging task: Objective. ing because it was “prohibitive” (7-8 days for sections 0-18 of the WSJ corpus). For the features, we used unigrams and bigrams of neighboring words, prefixes and suffixes of the current word, and some characteristics of the word. We also normalized the current word by lowering capital letters and converting all the numerals into ‘#’, and used the normalized word as a feature. The results are shown in Figure 6 and Table 3. Again, the trend is the same. Our algorithms finished training in about 30 minutes, producing accurate models that are as compact as that produced by OWL-QN. Shen et al., (2007) report an accuracy of 97.33% on the same data set using a perceptron-based bidirectional tagging model. 5 Discussion An alternative approach to producing compact models for log-linear models is to reformulate the 483 Passes Lw/N # Features Time (sec) F-score OWL-QN 161 -2.448 30,710 2,253 71.76 SGD-L1 (Naive) 30 -2.537 1,032,962 4,528 71.20 SGD-L1 (Clipping + Lazy-Update) 30 -2.538 279,886 585 71.20 SGD-L1 (Cumulative) 30 -2.479 31,986 631 71.40 SGD-L1 (Cumulative + Exponential-Decay) 30 -2.443 25,965 631 71.63 Table 2: NLPBA 2004 Named entity recognition task. Training time and accuracy of the trained model on the test data. Passes Lw/N # Features Time (sec) Accuracy OWL-QN 124 -1.941 50,870 5,623 97.16% SGD-L1 (Naive) 30 -2.013 2,142,130 18,471 97.18% SGD-L1 (Clipping + Lazy-Update) 30 -2.013 323,199 1,680 97.18% SGD-L1 (Cumulative) 30 -1.987 62,043 1,777 97.19% SGD-L1 (Cumulative + Exponential-Decay) 30 -1.954 51,857 1,774 97.17% Table 3: POS tagging on the WSJ corpus. Training time and accuracy of the trained model on the test data. problem as a L1-constrained problem (Lee et al., 2006), where the conditional log-likelihood of the training data is maximized under a fixed constraint of the L1-norm of the weight vector. Duchi et al. (2008) describe efficient algorithms for projecting a weight vector onto the L1-ball. Although L1-regularized and L1-constrained learning algorithms are not directly comparable because the objective functions are different, it would be interesting to compare the two approaches in terms of practicality. It should be noted, however, that the efficient algorithm presented in (Duchi et al., 2008) needs to employ a red-black tree and is rather complex. In SGD learning, the need for tuning the metaparameters for learning rate scheduling can be annoying. In the case of exponential decay, the setting of α = 0.85 turned out to be a good rule of thumb in our experiments—it always produced near best results in 30 passes, but the other parameter η0 needed to be tuned. It would be very useful if those meta-parameters could be tuned in a fully automatic way. 
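For concreteness, the two learning-rate schedules of Section 3.3, whose meta-parameters are the subject of this discussion, can be sketched as follows; eta0 and alpha are the values to be tuned, and the exponential schedule is read as multiplying eta0 by a factor of alpha per pass over the data:

```python
def learning_rate(k, N, eta0, alpha, schedule="exponential"):
    """Learning-rate schedules from Section 3.3 (illustrative sketch).
    k is the update counter and N the number of training samples, so
    k / N is roughly the number of passes made so far."""
    if schedule == "simple":
        return eta0 / (1.0 + k / float(N))   # schedule of Eq. 5
    return eta0 * alpha ** (k / float(N))    # exponential decay per pass (Eq. 6)
```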
There are some sophisticated algorithms for adaptive learning rate scheduling in SGD learning (Vishwanathan et al., 2006; Huang et al., 2007). However, those algorithms use second-order information (i.e. Hessian information) and thus need access to the weights of the features that are not used in the current sample, which should slow down the weight updating process for the same reason discussed earlier. It would be interesting to investigate whether those sophisticated learning scheduling algorithms can actually result in fast training in large-scale NLP tasks. 6 Conclusion We have presented a new variant of SGD that can efficiently train L1-regularized log-linear models. The algorithm is simple and extremely easy to implement. We have conducted experiments using CRFs and three NLP tasks, and demonstrated empirically that our training algorithm can produce compact and accurate models much more quickly than a state-of-the-art quasi-Newton method for L1regularization. Acknowledgments We thank N. Okazaki, N. Yoshinaga, D. Okanohara and the anonymous reviewers for their useful comments and suggestions. The work described in this paper has been funded by the Biotechnology and Biological Sciences Research Council (BBSRC; BB/E004431/1). The research team is hosted by the JISC/BBSRC/EPSRC sponsored National Centre for Text Mining. References Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In Proceedings of ICML, pages 33–40. 484 Bob Carpenter. 2008. Lazy sparse stochastic gradient descent for regularized multinomial logistic regression. Technical report, Alias-i. Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proceedings of COLING 2004, pages 103–110. Trevor Cohn and Philip Blunsom. 2005. Semantic role labeling with tree conditional random fields. In Proceedings of CoNLL, pages 169–172. Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. 2008. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. The Journal of Machine Learning Research (JMLR), 9:1775– 1822. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8. Christian Darken and John Moody. 1990. Note on learning rate schedules for stochastic optimization. In Proceedings of NIPS, pages 832–838. Juhn Duchi and Yoram Singer. 2008. Online and batch learning using forward-looking subgradients. In NIPS Workshop: OPT 2008 Optimization for Machine Learning. Juhn Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. 2008. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of ICML, pages 272–279. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL08:HLT, pages 959–967. Jianfeng Gao, Galen Andrew, Mark Johnson, and Kristina Toutanova. 2007. A comparative study of parameter estimation methods for statistical natural language processing. In Proceedings of ACL, pages 824–831. Han-Shen Huang, Yu-Ming Chang, and Chun-Nan Hsu. 2007. Training conditional random fields by periodic step size adaptation for large-scale text mining. In Proceedings of ICDM, pages 511–516. Jun’ichi Kazama and Jun’ichi Tsujii. 2003. Evaluation and extension of maximum entropy models with inequality constraints. In Proceedings of EMNLP 2003. J.-D. Kim, T. Ohta, Y. 
Tsuruoka, Y. Tateisi, and N. Collier. 2004. Introduction to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA), pages 70–75. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282– 289. John Langford, Lihong Li, and Tong Zhang. 2009. Sparse online learning via truncated gradient. The Journal of Machine Learning Research (JMLR), 10:777–801. Su-In Lee, Honglak Lee, Pieter Abbeel, and Andrew Y. Ng. 2006. Efficient l1 regularized logistic regression. In Proceedings of AAAI-06, pages 401–408. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Jorge Nocedal. 1980. Updating quasi-newton matrices with limited storage. Mathematics of Computation, 35(151):773–782. Daisuke Okanohara, Yusuke Miyao, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2006. Improving the scalability of semi-markov conditional random fields for named entity recognition. In Proceedings of COLING/ACL, pages 465–472. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP 1996, pages 133–142. Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In Proceedings of ACL, pages 760–767. David Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In Proceedings of EMNLP, pages 145–156. James C. Spall. 2005. Introduction to Stochastic Search and Optimization. Wiley-IEEE. Christoph Tillmann and Tong Zhang. 2006. A discriminative global training algorithm for statistical MT. In Proceedings of COLING/ACL, pages 721–728. Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of ACL, pages 589– 596. S. V. N. Vishwanathan, Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of ICML, pages 969–976. Ben Wellner and Marc Vilain. 2006. Leveraging machine readable dictionaries in discriminative sequence models. In Proceedings of LREC 2006. 485
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 486–494, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A global model for joint lemmatization and part-of-speech prediction Kristina Toutanova Microsoft Research Redmond, WA 98052 [email protected] Colin Cherry Microsoft Research Redmond, WA 98052 [email protected] Abstract We present a global joint model for lemmatization and part-of-speech prediction. Using only morphological lexicons and unlabeled data, we learn a partiallysupervised part-of-speech tagger and a lemmatizer which are combined using features on a dynamically linked dependency structure of words. We evaluate our model on English, Bulgarian, Czech, and Slovene, and demonstrate substantial improvements over both a direct transduction approach to lemmatization and a pipelined approach, which predicts part-of-speech tags before lemmatization. 1 Introduction The traditional problem of morphological analysis is, given a word form, to predict the set of all of its possible morphological analyses. A morphological analysis consists of a part-of-speech tag (POS), possibly other morphological features, and a lemma (basic form) corresponding to this tag and features combination (see Table 1 for examples). We address this problem in the setting where we are given a morphological dictionary for training, and can additionally make use of un-annotated text in the language. We present a new machine learning model for this task setting. In addition to the morphological analysis task we are interested in performance on two subtasks: tag-set prediction (predicting the set of possible tags of words) and lemmatization (predicting the set of possible lemmas). The result of these subtasks is directly useful for some applications.1 If we are interested in the results of each of these two 1Tag sets are useful, for example, as a basis of sparsityreducing features for text labeling tasks; lemmatization is useful for information retrieval and machine translation from a morphologically rich to a morphologically poor language, where full analysis may not be important. subtasks in isolation, we might build independent solutions which ignore the other subtask. In this paper, we show that there are strong dependencies between the two subtasks and we can improve performance on both by sharing information between them. We present a joint model for these two subtasks: it is joint not only in that it performs both tasks simultaneously, sharing information, but also in that it reasons about multiple words jointly. It uses component tag-set and lemmatization models and combines their predictions while incorporating joint features in a loglinear model, defined on a dynamically linked dependency structure of words. The model is formalized in Section 5 and evaluated in Section 6. We report results on English, Bulgarian, Slovene, and Czech and show that joint modeling reduces the lemmatization error by up to 19%, the tag-prediction error by up to 26% and the error on the complete morphological analysis task by up to 22.6%. 2 Task formalization The main task that we would like to solve is as follows: given a lexicon L which contains all morphological analyses for a set of words {w1, . . . , wn}, learn to predict all morphological analyses for other words which are outside of L. In addition to the lexicon, we are allowed to make use of unannotated text T in the language. We will predict morphological analyses for words which occur in T. 
Note that the task is defined on word types and not on words in context. A morphological analysis of a word w consists of a (possibly structured) POS tag t, together with one or several lemmas, which are the possible basic forms of w when it has tag t. As an example, Table 1 illustrates the morphological analyses of several words taken from the CELEX lexical database of English (Baayen et al., 1995) and the Multext-East lexicon of Bulgarian (Erjavec, 2004). The Bulgarian words are transcribed in 486 Word Forms Morphological Analyses Tags Lemmas tell verb base (VB), tell VB tell told verb past tense (VBD), tell VBD,VBN tell verb past participle (VBN), tell tells verb present 3rd person sing (VBZ), tell VBZ tell telling verb present continuous (VBG), tell VBG,JJ tell adjective (JJ), telling telling izpravena adjective fem sing indef (A–FS-N), izpraven A–FS-N izpraven verb main part past sing fem pass indef (VMPS-SFP-N), izpravia VMPS-SFP-N izpravia izpraviha verb main indicative 3rd person plural (VMIA3P), izpravia VMIA3P izpravia Table 1: Examples of morphological analyses of words in English and Bulgarian. Latin characters. Here by “POS tags” we mean both simple main pos-tags such as noun or verb, and detailed tags which include grammatical features, such as VBZ for English indicating present tense third person singular verb and A–FS-N for Bulgarian indicating a feminine singular adjective in indefinite form. In this work we predict only main POS tags for the Multext-East languages, as detailed tags were less useful for lemmatization. Since the predicted elements are sets, we use precision, recall, and F-measure (F1) to evaluate performance. The two subtasks, tag-set prediction and lemmatization are also evaluated in this way. Table 1 shows the correct tag-sets and lemmas for each of the example words in separate columns. Our task setting differs from most work on lemmatization which uses either no or a complete rootlist (Wicentowski, 2002; Dreyer et al., 2008).2 We can use all forms occurring in the unlabeled text T but there are no guarantees about the coverage of the target lemmas or the number of noise words which may occur in T (see Table 2 for data statistics). Our setting is thus more realistic since it is what one would have in a real application scenario. 3 Related work In work on morphological analysis using machine learning, the task is rarely addressed in the form described above. Some exceptions are the work (Bosch and Daelemans, 1999) which presents a model for segmenting, stemming, and tagging words in Dutch, and requires the prediction of all possible analyses, and (Antal van den Bosch and Soudi, 2007) which similarly requires the prediction of all morpho-syntactically annotated segmentations of words for Arabic. As opposed to 2These settings refer to the availability of a set of word forms which are possible lemmas; in the no rootlist setting, no other word forms in the language are given in addition to the forms in the training set; in the complete rootlist setting, a set of word forms which consists of exactly all correct lemmas for the words in the test set is given. our work, these approaches do not make use of unlabeled data and make predictions for each word type in isolation. 
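Since the predicted tag-sets and lemma-sets are evaluated with set-based precision, recall, and F-measure (Section 2), a minimal sketch of that scoring may be useful. This is our own illustration in Python, not the authors' code, and we assume micro-averaging over word types, which the text does not spell out.

def set_prf(gold_sets, pred_sets):
    # Micro-averaged precision, recall and F1 over predicted vs. gold sets
    # (tag-sets or lemma-sets), one pair of sets per word type.
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# For 'told' in Table 1 the gold tag-set is {VBD, VBN}; predicting only {VBD}
# gives precision 1.0, recall 0.5, F1 about 0.67.
print(set_prf([{"VBD", "VBN"}], [{"VBD"}]))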
In machine learning work on lemmatization for highly inflective languages, it is most often assumed that a word form and a POS tag are given, and the task is to predict the set of corresponding lemma(s) (Mooney and Califf, 1995; Clark, 2002; Wicentowski, 2002; Erjavec and Dˇzeroski, 2004; Dreyer et al., 2008). In our task setting, we do not assume the availability of gold-standard POS tags. As a component model, we use a lemmatizing string transducer which is related to these approaches and draws on previous work in this and related string transduction areas. Our transducer is described in detail in Section 4.1. Another related line of work approaches the disambiguation problem directly, where the task is to predict the correct analysis of word-forms in context (in sentences), and not all possible analyses. In such work it is often assumed that the correct POS tags can be predicted with high accuracy using labeled POS-disambiguated sentences (Erjavec and Dˇzeroski, 2004; Habash and Rambow, 2005). A notable exception is the work of (Adler et al., 2008), which uses unlabeled data and a morphological analyzer to learn a semi-supervised HMM model for disambiguation in context, and also guesses analyses for unknown words using a guesser of likely POS-tags. It is most closely related to our work, but does not attempt to predict all possible analyses, and does not have to tackle a complex string transduction problem for lemmatization since segmentation is mostly sufficient for the focus language of that study (Hebrew). The idea of solving two related tasks jointly to improve performance on both has been successful for other pairs of tasks (e.g., (Andrew et al., 2004)). Doing joint inference instead of taking a pipeline approach has also been shown useful for other problems (e.g., (Finkel et al., 2006; Cohen and Smith, 2007)). 487 4 Component models We use two component models as the basis of addressing the task: one is a partially-supervised POS tagger which is trained using L and the unlabeled text T; the other is a lemmatizing transducer which is trained from L and can use T. The transducer can optionally be given input POS tags in training and testing, which can inform the lemmatization. The tagger is described in Section 4.2 and the transducer is described in Section 4.1. In a pipeline approach to combining the tagging and lemmatization components, we first predict a set of tags for each word using the tagger, and then ask the lemmatizer to predict one lemma for each of the possible tags. In a direct transduction approach to the lemmatization subtask, we train the lemmatizer without access to tags and ask it to predict a single lemma for each word in testing. Our joint model, described in Section 5, is defined in a re-ranking framework, and can choose from among k-best predictions of tag-sets and lemmas generated from the component tagger and lemmatizer models. 4.1 Morphological analyser We employ a discriminative character transducer as a component morphological analyzer. The input to the transducer is an inflected word (the source) and possibly an estimated part-of-speech; the output is the lemma of the word (the target). The transducer is similar to the one described by Jiampojamarn et al. (2008) for letter-to-phoneme conversion, but extended to allow for whole-word features on both the input and the output. The core of our engine is the dynamic programming algorithm for monotone phrasal decoding (Zens and Ney, 2004). 
The main feature of this algorithm is its capability to transduce many consecutive characters with a single operation; the same algorithm is employed to tag subsequences in semi-Markov CRFs (Sarawagi and Cohen, 2004). We employ three main categories of features: context, transition, and vocabulary (rootlist) features. The first two are described in detail by Jiampojamarn et al. (2008), while the final is novel to this work. Context features are centered around a transduction operation such as es →e, as employed in gives →give. Context features include an indicator for the operation itself, conjoined with indicators for all n-grams of source context within a fixed window of the operation. We also employ a copy feature that indicates if the operation simply copies the source character, such as e →e. Transition features are our Markov, or n-gram features on transduction operations. Vocabulary features are defined on complete target words, according to the frequency of said word in a provided unlabeled text T. We have chosen to bin frequencies; experiments on a development set suggested that two indicators are sufficient: the first fires for any word that occurred fewer than five times, while a second also fires for those words that did not occur at all. By encoding our vocabulary in a trie and adding the trie index to the target context tracked by our dynamic programming chart, we can efficiently track these frequencies during transduction. We incorporate the source part-of-speech tag by appending it to each feature, thus the context feature es →e may become es →e, VBZ. To enable communication between the various parts-ofspeech, a universal set of unannotated features also fires, regardless of the part-of-speech, acting as a back-off model of how words in general behave during stemming. Linear weights are assigned to each of the transducer’s features using an averaged perceptron for structure prediction (Collins, 2002). Note that our features are defined in terms of the operations employed during transduction, therefore to create gold-standard feature vectors, we require not only target outputs, but also derivations to produce those outputs. We employ a deterministic heuristic to create these derivations; given a goldstandard source-target pair, we construct a derivation that uses only trivial copy operations until the first character mismatch. The remainder of the transduction is performed with a single multicharacter replacement. For example, the derivation for living →live would be l →l, i →i, v →v, ing →e. For languages with morphologies affecting more than just the suffix, one can either develop a more complex heuristic, or determine the derivations using a separate aligner such as that of Ristad and Yianilos (1998). 4.2 Tag-set prediction model The tag-set model uses a training lexicon L and unlabeled text T to learn to predict sets of tags for words. It is based on the semi-supervised tagging model of (Toutanova and Johnson, 2008). It has two sub-models: one is an ambiguity class 488 or a tag-set model, which can assign probabilities for possible sets of tags of words PTSM(ts|w) and the other is a word context model, which can assign probabilities PCM(contextsw|w, ts) to all contexts of occurrence of word w in an unlabeled text T. The word-context model is Bayesian and utilizes a sparse Dirichlet prior on the distributions of tags given words. In addition, it uses information on a four word context of occurrences of w in the unlabeled text. 
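The deterministic derivation heuristic of Section 4.1 (copy characters until the first mismatch, then emit a single multi-character replacement) is simple enough to sketch. The following is an illustrative reimplementation with our own function name, not the authors' code.

def heuristic_derivation(source, target):
    # Gold derivation for transducer training: trivial copy operations until
    # the first character mismatch, then one multi-character replacement
    # covering the remainder of both strings.
    ops = []
    i = 0
    while i < min(len(source), len(target)) and source[i] == target[i]:
        ops.append((source[i], target[i]))  # copy, e.g. l -> l
        i += 1
    if i < len(source) or i < len(target):
        ops.append((source[i:], target[i:]))  # e.g. ing -> e
    return ops

print(heuristic_derivation("living", "live"))
# [('l', 'l'), ('i', 'i'), ('v', 'v'), ('ing', 'e')]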
Note that the (Toutanova and Johnson, 2008) model is a tagger that assigns tags to occurrences of words in the text, whereas we only need to predict sets of possible tags for word types, such as the set {VBD, VBN} for the word told. Their component sub-model PTSM predicts sets of tags and it is possible to use it on its own, but by also using the context model we can take into account information from the context of occurrence of words and compute probabilities of tag-sets given the observed occurrences in T. The two are combined to make a prediction for a tag-set of a test word w, given unlabeled text T, using Bayes rule: p(ts|w) ∝PTSM(ts|w)PCM(contextsw|w, ts). We use a direct re-implementation of the wordcontext model, using variational inference following (Toutanova and Johnson, 2008). For the tagset sub-model, we employ a more sophisticated approach. First, we learn a log-linear classifier instead of a Naive Bayes model, and second, we use features derived from related words appearing in T. The possible classes predicted by the classifier are as many as the observed tag-sets in L. The sparsity is relieved by adding features for individual tags t which get shared across tag-sets containing t. There are two types of features in the model: (i) word-internal features: word suffixes, capitalization, existence of hyphen, and word prefixes (such features were also used in (Toutanova and Johnson, 2008)), and (ii) features based on related words. These latter features are inspired by (Cucerzan and Yarowsky, 2000) and are defined as follows: for a word w such as telling, there is an indicator feature for every combination of two suffixes α and β, such that there is a prefix p where telling= pα and pβ exists in T. For example, if the word tells is found in T, there would be a feature for the suffixes α=ing,β=s that fires. The suffixes are defined as all character suffixes up to length three which occur with at least 100 words. b o u n c e d VBD VBN JJ VBD VBN b o u n c e r JJR NN bounce bouncer bounce … bounc bouncer boucer f bounce bounce bounced bounced b o u n c e VB NN VB bounce bounce … … … f Figure 1: A small subset of the graphical model. The tag-sets and lemmas active in the illustrated assignment are shown in bold. The extent of joint features firing for the lemma bounce is shown as a factor indicated by the blue circle and connected to the assignments of the three words. 5 A global joint model for morphological analysis The idea of this model is to jointly predict the set of possible tags and lemmas of words. In addition to modeling dependencies between the tags and lemmas of a single word, we incorporate dependencies between the predictions for multiple words. The dependencies among words are determined dynamically. Intuitively, if two words have the same lemma, their tag-sets are dependent. For example, imagine that we need to determine the tag-set and lemmas of the word bouncer. The tagset model may guess that the word is an adjective in comparative form, because of its suffix, and because its occurrences in T might not strongly indicate that it is a noun. The lemmatizer can then lemmatize the word like an adjective and come up with bounce as a lemma. If the tag-set model is fairly certain that bounce is not an adjective, but is a verb or a noun, a joint model which looks simultaneously at the tags and lemmas of bouncer and bounce will detect a problem with this assignments and will be able to correct the tagging and lemmatization error for bouncer. 
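Stepping back briefly to the related-word features of Section 4.2, the suffix-pair indicators can be sketched as below. The function and variable names are ours, and the suffix inventory in the example is a toy one; in the paper it would be precomputed as all suffixes up to length three occurring with at least 100 words in T.

def suffix_pair_features(word, vocab, suffixes):
    # Fire an indicator for every suffix pair (alpha, beta) such that
    # word = p + alpha and p + beta occurs in the unlabeled text T,
    # in the spirit of Cucerzan and Yarowsky (2000).
    feats = set()
    for alpha in suffixes:
        if not word.endswith(alpha):
            continue
        prefix = word[: len(word) - len(alpha)]
        for beta in suffixes:
            if beta != alpha and prefix + beta in vocab:
                feats.add(("suffix_pair", alpha, beta))
    return feats

# With 'tells' present in T, 'telling' gets the feature ('suffix_pair', 'ing', 's').
print(suffix_pair_features("telling", {"tells"}, {"ing", "s", "ed"}))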
The main source of information our joint model uses is information about the assignments of all words that have the same lemma l. If the tag-set model is better able to predict the tags of some of these words, the information can propagate to the other words. If some of them are lemmatized correctly, the model can be pushed to lemmatize the others correctly as well. Since the lemmas of test words are not given, the dependencies between as489 signments of words are determined dynamically by the currently chosen set of lemmas. As an example, Figure 1 shows three sample English words and their possible tag-sets and lemmas determined by the component models. It also illustrates the dependencies between the variables induced by the features of our model active for the current (incorrect) assignment. 5.1 Formal model description Given a set of test words w1, . . . wn and additional word forms occurring in unlabeled data T, we derive an extended set of words w1, . . . , wm which contains the original test words and additional related words, which can provide useful information about the test words. For example, if bouncer is a test word and bounce and bounced occur in T these two words can be added to the set of test words because they can contribute to the classification of bouncer. The algorithm for selecting related words is simple: we add any word for which the pipelined model predicts a lemma which is also predicted as one of the top k lemmas for a word from the test set. We define a joint model over tag-sets and lemmas for all words in the extended set, using features defined on a dynamically linked structure of words and their assigned analyses. It is a reranking model because the tag-sets and possible lemmas are limited to the top k options provided by the pipelined model.3 Our model is defined on a very large set of variables, each of which can take a large set of values. For example, for a test set of size about 4,000 words for Slovene an additional about 9,000 words from T were added to the extended set. Each of these words has a corresponding variable which indicates its tag-set and lemma assignment. The possible assignments range over all combinations available from the tagging and lemmatizer component models; using the top three tag-sets per word and top three lemmas per tag gives an average of around 11.2 possible assignments per word. This is because the tagsets have about 1.2 tags on average and we need to choose a lemma for each. While it is not the case that all variables are connected to each other by features, the connectivity structure can be complex. More formally, let tsj i denote possible tag-sets 3We used top three tag-sets and top three lemmas for each tag for training. for word wi, for j = 1 . . . k. Also, let li(t)j denote the top lemmas for word wi given tag t. An assignment of a tag-set and lemmas to a word wi consists of a choice of a tag-set, tsi (one of the possible k tag-sets for the word) and, for each tag t in the chosen tag-set, a choice of a lemma out of the possible lemmas for that tag and word. For brevity, we denote such joint assignment by tli. As a concrete example, in Figure 1, we can see the current assignments for three words: the assigned tag-sets are shown underlined and in bolded boxes (e.g., for bounced, the tag-set {VBD,VBN} is chosen; for both tags, the lemma bounce is assigned). Other possible tag-sets and other possible lemmas for each chosen tag are shown in greyed boxes. 
Our joint model defines a distribution over assignments to all words w1, . . . , wm. The form of the model is as follows: P(tl1, . . . , tlm) = eF (tl1,...,tlm)′θ P tl′ 1,...,tl′m eF (tl′ 1,...,tl′m)′θ Here F denotes the vector of features defined over an assignment for all words in the set and θ is a vector of parameters for the features. Next we detail the types of features used. Word-local features. The aim of such features is to look at the set of all tags assigned to a word together with all lemmas and capture coarse-grained dependencies at this level. These features introduce joint dependencies between the tags and lemmas of a word, but they are still local to the assignment of single words. One such feature is the number of distinct lemmas assigned across the different tags in the assigned tag-set. Another such feature is the above joined with the identity of the tag-set. For example, if a word’s tag-set is {VBD,VBN}, it will likely have the same lemma for both tags and the number of distinct lemmas will be one (e.g., the word bounced), whereas if it has the tags VBG, JJ the lemmas will be distinct for the two tags (e.g. telling). In this class of features are also the log-probabilities from the tag-set and lemmatizer models. Non-local features. Our non-local features look, for every lemma l, at all words which have that lemma as the lemma for at least one of their assigned tags, and derive several predicates on the joint assignment to these words. For example, using our word graph in the figure, the lemma bounce is assigned to bounced for tags VBD and VBN, to bounce for tags VB and NN, and to bouncer for tag JJR. One feature looks at the combination of tags corresponding to the differ490 ent forms of the lemma. In this case this would be [JJR,NN+VB-lem,VBD+VBN]. The feature also indicates any word which is exactly equal to the lemma with lem as shown for the NN and VB tags corresponding to bounce. Our model learns a negative weight for this feature, because the lemma of a word with tag JJR is most often a word with at least one tag equal to JJ. A variant of this feature also appends the final character of each word, like this: [JJR+r,NN+VB+e-lem,VBD+VBNd]. This variant was helpful for the Slavic languages because when using only main POS tags, the granularity of the feature is too coarse. Another feature simply counts the number of distinct words having the same lemma, encouraging reusing the same lemma for different words. An additional feature fires for every distinct lemma, in effect counting the number of assigned lemmas. 5.2 Training and inference Since the model is defined to re-rank candidates from other component models, we need two different training sets: one for training the component models, and another for training the joint model features. This is because otherwise the accuracy of the component models would be overestimated by the joint model. Therefore, we train the component models on the training lexicons LTrain and select their hyperparameters on the LDev lexicons. We then train the joint model on the LDev lexicons and evaluate it on the LTest lexicons. When applying models to the LTest set, the component models are first retrained on the union of LTrain and LDev so that all models can use the same amount of training data, without giving unfair advantage to the joint model. Such set-up is also used for other re-ranking models (Collins, 2000). 
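To make the non-local features of Section 5.1 concrete before turning to training, the following sketch (our own illustration, with an invented helper name and data layout) builds the tag-combination feature string for one lemma from a full assignment.

def lemma_tag_combination_feature(lemma, assignment):
    # Non-local feature: for a lemma, collect the tags of every word currently
    # assigned that lemma, marking with '-lem' any word identical to the lemma
    # itself, as in the [JJR,NN+VB-lem,VBD+VBN] example for 'bounce'.
    # 'assignment' maps each word to its list of (tag, lemma) pairs.
    parts = []
    for word, analyses in sorted(assignment.items()):
        tags = sorted(tag for tag, lem in analyses if lem == lemma)
        if not tags:
            continue
        part = "+".join(tags)
        if word == lemma:
            part += "-lem"
        parts.append(part)
    return "[" + ",".join(sorted(parts)) + "]"

assignment = {
    "bounced": [("VBD", "bounce"), ("VBN", "bounce")],
    "bounce":  [("VB", "bounce"), ("NN", "bounce")],
    "bouncer": [("JJR", "bounce")],
}
print(lemma_tag_combination_feature("bounce", assignment))
# -> [JJR,NN+VB-lem,VBD+VBN]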
For training the joint model, we maximize the log-likelihood of the correct assignment to the words in LDev, marginalizing over the assignments of other related words added to the graphical model. We compute the gradient approximately by computing expectations of features given the observed assignments and marginal expectations of features. For computing these expectations we use Gibbs sampling to sample complete assignments to all words in the graph.4 We 4We start the Gibbs sampler by the assignments found by the pipeline method and then use an annealing schedule to find a neighborhood of high-likelihood assignments, before taking about 10 complete samples from the graph to compute expectations. use gradient descent with a small learning rate, selected to optimize the accuracy on the LDev set. For finding a most likely assignment at test time, we use the sampling procedure, this time using a slower annealing schedule before taking a single sample to output as a guessed answer. For the Gibbs sampler, we need to sample an assignment for each word in turn, given the current assignments of all other words. Let us denote the current assignment to all words except wi as tl−i. The conditional probability of an assignment tli for word wi is given by: P(tli|tl−i) = eF (tli,tl−i)′θ P tl′ i eF (tl′ i,tl−i)′θ The summation in the denominator is over all possible assignments for word wi. To compute these quantities we need to consider only the features involving the current word. Because of the nature of the features in our model, it is possible to isolate separate connected components which do not share features for any assignment. If two words do not share lemmas for any of their possible assignments, they will be in separate components. Block sampling within a component could be used if the component is relatively small; however, for the common case where there are five or more words in a fully connected component approximate inference is necessary. 6 Experiments 6.1 Data We use datasets for four languages: English, Bulgarian, Slovene, and Czech. For each of the languages, we need a lexicon with morphological analyses L and unlabeled text. For English we derive the lexicon from CELEX (Baayen et al., 1995), and for the other languages we use the Multext-East resources (Erjavec, 2004). For English we use only open-class words (nouns, verbs, adjectives, and adverbs), and for the other languages we use words of all classes. The unlabeled data for English we use is the union of the Penn Treebank tagged WSJ data (Marcus et al., 1993) and the BLLIP corpus.5 For the rest of the languages we use only the text of George Orwell’s novel 1984, which is provided in morphologically disambiguated form as part of MultextEast (but we don’t use the annotations). Table 2 5The BLLIP corpus contains approximately 30 million words of automatically parsed WSJ data. We used these corpora as plain text, without the annotations. 491 Lang LTrain LDev LTest Text ws tl nf ws tl nf ws tl nf Eng 5.2 1.5 0.3 7.4 1.4 0.8 7.4 1.4 0.8 320 Bgr 6.9 1.2 40.8 3.8 1.1 53.6 3.8 1.1 52.8 16.3 Slv 7.5 1.2 38.3 4.2 1.2 49.1 4.2 1.2 49.8 17.8 Cz 7.9 1.1 32.8 4.5 1.1 43.2 4.5 1.1 43.0 19.1 Table 2: Data sets used in experiments. The number of word types (ws) is shown approximately in thousands. Also shown are average number of complete analyses (tl) and percent target lemmas not found in the unlabeled text (nf). details statistics about the data set sizes for different languages. 
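For concreteness, one Gibbs step of the sampler from Section 5.2 might look roughly as follows; the feature map and weight vector are assumed to be sparse dictionaries, and the function and argument names are ours rather than the authors'.

import math
import random

def sample_assignment(i, candidates, current, features, theta):
    # One Gibbs step: resample the (tag-set, lemmas) assignment of word i
    # given the current assignments of all other words, following
    # P(tl_i | tl_-i) proportional to exp(F(tl_i, tl_-i) . theta).
    scores = []
    for cand in candidates[i]:
        proposal = dict(current)
        proposal[i] = cand
        # Only features touching word i change; this sketch rescans the whole
        # assignment for clarity rather than efficiency.
        scores.append(sum(theta.get(f, 0.0) * v
                          for f, v in features(proposal).items()))
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]  # numerically stable softmax
    r = random.uniform(0.0, sum(weights))
    for cand, w in zip(candidates[i], weights):
        r -= w
        if r <= 0.0:
            return cand
    return candidates[i][-1]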
We use three different lexicons for each language: one for training (LTrain), one for development (LDev), and one for testing (LTest). The global model weights are trained on the development set as described in section 5.2. The lexicons are derived such that very frequent words are likely to be in the training lexicon and less frequent words in the dev and test lexicons, to simulate a natural process of lexicon construction. The English lexicons were constructed as follows: starting with the full CELEX dictionary and the text of the Penn Treebank corpus, take all word forms appearing in the first 2000 sentences (and are found in CELEX) to form the training lexicon, and then take all other words occurring in the corpus and split them equally between the development and test lexicons (every second word is placed in the test set, in the order of first occurrence in the corpus). For the rest of the languages, the same procedure is applied, starting with the full Multext-East lexicons and the text of the novel 1984. Note that while it is not possible for training words to be included in the other lexicons, it is possible for different forms of the same lemma to be in different lexicons. The size of the training lexicons is relatively small and we believe this is a realistic scenario for application of such models. In Table 2 we can see the number of words in each lexicon and the unlabeled corpora (by type), the average number of tag-lemma combinations per word,6 as well as the percentage of word lemmas which do not occur in the unlabeled text. For English, the large majority of target lemmas are available in T (with only 0.8% missing), whereas for the Multext-East languages around 40 to 50% of the target lemmas are not found in T; this partly explains the lower performance on these languages. 6The tags are main tags for the Multext-East languages and detailed tags for English. Language Tag Model Tag Lem T+L English none – 94.0 – full 89.9 95.3 88.9 no unlab data 80.0 94.1 78.3 Bulgarian none – 73.2 – full 87.9 79.9 75.3 no unlab data 80.2 76.3 70.4 Table 3: Development set results using different tag-set models and pipelined prediction. 6.2 Evaluation of direct and pipelined models for lemmatization As a first experiment which motivates our joint modeling approach, we present a comparison on lemmatization performance in two settings: (i) when no tags are used in training or testing by the transducer, and (ii) when correct tags are used in training and tags predicted by the tagging model are used in testing. In this section, we report performance on English and Bulgarian only. Comparable performance on the other Multext-East languages is shown in Section 6. Results are presented in Table 3. The experiments are performed using LTrain for training and LDev for testing. We evaluate the models on tagset F-measure (Tag), lemma-set F-measure(Lem) and complete analysis F-measure (T+L). We show the performance on lemmatization when tags are not predicted (Tag Model is none), and when tags are predicted by the tag-set model. We can see that on both languages lemmatization is significantly improved when a latent tag-set variable is used as a basis for prediction: the relative error reduction in Lem F-measure is 21.7% for English and 25% for Bulgarian. For Bulgarian and the other Slavic languages we predicted only main POS tags, because this resulted in better lemmatization performance. It is also interesting to evaluate the contribution of the unlabeled data T to the performance of the tag-set model. 
This can be achieved by removing the word-context sub-model of the tagger and also removing related word features. The results achieved in this setting for English and Bulgarian are shown in the rows labeled “no unlab data”. We can see that the tag-set F-measure of such models is reduced by 8 to 9 points and the lemmatization F-measure is similarly reduced. Thus a large portion of the positive impact tagging has on lemmatization is due to the ability of tagging models to exploit unlabeled data. The results of this experiment show there are strong dependencies between the tagging and 492 lemmatization subtasks, which a joint model could exploit. 6.3 Evaluation of joint models Since our joint model re-ranks candidates produced by the component tagger and lemmatizer, there is an upper bound on the achievable performance. We report these upper bounds for the four languages in Table 4, at the rows which list m-best oracle under Model. The oracle is computed using five-best tag-set predictions and three-best lemma predictions per tag. We can see that the oracle performance on tag F-measure is quite high for all languages, but the performance on lemmatization and the complete task is close to only 90 percent for the Slavic languages. As a second oracle we also report the perfect tag oracle, which selects the lemmas determined by the transducer using the correct part-of-speech tags. This shows how well we could do if we made the tagging model perfect without changing the lemmatizer. For the Slavic languages this is quite a bit lower than the m-best oracles, showing that the majority of errors of the pipelined approach cannot be fixed by simply improving the tagging model. Our global model has the potential to improve lemma assignments even given correct tags, by sharing information among multiple words. The actual achieved performance for three different models is also shown. For comparison, the lemmatization performance of the direct transduction approach which makes no use of tags is also shown. The pipelined models select onebest tag-set predictions from the tagging model, and the 1-best lemmas for each tag, like the models used in Section 6.2. The model name local FS denotes a joint log-linear model which has only word-internal features. Even with only word-internal features, performance is improved for most languages. The the highest improvement is for Slovene and represents a 7.8% relative reduction in F-measure error on the complete task. When features looking at the joint assignments of multiple words are added, the model achieves much larger improvements (models joint FS in the Table) across all languages.7 The highest overall improvement compared to the pipelined approach is again for Slovene and represents 22.6% reduction in error for the full task; the reduction is 40% 7Since the optimization is stochastic, the results are averaged over four runs. The standard deviations are between 0.02 and 0.11. 
Language Model Tag Lem T+L English tag oracle 100 98.9 98.7 English m-best oracle 97.9 99.0 97.5 English no tags – 94.3 – English pipelined 90.9 95.9 90.0 English local FS 90.8 95.9 90.0 English joint FS 91.7 96.1 91.0 Bulgarian tag oracle 100 84.3 84.3 Bulgarian m-best oracle 98.4 90.7 89.9 Bulgarian no tags – 73.2 – Bulgarian pipelined 87.9 78.5 74.6 Bulgarian local FS 88.9 79.2 75.8 Bulgarian joint FS 89.5 81.0 77.8 Slovene tag oracle 100 85.9 85.9 Slovene m-best oracle 98.7 91.2 90.5 Slovene no tags – 78.4 – Slovene pipelined 89.7 82.1 78.3 Slovene local FS 90.8 82.7 80.0 Slovene joint FS 92.4 85.5 83.2 Czech tag oracle 100 83.2 83.2 Czech m-best oracle 98.1 88.7 87.4 Czech no tags – 78.7 – Czech pipelined 92.3 80.7 77.5 Czech local FS 92.3 80.9 78.0 Czech joint FS 93.7 83.0 80.5 Table 4: Results on the test set achieved by joint and pipelined models and oracles. The numbers represent tag-set prediction F-measure (Tag), lemma-set prediction F-measure (Lem) and F-measure on predicting complete tag, lemma analysis sets (T+L). relative to the upper bound achieved by the m-best oracle. The smallest overall improvement is for English, representing a 10% error reduction overall, which is still respectable. The larger improvement for Slavic languages might be due to the fact that there are many more forms of a single lemma and joint reasoning allows us to pool information across the forms. 7 Conclusion In this paper we concentrated on the task of morphological analysis, given a lexicon and unannotated data. We showed that the tasks of tag prediction and lemmatization are strongly dependent and that by building state-of-the art models for the two subtasks and performing joint inference we can improve performance on both tasks. The main contribution of our work was that we introduced a joint model for the two subtasks which incorporates dependencies between predictions for multiple word types. We described a set of features and an approximate inference procedure for a global log-linear model capturing such dependencies, and demonstrated its effectiveness on English and three Slavic languages. Acknowledgements We would like to thank Galen Andrew and Lucy Vanderwende for useful discussion relating to this work. 493 References Meni Adler, Yoav Goldberg, and Michael Elhadad. 2008. Unsupervised lexicon-based resolution of unknown words for full morpholological analysis. In Proceedings of ACL08: HLT. Galen Andrew, Trond Grenager, and Christopher Manning. 2004. Verb sense and subcategorization: Using joint inference to improve performance on complementary tasks. In EMNLP. Erwin Marsi Antal van den Bosch and Abdelhadi Soudi. 2007. Memory-based morphological analysis and partof-speech tagging of arabic. In Abdelhadi Soudi, Antal van den Bosch, and Gunter Neumann, editors, Arabic Computational Morphology Knowledge-based and Empirical Methods. Springer. R. H. Baayen, R. Piepenbrock, and L. Gulikers. 1995. The CELEX lexical database. Antal Van Den Bosch and Walter Daelemans. 1999. Memory-based morphological analysis. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. Alexander Clark. 2002. Memory-based learning of morphology with stochastic transducers. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 513–520. Shay B. Cohen and Noah A. Smith. 2007. Joint morphological and syntactic disambiguation. In EMNLP. Michael Collins. 2000. Discriminative reranking for natural language parsing. In ICML. M. Collins. 2002. 
Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP. S. Cucerzan and D. Yarowsky. 2000. Language independent minimally supervised induction of lexical probabilities. In Proceedings of ACL 2000. Markus Dreyer, Jason R. Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1080–1089, Honolulu, October. Tomaˇz Erjavec and Saˇao Dˇzeroski. 2004. Machine learning of morphosyntactic structure: lemmatizing unknown Slovene words. Applied Artificial Intelligence, 18:17— 41. Tomaˇz Erjavec. 2004. Multext-east version 3: Multilingual morphosyntactic specifications, lexicons and corpora. In Proceedings of LREC-04. Jenny Rose Finkel, Christopher D. Manning, and Andrew Y. Ng. 2006. Solving the problem of cascading errors: Approximate bayesian inference for linguistic annotation pipelines. In EMNLP. Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambiguation in one fell swoop. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Proceedings of ACL08: HLT, pages 905–913, Columbus, Ohio, June. M. Marcus, B. Santorini, and Marcinkiewicz. 1993. Building a large annotated coprus of english: the penn treebank. Computational Linguistics, 19. Raymond J. Mooney and Mary Elaine Califf. 1995. Induction of first-order decision lists: Results on learning the past tense of english verbs. Journal of Artificial Intelligence Research, 3:1—24. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522–532. Sunita Sarawagi and William Cohen. 2004. Semimarkov conditional random fields for information extraction. In ICML. Kristina Toutanova and Mark Johnson. 2008. A bayesian LDA-based model for semi-supervised part-of-speech tagging. In nips08. Richard Wicentowski. 2002. Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework. Ph.D. thesis, Johns-Hopkins University. R. Zens and H. Ney. 2004. Improvements in phrase-based statistical machine translation. In HLT-NAACL, pages 257–264, Boston, USA, May. 494
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 495–503, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Distributional Representations for Handling Sparsity in Supervised Sequence-Labeling Fei Huang Temple University 1805 N. Broad St. Wachman Hall 324 [email protected] Alexander Yates Temple University 1805 N. Broad St. Wachman Hall 324 [email protected] Abstract Supervised sequence-labeling systems in natural language processing often suffer from data sparsity because they use word types as features in their prediction tasks. Consequently, they have difficulty estimating parameters for types which appear in the test set, but seldom (or never) appear in the training set. We demonstrate that distributional representations of word types, trained on unannotated text, can be used to improve performance on rare words. We incorporate aspects of these representations into the feature space of our sequence-labeling systems. In an experiment on a standard chunking dataset, our best technique improves a chunker from 0.76 F1 to 0.86 F1 on chunks beginning with rare words. On the same dataset, it improves our part-of-speech tagger from 74% to 80% accuracy on rare words. Furthermore, our system improves significantly over a baseline system when applied to text from a different domain, and it reduces the sample complexity of sequence labeling. 1 Introduction Data sparsity and high dimensionality are the twin curses of statistical natural language processing (NLP). In many traditional supervised NLP systems, the feature space includes dimensions for each word type in the data, or perhaps even combinations of word types. Since vocabularies can be extremely large, this leads to an explosion in the number of parameters. To make matters worse, language is Zipf-distributed, so that a large fraction of any training data set will be hapax legomena, very many word types will appear only a few times, and many word types will be left out of the training set altogether. As a consequence, for many word types supervised NLP systems have very few, or even zero, labeled examples from which to estimate parameters. The negative effects of data sparsity have been well-documented in the NLP literature. The performance of state-of-the-art, supervised NLP systems like part-of-speech (POS) taggers degrades significantly on words that do not appear in the training data, or out-of-vocabulary (OOV) words (Lafferty et al., 2001). Performance also degrades when the domain of the test set differs from the domain of the training set, in part because the test set includes more OOV words and words that appear only a few times in the training set (henceforth, rare words) (Blitzer et al., 2006; Daum´e III and Marcu, 2006; Chelba and Acero, 2004). We investigate the use of distributional representations, which model the probability distribution of a word’s context, as techniques for finding smoothed representations of word sequences. That is, we use the distributional representations to share information across unannotated examples of the same word type. We then compute features of the distributional representations, and provide them as input to our supervised sequence labelers. Our technique is particularly well-suited to handling data sparsity because it is possible to improve performance on rare words by supplementing the training data with additional unannotated text containing more examples of the rare words. 
We provide empirical evidence that shows how distributional representations improve sequencelabeling in the face of data sparsity. Specifically, we investigate empirically the effects of our smoothing techniques on two sequence-labeling tasks, POS tagging and chunking, to answer the following: 1. What is the effect of smoothing on sequencelabeling accuracy for rare word types? Our best smoothing technique improves a POS tagger by 11% on OOV words, and a chunker by an impressive 21% on OOV words. 495 2. Can smoothing improve adaptability to new domains? After training our chunker on newswire text, we apply it to biomedical texts. Remarkably, we find that the smoothed chunker achieves a higher F1 on the new domain than the baseline chunker achieves on a test set from the original newswire domain. 3. How does our smoothing technique affect sample complexity? We show that smoothing drastically reduces sample complexity: our smoothed chunker requires under 100 labeled samples to reach 85% accuracy, whereas the unsmoothed chunker requires 3500 samples to reach the same level of performance. The remainder of this paper is organized as follows. Section 2 discusses the smoothing problem for word sequences, and introduces three smoothing techniques. Section 3 presents our empirical study of the effects of smoothing on two sequencelabeling tasks. Section 4 describes related work, and Section 5 concludes and suggests items for future work. 2 Smoothing Natural Language Sequences To smooth a dataset is to find an approximation of it that retains the important patterns of the original data while hiding the noise or other complicating factors. Formally, we define the smoothing task as follows: let D = {(x, z)|x is a word sequence, z is a label sequence} be a labeled dataset of word sequences, and let M be a machine learning algorithm that will learn a function f to predict the correct labels. The smoothing task is to find a function g such that when M is applied to D′ = {(g(x), z)|(x, z) ∈D}, it produces a function f′ that is more accurate than f. For supervised sequence-labeling problems in NLP, the most important “complicating factor” that we seek to avoid through smoothing is the data sparsity associated with word-based representations. Thus, the task is to find g such that for every word x, g(x) is much less sparse, but still retains the essential features of x that are useful for predicting its label. As an example, consider the string “Researchers test reformulated gasolines on newer engines.” In a common dataset for NP chunking, the word “reformulated” never appears in the training data, but appears four times in the test set as part of the NP “reformulated gasolines.” Thus, a learning algorithm supplied with word-level features would have a difficult time determining that “reformulated” is the start of a NP. Character-level features are of little help as well, since the “-ed” suffix is more commonly associated with verb phrases. Finally, context may be of some help, but “test” is ambiguous between a noun and verb, and “gasolines” is only seen once in the training data, so there is no guarantee that context is sufficient to make a correct judgment. On the other hand, some of the other contexts in which “reformulated” appears in the test set, such as “testing of reformulated gasolines,” provide strong evidence that it can start a NP, since “of” is a highly reliable indicator that a NP is to follow. 
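A minimal sketch of the TF variant just described — per-word-type left and right context count vectors, normalized into probability distributions — is given below. The function name is ours, and the TF-IDF variant would simply reweight the counts before normalizing.

from collections import defaultdict

def context_distributions(sentences):
    # TF representation: for each word type, count the words immediately to
    # its left and right across the unannotated text, then normalize each
    # count vector into a distribution.
    left = defaultdict(lambda: defaultdict(float))
    right = defaultdict(lambda: defaultdict(float))
    for sent in sentences:
        for i, w in enumerate(sent):
            if i > 0:
                left[w][sent[i - 1]] += 1.0
            if i + 1 < len(sent):
                right[w][sent[i + 1]] += 1.0
    for dists in (left, right):
        for w, vec in dists.items():
            total = sum(vec.values())
            for v in vec:
                vec[v] /= total
    return left, right

# left['reformulated'].get('of', 0.0) is then the kind of real-valued
# context feature handed to the sequence labeler.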
This example provides the intuition for our approach to smoothing: we seek to share information about the contexts of a word across multiple instances of the word, in order to provide more information about words that are rarely or never seen in training. In particular, we seek to represent each word by a distribution over its contexts, and then provide the learning algorithm with features computed from this distribution. Importantly, we seek distributional representations that will provide features that are common in both training and test data, to avoid data sparsity. In the next three sections, we develop three techniques for smoothing text using distributional representations. 2.1 Multinomial Representation In its simplest form, the context of a word may be represented as a multinomial distribution over the terms that appear on either side of the word. If V is the vocabulary, or the set of word types, and X is a sequence of random variables over V, the left and right context of Xi = v may each be represented as a probability distribution over V: P(Xi−1|Xi = v) and P(Xi+1|X = v) respectively. We learn these distributions from unlabeled texts in two different ways. The first method computes word count vectors for the left and right contexts of each word type in the vocabulary of the training and test texts. We also use a large collection of additional text to determine the vectors. We then normalize each vector to form a probability distribution. The second technique first applies TF-IDF weighting to each vector, where the context words of each word type constitute a document, before applying normalization. This gives greater weight to words with more idiosyncratic distributions and may improve the informativeness of a distributional representation. We refer to these techniques as TF and TF-IDF. 496 To supply a sequence-labeling algorithm with information from these distributional representations, we compute real-valued features of the context distributions. In particular, for every word xi in a sequence, we provide the sequence labeler with a set of features of the left and right contexts indexed by v ∈V: F left v (xi) = P(Xi−1 = v|xi) and F right v (xi) = P(Xi+1 = v|xi). For example, the left context for “reformulated” in our example above would contain a nonzero probability for the word “of.” Using the features F(xi), a sequence labeler can learn patterns such as, if xi has a high probability of following “of,” it is a good candidate for the start of a noun phrase. These features provide smoothing by aggregating information across multiple unannotated examples of the same word. 2.2 LSA Model One drawback of the multinomial representation is that it does not handle sparsity well enough, because the multinomial distributions themselves are so high-dimensional. For example, the two phrases “red lamp” and “magenta tablecloth” share no words in common. If “magenta” is never observed in training, the fact that “tablecloth” appears in its right context is of no help in connecting it with the phrase “red lamp.” But if we can group similar context words together, putting “lamp” and “tablecloth” into a category for household items, say, then these two adjectives will share that category in their context distributions. 
Any patterns learned for the more common “red lamp” will then also apply to the less common “magenta tablecloth.” Our second distributional representation aggregates information from multiple context words by grouping together the distributions P(xi−1 = v|xi = w) and P(xi−1 = v′|xi = w) if v and v′ appear together with many of the same words w. Aggregating counts in this way smooths our representations even further, by supplying better estimates when the data is too sparse to estimate P(xi−1|xi) accurately. Latent Semantic Analysis (LSA) (Deerwester et al., 1990) is a widely-used technique for computing dimensionality-reduced representations from a bag-of-words model. We apply LSA to the set of right context vectors and the set of left context vectors separately, to find compact versions of each vector, where each dimension represents a combination of several context word types. We normalize each vector, and then calculate features as above. After experimenting with different choices for the number of dimensions to reduce our vectors to, we choose a value of 10 dimensions as the one that maximizes the performance of our supervised sequence labelers on held-out data. 2.3 Latent Variable Language Model Representation To take smoothing one step further, we present a technique that aggregates context distributions both for similar context words xi−1 = v and v′, and for similar words xi = w and w′. Latent variable language models (LVLMs) can be used to produce just such a distributional representation. We use Hidden Markov Models (HMMs) as the main example in the discussion and as the LVLMs in our experiments, but the smoothing technique can be generalized to other forms of LVLMs, such as factorial HMMs and latent variable maximum entropy models (Ghahramani and Jordan, 1997; Smith and Eisner, 2005). An HMM is a generative probabilistic model that generates each word xi in the corpus conditioned on a latent variable Yi. Each Yi in the model takes on integral values from 1 to S, and each one is generated by the latent variable for the preceding word, Yi−1. The distribution for a corpus x = (x1, . . . , xN) given a set of state vectors y = (y1, . . . , yN) is given by: P(x|y) = Y i P(xi|yi)P(yi|yi−1) Using Expectation-Maximization (Dempster et al., 1977), it is possible to estimate the distributions for P(xi|yi) and P(yi|yi−1) from unlabeled data. We use a trained HMM to determine the optimal sequence of latent states ˆyi using the wellknown Viterbi algorithm (Rabiner, 1989). The output of this process is an integer (ranging from 1 to S) for every word xi in the corpus; we include a new boolean feature for each possible value of yi in our sequence labelers. To compare our models, note that in the multinomial representation we directly model the probability that a word v appears before a word w: P(xi−1 = v|xi = w)). In our LSA model, we find latent categories of context words z, and model the probability that a category appears before the current word w: P(xi−1 = z|xi = w). The HMM finds (probabilistic) categories Y for both the current word xi and the context word xi−1, and models the probability that one category follows the 497 other: P(Yi|Yi−1). Thus the HMM is our most extreme smoothing model, as it aggregates information over the greatest number of examples: for a given consecutive pair of words xi−1, xi in the test set, it aggregates over all pairs of consecutive words x′ i−1, x′ i where x′ i−1 is similar to xi−1 and x′ i is similar to xi. 
3 Experiments We tested the following hypotheses in our experiments: 1. Smoothing can improve the performance of a supervised sequence labeling system on words that are rare or nonexistent in the training data. 2. A supervised sequence labeler achieves greater accuracy on new domains with smoothing. 3. A supervised sequence labeler has a better sample complexity with smoothing. 3.1 Experimental Setup We investigate the use of smoothing in two test systems, conditional random field (CRF) models for POS tagging and chunking. To incorporate smoothing into our models, we follow the following general procedure: first, we collect a set of unannotated text from the same domain as the test data set. Second, we train a smoothing model on the text of the training data, the test data, and the additional collection. We then automatically annotate both the training and test data with features calculated from the distributional representation. Finally, we train the CRF model on the annotated training set and apply it to the test set. We use an open source CRF software package designed by Sunita Sajarwal and William W. Cohen to implement our CRF models.1 We use a set of boolean features listed in Table 1. Our baseline CRF system for POS tagging follows the model described by Lafferty et al.(2001). We include transition features between pairs of consecutive tag variables, features between tag variables and words, and a set of orthographic features that Lafferty et al. found helpful for performance on OOV words. Our smoothed models add features computed from the distributional representations, as discussed above. Our chunker follows the system described by Sha and Pereira (2003). In addition to the transition, word-level, and orthographic features, we include features relating automatically-generated POS tags and the chunk labels. Unlike Sha and 1Available from http://sourceforge.net/projects/crf/ CRF Feature Set Transition zi=z zi=z and zi−1=z′ Word xi=w and zi=z POS ti=t and zi=z Orthography for every s ∈{-ing, -ogy, ed, -s, -ly, -ion, -tion, -ity}, suffix(xi)= s and zi=z xi is capitalized and zi = z xi has a digit and zi = z TF, TF-IDF, and LSA features for every context type v, F left v (xi) and F right v (xi) HMM features yi=y and zi = z Table 1: Features used in our CRF systems. zi variables represent labels to be predicted, ti represent tags (for the chunker), and xi represent word tokens. All features are boolean except for the TF, TF-IDF, and LSA features. Pereira, we exclude features relating consecutive pairs of words and a chunk label, or features relating consecutive tag labels and a chunk label, in order to expedite our experiments. We found that including such features does improve chunking F1 by approximately 2%, but it also significantly slows down CRF training. 3.2 Rare Word Accuracy For these experiments, we use the Wall Street Journal portion of the Penn Treebank (Marcus et al., 1993). Following the CoNLL shared task from 2000, we use sections 15-18 of the Penn Treebank for our labeled training data for the supervised sequence labeler in all experiments (Tjong et al., 2000). For the tagging experiments, we train and test using the gold standard POS tags contained in the Penn Treebank. For the chunking experiments, we train and test with POS tags that are automatically generated by a standard tagger (Brill, 1994). We tested the accuracy of our models for chunking and POS tagging on section 20 of the Penn Treebank, which corresponds to the test set from the CoNLL 2000 task. 
Our distributional representations are trained on sections 2-22 of the Penn Treebank. Because we include the text from the train and test sets in our training data for the distributional representations, we do not need to worry about smoothing them — when they are decoded on the test set, they 498 Freq: 0 1 2 0-2 all #Samples 438 508 588 1534 46661 Baseline .62 .77 .81 .74 .93 TF .76 .72 .77 .75 .92 TF-IDF .82 .75 .76 .78 .94 LSA .78 .80 .77 .78 .94 HMM .73 .81 .86 .80 .94 Table 2: POS tagging accuracy: our HMM-smoothed tagger outperforms the baseline tagger by 6% on rare words. Differences between the baseline and the HMM are statistically significant at p < 0.01 for the OOV, 0-2, and all cases using the two-tailed Chi-squared test with 1 degree of freedom. will not encounter any previously unseen words. However, to speed up training during our experiments and, in some cases, to avoid running out of memory, we replaced words appearing twice or fewer times in the data with the special symbol *UNKNOWN*. In addition, all numbers were replaced with another special symbol. For the LSA model, we had to use a more drastic cutoff to fit the singular value decomposition computation into memory: we replaced words appearing 10 times or fewer with the *UNKNOWN* symbol. We initialize our HMMs randomly. We run EM ten times and take the model with the best cross-entropy on a held-out set. After experimenting with different variations of HMM models, we settled on a model with 80 latent states as a good compromise between accuracy and efficiency. For our POS tagging experiments, we measured the accuracy of the tagger on “rare” words, or words that appear at most twice in the training data. For our chunking experiments, we focus on chunks that begin with rare words, as we found that those were the most difficult for the chunker to identify correctly. So we define “rare” chunks as those that begin with words appearing at most twice in training data. To ensure that our smoothing models have enough training data for our test set, we further narrow our focus to those words that appear rarely in the labeled training data, but appear at least ten times in sections 2-22. Tables 2 and 3 show the accuracy of our smoothed models and the baseline model on tagging and chunking, respectively. The line for “all” in both tables indicates results on the complete test set. Both our baseline tagger and chunker achieve respectable results on their respective tasks for all words, and the results were good enough for Freq: 0 1 2 0-2 all #Samples 133 199 231 563 21900 Baseline .69 .75 .81 .76 .90 TF .70 .82 .79 .77 .89 TF-IDF .77 .77 .80 .78 .90 LSA .84 .82 .83 .84 .90 HMM .90 .85 .85 .86 .93 Table 3: Chunking F1: our HMM-smoothed chunker outperforms the baseline CRF chunker by 0.21 on chunks that begin with OOV words, and 0.10 on chunks that begin with rare words. us to be satisfied that performance on rare words closely follows how a state-of-the-art supervised sequence-labeler behaves. The chunker’s accuracy is roughly in the middle of the range of results for the original CoNLL 2000 shared task (Tjong et al., 2000) . While several systems have achieved slightly higher accuracy on supervised POS tagging, they are usually trained on larger training sets. As expected, the drop-off in the baseline system’s performance from all words to rare words is impressive for both tasks. Comparing performance on all terms and OOV terms, the baseline tagger’s accuracy drops by 0.31, and the baseline chunker’s F1 drops by 0.21. 
Comparing performance on all terms and rare terms, the drop is less severe but still dramatic: 0.19 for tagging and 0.15 for chunking. Our hypothesis that smoothing would improve performance on rare terms is validated by these experiments. In fact, the more aggregation a smoothing model performs, the better it appears to be at smoothing. The HMM-smoothed system outperforms all other systems in all categories except tagging on OOV words, where TF-IDF performs best. And in most cases, the clear trend is for HMM smoothing to outperform LSA, which in turn outperforms TF and TF-IDF. HMM tagging performance on OOV terms improves by 11%, and chunking performance by 21%. Tagging performance on all of the rare terms improves by 6%, and chunking by 10%. In chunking, there is a clear trend toward larger increases in performance as words become rarer in the labeled data set, from a 0.02 improvement on words of frequency 2, to an improvement of 0.21 on OOV words. Because the test data for this experiment is drawn from the same domain (newswire) as the 499 training data, the rare terms make up a relatively small portion of the overall dataset (approximately 4% of both the tagged words and the chunks). Still, the increased performance by the HMMsmoothed model on the rare-word subset contributes in part to an increase in performance on the overall dataset of 1% for tagging and 3% for chunking. In our next experiment, we consider a common scenario where rare terms make up a much larger fraction of the test data. 3.3 Domain Adaptation For our experiment on domain adaptation, we focus on NP chunking and POS tagging, and we use the labeled training data from the CoNLL 2000 shared task as before. For NP chunking, we use 198 sentences from the biochemistry domain in the Open American National Corpus (OANC) (Reppen et al., 2005) as or our test set. We manually tagged the test set with POS tags and NP chunk boundaries. The test set contains 5330 words and a total of 1258 NP chunks. We used sections 15-18 of the Penn Treebank as our labeled training set, including the gold standard POS tags. We use our best-performing smoothing model, the HMM, and train it on sections 13 through 19 of the Penn Treebank, plus the written portion of the OANC that contains journal articles from biochemistry (40,727 sentences). We focus on chunks that begin with words appearing 0-2 times in the labeled training data, and appearing at least ten times in the HMM’s training data. Table 4 contains our results. For our POS tagging experiments, we use 561 MEDLINE sentences (9576 words) from the Penn BioIE project (PennBioIE, 2005), a test set previously used by Blitzer et al.(2006). We use the same experimental setup as Blitzer et al.: 40,000 manually tagged sentences from the Penn Treebank for our labeled training data, and all of the unlabeled text from the Penn Treebank plus their MEDLINE corpus of 71,306 sentences to train our HMM. We report on tagging accuracy for all words and OOV words in Table 5. This table also includes results for two previous systems as reported by Blitzer et al. (2006): the semi-supervised Alternating Structural Optimization (ASO) technique and the Structural Correspondence Learning (SCL) technique for domain adaptation. Note that this test set for NP chunking contains a much higher proportion of rare and OOV words: 23% of chunks begin with an OOV word, and 29% begin with a rare word, as compared with Baseline HMM Freq. 
# R P F1 R P F1 0 284 .74 .70 .72 .80 .89 .84 1 39 .85 .87 .86 .92 .88 .90 2 39 .79 .86 .83 .92 .90 .91 0-2 362 .75 .73 .74 .82 .89 .85 all 1258 .86 .87 .86 .91 .90 .91 Table 4: On biochemistry journal data from the OANC, our HMM-smoothed NP chunker outperforms the baseline CRF chunker by 0.12 (F1) on chunks that begin with OOV words, and by 0.05 (F1) on all chunks. Results in bold are statistically significantly different from the baseline results at p < 0.05 using the two-tailed Fisher’s exact test. We did not perform significance tests for F1. All Unknown Model words words Baseline 88.3 67.3 ASO 88.4 70.9 SCL 88.9 72.0 HMM 90.5 75.2 Table 5: On biomedical data from the Penn BioIE project, our HMM-smoothed tagger outperforms the SCL tagger by 3% (accuracy) on OOV words, and by 1.6% (accuracy) on all words. Differences between the smoothed tagger and the SCL tagger are significant at p < .001 for all words and for OOV words, using the Chi-squared test with 1 degree of freedom. 1% and 4%, respectively, for NP chunks in the test set from the original domain. The test set for tagging also contains a much higher proportion: 23% OOV words, as compared with 1% in the original domain. Because of the increase in the number of rare words, the baseline chunker’s overall performance drops by 4% compared with performance on WSJ data, and the baseline tagger’s overall performance drops by 5% in the new domain. The performance improvements for both the smoothed NP chunker and tagger are again impressive: there is a 12% improvement on OOV words, and a 10% overall improvement on rare words for chunking; the tagger shows an 8% improvement on OOV words compared to out baseline and a 3% improvement on OOV words compared to the SCL model. The resulting performance of the smoothed NP chunker is almost identical to its performance on the WSJ data. Through smoothing, the chunker not only improves by 5% 500 in F1 over the baseline system on all words, it in fact outperforms our baseline NP chunker on the WSJ data. 60% of this improvement comes from improved accuracy on rare words. The performance of our HMM-smoothed chunker caused us to wonder how well the chunker could work without some of its other features. We removed all tag features and all features for word types that appear fewer than 20 times in training. This chunker achieves 0.91 F1 on OANC data, and 0.93 F1 on WSJ data, outperforming the baseline system in both cases. It has only 20% as many features as the baseline chunker, greatly improving its training time. Thus our smoothing features are more valuable to the chunker than features from POS tags and features for all but the most common words. Our results point to the exciting possibility that with smoothing, we may be able to train a sequence-labeling system on a small labeled sample, and have it apply generally to other domains. Exactly what size training set we need is a question that we address next. 3.4 Sample Complexity Our complete system consists of two learned components, a supervised CRF system and an unsupervised smoothing model. We measure the sample complexity of each component separately. To measure the sample complexity of the supervised CRF, we use the same experimental setup as in the chunking experiment on WSJ text, but we vary the amount of labeled data available to the CRF. We take ten random samples of a fixed size from the labeled training set, train a chunking model on each subset, and graph the F1 on the labeled test set, averaged over the ten runs, in Figure 1. 
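This protocol amounts to a small harness; in the sketch below, the trainer and the F1 evaluator are stand-in callables rather than functions from any package used in this work.

```python
import random
import statistics

def sample_complexity_curve(train_sents, test_sents, train_fn, eval_fn,
                            sizes=(50, 100, 500, 1000, 5000), runs=10, seed=0):
    """Average test F1 over `runs` random training subsets of each size.

    `train_fn(subset) -> model` and `eval_fn(model, test_sents) -> F1` are
    stand-ins for the CRF trainer and the chunk evaluator."""
    rng = random.Random(seed)
    curve = {}
    for n in sizes:
        scores = [eval_fn(train_fn(rng.sample(train_sents, min(n, len(train_sents)))),
                          test_sents)
                  for _ in range(runs)]
        curve[n] = statistics.mean(scores)
    return curve

# Tiny demonstration with dummy callables in place of the real trainer/scorer.
def dummy_train(subset):
    return len(subset)

def dummy_eval(model, _test):
    return min(0.95, 0.5 + 0.1 * model ** 0.25)

print(sample_complexity_curve(list(range(6000)), None, dummy_train, dummy_eval))
```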
To measure the sample complexity of our HMM with respect to unlabeled text, we use the full labeled training set and vary the amount of unlabeled text available to the HMM. At minimum, we use the text available in the labeled training and test sets, and then add random subsets of the Penn Treebank, sections 2-22. For each subset size, we take ten random samples of the unlabeled text, train an HMM and then a chunking model, and graph the F1 on the labeled test set averaged over the ten runs in Figure 2. The results from our labeled sample complexity experiment indicate that sample complexity is drastically reduced by HMM smoothing. On rare chunks, the smoothed system reaches 0.78 F1 using only 87 labeled training sentences, a level that the baseline system never reaches, even with 6933 baseline (all) HMM (all) HMM (rare) 0.6 0.7 0.8 0.9 1 F1 (Chunking) Labeled Sample Complexity baseline (rare) 0.2 0.3 0.4 0.5 1 10 100 1000 10000 F1 (Chunking) Number of Labeled Sentences (log scale) Figure 1: The smoothed NP chunker requires less than 10% of the samples needed by the baseline chunker to achieve .83 F1, and the same for .88 F1. Baseline (all) HMM (all) HMM (rare) 0.80 0.85 0.90 0.95 F1 (Chunking) Unlabeled Sample Complexity Baseline (rare) 0.70 0.75 0.80 0 10000 20000 30000 40000 F1 (Chunking) Number of Unannotated Sentences Figure 2: By leveraging plentiful unannotated text, the smoothed chunker soon outperforms the baseline. labeled sentences. On the overall data set, the smoothed system reaches 0.83 F1 with 50 labeled sentences, which the baseline does not reach until it has 867 labeled sentences. With 434 labeled sentences, the smoothed system reaches 0.88 F1, which the baseline system does not reach until it has 5200 labeled samples. Our unlabeled sample complexity results show that even with access to a small amount of unlabeled text, 6000 sentences more than what appears in the training and test sets, smoothing using the HMM yields 0.78 F1 on rare chunks. However, the smoothed system requires 25,000 more sentences before it outperforms the baseline system on all chunks. No peak in performance is reached, so further improvements are possible with more unlabeled data. Thus smoothing is optimizing performance for the case where unlabeled data is plentiful and labeled data is scarce, as we would hope. 4 Related Work To our knowledge, only one previous system — the REALM system for sparse information extrac501 tion — has used HMMs as a feature representation for other applications. REALM uses an HMM trained on a large corpus to help determine whether the arguments of a candidate relation are of the appropriate type (Downey et al., 2007). We extend and generalize this smoothing technique and apply it to common NLP applications involving supervised sequence-labeling, and we provide an in-depth empirical analysis of its performance. Several researchers have previously studied methods for using unlabeled data for tagging and chunking, either alone or as a supplement to labeled data. Ando and Zhang develop a semisupervised chunker that outperforms purely supervised approaches on the CoNLL 2000 dataset (Ando and Zhang, 2005). Recent projects in semisupervised (Toutanova and Johnson, 2007) and unsupervised (Biemann et al., 2007; Smith and Eisner, 2005) tagging also show significant progress. 
Unlike these systems, our efforts are aimed at using unlabeled data to find distributional representations that work well on rare terms, making the supervised systems more applicable to other domains and decreasing their sample complexity. HMMs have been used many times for POS tagging and chunking, in supervised, semisupervised, and in unsupervised settings (Banko and Moore, 2004; Goldwater and Griffiths, 2007; Johnson, 2007; Zhou, 2004). We take a novel perspective on the use of HMMs by using them to compute features of each token in the data that represent the distribution over that token’s contexts. Our technique lets the HMM find parameters that maximize cross-entropy, and then uses labeled data to learn the best mapping from the HMM categories to the POS categories. Smoothing in NLP usually refers to the problem of smoothing n-gram models. Sophisticated smoothing techniques like modified Kneser-Ney and Katz smoothing (Chen and Goodman, 1996) smooth together the predictions of unigram, bigram, trigram, and potentially higher n-gram sequences to obtain accurate probability estimates in the face of data sparsity. Our task differs in that we are primarily concerned with the case where even the unigram model (single word) is rarely or never observed in the labeled training data. Sparsity for low-order contexts has recently spurred interest in using latent variables to represent distributions over contexts in language models. While n-gram models have traditionally dominated in language modeling, two recent efforts develop latent-variable probabilistic models that rival and even surpass n-gram models in accuracy (Blitzer et al., 2005; Mnih and Hinton, 2007). Several authors investigate neural network models that learn not just one latent state, but rather a vector of latent variables, to represent each word in a language model (Bengio et al., 2003; Emami et al., 2003; Morin and Bengio, 2005). One of the benefits of our smoothing technique is that it allows for domain adaptation, a topic that has received a great deal of attention from the NLP community recently. Unlike our technique, in most cases researchers have focused on the scenario where labeled training data is available in both the source and the target domain (e.g., (Daum´e III, 2007; Chelba and Acero, 2004; Daum´e III and Marcu, 2006)). Our technique uses unlabeled training data from the target domain, and is thus applicable more generally, including in web processing, where the domain and vocabulary is highly variable, and it is extremely difficult to obtain labeled data that is representative of the test distribution. When labeled target-domain data is available, instance weighting and similar techniques can be used in combination with our smoothing technique to improve our results further, although this has not yet been demonstrated empirically. HMM-smoothing improves on the most closely related work, the Structural Correspondence Learning technique for domain adaptation (Blitzer et al., 2006), in experiments. 5 Conclusion and Future Work Our study of smoothing techniques demonstrates that by aggregating information across many unannotated examples, it is possible to find accurate distributional representations that can provide highly informative features to supervised sequence labelers. These features help improve sequence labeling performance on rare word types, on domains that differ from the training set, and on smaller training sets. 
Further experiments are of course necessary to investigate distributional representations as smoothing techniques. One particularly promising area for further study is the combination of smoothing and instance weighting techniques for domain adaptation. Whether the current techniques are applicable to structured prediction tasks, like parsing and relation extraction, also deserves future attention. 502 References Rie Kubota Ando and Tong Zhang. 2005. A highperformance semi-supervised learning method for text chunking. In ACL. Michele Banko and Robert C. Moore. 2004. Part of speech tagging in context. In COLING. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. C. Biemann, C. Giuliano, and A. Gliozzo. 2007. Unsupervised pos tagging supporting supervised methods. Proceeding of RANLP-07. J. Blitzer, A. Globerson, and F. Pereira. 2005. Distributed latent variable models of lexical cooccurrences. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP. E. Brill. 1994. Some Advances in Rule-Based Part of Speech Tagging. In AAAI, pages 722–727, Seattle, Washington. Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy classifier: Little data can help a lot. In EMNLP. Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of the 34th annual meeting on Association for Computational Linguistics, pages 310–318, Morristown, NJ, USA. Association for Computational Linguistics. Hal Daum´e III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In ACL. S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407. Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Doug Downey, Stefan Schoenmackers, and Oren Etzioni. 2007. Sparse information extraction: Unsupervised language models to the rescue. In ACL. A. Emami, P. Xu, and F. Jelinek. 2003. Using a connectionist model in a syntactical based language model. In Proceedings of the International Conference on Spoken Language Processing, pages 372– 375. Zoubin Ghahramani and Michael I. Jordan. 1997. Factorial hidden markov models. Machine Learning, 29(2-3):245–273. Sharon Goldwater and Thomas L. Griffiths. 2007. A fully bayesian approach to unsupervised part-ofspeech tagging. In ACL. Mark Johnson. 2007. Why doesn’t EM find good HMM POS-taggers. In EMNLP. J. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. 
In Proceedings of the 24th International Conference on Machine Learning, pages 641–648, New York, NY, USA. ACM. F. Morin and Y. Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, pages 246–252. PennBioIE. 2005. Mining the bibliome project. http://bioie.ldc.upenn.edu/. Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 285. Randi Reppen, Nancy Ide, and Keith Suderman. 2005. American national corpus (ANC) second release. Linguistic Data Consortium. F. Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of Human Language Technology - NAACL. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 354–362, Ann Arbor, Michigan, June. Erik F. Tjong, Kim Sang, and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of the 4th Conference on Computational Natural Language Learning, pages 127–132. Kristina Toutanova and Mark Johnson. 2007. A bayesian LDA-based model for semi-supervised part-of-speech tagging. In NIPS. GuoDong Zhou. 2004. Discriminative hidden Markov modeling with long state dependence using a kNN ensemble. In COLING. 503
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 504–512, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Minimized Models for Unsupervised Part-of-Speech Tagging Sujith Ravi and Kevin Knight University of Southern California Information Sciences Institute Marina del Rey, California 90292 {sravi,knight}@isi.edu Abstract We describe a novel method for the task of unsupervised POS tagging with a dictionary, one that uses integer programming to explicitly search for the smallest model that explains the data, and then uses EM to set parameter values. We evaluate our method on a standard test corpus using different standard tagsets (a 45-tagset as well as a smaller 17-tagset), and show that our approach performs better than existing state-of-the-art systems in both settings. 1 Introduction In recent years, we have seen increased interest in using unsupervised methods for attacking different NLP tasks like part-of-speech (POS) tagging. The classic Expectation Maximization (EM) algorithm has been shown to perform poorly on POS tagging, when compared to other techniques, such as Bayesian methods. In this paper, we develop new methods for unsupervised part-of-speech tagging. We adopt the problem formulation of Merialdo (1994), in which we are given a raw word sequence and a dictionary of legal tags for each word type. The goal is to tag each word token so as to maximize accuracy against a gold tag sequence. Whether this is a realistic problem set-up is arguable, but an interesting collection of methods and results has accumulated around it, and these can be clearly compared with one another. We use the standard test set for this task, a 24,115-word subset of the Penn Treebank, for which a gold tag sequence is available. There are 5,878 word types in this test set. We use the standard tag dictionary, consisting of 57,388 word/tag pairs derived from the entire Penn Treebank.1 8,910 dictionary entries are relevant to the 5,878 word types in the test set. Per-token ambiguity is about 1.5 tags/token, yielding approximately 106425 possible ways to tag the data. There are 45 distinct grammatical tags. In this set-up, there are no unknown words. Figure 1 shows prior results for this problem. While the methods are quite different, they all make use of two common model elements. One is a probabilistic n-gram tag model P(ti|ti−n+1...ti−1), which we call the grammar. The other is a probabilistic word-given-tag model P(wi|ti), which we call the dictionary. The classic approach (Merialdo, 1994) is expectation-maximization (EM), where we estimate grammar and dictionary probabilities in order to maximize the probability of the observed word sequence: P(w1...wn) = X t1...tn P(t1...tn) · P(w1...wn|t1...tn) ≈ X t1...tn n Y i=1 P(ti|ti−2 ti−1) · P(wi|ti) Goldwater and Griffiths (2007) report 74.5% accuracy for EM with a 3-gram tag model, which we confirm by replication. They improve this to 83.9% by employing a fully Bayesian approach which integrates over all possible parameter values, rather than estimating a single distribution. They further improve this to 86.8% by using priors that favor sparse distributions. Smith and Eisner (2005) employ a contrastive estimation tech1As (Banko and Moore, 2004) point out, unsupervised tagging accuracy varies wildly depending on the dictionary employed. We follow others in using a fat dictionary (with 49,206 distinct word types), rather than a thin one derived only from the test set. 
504 System Tagging accuracy (%) on 24,115-word corpus 1. Random baseline (for each word, pick a random tag from the alternatives given by the word/tag dictionary) 64.6 2. EM with 2-gram tag model 81.7 3. EM with 3-gram tag model 74.5 4a. Bayesian method (Goldwater and Griffiths, 2007) 83.9 4b. Bayesian method with sparse priors (Goldwater and Griffiths, 2007) 86.8 5. CRF model trained using contrastive estimation (Smith and Eisner, 2005) 88.6 6. EM-HMM tagger provided with good initial conditions (Goldberg et al., 2008) 91.4* (*uses linguistic constraints and manual adjustments to the dictionary) Figure 1: Previous results on unsupervised POS tagging using a dictionary (Merialdo, 1994) on the full 45-tag set. All other results reported in this paper (unless specified otherwise) are on the 45-tag set as well. nique, in which they automatically generate negative examples and use CRF training. In more recent work, Toutanova and Johnson (2008) propose a Bayesian LDA-based generative model that in addition to using sparse priors, explicitly groups words into ambiguity classes. They show considerable improvements in tagging accuracy when using a coarser-grained version (with 17-tags) of the tag set from the Penn Treebank. Goldberg et al. (2008) depart from the Bayesian framework and show how EM can be used to learn good POS taggers for Hebrew and English, when provided with good initial conditions. They use language specific information (like word contexts, syntax and morphology) for learning initial P(t|w) distributions and also use linguistic knowledge to apply constraints on the tag sequences allowed by their models (e.g., the tag sequence “V V” is disallowed). Also, they make other manual adjustments to reduce noise from the word/tag dictionary (e.g., reducing the number of tags for “the” from six to just one). In contrast, we keep all the original dictionary entries derived from the Penn Treebank data for our experiments. The literature omits one other baseline, which is EM with a 2-gram tag model. Here we obtain 81.7% accuracy, which is better than the 3-gram model. It seems that EM with a 3-gram tag model runs amok with its freedom. For the rest of this paper, we will limit ourselves to a 2-gram tag model. 2 What goes wrong with EM? We analyze the tag sequence output produced by EM and try to see where EM goes wrong. The overall POS tag distribution learnt by EM is relatively uniform, as noted by Johnson (2007), and it tends to assign equal number of tokens to each tag label whereas the real tag distribution is highly skewed. The Bayesian methods overcome this effect by using priors which favor sparser distributions. But it is not easy to model such priors into EM learning. As a result, EM exploits a lot of rare tags (like FW = foreign word, or SYM = symbol) and assigns them to common word types (in, of, etc.). We can compare the tag assignments from the gold tagging and the EM tagging (Viterbi tag sequence). The table below shows tag assignments (and their counts in parentheses) for a few word types which occur frequently in the test corpus. word/tag dictionary Gold tagging EM tagging in →{IN, RP, RB, NN, FW, RBR} IN (355) IN (0) RP (3) RP (0) FW (0) FW (358) of →{IN, RP, RB} IN (567) IN (0) RP (0) RP (567) on →{IN,RP, RB} RP (5) RP (127) IN (129) IN (0) RB (0) RB (7) a →{DT, JJ, IN, LS, FW, SYM, NNP} DT (517) DT (0) SYM (0) SYM (517) We see how the rare tag labels (like FW, SYM, etc.) are abused by EM. 
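Tabulating such a comparison is a matter of counting per-word-type tag assignments in the gold and Viterbi sequences; the sketch below uses invented token and tag sequences purely for illustration.

```python
from collections import Counter, defaultdict

def compare_taggings(words, gold_tags, pred_tags):
    """Per-word-type counts of gold vs. predicted tag assignments."""
    table = defaultdict(lambda: {"gold": Counter(), "pred": Counter()})
    for w, g, p in zip(words, gold_tags, pred_tags):
        table[w]["gold"][g] += 1
        table[w]["pred"][p] += 1
    return table

# Invented token/tag sequences, standing in for the gold and Viterbi taggings.
words = ["in", "the", "fall", "in", "fact"]
gold  = ["IN", "DT", "NN",   "IN", "NN"]
pred  = ["FW", "DT", "NN",   "FW", "NN"]
for w, c in compare_taggings(words, gold, pred).items():
    print(w, "gold:", dict(c["gold"]), "EM:", dict(c["pred"]))
```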
As a result, many word tokens which occur very frequently in the corpus are incorrectly tagged with rare tags in the EM tagging output. We also look at things more globally. We investigate the Viterbi tag sequence generated by EM training and count how many distinct tag bigrams there are in that sequence. We call this the observed grammar size, and it is 915. That is, in tagging the 24,115 test tokens, EM uses 915 of the available 45 × 45 = 2025 tag bigrams.2 The advantage of the observed grammar size is that we 2We contrast observed size with the model size for the grammar, which we define as the number of P(t2|t1) entries in EM’s trained tag model that exceed 0.0001 probability. 505 L8 L0 they can fish . I fish L1 L2 L3 L4 L6 L5 L7 L9 L10 L11 START PRO AUX V N PUNC L0 they can fish . I fish L1 L2 L1 L2 L3 L4 L6 L5 L7 L9 L10 L11 START PRO AUX V N PUNC d1 PRO-they d2 AUX-can d3 V-can d4 N-fish d5 V-fish d6 PUNC-. d7 PRO-I g1 PRO-AUX g2 PRO-V g3 AUX-N g4 AUX-V g5 V-N g6 V-V g7 N-PUNC g8 V-PUNC g9 PUNC-PRO g10 PRO-N dictionary variables grammar variables Integer Program Minimize: ∑i=1…10 gi Constraints: 1. Single left-to-right path (at each node, flow in = flow out) e.g., L0 = 1 L1 = L3 + L4 2. Path consistency constraints (chosen path respects chosen dictionary & grammar) e.g., L0 ≤d1 L1 ≤g1 IP formulation training text link variables Figure 2: Integer Programming formulation for finding the smallest grammar that explains a given word sequence. Here, we show a sample word sequence and the corresponding IP network generated for that sequence. can compare it with the gold tagging’s observed grammar size, which is 760. So we can safely say that EM is learning a grammar that is too big, still abusing its freedom. 3 Small Models Bayesian sparse priors aim to create small models. We take a different tack in the paper and directly ask: What is the smallest model that explains the text? Our approach is related to minimum description length (MDL). We formulate our question precisely by asking which tag sequence (of the 106425 available) has the smallest observed grammar size. The answer is 459. That is, there exists a tag sequence that contains 459 distinct tag bigrams, and no other tag sequence contains fewer. We obtain this answer by formulating the problem in an integer programming (IP) framework. Figure 2 illustrates this with a small sample word sequence. We create a network of possible taggings, and we assign a binary variable to each link in the network. We create constraints to ensure that those link variables receiving a value of 1 form a left-to-right path through the tagging network, and that all other link variables receive a value of 0. We accomplish this by requiring the sum of the links entering each node to equal to the sum of the links leaving each node. We also create variables for every possible tag bigram and word/tag dictionary entry. We constrain link variable assignments to respect those grammar and dictionary variables. For example, we do not allow a link variable to “activate” unless the corresponding grammar variable is also “activated”. Finally, we add an objective function that minimizes the number of grammar variables that are assigned a value of 1. Figure 3 shows the IP solution for the example word sequence from Figure 2. Of course, a small grammar size does not necessarily correlate with higher tagging accuracy. For the small toy example shown in Figure 3, the correct tagging is “PRO AUX V . 
PRO V” (with 5 tag pairs), whereas the IP tries to minimize the grammar size and picks another solution instead. For solving the integer program, we use CPLEX software (a commercial IP solver package). Alternatively, there are other programs such as lp solve, which are free and publicly available for use. Once we create an integer program for the full test corpus, and pass it to CPLEX, the solver returns an 506 word sequence: they can fish . I fish Tagging Grammar Size PRO AUX N . PRO N 5 PRO AUX V . PRO N 5 PRO AUX N . PRO V 5 PRO AUX V . PRO V 5 PRO V N . PRO N 5 PRO V V . PRO N 5 PRO V N . PRO V 4 PRO V V . PRO V 4 Figure 3: Possible tagging solutions and corresponding grammar sizes for the sample word sequence from Figure 2 using the given dictionary and grammar. The IP solver finds the smallest grammar set that can explain the given word sequence. In this example, there exist two solutions that each contain only 4 tag pair entries, and IP returns one of them. objective function value of 459.3 CPLEX also returns a tag sequence via assignments to the link variables. However, there are actually 104378 tag sequences compatible with the 459-sized grammar, and our IP solver just selects one at random. We find that of all those tag sequences, the worst gives an accuracy of 50.8%, and the best gives an accuracy of 90.3%. We also note that CPLEX takes 320 seconds to return the optimal solution for the integer program corresponding to this particular test data (24,115 tokens with the 45-tag set). It might be interesting to see how the performance of the IP method (in terms of time complexity) is affected when scaling up to larger data and bigger tagsets. We leave this as part of future work. But we do note that it is possible to obtain less than optimal solutions faster by interrupting the CPLEX solver. 4 Fitting the Model Our IP formulation can find us a small model, but it does not attempt to fit the model to the data. Fortunately, we can use EM for that. We still give EM the full word/tag dictionary, but now we constrain its initial grammar model to the 459 tag bigrams identified by IP. Starting with uniform probabilities, EM finds a tagging that is 84.5% accurate, substantially better than the 81.7% originally obtained with the fully-connected grammar. So we see a benefit to our explicit small-model approach. While EM does not find the most accurate 3Note that the grammar identified by IP is not uniquely minimal. For the same word sequence, there exist other minimal grammars having the same size (459 entries). In our experiments, we choose the first solution returned by CPLEX. in on IN IN RP RP word/tag dictionary RB RB NN FW RBR observed EM dictionary FW (358) RP (127) RB (7) observed IP+EM dictionary IN (349) IN (126) RB (9) RB (8) observed gold dictionary IN (355) IN (129) RB (3) RP (5) Figure 4: Examples of tagging obtained from different systems for prepositions in and on. sequence consistent with the IP grammar (90.3%), it finds a relatively good one. The IP+EM tagging (with 84.5% accuracy) has some interesting properties. First, the dictionary we observe from the tagging is of higher quality (with fewer spurious tagging assignments) than the one we observe from the original EM tagging. Figure 4 shows some examples. We also measure the quality of the two observed grammars/dictionaries by computing their precision and recall against the grammar/dictionary we observe in the gold tagging.4 We find that precision of the observed grammar increases from 0.73 (EM) to 0.94 (IP+EM). 
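To make the grammar-minimization step behind these numbers concrete, the sketch below encodes the integer program of Section 3 for the toy sentence of Figure 2, using the open-source PuLP modelling library with its bundled CBC solver in place of CPLEX. The encoding is ours: dictionary constraints are enforced implicitly by only creating link variables for dictionary-licensed tags, rather than through separate dictionary variables.

```python
import pulp

def min_grammar_ip(sentence, tag_dict):
    """Smallest set of tag bigrams that can explain `sentence`, where
    tag_dict[word] is the set of tags the dictionary allows for that word."""
    prob = pulp.LpProblem("min_grammar", pulp.LpMinimize)
    n = len(sentence)
    # link[i, t1, t2] = 1 iff position i-1 is tagged t1 and position i is tagged t2
    link = {(i, t1, t2): pulp.LpVariable(f"L_{i}_{t1}_{t2}", cat="Binary")
            for i in range(1, n)
            for t1 in tag_dict[sentence[i - 1]]
            for t2 in tag_dict[sentence[i]]}
    # one grammar variable per tag bigram that any link could use
    g = {b: pulp.LpVariable(f"g_{b[0]}_{b[1]}", cat="Binary")
         for b in {(t1, t2) for (_, t1, t2) in link}}
    prob += pulp.lpSum(g.values())                       # minimize grammar size
    for i in range(1, n):                                # one link per boundary
        prob += pulp.lpSum(v for (j, _, _), v in link.items() if j == i) == 1
    for i in range(1, n - 1):                            # left-to-right consistency
        for t in tag_dict[sentence[i]]:
            prob += (pulp.lpSum(v for (j, _, t2), v in link.items() if j == i and t2 == t)
                     == pulp.lpSum(v for (j, t1, _), v in link.items() if j == i + 1 and t1 == t))
    for (i, t1, t2), v in link.items():                  # a link needs its bigram
        prob += v <= g[t1, t2]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return sorted(b for b, var in g.items() if var.value() > 0.5)

# Toy input mirroring Figure 2.
tag_dict = {"they": {"PRO"}, "can": {"AUX", "V"}, "fish": {"N", "V"},
            ".": {"PUNC"}, "I": {"PRO"}}
print(min_grammar_ip(["they", "can", "fish", ".", "I", "fish"], tag_dict))
```

On this toy input the solver returns a four-entry grammar, consistent with Figure 3.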
In addition to removing many bad tag bigrams from the grammar, IP minimization also removes some of the good ones, leading to lower recall (EM = 0.87, IP+EM = 0.57). In the case of the observed dictionary, using a smaller grammar model does not affect the precision (EM = 0.91, IP+EM = 0.89) or recall (EM = 0.89, IP+EM = 0.89). During EM training, the smaller grammar with fewer bad tag bigrams helps to restrict the dictionary model from making too many bad choices that EM made earlier. Here are a few examples of bad dictionary entries that get removed when we use the minimized grammar for EM training: in → FW a → SYM of → RP In → RBR During EM training, the minimized grammar 4For any observed grammar or dictionary X, Precision (X) = |{X}∩{observedgold}| |{X}| Recall (X) = |{X}∩{observedgold}| |{observedgold}| 507 Model Tagging accuracy Observed size Model size on 24,115-word corpus grammar(G), dictionary(D) grammar(G), dictionary(D) 1. EM baseline with full grammar + full dictionary 81.7 G=915, D=6295 G=935, D=6430 2. EM constrained with minimized IP-grammar + full dictionary 84.5 G=459, D=6318 G=459, D=6414 3. EM constrained with full grammar + dictionary from (2) 91.3 G=606, D=6245 G=612, D=6298 4. EM constrained with grammar from (3) + full dictionary 91.5 G=593, D=6285 G=600, D=6373 5. EM constrained with full grammar + dictionary from (4) 91.6 G=603, D=6280 G=618, D=6337 Figure 5: Percentage of word tokens tagged correctly by different models. The observed sizes and model sizes of grammar (G) and dictionary (D) produced by these models are shown in the last two columns. helps to eliminate many incorrect entries (i.e., zero out model parameters) from the dictionary, thereby yielding an improved dictionary model. So using the minimized grammar (which has higher precision) helps to improve the quality of the chosen dictionary (examples shown in Figure 4). This in turn helps improve the tagging accuracy from 81.7% to 84.5%. It is clear that the IP-constrained grammar is a better choice to run EM on than the full grammar. Note that we used a very small IP-grammar (containing only 459 tag bigrams) during EM training. In the process of minimizing the grammar size, IP ends up removing many good tag bigrams from our grammar set (as seen from the low measured recall of 0.57 for the observed grammar). Next, we proceed to recover some good tag bigrams and expand the grammar in a restricted fashion by making use of the higher-quality dictionary produced by the IP+EM method. We now run EM again on the full grammar (all possible tag bigrams) in combination with this good dictionary (containing fewer entries than the full dictionary). Unlike the original training with full grammar, where EM could choose any tag bigram, now the choice of grammar entries is constrained by the good dictionary model that we provide EM with. This allows EM to recover some of the good tag pairs, and results in a good grammardictionary combination that yields better tagging performance. With these improvements in mind, we embark on an alternating scheme to find better models and taggings. We run EM for multiple passes, and in each pass we alternately constrain either the grammar model or the dictionary model. The procedure is simple and proceeds as follows: 1. Run EM constrained to the last trained dictionary, but provided with a full grammar.5 2. Run EM constrained to the last trained grammar, but provided with a full dictionary. 3. Repeat steps 1 and 2. 
We notice significant gains in tagging performance when applying this technique. The tagging accuracy increases at each step and finally settles at a high of 91.6%, which outperforms the existing state-of-the-art systems for the 45-tag set. The system achieves a better accuracy than the 88.6% from Smith and Eisner (2005), and even surpasses the 91.4% achieved by Goldberg et al. (2008) without using any additional linguistic constraints or manual cleaning of the dictionary. Figure 5 shows the tagging performance achieved at each step. We found that it is the elimination of incorrect entries from the dictionary (and grammar) and not necessarily the initialization weights from previous EM training, that results in the tagging improvements. Initializing the last trained dictionary or grammar at each step with uniform weights also yields the same tagging improvements as shown in Figure 5. We find that the observed grammar also improves, growing from 459 entries to 603 entries, with precision increasing from 0.94 to 0.96, and recall increasing from 0.57 to 0.76. The figure also shows the model’s internal grammar and dictionary sizes. Figure 6 and 7 show how the precision/recall of the observed grammar and dictionary varies for different models from Figure 5. In the case of the observed grammar (Figure 6), precision increases 5For all experiments, EM training is allowed to run for 40 iterations or until the likelihood ratios between two subsequent iterations reaches a value of 0.99999, whichever occurs earlier. 508 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision / Recall of observed grammar Tagging Model Model 1 Model 2 Model 3 Model 4 Model 5 Precision Recall Figure 6: Comparison of observed grammars from the model tagging vs. gold tagging in terms of precision and recall measures. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Precision / Recall of observed dictionary Tagging Model Model 1 Model 2 Model 3 Model 4 Model 5 Precision Recall Figure 7: Comparison of observed dictionaries from the model tagging vs. gold tagging in terms of precision and recall measures. Model Tagging accuracy on 24,115-word corpus no-restarts with 100 restarts 1. Model 1 (EM baseline) 81.7 83.8 2. Model 2 84.5 84.5 3. Model 3 91.3 91.8 4. Model 4 91.5 91.8 5. Model 5 91.6 91.8 Figure 8: Effect of random restarts (during EM training) on tagging accuracy. at each step, whereas recall drops initially (owing to the grammar minimization) but then picks up again. The precision/recall of the observed dictionary on the other hand, is not affected by much. 5 Restarts and More Data Multiple random restarts for EM, while not often emphasized in the literature, are key in this domain. Recall that our original EM tagging with a fully-connected 2-gram tag model was 81.7% accurate. When we execute 100 random restarts and select the model with the highest data likelihood, we get 83.8% accuracy. Likewise, when we extend our alternating EM scheme to 100 random restarts at each step, we improve our tagging accuracy from 91.6% to 91.8% (Figure 8). As noted by Toutanova and Johnson (2008), there is no reason to limit the amount of unlabeled data used for training the models. Their models are trained on the entire Penn Treebank data (instead of using only the 24,115-token test data), and so are the tagging models used by Goldberg et al. (2008). But previous results from Smith and Eisner (2005) and Goldwater and Griffiths (2007) show that their models do not benefit from using more unlabeled training data. 
Because EM is efficient, we can extend our word-sequence training data from the 24,115-token set to the entire Penn Treebank (973k tokens). We run EM training again for Model 5 (the best model from Figure 5) but this time using 973k word tokens, and further increase our accuracy to 92.3%. This is our final result on the 45-tagset, and we note that it is higher than previously reported results. 6 Smaller Tagset and Incomplete Dictionaries Previously, researchers working on this task have also reported results for unsupervised tagging with a smaller tagset (Smith and Eisner, 2005; Goldwater and Griffiths, 2007; Toutanova and Johnson, 2008; Goldberg et al., 2008). Their systems were shown to obtain considerable improvements in accuracy when using a 17-tagset (a coarsergrained version of the tag labels from the Penn Treebank) instead of the 45-tagset. When tagging the same standard test corpus with the smaller 17-tagset, our method is able to achieve a substantially high accuracy of 96.8%, which is the best result reported so far on this task. The table in Figure 9 shows a comparison of different systems for which tagging accuracies have been reported previously for the 17-tagset case (Goldberg et al., 2008). The first row in the table compares tagging results when using a full dictionary (i.e., a lexicon containing entries for 49,206 word types). The InitEM-HMM system from Goldberg et al. (2008) reports an accuracy of 93.8%, followed by the LDA+AC model (Latent Dirichlet Allocation model with a strong Ambiguity Class component) from Toutanova and Johnson (2008). In comparison, the Bayesian HMM (BHMM) model from Goldwater et al. (2007) and 509 Dict IP+EM (24k) InitEM-HMM LDA+AC CE+spl BHMM Full (49206 words) 96.8 (96.8) 93.8 93.4 88.7 87.3 ≥2 (2141 words) 90.6 (90.0) 89.4 91.2 79.5 79.6 ≥3 (1249 words) 88.0 (86.1) 87.4 89.7 78.4 71 Figure 9: Comparison of different systems for English unsupervised POS tagging with 17 tags. the CE+spl model (Contrastive Estimation with a spelling model) from Smith and Eisner (2005) report lower accuracies (87.3% and 88.7%, respectively). Our system (IP+EM) which uses integer programming and EM, gets the highest accuracy (96.8%). The accuracy numbers reported for Init-HMM and LDA+AC are for models that are trained on all the available unlabeled data from the Penn Treebank. The IP+EM models used in the 17-tagset experiments reported here were not trained on the entire Penn Treebank, but instead used a smaller section containing 77,963 tokens for estimating model parameters. We also include the accuracies for our IP+EM model when using only the 24,115 token test corpus for EM estimation (shown within parenthesis in second column of the table in Figure 9). We find that our performance does not degrade when the parameter estimation is done using less data, and our model still achieves a high accuracy of 96.8%. 6.1 Incomplete Dictionaries and Unknown Words The literature also includes results reported in a different setting for the tagging problem. In some scenarios, a complete dictionary with entries for all word types may not be readily available to us and instead, we might be provided with an incomplete dictionary that contains entries for only frequent word types. In such cases, any word not appearing in the dictionary will be treated as an unknown word, and can be labeled with any of the tags from given tagset (i.e., for every unknown word, there are 17 tag possibilities). 
Some previous approaches (Toutanova and Johnson, 2008; Goldberg et al., 2008) handle unknown words explicitly using ambiguity class components conditioned on various morphological features, and this has shown to produce good tagging results, especially when dealing with incomplete dictionaries. We follow a simple approach using just one of the features used in (Toutanova and Johnson, 2008) for assigning tag possibilities to every unknown word. We first identify the top-100 suffixes (up to 3 characters) for words in the dictionary. Using the word/tag pairs from the dictionary, we train a simple probabilistic model that predicts the tag given a particular suffix (e.g., P(VBG | ing) = 0.97, P(N | ing) = 0.0001, ...). Next, for every unknown word “w”, the trained P(tag | suffix) model is used to predict the top 3 tag possibilities for “w” (using only its suffix information), and subsequently this word along with its 3 tags are added as a new entry to the lexicon. We do this for every unknown word, and eventually we have a dictionary containing entries for all the words. Once the completed lexicon (containing both correct entries for words in the lexicon and the predicted entries for unknown words) is available, we follow the same methodology from Sections 3 and 4 using integer programming to minimize the size of the grammar and then applying EM to estimate parameter values. Figure 9 shows comparative results for the 17tagset case when the dictionary is incomplete. The second and third rows in the table shows tagging accuracies for different systems when a cutoff of 2 (i.e., all word types that occur with frequency counts < 2 in the test corpus are removed) and a cutoff of 3 (i.e., all word types occurring with frequency counts < 3 in the test corpus are removed) is applied to the dictionary. This yields lexicons containing 2,141 and 1,249 words respectively, which are much smaller compared to the original 49,206 word dictionary. As the results in Figure 9 illustrate, the IP+EM method clearly does better than all the other systems except for the LDA+AC model. The LDA+AC model from Toutanova and Johnson (2008) has a strong ambiguity class component and uses more features to handle the unknown words better, and this contributes to the slightly higher performance in the incomplete dictionary cases, when compared to the IP+EM model. 7 Discussion The method proposed in this paper is simple— once an integer program is produced, there are solvers available which directly give us the solution. In addition, we do not require any complex parameter estimation techniques; we train our models using simple EM, which proves to be efficient for this task. While some previous methods 510 word type Gold tag Automatic tag # of tokens tagged incorrectly ’s POS VBZ 173 be VB VBP 67 that IN WDT 54 New NNP NNPS 33 U.S. NNP JJ 31 up RP RB 28 more RBR JJR 27 and CC IN 23 have VB VBP 20 first JJ JJS 20 to TO IN 19 out RP RB 17 there EX RB 15 stock NN JJ 15 what WP WDT 14 one CD NN 14 ’ POS : 14 as RB IN 14 all DT RB 14 that IN RB 13 Figure 10: Most frequent mistakes observed in the model tagging (using the best model, which gives 92.3% accuracy) when compared to the gold tagging. introduced for the same task have achieved big tagging improvements using additional linguistic knowledge or manual supervision, our models are not provided with any additional information. Figure 10 illustrates for the 45-tag set some of the common mistakes that our best tagging model (92.3%) makes. 
In some cases, the model actually gets a reasonable tagging but is penalized perhaps unfairly. For example, “to” is tagged as IN by our model sometimes when it occurs in the context of a preposition, whereas in the gold tagging it is always tagged as TO. The model also gets penalized for tagging the word “U.S.” as an adjective (JJ), which might be considered valid in some cases such as “the U.S. State Department”. In other cases, the model clearly produces incorrect tags (e.g., “New” gets tagged incorrectly as NNPS). Our method resembles the classic Minimum Description Length (MDL) approach for model selection (Barron et al., 1998). In MDL, there is a single objective function to (1) maximize the likelihood of observing the data, and at the same time (2) minimize the length of the model description (which depends on the model size). However, the search procedure for MDL is usually non-trivial, and for our task of unsupervised tagging, we have not found a direct objective function which we can optimize and produce good tagging results. In the past, only a few approaches utilizing MDL have been shown to work for natural language applications. These approaches employ heuristic search methods with MDL for the task of unsupervised learning of morphology of natural languages (Goldsmith, 2001; Creutz and Lagus, 2002; Creutz and Lagus, 2005). The method proposed in this paper is the first application of the MDL idea to POS tagging, and the first to use an integer programming formulation rather than heuristic search techniques. We also note that it might be possible to replicate our models in a Bayesian framework similar to that proposed in (Goldwater and Griffiths, 2007). 8 Conclusion We presented a novel method for attacking dictionary-based unsupervised part-of-speech tagging. Our method achieves a very high accuracy (92.3%) on the 45-tagset and a higher (96.8%) accuracy on a smaller 17-tagset. The method works by explicitly minimizing the grammar size using integer programming, and then using EM to estimate parameter values. The entire process is fully automated and yields better performance than any existing state-of-the-art system, even though our models were not provided with any additional linguistic knowledge (for example, explicit syntactic constraints to avoid certain tag combinations such as “V V”, etc.). However, it is easy to model some of these linguistic constraints (both at the local and global levels) directly using integer programming, and this may result in further improvements and lead to new possibilities for future research. For direct comparison to previous works, we also presented results for the case when the dictionaries are incomplete and find the performance of our system to be comparable with current best results reported for the same task. 9 Acknowledgements This research was supported by the Defense Advanced Research Projects Agency under SRI International’s prime Contract Number NBCHD040058. 511 References M. Banko and R. C. Moore. 2004. Part of speech tagging in context. In Proceedings of the International Conference on Computational Linguistics (COLING). A. Barron, J. Rissanen, and B. Yu. 1998. The minimum description length principle in coding and modeling. IEEE Transactions on Information Theory, 44(6):2743–2760. M. Creutz and K. Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL Workshop on Morphological and Phonological Learning of. M. Creutz and K. Lagus. 2005. 
Unsupervised morpheme segmentation and morphology induction from text corpora using Morfessor 1.0. Publications in Computer and Information Science, Report A81, Helsinki University of Technology, March. Y. Goldberg, M. Adler, and M. Elhadad. 2008. EM can find pretty good HMM POS-taggers (when given a good start). In Proceedings of the ACL. J. Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153–198. S. Goldwater and T. L. Griffiths. 2007. A fully Bayesian approach to unsupervised part-of-speech tagging. In Proceedings of the ACL. M. Johnson. 2007. Why doesnt EM find good HMM POS-taggers? In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). B. Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–171. N. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the ACL. K. Toutanova and M. Johnson. 2008. A Bayesian LDA-based model for semi-supervised part-ofspeech tagging. In Proceedings of the Advances in Neural Information Processing Systems (NIPS). 512
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 513–521, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP An Error-Driven Word-Character Hybrid Model for Joint Chinese Word Segmentation and POS Tagging Canasai Kruengkrai†‡ and Kiyotaka Uchimoto‡ and Jun’ichi Kazama‡ Yiou Wang‡ and Kentaro Torisawa‡ and Hitoshi Isahara†‡ †Graduate School of Engineering, Kobe University 1-1 Rokkodai-cho, Nada-ku, Kobe 657-8501 Japan ‡National Institute of Information and Communications Technology 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0289 Japan {canasai,uchimoto,kazama,wangyiou,torisawa,isahara}@nict.go.jp Abstract In this paper, we present a discriminative word-character hybrid model for joint Chinese word segmentation and POS tagging. Our word-character hybrid model offers high performance since it can handle both known and unknown words. We describe our strategies that yield good balance for learning the characteristics of known and unknown words and propose an errordriven policy that delivers such balance by acquiring examples of unknown words from particular errors in a training corpus. We describe an efficient framework for training our model based on the Margin Infused Relaxed Algorithm (MIRA), evaluate our approach on the Penn Chinese Treebank, and show that it achieves superior performance compared to the state-ofthe-art approaches reported in the literature. 1 Introduction In Chinese, word segmentation and part-of-speech (POS) tagging are indispensable steps for higherlevel NLP tasks. Word segmentation and POS tagging results are required as inputs to other NLP tasks, such as phrase chunking, dependency parsing, and machine translation. Word segmentation and POS tagging in a joint process have received much attention in recent research and have shown improvements over a pipelined fashion (Ng and Low, 2004; Nakagawa and Uchimoto, 2007; Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b). In joint word segmentation and the POS tagging process, one serious problem is caused by unknown words, which are defined as words that are not found in a training corpus or in a system’s word dictionary1. The word boundaries and the POS tags of unknown words, which are very difficult to identify, cause numerous errors. The word-character hybrid model proposed by Nakagawa and Uchimoto (Nakagawa, 2004; Nakagawa and Uchimoto, 2007) shows promising properties for solving this problem. However, it suffers from structural complexity. Nakagawa (2004) described a training method based on a word-based Markov model and a character-based maximum entropy model that can be completed in a reasonable time. However, this training method is limited by the generatively-trained Markov model in which informative features are hard to exploit. In this paper, we overcome such limitations concerning both efficiency and effectiveness. We propose a new framework for training the wordcharacter hybrid model based on the Margin Infused Relaxed Algorithm (MIRA) (Crammer, 2004; Crammer et al., 2005; McDonald, 2006). We describe k-best decoding for our hybrid model and design its loss function and the features appropriate for our task. In our word-character hybrid model, allowing the model to learn the characteristics of both known and unknown words is crucial to achieve optimal performance. Here, we describe our strategies that yield good balance for learning these two characteristics. 
We propose an errordriven policy that delivers this balance by acquiring examples of unknown words from particular errors in a training corpus. We conducted our experiments on Penn Chinese Treebank (Xia et al., 2000) and compared our approach with the best previous approaches reported in the literature. Experimental results indicate that our approach can achieve state-of-the-art performance. 1A system’s word dictionary usually consists of a word list, and each word in the list has its own POS category. In this paper, we constructed the system’s word dictionary from a training corpus. 513 Figure 1: Lattice used in word-character hybrid model. Tag Description B Beginning character in a multi-character word I Intermediate character in a multi-character word E End character in a multi-character word S Single-character word Table 1: Position-of-character (POC) tags. The paper proceeds as follows: Section 2 gives background on the word-character hybrid model, Section 3 describes our policies for correct path selection, Section 4 presents our training method based on MIRA, Section 5 shows our experimental results, Section 6 discusses related work, and Section 7 concludes the paper. 2 Background 2.1 Problem formation In joint word segmentation and the POS tagging process, the task is to predict a path of word hypotheses y = (y1, . . . , y#y) = (⟨w1, p1⟩, . . . , ⟨w#y, p#y⟩) for a given character sequence x = (c1, . . . , c#x), where w is a word, p is its POS tag, and a “#” symbol denotes the number of elements in each variable. The goal of our learning algorithm is to learn a mapping from inputs (unsegmented sentences) x ∈X to outputs (segmented paths) y ∈Y based on training samples of input-output pairs S = {(xt, yt)}T t=1. 2.2 Search space representation We represent the search space with a lattice based on the word-character hybrid model (Nakagawa and Uchimoto, 2007). In the hybrid model, given an input sentence, a lattice that consists of word-level and character-level nodes is constructed. Word-level nodes, which correspond to words found in the system’s word dictionary, have regular POS tags. Character-level nodes have special tags where position-of-character (POC) and POS tags are combined (Asahara, 2003; Nakagawa, 2004). POC tags indicate the word-internal positions of the characters, as described in Table 1. Figure 1 shows an example of a lattice for a Chinese sentence: “ ” (Chongming is China’s third largest island). Note that some nodes and state transitions are not allowed. For example, I and E nodes cannot occur at the beginning of the lattice (marked with dashed boxes), and the transitions from I to B nodes are also forbidden. These nodes and transitions are ignored during the lattice construction processing. In the training phase, since several paths (marked in bold) can correspond to the correct analysis in the annotated corpus, we need to select one correct path yt as a reference for training.2 The next section describes our strategies for dealing with this issue. With this search space representation, we can consistently handle unknown words with character-level nodes. In other words, we use word-level nodes to identify known words and character-level nodes to identify unknown words. In the testing phase, we can use a dynamic programming algorithm to search for the most likely path out of all candidate paths. 2A machine learning problem exists called structured multi-label classification that allows training from multiple correct paths. 
However, in this paper we limit our consideration to structured single-label classification, which is simple yet provides great performance. 514 3 Policies for correct path selection In this section, we describe our strategies for selecting the correct path yt in the training phase. As shown in Figure 1, the paths marked in bold can represent the correct annotation of the segmented sentence. Ideally, we need to build a wordcharacter hybrid model that effectively learns the characteristics of unknown words (with characterlevel nodes) as well as those of known words (with word-level nodes). We can directly estimate the statistics of known words from an annotated corpus where a sentence is already segmented into words and assigned POS tags. If we select the correct path yt that corresponds to the annotated sentence, it will only consist of word-level nodes that do not allow learning for unknown words. We therefore need to choose character-level nodes as correct nodes instead of word-level nodes for some words. We expect that those words could reflect unknown words in the future. Baayen and Sproat (1996) proposed that the characteristics of infrequent words in a training corpus resemble those of unknown words. Their idea has proven effective for estimating the statistics of unknown words in previous studies (Ratnaparkhi, 1996; Nagata, 1999; Nakagawa, 2004). We adopt Baayen and Sproat’s approach as the baseline policy in our word-character hybrid model. In the baseline policy, we first count the frequencies of words3 in the training corpus. We then collect infrequent words that appear less than or equal to r times.4 If these infrequent words are in the correct path, we use character-level nodes to represent them, and hence the characteristics of unknown words can be learned. For example, in Figure 1 we select the character-level nodes of the word “ ” (Chongming) as the correct nodes. As a result, the correct path yt can contain both wordlevel and character-level nodes (marked with asterisks (*)). To discover more statistics of unknown words, one might consider just increasing the threshold value r to obtain more artificial unknown words. However, our experimental results indicate that our word-character hybrid model requires an appropriate balance between known and artificial un3We consider a word and its POS tag a single entry. 4In our experiments, the optimal threshold value r is selected by evaluating the performance of joint word segmentation and POS tagging on the development set. known words to achieve optimal performance. We now describe our new approach to leverage additional examples of unknown words. Intuition suggests that even though the system can handle some unknown words, many unidentified unknown words remain that cannot be recovered by the system; we wish to learn the characteristics of such unidentified unknown words. We propose the following simple scheme: • Divide the training corpus into ten equal sets and perform 10-fold cross validation to find the errors. • For each trial, train the word-character hybrid model with the baseline policy (r = 1) using nine sets and estimate errors using the remaining validation set. • Collect unidentified unknown words from each validation set. Several types of errors are produced by the baseline model, but we only focus on those caused by unidentified unknown words, which can be easily collected in the evaluation process. As described later in Section 5.2, we measure the recall on out-of-vocabulary (OOV) words. 
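To make this collection step concrete, here is a minimal Python sketch under simplifying assumptions: the MIRA trainer and decoder of Section 4 are passed in as caller-supplied callables, and a word is counted as recovered by simple set membership rather than by exact character spans. It illustrates the procedure only; it is not the authors' implementation.

```python
from collections import Counter

def artificial_unknown_words(corpus, train_fn, decode_fn, r=1, n_folds=10):
    """Gather 'artificial unknown words' for correct-path selection.

    corpus   : list of sentences, each a list of (word, POS) pairs.
    train_fn : callable that trains a baseline hybrid model (Section 4)
               on a list of sentences -- supplied by the caller.
    decode_fn: callable mapping (model, character string) to a list of
               predicted (word, POS) pairs -- supplied by the caller.
    r        : frequency threshold of the baseline (infrequent-word) policy.
    """
    # Baseline policy: words occurring at most r times in the training data.
    freq = Counter(tok for sent in corpus for tok in sent)
    infrequent = {tok for tok, c in freq.items() if c <= r}

    # Error-driven policy: 10-fold cross validation over the training data.
    unidentified = set()
    folds = [corpus[i::n_folds] for i in range(n_folds)]
    for i in range(n_folds):
        held_out = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        model = train_fn(train)
        seen = {tok for sent in train for tok in sent}
        for sent in held_out:
            predicted = set(decode_fn(model, "".join(w for w, _ in sent)))
            for tok in sent:
                # OOV with respect to this fold's training data and not
                # recovered by the baseline model.
                if tok not in seen and tok not in predicted:
                    unidentified.add(tok)

    # Both groups are represented with character-level nodes when the
    # correct path y_t is chosen for training.
    return infrequent | unidentified
```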
Here, we define unidentified unknown words as OOV words in each validation set that cannot be recovered by the system. After ten cross validation runs, we get a list of the unidentified unknown words derived from the whole training corpus. Note that the unidentified unknown words in the cross validation are not necessary to be infrequent words, but some overlap may exist. Finally, we obtain the artificial unknown words that combine the unidentified unknown words in cross validation and infrequent words for learning unknown words. We refer to this approach as the error-driven policy. 4 Training method 4.1 Discriminative online learning Let Yt = {y1 t , . . . , yK t } be a lattice consisting of candidate paths for a given sentence xt. In the word-character hybrid model, the lattice Yt can contain more than 1000 nodes, depending on the length of the sentence xt and the number of POS tags in the corpus. Therefore, we require a learning algorithm that can efficiently handle large and complex lattice structures. Online learning is an attractive method for the hybrid model since it quickly converges 515 Algorithm 1 Generic Online Learning Algorithm Input: Training set S = {(xt, yt)}T t=1 Output: Model weight vector w 1: w(0) = 0; v = 0; i = 0 2: for iter = 1 to N do 3: for t = 1 to T do 4: w(i+1) = update w(i) according to (xt, yt) 5: v = v + w(i+1) 6: i = i + 1 7: end for 8: end for 9: w = v/(N × T) within a few iterations (McDonald, 2006). Algorithm 1 outlines the generic online learning algorithm (McDonald, 2006) used in our framework. 4.2 k-best MIRA We focus on an online learning algorithm called MIRA (Crammer, 2004), which has the desired accuracy and scalability properties. MIRA combines the advantages of margin-based and perceptron-style learning with an optimization scheme. In particular, we use a generalized version of MIRA (Crammer et al., 2005; McDonald, 2006) that can incorporate k-best decoding in the update procedure. To understand the concept of kbest MIRA, we begin with a linear score function: s(x, y; w) = ⟨w, f(x, y)⟩, (1) where w is a weight vector and f is a feature representation of an input x and an output y. Learning a mapping between an input-output pair corresponds to finding a weight vector w such that the best scoring path of a given sentence is the same as (or close to) the correct path. Given a training example (xt, yt), MIRA tries to establish a margin between the score of the correct path s(xt, yt; w) and the score of the best candidate path s(xt,ˆy; w) based on the current weight vector w that is proportional to a loss function L(yt,ˆy). In each iteration, MIRA updates the weight vector w by keeping the norm of the change in the weight vector as small as possible. With this framework, we can formulate the optimization problem as follows (McDonald, 2006): w(i+1) = argminw∥w −w(i)∥ (2) s.t. ∀ˆy ∈bestk(xt; w(i)) : s(xt, yt; w) −s(xt,ˆy; w) ≥L(yt,ˆy) , where bestk(xt; w(i)) ∈Yt represents a set of top k-best paths given the weight vector w(i). The above quadratic programming (QP) problem can be solved using Hildreth’s algorithm (Yair Censor, 1997). Replacing Eq. (2) into line 4 of Algorithm 1, we obtain k-best MIRA. The next question is how to efficiently generate bestk(xt; w(i)). In this paper, we apply a dynamic programming search (Nagata, 1994) to kbest MIRA. The algorithm has two main search steps: forward and backward. For the forward search, we use Viterbi-style decoding to find the best partial path and its score up to each node in the lattice. 
For the backward search, we use A∗style decoding to generate the top k-best paths. A complete path is found when the backward search reaches the beginning node of the lattice, and the algorithm terminates when the number of generated paths equals k. In summary, we use k-best MIRA to iteratively update w(i). The final weight vector w is the average of the weight vectors after each iteration. As reported in (Collins, 2002; McDonald et al., 2005), parameter averaging can effectively avoid overfitting. For inference, we can use Viterbi-style decoding to search for the most likely path y∗for a given sentence x where: y∗= argmax y∈Y s(x, y; w) . (3) 4.3 Loss function In conventional sequence labeling where the observation sequence (word) boundaries are fixed, one can use the 0/1 loss to measure the errors of a predicted path with respect to the correct path. However, in our model, word boundaries vary based on the considered path, resulting in a different numbers of output tokens. As a result, we cannot directly use the 0/1 loss. We instead compute the loss function through false positives (FP) and false negatives (FN). Here, FP means the number of output nodes that are not in the correct path, and FN means the number of nodes in the correct path that cannot be recognized by the system. We define the loss function by: L(yt,ˆy) = FP + FN . (4) This loss function can reflect how bad the predicted path ˆy is compared to the correct path yt. A weighted loss function based on FP and FN can be found in (Ganchev et al., 2007). 516 ID Template Condition W0 ⟨w0⟩ for word-level W1 ⟨p0⟩ nodes W2 ⟨w0, p0⟩ W3 ⟨Length(w0), p0⟩ A0 ⟨AS(w0)⟩ if w0 is a singleA1 ⟨AS(w0), p0⟩ character word A2 ⟨AB(w0)⟩ for word-level A3 ⟨AB(w0), p0⟩ nodes A4 ⟨AE(w0)⟩ A5 ⟨AE(w0), p0⟩ A6 ⟨AB(w0), AE(w0)⟩ A7 ⟨AB(w0), AE(w0), p0⟩ T0 ⟨TS(w0)⟩ if w0 is a singleT1 ⟨TS(w0), p0⟩ character word T2 ⟨TB(w0)⟩ for word-level T3 ⟨TB(w0), p0⟩ nodes T4 ⟨TE(w0)⟩ T5 ⟨TE(w0), p0⟩ T6 ⟨TB(w0), TE(w0)⟩ T7 ⟨TB(w0), TE(w0), p0⟩ C0 ⟨cj⟩, j ∈[−2, 2] × p0 for characterC1 ⟨cj, cj+1⟩, j ∈[−2, 1] × p0 level nodes C2 ⟨c−1, c1⟩× p0 C3 ⟨T(cj)⟩, j ∈[−2, 2] × p0 C4 ⟨T(cj), T(cj+1)⟩, j ∈[−2, 1] × p0 C5 ⟨T(c−1), T(c1)⟩× p0 C6 ⟨c0, T(c0)⟩× p0 Table 2: Unigram features. 4.4 Features This section discusses the structure of f(x, y). We broadly classify features into two categories: unigram and bigram features. We design our feature templates to capture various levels of information about words and POS tags. Let us introduce some notation. We write w−1 and w0 for the surface forms of words, where subscripts −1 and 0 indicate the previous and current positions, respectively. POS tags p−1 and p0 can be interpreted in the same way. We denote the characters by cj. Unigram features: Table 2 shows our unigram features. Templates W0–W3 are basic word-level unigram features, where Length(w0) denotes the length of the word w0. Using just the surface forms can overfit the training data and lead to poor predictions on the test data. To alleviate this problem, we use two generalized features of the surface forms. The first is the beginning and end characters of the surface (A0–A7). For example, ⟨AB(w0)⟩denotes the beginning character of the current word w0, and ⟨AB(w0), AE(w0)⟩denotes the beginning and end characters in the word. The second is the types of beginning and end characters of the surface (T0–T7). We define a set of general character types, as shown in Table 4. 
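Stepping back to the loss of Eq. (4) for a moment: because the correct and predicted paths may segment the sentence differently, one way to picture the computation is to identify every node by its character span and tag and count the mismatches, as in the hedged sketch below. The span-and-tag bookkeeping is a simplification introduced here for illustration; the paper does not spell it out.

```python
def mira_loss(gold_path, pred_path):
    """L(y_t, y_hat) = FP + FN of Eq. (4), with each node identified by its
    character span and its tag; a simplification used only for illustration."""
    gold, pred = set(gold_path), set(pred_path)
    fp = len(pred - gold)   # predicted nodes not on the correct path
    fn = len(gold - pred)   # correct-path nodes the prediction missed
    return fp + fn

# Toy example: the correct path keeps characters 0-1 as one word tagged NR,
# while the predicted path wrongly splits it into two single-character words.
gold = [(0, 2, "NR"), (2, 3, "VV")]
pred = [(0, 1, "S-NR"), (1, 2, "S-NR"), (2, 3, "VV")]
print(mira_loss(gold, pred))   # 3 = two false positives + one false negative
```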
Templates C0–C6 are basic character-level unID Template Condition B0 ⟨w−1, w0⟩ if w−1 and w0 B1 ⟨p−1, p0⟩ are word-level B2 ⟨w−1, p0⟩ nodes B3 ⟨p−1, w0⟩ B4 ⟨w−1, w0, p0⟩ B5 ⟨p−1, w0, p0⟩ B6 ⟨w−1, p−1, w0⟩ B7 ⟨w−1, p−1, p0⟩ B8 ⟨w−1, p−1, w0, p0⟩ B9 ⟨Length(w−1), p0⟩ TB0 ⟨TE(w−1)⟩ TB1 ⟨TE(w−1), p0⟩ TB2 ⟨TE(w−1), p−1, p0⟩ TB3 ⟨TE(w−1), TB(w0)⟩ TB4 ⟨TE(w−1), TB(w0), p0⟩ TB5 ⟨TE(w−1), p−1, TB(w0)⟩ TB6 ⟨TE(w−1), p−1, TB(w0), p0⟩ CB0 ⟨p−1, p0⟩ otherwise Table 3: Bigram features. Character type Description Space Space Numeral Arabic and Chinese numerals Symbol Symbols Alphabet Alphabets Chinese Chinese characters Other Others Table 4: Character types. igram features taken from (Nakagawa, 2004). These templates operate over a window of ±2 characters. The features include characters (C0), pairs of characters (C1–C2), character types (C3), and pairs of character types (C4–C5). In addition, we add pairs of characters and character types (C6). Bigram features: Table 3 shows our bigram features. Templates B0-B9 are basic wordlevel bigram features. These features aim to capture all the possible combinations of word and POS bigrams. Templates TB0-TB6 are the types of characters for bigrams. For example, ⟨TE(w−1), TB(w0)⟩captures the change of character types from the end character in the previous word to the beginning character in the current word. Note that if one of the adjacent nodes is a character-level node, we use the template CB0 that represents POS bigrams. In our preliminary experiments, we found that if we add more features to non-word-level bigrams, the number of features grows rapidly due to the dense connections between non-word-level nodes. However, these features only slightly improve performance over using simple POS bigrams. 517 (a) Experiments on small training corpus Data set CTB chap. IDs # of sent. # of words Training 1-270 3,046 75,169 Development 301-325 350 6,821 Test 271-300 348 8,008 # of POS tags 32 OOV (word) 0.0987 (790/8,008) OOV (word & POS) 0.1140 (913/8,008) (b) Experiments on large training corpus Data set CTB chap. IDs # of sent. # of words Training 1-270, 18,089 493,939 400-931, 1001-1151 Development 301-325 350 6,821 Test 271-300 348 8,008 # of POS tags 35 OOV (word) 0.0347 (278/8,008) OOV (word & POS) 0.0420 (336/8,008) Table 5: Training, development, and test data statistics on CTB 5.0 used in our experiments. 5 Experiments 5.1 Data sets Previous studies on joint Chinese word segmentation and POS tagging have used Penn Chinese Treebank (CTB) (Xia et al., 2000) in experiments. However, versions of CTB and experimental settings vary across different studies. In this paper, we used CTB 5.0 (LDC2005T01) as our main corpus, defined the training, development and test sets according to (Jiang et al., 2008a; Jiang et al., 2008b), and designed our experiments to explore the impact of the training corpus size on our approach. Table 5 provides the statistics of our experimental settings on the small and large training data. The out-of-vocabulary (OOV) is defined as tokens in the test set that are not in the training set (Sproat and Emerson, 2003). Note that the development set was only used for evaluating the trained model to obtain the optimal values of tunable parameters. 5.2 Evaluation We evaluated both word segmentation (Seg) and joint word segmentation and POS tagging (Seg & Tag). We used recall (R), precision (P), and F1 as evaluation metrics. Following (Sproat and Emerson, 2003), we also measured the recall on OOV (ROOV) tokens and in-vocabulary (RIV) tokens. 
These performance measures can be calculated as follows: Recall (R) = # of correct tokens # of tokens in test data Precision (P) = # of correct tokens # of tokens in system output F1 = 2 · R · P R + P ROOV = # of correct OOV tokens # of OOV tokens in test data RIV = # of correct IV tokens # of IV tokens in test data For Seg, a token is considered to be a correct one if the word boundary is correctly identified. For Seg & Tag, both the word boundary and its POS tag have to be correctly identified to be counted as a correct token. 5.3 Parameter estimation Our model has three tunable parameters: the number of training iterations N; the number of top k-best paths; and the threshold r for infrequent words. Since we were interested in finding an optimal combination of word-level and characterlevel nodes for training, we focused on tuning r. We fixed N = 10 and k = 5 for all experiments. For the baseline policy, we varied r in the range of [1, 5] and found that setting r = 3 yielded the best performance on the development set for both the small and large training corpus experiments. For the error-driven policy, we collected unidentified unknown words using 10-fold cross validation on the training set, as previously described in Section 3. 5.4 Impact of policies for correct path selection Table 6 shows the results of our word-character hybrid model using the error-driven and baseline policies. The third and fourth columns indicate the numbers of known and artificial unknown words in the training phase. The total number of words is the same, but the different policies yield different balances between the known and artificial unknown words for learning the hybrid model. Optimal balances were selected using the development set. The error-driven policy provides additional artificial unknown words in the training set. The error-driven policy can improve ROOV as well as maintain good RIV, resulting in overall F1 improvements. 518 (a) Experiments on small training corpus # of words in training (75,169) Eval type Policy kwn. art. unk. R P F1 ROOV RIV Seg error-driven 63,997 11,172 0.9587 0.9509 0.9548 0.7557 0.9809 baseline 64,999 10,170 0.9572 0.9489 0.9530 0.7304 0.9820 Seg & Tag error-driven 63,997 11,172 0.8929 0.8857 0.8892 0.5444 0.9377 baseline 64,999 10,170 0.8897 0.8820 0.8859 0.5246 0.9367 (b) Experiments on large training corpus # of words in training (493,939) Eval Type Policy kwn. art. unk. R P F1 ROOV RIV Seg error-driven 442,423 51,516 0.9829 0.9746 0.9787 0.7698 0.9906 baseline 449,679 44,260 0.9821 0.9736 0.9779 0.7590 0.9902 Seg & Tag error-driven 442,423 51,516 0.9407 0.9328 0.9367 0.5982 0.9557 baseline 449,679 44,260 0.9401 0.9319 0.9360 0.5952 0.9552 Table 6: Results of our word-character hybrid model using error-driven and baseline policies. Method Seg Seg & Tag Ours (error-driven) 0.9787 0.9367 Ours (baseline) 0.9779 0.9360 Jiang08a 0.9785 0.9341 Jiang08b 0.9774 0.9337 N&U07 0.9783 0.9332 Table 7: Comparison of F1 results with previous studies on CTB 5.0. Seg Seg & Tag N&U07 Z&C08 Ours N&U07 Z&C08 Ours Trial (base.) (base.) 1 0.9701 0.9721 0.9732 0.9262 0.9346 0.9358 2 0.9738 0.9762 0.9752 0.9318 0.9385 0.9380 3 0.9571 0.9594 0.9578 0.9023 0.9086 0.9067 4 0.9629 0.9592 0.9655 0.9132 0.9160 0.9223 5 0.9597 0.9606 0.9617 0.9132 0.9172 0.9187 6 0.9473 0.9456 0.9460 0.8823 0.8883 0.8885 7 0.9528 0.9500 0.9562 0.9003 0.9051 0.9076 8 0.9519 0.9512 0.9528 0.9002 0.9030 0.9062 9 0.9566 0.9479 0.9575 0.8996 0.9033 0.9052 10 0.9631 0.9645 0.9659 0.9154 0.9196 0.9225 Avg. 
0.9595 0.9590 0.9611 0.9085 0.9134 0.9152 Table 8: Comparison of F1 results of our baseline model with Nakagawa and Uchimoto (2007) and Zhang and Clark (2008) on CTB 3.0. Method Seg Seg & Tag Ours (baseline) 0.9611 0.9152 Z&C08 0.9590 0.9134 N&U07 0.9595 0.9085 N&L04 0.9520 Table 9: Comparison of averaged F1 results (by 10-fold cross validation) with previous studies on CTB 3.0. 5.5 Comparison with best prior approaches In this section, we attempt to make meaningful comparison with the best prior approaches reported in the literature. Although most previous studies used CTB, their versions of CTB and experimental settings are different, which complicates comparison. Ng and Low (2004) (N&L04) used CTB 3.0. However, they just showed POS tagging results on a per character basis, not on a per word basis. Zhang and Clark (2008) (Z&C08) generated CTB 3.0 from CTB 4.0. Jiang et al. (2008a; 2008b) (Jiang08a, Jiang08b) used CTB 5.0. Shi and Wang (2007) used CTB that was distributed in the SIGHAN Bakeoff. Besides CTB, they also used HowNet (Dong and Dong, 2006) to obtain semantic class features. Zhang and Clark (2008) indicated that their results cannot directly compare to the results of Shi and Wang (2007) due to different experimental settings. We decided to follow the experimental settings of Jiang et al. (2008a; 2008b) on CTB 5.0 and Zhang and Clark (2008) on CTB 4.0 since they reported the best performances on joint word segmentation and POS tagging using the training materials only derived from the corpora. The performance scores of previous studies are directly taken from their papers. We also conducted experiments using the system implemented by Nakagawa and Uchimoto (2007) (N&U07) for comparison. Our experiment on the large training corpus is identical to that of Jiang et al. (Jiang et al., 2008a; Jiang et al., 2008b). Table 7 compares the F1 results with previous studies on CTB 5.0. The result of our error-driven model is superior to previous reported results for both Seg and Seg & Tag, and the result of our baseline model compares favorably to the others. Following Zhang and Clark (2008), we first generated CTB 3.0 from CTB 4.0 using sentence IDs 1–10364. We then divided CTB 3.0 into ten equal sets and conducted 10-fold cross vali519 dation. Unfortunately, Zhang and Clark’s experimental setting did not allow us to use our errordriven policy since performing 10-fold cross validation again on each main cross validation trial is computationally too expensive. Therefore, we used our baseline policy in this setting and fixed r = 3 for all cross validation runs. Table 8 compares the F1 results of our baseline model with Nakagawa and Uchimoto (2007) and Zhang and Clark (2008) on CTB 3.0. Table 9 shows a summary of averaged F1 results on CTB 3.0. Our baseline model outperforms all prior approaches for both Seg and Seg & Tag, and we hope that our error-driven model can further improve performance. 6 Related work In this section, we discuss related approaches based on several aspects of learning algorithms and search space representation methods. Maximum entropy models are widely used for word segmentation and POS tagging tasks (Uchimoto et al., 2001; Ng and Low, 2004; Nakagawa, 2004; Nakagawa and Uchimoto, 2007) since they only need moderate training times while they provide reasonable performance. 
Conditional random fields (CRFs) (Lafferty et al., 2001) further improve the performance (Kudo et al., 2004; Shi and Wang, 2007) by performing whole-sequence normalization to avoid label-bias and length-bias problems. However, CRF-based algorithms typically require longer training times, and we observed an infeasible convergence time for our hybrid model. Online learning has recently gained popularity for many NLP tasks since it performs comparably or better than batch learning using shorter training times (McDonald, 2006). For example, a perceptron algorithm is used for joint Chinese word segmentation and POS tagging (Zhang and Clark, 2008; Jiang et al., 2008a; Jiang et al., 2008b). Another potential algorithm is MIRA, which integrates the notion of the large-margin classifier (Crammer, 2004). In this paper, we first introduce MIRA to joint word segmentation and POS tagging and show very encouraging results. With regard to error-driven learning, Brill (1995) proposed a transformation-based approach that acquires a set of error-correcting rules by comparing the outputs of an initial tagger with the correct annotations on a training corpus. Our approach does not learn the error-correcting rules. We only aim to capture the characteristics of unknown words and augment their representatives. As for search space representation, Ng and Low (2004) found that for Chinese, the characterbased model yields better results than the wordbased model. Nakagawa and Uchimoto (2007) provided empirical evidence that the characterbased model is not always better than the wordbased model. They proposed a hybrid approach that exploits both the word-based and characterbased models. Our approach overcomes the limitation of the original hybrid model by a discriminative online learning algorithm for training. 7 Conclusion In this paper, we presented a discriminative wordcharacter hybrid model for joint Chinese word segmentation and POS tagging. Our approach has two important advantages. The first is robust search space representation based on a hybrid model in which word-level and characterlevel nodes are used to identify known and unknown words, respectively. We introduced a simple scheme based on the error-driven concept to effectively learn the characteristics of known and unknown words from the training corpus. The second is a discriminative online learning algorithm based on MIRA that enables us to incorporate arbitrary features to our hybrid model. Based on extensive comparisons, we showed that our approach is superior to the existing approaches reported in the literature. In future work, we plan to apply our framework to other Asian languages, including Thai and Japanese. Acknowledgments We would like to thank Tetsuji Nakagawa for his helpful suggestions about the word-character hybrid model, Chen Wenliang for his technical assistance with the Chinese processing, and the anonymous reviewers for their insightful comments. References Masayuki Asahara. 2003. Corpus-based Japanese morphological analysis. Nara Institute of Science and Technology, Doctor’s Thesis. Harald Baayen and Richard Sproat. 1996. Estimating lexical priors for low-frequency morphologically ambiguous forms. Computational Linguistics, 22(2):155–166. 520 Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565. Michael Collins. 2002. 
Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8. Koby Crammer, Ryan McDonald, and Fernando Pereira. 2005. Scalable large-margin online learning for structured classification. In NIPS Workshop on Learning With Structured Outputs. Koby Crammer. 2004. Online Learning of Complex Categorial Problems. Hebrew Univeristy of Jerusalem, PhD Thesis. Zhendong Dong and Qiang Dong. 2006. Hownet and the Computation of Meaning. World Scientific. Kuzman Ganchev, Koby Crammer, Fernando Pereira, Gideon Mann, Kedar Bellare, Andrew McCallum, Steven Carroll, Yang Jin, and Peter White. 2007. Penn/umass/chop biocreative ii systems. In Proceedings of the Second BioCreative Challenge Evaluation Workshop. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L¨u. 2008a. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of ACL. Wenbin Jiang, Haitao Mi, and Qun Liu. 2008b. Word lattice reranking for chinese word segmentation and part-of-speech tagging. In Proceedings of COLING. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to japanese morphological analysis. In Proceedings of EMNLP, pages 230–237. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282– 289. Ryan McDonald, Femando Pereira, Kiril Ribarow, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of HLT/EMNLP, pages 523–530. Ryan McDonald. 2006. Discriminative Training and Spanning Tree Algorithms for Dependency Parsing. University of Pennsylvania, PhD Thesis. Masaki Nagata. 1994. A stochastic japanese morphological analyzer using a forward-DP backwardA* n-best search algorithm. In Proceedings of the 15th International Conference on Computational Linguistics, pages 201–207. Masaki Nagata. 1999. A part of speech estimation method for japanese unknown words using a statistical model of morphology and context. In Proceedings of ACL, pages 277–284. Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and pos tagging. In Proceedings of ACL Demo and Poster Sessions. Tetsuji Nakagawa. 2004. Chinese and japanese word segmentation using word-level and character-level information. In Proceedings of COLING, pages 466–472. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of EMNLP, pages 277–284. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP, pages 133–142. Yanxin Shi and Mengqiu Wang. 2007. A dual-layer crfs based joint decoding method for cascaded segmentation and labeling tasks. In Proceedings of IJCAI. Richard Sproat and Thomas Emerson. 2003. The first international chinese word segmentation bakeoff. In Proceedings of the 2nd SIGHAN Workshop on Chinese Language Processing, pages 133–143. Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isahara. 2001. The unknown word problem: a morphological analysis of japanese using maximum entropy aided by a dictionary. In Proceedings of EMNLP, pages 91–99. Fei Xia, Martha Palmer, Nianwen Xue, Mary Ellen Okurowski, John Kovarik, Fu dong Chiou, and Shizhe Huang. 2000. Developing guidelines and ensuring consistency for chinese text annotation. In Proceedings of LREC. Stavros A. Zenios Yair Censor. 
1997. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and pos tagging on a single perceptron. In Proceedings of ACL. 521
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 522–530, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Automatic Adaptation of Annotation Standards: Chinese Word Segmentation and POS Tagging – A Case Study Wenbin Jiang † Liang Huang ‡ Qun Liu † †Key Lab. of Intelligent Information Processing ‡Google Research Institute of Computing Technology 1350 Charleston Rd. Chinese Academy of Sciences Mountain View, CA 94043, USA P.O. Box 2704, Beijing 100190, China [email protected] {jiangwenbin, liuqun}@ict.ac.cn [email protected] Abstract Manually annotated corpora are valuable but scarce resources, yet for many annotation tasks such as treebanking and sequence labeling there exist multiple corpora with different and incompatible annotation guidelines or standards. This seems to be a great waste of human efforts, and it would be nice to automatically adapt one annotation standard to another. We present a simple yet effective strategy that transfers knowledge from a differently annotated corpus to the corpus with desired annotation. We test the efficacy of this method in the context of Chinese word segmentation and part-of-speech tagging, where no segmentation and POS tagging standards are widely accepted due to the lack of morphology in Chinese. Experiments show that adaptation from the much larger People’s Daily corpus to the smaller but more popular Penn Chinese Treebank results in significant improvements in both segmentation and tagging accuracies (with error reductions of 30.2% and 14%, respectively), which in turn helps improve Chinese parsing accuracy. 1 Introduction Much of statistical NLP research relies on some sort of manually annotated corpora to train their models, but these resources are extremely expensive to build, especially at a large scale, for example in treebanking (Marcus et al., 1993). However the linguistic theories underlying these annotation efforts are often heavily debated, and as a result there often exist multiple corpora for the same task with vastly different and incompatible annotation philosophies. For example just for English treebanking there have been the Chomskian-style {1 B2 o3 Ú4 –5 u6 NR NN VV NR U.S. Vice-President visited China {1 B2 o3 Ú4 –5 u6 ns b n v U.S. Vice President visited-China Figure 1: Incompatible word segmentation and POS tagging standards between CTB (upper) and People’s Daily (below). Penn Treebank (Marcus et al., 1993) the HPSG LinGo Redwoods Treebank (Oepen et al., 2002), and a smaller dependency treebank (Buchholz and Marsi, 2006). A second, related problem is that the raw texts are also drawn from different domains, which for the above example range from financial news (PTB/WSJ) to transcribed dialog (LinGo). These two problems seem be a great waste in human efforts, and it would be nice if one could automatically adapt from one annotation standard and/or domain to another in order to exploit much larger datasets for better training. The second problem, domain adaptation, is very well-studied, e.g. by Blitzer et al. (2006) and Daum´e III (2007) (and see below for discussions), so in this paper we focus on the less studied, but equally important problem of annotationstyle adaptation. We present a very simple yet effective strategy that enables us to utilize knowledge from a differently annotated corpora for the training of a model on a corpus with desired annotation. 
The basic idea is very simple: we first train on a source corpus, resulting in a source classifier, which is used to label the target corpus and results in a “sourcestyle” annotation of the target corpus. We then 522 train a second model on the target corpus with the first classifier’s prediction as additional features for guided learning. This method is very similar to some ideas in domain adaptation (Daum´e III and Marcu, 2006; Daum´e III, 2007), but we argue that the underlying problems are quite different. Domain adaptation assumes the labeling guidelines are preserved between the two domains, e.g., an adjective is always labeled as JJ regardless of from Wall Street Journal (WSJ) or Biomedical texts, and only the distributions are different, e.g., the word “control” is most likely a verb in WSJ but often a noun in Biomedical texts (as in “control experiment”). Annotation-style adaptation, however, tackles the problem where the guideline itself is changed, for example, one treebank might distinguish between transitive and intransitive verbs, while merging the different noun types (NN, NNS, etc.), and for example one treebank (PTB) might be much flatter than the other (LinGo), not to mention the fundamental disparities between their underlying linguistic representations (CFG vs. HPSG). In this sense, the problem we study in this paper seems much harder and more motivated from a linguistic (rather than statistical) point of view. More interestingly, our method, without any assumption on the distributions, can be simultaneously applied to both domain and annotation standards adaptation problems, which is very appealing in practice because the latter problem often implies the former, as in our case study. To test the efficacy of our method we choose Chinese word segmentation and part-of-speech tagging, where the problem of incompatible annotation standards is one of the most evident: so far no segmentation standard is widely accepted due to the lack of a clear definition of Chinese words, and the (almost complete) lack of morphology results in much bigger ambiguities and heavy debates in tagging philosophies for Chinese parts-of-speech. The two corpora used in this study are the much larger People’s Daily (PD) (5.86M words) corpus (Yu et al., 2001) and the smaller but more popular Penn Chinese Treebank (CTB) (0.47M words) (Xue et al., 2005). They used very different segmentation standards as well as different POS tagsets and tagging guidelines. For example, in Figure 1, People’s Daily breaks “Vice-President” into two words while combines the phrase “visited-China” as a compound. Also CTB has four verbal categories (VV for normal verbs, and VC for copulas, etc.) while PD has only one verbal tag (v) (Xia, 2000). It is preferable to transfer knowledge from PD to CTB because the latter also annotates tree structures which is very useful for downstream applications like parsing, summarization, and machine translation, yet it is much smaller in size. Indeed, many recent efforts on Chinese-English translation and Chinese parsing use the CTB as the de facto segmentation and tagging standards, but suffers from the limited size of training data (Chiang, 2007; Bikel and Chiang, 2000). We believe this is also a reason why stateof-the-art accuracy for Chinese parsing is much lower than that of English (CTB is only half the size of PTB). 
Our experiments show that adaptation from PD to CTB results in a significant improvement in segmentation and POS tagging, with error reductions of 30.2% and 14%, respectively. In addition, the improved accuracies from segmentation and tagging also lead to an improved parsing accuracy on CTB, reducing 38% of the error propagation from word segmentation to parsing. We envision this technique to be general and widely applicable to many other sequence labeling tasks. In the rest of the paper we first briefly review the popular classification-based method for word segmentation and tagging (Section 2), and then describe our idea of annotation adaptation (Section 3). We then discuss other relevant previous work including co-training and classifier combination (Section 4) before presenting our experimental results (Section 5). 2 Segmentation and Tagging as Character Classification Before describing the adaptation algorithm, we give a brief introduction of the baseline character classification strategy for segmentation, as well as joint segmenation and tagging (henceforth “Joint S&T”). following our previous work (Jiang et al., 2008). Given a Chinese sentence as sequence of n characters: C1 C2 .. Cn where Ci is a character, word segmentation aims to split the sequence into m(≤n) words: C1:e1 Ce1+1:e2 .. Cem−1+1:em where each subsequence Ci:j indicates a Chinese word spanning from characters Ci to Cj (both in523 Algorithm 1 Perceptron training algorithm. 1: Input: Training examples (xi, yi) 2: ⃗α ←0 3: for t ←1 .. T do 4: for i ←1 .. N do 5: zi ←argmaxz∈GEN(xi) Φ(xi, z) · ⃗α 6: if zi ̸= yi then 7: ⃗α ←⃗α + Φ(xi, yi) −Φ(xi, zi) 8: Output: Parameters ⃗α clusive). While in Joint S&T, each word is further annotated with a POS tag: C1:e1/t1 Ce1+1:e2/t2 .. Cem−1+1:em/tm where tk(k = 1..m) denotes the POS tag for the word Cek−1+1:ek. 2.1 Character Classification Method Xue and Shen (2003) describe for the first time the character classification approach for Chinese word segmentation, where each character is given a boundary tag denoting its relative position in a word. In Ng and Low (2004), Joint S&T can also be treated as a character classification problem, where a boundary tag is combined with a POS tag in order to give the POS information of the word containing these characters. In addition, Ng and Low (2004) find that, compared with POS tagging after word segmentation, Joint S&T can achieve higher accuracy on both segmentation and POS tagging. This paper adopts the tag representation of Ng and Low (2004). For word segmentation only, there are four boundary tags: • b: the begin of the word • m: the middle of the word • e: the end of the word • s: a single-character word while for Joint S&T, a POS tag is attached to the tail of a boundary tag, to incorporate the word boundary information and POS information together. For example, b-NN indicates that the character is the begin of a noun. After all characters of a sentence are assigned boundary tags (or with POS postfix) by a classifier, the corresponding word sequence (or with POS) can be directly derived. Take segmentation for example, a character assigned a tag s or a subsequence of words assigned a tag sequence bm∗e indicates a word. 2.2 Training Algorithm and Features Now we will show the training algorithm of the classifier and the features used. Several classification models can be adopted here, however, we choose the averaged perceptron algorithm (Collins, 2002) because of its simplicity and high accuracy. 
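To make the tag representation of Section 2.1 concrete, the following sketch shows how a segmented, POS-tagged sentence maps to per-character tags and back; the three-word example and its tags are purely illustrative, and the code is a reading of the scheme described above rather than code from the paper.

```python
def encode(words):
    """Per-character tags of Section 2.1: 's' for a single-character word,
    otherwise 'b m* e'; for Joint S&T the word's POS tag is appended."""
    tags = []
    for word, pos in words:
        if len(word) == 1:
            tags.append("s-" + pos)
        else:
            tags.append("b-" + pos)
            tags.extend("m-" + pos for _ in word[1:-1])
            tags.append("e-" + pos)
    return tags

def decode(chars, tags):
    """Recover the (word, POS) sequence: a word ends at every 's' or 'e'."""
    words, start = [], 0
    for i, tag in enumerate(tags):
        boundary, pos = tag.split("-", 1)
        if boundary in ("s", "e"):
            words.append((chars[start:i + 1], pos))
            start = i + 1
    return words

# Round trip on a toy, hypothetical sentence:
sent = [("中国", "NR"), ("大", "JJ"), ("岛", "NN")]
tags = encode(sent)                      # ['b-NR', 'e-NR', 's-JJ', 's-NN']
assert decode("".join(w for w, _ in sent), tags) == sent
```

With this encoding in place, the task reduces to assigning one such tag to every character, which is what the averaged perceptron is trained to do.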
It is an online training algorithm and has been successfully used in many NLP tasks, such as POS tagging (Collins, 2002), parsing (Collins and Roark, 2004), Chinese word segmentation (Zhang and Clark, 2007; Jiang et al., 2008), and so on. Similar to the situation in other sequence labeling problems, the training procedure is to learn a discriminative model mapping from inputs x ∈X to outputs y ∈Y , where X is the set of sentences in the training corpus and Y is the set of corresponding labelled results. Following Collins, we use a function GEN(x) enumerating the candidate results of an input x , a representation Φ mapping each training example (x, y) ∈X × Y to a feature vector Φ(x, y) ∈Rd, and a parameter vector ⃗α ∈Rd corresponding to the feature vector. For an input character sequence x, we aim to find an output F(x) that satisfies: F(x) = argmax y∈GEN(x) Φ(x, y) · ⃗α (1) where Φ(x, y)·⃗α denotes the inner product of feature vector Φ(x, y) and the parameter vector ⃗α. Algorithm 1 depicts the pseudo code to tune the parameter vector ⃗α. In addition, the “averaged parameters” technology (Collins, 2002) is used to alleviate overfitting and achieve stable performance. Table 1 lists the feature template and corresponding instances. Following Ng and Low (2004), the current considering character is denoted as C0, while the ith character to the left of C0 as C−i, and to the right as Ci. There are additional two functions of which each returns some property of a character. Pu(·) is a boolean function that checks whether a character is a punctuation symbol (returns 1 for a punctuation, 0 for not). T(·) is a multi-valued function, it classifies a character into four classifications: number, date, English letter and others (returns 1, 2, 3 and 4, respectively). 3 Automatic Annotation Adaptation From this section, several shortened forms are adopted for representation inconvenience. We use source corpus to denote the corpus with the annotation standard that we don’t require, which is of 524 Feature Template Instances Ci (i = −2..2) C−2 = Ê, C−1 = , C0 = c, C1 = “, C2 = R CiCi+1 (i = −2..1) C−2C−1 = Ê, C−1C0 = c, C0C1 = c“, C1C2 = “R C−1C1 C−1C1 = “ Pu(C0) Pu(C0) = 0 T(C−2)T(C−1)T(C0)T(C1)T(C2) T(C−2)T(C−1)T(C0)T(C1)T(C2) = 11243 Table 1: Feature templates and instances from Ng and Low (Ng and Low, 2004). Suppose we are considering the third character “c” in “Ê c “R”. course the source of the adaptation, while target corpus denoting the corpus with the desired standard. And correspondingly, the two annotation standards are naturally denoted as source standard and target standard, while the classifiers following the two annotation standards are respectively named as source classifier and target classifier, if needed. Considering that word segmentation and Joint S&T can be conducted in the same character classification manner, we can design an unified standard adaptation framework for the two tasks, by taking the source classifier’s classification result as the guide information for the target classifier’s classification decision. The following section depicts this adaptation strategy in detail. 
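Since the adaptation framework reuses these features unchanged, it may help to see how the Table 1 templates could be instantiated in code. The sketch below is only an illustration under stated assumptions (the '#' boundary padding and the crude date test are assumptions made here, not part of the paper); it is not the authors' implementation.

```python
import unicodedata

def char_type(c):
    """T(.) of Section 2.2: 1 = number, 2 = date character, 3 = English
    letter, 4 = others. Treating the year/month/day characters as the
    'date' class is an assumption made only for this illustration."""
    if c.isdigit():
        return "1"
    if c in "年月日":
        return "2"
    if c.isascii() and c.isalpha():
        return "3"
    return "4"

def is_punct(c):
    """Pu(.): '1' if the character is punctuation, else '0' (simplified)."""
    return "1" if unicodedata.category(c).startswith("P") else "0"

def basic_features(chars, i):
    """Instantiate the Table 1 templates for the character at position i.
    Out-of-range context positions are padded with '#' (an assumption)."""
    C = lambda j: chars[i + j] if 0 <= i + j < len(chars) else "#"
    feats = ["C%d=%s" % (j, C(j)) for j in range(-2, 3)]            # Ci
    feats += ["C%dC%d=%s%s" % (j, j + 1, C(j), C(j + 1))            # CiCi+1
              for j in range(-2, 2)]
    feats.append("C-1C1=%s%s" % (C(-1), C(1)))                      # C-1C1
    feats.append("Pu(C0)=" + is_punct(C(0)))                        # Pu(C0)
    feats.append("T=" + "".join(char_type(C(j)) for j in range(-2, 3)))
    return feats
```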
3.1 General Adaptation Strategy In detail, in order to adapt knowledge from the source corpus, first, a source classifier is trained on it and therefore captures the knowledge it contains; then, the source classifier is used to classify the characters in the target corpus, although the classification result follows a standard that we don’t desire; finally, a target classifier is trained on the target corpus, with the source classifier’s classification result as additional guide information. The training procedure of the target classifier automatically learns the regularity to transfer the source classifier’s predication result from source standard to target standard. This regularity is incorporated together with the knowledge learnt from the target corpus itself, so as to obtain enhanced predication accuracy. For a given un-classified character sequence, the decoding is analogous to the training. First, the character sequence is input into the source classifier to obtain an source standard annotated classification result, then it is input into the target classifier with this classification result as additional information to get the final result. This coincides with the stacking method for combining dependency parsers (Martins et al., 2008; Nivre and McDonsource corpus train with normal features source classifier train with additional features target classifier target corpus source annotation classification result Figure 2: The pipeline for training. raw sentence source classifier source annotation classification result target classifier target annotation classification result Figure 3: The pipeline for decoding. ald, 2008), and is also similar to the Pred baseline for domain adaptation in (Daum´e III and Marcu, 2006; Daum´e III, 2007). Figures 2 and 3 show the flow charts for training and decoding. The utilization of the source classifier’s classification result as additional guide information resorts to the introduction of new features. For the current considering character waiting for classification, the most intuitive guide features is the source classifier’s classification result itself. However, our effort isn’t limited to this, and more special features are introduced: the source classifier’s classification result is attached to every feature listed in Table 1 to get combined guide features. This is similar to feature design in discriminative dependency parsing (McDonald et al., 2005; Mc525 Donald and Pereira, 2006), where the basic features, composed of words and POSs in the context, are also conjoined with link direction and distance in order to obtain more special features. Table 2 shows an example of guide features and basic features, where “α = b ” represents that the source classifier classifies the current character as b, the beginning of a word. Such combination method derives a series of specific features, which helps the target classifier to make more precise classifications. The parameter tuning procedure of the target classifier will automatically learn the regularity of using the source classifier’s classification result to guide its decision making. For example, if a current considering character shares some basic features in Table 2 and it is classified as b, then the target classifier will probably classify it as m. In addition, the training procedure of the target classifier also learns the relative weights between the guide features and the basic features, so that the knowledge from both the source corpus and the target corpus are automatically integrated together. 
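As a rough sketch of how such combined guide features could be generated (reusing a basic_features function like the one sketched after Section 2.2, and assuming the source classifier's output is reduced to a single predicted tag alpha per character), one might write the following. It illustrates the idea behind Table 2; it is not the authors' code.

```python
def guide_features(basic_feats, alpha):
    """Guide features of Section 3.1 / Table 2: the source classifier's
    prediction alpha on its own, plus alpha conjoined with every basic
    feature of Table 1."""
    return ["alpha=" + alpha] + [f + "|alpha=" + alpha for f in basic_feats]

def adapted_features(chars, i, alpha):
    """Feature set seen by the target classifier for character i: the
    ordinary Table 1 features plus their alpha-conjoined variants, so the
    learner can weight target-corpus evidence against the transferred
    source-standard prediction."""
    basic = basic_features(chars, i)   # as in the earlier sketch
    return basic + guide_features(basic, alpha)
```

At decoding time the same construction is applied to the source classifier's prediction on the raw input, mirroring the pipeline of Figure 3.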
In fact, more complicated features can be adopted as guide information. For error tolerance, guide features can be extracted from n-best results or compacted lattices of the source classifier; while for the best use of the source classifier’s output, guide features can also be the classification results of several successive characters. We leave them as future research. 4 Related Works Co-training (Sarkar, 2001) and classifier combination (Nivre and McDonald, 2008) are two technologies for training improved dependency parsers. The co-training technology lets two different parsing models learn from each other during parsing an unlabelled corpus: one model selects some unlabelled sentences it can confidently parse, and provide them to the other model as additional training corpus in order to train more powerful parsers. The classifier combination lets graph-based and transition-based dependency parsers to utilize the features extracted from each other’s parsing results, to obtain combined, enhanced parsers. The two technologies aim to let two models learn from each other on the same corpora with the same distribution and annotation standard, while our strategy aims to integrate the knowledge in multiple corpora with different Baseline Features C−2 = { C−1 = B C0 = o C1 = Ú C2 = – C−2C−1 = {B C−1C0 = Bo C0C1 = oÚ C1C2 = ږ C−1C1 = BÚ Pu(C0) = 0 T(C−2)T(C−1)T(C0)T(C1)T(C2) = 44444 Guide Features α = b C−2 = { ◦α = b C−1 = B ◦α = b C0 = o ◦α = b C1 = Ú ◦α = b C2 = – ◦α = b C−2C−1 = {B ◦α = b C−1C0 = Bo ◦α = b C0C1 = oÚ ◦α = b C1C2 = ږ ◦α = b C−1C1 = BÚ ◦α = b Pu(C0) = 0 ◦α = b T(C−2)T(C−1)T(C0)T(C1)T(C2) = 44444 ◦α = b Table 2: An example of basic features and guide features of standard-adaptation for word segmentation. Suppose we are considering the third character “o” in “{B o ږu”. annotation-styles. Gao et al. (2004) described a transformationbased converter to transfer a certain annotationstyle word segmentation result to another style. They design some class-type transformation templates and use the transformation-based errordriven learning method of Brill (1995) to learn what word delimiters should be modified. However, this converter need human designed transformation templates, and is hard to be generalized to POS tagging, not to mention other structure labeling tasks. Moreover, the processing procedure is divided into two isolated steps, conversion after segmentation, which suffers from error propagation and wastes the knowledge in the corpora. On the contrary, our strategy is automatic, generalizable and effective. In addition, many efforts have been devoted to manual treebank adaptation, where they adapt PTB to other grammar formalisms, such as such as CCG and LFG (Hockenmaier and Steedman, 2008; Cahill and Mccarthy, 2007). However, they are heuristics-based and involve heavy human engineering. 526 5 Experiments Our adaptation experiments are conducted from People’s Daily (PD) to Penn Chinese Treebank 5.0 (CTB). These two corpora are segmented following different segmentation standards and labeled with different POS sets (see for example Figure 1). PD is much bigger in size, with about 100K sentences, while CTB is much smaller, with only about 18K sentences. Thus a classifier trained on CTB usually falls behind that trained on PD, but CTB is preferable because it also annotates tree structures, which is very useful for downstream applications like parsing and translation. 
For example, currently, most Chinese constituency and dependency parsers are trained on some version of CTB, using its segmentation and POS tagging as the de facto standards. Therefore, we expect the knowledge adapted from PD will lead to more precise CTB-style segmenter and POS tagger, which would in turn reduce the error propagation to parsing (and translation). Experiments adapting from PD to CTB are conducted for two tasks: word segmentation alone, and joint segmentation and POS tagging (Joint S&T). The performance measurement indicators for word segmentation and Joint S&T are balanced F-measure, F = 2PR/(P + R), a function of Precision P and Recall R. For word segmentation, P indicates the percentage of words in segmentation result that are segmented correctly, and R indicates the percentage of correctly segmented words in gold standard words. For Joint S&T, P and R mean nearly the same except that a word is correctly segmented only if its POS is also correctly labelled. 5.1 Baseline Perceptron Classifier We first report experimental results of the single perceptron classifier on CTB 5.0. The original corpus is split according to former works: chapters 271 −300 for testing, chapters 301 −325 for development, and others for training. Figure 4 shows the learning curves for segmentation only and Joint S&T, we find all curves tend to moderate after 7 iterations. The data splitting convention of other two corpora, People’s Daily doesn’t reserve the development sets, so in the following experiments, we simply choose the model after 7 iterations when training on this corpus. The first 3 rows in each sub-table of Table 3 show the performance of the single perceptron 0.880 0.890 0.900 0.910 0.920 0.930 0.940 0.950 0.960 0.970 0.980 1 2 3 4 5 6 7 8 9 10 F measure number of iterations segmentation only segmentation in Joint S&T Joint S&T Figure 4: Averaged perceptron learning curves for segmentation and Joint S&T. Train on Test on Seg F1% JST F1% Word Segmentation PD PD 97.45 — PD CTB 91.71 — CTB CTB 97.35 — PD →CTB CTB 98.15 — Joint S&T PD PD 97.57 94.54 PD CTB 91.68 — CTB CTB 97.58 93.06 PD →CTB CTB 98.23 94.03 Table 3: Experimental results for both baseline models and final systems with annotation adaptation. PD →CTB means annotation adaptation from PD to CTB. For the upper sub-table, items of JST F1 are undefined since only segmentation is performs. While in the sub-table below, JST F1 is also undefined since the model trained on PD gives a POS set different from that of CTB. models. Comparing row 1 and 3 in the sub-table below with the corresponding rows in the upper sub-table, we validate that when word segmentation and POS tagging are conducted jointly, the performance for segmentation improves since the POS tags provide additional information to word segmentation (Ng and Low, 2004). We also see that for both segmentation and Joint S&T, the performance sharply declines when a model trained on PD is tested on CTB (row 2 in each sub-table). In each task, only about 92% F1 is achieved. This obviously fall behind those of the models trained on CTB itself (row 3 in each sub-table), about 97% F1, which are used as the baselines of the following annotation adaptation experiments. 
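For reference, the segmentation and Joint S&T scores reported in this section can be reproduced from character spans in the usual way; the sketch below is a standard implementation of such word-level scoring, written for illustration rather than taken from the paper.

```python
def spans(words):
    """Map a list of (word, POS) pairs to a set of (start, end, POS) spans
    over the character sequence."""
    out, start = set(), 0
    for w, p in words:
        out.add((start, start + len(w), p))
        start += len(w)
    return out

def prf(gold, pred, tagged=True):
    """Precision, recall and balanced F = 2PR/(P+R) over word tokens.
    With tagged=False the POS tags are ignored, giving the segmentation-only
    scores; with tagged=True a token is correct only if both its boundaries
    and its POS tag match."""
    strip = (lambda s: s) if tagged else (lambda s: {(b, e) for b, e, _ in s})
    g, s = strip(spans(gold)), strip(spans(pred))
    correct = len(g & s)
    p = correct / len(s) if s else 0.0
    r = correct / len(g) if g else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```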
527 POS #Word #BaseErr #AdaErr ErrDec% AD 305 30 19 36.67 ↓ AS 76 0 0 BA 4 1 1 CC 135 8 8 CD 356 21 14 33.33 ↓ CS 6 0 0 DEC 137 31 23 25.81 ↓ DEG 197 32 37 ↑ DEV 10 0 0 DT 94 3 1 66.67 ↓ ETC 12 0 0 FW 1 1 1 JJ 127 41 44 ↑ LB 2 1 1 LC 106 3 2 33.33 ↓ M 349 18 4 77.78 ↓ MSP 8 2 1 50.00 ↓ NN 1715 151 126 16.56 ↓ NR 713 59 50 15.25 ↓ NT 178 1 2 ↑ OD 84 0 0 P 251 10 6 40.00 ↓ PN 81 1 1 PU 997 0 1 ↑ SB 2 0 0 SP 2 2 2 VA 98 23 21 08.70 ↓ VC 61 0 0 VE 25 1 0 100.00 ↓ VV 689 64 40 37.50 ↓ SUM 6821 213 169 20.66 ↓ Table 4: Error analysis for Joint S&T on the developing set of CTB. #BaseErr and #AdaErr denote the count of words that can’t be recalled by the baseline model and adapted model, respectively. ErrDec denotes the error reduction of Recall. 5.2 Adaptation for Segmentation and Tagging Table 3 also lists the results of annotation adaptation experiments. For word segmentation, the model after annotation adaptation (row 4 in upper sub-table) achieves an F-measure increment of 0.8 points over the baseline model, corresponding to an error reduction of 30.2%; while for Joint S&T, the F-measure increment of the adapted model (row 4 in sub-table below) is 1 point, which corresponds to an error reduction of 14%. In addition, the performance of the adapted model for Joint S&T obviously surpass that of (Jiang et al., 2008), which achieves an F1 of 93.41% for Joint S&T, although with more complicated models and features. Due to the obvious improvement brought by annotation adaptation to both word segmentation and Joint S&T, we can safely conclude that the knowledge can be effectively transferred from on anInput Type Parsing F1% gold-standard segmentation 82.35 baseline segmentation 80.28 adapted segmentation 81.07 Table 5: Chinese parsing results with different word segmentation results as input. notation standard to another, although using such a simple strategy. To obtain further information about what kind of errors be alleviated by annotation adaptation, we conduct an initial error analysis for Joint S&T on the developing set of CTB. It is reasonable to investigate the error reduction of Recall for each word cluster grouped together according to their POS tags. From Table 4 we find that out of 30 word clusters appeared in the developing set of CTB, 13 clusters benefit from the annotation adaptation strategy, while 4 clusters suffer from it. However, the compositive error rate of Recall for all word clusters is reduced by 20.66%, such a fact invalidates the effectivity of annotation adaptation. 5.3 Contribution to Chinese Parsing We adopt the Chinese parser of Xiong et al. (2005), and train it on the training set of CTB 5.0 as described before. To sketch the error propagation to parsing from word segmentation, we redefine the constituent span as a constituent subtree from a start character to a end character, rather than from a start word to a end word. Note that if we input the gold-standard segmented test set into the parser, the F-measure under the two definitions are the same. Table 5 shows the parsing accuracies with different word segmentation results as the parser’s input. The parsing F-measure corresponding to the gold-standard segmentation, 82.35, represents the “oracle” accuracy (i.e., upperbound) of parsing on top of automatic word segmention. After integrating the knowledge from PD, the enhanced word segmenter gains an F-measure increment of 0.8 points, which indicates that 38% of the error propagation from word segmentation to parsing is reduced by our annotation adaptation strategy. 
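To spell out the 38% figure: Table 5 puts the gap between gold-standard and baseline segmentation at 82.35 - 80.28 = 2.07 F1 points of error propagated from segmentation to parsing, of which the adapted segmenter recovers 81.07 - 80.28 = 0.79 points, i.e. 0.79 / 2.07, roughly 38%. This reading of the figure is inferred from Table 5; the paper itself does not show the calculation.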
6 Conclusion and Future Works This paper presents an automatic annotation adaptation strategy, and conducts experiments on a classic problem: word segmentation and Joint 528 S&T. To adapt knowledge from a corpus with an annotation standard that we don’t require, a classifier trained on this corpus is used to pre-process the corpus with the desired annotated standard, on which a second classifier is trained with the first classifier’s predication results as additional guide information. Experiments of annotation adaptation from PD to CTB 5.0 for word segmentation and POS tagging show that, this strategy can make effective use of the knowledge from the corpus with different annotations. It obtains considerable F-measure increment, about 0.8 point for word segmentation and 1 point for Joint S&T, with corresponding error reductions of 30.2% and 14%. The final result outperforms the latest work on the same corpus which uses more complicated technologies, and achieves the state-of-the-art. Moreover, such improvement further brings striking Fmeasure increment for Chinese parsing, about 0.8 points, corresponding to an error propagation reduction of 38%. In the future, we will continue to research on annotation adaptation for other NLP tasks which have different annotation-style corpora. Especially, we will pay efforts to the annotation standard adaptation between different treebanks, for example, from HPSG LinGo Redwoods Treebank to PTB, or even from a dependency treebank to PTB, in order to obtain more powerful PTB annotation-style parsers. Acknowledgement This project was supported by National Natural Science Foundation of China, Contracts 60603095 and 60736014, and 863 State Key Project No. 2006AA010108. We are especially grateful to Fernando Pereira and the anonymous reviewers for pointing us to relevant domain adaption references. We also thank Yang Liu and Haitao Mi for helpful discussions. References Daniel M. Bikel and David Chiang. 2000. Two statistical parsing models applied to the chinese treebank. In Proceedings of the second workshop on Chinese language processing. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP. Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. In Computational Linguistics. Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proceedings of CoNLL. Aoife Cahill and Mairead Mccarthy. 2007. Automatic annotation of the penn treebank with lfg fstructure information. In in Proceedings of the LREC Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, pages 201–228. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of the 42th Annual Meeting of the Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Empirical Methods in Natural Language Processing Conference, pages 1–8, Philadelphia, USA. Hal Daum´e III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. In Journal of Artificial Intelligence Research. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL. 
Jianfeng Gao, Andi Wu, Mu Li, Chang-Ning Huang, Hongqiao Li, Xinsong Xia, and Haowei Qin. 2004. Adaptive chinese word segmentation. In Proceedings of ACL. Julia Hockenmaier and Mark Steedman. 2008. Ccgbank: a corpus of ccg derivations and dependency structures extracted from the penn treebank. In Computational Linguistics, volume 33(3), pages 355–396. Wenbin Jiang, Liang Huang, Yajuan L¨u, and Qun Liu. 2008. A cascaded linear model for joint chinese word segmentation and part-of-speech tagging. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. In Computational Linguistics. Andr´e F. T. Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of EMNLP. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88. 529 Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91– 98. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese partof-speech tagging: One-at-a-time or all-at-once? word-based or character-based? In Proceedings of the Empirical Methods in Natural Language Processing Conference. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics. Stephan Oepen, Kristina Toutanova, Stuart Shieber, Christopher Manning Dan Flickinger, and Thorsten Brants. 2002. The lingo redwoods treebank: Motivation and preliminary applications. In In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002). Anoop Sarkar. 2001. Applying co-training methods to statistical parsing. In Proceedings of NAACL. Fei Xia. 2000. The part-of-speech tagging guidelines for the penn chinese treebank (3.0). In Technical Reports. Deyi Xiong, Shuanglong Li, Qun Liu, and Shouxun Lin. 2005. Parsing the penn chinese treebank with semantic knowledge. In Proceedings of IJCNLP 2005, pages 70–81. Nianwen Xue and Libin Shen. 2003. Chinese word segmentation as lmr tagging. In Proceedings of SIGHAN Workshop. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. In Natural Language Engineering. Shiwen Yu, Jianming Lu, Xuefeng Zhu, Huiming Duan, Shiyong Kang, Honglin Sun, Hui Wang, Qiang Zhao, and Weidong Zhan. 2001. Processing norms of modern chinese corpus. Technical report. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. 530
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 46–54, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Exploiting Heterogeneous Treebanks for Parsing Zheng-Yu Niu, Haifeng Wang, Hua Wu Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza, Beijing, 100738, China {niuzhengyu,wanghaifeng,wuhua}@rdc.toshiba.com.cn Abstract We address the issue of using heterogeneous treebanks for parsing by breaking it down into two sub-problems, converting grammar formalisms of the treebanks to the same one, and parsing on these homogeneous treebanks. First we propose to employ an iteratively trained target grammar parser to perform grammar formalism conversion, eliminating predefined heuristic rules as required in previous methods. Then we provide two strategies to refine conversion results, and adopt a corpus weighting technique for parsing on homogeneous treebanks. Results on the Penn Treebank show that our conversion method achieves 42% error reduction over the previous best result. Evaluation on the Penn Chinese Treebank indicates that a converted dependency treebank helps constituency parsing and the use of unlabeled data by self-training further increases parsing f-score to 85.2%, resulting in 6% error reduction over the previous best result. 1 Introduction The last few decades have seen the emergence of multiple treebanks annotated with different grammar formalisms, motivated by the diversity of languages and linguistic theories, which is crucial to the success of statistical parsing (Abeille et al., 2000; Brants et al., 1999; Bohmova et al., 2003; Han et al., 2002; Kurohashi and Nagao, 1998; Marcus et al., 1993; Moreno et al., 2003; Xue et al., 2005). Availability of multiple treebanks creates a scenario where we have a treebank annotated with one grammar formalism, and another treebank annotated with another grammar formalism that we are interested in. We call the first a source treebank, and the second a target treebank. We thus encounter a problem of how to use these heterogeneous treebanks for target grammar parsing. Here heterogeneous treebanks refer to two or more treebanks with different grammar formalisms, e.g., one treebank annotated with dependency structure (DS) and the other annotated with phrase structure (PS). It is important to acquire additional labeled data for the target grammar parsing through exploitation of existing source treebanks since there is often a shortage of labeled data. However, to our knowledge, there is no previous study on this issue. Recently there have been some works on using multiple treebanks for domain adaptation of parsers, where these treebanks have the same grammar formalism (McClosky et al., 2006b; Roark and Bacchiani, 2003). Other related works focus on converting one grammar formalism of a treebank to another and then conducting studies on the converted treebank (Collins et al., 1999; Forst, 2003; Wang et al., 1994; Watkinson and Manandhar, 2001). These works were done either on multiple treebanks with the same grammar formalism or on only one converted treebank. We see that their scenarios are different from ours as we work with multiple heterogeneous treebanks. For the use of heterogeneous treebanks1, we propose a two-step solution: (1) converting the grammar formalism of the source treebank to the target one, (2) refining converted trees and using them as additional training data to build a target grammar parser. 
For grammar formalism conversion, we choose the DS to PS direction for the convenience of the comparison with existing works (Xia and Palmer, 2001; Xia et al., 2008). Specifically, we assume that the source grammar formalism is dependency 1Here we assume the existence of two treebanks. 46 grammar, and the target grammar formalism is phrase structure grammar. Previous methods for DS to PS conversion (Collins et al., 1999; Covington, 1994; Xia and Palmer, 2001; Xia et al., 2008) often rely on predefined heuristic rules to eliminate converison ambiguity, e.g., minimal projection for dependents, lowest attachment position for dependents, and the selection of conversion rules that add fewer number of nodes to the converted tree. In addition, the validity of these heuristic rules often depends on their target grammars. To eliminate the heuristic rules as required in previous methods, we propose to use an existing target grammar parser (trained on the target treebank) to generate N-best parses for each sentence in the source treebank as conversion candidates, and then select the parse consistent with the structure of the source tree as the converted tree. Furthermore, we attempt to use converted trees as additional training data to retrain the parser for better conversion candidates. The procedure of tree conversion and parser retraining will be run iteratively until a stopping condition is satisfied. Since some converted trees might be imperfect from the perspective of the target grammar, we provide two strategies to refine conversion results: (1) pruning low-quality trees from the converted treebank, (2) interpolating the scores from the source grammar and the target grammar to select better converted trees. Finally we adopt a corpus weighting technique to get an optimal combination of the converted treebank and the existing target treebank for parser training. We have evaluated our conversion algorithm on a dependency structure treebank (produced from the Penn Treebank) for comparison with previous work (Xia et al., 2008). We also have investigated our two-step solution on two existing treebanks, the Penn Chinese Treebank (CTB) (Xue et al., 2005) and the Chinese Dependency Treebank (CDT)2 (Liu et al., 2006). Evaluation on WSJ data demonstrates that it is feasible to use a parser for grammar formalism conversion and the conversion benefits from converted trees used for parser retraining. Our conversion method achieves 93.8% f-score on dependency trees produced from WSJ section 22, resulting in 42% error reduction over the previous best result for DS to PS conversion. Results on CTB show that score interpolation is 2Available at http://ir.hit.edu.cn/. more effective than instance pruning for the use of converted treebanks for parsing and converted CDT helps parsing on CTB. When coupled with self-training technique, a reranking parser with CTB and converted CDT as labeled data achieves 85.2% f-score on CTB test set, an absolute 1.0% improvement (6% error reduction) over the previous best result for Chinese parsing. The rest of this paper is organized as follows. In Section 2, we first describe a parser based method for DS to PS conversion, and then we discuss possible strategies to refine conversion results, and finally we adopt the corpus weighting technique for parsing on homogeneous treebanks. Section 3 provides experimental results of grammar formalism conversion on a dependency treebank produced from the Penn Treebank. 
In Section 4, we evaluate our two-step solution on two existing heterogeneous Chinese treebanks. Section 5 reviews related work and Section 6 concludes this work. 2 Our Two-Step Solution 2.1 Grammar Formalism Conversion Previous DS to PS conversion methods built a converted tree by iteratively attaching nodes and edges to the tree with the help of conversion rules and heuristic rules, based on current headdependent pair from a source dependency tree and the structure of the built tree (Collins et al., 1999; Covington, 1994; Xia and Palmer, 2001; Xia et al., 2008). Some observations can be made on these methods: (1) for each head-dependent pair, only one locally optimal conversion was kept during tree-building process, at the risk of pruning globally optimal conversions, (2) heuristic rules are required to deal with the problem that one head-dependent pair might have multiple conversion candidates, and these heuristic rules are usually hand-crafted to reflect the structural preference in their target grammars. To overcome these limitations, we propose to employ a parser to generate N-best parses as conversion candidates and then use the structural information of source trees to select the best parse as a converted tree. We formulate our conversion method as follows. Let CDS be a source treebank annotated with DS and CPS be a target treebank annotated with PS. Our goal is to convert the grammar formalism of CDS to that of CPS. We first train a constituency parser on CPS 47 Input: CPS, CDS, Q, and a constituency parser Output: Converted trees CDS PS 1. Initialize: — Set CDS,0 PS as null, DevScore=0, q=0; — Split CPS into training set CPS,train and development set CPS,dev; — Train the parser on CPS,train and denote it by Pq−1; 2. Repeat: — Use Pq−1 to generate N-best PS parses for each sentence in CDS, and convert PS to DS for each parse; — For each sentence in CDS Do ⋄ˆt=argmaxtScore(xi,t), and select the ˆt-th parse as a converted tree for this sentence; — Let CDS,q PS represent these converted trees, and let Ctrain=CPS,train S CDS,q PS ; — Train the parser on Ctrain, and denote the updated parser by Pq; — Let DevScoreq be the f-score of Pq on CPS,dev; — If DevScoreq > DevScore Then DevScore=DevScoreq, and CDS PS =CDS,q PS ; — Else break; — q++; Until q > Q Table 1: Our algorithm for DS to PS conversion. (90% trees in CPS as training set CPS,train, and other trees as development set CPS,dev) and then let the parser generate N-best parses for each sentence in CDS. Let n be the number of sentences (or trees) in CDS and ni be the number of N-best parses generated by the parser for the i-th (1 ≤i ≤n) sentence in CDS. Let xi,t be the t-th (1 ≤t ≤ni) parse for the i-th sentence. Let yi be the tree of the i-th (1 ≤i ≤n) sentence in CDS. To evaluate the quality of xi,t as a conversion candidate for yi, we convert xi,t to a dependency tree (denoted as xDS i,t ) and then use unlabeled dependency f-score to measure the similarity between xDS i,t and yi. Let Score(xi,t) denote the unlabeled dependency f-score of xDS i,t against yi. Then we determine the converted tree for yi by maximizing Score(xi,t) over the N-best parses. The conversion from PS to DS works as follows: Step 1. Use a head percolation table to find the head of each constituent in xi,t. Step 2. Make the head of each non-head child depend on the head of the head child for each constituent. Unlabeled dependency f-score is a harmonic mean of unlabeled dependency precision and unlabeled dependency recall. 
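To make this scoring and selection step concrete, the following sketch converts a candidate constituency parse to unlabeled dependencies with a head percolation function and selects the N-best parse that best matches the source dependency tree. The tree encoding and the head_child function are simplifying assumptions of this sketch, not the authors' implementation.

    # A non-terminal is (label, [children]); a preterminal is (pos_tag, word_index).
    def head_of(node, head_child):
        """Step 1: find the head word index of a constituent by recursing into
        the head child chosen by the head percolation table (head_child)."""
        label, children = node
        if isinstance(children, int):            # preterminal: the word is its own head
            return children
        i = head_child(label, [c[0] for c in children])
        return head_of(children[i], head_child)

    def to_dependencies(node, head_child, deps=None):
        """Step 2: the head of every non-head child depends on the head of the head child."""
        if deps is None:
            deps = set()
        label, children = node
        if isinstance(children, int):
            return deps
        i = head_child(label, [c[0] for c in children])
        h = head_of(children[i], head_child)
        for j, child in enumerate(children):
            if j != i:
                deps.add((head_of(child, head_child), h))   # (dependent word, head word)
            to_dependencies(child, head_child, deps)
        return deps

    def unlabeled_dep_f(cand_deps, gold_deps):
        """Harmonic mean of precision and recall over head-dependent word pairs."""
        correct = len(cand_deps & gold_deps)
        p = correct / len(cand_deps) if cand_deps else 0.0
        r = correct / len(gold_deps) if gold_deps else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    def select_converted_tree(nbest, gold_deps, head_child):
        """Among the N-best parses, choose the one whose induced dependencies
        best agree with the source dependency tree (maximizing Score(x_i,t))."""
        return max(nbest,
                   key=lambda t: unlabeled_dep_f(to_dependencies(t, head_child), gold_deps))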
Precision measures how many head-dependent word pairs found in xDS i,t are correct and recall is the percentage of headdependent word pairs defined in the gold-standard tree that are found in xDS i,t . Here we do not take dependency tags into consideration for evaluation since they cannot be obtained without more sophisticated rules. To improve the quality of N-best parses, we attempt to use the converted trees as additional training data to retrain the parser. The procedure of tree conversion and parser retraining can be run iteratively until a termination condition is satisfied. Here we use the parser’s f-score on CPS,dev as a termination criterion. If the update of training data hurts the performance on CPS,dev, then we stop the iteration. Table 1 shows this DS to PS conversion algorithm. Q is an upper limit of the number of loops, and Q ≥0. 2.2 Target Grammar Parsing Through grammar formalism conversion, we have successfully turned the problem of using heterogeneous treebanks for parsing into the problem of parsing on homogeneous treebanks. Before using converted source treebank for parsing, we present two strategies to refine conversion results. Instance Pruning For some sentences in CDS, the parser might fail to generate high quality N-best parses, resulting in inferior converted trees. To clean the converted treebank, we can remove the converted trees with low unlabeled dependency f-scores (defined in Section 2.1) before using the converted treebank for parser training 48 Figure 1: A parse tree in CTB for a sentence of / ­ .<world> ˆ<every> I<country> < ¬<people> Ñ<all> r<with> 81<eyes> Ý •<cast> † l<Hong Kong>0with /People from all over the world are casting their eyes on Hong Kong0as its English translation. because these trees are /misleading0training instances. The number of removed trees will be determined by cross validation on development set. Score Interpolation Unlabeled dependency f-scores used in Section 2.1 measure the quality of converted trees from the perspective of the source grammar only. In extreme cases, the top best parses in the N-best list are good conversion candidates but we might select a parse ranked quite low in the N-best list since there might be conflicts of syntactic structure definition between the source grammar and the target grammar. Figure 1 shows an example for illustration of a conflict between the grammar of CDT and that of CTB. According to Chinese head percolation tables used in the PS to DS conversion tool /Penn2Malt03 and Charniak’s parser4, the head of VP-2 is the word /r0(a preposition, with /BA0as its POS tag in CTB), and the head of IP-OBJ is Ý • 0. Therefore the word / Ý •0depends on the word /r0. But according to the annotation scheme in CDT (Liu et al., 2006), the word /r0is a dependent of the word /Ý •0. The conflicts between the two grammars may lead to the problem that the selected parses based on the information of the source grammar might not be preferred from the perspective of the 3Available at http://w3.msi.vxu.se/∼nivre/. 4Available at http://www.cs.brown.edu/∼ec/. target grammar. Therefore we modified the selection metric in Section 2.1 by interpolating two scores, the probability of a conversion candidate from the parser and its unlabeled dependency f-score, shown as follows: d Score(xi,t) = λ×Prob(xi,t)+(1−λ)×Score(xi,t). (1) The intuition behind this equation is that converted trees should be preferred from the perspective of both the source grammar and the target grammar. Here 0 ≤λ ≤1. 
Prob(xi,t) is a probability produced by the parser for xi,t (0 ≤Prob(xi,t) ≤1). The value of λ will be tuned by cross validation on development set. After grammar formalism conversion, the problem now we face has been limited to how to build parsing models on multiple homogeneous treebank. A possible solution is to simply concatenate the two treebanks as training data. However this method may lead to a problem that if the size of CPS is significantly less than that of converted CDS, converted CDS may weaken the effect CPS might have. One possible solution is to reduce the weight of examples from converted CDS in parser training. Corpus weighting is exactly such an approach, with the weight tuned on development set, that will be used for parsing on homogeneous treebanks in this paper. 3 Experiments of Grammar Formalism Conversion 3.1 Evaluation on WSJ section 22 Xia et al. (2008) used WSJ section 19 from the Penn Treebank to extract DS to PS conversion rules and then produced dependency trees from WSJ section 22 for evaluation of their DS to PS conversion algorithm. They showed that their conversion algorithm outperformed existing methods on the WSJ data. For comparison with their work, we conducted experiments in the same setting as theirs: using WSJ section 19 (1844 sentences) as CPS, producing dependency trees from WSJ section 22 (1700 sentences) as CDS5, and using labeled bracketing f-scores from the tool /EVALB0on WSJ section 22 for performance evaluation. 5We used the tool /Penn2Malt0to produce dependency structures from the Penn Treebank, which was also used for PS to DS conversion in our conversion algorithm. 49 All the sentences DevScore LR LP F Models (%) (%) (%) (%) The best result of Xia et al. (2008) 90.7 88.1 89.4 Q-0-method 86.8 92.2 92.8 92.5 Q-10-method 88.0 93.4 94.1 93.8 Table 2: Comparison with the work of Xia et al. (2008) on WSJ section 22. All the sentences DevScore LR LP F Models (%) (%) (%) (%) Q-0-method 91.0 91.6 92.5 92.1 Q-10-method 91.6 93.1 94.1 93.6 Table 3: Results of our algorithm on WSJ section 2∼18 and 20∼22. We employed Charniak’s maximum entropy inspired parser (Charniak, 2000) to generate N-best (N=200) parses. Xia et al. (2008) used POS tag information, dependency structures and dependency tags in test set for conversion. Similarly, we used POS tag information in the test set to restrict search space of the parser for generation of better N-best parses. We evaluated two variants of our DS to PS conversion algorithm: Q-0-method: We set the value of Q as 0 for a baseline method. Q-10-method: We set the value of Q as 10 to see whether it is helpful for conversion to retrain the parser on converted trees. Table 2 shows the results of our conversion algorithm on WSJ section 22. In the experiment of Q-10-method, DevScore reached the highest value of 88.0% when q was 1. Then we used CDS,1 PS as the conversion result. Finally Q-10method achieved an f-score of 93.8% on WSJ section 22, an absolute 4.4% improvement (42% error reduction) over the best result of Xia et al. (2008). Moreover, Q-10-method outperformed Q0-method on the same test set. These results indicate that it is feasible to use a parser for DS to PS conversion and the conversion benefits from the use of converted trees for parser retraining. 3.2 Evaluation on WSJ section 2∼18 and 20∼22 In this experiment we evaluated our conversion algorithm on a larger test set, WSJ section 2∼18 and 20∼22 (totally 39688 sentences). Here we also used WSJ section 19 as CPS. 
Other settings for All the sentences LR LP F Training data (%) (%) (%) 1 × CTB + CDT P S 84.7 85.1 84.9 2 × CTB + CDT P S 85.1 85.6 85.3 5 × CTB + CDT P S 85.0 85.5 85.3 10 × CTB + CDT P S 85.3 85.8 85.6 20 × CTB + CDT P S 85.1 85.3 85.2 50 × CTB + CDT P S 84.9 85.3 85.1 Table 4: Results of the generative parser on the development set, when trained with various weighting of CTB training set and CDTPS. this experiment are as same as that in Section 3.1, except that here we used a larger test set. Table 3 provides the f-scores of our method with Q equal to 0 or 10 on WSJ section 2∼18 and 20∼22. With Q-10-method, DevScore reached the highest value of 91.6% when q was 1. Finally Q10-method achieved an f-score of 93.6% on WSJ section 2∼18 and 20∼22, better than that of Q-0method and comparable with that of Q-10-method in Section 3.1. It confirms our previous finding that the conversion benefits from the use of converted trees for parser retraining. 4 Experiments of Parsing We investigated our two-step solution on two existing treebanks, CDT and CTB, and we used CDT as the source treebank and CTB as the target treebank. CDT consists of 60k Chinese sentences, annotated with POS tag information and dependency structure information (including 28 POS tags, and 24 dependency tags) (Liu et al., 2006). We did not use POS tag information as inputs to the parser in our conversion method due to the difficulty of conversion from CDT POS tags to CTB POS tags. We used a standard split of CTB for performance evaluation, articles 1-270 and 400-1151 as training set, articles 301-325 as development set, and articles 271-300 as test set. We used Charniak’s maximum entropy inspired parser and their reranker (Charniak and Johnson, 2005) for target grammar parsing, called a generative parser (GP) and a reranking parser (RP) respectively. We reported ParseVal measures from the EVALB tool. 50 All the sentences LR LP F Models Training data (%) (%) (%) GP CTB 79.9 82.2 81.0 RP CTB 82.0 84.6 83.3 GP 10 × CTB + CDT P S 80.4 82.7 81.5 RP 10 × CTB + CDT P S 82.8 84.7 83.8 Table 5: Results of the generative parser (GP) and the reranking parser (RP) on the test set, when trained on only CTB training set or an optimal combination of CTB training set and CDTPS. 4.1 Results of a Baseline Method to Use CDT We used our conversion algorithm6 to convert the grammar formalism of CDT to that of CTB. Let CDTPS denote the converted CDT by our method. The average unlabeled dependency f-score of trees in CDTPS was 74.4%, and their average index in 200-best list was 48. We tried the corpus weighting method when combining CDTPS with CTB training set (abbreviated as CTB for simplicity) as training data, by gradually increasing the weight (including 1, 2, 5, 10, 20, 50) of CTB to optimize parsing performance on the development set. Table 4 presents the results of the generative parser with various weights of CTB on the development set. Considering the performance on the development set, we decided to give CTB a relative weight of 10. Finally we evaluated two parsing models, the generative parser and the reranking parser, on the test set, with results shown in Table 5. When trained on CTB only, the generative parser and the reranking parser achieved f-scores of 81.0% and 83.3%. The use of CDTPS as additional training data increased f-scores of the two models to 81.5% and 83.8%. 
4.2 Results of Two Strategies for a Better Use of CDT 4.2.1 Instance Pruning We used unlabeled dependency f-score of each converted tree as the criterion to rank trees in CDTPS and then kept only the top M trees with high f-scores as training data for parsing, resulting in a corpus CDTPS M . M varied from 100%×|CDTPS| to 10%×|CDTPS| with 10%×|CDTPS| as the interval. |CDTPS| 6The setting for our conversion algorithm in this experiment was as same as that in Section 3.1. In addition, we used CTB training set as CP S,train, and CTB development set as CP S,dev. All the sentences LR LP F Models Training data (%) (%) (%) GP CTB + CDT P S λ 81.4 82.8 82.1 RP CTB + CDT P S λ 83.0 85.4 84.2 Table 6: Results of the generative parser and the reranking parser on the test set, when trained on an optimal combination of CTB training set and converted CDT. is the number of trees in CDTPS. Then we tuned the value of M by optimizing the parser’s performance on the development set with 10×CTB+CDTPS M as training data. Finally the optimal value of M was 100%×|CDT|. It indicates that even removing very few converted trees hurts the parsing performance. A possible reason is that most of non-perfect parses can provide useful syntactic structure information for building parsing models. 4.2.2 Score Interpolation We used d Score(xi,t)7 to replace Score(xi,t) in our conversion algorithm and then ran the updated algorithm on CDT. Let CDTPS λ denote the converted CDT by this updated conversion algorithm. The values of λ (varying from 0.0 to 1.0 with 0.1 as the interval) and the CTB weight (including 1, 2, 5, 10, 20, 50) were simultaneously tuned on the development set8. Finally we decided that the optimal value of λ was 0.4 and the optimal weight of CTB was 1, which brought the best performance on the development set (an f-score of 86.1%). In comparison with the results in Section 4.1, the average index of converted trees in 200-best list increased to 2, and their average unlabeled dependency f-score dropped to 65.4%. It indicates that structures of converted trees become more consistent with the target grammar, as indicated by the increase of average index of converted trees, further away from the source grammar. Table 6 provides f-scores of the generative parser and the reranker on the test set, when trained on CTB and CDTPS λ . We see that the performance of the reranking parser increased to 7Before calculating d Score(xi,t), we normalized the values of Prob(xi,t) for each N-best list by (1) Prob(xi,t)=Prob(xi,t)-Min(Prob(xi,∗)), (2)Prob(xi,t)=Prob(xi,t)/Max(Prob(xi,∗)), resulting in that their maximum value was 1 and their minimum value was 0. 8Due to space constraint, we do not show f-scores of the parser with different values of λ and the CTB weight. 51 All the sentences LR LP F Models Training data (%) (%) (%) Self-trained GP 10×T+10×D+P 83.0 84.5 83.7 Updated RP CTB+CDT P S λ 84.3 86.1 85.2 Table 7: Results of the self-trained generative parser and updated reranking parser on the test set. 10×T+10×D+P stands for 10×CTB+10×CDTPS λ +PDC. 84.2% f-score, better than the result of the reranking parser with CTB and CDTPS as training data (shown in Table 5). It indicates that the use of probability information from the parser for tree conversion helps target grammar parsing. 
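Selecting converted trees by the interpolated score of Equation (1) is a small change to the selection step; the sketch below also applies the min-max normalization of the parser probabilities described in the footnote. The function names are illustrative, not from the paper.

    def minmax_normalize(probs):
        """Rescale the parser probabilities of one N-best list to [0, 1]:
        subtract the minimum, then divide by the resulting maximum."""
        lo = min(probs)
        shifted = [p - lo for p in probs]
        hi = max(shifted)
        return [s / hi if hi else 0.0 for s in shifted]

    def select_by_interpolation(candidates, lam=0.4):
        """candidates: list of (parser_prob, unlabeled_dep_fscore, parse).
        Return the parse maximizing lam * Prob + (1 - lam) * Score (Equation 1)."""
        probs = minmax_normalize([p for p, _, _ in candidates])
        best_i = max(range(len(candidates)),
                     key=lambda i: lam * probs[i] + (1 - lam) * candidates[i][1])
        return candidates[best_i][2]

In the experiments reported above, λ = 0.4 and a CTB weight of 1 were the values selected on the development set.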
4.3 Using Unlabeled Data for Parsing Recent studies on parsing indicate that the use of unlabeled data by self-training can help parsing on the WSJ data, even when labeled data is relatively large (McClosky et al., 2006a; Reichart and Rappoport, 2007). It motivates us to employ self-training technique for Chinese parsing. We used the POS tagged People Daily corpus9 (Jan. 1998∼Jun. 1998, and Jan. 2000∼Dec. 2000) (PDC) as unlabeled data for parsing. First we removed the sentences with less than 3 words or more than 40 words from PDC to ease parsing, resulting in 820k sentences. Then we ran the reranking parser in Section 4.2.2 on PDC and used the parses on PDC as additional training data for the generative parser. Here we tried the corpus weighting technique for an optimal combination of CTB, CDTPS λ and parsed PDC, and chose the relative weight of both CTB and CDTPS λ as 10 by cross validation on the development set. Finally we retrained the generative parser on CTB, CDTPS λ and parsed PDC. Furthermore, we used this self-trained generative parser as a base parser to retrain the reranker on CTB and CDTPS λ . Table 7 shows the performance of self-trained generative parser and updated reranker on the test set, with CTB and CDTPS λ as labeled data. We see that the use of unlabeled data by self-training further increased the reranking parser’s performance from 84.2% to 85.2%. Our results on Chinese data confirm previous findings on English data shown in (McClosky et al., 2006a; Reichart and Rappoport, 2007). 9Available at http://icl.pku.edu.cn/. 4.4 Comparison with Previous Studies for Chinese Parsing Table 8 and 9 present the results of previous studies on CTB. All the works in Table 8 used CTB articles 1-270 as labeled data. In Table 9, Petrov and Klein (2007) trained their model on CTB articles 1-270 and 400-1151, and Burkett and Klein (2008) used the same CTB articles and parse trees of their English translation (from the English Chinese Translation Treebank) as training data. Comparing our result in Table 6 with that of Petrov and Klein (2007), we see that CDTPS λ helps parsing on CTB, which brought 0.9% f-score improvement. Moreover, the use of unlabeled data further boosted the parsing performance to 85.2%, an absolute 1.0% improvement over the previous best result presented in Burkett and Klein (2008). 5 Related Work Recently there have been some studies addressing how to use treebanks with same grammar formalism for domain adaptation of parsers. Roark and Bachiani (2003) presented count merging and model interpolation techniques for domain adaptation of parsers. They showed that their system with count merging achieved a higher performance when in-domain data was weighted more heavily than out-of-domain data. McClosky et al. (2006b) used self-training and corpus weighting to adapt their parser trained on WSJ corpus to Brown corpus. Their results indicated that both unlabeled in-domain data and labeled out-of-domain data can help domain adaptation. In comparison with these works, we conduct our study in a different setting where we work with multiple heterogeneous treebanks. Grammar formalism conversion makes it possible to reuse existing source treebanks for the study of target grammar parsing. Wang et al. (1994) employed a parser to help conversion of a treebank from a simple phrase structure to a more informative phrase structure and then used this converted treebank to train their parser. Collins et al. 
(1999) performed statistical constituency parsing of Czech on a treebank that was converted from the Prague Dependency Treebank under the guidance of conversion rules and heuristic rules, e.g., one level of projection for any category, minimal projection for any dependents, and fixed position of attachment. Xia and Palmer (2001) adopted better heuristic rules to build converted trees, which 52 ≤40 words All the sentences LR LP F LR LP F Models (%) (%) (%) (%) (%) (%) Bikel & Chiang (2000) 76.8 77.8 77.3 Chiang & Bikel (2002) 78.8 81.1 79.9 Levy & Manning (2003) 79.2 78.4 78.8 Bikel’s thesis (2004) 78.0 81.2 79.6 Xiong et. al. (2005) 78.7 80.1 79.4 Chen et. al. (2005) 81.0 81.7 81.2 76.3 79.2 77.7 Wang et. al. (2006) 79.2 81.1 80.1 76.2 78.0 77.1 Table 8: Results of previous studies on CTB with CTB articles 1-270 as labeled data. ≤40 words All the sentences LR LP F LR LP F Models (%) (%) (%) (%) (%) (%) Petrov & Klein (2007) 85.7 86.9 86.3 81.9 84.8 83.3 Burkett & Klein (2008) 84.2 Table 9: Results of previous studies on CTB with more labeled data. reflected the structural preference in their target grammar. For acquisition of better conversion rules, Xia et al. (2008) proposed to automatically extract conversion rules from a target treebank. Moreover, they presented two strategies to solve the problem that there might be multiple conversion rules matching the same input dependency tree pattern: (1) choosing the most frequent rules, (2) preferring rules that add fewer number of nodes and attach the subtree lower. In comparison with the works of Wang et al. (1994) and Collins et al. (1999), we went further by combining the converted treebank with the existing target treebank for parsing. In comparison with previous conversion methods (Collins et al., 1999; Covington, 1994; Xia and Palmer, 2001; Xia et al., 2008) in which for each headdependent pair, only one locally optimal conversion was kept during tree-building process, we employed a parser to generate globally optimal syntactic structures, eliminating heuristic rules for conversion. In addition, we used converted trees to retrain the parser for better conversion candidates, while Wang et al. (1994) did not exploit the use of converted trees for parser retraining. 6 Conclusion We have proposed a two-step solution to deal with the issue of using heterogeneous treebanks for parsing. First we present a parser based method to convert grammar formalisms of the treebanks to the same one, without applying predefined heuristic rules, thus turning the original problem into the problem of parsing on homogeneous treebanks. Then we present two strategies, instance pruning and score interpolation, to refine conversion results. Finally we adopt the corpus weighting technique to combine the converted source treebank with the existing target treebank for parser training. The study on the WSJ data shows the benefits of our parser based approach for grammar formalism conversion. Moreover, experimental results on the Penn Chinese Treebank indicate that a converted dependency treebank helps constituency parsing, and it is better to exploit probability information produced by the parser through score interpolation than to prune low quality trees for the use of the converted treebank. Future work includes further investigation of our conversion method for other pairs of grammar formalisms, e.g., from the grammar formalism of the Penn Treebank to more deep linguistic formalism like CCG, HPSG, or LFG. 
References Anne Abeille, Lionel Clement and Francois Toussenel. 2000. Building a Treebank for French. In Proceedings of LREC 2000, pages 87-94. Daniel Bikel and David Chiang. 2000. Two Statistical Parsing Models Applied to the Chinese Treebank. In Proceedings of the Second SIGHAN workshop, pages 1-6. Daniel Bikel. 2004. On the Parameter Space of Generative Lexicalized Statistical Parsing Models. Ph.D. thesis, University of Pennsylvania. Alena Bohmova, Jan Hajic, Eva Hajicova and Barbora Vidova-Hladka. 2003. The Prague Dependency Treebank: A Three-Level Annotation Scenario. Treebanks: 53 Building and Using Annotated Corpora. Kluwer Academic Publishers, pages 103-127. Thorsten Brants, Wojciech Skut and Hans Uszkoreit. 1999. Syntactic Annotation of a German Newspaper Corpus. In Proceedings of the ATALA Treebank Workshop, pages 6976. David Burkett and Dan Klein. 2008. Two Languages are Better than One (for Syntactic Parsing). In Proceedings of EMNLP 2008, pages 877-886. Eugene Charniak. 2000. A Maximum Entropy Inspired Parser. In Proceedings of NAACL 2000, pages 132-139. Eugene Charniak and Mark Johnson. 2005. Coarse-to-Fine N-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of ACL 2005, pages 173-180. Ying Chen, Hongling Sun and Dan Jurafsky. 2005. A Corrigendum to Sun and Jurafsky (2004) Shallow Semantic Parsing of Chinese. University of Colorado at Boulder CSLR Tech Report TR-CSLR-2005-01. David Chiang and Daniel M. Bikel. 2002. Recovering Latent Information in Treebanks. In Proceedings of COLING 2002, pages 1-7. Micheal Collins, Lance Ramshaw, Jan Hajic and Christoph Tillmann. 1999. A Statistical Parser for Czech. In Proceedings of ACL 1999, pages 505-512. Micheal Covington. 1994. GB Theory as Dependency Grammar. Research Report AI-1992-03. Martin Forst. 2003. Treebank Conversion - Establishing a Testsuite for a Broad-Coverage LFG from the TIGER Treebank. In Proceedings of LINC at EACL 2003, pages 25-32. Chunghye Han, Narae Han, Eonsuk Ko and Martha Palmer. 2002. Development and Evaluation of a Korean Treebank and its Application to NLP. In Proceedings of LREC 2002, pages 1635-1642. Sadao Kurohashi and Makato Nagao. 1998. Building a Japanese Parsed Corpus While Improving the Parsing System. In Proceedings of LREC 1998, pages 719-724. Roger Levy and Christopher Manning. 2003. Is It Harder to Parse Chinese, or the Chinese Treebank? In Proceedings of ACL 2003, pages 439-446. Ting Liu, Jinshan Ma and Sheng Li. 2006. Building a Dependency Treebank for Improving Chinese Parser. Journal of Chinese Language and Computing, 16(4):207-224. Mitchell P. Marcus, Beatrice Santorini and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330. David McClosky, Eugene Charniak and Mark Johnson. 2006a. Effective Self-Training for Parsing. In Proceedings of NAACL 2006, pages 152-159. David McClosky, Eugene Charniak and Mark Johnson. 2006b. Reranking and Self-Training for Parser Adaptation. In Proceedings of COLING/ACL 2006, pages 337344. Antonio Moreno, Susana Lopez, Fernando Sanchez and Ralph Grishman. 2003. Developing a Syntactic Annotation Scheme and Tools for a Spanish Treebank. Treebanks: Building and Using Annotated Corpora. Kluwer Academic Publishers, pages 149-163. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of HLT/NAACL 2007, pages 404-411. Roi Reichart and Ari Rappoport. 2007. 
Self-Training for Enhancement and Domain Adaptation of Statistical Parsers Trained on Small Datasets. In Proceedings of ACL 2007, pages 616-623. Brian Roark and Michiel Bacchiani. 2003. Supervised and Unsupervised PCFG Adaptation to Novel Domains. In Proceedings of HLT/NAACL 2003, pages 126-133. Jong-Nae Wang, Jing-Shin Chang and Keh-Yih Su. 1994. An Automatic Treebank Conversion Algorithm for Corpus Sharing. In Proceedings of ACL 1994, pages 248-254. Mengqiu Wang, Kenji Sagae and Teruko Mitamura. 2006. A Fast, Accurate Deterministic Parser for Chinese. In Proceedings of COLING/ACL 2006, pages 425-432. Stephen Watkinson and Suresh Manandhar. 2001. Translating Treebank Annotation for Evaluation. In Proceedings of ACL Workshop on Evaluation Methodologies for Language and Dialogue Systems, pages 1-8. Fei Xia and Martha Palmer. 2001. Converting Dependency Structures to Phrase Structures. In Proceedings of HLT 2001, pages 1-5. Fei Xia, Rajesh Bhatt, Owen Rambow, Martha Palmer and Dipti Misra. Sharma. 2008. Towards a MultiRepresentational Treebank. In Proceedings of the 7th International Workshop on Treebanks and Linguistic Theories, pages 159-170. Deyi Xiong, Shuanglong Li, Qun Liu, Shouxun Lin and Yueliang Qian. 2005. Parsing the Penn Chinese Treebank with Semantic Knowledge. In Proceedings of IJCNLP 2005, pages 70-81. Nianwen Xue, Fei Xia, Fu-Dong Chiou and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase Structure Annotation of a Large Corpus. Natural Language Engineering, 11(2):207-238. 54
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 531–539, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Linefeed Insertion into Japanese Spoken Monologue for Captioning Tomohiro Ohno Graduate School of International Development, Nagoya University, Japan [email protected] Masaki Murata Graduate School of Information Science, Nagoya University, Japan [email protected] Shigeki Matsubara Information Technology Center, Nagoya University, Japan [email protected] Abstract To support the real-time understanding of spoken monologue such as lectures and commentaries, the development of a captioning system is required. In monologues, since a sentence tends to be long, each sentence is often displayed in multi lines on one screen, it is necessary to insert linefeeds into a text so that the text becomes easy to read. This paper proposes a technique for inserting linefeeds into a Japanese spoken monologue text as an elemental technique to generate the readable captions. Our method appropriately inserts linefeeds into a sentence by machine learning, based on the information such as dependencies, clause boundaries, pauses and line length. An experiment using Japanese speech data has shown the effectiveness of our technique. 1 Introduction Real-time captioning is a technique for supporting the speech understanding of deaf persons, elderly persons, or foreigners by displaying transcribed texts of monologue speech such as lectures. In recent years, there exist a lot of researches about automatic captioning, and the techniques of automatic speech recognition (ASR) aimed for captioning have been developed (Boulianne et al., 2006; Holter et al., 2000; Imai et al., 2006; Munteanu et al., 2007; Saraclar et al., 2002; Xue et al., 2006). However, in order to generate captions which is easy to read, it is important not only to recognize speech with high recognition rate but also to properly display the transcribed text on a screen (Hoogenboom et al., 2008). Especially, in spoken monologue, since a sentence tends to be long, each sentence is often displayed as a multi-line text on a screen. Therefore, proper linefeed insertion for the displayed text is desired so that the text becomes easy to read. Until now, there existed few researches about how to display text on a screen in automatic captioning. As the research about linefeed insertion, Monma et al. proposed a method based on patterns of a sequence of morphemes (Monma et al., 2003). However, the target of the research is closed-captions of Japanese TV shows, in which less than or equal to 2 lines text is displayed on a screen and the text all switches to other text at a time. In the work, the highest priority concept on captioning is that one screen should be filled with as much text as possible. Therefore, a semantic boundary in a sentence is hardly taken into account in linefeed insertion, and the readability of the caption is hardly improved. This paper proposes a technique for inserting linefeeds into transcribed texts of Japanese monologue speech as an elemental technique to generate readable captions. We assume that a screen for displaying only multi-line caption is placed to provide the caption information to the audience on the site of a lecture. 
In our method, the linefeeds are inserted into only the boundaries between bunsetsus1, and the linefeeds are appropriately inserted into a sentence by machine learning, based on the information such as morphemes, dependencies2, clause boundaries, pauses and line length. We conducted an experiment on inserting linefeeds by using Japanese spoken monologue data. As the results of inserting linefeeds for 1,714 sentences, the recall and precision of our method were 82.66% and 80.24%, respectively. Our method improved the performance dramatically compared 1Bunsetsu is a linguistic unit in Japanese that roughly corresponds to a basic phrase in English. A bunsetsu consists of one independent word and zero or more ancillary words. 2A dependency in Japanese is a modification relation in which a modifier bunsetsu depends on a modified bunsetsu. That is, the modifier bunsetsu and the modified bunsetsu work as modifier and modifyee, respectively. 531 Figure 1: Caption display of spoken monologue with four baseline methods, which we established for comparative evaluation. The effectiveness of our method has been confirmed. This paper is organized as follows: The next section describes our assumed caption and the preliminary analysis. Section 3 presents our linefeed insertion technique. An experiment and discussion are reported in Sections 4 and 5, respectively. Finally, Section 6 concludes the paper. 2 Linefeed Insertion for Spoken Monologue In our research, in an environment in which captions are displayed on the site of a lecture, we assume that a screen for displaying only captions is used. In the screen, multi lines are always displayed, being scrolled line by line. Figure 1 shows our assumed environment in which captions are displayed. As shown in Figure 2, if the transcribed text of speech is displayed in accordance with only the width of a screen without considering the proper points of linefeeds, the caption is not easy to read. Especially, since the audience is forced to read the caption in synchronization with the speaker’s utterance speed, it is important that linefeeds are properly inserted into the displayed text in consideration of the readability as shown in Figure 3. To investigate whether the line insertion facilitates the readability of the displayed texts, we conducted an experiment using the transcribed text of lecture speeches in the Simultaneous Interpretation Database (SIDB) (Matsubara et al., 2002). We randomly selected 50 sentences from the data, and then created the following two texts for each sentence based on two different concepts about linefeed insertion. (1)Text into which linefeeds were forcibly inserted once every 20 characters 例えば環境の問題あるいは人口の問題エイズ の問題などなど地球規模の問題たくさん生じ ておりますが残念ながらこれらの問題は二十 一世紀にも継続しあるいは悲観的な見方をす ればさらに悪くなるという風に思われます For example, environmental problem, population problem, AIDS problem and so on, a lot of global-scale problems have occurred, and unfortunately, these problems seem to continue during 21st century or to become worse if we look through blue glasses. 
Figure 2: Caption of monologue speech 例えば環境の問題 あるいは人口の問題 エイズの問題などなど 地球規模の問題たくさん生じておりますが 残念ながらこれらの問題は 二十一世紀にも継続し あるいは悲観的な見方をすれば さらに悪くなるという風に思われます (For example, environmental problem) (population problem) (AIDS problem and so on) a lot of global-scale problems have occurred (and unfortunately, these problems) (to continue during also 21st century) (or if we look through blue glasses) (seems to become worse) Figure 3: Caption into which linefeeds are properly inserted 49 50 49 37 40 36 43 48 34 49 1 2 3 4 5 6 7 8 9 10 (1)Forcible insertion of linefeeds (2)Proper insertion of linefeeds subject ID # of sentences 50 45 40 35 30 25 20 15 10 5 0 Figure 4: Result of investigation of effect of linefeed insertion into transcription (2)Text into which linefeeds were properly inserted in consideration of readability by hand3 Figure 2 and 3 show examples of the text (1) and (2), respectively. 10 examinees decided which of the two texts was more readable. Figure 4 shows the result of the investigation. The ratio that each examinee selected text (2) was 87.0% on average. There was no sentence in the text group (1) which was selected by more than 5 examinees. These indicates that a text becomes more readable by proper insertion of linefeeds. Here, since a bunsetsu is the smallest semantically meaningful language unit in Japanese, our method adopts the bunsetsu boundaries as the candidates of points into which a linefeed is inserted. In this paper, hereafter, we call a bunsetsu boundary into which a linefeed is inserted a linefeed point. 33 persons inserted linefeeds into the 50 sentences by discussing where to insert the linefeeds. 532 Table 1: Size of analysis data sentence 221 bunsetsu 2,891 character 13,899 linefeed 883 character per line 13.2 3 Preliminary Analysis about Linefeed Points In our research, the points into which linefeeds should be inserted is detected by using machine learning. To find the effective features, we investigated the spoken language corpus. In our investigation, we used Japanese monologue speech data in the SIDB (Matsubara et al., 2002). The data is annotated by hand with information on morphological analysis, bunsetsu segmentation, dependency analysis, clause boundary detection, and linefeeds insertion. Table 1 shows the size of the analysis data. Among 2,670 (= 2, 891−221) bunsetsu boundaries, which are candidates of linefeed points, there existed 833 bunsetsu boundaries into which linefeeds were inserted, that is, the ratio of linefeed insertion was 31.2%. The linefeeds were inserted by hand so that the maximum number of characters per line is 20. We set the number in consideration of the relation between readability and font size on the display. In the analysis, we focused on the clause boundary, dependency relation, line length, pause and morpheme of line head, and investigated the relations between them and linefeed points. 3.1 Clause Boundary and Linefeed Point Since a clause is one of semantically meaningful language units, the clause boundary is considered to be a strong candidate of a linefeed point. In the analysis data, there existed 969 clause boundaries except sentence breaks. Among them, 490 were the points into which linefeeds were inserted, that is, the ratio of linefeed insertion was 51.1%. This ratio is higher than that of bunsetsu boundaries. This indicates that linefeeds tend to be inserted into clause boundaries. We investigated the ratio of linefeed insertion about 42 types4 of clause boundaries, which were seen in the analysis data. 
Table 2 shows the top 10 4In our research, we used the types of clause boundaries defined by the Clause Boundary Annotation Program (Kashioka and Maruyama, 2004). Table 2: Ratio of linefeed insertion for clause boundary type type of ratio of linefeed clause boundary insertion (%) topicalized element-wa 50.8 discourse marker 12.0 quotational clause 22.1 adnominal clause 23.3 compound clause-te 90.2 supplement clause 68.0 compound clause-ga 100.0 compound clause-keredomo 100.0 condition clause-to 93.5 adnominal clause-toiu 27.3 clause boundary types about the occurrence frequency, and each ratio of linefeed insertion. In the case of “compound clause-ga” and “compound clause-keredomo,” the ratio of linefeed insertion was 100%. On the other hand, in the case of “quotational clause,” “adnominal clause” and “adnominal clause-toiu,” the ratio of linefeed insertion was less than 30%. This means that the likelihood of linefeed insertion is different according to the type of the clause boundary. 3.2 Dependency Structure and Linefeed Point When a bunsetsu depends on the next bunsetsu, it is thought that a linefeed is hard to be inserted into the bunsetsu boundary between them because the sequence of such bunsetsus constitutes a semantically meaningful unit. In the analysis data, there existed 1,459 bunsetsus which depend on the next bunsetsu. Among the bunsetsu boundaries right after them, 192 were linefeed points, that is, the ratio of linefeed insertions was 13.2%. This ratio is less than half of that for all the bunsetsu boundaries. On the other hand, when the bunsetsu boundary right after the bunsetsu which does not depend on the next bunsetsu, the ratio of linefeed insertion was 52.7%. Next, we focused on the type of the dependency relation, by which the likelihood of linefeed insertion is different. For example, when the bunsetsu boundary right after a bunsetsu on which the final bunsetsu of an adnominal clause depends, the ratio of linefeed insertion was 43.1%. This ratio is higher than that for all the bunsetsu boundaries. In addition, we investigated the relation be533 古い国産車ばかりを掲載する雑誌の記者が 私の車を取材したいといってきているので : dependency relation :bunsetsu [Dependency structure] [Result of linefeed insertion in the analysis data] A writer of the magazine in which only old domestic cars are covered asks to get a story about my car 古い 国産車 ばかりを 掲載する 雑誌の 記者が私の車を 取材 したいと いって きてるので only domestic cars in which are covered old of the magazine a writer my car to get a story about ask Figure 5: Relation between dependency structure and linefeed points tween a dependency structure and linefeed points, that is, whether the dependency structure is closed within a line or not. Here, a line whose dependency structure is closed means that all bunsetsus, except the final bunsetsu, in the line depend on one of bunsetsus in the line. Since, in many of semantically meaningful units, the dependency structure is closed, the dependency structure of a line is considered to tend to be closed. In the analysis data, among 883 lines, 599 lines’ dependency structures were closed. Figure 5 shows the relation between dependency structure and linefeed points. In this example, linefeeds are not inserted right after bunsetsus which depend on the next bunsetsu (e.g. “私 の(my)” and “車を(car)”). Instead, a linefeed is inserted right after a bunsetsu which does not depend on the next bunsetsu (“記者が(a writer)”). In addition, the dependency structure in each line is closed. 
3.3 Line Length and Linefeed Point An extremely-short line is considered to be hardly generated because the readability goes down if the length of each line is very different. In the analysis data, a line whose length is less than or equal to 6 characters occupied only 7.59% of the total. This indicates that linefeeds tend to be inserted into the place where a line can maintain a certain length. 3.4 Pause and Linefeed Point It is thought that a pause corresponds to a syntactic boundary. Therefore, there are possibility that a linefeed becomes more easily inserted into a bunsetsu boundary at which a pause exists. In our research, a pause is defined as a silent interval equal to or longer than 200ms. In the analysis data, among 748 bunsetsu boundaries at which a pause exists, linefeeds were inserted into 471 bunsetsu boundaries, that is, the ratio of linefeed insertion was 62.97%. This ratio is higher than that for all the bunsetsu boundaries, thus, we confirmed that linefeeds tend to be inserted into bunsetsu boundaries at which a pause exists. 3.5 Morpheme Located in the Start of a Line There exist some morphemes which are unlikely to become a line head. We investigated the ratio that each leftmost morpheme of all the bunsetsus appears at a line head. Here, we focused on the basic form and part-of-speech of a morpheme. The morphemes which appeared 20 times and of which the ratio of appearance at a line head was less than 10% were as follows: • Basic form: “思う(think) [2/70]”, “問題(problem) [0/42]”, “する(do) [3/33]”, “なる(become) [2/32]”,“必要(necessary) [1/21]” • Part-of-speech: noun-non independent-general [0/40], noun-nai adjective stem [0/40], noun-non independent-adverbial [(0/27] If the leftmost morpheme of a bunsetsu is one of these, it is thought that a linefeed is hardly inserted right after the bunsetsu. 4 Linefeed Insertion Technique In our method, a sentence, on which morphological analysis, bunsetsu segmentation, clause boundary analysis and dependency analysis are performed, is considered the input. Our method decides whether or not to insert a linefeed into each bunsetsu boundary in an input sentence. Under the condition that the number of characters in each line has to be less than or equal to the maximum number of characters per line, our method identifies the most appropriate combination among all combinations of the points into which linefeeds can be inserted, by using the probabilistic model. In this paper, we describe an input sentence which consists of n bunsetsus as B = b1 · · · bn, and the result of linefeeds insertion as R = r1 · · · rn. Here, ri is 1 if a linefeed is inserted right after bunsetsu bi, and is 0 otherwise. We describe a sequence of bunsetsus in the j-th line among the m lines created by dividing an input sentence as Lj = bj 1 · · · bj nj(1 ≤j ≤m), and then, rj k = 0 if k ̸= nj, and rj k = 1 otherwise. 534 4.1 Probabilistic Model for Linefeed Insertion When an input sentence B is provided, our method identifies the result of linefeeds insertion R, which maximizes the conditional probability P(R|B). 
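Given a way to score a candidate line (supplied by the probabilistic model described next, in Section 4.1), the best combination of linefeed points under the line-length constraint can be found by dynamic programming over the position of the previous linefeed. The sketch below is a minimal illustration under that assumption; line_score(B, i, j) is a hypothetical function returning the model's score for forming one line from bunsetsus b_{i+1}, ..., b_j.

    import math

    def insert_linefeeds(bunsetsus, line_score, max_chars=20):
        """Return the indices i (1-based) after which a linefeed is inserted,
        maximizing the product of per-line scores subject to the length limit."""
        n = len(bunsetsus)
        best = [None] * (n + 1)          # best[j] = (log score, previous break) for b_1..b_j
        best[0] = (0.0, 0)
        for j in range(1, n + 1):
            for i in range(j - 1, -1, -1):
                if sum(len(b) for b in bunsetsus[i:j]) > max_chars:
                    break                # moving i further left only lengthens the line
                if best[i] is None:
                    continue
                score = best[i][0] + math.log(line_score(bunsetsus, i, j))
                if best[j] is None or score > best[j][0]:
                    best[j] = (score, i)
        assert best[n] is not None, "a single bunsetsu exceeds the line length limit"
        breaks, j = [], n                # back-trace the optimal sequence of breaks
        while j > 0:
            i = best[j][1]
            if i > 0:
                breaks.append(i)
            j = i
        return sorted(breaks)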
Assuming that whether or not a linefeed is inserted right after a bunsetsu is independent of other linefeed points except the linefeed point of the start of the line which contains the bunsetsu, P(R|B) can be calculated as follows: P(R|B) (1) = P(r1 1 = 0, · · · , r1 n1 = 1, · · · , rm 1 = 0, · · · , rm nm = 1|B) ∼= P(r1 1 = 0|B) × P(r1 2 = 0|r1 1 = 0, B) × · · · ×P(r1 n1 = 1|r1 n1−1 = 0, · · · , r1 1 = 0, B) × · · · ×P(rm 1 = 0|rm−1 nm−1 = 1, B) × · · · ×P(rm m = 1|rm nm−1 = 0, · · · , rm 1 = 0, rm−1 nm−1 = 1, B) where P(rj k = 1|rj k−1 = 0, · · · , rj 1 = 0, rj−1 nj−1 = 1, B) is the probability that a linefeed is inserted right after a bunsetsu bj k when the sequence of bunsetsus B is provided and the linefeed point of the start of the j-th line is identified. Similarly, P(rj k = 0|rj k−1 = 0, · · · , rj 1 = 0, rj−1 nj−1 = 1, B) is the probability that a linefeed is not inserted right after a bunsetsu bj k. These probabilities are estimated by the maximum entropy method. The result R which maximizes the conditional probability P(R|B) is regarded as the most appropriate result of linefeed insertion, and calculated by dynamic programming. 4.2 Features on Maximum Entropy Method To estimate P(rj k = 1|rj k−1 = 0, · · · , rj 1 = 0, rj−1 nj−1 = 1, B) and P(rj k = 0|rj k−1 = 0, · · · , rj 1 = 0, rj−1 nj−1 = 1, B) by the maximum entropy method, we used the following features based on the analysis described in Section 2.2. Morphological information • the rightmost independent morpheme (a partof-speech, an inflected form) and rightmost morpheme (a part-of-speech) of a bunsetsu bj k Clause boundary information • whether or not a clause boundary exists right after bj k • a type of the clause boundary right after bj k (if there exists a clause boundary) Dependency information • whether or not bj k depends on the next bunsetsu • whether or not bj k depends on the final bunsetsu of a clause • whether or not bj k depends on a bunsetsu to which the number of characters from the start of the line is less than or equal to the maximum number of characters • whether or not bj k is depended on by the final bunsetsu of an adnominal clause • whether or not bj k is depended on by the bunsetsu located right before it • whether or not the dependency structure of a sequence of bunsetsus between bj k and bj 1, which is the first bunsetsu of the line, is closed • whether or not there exists a bunsetsu which depends on the modified bunsetsu of bj k, among bunsetsus which are located after bj k and to which the number of characters from the start of the line is less than or equal to the maximum number of characters Line length • any of the following is the class into which the number of characters from the start of the line to bj k is classified – less than or equal to 2 – more than 2 and less than or equal to 6 – more than 6 Pause • whether or not a pause exists right after bj k Leftmost morpheme of a bunsetsu • whether or not the basic form or part-ofspeech of the leftmost morpheme of the next bunsetsu of bj k is one of the morphemes enumerated in Section 3.5. 5 Experiment To evaluate the effectiveness of our method, we conducted an experiment on inserting linefeeds by using discourse speech data. 5.1 Outline of Experiment As the experimental data, we used the transcribed data of Japanese discourse speech in the SIDB (Matsubara et al., 2002). All the data are annotated with information on morphological analysis, clause boundary detection and dependency analysis by hand. 
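From annotations of this kind, the Section 4.2 features for each bunsetsu boundary can be assembled before the maximum entropy model is applied. The sketch below covers only a representative subset of the features; the field names and record layout are assumptions made for illustration, not the actual SIDB format.

```python
# Sketch of assembling a subset of the Section 4.2 features for the boundary
# after bunsetsu b_k, given the index of the bunsetsu that starts the current
# line. Field names and the record layout are illustrative assumptions.

def length_class(n):
    """Class of the number of characters from the start of the line."""
    if n <= 2:
        return "<=2"
    if n <= 6:
        return "3-6"
    return ">6"

def boundary_features(bunsetsus, k, line_start):
    b = bunsetsus[k]
    return {
        # morphological information
        "rightmost_independent_pos": b["rightmost_independent_pos"],
        "rightmost_pos": b["rightmost_pos"],
        # clause boundary information
        "has_clause_boundary": b["clause_boundary_type"] is not None,
        "clause_boundary_type": b["clause_boundary_type"],
        # dependency information
        "depends_on_next": b["head"] == k + 1,
        # pause information
        "pause_after": b["pause_after"],
        # line length from the start of the current line up to b_k
        "line_length_class": length_class(sum(x["num_chars"]
                                              for x in bunsetsus[line_start:k + 1])),
    }

toy = [
    {"rightmost_independent_pos": "verb", "rightmost_pos": "particle",
     "clause_boundary_type": "compound clause-te", "head": 2,
     "pause_after": True, "num_chars": 6},
    {"rightmost_independent_pos": "noun", "rightmost_pos": "particle",
     "clause_boundary_type": None, "head": 2,
     "pause_after": False, "num_chars": 4},
    {"rightmost_independent_pos": "verb", "rightmost_pos": "auxiliary",
     "clause_boundary_type": None, "head": -1,
     "pause_after": False, "num_chars": 7},
]
print(boundary_features(toy, 0, line_start=0))
```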
We performed a cross-validation experiment by using 16 discourses. That is, we 535 repeated the experiment, in which we used one discourse from among 16 discourses as the test data and the others as the learning data, 16 times. However, since we used 2 discourse among 16 discourses as the preliminary analysis data, we evaluated the experimental result for the other 14 discourses (1,714 sentences, 20,707 bunsetsus). Here, we used the maximum entropy method tool (Zhang, 2008) with the default options except “-i 2000.” In the evaluation, we obtained recall, precision and the ratio of sentences into which all linefeed points were correctly inserted (hereinafter called sentence accuracy). The recall and precision are respectively defined as follows. recall = # of correctly inserted LFs # of LFs in the correct data precision = # of correctly inserted LFs # of automatically inserted LFs For comparison, we established the following four baseline methods. 1. Linefeeds are inserted into the rightmost bunsetsu boundaries among the bunsetsu boundaries into which linefeeds can be inserted so that the length of the line does not exceed the maximum number of characters (Linefeed insertion based on bunsetsu boundaries). 2. Linefeeds are inserted into the all clause boundaries (Linefeed insertion based on clause boundaries). 3. Linefeeds are inserted between adjacent bunsetsus which do not depend on each other (Linefeed insertion based on dependency relations). 4. Linefeeds are inserted into the all bunsetsu boundaries in which a pause exists (Linefeed insertion based on pauses). In the baseline 2, 3 and 4, if each condition is not fulfilled within the maximum number of characters, a linefeed is inserted into the rightmost bunsetsu boundary as well as the baseline 1. In the experiment, we defined the maximum number of characters per line as 20. The correct data of linefeed insertion were created by experts who were familiar with displaying captions. There existed 5,497 inserted linefeeds in the 14 discourses, which were used in the evaluation. Table 3: Experimental results recall (%) precision (%) F-measure our method 82.66 80.24 81.43 (4,544/5,497) (4,544/5,663) baseline 1 27.47 34.51 30.59 (1,510/5,497) (1,510/4,376) baseline 2 69.34 48.65 57.19 (3,812/5,497) (3,812/7,834) baseline 3 89.48 53.73 67.14 (4,919/5,497) (4,919/9,155) baseline 4 69.84 55.60 61.91 (3,893/5,497) (3,893/6,905) 5.2 Experimental Result Table 3 shows the experimental results of the baselines and our method. The baseline 1 is very simple method which inserts linefeeds into the bunsetsu boundaries so that the length of the line does not exceed the maximum number of characters per line. Therefore, the recall and precision were the lowest. In the result of baseline 2, the precision was low. As described in the Section 3.1, the degree in which linefeeds are inserted varies in different types of clause boundaries. In the baseline 2, because linefeeds are also inserted into clause boundaries which have the tendency that linefeeds are hardly inserted, the unnecessary linefeeds are considered to have been inserted. The recall of baseline 3 was very high. This is because, in the correct data, linefeeds were hardly inserted between two neighboring bunsetsus which are in a dependency relation. However, the precision was low, because, in the baseline 3, linefeeds are invariably inserted between two neighboring bunsetsus which are not in a dependency relation. In the baseline 4, both the recall and precision were not good. 
The possible reason is that the bunsetsu boundaries at which a pause exists do not necessarily correspond to the linefeed points. On the other hand, the F-measure and the sentence accuracy of our method were 81.43 and 53.15%, respectively. Both of them were highest among those of the four baseline, which showed an effectiveness of our method. 5.3 Causes of Incorrect Linefeed Insertion In this section, we discuss the causes of the incorrect linefeed insertion occurred in our method. Among 1,119 incorrectly inserted linefeeds, the most frequent cause was that linefeeds were in536 以上がこの第一期と私が勝手に呼 呼 呼 呼んでる んでる んでる んでる 時期 時期 時期 時期でございます でございます でございます でございます That is the period which I call the first period without apology Figure 6: Example of incorrect linefeed insertion in “adnominal clause.” どこまで詳しくお話しできるか 不安ですが 堅いお話しからやわらかいお話 織り交ぜてお話ししていこうと思います (about how detail I can speak) (I have a concern) (from serious story to easy story ) (I want to speak) Figure 7: Example of extra linefeed insertion serted into clause boundaries of a “adnominal clause” type. The cause occupies 10.19% of the total number of the incorrectly inserted linefeeds. In the clause boundaries of the “adnominal clause” type, linefeeds should rarely be inserted fundamentally. However, in the result of our method, a lot of linefeeds were inserted into the “adnominal clause.” Figure 6 shows an example of those results. In this example, a linefeed is inserted into the “adnominal clause” boundary which is located right after the bunsetsu “呼んでる(call).” The semantic chunk “呼んでる時期でございます(is the period which I call)” is divided. As another cause, there existed 291 linefeeds which divide otherwise one line according to the correct data into two lines. Figure 7 shows an example of the extra linefeed insertion. Although, in the example, a linefeed is inserted between “どこ まで詳しくお話しできるか(about how detail I can speak)” and “不安ですが(I have a concern),” the two lines are displayed in one line in the correct data. It is thought that, in our method, linefeeds tend to be inserted even if a line has space to spare. 6 Discussion In this section, we discuss the experimental results described in Section 5 to verify the effectiveness of our method in more detail. 6.1 Subjective Evaluation of Linefeed Insertion Result The purpose of our research is to improve the readability of the spoken monologue text by our linefeed insertion. Therefore, we conducted a subjective evaluation of the texts which were generated by the above-mentioned experiment. In the subjective evaluation, examinees looked at the two texts placed side-by-side between which the only difference is linefeed points, and then se35 34 40 45 39 48 45 47 47 44 1 2 3 4 5 6 7 8 9 10 Baseline 3 Our method subject ID # of sentences 50 45 40 35 30 25 20 15 10 5 0 Figure 8: Result of subjective evaluation lected the one which was felt more readable. Here, we compared our method with the baseline 3, of which F-measure was highest among four baselines described in Section 5.1. Ten examinees evaluated 50 pairs of the results generated from the same 50 randomly selected sentences. Figure 8 shows the result of subjective evaluation. This graph shows the number of each method selected by each examinee. The ratio that our method was selected was 94% in the highest case, and 68% even in the lowest case. We confirmed the effectiveness of our method for improving the readability of the spoken monologue text. 
On the other hand, there existed three sentences for which more than 5 examinees judged that the results of baseline 3 were more readable than those of our method. From the analysis of the three sentences, we found the following phenomena caused text to be less readable • Japanese syllabary characters (Hiragana) are successionally displayed across a bunsetsu boundary. • The length of anteroposterior lines is extremely different each other. Each example of the two causes is shown in Figure 9 and 10, respectively. In Figure 9, a bunsetsu boundary existed between Japanese syllabary characters “私もですね(I)” and “かくゆう (if truth be told)” and these characters are successionally displayed in the same line. In these cases, it becomes more difficult to identify the bunsetsu boundary, therefore, the text is thought to become difficult to read. In Figure 10, since the length of the second line is extremely shorter than the first line or third line, the text is thought to become difficult to read. 537 実は私もですねかくゆう私も 大学生の頃はよくキセルをしておりまして 捕まったものです (Actually, I, if truth be told, I) when I was a college student, (I) used to dodge my train fare and (be caught ) Actually, I, if truth be told, I used to dodge my train fare and be caught when I was a college student. Figure 9: Example of succession of hiragana 私は残り少なくなったエネルギー資源を 巡って 過去と未来の人間たちが戦いを繰り広げる エスエフ小説を書いていました I, the energy resources of which the remaining amount became little in which humans who are in the past and future fight (wrote a science-fiction novel) (over) I wrote a science-fiction novel, in which humans who are in the past and future fight over the energy resources of which the remaining amount became little. Figure 10: Lines that have extremely different length Table 4: Other annotator’s results recall (%) precision (%) F-measure by human 89.82 (459/511) 89.82 (459/511) 89.82 our method 82.19 (420/511) 81.71 (420/514) 81.95 6.2 Comparison with Linefeeds Inserted by Human The concept of linefeed insertion for making the caption be easy to read varies by the individual. When multiple people insert linefeeds for the same text, there is possibility that linefeeds are inserted into different points. Therefore, for one lecture data (128 sentences, 511 bunsetsus) in the experimental data, we conducted an experiment on linefeed insertion by an annotator who was not involved in the construction of the correct data. Table 4 shows the recall and the precision. The second line shows the result of our method for the same lecture data. In F-measure, our method achieved 91.24% (81.95/89.82) of the result by the human annotator. 6.3 Performance of Linefeed Insertion Based on Automatic Natural Language Analysis In the experiment described in Section 5, we used the linguistic information provided by human as the features on the maximum entropy method. However, compared with baseline 1, our method uses a lot of linguistic information which should be provided not by human but by natural language analyzers under the real situation. Therefore, to fairly evaluate our method and four baselines, we conducted an experiment on linefeed insertion by using the automatically provided information on clause boundaries and dependency structures5. 5We used CBAP (Kashioka and Maruyama, 2004) as a clause boundary analyzer and CaboCha (Kudo and Matsumoto, 2002) with default learning data as a dependency parser. 
Table 5: Experimental results when information of features are automatically provided recall (%) precision (%) F-measure our method 77.37 75.04 76.18 (4,253/5,497) (4,253/5,668) baseline 1 27.47 34.51 30.59 (1,510/5,497) (1,510/4,376) baseline 2 69.51 48.63 57.23 (3,821/5,497) (3,821/7,857) baseline 3 84.01 52.03 64.26 (4,618/5,497) (4,618/8,876) baseline 4 69.84 55.60 61.91 (3,893/5,497) (3.893/6,905) Table 5 shows the result. Compared with Table 3, it shows the decreasing rate of the performance of our method was more than those of four baselines which use simply only basic linguistic information. However, the F-measure of our method was more than 10% higher than those of four baselines. 7 Conclusion This paper proposed a method for inserting linefeeds into discourse speech data. Our method can insert linefeeds so that captions become easy to read, by using machine learning techniques on features such as morphemes, dependencies, clause boundaries, pauses and line length. An experiment by using transcribed data of Japanese discourse speech showed the recall and precision was 82.66% and 80.24%, respectively, and we confirmed the effectiveness of our method. In applying the linefeed insertion technique to practical real-time captioning, we have to consider not only the readability but also the simultaneity. Since the input of our method is a sentence which tends to be long in spoken monologue, in the future, we will develop more simultaneous a technique in which the input is shorter than a sentence. In addition, we assumed the speech recognition system with perfect performance. To demonstrate practicality of our method for automatic speech transcription, an experiment using a continuous speech recognition system will be performed in the future. Acknowledgments This research was partially supported by the Grant-in-Aid for Scientific Research (B) (No. 20300058) and Young Scientists (B) (No. 21700157) of JSPS, and by The Asahi Glass Foundation. 538 References G. Boulianne, J.-F. Beaumont, M. Boisvert, J. Brousseau, P. Cardinal, C. Chapdelaine, M. Comeau, P. Ouellet, and F. Osterrath. 2006. Computer-assisted closed-captioning of live TV broadcasts in French. In Proceedings of 9th International Conference on Spoken Language Processing, pages 273–276. T. Holter, E. Harborg, M. H. Johnsen, and T. Svendsen. 2000. ASR-based subtitling of live TV-programs for the hearing impaired. In Proceedings of 6th International Conference on Spoken Language Processing, volume 3, pages 570–573. R. B. Hoogenboom, K. Uehara, T. Kanazawa, S. Nakano, H. Kuroki, S. Ino, and T. Ifukube. 2008. An application of real-time captioning system using automatic speech recognition technology to college efleducation for deaf and hard-of-hearing students. Gunma University Annual Research Reports, Cultural Science Series, 57. T. Imai, S. Sato, A. Kobayashi, K. Onoe, and S. Homma. 2006. Online speech detection and dual-gender speech recognition for captioning broadcast news. In Proceedings of 9th International Conference on Spoken Language Processing, pages 1602–1605. H. Kashioka and T. Maruyama. 2004. Segmentation of semantic units in Japanese monologues. In Proceedings of ICSLT2004 and Oriental-COCOSDA2004, pages 87–92. T. Kudo and Y. Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of 6th Conference on Computational Natural Language Learning, pages 63–69. S. Matsubara, A. Takagi, N. Kawaguchi, and Y. Inagaki. 2002. 
Bilingual spoken monologue corpus for simultaneous machine interpretation research. In Proceedings of 3rd International Conference on Language Resources and Evaluation, pages 153– 159. T. Monma, E. Sawamura, T. Fukushima, I. Maruyama, T. Ehara, and K. Shirai. 2003. Automatic closedcaption production system on TV programs for hearing-impaired people. Systems and Computers in Japan, 34(13):71–82. C. Munteanu, G. Penn, and R. Baecker. 2007. Webbased language modelling for automatic lecture transcription. In Proceedings of 8th Annual Conference of the International Speech Communication Association, pages 2353–2356. M. Saraclar, M. Riley, E. Bocchieri, and V. Goffin. 2002. Towards automatic closed captioning: Low latency real time broadcast news transcription. In Proceedings of 7th International Conference on Spoken Language Processing, pages 1741–1744. J. Xue, R. Hu, and Y. Zhao. 2006. New improvements in decoding speed and latency for automatic captioning. In Proceedings of 9th International Conference on Spoken Language Processing, pages 1630–1633. L. Zhang. 2008. Maximum entropy modeling toolkit for Python and C++. http: //homepages.inf.ed.ac.uk/s0450736/ maxent toolkit.html. [Online; accessed 1-March-2008]. 539
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 540–548, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Semi-supervised Learning for Automatic Prosodic Event Detection Using Co-training Algorithm Je Hun Jeon and Yang Liu Computer Science Department The University of Texas at Dallas, Richardson, TX, USA {jhjeon,yangl}@hlt.utdallas.edu Abstract Most of previous approaches to automatic prosodic event detection are based on supervised learning, relying on the availability of a corpus that is annotated with the prosodic labels of interest in order to train the classification models. However, creating such resources is an expensive and time-consuming task. In this paper, we exploit semi-supervised learning with the co-training algorithm for automatic detection of coarse level representation of prosodic events such as pitch accents, intonational phrase boundaries, and break indices. We propose a confidence-based method to assign labels to unlabeled data and demonstrate improved results using this method compared to the widely used agreement-based method. In addition, we examine various informative sample selection methods. In our experiments on the Boston University radio news corpus, using only a small amount of the labeled data as the initial training set, our proposed labeling method combined with most confidence sample selection can effectively use unlabeled data to improve performance and finally reach performance closer to that of the supervised method using all the training data. 1 Introduction Prosody represents suprasegmental information in speech since it normally extends over more than one phoneme segment. Prosodic phenomena manifest themselves in speech in different ways, including changes in relative intensity to emphasize specific words or syllables, variations of the fundamental frequency range and contour, and subtle timing variations, such as syllable lengthening and insertion of pause. In spoken utterances, speakers use prosody to convey emphasis, intent, attitude, and emotion. These are important cues to aid the listener for interpretation of speech. Prosody also plays an important role in automatic spoken language processing tasks, such as speech act detection and natural speech synthesis, because it includes aspect of higher level information that is not completely revealed by segmental acoustics or lexical information. To represent prosodic events for the categorical annotation schemes, one of the most popular labeling schemes is the Tones and Break Indices (ToBI) framework (Silverman et al., 1992). The most important prosodic phenomena captured within this framework include pitch accents (or prominence) and prosodic phrase boundaries. Within the ToBI framework, prosodic phrasing refers to the perceived grouping of words in an utterance, and accent refers to the greater perceived strength or emphasis of some syllables in a phrase. Corpora annotated with prosody information can be used for speech analysis and to learn the relationship between prosodic events and lexical, syntactic and semantic structure of the utterance. However, it is very expensive and time-consuming to perform prosody labeling manually. Therefore, automatic labeling of prosodic events is an attractive alternative that has received attention over the past decades. In addition, automatically detecting prosodic events also benefits many other speech understanding tasks. 
Many previous efforts on prosodic event detection were supervised learning approaches that used acoustic, lexical, and syntactic cues. However, the major drawback with these methods is that they require a hand-labeled training corpus and depend on specific corpus used for training. Limited research has been conducted using unsupervised and semi-supervised methods. In this paper, we exploit semi-supervised learning with the 540 Figure 1: An example of ToBI annotation on a sentence “Hennessy will be a hard act to follow.” co-training algorithm (Blum and Mitchell, 1998) for automatic prosodic event labeling. Two different views according to acoustic and lexicalsyntactic knowledge sources are used in the cotraining framework. We propose a confidencebased method to assign labels to unlabeled data in training iterations and evaluate its performance combined with different informative sample selection methods. Our experiments on the Boston Radio News corpus show that the use of unlabeled data can lead to significant improvement of prosodic event detection compared to using the original small training set, and that the semisupervised learning result is comparable with supervised learning with similar amount of training data. The remainder of this paper is organized as follows. In the next section, we provide details of the corpus and the prosodic event detection tasks. Section 3 reviews previous work briefly. In Section 4, we describe the classification method for prosodic event detection, including the acoustic and syntactic prosodic models, and the features used. Section 5 introduces the co-training algorithm we used. Section 6 presents our experiments and results. The final section gives a brief summary along with future directions. 2 Corpus and tasks In this paper, our experiments were carried out on the Boston University Radio News Corpus (BU) (Ostendorf et al., 2003) which consists of broadcast news style read speech and has ToBI-style prosodic annotations for a part of the data. The corpus is annotated with orthographic transcription, automatically generated and handcorrected part-of-speech (POS) tags, and automatic phone alignments. The main prosodic events that we are concerned to detect automatically in this paper are phrasing and accent (or prominence). Prosodic phrasing refers to the perceived grouping of words in an utterance, and prominence refers to the greater perceived strength or emphasis of some syllables in a phrase. In the ToBI framework, the pitch accent tones (*) are marked at every accented syllable and have five types according to pitch contour: H*, L*, L*+H, L+H*, H+!H*. The phrase boundary tones are marked at every intermediate phrase boundary (L-, H-) or intonational phrase boundary (L-L%, L-H%, H-H%, H-L%) at certain word boundaries. There are also the break indices at every word boundary which range in value from 0 through 4, where 4 means intonational phrase boundary, 3 means intermediate phrase boundary, and a value under 3 means phrase-medial word boundary. Figure 1 shows a ToBI annotation example for a sentence “Hennessy will be a hard act to follow.” The first and second tiers show the orthographic information such as words and syllables of the utterance. The third tier shows the accents and phrase boundary tones. The accent tone is located on each accented syllable, such as the first syllable of word “Hennessy.” The boundary tone is marked on every final syllable if there is a prosodic boundary. 
For example, there are intermediate phrase boundaries after words “Hennessy” and “act”, and there is an intonational phrase boundary after word “follow.” The fourth tier shows the break indices at the end of every word. The detailed representation of prosodic events in the ToBI framework creates a serious sparse data problem for automatic prosody detection. This problem can be alleviated by grouping ToBI labels into coarse categories, such as presence or absence of pitch accents and phrasal tones. This also significantly reduces ambiguity of the task. In this paper, we thus use coarse representation (presence versus absence) for three prosodic event detection tasks: 541 • Pitch accents: accent mark (*) means presence. • Intonational phrase boundaries (IPB): all of the IPB tones (%) are grouped into one category. • Break indices: value 3 and 4 are grouped together to represent that there is a break. This task is equivalent to detecting the presence of intermediate and intonational phrase boundaries. These three tasks are binary classification problems. Similar setup has also been used in other previous work. 3 Previous work Many previous efforts on prosodic event detection used supervised learning approaches. In the work by Wightman and Ostendorf (1994), binary accent, IPB, and break index were assigned to syllables based on posterior probabilities computed from acoustic evidence using decision trees, combined with a bigram model of accent and boundary patterns. Their method achieved an accuracy of 84% for accent, 71% for IPB, and 84% for break index detection at the syllable level. Chen et al. (2004) used a Gaussian mixture model for acoustic-prosodic information and neural network based syntactic-prosodic model and achieved pitch accent detection accuracy of 84% and IPB detection accuracy of 90% at the word level. The experiments of Ananthakrishnan and Narayanan (2008) with neural network based acoustic-prosodic model and a factored ngram syntactic model reported 87% accuracy on accent and break index detection at the syllable level. The work of Sridhar et al. (2008) using a maximum entropy model achieved accent and IPB detection accuracies of 86% and 93% on the word level. Limited research has been done in prosodic detection using unsupervised or semi-supervised methods. Ananthakrishnan and Narayanan (2006) proposed an unsupervised algorithm for prosodic event detection. This algorithm was based on clustering techniques to make use of acoustic and syntactic cues and achieved accent and IPB detection accuracies of 77.8% and 88.5%, compared with the accuracies of 86.5% and 91.6% with supervised methods. Similarly, Levow (2006) tried clustering based unsupervised approach on accent detection with only acoustic evidence and reported accuracy of 78.4% for accent detection compared with 80.1% using supervised learning. She also exploited a semi-supervised approach using Laplacian SVM classification on a small set of examples. This approach achieved 81.5%, compared to 84% accuracy for accent detection in a fully supervised fashion. Since Blum and Mitchell (1998) proposed cotraining, it has received a lot of attention in the research community. This multi-view setting applies well to learning problems that have a natural way to divide their features into subsets, each of which are sufficient to learn the target concept. 
Theoretical and empirical analysis has been performed for the effectiveness of co-training such as Blum and Mitchell (1998), Goldman and Zhou (2000), Nigam and Ghani (2000), and Dasuta et al. (2001). More recently, researchers have begun to explore ways of combing ideas from sample selection with that of co-training. Steedman et al. (2003) applied co-training method to statistical parsing and introduced sample selection heuristics. Clark et al. (2003) and Wang et al. (2007) applied cotraining method in POS tagging using agreementbased selection strategy. Co-testing (Muslea et al., 2000), one of active learning approaches, has a similar spirit. Like co-training, it consists of two classifiers with redundant views and compares their outputs for an unlabeled example. If they disagree, then the example is considered as a contention point, and therefore a good candidate for human labeling. In this paper, we apply co-training algorithm to automatic prosodic event detection and propose methods to better select samples to improve semisupervised learning performance for this task. 4 Prosodic event detection method We model the prosody detection problem as a classification task. We separately develop acousticprosodic and syntactic-prosodic models according to information sources and then combine the two models. Our previous supervised learning approach (Jeon and Liu, 2009) showed that a combined model using Neural Network (NN) classifier for acoustic-prosodic evidence and Support Vector Machine (SVM) classifier for syntactic-prosodic evidence performed better than other classifiers. We therefore use NN and SVM in this study. Note 542 that our feature extraction is performed at the syllable level. This is straightforward for accent detection since stress is defined associated with syllables. In the case of IPB and break index detection, we use only the features from the final syllable of a word since those events are associated with word boundaries. 4.1 The acoustic-prosodic model The most likely sequence of prosodic events P ∗= {p∗ 1, . . . , p∗ n} given the sequence of acoustic evidences A = {a1, . . . , an} can be found as following: P ∗ = arg max P p(P|A) ≈ arg max P n Y i=1 p(pi|ai) (1) where ai = {a1 i , . . . , at i} is the acoustic feature vector corresponding to a syllable. Note that this assumes that the prosodic events are independent and they are only dependent on the acoustic observations in the corresponding locations. The primary acoustic cues for prosodic events are pitch, energy and duration. In order to reduce the effect by both inter-speaker and intra-speaker variation, both pitch and energy values were normalized (z-value) with utterance specific means and variances. The acoustic features used in our experiments are listed below. Again, all of the features are computed for a syllable. • Pitch range (4 features): maximum pitch, minimum pitch, mean pitch, and pitch range (difference between maximum and minimum pitch). • Pitch slope (5 features): first pitch slope, last pitch slope, maximum plus pitch slope, maximum minus pitch slope, and the number of changes in the pitch slope patterns. • Energy range (4 features): maximum energy, minimum energy, mean energy, and energy range (difference between maximum and minimum energy). • Duration (3 features): normalized vowel duration, pause duration after the word final syllable, and the ratio of vowel durations between this syllable and the next syllable. 
Among the duration features, the pause duration and the ratio of vowel durations are only used to detect IPB and break index, not for accent detection. 4.2 The syntactic-prosodic model The prosodic events P ∗given the sequence of lexical and syntactic evidences S = {s1, . . . , sn} can be found as following: P ∗ = arg max P p(P|S) ≈ arg max P n Y i=1 p(pi|φ(si)) (2) where φ(si) is chosen such that it contains lexical and syntactic evidence from a fixed window of syllables surrounding location i. There is a very strong correlation between the prosodic events in an utterance and its lexical and syntactic structure. Previous studies have shown that for pitch accent detection, the lexical features such as the canonical stress patterns from the pronunciation dictionary perform better than the syntactic features, while for IPB and break index detection, the syntactic features such as POS work better than the lexical features. We use different feature types for each task and the detailed features are as follows: • Accent detection: syllable identity, lexical stress (exist or not), word boundary information (boundary or not), and POS tag. We also include syllable identity, lexical stress, and word boundary features from the previous and next context window. • IPB and Break index detection: POS tag, the ratio of syntactic phrases the word initiates, and the ratio of syntactic phrases the word terminates. All of these features from the previous and next context windows are also included. 4.3 The combined model The two models above can be coupled as a classifier for prosodic event detection. If we assume that the acoustic observations are conditionally independent of the syntactic features given the prosody labels, the task of prosodic detection is to find the optimal sequence P ∗as follows: P ∗ = arg max P p(P|A, S) 543 ≈ arg max P p(P|A)p(P|S) ≈ arg max P n Y i=1 p(pi|ai)λp(pi|φ(si)) (3) where λ is a parameter that can be used to adjust the weighting between syntactic and the acoustic model. In our experiments, the value of λ is estimated based on development data. 5 Co-training strategy for prosodic event detection Co-training (Blum and Mitchell, 1998) is a semisupervised multi-view algorithm that uses the initial training set to learn a (weak) classifier in each view. Then each classifier is applied to all the unlabeled examples. Those examples that each classifier makes the most confident predictions are selected and labeled with the estimated class labels and added to the training set. Based on the new training set, a new classifier is learned in each view, and the whole process is repeated for some iterations. At the end, a final hypothesis is created by combining the predictions of the classifiers learned in each view. As described in Section 4, we use two classifiers for the prosodic event detection task based on two different information sources: one is the acoustic evidence extracted from the speech signal of an utterance; the other is the lexical and syntactic evidence such as syllables, words, POS tags and phrasal boundary information. These are two different views for prosodic event detection and fit the co-training framework. The general co-training algorithm we used is described in Algorithm 1. Given a set L of labeled data and a set U of unlabeled data, the algorithm first creates a smaller pool U′ containing u unlabeled data. It then iterates in the following procedure. 
First, we use L to train two distinct classifiers: the acoustic-prosodic classifier h1, and the syntactic classifier h2. These two classifiers are used to examine the unlabeled set U′ and assign “possible” labels. Then we select some samples to add to L. Finally, the pool U′ is recreated from U at random. This iteration continues until reaching the defined number of iterations or U is empty. The main issue of co-training is to select training samples for next iteration so as to minimize noise and maximize training utility. There are two issues: (1) the accurate self-labeling method for unlabeled data and (2) effective heuristics to seAlgorithm 1 General co-training algorithm. Given a set L of labeled training data and a set U of unlabeled data Randomly select U′ from U, |U′|=u while iteration < k do Use L to train classifiers h1 and h2 Apply h1 and h2 to assign labels for all examples in U′ Select n self-labeled samples and add to L Remove these n samples from U Recreate U′ by choosing u instances randomly from U end while lect more informative examples. We investigate different approaches to address these issues for the prosodic event detection task. The first issue is how to assign possible labels accurately. The general method is to let the two classifiers predict the class for a given sample, and if they agree, the hypothesized label is used. However, when this agreement-based approach is used for prosodic event detection, we notice that there is not only difference in the labeling accuracy between positive and negative samples, but also an imbalance of the self-labeled positive and negative examples (details in Section 6). Therefore we believe that using the hard decisions from the two classifiers along with the agreement-based rule is not enough to label the unlabeled samples. To address this problem, we propose an approximated confidence measure based on the combined classifier (Equation 3). First, we take a squared root of the classifier’s posterior probabilities for the two classes, denoted as score(pos) and score(neg), respectively. Our proposed confidence is the distance between these two scores. For example, if the classifier’s hypothesized label is positive, then: Positive confidence=score(pos)-score(neg) Similarly if the classifier’s hypothesis is negative, we calculate a negative confidence: Negative confidence=score(neg)-score(pos) Then we apply different thresholds of confidence level for positive and negative labeling. The thresholds are chosen based on the accuracy distribution obtained on the labeled development data and are reestimated at every iteration. Figure 2 shows the accuracy distribution for accent detection according to different confidence levels in the first iteration. In Figure 2, if we choose 70% labeling accuracy, the positive confidence level is about 544 0 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 1 Confidence level Accuracy Figure 2: Approximated confidence level and labeling accuracy on accent detection task. 0.1 and the negative confidence level is about 0.8. In our confidence-based approach, the samples with a confidence level higher than these thresholds are assigned with the classifier’s hypothesized labels, and the other samples are disregarded. The second problem in co-training is how to select informative samples. Active learning approaches, such as Muslea et al. (2000), can generally select more informative samples, for example, samples for which two classifiers disagree (since one of two classifiers is wrong) and ask for human labels. 
Co-training approaches cannot, however, use this selection method since there is a risk to label the disagreed samples. Usually co-training selects samples for which two classifiers have the same prediction but high difference in their confidence measures. Based on this idea, we applied three sampling strategies on top of our confidencebased labeling method: • Random selection: randomly select samples from those that the two classifiers have different posterior probabilities. • Most confident selection: select samples that have the highest posterior probability based on one classifier, and at the same time there is certain posterior probability difference between the two classifiers. • Most different selection: select samples that have the most difference between the two classifiers’ posterior probabilities. The first strategy is appropriate for base classifiers that lack the capability of estimating the posterior probability of their predictions. The second is appropriate for base classifiers that have high classification accuracy and also with high posterior probability. The last one is also appropriate for accurate classifiers and expected to converge utter. word syll Speaker Test Set 102 5,448 8,962 f1a, m1b Development Set 20 1,356 2,275 f2b, f3b Labeled set L 5 347 573 m2b, m3b Unlabeled set U 1,027 77,207 129,305 m4b Table 1: Training and test sets. faster since big mistakes of one of the two classifiers can be fixed. These sample selection strategies share some similarity with those in previous work (Steedman et al., 2003). 6 Experiments and results Our goal is to determine whether the co-training algorithm described above could successfully use the unlabeled data for prosodic event detection. In our experiment, 268 ToBI labeled utterances and 886 unlabeled utterances in BU corpus were used. Among labeled data, 102 utterances of all f1a and m1b speakers are used for testing, 20 utterances randomly chosen from f2b, f3b, m2b, m3b, and m4b are used as development set to optimize parameters such as λ and confidence level threshold, 5 utterances are used as the initial training set L, and the rest of the data is used as unlabeled set U, which has 1027 unlabeled utterances (we removed the human labels for co-training experiments). The detailed training and test setting is shown in Table 1. First of all, we compare the learning curves using our proposed confidence-based method to assign possible labels with the simple agreementbased random selection method. We expect that if self-labeling is accurate, adding new samples randomly drawn from these self-labeled data generally should not make performance worse. For this experiment, in every iteration, we randomly select the self-labeled samples that have at least 0.1 difference between two classifiers’ posterior probabilities. The number of new samples added to training is 5% of the size of the previous training data. Figure 3 shows the learning curves for accent detection. The number of samples in the x-axis is the number of syllables. The F-measure score using the initial training data is 0.69. The dark solid line in Figure 3 is the learning curve of the supervised method when varying the size of the training data. 
Compared with supervised method, our proposed relative confidence-based labeling method shows better performance when there is 545 5,000 10,000 15,000 0.55 0.6 0.65 0.7 0.75 0.8 0.85 # of samples F−measure Supervised Agreement based Confidence based Figure 3: The learning curve of agreement-based and our proposed confidence-based random selection methods for accent detection. Confidence Agreement Accent detection % of P samples 47% 38% P sample error 0.17 0.09 N sample error 0.12 0.22 IPB detection % of P samples 46% 19% P sample error 0.12 0.01 N sample error 0.18 0.53 Break detection % of P samples 50% 25% P sample error 0.15 0.03 N sample error 0.17 0.42 Table 2: Percentage of positive samples, and averaged error rate for positive (P) and negative (N) samples for the first 20 iterations using the agreement-based and our confidence labeling methods. less data, but after some iteration, the performance is saturated earlier. However, the agreement-based method does not yield any performance gain, instead, its performance is much worse after some iteration. The other two prosodic event detection tasks also show similar patterns. To analyze the reason for this performance degradation using the agreement-based method, we compare the labels of the newly added samples in random selection with the reference annotation. Table 2 shows the percentage of the positive samples added for the first 20 iterations, and the average labeling error rate of those samples for the self-labeled positive and negative classes for two methods. The agreement-based random selection added more negative samples that also have higher error rate than the positive samples. Adding these samples has a negative impact on the classifier’s performance. In contrast, our confidence-based approach balances the number of positive and negative samples and significantly reduces the error 5,000 10,000 15,000 0.65 0.7 0.75 0.8 # of samples F−measure Supervised Random Most confident Most different Figure 4: The learning curve of 3 sample selection methods for accent detection. rates for the negative samples as well, thus leading to performance improvement. Next we evaluate the efficacy of the three sample selection methods described in Section 5, namely, random, most confident, and most different selections. Figure 4 shows the learning curves for the three selection methods for accent detection. The same configuration is used as in the previous experiment, i.e., at least 0.1 posterior probability difference between the two classifiers, and adding 5% of new samples in each iteration. All of these sample selection approaches use the confidence-based labeling. For comparison, Figure 4 also shows the learning curve for supervised learning when varying the training size. We can see from the figure that compared to random selection, the most confident selection method shows similar performance in the first few iterations, but its performance continues to increase and the saturation point is much later than random selection. Unlike the other two sample selection methods, most different selection results in noticeable performance degradation after some iteration. This difference is caused by the high self-labeling error rate of selected samples. Both random and most confident selections perform better than supervised learning at the first few iterations. This is because the new samples added have different posterior probabilities by the two classifiers, and thus one of the classifiers benefits from these samples. 
Learning curves for the other two tasks (break index and IPB detection) show similar pattern for the random and most different selection methods, but some differences in the most confident selection results. For the IPB task, the learning curve of the most confident selection fluctuates somewhat in the middle of the iterations with similar performance to random selection, however, afterward the performance is better than random selection. 546 5,000 10,000 15,000 20,000 25,000 0.68 0.7 0.72 0.74 0.76 0.78 0.8 # of samples F−measure Supervised 5 utterances 10 utterances 20 utterances 5 utterances 10 utterances 20 utterances Figure 5: The learning curves for accent detection using different amounts of initial labeled training data. For the break index detection, the learning curve of most different selection increases more slowly than random selection at the beginning, but the saturation point is much later and therefore outperforms the random selection at the later iterations. We also evaluated the effect of the amount of initial labeled training data. In this experiment, most confident selection is used, and the other configurations are the same as the previous experiment. The learning curve for accent detection is shown in Figure 5 using different numbers of utterances in the initial training data. The arrow marks indicate the start position of each learning curve. As we can see, the learning curve when using 20 utterances is slightly better than the others, but there is no significant performance gain according to the size of initial labeled training data. Finally we compared our co-training performance with supervised learning. For supervised learning, all labeled utterances except for the test set are used for training. We used most confident selection with proposed self-labeling method. The initial training data in co-training is 3% of that used for supervised learning. After 74 iterations, the size of samples of co-training is similar to that in the supervised method. Table 3 presents the results of three prosodic event detection tasks. We can see that the performance of co-training for these three tasks is slightly worse than supervised learning using all the labeled data, but is significantly better than the original performance using 3% of hand labeled data. Most of the previous work for prosodic event detection reported their results using classification accuracy instead of F-measure. Therefore to better compare with previous work, we present below the accuracy results in our approach. The cotraining algorithm achieves the accuracy of 85.3%, Accent IPB Break Supervised 0.82 0.74 0.77 Cotraining Initial training (3%) 0.69 0.59 0.62 After 74 iterations 0.80 0.71 0.75 Table 3: The results (F-measure) of prosodic event detection for supervised and co-training approaches. 90.1%, and 86.7% respectively for accent, intonational phrase boundary, and break index detection, compared with 87.6%, 92.3%, and 88.9% in supervised learning. Although the test condition is different, our result is significantly better than that of other semi-supervised approaches of previous work and comparable with supervised approaches. 7 Conclusions In this paper, we exploit the co-training method for automatic prosodic event detection. We introduced a confidence-based method to assign possible labels to unlabeled data and evaluated the performance combined with informative sample selection methods. 
Our experimental results using co-training are significantly better than the original supervised results using the small amount of training data, and closer to that using supervised learning with a large amount of data. This suggests that the use of unlabeled data can lead to significant improvement for prosodic event detection. In our experiment, we used some labeled data as development set to estimate some parameters. For the future work, we will perform analysis of loss function of each classifier in order to estimate parameters without labeled development data. In addition, we plan to compare this to other semi-supervised learning techniques such as active learning. We also plan to use this algorithm to annotate different types of data, such as spontaneous speech, and incorporate prosodic events in spoken language applications. Acknowledgments This work is supported by DARPA under Contract No. HR0011-06-C-0023. Distribution is unlimited. References A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. Proceedings of 547 the Workshop on Computational Learning Theory, pp. 92-100. C. W. Wightman and M. Ostendorf. 1994. Automatic labeling of prosodic patterns. IEEE Transactions on Speech and Audio Processing, Vol. 2(4), pp. 69-481. G. Levow. 2006. Unsupervised and semi-supervised learning of tone and pitch accent. Proceedings of HLT-NAACL, pp. 224-231. I. Muslea, S. Minton and C. Knoblock. 2000. Selective sampling with redundant views. Proceedings of the 7th International Conference on Artificial Intelligence, pp. 621-626. J. Jeon and Y. Liu. 2009. Automatic prosodic event detection using syllable-base acoustic and syntactic features. Proceeding of ICASSP, pp. 4565-4568. K. Chen, M. Hasegawa-Johnson, and A. Cohen. 2004. An automatic prosody labeling system using ANNbased syntactic-prosodic model and GMM-based acoustic prosodic model. Proceedings of ICASSP, pp. 509-512. K. Nigam and R. Ghani. 2000 Analyzing the effectiveness and applicability of Co-training Proceedings 9th International Conference on Information and Knowledge Management, pp. 86-93. K. Silverman, M. Beckman, J. Pitrelli, M. Ostendorf, C. Wightman, P. Price, J. Pierrehumbert, and J. Hirschberg. 1992. ToBI: A standard for labeling English prosody. Proceedings of ICSLP, pp. 867870. M. Steedman, S. Baker, S. Clark, J. Crim, J. Hockenmaier, R. Hwa, M. Osborne, P. Ruhlen, A. Sarkar 2003. CLSP WS-02 Final Report: Semi-Supervised Training for Statistical Parsing. M. Ostendorf, P. J. Price and S. Shattuck-Hunfnagel. 1995. The Boston University Radio News Corpus. Linguistic Data Consortium. S. Ananthakrishnan and S. Narayanan. 2006. Combining acoustic, lexical, and syntactic evidence for automatic unsupervised prosody labeling. Proceedings of ICSLP, pp. 297-300. S. Ananthakrishnan and S. Narayanan. 2008. Automatic prosodic event detection using acoustic, lexical and syntactic evidence. IEEE Transactions on Audio, Speech and Language Processing, Vol. 16(1), pp. 216-228. S. Clark, J. Currant, and M. Osborne. 2003. Bootstrapping POS taggers using unlabeled data. Proceedings of CoNLL, pp. 49-55. S. Dasupta, M. L. Littman, and D. McAllester. 2001. PAC generalization bounds for co-training. Advances in Neural Information Processing Systems, Vol. 14, pp. 375-382. S. Goldman and Y. Zhou. 2000. Enhancing supervised learning with unlabeled data. Proceedings of the Seventeenth International Conference on Machine Learning, pp. 327-334. V. K. Rangarajan Sridhar, S. Bangalore, and S. Narayanan. 2008. 
Exploiting acoustic and syntactic features for automatic prosody labeling in a maximum entropy framework. IEEE Transactions on Audio, Speech, and Language processing, pp. 797-811. W. Wang, Z. Huang, and M. Harper. 2007. Semisupervised learning for part-of-speech tagging of Mandarin transcribed speech. Proceeding of ICASSP, pp. 137-140. 548
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 549–557, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Summarizing multiple spoken documents: finding evidence from untranscribed audio Xiaodan Zhu, Gerald Penn and Frank Rudzicz University of Toronto 10 King’s College Rd., Toronto, M5S 3G4, ON, Canada {xzhu,gpenn,frank}@cs.toronto.edu Abstract This paper presents a model for summarizing multiple untranscribed spoken documents. Without assuming the availability of transcripts, the model modifies a recently proposed unsupervised algorithm to detect re-occurring acoustic patterns in speech and uses them to estimate similarities between utterances, which are in turn used to identify salient utterances and remove redundancies. This model is of interest due to its independence from spoken language transcription, an error-prone and resource-intensive process, its ability to integrate multiple sources of information on the same topic, and its novel use of acoustic patterns that extends previous work on low-level prosodic feature detection. We compare the performance of this model with that achieved using manual and automatic transcripts, and find that this new approach is roughly equivalent to having access to ASR transcripts with word error rates in the 33–37% range without actually having to do the ASR, plus it better handles utterances with out-ofvocabulary words. 1 Introduction Summarizing spoken documents has been extensively studied over the past several years (Penn and Zhu, 2008; Maskey and Hirschberg, 2005; Murray et al., 2005; Christensen et al., 2004; Zechner, 2001). Conventionally called speech summarization, although speech connotes more than spoken documents themselves, it is motivated by the demand for better ways to navigate spoken content and the natural difficulty in doing so — speech is inherently more linear or sequential than text in its traditional delivery. Previous research on speech summarization has addressed several important problems in this field (see Section 2.1). All of this work, however, has focused on single-document summarization and the integration of fairly simplistic acoustic features, inspired by work in descriptive linguistics. The issues of navigating speech content are magnified when dealing with larger collections — multiple spoken documents on the same topic. For example, when one is browsing news broadcasts covering the same events or call-centre recordings related to the same type of customer questions, content redundancy is a prominent issue. Multi-document summarization on written documents has been studied for more than a decade (see Section 2.2). Unfortunately, no such effort has been made on audio documents yet. An obvious way to summarize multiple spoken documents is to adopt the transcribe-andsummarize approach, in which automatic speech recognition (ASR) is first employed to acquire written transcripts. Speech summarization is accordingly reduced to a text summarization task conducted on error-prone transcripts. Such an approach, however, encounters several problems. First, assuming the availability of ASR is not always valid for many languages other than English that one may want to summarize. Even when it is, transcription quality is often an issue— training ASR models requires collecting and annotating corpora on specific languages, dialects, or even different domains. 
Although recognition errors do not significantly impair extractive summarizers (Christensen et al., 2004; Zhu and Penn, 2006), error-laden transcripts are not necessarily browseable if recognition errors are higher than certain thresholds (Munteanu et al., 2006). In such situations, audio summaries are an alternative when salient content can be identified directly from untranscribed audio. Third, the underlying paradigm of most ASR models aims to solve a 549 classification problem, in which speech is segmented and classified into pre-existing categories (words). Words not in the predefined dictionary are certain to be misrecognized without exception. This out-of-vocabulary (OOV) problem is unavoidable in the regular ASR framework, although it is more likely to happen on salient words such as named entities or domain-specific terms. Our approach uses acoustic evidence from the untranscribed audio stream. Consider text summarization first: many well-known models such as MMR (Carbonell and Goldstein, 1998) and MEAD (Radev et al., 2004) rely on the reoccurrence statistics of words. That is, if we switch any word w1 with another word w2 across an entire corpus, the ranking of extracts (often sentences) will be unaffected, because no wordspecific knowledge is involved. These models have achieved state-of-the-art performance in transcript-based speech summarization (Zechner, 2001; Penn and Zhu, 2008). For spoken documents, such reoccurrence statistics are available directly from the speech signal. In recent years, a variant of dynamic time warping (DTW) has been proposed to find reoccurring patterns in the speech signal (Park and Glass, 2008). This method has been successfully applied to tasks such as word detection (Park and Glass, 2006) and topic boundary detection (Malioutov et al., 2007). Motivated by the work above, this paper explores the approach to summarizing multiple spoken documents directly over an untranscribed audio stream. Such a model is of interest because of its independence from ASR. It is directly applicable to audio recordings in languages or domains when ASR is not possible or transcription quality is low. In principle, this approach is free from the OOV problem inherent to ASR. The premise of this approach, however, is to reliably find reoccuring acoustic patterns in audio, which is challenging because of noise and pronunciation variance existing in the speech signal, as well as the difficulty of finding alignments with proper lengths corresponding to words well. Therefore, our primary goal in this paper is to empirically determine the extent to which acoustic information alone can effectively replace conventional speech recognition with or without simple prosodic feature detection within the multi-document speech summarization task. As shown below, a modification of the Park-Glass approach amounts to the efficacy of a 33-37% WER ASR engine in the domain of multiple spoken document summarization, and also has better treatment of OOV items. ParkGlass similarity scores by themselves can attribute a high score to distorted paths that, in our context, ultimately leads to too many false-alarm alignments, even after applying the distortion threshold. We introduce additional distortion penalty and subpath length constraints on their scoring to discourage this possibility. 
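The exact scoring changes are given later in the paper; as one plausible instantiation (an assumption made only for illustration, not the authors' formulation), the sketch below discards subpaths shorter than a minimum number of frames and penalizes the average similarity of a subpath in proportion to how far it strays from a strictly diagonal alignment.

```python
# Illustrative only: one way a minimum-length constraint and a distortion
# penalty could be applied to candidate subpaths. `path` is a list of
# (i, j) frame-index pairs and `sim[i][j]` is the frame-level similarity.

def subpath_score(path, sim, min_len=10, penalty_weight=1.0):
    if len(path) < min_len:
        return None                       # length constraint: discard
    avg_sim = sum(sim[i][j] for i, j in path) / len(path)
    (i0, j0) = path[0]
    # Distortion: mean absolute deviation of the path from its own diagonal.
    distortion = sum(abs((i - i0) - (j - j0)) for i, j in path) / len(path)
    return avg_sim - penalty_weight * distortion

# Toy example: a 12-frame diagonal path over a constant similarity matrix
# receives no distortion penalty.
sim = [[0.8] * 20 for _ in range(20)]
diagonal = [(t, t) for t in range(12)]
print(subpath_score(diagonal, sim))       # ~0.8
```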
2 Related work 2.1 Speech summarization Although abstractive summarization is more desirable, the state-of-the-art research on speech summarization has been less ambitious, focusing primarily on extractive summarization, which presents the most important N% of words, phrases, utterances, or speaker turns of a spoken document. The presentation can be in transcripts (Zechner, 2001), edited speech data (Furui et al., 2003), or a combination of these (He et al., 2000). Audio data amenable to summarization include meeting recordings (Murray et al., 2005), telephone conversations (Zhu and Penn, 2006; Zechner, 2001), news broadcasts (Maskey and Hirschberg, 2005; Christensen et al., 2004), presentations (He et al., 2000; Zhang et al., 2007; Penn and Zhu, 2008), etc. Although extractive summarization is not as ideal as abstractive summarization, it outperforms several comparable alternatives. Tucker and Whittaker (2008) have shown that extractive summarization is generally preferable to time compression, which speeds up the playback of audio documents with either fixed or variable rates. He et al. (2000) have shown that either playing back important audio-video segments or just highlighting the corresponding transcripts is significantly better than providing users with full transcripts, electronic slides, or both for browsing presentation recordings. Given the limitations associated with ASR, it is no surprise that previous work (He et al., 1999; Maskey and Hirschberg, 2005; Murray et al., 2005; Zhu and Penn, 2006) has studied features available in audio. The focus, however, is primarily limited to prosody. The assumption is that prosodic effects such as stress can indicate salient information. Since a direct modeling of complicated compound prosodic effects like stress is dif550 ficult, they have used basic features of prosody instead, such as pitch, energy, duration, and pauses. The usefulness of prosody was found to be very limited by itself, if the effect of utterance length is not considered (Penn and Zhu, 2008). In multiplespoken-document summarization, it is unlikely that prosody will be more useful in predicating salience than in single document summarization. Furthermore, prosody is also unlikely to be applicable to detecting or handling redundancy, which is prominent in the multiple-document setting. All of the work above has been conducted on single-document summarization. In this paper we are interested in summarizing multiple spoken documents by using reoccurrence statistics of acoustic patterns. 2.2 Multiple-document summarization Multi-document summarization on written text has been studied for over a decade. Compared with the single-document task, it needs to remove more content, cope with prominent redundancy, and organize content from different sources properly. This field has been pioneered by early work such as the SUMMONS architecture (Mckeown and Radev, 1995; Radev and McKeown, 1998). Several well-known models have been proposed, i.e., MMR (Carbonell and Goldstein, 1998), multiGen (Barzilay et al., 1999), and MEAD (Radev et al., 2004). Multi-document summarization has received intensive study at DUC. 1 Unfortunately, no such efforts have been extended to summarize multiple spoken documents yet. Abstractive approaches have been studied since the beginning. A famous effort in this direction is the information fusion approach proposed in Barzilay et al. (1999). 
However, for error-prone transcripts of spoken documents, an abstractive method still seems to be too ambitious for the time being. As in single-spoken-document summarization, this paper focuses on the extractive approach. Among the extractive models, MMR (Carbonell and Goldstein, 1998) and MEAD (Radev et al., 2004), are possibly the most widely known. Both of them are linear models that balance salience and redundancy. Although in principle, these models allow for any estimates of salience and redundancy, they themselves calculate these scores with word reoccurrence statistics, e.g., tf.idf, and yield state-of-the-art performance. MMR it1http://duc.nist.gov/ eratively selects sentences that are similar to the entire documents, but dissimilar to the previously selected sentences to avoid redundancy. Its details will be revisited below. MEAD uses a redundancy removal mechanism similar to MMR, but to decide the salience of a sentence to the whole topic, MEAD uses not only its similarity score but also sentence position, e.g., the first sentence of each new story is considered important. Our work adopts the general framework of MMR and MEAD to study the effectiveness of the acoustic pattern evidence found in untranscribed audio. 3 An acoustics-based approach The acoustics-based summarization technique proposed in this paper consists of three consecutive components. First, we detect acoustic patterns that recur between pairs of utterances in a set of documents that discuss a common topic. The assumption here is that lemmata, words, or phrases that are shared between utterances are more likely to be acoustically similar. The next step is to compute a relatedness score between each pair of utterances, given the matching patterns found in the first step. This yields a symmetric relatedness matrix for the entire document set. Finally, the relatedness matrix is incorporated into a general summarization model, where it is used for utterance selection. 3.1 Finding common acoustic patterns Our goal is to identify subsequences within acoustic sequences that appear highly similar to regions within other sequences, where each sequence consists of a progression of overlapping 20ms vectors (frames). In order to find those shared patterns, we apply a modification of the segmental dynamic time warping (SDTW) algorithm to pairs of audio sequences. This method is similar to standard DTW, except that it computes multiple constrained alignments, each within predetermined bands of the similarity matrix (Park and Glass, 2008).2 SDTW has been successfully applied to problems such as topic boundary detection (Malioutov et al., 2007) and word detection (Park and Glass, 2006). An example application of SDTW is shown in Figure 1, which shows the results of two utterances from the TDT-4 English dataset: 2Park and Glass (2008) used Euclidean distance. We used cosine distance instead, which was found to be better on our held-out dataset. 551 I: the explosion in aden harbor killed seventeen u.s. sailors and injured other thirty nine last month. II: seventeen sailors were killed. These two utterances share three words: killed, seventeen, and sailors, though in different orders. The upper panel of Figure 1 shows a matrix of frame-level similarity scores between these two utterances where lighter grey represents higher similarity. The lower panel shows the four most similar shared subpaths, three of which correspond to the common words, as determined by the approach detailed below. 
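For concreteness, the construction of such a frame-level similarity matrix can be sketched as follows. This is a minimal illustration rather than the implementation used here: it assumes the librosa library, a 16 kHz mono signal, and illustrative window and hop sizes; the actual 39-dimensional MFCC front end is detailed in the next subsection.

    # Sketch: frame-level cosine similarity matrix between two utterances.
    import numpy as np
    import librosa

    def features(path, sr=16000, win_ms=20, hop_ms=10):
        y, sr = librosa.load(path, sr=sr)
        n_fft = int(sr * win_ms / 1000)
        hop = int(sr * hop_ms / 1000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                    n_fft=n_fft, hop_length=hop)
        # First and second time derivatives, approximating the 39-dimensional
        # front end described in Section 3.1 (the separate log-energy term is
        # omitted here for brevity).
        feat = np.vstack([mfcc,
                          librosa.feature.delta(mfcc),
                          librosa.feature.delta(mfcc, order=2)])
        # "Whitening": normalize the variance of each feature dimension.
        feat = feat / (feat.std(axis=1, keepdims=True) + 1e-8)
        return feat.T                      # shape: (frames, 39)

    def similarity_matrix(feat_a, feat_b):
        a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
        b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
        sim = a @ b.T                      # cosine similarity in [-1, 1]
        return (sim + 1.0) / 2.0           # rescaled to [0, 1], as in the paper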
Figure 1: Using segmental dynamic time warping to find matching acoustic patterns between two utterances. Calculating MFCC The first step of SDTW is to represent each utterance as sequences of Mel-frequency cepstral coefficient (MFCC) vectors, a commonly used representation of the spectral characteristics of speech acoustics. First, conventional short-time Fourier transforms are applied to overlapping 20ms Hamming windows of the speech amplitude signal. The resulting spectral energy is then weighted by filters on the Mel-scale and converted to 39dimensional feature vectors, each consisting of 12 MFCCs, one normalized log-energy term, as well as the first and second derivatives of these 13 components over time. The MFCC features used in the acoustics-based approach are the same as those used below in the ASR systems. As in (Park and Glass, 2008), an additional whitening step is taken to normalize the variances on each of these 39 dimensions. The similarities between frames are then estimated using cosine distance. All similarity scores are then normalized to the range of [0, 1], which yields similarity matrices exemplified in the upper panel of Figure 1. Finding optimal paths For each similarity matrix obtained above, local alignments of matching patterns need to be found, as shown in the lower panel of Figure 1. A single global DTW alignment is not adequate, since words or phrases held in common between utterances may occur in any order. For example, in Figure 1 killed occurs before all other shared words in one document and after all of these in the other, so a single alignment path that monotonically seeks the lower right-hand corner of the similarity matrix could not possibly match all common words. Instead, multiple DTWs are applied, each starting from different points on the left or top edges of the similarity matrix, and ending at different points on the bottom or right edges, respectively. The width of this diagonal band is proportional to the estimated number of words per sequence. Given an M-by-N matrix of frame-level similarity scores, the top-left corner is considered the origin, and the bottom-right corner represents an alignment of the last frames in each sequence. For each of the multiple starting points p0 = (x0, y0) where either x0 = 0 or y0 = 0, but not necessarily both, we apply DTW to find paths P = p0, p1, ..., pK that maximize P 0≤i≤K sim(pi), where sim(pi) is the cosine similarity score of point pi = (xi, yi) in the matrix. Each point on the path, pi, is subject to the constraint |xi −yi| < T, where T limits the distortion of the path, as we determine experimentally. The ending points are pK = (xK, yK) with either xK = N or yK = M. For considerations of efficiency, the multiple DTW processes do not start from every point on the left or top edges. Instead, they skip every T such starting points, which still guarantees that there will be no blind-spot in the matrices that are inaccessible to all DTW search paths. Finding optimal subpaths After the multiple DTW paths are calculated, the optimal subpath on each is then detected in order to find the local alignments where the similarity is maximal, which is where we expect actual matched phrases to occur. For a given path P = p0, p2, ..., pK, the optimal subpath is defined to be a continuous subpath, P ∗= pm, pm+1..., pn 552 that maximizes P m≤i≤n sim(pi) n−m+1 , 0 ≤n ≤m ≤k, and m −n + 1 ≥L. That is, the subpath is at least as long as L and has the maximal average similarity. 
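A minimal sketch of this length-constrained search, operating on a single DTW path represented by its list of point-wise similarity scores, is given below. It relies on the bound discussed in the next paragraph that the optimal subpath need not be longer than 2L − 1 points; the brute-force loop and function name are our illustration, not the implementation used in the experiments.

    def best_subpath(sims, L):
        # sims: point-wise similarities along one DTW path.
        # Returns (start, end, avg) of the contiguous window of length
        # between L and 2L - 1 with maximal average similarity.
        prefix = [0.0]
        for s in sims:
            prefix.append(prefix[-1] + s)      # prefix sums for O(1) window averages
        best = None
        for length in range(L, 2 * L):         # candidate lengths L .. 2L - 1
            for start in range(0, len(sims) - length + 1):
                avg = (prefix[start + length] - prefix[start]) / length
                if best is None or avg > best[2]:
                    best = (start, start + length - 1, avg)
        return best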
L is used to avoid short alignments that correspond to subword segments or short function words. The value of L is determined on a development set. The version of SDTW employed by (Malioutov et al., 2007) and Park and Glass (2008) employed an algorithm of complexity O(Klog(L)) from (Lin et al., 2002) to find subpaths. Lin et al. (2002) have also proven that the length of the optimal subpath is between L and 2L −1, inclusively. Therefore, our version uses a very simple algorithm— just search and find the maximum of average similarities among all possible subpaths with lengths between L and 2L −1. Although the theoretical upper bound for this algorithm is O(KL), in practice we have found no significant increase in computation time compared with the O(Klog(L)) algorithm—L is actually a constant for both Park and Glass (2008) and us, it is much smaller than K, and the O(Klog(L)) algorithm has (constant) overhead of calculating right-skew partitions. In our implementation, since most of the time is spent on calculating the average similarity scores on candidate subpaths, all average scores are therefore pre-calculated incrementally and saved. We have also parallelized the computation of similarities by topics over several computer clusters. A detailed comparison of different parallelization techniques has been conducted by Gajjar et al. (2008). In addition, comparing time efficiency between the acoustics-based approach and ASRbased summarizers is interesting but not straightforward since a great deal of comparable programming optimization needs to be additionally considered in the present approach. 3.2 Estimating utterance-level similarity In the previous stage, we calculated frame-level similarities between utterance pairs and used these to find potential matching patterns between the utterances. With this information, we estimate utterance-level similarities by estimating the numbers of true subpath alignments between two utterances, which are in turn determined by combining the following features associated with subpaths: Similarity of subpath We compute similarity features on each subpath. We have obtained the average similarity score of each subpath as discussed in Section 3.1. Based on this, we calculate relative similarity scores, which are computed by dividing the original similarity of a given subpath by the average similarity of its surrounding background. The motivation for capturing the relative similarity is to punish subpaths that cannot distinguish themselves from their background, e.g., those found in a block of high-similarity regions caused by certain acoustic noise. Distortion score Warped subpaths are less likely to correspond to valid matching patterns than straighter ones. In addition to removing very distorted subpaths by applying a distortion threshold as in (Park and Glass, 2008), we also quantitatively measured the remaining ones. We fit each of them with leastsquare linear regression and estimate the residue scores. As discussed above, each point on a subpath satisfies |xi −yi| < T, so the residue cannot be bigger than T. We used this to normalize the distortion scores to the range of [0,1]. Subpath length Given two subpaths with nearly identical average similarity scores, we suggest that the longer of the two is more likely to refer to content of interest that is shared between two speech utterances, e.g., named entities. Longer subpaths may in this sense therefore be more useful in identifying similarities and redundancies within a speech summarization system. 
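Of the subpath features above, the distortion score is the least standard, so one possible computation is sketched below: fit a least-squares line to the subpath points and normalize the residual by the distortion bound T. The mean-absolute-residual choice and the function name are assumptions made for illustration; the exact residue computation used in the experiments may differ.

    import numpy as np

    def distortion_score(subpath, T):
        # subpath: list of (x, y) frame-index pairs along one alignment.
        # Fit a least-squares line y = a*x + b and return the mean absolute
        # residual, scaled by the distortion bound T so the score lies in [0, 1].
        xs = np.array([p[0] for p in subpath], dtype=float)
        ys = np.array([p[1] for p in subpath], dtype=float)
        a, b = np.polyfit(xs, ys, deg=1)
        residuals = np.abs(ys - (a * xs + b))
        return float(residuals.mean()) / T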
As discussed above, since the length of a subpath len(P ′) has been proven to fall between L and 2L −1, i.e., L ≤len(P ′) ≤2L −1, given a parameter L, we normalize the path length to (len(P ′) −L)/L, corresponding to the range [0,1). The similarity scores of subpaths can vary widely over different spoken documents. We do not use the raw similarity score of a subpath, but rather its rank. For example, given an utterance pair, the top-1 subpath is more likely to be a true alignment than the rest, even if its distortion score may be higher. The similarity ranks are combined with distortion scores and subpath lengths simply as follows. We divide subpaths into the top 1, 3, 5, and 10 by their raw similarity scores. For subpaths in each group, we check whether their distortion scores are below and lengths are above 553 some thresholds. If they are, in any group, then the corresponding subpaths are selected as “true” alignments for the purposes of building utterancelevel similarity matrix. The numbers of true alignments are used to measure the similarity between two utterances. We therefore have 8 threshold parameters to estimate, and subpaths with similarity scores outside the top 10 are ignored. The rank groups are checked one after another in a decision list. Powell’s algorithm (Press et al., 2007) is used to find the optimal parameters that directly minimize summarization errors made by the acousticsbased model relative to utterances selected from manual transcripts. 3.3 Extractive summarization Once the similarity matrix between sentences in a topic is acquired, we can conduct extractive summarization by using the matrix to estimate both similarity and redundancy. As discussed above, we take the general framework of MMR and MEAD, i.e., a linear model combining salience and redundancy. In practice, we used MMR in our experiments, since the original MEAD considers also sentence positions 3 , which can always been added later as in (Penn and Zhu, 2008). To facilitate our discussion below, we briefly revisit MMR here. MMR (Carbonell and Goldstein, 1998) iteratively augments the summary with utterances that are most similar to the document set under consideration, but most dissimilar to the previously selected utterances in that summary, as shown in the equation below. Here, the sim1 term represents the similarity between a sentence and the document set it belongs to. The assumption is that a sentence having a higher sim1 would better represent the content of the documents. The sim2 term represents the similarity between a candidate sentence and sentences already in the summary. It is used to control redundancy. For the transcriptbased systems, the sim1 and sim2 scores in this paper are measured by the number of words shared between a sentence and a sentence/document set mentioned above, weighted by the idf scores of these words, which is similar to the calculation of sentence centroid values by Radev et al. (2004). 3The usefulness of position varies significantly in different genres (Penn and Zhu, 2008). Even in the news domain, the style of broadcast news differs from written news, for example, the first sentence often serves to attract audiences (Christensen et al., 2004) and is hence less important as in written news. Without consideration of position, MEAD is more similar to MMR. Note that the acoustics-based approach estimates this by using the method discussed above in Section 3.2. 
Nextsent = argmax tnr,j (λ sim1(doc, tnr,j) −(1 −λ)maxtr,ksim2(tnr,j, tr,k)) 4 Experimental setup We use the TDT-4 dataset for our evaluation, which consists of annotated news broadcasts grouped into common topics. Since our aim in this paper is to study the achievable performance of the audio-based model, we grouped together news stories by their news anchors for each topic. Then we selected the largest 20 groups for our experiments. Each of these contained between 5 and 20 articles. We compare our acoustics-only approach against transcripts produced automatically from two ASR systems. The first set of transcripts was obtained directly from the TDT-4 database. These transcripts contain a word error rate of 12.6%, which is comparable to the best accuracies obtained in the literature on this data set. We also run a custom ASR system designed to produce transcripts at various degrees of accuracy in order to simulate the type of performance one might expect given languages with sparser training corpora. These custom acoustic models consist of context-dependent tri-phone units trained on HUB-4 broadcast news data by sequential Viterbi forced alignment. During each round of forced alignment, the maximum likelihood linear regression (MLLR) transform is used on gender-dependent models to improve the alignment quality. Language models are also trained on HUB-4 data. Our aim in this paper is to study the achievable performance of the audio-based model. Instead of evaluating the result against human generated summaries, we directly compare the performance against the summaries obtained by using manual transcripts, which we take as an upper bound to the audio-based system’s performance. This obviously does not preclude using the audio-based system together with other features such as utterance position, length, speaker’s roles, and most others used in the literature (Penn and Zhu, 2008). Here, we do not want our results to be affected by them with the hope of observing the difference accurately. As such, we quantify success based on ROUGE (Lin, 2004) scores. Our goal is to evalu554 ate whether the relatedness of spoken documents can reasonably be gleaned solely from the surface acoustic information. 5 Experimental results We aim to empirically determine the extent to which acoustic information alone can effectively replace conventional speech recognition within the multi-document speech summarization task. Since ASR performance can vary greatly as we discussed above, we compare our system against automatic transcripts having word error rates of 12.6%, 20.9%, 29.2%, and 35.5% on the same speech source. We changed our language models by restricting the training data so as to obtain the worst WER and then interpolated the corresponding transcripts with the TDT-4 original automatic transcripts to obtain the rest. Figure 2 shows ROUGE scores for our acoustics-only system, as depicted by horizontal lines, as well as those for the extractive summaries given automatic transcripts having different WERs, as depicted by points. Dotted lines represent the 95% confidence intervals of the transcript-based models. Figure 2 reveals that, typically, as the WERs of automatic transcripts increase to around 33%-37%, the difference between the transcript-based and the acoustics-based models is no longer significant. These observations are consistent across summaries with different fixed lengths, namely 10%, 20%, and 30% of the lengths of the source documents for the top, middle, and bottom rows of Figure 2, respectively. 
The consistency of this trend is shown across both ROUGE-2 and ROUGE-SU4, which are the official measures used in the DUC evaluation. We also varied the MMR parameter λ within a typical range of 0.4–1, which yielded the same observation. Since the acoustics-based approach can in principle be applied to any data domain and any language, it is of special interest in settings where conventional ASR yields relatively high WER. Figure 2 also shows the ROUGE scores achievable by selecting utterances uniformly at random for extractive summarization; these are significantly lower than all other presented methods and corroborate the usefulness of acoustic information.

Figure 2: ROUGE scores and 95% confidence intervals for the MMR-based extractive summaries produced from our acoustics-only approach (horizontal lines), and from ASR-generated transcripts having varying WER (points). The top, middle, and bottom rows of subfigures correspond to summaries whose lengths are fixed at 10%, 20%, and 30% of the sizes of the source text, respectively. λ in MMR takes 1, 0.7, and 0.4 in these rows, respectively. (Each panel plots ROUGE-SU4 or ROUGE-2 against word error rate; the random-selection baselines are 0.197/0.340/0.402 for ROUGE-SU4 and 0.176/0.324/0.389 for ROUGE-2 at the three summary lengths.)

Although our acoustics-based method performs similarly to automatic transcripts with 33–37% WER, the errors observed are not the same, which we attribute to fundamental differences between these two methods. Table 1 presents the number of different utterances correctly selected by the acoustics-based and ASR-based methods across three categories, namely those sentences that are correctly selected by both methods, those appearing only in the acoustics-based summaries, and those appearing only in the ASR-based summaries. These are shown for summaries having different proportional lengths relative to the source documents and at different WERs. Again, correctness here means that the utterance is also selected when using a manual transcript, since that is our defined topline.

                Summ. length   Both   ASR only   Aco. only
    WER=12.6%       10%          85       37          8
                    20%         185       62         12
                    30%         297       87         20
    WER=20.9%       10%          83       36         10
                    20%         178       65         19
                    30%         293       79         24
    WER=29.2%       10%          77       34         16
                    20%         172       58         25
                    30%         286       64         31
    WER=35.5%       10%          75       33         18
                    20%         164       54         33
                    30%         272       67         45

Table 1: Utterances correctly selected by both the ASR-based models and the acoustics-based approach, or by only one of them, under different WERs (12.6%, 20.9%, 29.2%, and 35.5%) and summary lengths (10%, 20%, and 30% of the utterances of the original documents).

A manual analysis of the corpus shows that utterances correctly included in summaries by the acoustics-based method often contain out-of-vocabulary errors in the corresponding ASR transcripts. For example, given the news topic of the bombing of the U.S. destroyer ship Cole in Yemen, the ASR-based method always mistook the word Cole, which was not in the vocabulary, for cold, khol, and called.
Although named entities and domain-specific terms are often highly relevant to the documents in which they are referenced, these types of words are often not included in ASR vocabularies, due to their relative global rarity. Importantly, an unsupervised acoustics-based approach such as ours does not suffer from this fundamental discord. At the very least, these findings suggest that ASR-based summarization systems augmented with our type of approach might be more robust against out-of-vocabulary errors. It is, however, very encouraging that an acousticsbased approach can perform to within a typical WER range within non-broadcast-news domains, although those domains can likewise be more challenging for the acoustics-based approach. Further experimentation is necessary. It is also of scientific interest to be able to quantify this WER as an acoustics-only baseline for further research on ASR-based spoken document summarizers. 6 Conclusions and future work In text summarization, statistics based on word counts have traditionally served as the foundation of state-of-the-art models. In this paper, the similarity of utterances is estimated directly from recurring acoustic patterns in untranscribed audio sequences. These relatedness scores are then integrated into a maximum marginal relevance linear model to estimate the salience and redundancy of those utterance for extractive summarization. Our empirical results show that the summarization performance given acoustic information alone is statistically indistinguishable from that of modern ASR on broadcast news in cases where the WER of the latter approaches 33%-37%. This is an encouraging result in cases where summarization is required, but ASR is not available or speech recognition performance is degraded. Additional analysis suggests that the acoustics-based approach is useful in overcoming situations where out-ofvocabulary error may be more prevalent, and we suggest that a hybrid approach of traditional ASR with acoustics-based pattern matching may be the most desirable future direction of research. One limitation of the current analysis is that summaries are extracted only for collections of spoken documents from among similar speakers. Namely, none of the topics under analysis consists of a mix of male and female speakers. We are currently investigating supervised methods to learn joint probabilistic models relating the acoustics of groups of speakers in order to normalize acoustic similarity matrices (Toda et al., 2001). We suggest that if a stochastic transfer function between male and female voices can be estimated, then the somewhat disparate acoustics of these groups of speakers may be more easily compared. References R. Barzilay, K. McKeown, and M. Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proc. of the 37th Association for Computational Linguistics, pages 550–557. J. G. Carbonell and J. Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st annual international ACM SIGIR conference on research and development in information retrieval, pages 335–336. H. Christensen, B. Kolluru, Y. Gotoh, and S. Renals. 2004. From text summarisation to style-specific 556 summarisation for broadcast news. In Proceedings of the 26th European Conference on Information Retrieval (ECIR-2004), pages 223–237. S. Furui, T. Kikuichi, Y. Shinnaka, and C. Hori. 2003. Speech-to-speech and speech to text summarization. 
In First International workshop on Language Understanding and Agents for Real World Interaction. M. Gajjar, R. Govindarajan, and T. V. Sreenivas. 2008. Online unsupervised pattern discovery in speech using parallelization. In Proc. Interspeech, pages 2458–2461. L. He, E. Sanocki, A. Gupta, and J. Grudin. 1999. Auto-summarization of audio-video presentations. In Proceedings of the seventh ACM international conference on Multimedia, pages 489–498. L. He, E. Sanocki, A. Gupta, and J. Grudin. 2000. Comparing presentation summaries: Slides vs. reading vs. listening. In Proceedings of ACM CHI, pages 177–184. Y. Lin, T. Jiang, and Chao. K. 2002. Efficient algorithms for locating the length-constrained heaviest segments with applications to biomolecular sequence analysis. J. Computer and System Science, 63(3):570–586. C. Lin. 2004. Rouge: a package for automatic evaluation of summaries. In Proceedings of the 42st Annual Meeting of the Association for Computational Linguistics (ACL), Text Summarization Branches Out Workshop, pages 74–81. I Malioutov, A. Park, B. Barzilay, and J. Glass. 2007. Making sense of sound: Unsupervised topic segmentation over acoustic input. In Proc. ACL, pages 504–511. S. Maskey and J. Hirschberg. 2005. Comparing lexial, acoustic/prosodic, discourse and structural features for speech summarization. In Proceedings of the 9th European Conference on Speech Communication and Technology (Eurospeech), pages 621–624. K. Mckeown and D.R. Radev. 1995. Generating summaries of multiple news articles. In Proc. of SIGIR, pages 72–82. C. Munteanu, R. Baecker, G Penn, E. Toms, and E. James. 2006. Effect of speech recognition accuracy rates on the usefulness and usability of webcast archives. In Proceedings of SIGCHI, pages 493–502. G. Murray, S. Renals, and J. Carletta. 2005. Extractive summarization of meeting recordings. In Proceedings of the 9th European Conference on Speech Communication and Technology (Eurospeech), pages 593–596. A. Park and J. Glass. 2006. Unsupervised word acquisition from speech using pattern discovery. Proc. ICASSP, pages 409–412. A. Park and J. Glass. 2008. Unsupervised pattern discovery in speech. IEEE Trans. ASLP, 16(1):186– 197. G. Penn and X. Zhu. 2008. A critical reassessment of evaluation baselines for speech summarization. In Proc. of the 46th Association for Computational Linguistics, pages 407–478. W.H. Press, S.A. Teukolsky, W.T. Vetterling, and B.P. Flannery. 2007. Numerical recipes: The art of science computing. D. Radev and K. McKeown. 1998. Generating natural language summaries from multiple on-line sources. In Computational Linguistics, pages 469–500. D. Radev, H. Jing, M. Stys, and D. Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40:919–938. T. Toda, H. Saruwatari, and K. Shikano. 2001. Voice conversion algorithm based on gaussian mixture model with dynamic frequency warping of straight spectrum. In Proc. ICASPP, pages 841–844. S. Tucker and S. Whittaker. 2008. Temporal compression of speech: an evaluation. IEEE Transactions on Audio, Speech and Language Processing, pages 790–796. K. Zechner. 2001. Automatic Summarization of Spoken Dialogues in Unrestricted Domains. Ph.D. thesis, Carnegie Mellon University. J. Zhang, H. Chan, P. Fung, and L Cao. 2007. Comparative study on speech summarization of broadcast news and lecture speech. In Proc. of Interspeech, pages 2781–2784. X. Zhu and G. Penn. 2006. Summarization of spontaneous conversations. 
In Proceedings of the 9th International Conference on Spoken Language Processing, pages 1531–1534.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 558–566, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Improving Tree-to-Tree Translation with Packed Forests Yang Liu and Yajuan L¨u and Qun Liu Key Laboratory of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences P.O. Box 2704, Beijing 100190, China {yliu,lvyajuan,liuqun}@ict.ac.cn Abstract Current tree-to-tree models suffer from parsing errors as they usually use only 1best parses for rule extraction and decoding. We instead propose a forest-based tree-to-tree model that uses packed forests. The model is based on a probabilistic synchronous tree substitution grammar (STSG), which can be learned from aligned forest pairs automatically. The decoder finds ways of decomposing trees in the source forest into elementary trees using the source projection of STSG while building target forest in parallel. Comparable to the state-of-the-art phrase-based system Moses, using packed forests in tree-to-tree translation results in a significant absolute improvement of 3.6 BLEU points over using 1-best trees. 1 Introduction Approaches to syntax-based statistical machine translation make use of parallel data with syntactic annotations, either in the form of phrase structure trees or dependency trees. They can be roughly divided into three categories: string-to-tree models (e.g., (Galley et al., 2006; Marcu et al., 2006; Shen et al., 2008)), tree-to-string models (e.g., (Liu et al., 2006; Huang et al., 2006)), and tree-totree models (e.g., (Eisner, 2003; Ding and Palmer, 2005; Cowan et al., 2006; Zhang et al., 2008)). By modeling the syntax of both source and target languages, tree-to-tree approaches have the potential benefit of providing rules linguistically better motivated. However, while string-to-tree and tree-to-string models demonstrate promising results in empirical evaluations, tree-to-tree models have still been underachieving. We believe that tree-to-tree models face two major challenges. First, tree-to-tree models are more vulnerable to parsing errors. Obtaining syntactic annotations in quantity usually entails running automatic parsers on a parallel corpus. As the amount and domain of the data used to train parsers are relatively limited, parsers will inevitably output ill-formed trees when handling real-world text. Guided by such noisy syntactic information, syntax-based models that rely on 1-best parses are prone to learn noisy translation rules in training phase and produce degenerate translations in decoding phase (Quirk and CorstonOliver, 2006). This situation aggravates for treeto-tree models that use syntax on both sides. Second, tree-to-tree rules provide poorer rule coverage. As a tree-to-tree rule requires that there must be trees on both sides, tree-to-tree models lose a larger amount of linguistically unmotivated mappings. Studies reveal that the absence of such non-syntactic mappings will impair translation quality dramatically (Marcu et al., 2006; Liu et al., 2007; DeNeefe et al., 2007; Zhang et al., 2008). Compactly encoding exponentially many parses, packed forests prove to be an excellent fit for alleviating the above two problems (Mi et al., 2008; Mi and Huang, 2008). In this paper, we propose a forest-based tree-to-tree model. To learn STSG rules from aligned forest pairs, we introduce a series of notions for identifying minimal tree-to-tree rules. 
Our decoder first converts the source forest to a translation forest and then finds the best derivation that has the source yield of one source tree in the forest. Comparable to Moses, our forest-based tree-to-tree model achieves an absolute improvement of 3.6 BLEU points over conventional tree-based model. 558 IP1 NP2 VP3 PP4 VP-B5 NP-B6 NP-B7 NP-B8 NR9 CC10P 11 NR12 VV13 AS14 NN15 bushi yu shalong juxing le huitan Bush held a talk with Sharon NNP16 VBD17 DT18 NN19 IN20 NNP21 NP22 NP23 NP24 NP25 PP26 NP27 VP28 S 29 Figure 1: An aligned packed forest pair. Each node is assigned a unique identity for reference. The solid lines denote hyperedges and the dashed lines denote word alignments. Shaded nodes are frontier nodes. 2 Model Figure 1 shows an aligned forest pair for a Chinese sentence and an English sentence. The solid lines denote hyperedges and the dashed lines denote word alignments between the two forests. Each node is assigned a unique identity for reference. Each hyperedge is associated with a probability, which we omit in Figure 1 for clarity. In a forest, a node usually has multiple incoming hyperedges. We use IN(v) to denote the set of incoming hyperedges of node v. For example, the source node “IP1” has following two incoming hyperedges: 1 e1 = ⟨(NP-B6, VP3), IP1⟩ e2 = ⟨(NP2, VP-B5), IP1⟩ 1As there are both source and target forests, it might be confusing by just using a span to refer to a node. In addition, some nodes will often have the same labels and spans. Therefore, it is more convenient to use an identity for referring to a node. The notation “IP1” denotes the node that has a label of “IP” and has an identity of “1”. Formally, a packed parse forest is a compact representation of all the derivations (i.e., parse trees) for a given sentence under a context-free grammar. Huang and Chiang (2005) define a forest as a tuple ⟨V, E, ¯v, R⟩, where V is a finite set of nodes, E is a finite set of hyperedges, ¯v ∈V is a distinguished node that denotes the goal item in parsing, and R is the set of weights. For a given sentence w1:l = w1 . . . wl, each node v ∈V is in the form of Xi,j, which denotes the recognition of non-terminal X spanning the substring from positions i through j (that is, wi+1 . . . wj). Each hyperedge e ∈E is a triple e = ⟨T(e), h(e), f(e)⟩, where h(e) ∈V is its head, T(e) ∈V ∗is a vector of tail nodes, and f(e) is a weight function from R|T(e)| to R. Our forest-based tree-to-tree model is based on a probabilistic STSG (Eisner, 2003). Formally, an STSG can be defined as a quintuple G = ⟨Fs, Ft, Ss, St, P⟩, where • Fs and Ft are the source and target alphabets, respectively, • Ss and St are the source and target start symbols, and • P is a set of production rules. A rule r is a triple ⟨ts, tt, ∼⟩that describes the correspondence ∼between a source tree ts and a target tree tt. To integrate packed forests into tree-to-tree translation, we model the process of synchronous generation of a source forest Fs and a target forest Ft using a probabilistic STSG grammar: Pr(Fs, Ft) = X Ts∈Fs X Tt∈Ft Pr(Ts, Tt) = X Ts∈Fs X Tt∈Ft X d∈D Pr(d) = X Ts∈Fs X Tt∈Ft X d∈D Y r∈d p(r) (1) where Ts is a source tree, Tt is a target tree, D is the set of all possible derivations that transform Ts into Tt, d is one such derivation, and r is a tree-totree rule. Table 1 shows a derivation of the forest pair in Figure 1. A derivation is a sequence of tree-to-tree rules. Note that we use x to represent a nonterminal. 
559 (1) IP(x1:NP-B, x2:VP) →S(x1:NP, x2:VP) (2) NP-B(x1:NR) →NP(x1:NNP) (3) NR(bushi) →NNP(Bush) (4) VP(x1:PP, VP-B(x2:VV, AS(le), x3:NP-B)) →VP(x2:VBD, NP(DT(a), x3:NP), x1:PP) (5) PP(x1:P, x2:NP-B) →PP(x1:IN, x2:NP) (6) P(yu) →IN(with) (7) NP-B(x1:NR) →NP(x1:NP) (8) NR(shalong) →NNP(Sharon) (9) VV(juxing) →VBD(held) (10) NP-B(x1:NN) →NP(x1:NN) (11) NN(huitan) →NN(talk) Table 1: A minimal derivation of the forest pair in Figure 1. id span cspan complement consistent frontier counterparts 1 1-6 1-2, 4-6 1 1 29 2 1-3 1, 5-6 2, 4 0 0 3 2-6 2, 4-6 1 1 1 28 4 2-3 5-6 1-2, 4 1 1 25, 26 5 4-6 2, 4 1, 5-6 1 0 6 1-1 1 2, 4-6 1 1 16, 22 7 3-3 6 1-2, 4-5 1 1 21, 24 8 6-6 4 1-2, 5-6 1 1 19, 23 9 1-1 1 2, 4-6 1 1 16, 22 10 2-2 5 1-2, 4, 6 1 1 20 11 2-2 5 1-2, 4, 6 1 1 20 12 3-3 6 1-2, 4-5 1 1 21, 24 13 4-4 2 1, 4-6 1 1 17 14 5-5 1-2, 4-6 1 0 15 6-6 4 1-2, 5-6 1 1 19, 23 16 1-1 1 2-4, 6 1 1 6, 9 17 2-2 4 1-3, 6 1 1 13 18 3-3 1-4, 6 1 0 19 4-4 6 1-4 1 1 8, 15 20 5-5 2 1, 3-4, 6 1 1 10, 11 21 6-6 3 1-2, 4, 6 1 1 7, 12 22 1-1 1 2-4, 6 1 1 6, 9 23 3-4 6 1-4 1 1 8, 15 24 6-6 3 1-2, 4, 6 1 1 7, 12 25 5-6 2-3 1, 4, 6 1 1 4 26 5-6 2-3 1, 4, 6 1 1 4 27 3-6 2-3, 6 1, 4 0 0 28 2-6 2-4, 6 1 1 1 3 29 1-6 1-4, 6 1 1 1 Table 2: Node attributes of the example forest pair. 3 Rule Extraction Given an aligned forest pair as shown in Figure 1, how to extract all valid tree-to-tree rules that explain its synchronous generation process? By constructing a theory that gives formal semantics to word alignments, Galley et al. (2004) give principled answers to these questions for extracting tree-to-string rules. Their GHKM procedure draws connections among word alignments, derivations, and rules. They first identify the tree nodes that subsume tree-string pairs consistent with word alignments and then extract rules from these nodes. By this means, GHKM proves to be able to extract all valid tree-to-string rules from training instances. Although originally developed for the tree-to-string case, it is possible to extend GHKM to extract all valid tree-to-tree rules from aligned packed forests. In this section, we introduce our tree-to-tree rule extraction method adapted from GHKM, which involves four steps: (1) identifying the correspondence between the nodes in forest pairs, (2) identifying minimum rules, (3) inferring composed rules, and (4) estimating rule probabilities. 3.1 Identifying Correspondence Between Nodes To learn tree-to-tree rules, we need to find aligned tree pairs in the forest pairs. To do this, the starting point is to identify the correspondence between nodes. We propose a number of attributes for nodes, most of which derive from GHKM, to facilitate the identification. Definition 1 Given a node v, its span σ(v) is an index set of the words it covers. For example, the span of the source node “VP-B5” is {4, 5, 6} as it covers three source words: “juxing”, “le”, and “huitan”. For convenience, we use {4-6} to denotes a contiguous span {4, 5, 6}. Definition 2 Given a node v, its corresponding span γ(v) is the index set of aligned words on another side. For example, the corresponding span of the source node “VP-B5” is {2, 4}, corresponding to the target words “held” and “talk”. Definition 3 Given a node v, its complement span δ(v) is the union of corresponding spans of nodes that are neither antecedents nor descendants of v. For example, the complement span of the source node “VP-B5” is {1, 5-6}, corresponding to target words “Bush”, “with”, and “Sharon”. 
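To make Definitions 1–3 concrete, the following sketch computes these span attributes from a word alignment represented as a set of (source index, target index) pairs. The node representation, attribute names, and functions are hypothetical conveniences for illustration, not the paper's implementation; for target-side nodes the alignment pairs would simply be flipped.

    from dataclasses import dataclass, field

    @dataclass(eq=False)                 # identity-based equality/hashing
    class Node:
        label: str
        span: set                        # Definition 1: indices of the words the node covers
        ancestors: set = field(default_factory=set)
        descendants: set = field(default_factory=set)

    def corresponding_span(node, alignment):
        # Definition 2, gamma(v): indices of words on the other side aligned
        # to some word inside node.span.
        return {t for s, t in alignment if s in node.span}

    def complement_span(node, all_nodes, alignment):
        # Definition 3, delta(v): union of corresponding spans of nodes that
        # are neither ancestors nor descendants of `node`.
        out = set()
        for other in all_nodes:
            if other is node or other in node.ancestors or other in node.descendants:
                continue
            out |= corresponding_span(other, alignment)
        return out

    def closure(indices):
        # Smallest contiguous index range covering `indices` (used in Definition 4 below).
        return set(range(min(indices), max(indices) + 1)) if indices else set()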
Definition 4 A node v is said to be consistent with alignment if and only if closure(γ(v))∩δ(v) = ∅. For example, the closure of the corresponding span of the source node “VP-B5” is {2-4} and its complement span is {1, 5-6}. As the intersection of the closure and the complement span is an empty set, the source node “VP-B5” is consistent with the alignment. 560 PP 4 NP-B7 P 11 NR12 PP 4 P 11 NP-B7 PP 4 NP-B7 P 11 NR12 PP26 IN20 NP24 NNP21 PP 4 P 11 NP-B7 PP26 IN 20 NP24 (a) (b) (c) (d) Figure 2: (a) A frontier tree; (b) a minimal frontier tree; (c) a frontier tree pair; (d) a minimal frontier tree pair. All trees are taken from the example forest pair in Figure 1. Shaded nodes are frontier nodes. Each node is assigned an identity for reference. Definition 5 A node v is said to be a frontier node if and only if: 1. v is consistent; 2. There exists at least one consistent node v′ on another side satisfying: • closure(γ(v′)) ⊆σ(v); • closure(γ(v)) ⊆σ(v′). v′ is said to be a counterpart of v. We use τ(v) to denote the set of counterparts of v. A frontier node often has multiple counterparts on another side due to the usage of unary rules in parsers. For example, the source node “NP-B6” has two counterparts on the target side: “NNP16” and “NP22”. Conversely, the target node “NNP16” also has two counterparts counterparts on the source side: “NR9” and “NP-B6”. The node attributes of the example forest pair are listed in Table 2. We use identities to refer to nodes. “cspan” denotes corresponding span and “complement” denotes complement span. In Figure 1, there are 12 frontier nodes (highlighted by shading) on the source side and 12 frontier nodes on the target side. Note that while a consistent node is equal to a frontier node in GHKM, this is not the case in our method because we have a tree on the target side. Frontier nodes play a critical role in forest-based rule extraction because they indicate where to cut the forest pairs to obtain treeto-tree rules. 3.2 Identifying Minimum Rules Given the frontier nodes, the next step is to identify aligned tree pairs, from which tree-to-tree rules derive. Following Galley et al. (2006), we distinguish between minimal and composed rules. As a composed rule can be decomposed as a sequence of minimal rules, we are particularly interested in how to extract minimal rules. Also, we introduce a number of notions to help identify minimal rules. Definition 6 A frontier tree is a subtree in a forest satisfying: 1. Its root is a frontier node; 2. If the tree contains only one node, it must be a lexicalized frontier node; 3. If the tree contains more than one nodes, its leaves are either non-lexicalized frontier nodes or lexicalized non-frontier nodes. For example, Figure 2(a) shows a frontier tree in which all nodes are frontier nodes. Definition 7 A minimal frontier tree is a frontier tree such that all nodes other than the root and leaves are non-frontier nodes. For example, Figure 2(b) shows a minimal frontier tree. Definition 8 A frontier tree pair is a triple ⟨ts, tt, ∼⟩satisfying: 1. ts is a source frontier tree; 561 2. tt is a target frontier tree; 3. The root of ts is a counterpart of that of tt; 4. There is a one-to-one correspondence ∼between the frontier leaves of ts and tt. For example, Figure 2(c) shows a frontier tree pair. Definition 9 A frontier tree pair ⟨ts, tt, ∼⟩is said to be a subgraph of another frontier tree pair ⟨ts′, tt′, ∼′⟩if and only if: 1. root(ts) = root(ts′); 2. root(tt) = root(tt′); 3. ts is a subgraph of ts′; 4. 
tt is a subgraph of tt′. For example, the frontier tree pair shown in Figure 2(d) is a subgraph of that in Figure 2(c). Definition 10 A frontier tree pair is said to be minimal if and only if it is not a subgraph of any other frontier tree pair that shares with the same root. For example, Figure 2(d) shows a minimal frontier tree pair. Our goal is to find the minimal frontier tree pairs, which correspond to minimal tree-to-tree rules. For example, the tree pair shown in Figure 2(d) denotes a minimal rule as follows: PP(x1:P,x2:NP-B) →PP(x1:IN, x2:NP) Figure 3 shows the algorithm for identifying minimal frontier tree pairs. The input is a source forest Fs, a target forest Ft, and a source frontier node v (line 1). We use a set P to store collected minimal frontier tree pairs (line 2). We first call the procedure FINDTREES(Fs, v) to identify a set of frontier trees rooted at v in Fs (line 3). For example, for the source frontier node “PP4” in Figure 1, we obtain two frontier trees: (PP4(P11)(NP-B7)) (PP4(P11)(NP-B7(NR12))) Then, we try to find the set of corresponding target frontier trees (i.e., Tt). For each counterpart v′ of v (line 5), we call the procedure FINDTREES(Ft, v′) to identify a set of frontier trees rooted at v′ in Ft (line 6). For example, the source 1: procedure FINDTREEPAIRS(Fs, Ft, v) 2: P = ∅ 3: Ts ←FINDTREES(Fs, v) 4: Tt ←∅ 5: for v′ ∈τ(v) do 6: Tt ←Tt∪FINDTREES(Ft, v′) 7: end for 8: for ⟨ts, tt⟩∈Ts × Tt do 9: if ts ∼tt then 10: P ←P ∪{⟨ts, tt, ∼⟩} 11: end if 12: end for 13: for ⟨ts, tt, ∼⟩∈P do 14: if ∃⟨ts′, tt′, ∼′⟩∈P : ⟨ts′, tt′, ∼′⟩⊆ ⟨ts, tt, ∼⟩then 15: P ←P −{⟨ts, tt, ∼⟩} 16: end if 17: end for 18: end procedure Figure 3: Algorithm for identifying minimal frontier tree pairs. frontier node “PP4” has two counterparts on the target side: “NP25” and “PP26”. There are four target frontier trees rooted at the two nodes: (NP25(IN20)(NP24)) (NP25(IN20)(NP24(NNP21))) (PP26(IN20)(NP24)) (PP26(IN20)(NP24(NNP21))) Therefore, there are 2 × 4 = 8 pairs of trees. We examine each tree pair ⟨ts, tt⟩(line 8) to see whether it is a frontier tree pair (line 9) and then update P (line 10). In the above example, all the eight tree pairs are frontier tree pairs. Finally, we keep only minimal frontier tree pairs in P (lines 13-15). As a result, we obtain the following two minimal frontier tree pairs for the source frontier node “PP4”: (PP4(P11)(NP-B7)) ↔(NP25(IN20)(NP24)) (PP4(P11)(NP-B7)) ↔(PP26(IN20)(NP24)) To maintain a reasonable rule table size, we restrict that the number of nodes in a tree of an STSG rule is no greater than n, which we refer to as maximal node count. It seems more efficient to let the procedure FINDTREES(F, v) to search for minimal frontier 562 trees rather than frontier trees. However, a minimal frontier tree pair is not necessarily a pair of minimal frontier trees. On our Chinese-English corpus, we find that 38% of minimal frontier tree pairs are not pairs of minimal frontier trees. As a result, we have to first collect all frontier tree pairs and then decide on the minimal ones. Table 1 shows some minimal rules extracted from the forest pair shown in Figure 1. 3.3 Inferring Composed Rules After minimal rules are learned, composed rules can be obtained by composing two or more minimal rules. For example, the composition of the second rule and the third rule in Table 1 produces a new rule: NP-B(NR(shalong)) →NP(NNP(Sharon)) While minimal rules derive from minimal frontier tree pairs, composed rules correspond to nonminimal frontier tree pairs. 
3.4 Estimating Rule Probabilities We follow Mi and Huang (2008) to estimate the fractional count of a rule extracted from an aligned forest pair. Intuitively, the relative frequency of a subtree that occurs in a forest is the sum of all the trees that traverse the subtree divided by the sum of all trees in the forest. Instead of enumerating all trees explicitly and computing the sum of tree probabilities, we resort to inside and outside probabilities for efficient calculation: c(r) = p(ts) × α(root(ts)) × Q v∈leaves(ts) β(v) β(¯vs) × p(tt) × α(root(tt)) × Q v∈leaves(tt) β(v) β(¯vt) where c(r) is the fractional count of a rule, ts is the source tree in r, tt is the target tree in r, root(·) a function that gets tree root, leaves(·) is a function that gets tree leaves, and α(v) and β(v) are outside and inside probabilities, respectively. 4 Decoding Given a source packed forest Fs, our decoder finds the target yield of the single best derivation d that has source yield of Ts(d) ∈Fs: ˆe = e argmax d s.t. Ts(d)∈Fs p(d) ! (2) We extend the model in Eq. 1 to a log-linear model (Och and Ney, 2002) that uses the following eight features: relative frequencies in two directions, lexical weights in two directions, number of rules used, language model score, number of target words produced, and the probability of matched source tree (Mi et al., 2008). Given a source parse forest and an STSG grammar G, we first apply the conversion algorithm proposed by Mi et al. (2008) to produce a translation forest. The translation forest has a similar hypergraph structure. While the nodes are the same as those of the parse forest, each hyperedge is associated with an STSG rule. Then, the decoder runs on the translation forest. We use the cube pruning method (Chiang, 2007) to approximately intersect the translation forest with the language model. Traversing the translation forest in a bottom-up order, the decoder tries to build target parses at each node. After the first pass, we use lazy Algorithm 3 (Huang and Chiang, 2005) to generate k-best translations for minimum error rate training. 5 Experiments 5.1 Data Preparation We evaluated our model on Chinese-to-English translation. The training corpus contains 840K Chinese words and 950K English words. A trigram language model was trained on the English sentences of the training corpus. We used the 2002 NIST MT Evaluation test set as our development set, and used the 2005 NIST MT Evaluation test set as our test set. We evaluated the translation quality using the BLEU metric, as calculated by mteval-v11b.pl with its default setting except that we used case-insensitive matching of n-grams. To obtain packed forests, we used the Chinese parser (Xiong et al., 2005) modified by Haitao Mi and the English parser (Charniak and Johnson, 2005) modified by Liang Huang to produce entire parse forests. Then, we ran the Python scripts (Huang, 2008) provided by Liang Huang to output packed forests. To prune the packed forests, Huang (2008) uses inside and outside probabilities to compute the distance of the best derivation that traverses a hyperedge away from the globally best derivation. A hyperedge will be pruned away if the difference is greater than a threshold p. Nodes with all incoming hyperedges pruned are also pruned. 
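This marginal-based pruning can be sketched as follows, using Viterbi inside and outside scores in negative log space (lower is better). The sketch is our own schematic rendering of the idea, not the scripts actually used, and the hypergraph encoding (a bottom-up topological node order plus incoming hyperedges grouped by head) is an assumption for illustration.

    from collections import defaultdict

    def viterbi_inside(nodes_topo, edges_by_head):
        # nodes_topo: nodes in bottom-up topological order (leaves first, root last).
        # edges_by_head[v]: list of (tails, weight) incoming hyperedges of v.
        inside = {}
        for v in nodes_topo:
            cands = [w + sum(inside[t] for t in tails)
                     for (tails, w) in edges_by_head.get(v, [])]
            inside[v] = min(cands) if cands else 0.0   # leaves cost nothing
        return inside

    def viterbi_outside(nodes_topo, edges_by_head, inside, root):
        outside = {v: float('inf') for v in nodes_topo}
        outside[root] = 0.0
        for v in reversed(nodes_topo):                 # root first
            if outside[v] == float('inf'):
                continue                               # unreachable from the root
            for (tails, w) in edges_by_head.get(v, []):
                for i, t in enumerate(tails):
                    rest = sum(inside[u] for j, u in enumerate(tails) if j != i)
                    outside[t] = min(outside[t], outside[v] + w + rest)
        return outside

    def prune(nodes_topo, edges_by_head, root, p):
        inside = viterbi_inside(nodes_topo, edges_by_head)
        outside = viterbi_outside(nodes_topo, edges_by_head, inside, root)
        best = inside[root]                            # globally best derivation
        kept = defaultdict(list)
        for head, edges in edges_by_head.items():
            for (tails, w) in edges:
                # Best derivation forced to pass through this hyperedge.
                forced = outside[head] + w + sum(inside[t] for t in tails)
                if forced - best <= p:
                    kept[head].append((tails, w))
        # Nodes whose incoming hyperedges were all removed are pruned as well.
        return kept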
The greater the threshold p is, the more parses are encoded in a packed forest. We obtained word alignments of the training data by first running GIZA++ (Och and Ney, 2003) and then applying the refinement rule "grow-diag-final-and" (Koehn et al., 2003).

5.2 Forests Vs. 1-best Trees

Table 3 shows the BLEU scores of tree-based and forest-based tree-to-tree models achieved on the test set over different pruning thresholds. p is the threshold for pruning packed forests, "avg trees" is the average number of trees encoded in one forest on the test set, and "# of rules" is the number of STSG rules used on the test set. We require that both the source and target trees in a tree-to-tree rule contain at most 10 nodes (i.e., the maximal node count n = 10). The 95% confidence intervals were computed using Zhang's significance tester (Zhang et al., 2004).

     p    avg trees     # of rules    BLEU
     0        1            73,614     0.2021 ± 0.0089
     2      238.94        105,214     0.2165 ± 0.0081
     5    5.78 × 10⁶      347,526     0.2336 ± 0.0078
     8    6.59 × 10⁷      573,738     0.2373 ± 0.0082
    10    1.05 × 10⁸      743,211     0.2385 ± 0.0084

Table 3: Comparison of BLEU scores for tree-based and forest-based tree-to-tree models.

We chose five different pruning thresholds in our experiments: p = 0, 2, 5, 8, 10. The forests pruned with p = 0 contained only the 1-best tree per sentence. With the increase of p, the average number of trees encoded in one forest rose dramatically. When p was set to 10, there were over 100M parses encoded in one forest on average.

     p    extraction    decoding
     0       1.26          6.76
     2       2.35          8.52
     5       6.34         14.87
     8       8.51         19.78
    10      10.21         25.81

Table 4: Comparison of rule extraction time (seconds/1000 sentence pairs) and decoding time (seconds/sentence).

Moreover, the more trees are encoded in packed forests, the more rules are made available to forest-based models. The number of rules used when p = 10 was almost 10 times that of p = 0. With the increase of the number of rules used, the BLEU score increased accordingly. This suggests that packed forests enable the tree-to-tree model to learn more useful rules from the training data. However, when a packed forest encodes over 1M parses per sentence, the improvements are less significant, which echoes the results in (Mi et al., 2008).

The forest-based tree-to-tree model outperforms the original model that uses 1-best trees dramatically. The absolute improvement of 3.6 BLEU points (from 0.2021 to 0.2385) is statistically significant at p < 0.01 using the sign-test as described by Collins et al. (2005), with 700(+1), 360(-1), and 15(0). We also ran Moses (Koehn et al., 2007) with its default setting on the same data and obtained a BLEU score of 0.2366, slightly lower than our best result (i.e., 0.2385), but this difference is not statistically significant.

5.3 Effect on Rule Coverage

Figure 4: Coverage of lexicalized STSG rules on bilingual phrases (coverage plotted against maximal node count, for p = 0, 2, 5, 8, 10).

Figure 4 demonstrates the effect of pruning threshold and maximal node count on rule coverage. We extracted phrase pairs from the training data to investigate how many phrase pairs can be captured by lexicalized tree-to-tree rules that contain only terminals. We set the maximal length of phrase pairs to 10. For the tree-based tree-to-tree model, the coverage was below 8% even when the maximal node count was set to 10. This suggests that conventional tree-to-tree models lose over 92% of the linguistically unmotivated mappings due to hard syntactic constraints. The absence of such non-syntactic mappings prevents tree-based tree-to-tree models from achieving results comparable to phrase-based models. With more parses included
The absence of such nonsyntactic mappings prevents tree-based tree-totree models from achieving comparable results to phrase-based models. With more parses included 564 0.09 0.10 0.11 0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20 0 1 2 3 4 5 6 7 8 9 10 11 BLEU maximal node count Figure 5: Effect of maximal node count on BLEU scores. in packed forests, the rule coverage increased accordingly. When p = 10 and n = 10, the coverage was 9.7%, higher than that of p = 0. As a result, packed forests enable tree-to-tree models to capture more useful source-target mappings and therefore improve translation quality. 2 5.4 Training and Decoding Time Table 4 gives the rule extraction time (seconds/1000 sentence pairs) and decoding time (second/sentence) with varying pruning thresholds. We found that the extraction time grew faster than decoding time with the increase of p. One possible reason is that the number of frontier tree pairs (see Figure 3) rose dramatically when more parses were included in packed forests. 5.5 Effect of Maximal Node Count Figure 5 shows the effect of maximal node count on BLEU scores. With the increase of maximal node count, the BLEU score increased dramatically. This implies that allowing tree-to-tree rules to capture larger contexts will strengthen the expressive power of tree-to-tree model. 5.6 Results on Larger Data We also conducted an experiment on larger data to further examine the effectiveness of our approach. We concatenated the small corpus we used above and the FBIS corpus. After removing the sentences that we failed to obtain forests, 2Note that even we used packed forests, the rule coverage was still very low. One reason is that we set the maximal phrase length to 10 words, while an STSG rule with 10 nodes in each tree usually cannot subsume 10 words. the new training corpus contained about 260K sentence pairs with 7.39M Chinese words and 9.41M English words. We set the forest pruning threshold p = 5. Moses obtained a BLEU score of 0.3043 and our forest-based tree-to-tree system achieved a BLEU score of 0.3059. The difference is still not significant statistically. 6 Related Work In machine translation, the concept of packed forest is first used by Huang and Chiang (2007) to characterize the search space of decoding with language models. The first direct use of packed forest is proposed by Mi et al. (2008). They replace 1-best trees with packed forests both in training and decoding and show superior translation quality over the state-of-the-art hierarchical phrasebased system. We follow the same direction and apply packed forests to tree-to-tree translation. Zhang et al. (2008) present a tree-to-tree model that uses STSG. To capture non-syntactic phrases, they apply tree-sequence rules (Liu et al., 2007) to tree-to-tree models. Their extraction algorithm first identifies initial rules and then obtains abstract rules. While this method works for 1-best tree pairs, it cannot be applied to packed forest pairs because it is impractical to enumerate all tree pairs over a phrase pair. While Galley (2004) describes extracting treeto-string rules from 1-best trees, Mi and Huang et al. (2008) go further by proposing a method for extracting tree-to-string rules from aligned foreststring pairs. We follow their work and focus on identifying tree-tree pairs in a forest pair, which is more difficult than the tree-to-string case. 7 Conclusion We have shown how to improve tree-to-tree translation with packed forests, which compactly encode exponentially many parses. 
To learn STSG rules from aligned forest pairs, we first identify minimal rules and then get composed rules. The decoder finds the best derivation that have the source yield of one source tree in the forest. Experiments show that using packed forests in treeto-tree translation results in dramatic improvements over using 1-best trees. Our system also achieves comparable performance with the stateof-the-art phrase-based system Moses. 565 Acknowledgement The authors were supported by National Natural Science Foundation of China, Contracts 60603095 and 60736014, and 863 State Key Project No. 2006AA010108. Part of this work was done while Yang Liu was visiting the SMT group led by Stephan Vogel at CMU. We thank the anonymous reviewers for their insightful comments. Many thanks go to Liang Huang, Haitao Mi, and Hao Xiong for their invaluable help in producing packed forests. We are also grateful to Andreas Zollmann, Vamshi Ambati, and Kevin Gimpel for their helpful feedback. References Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proc. of ACL 2005. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2). Brooke Cowan, Ivona Ku˘cerov´a, and Michael Collins. 2006. A discriminative model for tree-to-tree translation. In Proc. of EMNLP 2006. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proc. of EMNLP 2007. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proc. of ACL 2005. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proc. of ACL 2003 (Companion Volume). Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. of NAACL/HLT 2004. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of COLING/ACL 2006. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proc. of IWPT 2005. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL 2007. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. of AMTA 2006. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proc. of ACL/HLT 2008. Phillip Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL 2003. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL 2007 (demonstration session). Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. of COLING/ACL 2006. Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Proc. of ACL 2007. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. Spmt: Statistical machine translation with syntactified target language phrases. In Proc. of EMNLP 2006. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proc. 
of EMNLP 2008. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proc. of ACL/HLT 2008. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of ACL 2002. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1). Chris Quirk and Simon Corston-Oliver. 2006. The impact of parsing quality on syntactically-informed statistical machine translation. In Proc. of EMNLP 2006. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL/HLT 2008. Deyi Xiong, Shuanglong Li, Qun Liu, and Shouxun Lin. 2005. Parsing the penn chinese treebank with semantic knowledge. In Proc. of IJCNLP 2005. Ying Zhang, Stephan Vogel, and Alex Waibel. 2004. Interpreting bleu/nist scores how much improvement do we need to have a better system? In Proc. of LREC 2004. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In Proc. of ACL/HLT 2008. 566
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 567–575, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Fast Consensus Decoding over Translation Forests John DeNero David Chiang and Kevin Knight Computer Science Division Information Sciences Institute University of California, Berkeley University of Southern California [email protected] {chiang, knight}@isi.edu Abstract The minimum Bayes risk (MBR) decoding objective improves BLEU scores for machine translation output relative to the standard Viterbi objective of maximizing model score. However, MBR targeting BLEU is prohibitively slow to optimize over k-best lists for large k. In this paper, we introduce and analyze an alternative to MBR that is equally effective at improving performance, yet is asymptotically faster — running 80 times faster than MBR in experiments with 1000-best lists. Furthermore, our fast decoding procedure can select output sentences based on distributions over entire forests of translations, in addition to k-best lists. We evaluate our procedure on translation forests from two large-scale, state-of-the-art hierarchical machine translation systems. Our forest-based decoding objective consistently outperforms k-best list MBR, giving improvements of up to 1.0 BLEU. 1 Introduction In statistical machine translation, output translations are evaluated by their similarity to human reference translations, where similarity is most often measured by BLEU (Papineni et al., 2002). A decoding objective specifies how to derive final translations from a system’s underlying statistical model. The Bayes optimal decoding objective is to minimize risk based on the similarity measure used for evaluation. The corresponding minimum Bayes risk (MBR) procedure maximizes the expected similarity score of a system’s translations relative to the model’s distribution over possible translations (Kumar and Byrne, 2004). Unfortunately, with a non-linear similarity measure like BLEU, we must resort to approximating the expected loss using a k-best list, which accounts for only a tiny fraction of a model’s full posterior distribution. In this paper, we introduce a variant of the MBR decoding procedure that applies efficiently to translation forests. Instead of maximizing expected similarity, we express similarity in terms of features of sentences, and choose translations that are similar to expected feature values. Our exposition begins with algorithms over kbest lists. A na¨ıve algorithm for finding MBR translations computes the similarity between every pair of k sentences, entailing O(k2) comparisons. We show that if the similarity measure is linear in features of a sentence, then computing expected similarity for all k sentences requires only k similarity evaluations. Specific instances of this general algorithm have recently been proposed for two linear similarity measures (Tromble et al., 2008; Zhang and Gildea, 2008). However, the sentence similarity measures we want to optimize in MT are not linear functions, and so this fast algorithm for MBR does not apply. For this reason, we propose a new objective that retains the benefits of MBR, but can be optimized efficiently, even for non-linear similarity measures. In experiments using BLEU over 1000best lists, we found that our objective provided benefits very similar to MBR, only much faster. This same decoding objective can also be computed efficiently from forest-based expectations. 
Translation forests compactly encode distributions over much larger sets of derivations and arise naturally in chart-based decoding for a wide variety of hierarchical translation systems (Chiang, 2007; Galley et al., 2006; Mi et al., 2008; Venugopal et al., 2007). The resulting forest-based decoding procedure compares favorably in both complexity and performance to the recently proposed latticebased MBR (Tromble et al., 2008). The contributions of this paper include a lineartime algorithm for MBR using linear similarities, a linear-time alternative to MBR using non-linear similarity measures, and a forest-based extension to this procedure for similarities based on n-gram counts. In experiments, we show that our fast procedure is on average 80 times faster than MBR using 1000-best lists. We also show that using forests outperforms using k-best lists consistently across language pairs. Finally, in the first published multi-system experiments on consensus de567 coding for translation, we demonstrate that benefits can differ substantially across systems. In all, we show improvements of up to 1.0 BLEU from consensus approaches for state-of-the-art largescale hierarchical translation systems. 2 Consensus Decoding Algorithms Let e be a candidate translation for a sentence f, where e may stand for a sentence or its derivation as appropriate. Modern statistical machine translation systems take as input some f and score each derivation e according to a linear model of features: P i λi·θi(f, e). The standard Viterbi decoding objective is to find e∗= arg maxe λ · θ(f, e). For MBR decoding, we instead leverage a similarity measure S(e; e′) to choose a translation using the model’s probability distribution P(e|f), which has support over a set of possible translations E. The Viterbi derivation e∗is the mode of this distribution. MBR is meant to choose a translation that will be similar, on expectation, to any possible reference translation. To this end, MBR chooses ˜e that maximizes expected similarity to the sentences in E under P(e|f):1 ˜e = arg maxe EP(e′|f)  S(e; e′)  = arg maxe X e′∈E P(e′|f) · S(e; e′) MBR can also be interpreted as a consensus decoding procedure: it chooses a translation similar to other high-posterior translations. Minimizing risk has been shown to improve performance for MT (Kumar and Byrne, 2004), as well as other language processing tasks (Goodman, 1996; Goel and Byrne, 2000; Kumar and Byrne, 2002; Titov and Henderson, 2006; Smith and Smith, 2007). The distribution P(e|f) can be induced from a translation system’s features and weights by exponentiating with base b to form a log-linear model: P(e|f) = bλ·θ(f,e) P e′∈E bλ·θ(f,e′) We follow Ehling et al. (2007) in choosing b using a held-out tuning set. For algorithms in this section, we assume that E is a k-best list and b has been chosen already, so P(e|f) is fully specified. 1Typically, MBR is defined as arg mine∈EE[L(e; e′)] for some loss function L, for example 1 −BLEU(e; e′). These definitions are equivalent. 2.1 Minimum Bayes Risk over Sentence Pairs Given any similarity measure S and a k-best list E, the minimum Bayes risk translation can be found by computing the similarity between all pairs of sentences in E, as in Algorithm 1. Algorithm 1 MBR over Sentence Pairs 1: A ←−∞ 2: for e ∈E do 3: Ae ←0 4: for e′ ∈E do 5: Ae ←Ae + P(e′|f) · S(e; e′) 6: if Ae > A then A, ˜e ←Ae, e 7: return ˜e We can sometimes exit the inner for loop early, whenever Ae can never become larger than A (Ehling et al., 2007). 
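As a concrete (and deliberately simplified) rendering of Algorithm 1, the following sketch computes expected similarity for each hypothesis over a k-best list; the hypotheses, their posteriors, and the similarity function are assumed inputs, and the early-exit test implements the shortcut just mentioned under the assumption that the similarity is bounded by 1, as BLEU is:

    def mbr_over_sentence_pairs(hypotheses, similarity):
        # hypotheses: list of (sentence e, posterior P(e|f)); similarity(e, e2) in [0, 1].
        best_score, best_e = float("-inf"), None
        total_mass = sum(p for _, p in hypotheses)
        for e, _ in hypotheses:
            score, remaining = 0.0, total_mass
            for e2, p2 in hypotheses:
                score += p2 * similarity(e, e2)
                remaining -= p2
                # Early exit: even if every remaining similarity were 1.0,
                # this hypothesis could no longer overtake the current best.
                if score + remaining < best_score:
                    break
            if score > best_score:
                best_score, best_e = score, e
        return best_e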
Even with this shortcut, the running time of Algorithm 1 is O(k2 · n), where n is the maximum sentence length, assuming that S(e; e′) can be computed in O(n) time. 2.2 Minimum Bayes Risk over Features We now consider the case when S(e; e′) is a linear function of sentence features. Let S(e; e′) be a function of the form P j ωj(e) · φj(e′), where φj(e′) are real-valued features of e′, and ωj(e) are sentence-specific weights on those features. Then, the MBR objective can be re-written as arg maxe∈E EP(e′|f)  S(e; e′)  = arg maxe X e′∈E P(e′|f) · X j ωj(e) · φj(e′) = arg maxe X j ωj(e) "X e′∈E P(e′|f) · φj(e′) # = arg maxe X j ωj(e) · EP(e′|f)  φj(e′)  . (1) Equation 1 implies that we can find MBR translations by first computing all feature expectations, then applying S only once for each e. Algorithm 2 proceduralizes this idea: lines 1-4 compute feature expectations, and lines 5-11 find the translation with highest S relative to those expectations. The time complexity is O(k · n), assuming the number of non-zero features φ(e′) and weights ω(e) grow linearly in sentence length n and all features and weights can be computed in constant time. 568 Algorithm 2 MBR over Features 1: ¯φ ←[0 for j ∈J] 2: for e′ ∈E do 3: for j ∈J such that φj(e′) ̸= 0 do 4: ¯φj ←¯φj + P(e′|f) · φj(e′) 5: A ←−∞ 6: for e ∈E do 7: Ae ←0 8: for j ∈J such that ωj(e) ̸= 0 do 9: Ae ←Ae + ωj(e) · ¯φj 10: if Ae > A then A, ˜e ←Ae, e 11: return ˜e An example of a linear similarity measure is bag-of-words precision, which can be written as: U(e; e′) = X t∈T1 δ(e, t) |e| · δ(e′, t) where T1 is the set of unigrams in the language, and δ(e, t) is an indicator function that equals 1 if t appears in e and 0 otherwise. Figure 1 compares Algorithms 1 and 2 using U(e; e′). Other linear functions have been explored for MBR, including Taylor approximations to the logarithm of BLEU (Tromble et al., 2008) and counts of matching constituents (Zhang and Gildea, 2008), which are discussed further in Section 3.3. 2.3 Fast Consensus Decoding using Non-Linear Similarity Measures Most similarity measures of interest for machine translation are not linear, and so Algorithm 2 does not apply. Computing MBR even with simple non-linear measures such as BLEU, NIST or bagof-words F1 seems to require O(k2) computation time. However, these measures are all functions of features of e′. That is, they can be expressed as S(e; φ(e′)) for a feature mapping φ : E →Rn. For example, we can express BLEU(e; e′) = exp "„ 1 −|e′| |e| « − + 1 4 4 X n=1 ln P t∈Tn min(c(e, t), c(e′, t)) P t∈Tn c(e, t) # In this expression, BLEU(e; e′) references e′ only via its n-gram count features c(e′, t).2 2The length penalty “ 1 −|e′| |e| ” −is also a function of ngram counts: |e′| = P t∈T1 c(e′, t). The negative part operator (·)−is equivalent to min(·, 0). Choose a distribution P over a set of translations E MBR over Sentence Pairs Compute pairwise similarity Compute expectations Max expected similarity Max feature similarity 3/3 1/4 2/5 1/3 4/4 0/5 2/3 0/4 5/5 MBR over Features E [δ(efficient)] = 0.6 E [δ(forest)] = 0.7 E [δ(decoding)] = 0.7 E [δ(for)] = 0.3 E [δ(rusty)] = 0.3 E [δ(coating)] = 0.3 E [δ(a)] = 0.4 E [δ(fish)] = 0.4 E [δ(ain’t)] = 0.4 c1 c2 c3 r1 r2 r3 1 2 3 2 3 I ... telescope Yo vi al hombre con el telescopio I ... saw the ... man with ... telescope the ... 
telescope 0.4 “saw the” “man with” 0.6 “saw the” 1.0 “man with” E [r(man with)] = 0.4 + 0.6 · 1.0 U(e2; e1) = |efficient| |efficient for rusty coating| EU(e1; e′) = 0.3(1+ 1 3)+0.4· 2 3 = 0.667 EU(e2; e′) = 0.375 EU(e3; e′) = 0.520 U(e1; Eφ) = 0.6+0.7+0.7 3 = 0.667 U(e2; Eφ) = 0.375 U(e3; Eφ) = 0.520 P(e1|f) = 0.3 ; e1 = efficient forest decoding P(e2|f) = 0.3 ; e2 = efficient for rusty coating P(e3|f) = 0.4 ; e3 = A fish ain’t forest decoding Figure 1: For the linear similarity measure U(e; e′), which computes unigram precision, the MBR translation can be found by iterating either over sentence pairs (Algorithm 1) or over features (Algorithm 2). These two algorithms take the same input (step 1), but diverge in their consensus computations (steps 2 & 3). However, they produce identical results for U and any other linear similarity measure. Following the structure of Equation 1, we can choose a translation e based on the feature expectations of e′. In particular, we can choose ˜e = arg maxe∈ES(e; EP(e′|f)  φ(e′)  ). (2) This objective differs from MBR, but has a similar consensus-building structure. We have simply moved the expectation inside the similarity function, just as we did in Equation 1. This new objective can be optimized by Algorithm 3, a procedure that runs in O(k · n) time if the count of non-zero features in e′ and the computation time of S(e; φ(e′)) are both linear in sentence length n. This fast consensus decoding procedure shares the same structure as linear MBR: first we compute feature expectations, then we choose the sentence that is most similar to those expectations. In fact, Algorithm 2 is a special case of Algorithm 3. Lines 7-9 of the former and line 7 of the latter are equivalent for linear S(e; e′). Thus, for any linear similarity measure, Algorithm 3 is an algorithm for minimum Bayes risk decoding. 569 Algorithm 3 Fast Consensus Decoding 1: ¯φ ←[0 for j ∈J] 2: for e′ ∈E do 3: for j ∈J such that φj(e′) ̸= 0 do 4: ¯φj ←¯φj + P(e′|f) · φj(e′) 5: A ←−∞ 6: for e ∈E do 7: Ae ←S(e; ¯φ) 8: if Ae > A then A, ˜e ←Ae, e 9: return ˜e As described, Algorithm 3 can use any similarity measure that is defined in terms of realvalued features of e′. There are some nuances of this procedure, however. First, the precise form of S(e; φ(e′)) will affect the output, but S(e; E[φ(e′)]) is often an input point for which a sentence similarity measure S was not originally defined. For example, our definition of BLEU above will have integer valued φ(e′) for any real sentence e′, but E[φ(e′)] will not be integer valued. As a result, we are extending the domain of BLEU beyond its original intent. One could imagine different feature-based expressions that also produce BLEU scores for real sentences, but produce different values for fractional features. Some care must be taken to define S(e; φ(e′)) to extend naturally from integer-valued to real-valued features. Second, while any similarity measure can in principle be expressed as S(e; φ(e′)) for a sufficiently rich feature space, fast consensus decoding will not apply effectively to all functions. For instance, we cannot naturally use functions that include alignments or matchings between e and e′, such as METEOR (Agarwal and Lavie, 2007) and TER (Snover et al., 2006). 
Though these functions can in principle be expressed in terms of features of e′ (for instance with indicator features for whole sentences), fast consensus decoding will only be effective if different sentences share many features, so that the feature expectations effectively capture trends in the underlying distribution. 3 Computing Feature Expectations We now turn our focus to efficiently computing feature expectations, in service of our fast consensus decoding procedure. Computing feature expectations from k-best lists is trivial, but k-best lists capture very little of the underlying model’s posterior distribution. In place of k-best = 0.667 EU(e2; e′) = 0.375 EU(e3; e′) = 0.520 = 0.667 U(e2; Eφ) = 0.375 U(e3; Eφ) = 0.520 I ... telescope Yo vi al hombre con el telescopio I ... saw the ... man with ... telescope the ... telescope 0.4 “saw the” “man with” 0.6 “saw the” 1.0 “man with” E [c(e, “man with”)] = ! h P(h|f) · c(h, “man with”) = 0.4 · 1 + (0.6 · 1.0) · 1 Figure 2: This translation forest for a Spanish sentence encodes two English parse trees. Hyper-edges (boxes) are annotated with normalized transition probabilities, as well as the bigrams produced by each rule application. The expected count of the bigram “man with” is the sum of posterior probabilities of the two hyper-edges that produce it. In this example, we normalized inside scores at all nodes to 1 for clarity. lists, compact encodings of translation distributions have proven effective for MBR (Zhang and Gildea, 2008; Tromble et al., 2008). In this section, we consider BLEU in particular, for which the relevant features φ(e) are n-gram counts up to length n = 4. We show how to compute expectations of these counts efficiently from translation forests. 3.1 Translation Forests Translation forests compactly encode an exponential number of output translations for an input sentence, along with their model scores. Forests arise naturally in chart-based decoding procedures for many hierarchical translation systems (Chiang, 2007). Exploiting forests has proven a fruitful avenue of research in both parsing (Huang, 2008) and machine translation (Mi et al., 2008). Formally, translation forests are weighted acyclic hyper-graphs. The nodes are states in the decoding process that include the span (i, j) of the sentence to be translated, the grammar symbol s over that span, and the left and right context words of the translation relevant for computing n-gram language model scores.3 Each hyper-edge h represents the application of a synchronous rule r that combines nodes corresponding to non-terminals in 3Decoder states can include additional information as well, such as local configurations for dependency language model scoring. 570 r into a node spanning the union of the child spans and perhaps some additional portion of the input sentence covered directly by r’s lexical items. The weight of h is the incremental score contributed to all translations containing the rule application, including translation model features on r and language model features that depend on both r and the English contexts of the child nodes. Figure 2 depicts a forest. Each n-gram that appears in a translation e is associated with some h in its derivation: the h corresponding to the rule that produces the n-gram. Unigrams are produced by lexical rules, while higherorder n-grams can be produced either directly by lexical rules, or by combining constituents. The n-gram language model score of e similarly decomposes over the h in e that produce n-grams. 
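Before turning to how the hyper-edge posteriors themselves are obtained (next subsection), the accumulation step implied by Figure 2 can be written down directly: each hyper-edge contributes its posterior times the n-grams it produces, and the contributions are summed. The sketch below reproduces the "man with" example from the figure; the data layout and numbers are illustrative.

    from collections import defaultdict

    def expected_ngram_counts(hyperedges):
        # hyperedges: list of (posterior P(h|f), {ngram: count produced by h}).
        # By linearity of expectation, E[c(e, t)] = sum over h of P(h|f) * c(h, t).
        expected = defaultdict(float)
        for posterior, ngram_counts in hyperedges:
            for ngram, count in ngram_counts.items():
                expected[ngram] += posterior * count
        return expected

    # Toy version of Figure 2: "man with" is produced by an edge with posterior 0.4
    # and by an edge whose posterior is the product 0.6 * 1.0.
    edges = [(0.4, {"saw the": 1, "man with": 1}),
             (0.6, {"saw the": 1}),
             (0.6 * 1.0, {"man with": 1})]
    print(expected_ngram_counts(edges)["man with"])  # 0.4 + 0.6 = 1.0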
3.2 Computing Expected N-Gram Counts We can compute expected n-gram counts efficiently from a translation forest by appealing to the linearity of expectations. Let φ(e) be a vector of n-gram counts for a sentence e. Then, φ(e) is the sum of hyper-edge-specific n-gram count vectors φ(h) for all h in e. Therefore, E[φ(e)] = P h∈e E[φ(h)]. To compute n-gram expectations for a hyperedge, we first compute the posterior probability of each h, conditioned on the input sentence f: P(h|f) = X e:h∈e bλ·θ(f,e) ! X e bλ·θ(f,e) !−1 , where e iterates over translations in the forest. We compute the numerator using the inside-outside algorithm, while the denominator is the inside score of the root node. Note that many possible derivations of f are pruned from the forest during decoding, and so this posterior is approximate. The expected n-gram count vector for a hyperedge is E[φ(h)] = P(h|f) · φ(h). Hence, after computing P(h|f) for every h, we need only sum P(h|f) · φ(h) for all h to compute E[φ(e)]. This entire procedure is a linear-time computation in the number of hyper-edges in the forest. To complete forest-based fast consensus decoding, we then extract a k-best list of unique translations from the forest (Huang et al., 2006) and continue Algorithm 3 from line 5, which chooses the ˜e from the k-best list that maximizes BLEU(e; E[φ(e′)]). 3.3 Comparison to Related Work Zhang and Gildea (2008) embed a consensus decoding procedure into a larger multi-pass decoding framework. They focus on inversion transduction grammars, but their ideas apply to richer models as well. They propose an MBR decoding objective of maximizing the expected number of matching constituent counts relative to the model’s distribution. The corresponding constituent-matching similarity measure can be expressed as a linear function of features of e′, which are indicators of constituents. Expectations of constituent indicator features are the same as posterior constituent probabilities, which can be computed from a translation forest using the inside-outside algorithm. This forest-based MBR approach improved translation output relative to Viterbi translations. Tromble et al. (2008) describe a similar approach using MBR with a linear similarity measure. They derive a first-order Taylor approximation to the logarithm of a slightly modified definition of corpus BLEU4, which is linear in n-gram indicator features δ(e′, t) of e′. These features are weighted by n-gram counts c(e, t) and constants θ that are estimated from held-out data. The linear similarity measure takes the following form, where Tn is the set of n-grams: G(e; e′) = θ0|e| + 4 X n=1 X t∈Tn θt · c(e, t) · δ(e′, t). Using G, Tromble et al. (2008) extend MBR to word lattices, which improves performance over k-best list MBR. Our approach differs from Tromble et al. (2008) primarily in that we propose decoding with an alternative to MBR using BLEU, while they propose decoding with MBR using a linear alternative to BLEU. The specifics of our approaches also differ in important ways. First, word lattices are a subclass of forests that have only one source node for each edge (i.e., a graph, rather than a hyper-graph). While forests are more general, the techniques for computing posterior edge probabilities in lattices and forests are similar. One practical difference is that the forests needed for fast consensus decoding are 4The log-BLEU function must be modified slightly to yield a linear Taylor approximation: Tromble et al. 
(2008) replace the clipped n-gram count with the product of an ngram count and an n-gram indicator function. 571 generated already by the decoder of a syntactic translation system. Second, rather than use BLEU as a sentencelevel similarity measure directly, Tromble et al. (2008) approximate corpus BLEU with G above. The parameters θ of the approximation must be estimated on a held-out data set, while our approach requires no such estimation step. Third, our approach is also simpler computationally. The features required to compute G are indicators δ(e′, t); the features relevant to us are counts c(e′, t). Tromble et al. (2008) compute expected feature values by intersecting the translation lattice with a lattices for each n-gram t. By contrast, expectations of c(e′, t) can all be computed with a single pass over the forest. This contrast implies a complexity difference. Let H be the number of hyper-edges in the forest or lattice, and T the number of n-grams that can potentially appear in a translation. Computing indicator expectations seems to require O(H · T) time because of automata intersections. Computing count expectations requires O(H) time, because only a constant number of n-grams can be produced by each hyper-edge. Our approaches also differ in the space of translations from which ˜e is chosen. A linear similarity measure like G allows for efficient search over the lattice or forest, whereas fast consensus decoding restricts this search to a k-best list. However, Tromble et al. (2008) showed that most of the improvement from lattice-based consensus decoding comes from lattice-based expectations, not search: searching over lattices instead of k-best lists did not change results for two language pairs, and improved a third language pair by 0.3 BLEU. Thus, we do not consider our use of k-best lists to be a substantial liability of our approach. Fast consensus decoding is also similar in character to the concurrently developed variational decoding approach of Li et al. (2009). Using BLEU, both approaches choose outputs that match expected n-gram counts from forests, though differ in the details. It is possible to define a similarity measure under which the two approaches are equivalent.5 5For example, decoding under a variational approximation to the model’s posterior that decomposes over bigram probabilities is equivalent to fast consensus decoding with the similarity measure B(e; e′) = Q t∈T2 h c(e′,t) c(e′,h(t)) ic(e,t) , where h(t) is the unigram prefix of bigram t. 4 Experimental Results We evaluate these consensus decoding techniques on two different full-scale state-of-the-art hierarchical machine translation systems. Both systems were trained for 2008 GALE evaluations, in which they outperformed a phrase-based system trained on identical data. 4.1 Hiero: a Hierarchical MT Pipeline Hiero is a hierarchical system that expresses its translation model as a synchronous context-free grammar (Chiang, 2007). No explicit syntactic information appears in the core model. A phrase discovery procedure over word-aligned sentence pairs provides rule frequency counts, which are normalized to estimate features on rules. The grammar rules of Hiero all share a single non-terminal symbol X, and have at most two non-terminals and six total items (non-terminals and lexical items), for example: my X2 ’s X1 →X1 de mi X2 We extracted the grammar from training data using standard parameters. Rules were allowed to span at most 15 words in the training data. 
The log-linear model weights were trained using MIRA, a margin-based optimization procedure that accommodates many features (Crammer and Singer, 2003; Chiang et al., 2008). In addition to standard rule frequency features, we included the distortion and syntactic features described in Chiang et al. (2008). 4.2 SBMT: a Syntax-Based MT Pipeline SBMT is a string-to-tree translation system with rich target-side syntactic information encoded in the translation model. The synchronous grammar rules are extracted from word aligned sentence pairs where the target sentence is annotated with a syntactic parse (Galley et al., 2004). Rules map source-side strings to target-side parse tree fragments, and non-terminal symbols correspond to target-side grammatical categories: (NP (NP (PRP$ my) NN2 (POS ’s)) NNS1) → NNS1 de mi NN2 We extracted the grammar via an array of criteria (Galley et al., 2006; DeNeefe et al., 2007; Marcu et al., 2006). The model was trained using minimum error rate training for Arabic (Och, 2003) and MIRA for Chinese (Chiang et al., 2008). 572 Arabic-English Objective Hiero SBMT Min. Bayes Risk (Alg 1) 2h 47m 12h 42m Fast Consensus (Alg 3) 5m 49s 5m 22s Speed Ratio 29 142 Chinese-English Objective Hiero SBMT Min. Bayes Risk (Alg 1) 10h 24m 3h 52m Fast Consensus (Alg 3) 4m 52s 6m 32s Speed Ratio 128 36 Table 1: Fast consensus decoding is orders of magnitude faster than MBR when using BLEU as a similarity measure. Times only include reranking, not k-best list extraction. 4.3 Data Conditions We evaluated on both Chinese-English and Arabic-English translation tasks. Both ArabicEnglish systems were trained on 220 million words of word-aligned parallel text. For the Chinese-English experiments, we used 260 million words of word-aligned parallel text; the hierarchical system used all of this data, and the syntax-based system used a 65-million word subset. All four systems used two language models: one trained from the combined English sides of both parallel texts, and another, larger, language model trained on 2 billion words of English text (1 billion for Chinese-English SBMT). All systems were tuned on held-out data (1994 sentences for Arabic-English, 2010 sentences for Chinese-English) and tested on another dataset (2118 sentences for Arabic-English, 1994 sentences for Chinese-English). These datasets were drawn from the NIST 2004 and 2005 evaluation data, plus some additional data from the GALE program. There was no overlap at the segment or document level between the tuning and test sets. We tuned b, the base of the log-linear model, to optimize consensus decoding performance. Interestingly, we found that tuning b on the same dataset used for tuning λ was as effective as tuning b on an additional held-out dataset. 4.4 Results over K-Best Lists Taking expectations over 1000-best lists6 and using BLEU7 as a similarity measure, both MBR 6We ensured that k-best lists contained no duplicates. 7To prevent zero similarity scores, we also used a standard smoothed version of BLEU that added 1 to the numerator and denominator of all n-gram precisions. 
Performance results Arabic-English Expectations Similarity Hiero SBMT Baseline 52.0 53.9 104-best BLEU 52.2 53.9 Forest BLEU 53.0 54.0 Forest Linear G 52.3 54.0 Chinese-English Expectations Similarity Hiero SBMT Baseline 37.8 40.6 104-best BLEU 38.0 40.7 Forest BLEU 38.2 40.8 Forest Linear G 38.1 40.8 Table 2: Translation performance improves when computing expected sentences from translation forests rather than 104best lists, which in turn improve over Viterbi translations. We also contrasted forest-based consensus decoding with BLEU and its linear approximation, G. Both similarity measures are effective, but BLEU outperforms G. and our variant provided consistent small gains of 0.0–0.2 BLEU. Algorithms 1 and 3 gave the same small BLEU improvements in each data condition up to three significant figures. The two algorithms differed greatly in speed, as shown in Table 1. For Algorithm 1, we terminated the computation of E[BLEU(e; e′)] for each e whenever e could not become the maximal hypothesis. MBR speed depended on how often this shortcut applied, which varied by language and system. Despite this optimization, our new Algorithm 3 was an average of 80 times faster across systems and language pairs. 4.5 Results for Forest-Based Decoding Table 2 contrasts Algorithm 3 over 104-best lists and forests. Computing E[φ(e′)] from a translation forest rather than a 104-best list improved Hiero by an additional 0.8 BLEU (1.0 over the baseline). Forest-based expectations always outperformed k-best lists, but curiously the magnitude of benefit was not consistent across systems. We believe the difference is in part due to more aggressive forest pruning within the SBMT decoder. For forest-based decoding, we compared two similarity measures: BLEU and its linear Taylor approximation G from section 3.3.8 Table 2 shows were identical to standard BLEU. 8We did not estimate the θ parameters of G ourselves; instead we used the parameters listed in Tromble et al. (2008), which were also estimated for GALE data. We also approximated E[δ(e′, t)] with a clipped expected count 573 50.0 50.2 50.4 50.6 50.8 511,660 513,245 514,830 Total model score for 1000 translations Corpus BLEU 0 20 40 60 80 Hiero SBMT 56.6 61.4 51.1 50.5 N-grams from baseline translations N-grams with high expected count Forest samples (b!2) Forest samples (b!5) Viterbi translations N-gram Precision Figure 3: N-grams with high expected count are more likely to appear in the reference translation that n-grams in the translation model’s Viterbi translation, e∗. Above, we compare the precision, relative to reference translations, of sets of n-grams chosen in two ways. The left bar is the precision of the n-grams in e∗. The right bar is the precision of n-grams with E[c(e, t)] > ρ. To justify this comparison, we chose ρ so that both methods of choosing n-grams gave the same ngram recall: the fraction of n-grams in reference translations that also appeared in e∗or had E[c(e, t)] > ρ. that both similarities were effective, but BLEU outperformed its linear approximation. 4.6 Analysis Forest-based consensus decoding leverages information about the correct translation from the entire forest. In particular, consensus decoding with BLEU chooses translations using n-gram count expectations E[c(e, t)]. Improvements in translation quality should therefore be directly attributable to information in these expected counts. 
We endeavored to test the hypothesis that expected n-gram counts under the forest distribution carry more predictive information than the baseline Viterbi derivation e∗, which is the mode of the distribution. To this end, we first tested the predictive accuracy of the n-grams proposed by e∗: the fraction of the n-grams in e∗that appear in a reference translation. We compared this n-gram precision to a similar measure of predictive accuracy for expected n-gram counts: the fraction of the n-grams t with E[c(e, t)] ≥ρ that appear in a reference. To make these two precisions comparable, we chose ρ such that the recall of reference n-grams was equal. Figure 3 shows that computing n-gram expectations—which sum over translations—improves the model’s ability to predict which n-grams will appear in the reference. min(1, E[c(e′, t)]). Assuming an n-gram appears at most once per sentence, these expressions are equivalent, and this assumption holds for most n-grams. Reference translation: Mubarak said that he received a telephone call from Sharon in which he said he was “ready (to resume negotiations) but the Palestinians are hesitant.” Baseline translation: Mubarak said he had received a telephone call from Sharon told him he was ready to resume talks with the Palestinians. Fast forest-based consensus translation: Mubarak said that he had received a telephone call from Sharon told him that he “was ready to resume the negotiations) , but the Palestinians are hesitant.” Figure 4: Three translations of an example Arabic sentence: its human-generated reference, the translation with the highest model score under Hiero (Viterbi), and the translation chosen by forest-based consensus decoding. The consensus translation reconstructs content lost in the Viterbi translation. We attribute gains from fast consensus decoding to this increased predictive accuracy. Examining the translations chosen by fast consensus decoding, we found that gains in BLEU often arose from improved lexical choice. However, in our hierarchical systems, consensus decoding did occasionally trigger large reordering. We also found examples where the translation quality improved by recovering content that was missing from the baseline translation, as in Figure 4. 5 Conclusion We have demonstrated substantial speed increases in k-best consensus decoding through a new procedure inspired by MBR under linear similarity measures. To further improve this approach, we computed expected n-gram counts from translation forests instead of k-best lists. Fast consensus decoding using forest-based n-gram expectations and BLEU as a similarity measure yielded consistent improvements over MBR with k-best lists, yet required only simple computations that scale linearly with the size of the translation forest. The space of similarity measures is large and relatively unexplored, and the feature expectations that can be computed from forests extend beyond n-gram counts. Therefore, future work may show additional benefits from fast consensus decoding. Acknowledgements This work was supported under DARPA GALE, Contract No. HR0011-06-C-0022. 574 References Abhaya Agarwal and Alon Lavie. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Workshop on Statistical Machine Translation for the Association of Computational Linguistics. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proceedings of the Conference on Empirical Methods in Natural Language Processing and CoNLL. Nicola Ehling, Richard Zens, and Hermann Ney. 2007. Minimum Bayes risk decoding for BLEU. In Proceedings of the Association for Computational Linguistics: Short Paper Track. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT: the North American Chapter of the Association for Computational Linguistics. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the Association for Computational Linguistics. Vaibhava Goel and William Byrne. 2000. Minimum Bayes-risk automatic speech recognition. In Computer, Speech and Language. Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the Association for Computational Linguistics. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proceedings of the Association for Machine Translation in the Americas. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of the Association for Computational Linguistics. Shankar Kumar and William Byrne. 2002. Minimum Bayes-risk word alignments of bilingual texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Zhifei Li, Jason Eisner, and Sanjeev Khudanpur. 2009. Variational decoding for statistical machine translation. In Proceedings of the Association for Computational Linguistics and IJCNLP. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of the Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics. David Smith and Noah Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and CoNLL. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas. Ivan Titov and James Henderson. 2006. Loss minimization in parse reranking. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Roy Tromble, Shankar Kumar, Franz Josef Och, and Wolfgang Macherey. 2008. Lattice minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Ashish Venugopal, Andreas Zollmann, and Stephan Vogel. 2007. An efficient two-pass approach to synchronous-CFG driven statistical MT. In Proceedings of HLT: the North American Association for Computational Linguistics Conference. Hao Zhang and Daniel Gildea. 2008. Efficient multipass decoding for synchronous context free grammars. In Proceedings of the Association for Computational Linguistics. 575
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 576–584, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Joint Decoding with Multiple Translation Models Yang Liu and Haitao Mi and Yang Feng and Qun Liu Key Laboratory of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences P.O. Box 2704, Beijing 100190, China {yliu,htmi,fengyang,liuqun}@ict.ac.cn Abstract Current SMT systems usually decode with single translation models and cannot benefit from the strengths of other models in decoding phase. We instead propose joint decoding, a method that combines multiple translation models in one decoder. Our joint decoder draws connections among multiple models by integrating the translation hypergraphs they produce individually. Therefore, one model can share translations and even derivations with other models. Comparable to the state-of-the-art system combination technique, joint decoding achieves an absolute improvement of 1.5 BLEU points over individual decoding. 1 Introduction System combination aims to find consensus translations among different machine translation systems. It proves that such consensus translations are usually better than the output of individual systems (Frederking and Nirenburg, 1994). Recent several years have witnessed the rapid development of system combination methods based on confusion networks (e.g., (Rosti et al., 2007; He et al., 2008)), which show state-of-theart performance in MT benchmarks. A confusion network consists of a sequence of sets of candidate words. Each candidate word is associated with a score. The optimal consensus translation can be obtained by selecting one word from each set of candidates to maximizing the overall score. While it is easy and efficient to manipulate strings, current methods usually have no access to most information available in decoding phase, which might be useful for obtaining further improvements. In this paper, we propose a framework for combining multiple translation models directly in decoding phase. 1 Based on max-translation decoding and max-derivation decoding used in conventional individual decoders (Section 2), we go further to develop a joint decoder that integrates multiple models on a firm basis: • Structuring the search space of each model as a translation hypergraph (Section 3.1), our joint decoder packs individual translation hypergraphs together by merging nodes that have identical partial translations (Section 3.2). Although such translation-level combination will not produce new translations, it does change the way of selecting promising candidates. • Two models could even share derivations with each other if they produce the same structures on the target side (Section 3.3), which we refer to as derivation-level combination. This method enlarges the search space by allowing for mixing different types of translation rules within one derivation. • As multiple derivations are used for finding optimal translations, we extend the minimum error rate training (MERT) algorithm (Och, 2003) to tune feature weights with respect to BLEU score for max-translation decoding (Section 4). We evaluated our joint decoder that integrated a hierarchical phrase-based model (Chiang, 2005; Chiang, 2007) and a tree-to-string model (Liu et al., 2006) on the NIST 2005 Chinese-English testset. Experimental results show that joint decod1It might be controversial to use the term “model”, which usually has a very precise definition in the field. 
Some researchers prefer to saying “phrase-based approaches” or “phrase-based systems”. On the other hand, other authors (e.g., (Och and Ney, 2004; Koehn et al., 2003; Chiang, 2007)) do use the expression “phrase-based models”. In this paper, we use the term “model” to emphasize that we integrate different approaches directly in decoding phase rather than postprocessing system outputs. 576 S → ⟨X1, X1⟩ X → ⟨fabiao X1, give a X1⟩ X → ⟨yanjiang, talk⟩ Figure 1: A derivation composed of SCFG rules that translates a Chinese sentence “fabiao yanjiang” into an English sentence “give a talk”. ing with multiple models achieves an absolute improvement of 1.5 BLEU points over individual decoding with single models (Section 5). 2 Background Statistical machine translation is a decision problem where we need decide on the best of target sentence matching a source sentence. The process of searching for the best translation is conventionally called decoding, which usually involves sequences of decisions that translate a source sentence into a target sentence step by step. For example, Figure 1 shows a sequence of SCFG rules (Chiang, 2005; Chiang, 2007) that translates a Chinese sentence “fabiao yanjiang” into an English sentence “give a talk”. Such sequence of decisions is called a derivation. In phrase-based models, a decision can be translating a source phrase into a target phrase or reordering the target phrases. In syntax-based models, decisions usually correspond to transduction rules. Often, there are many derivations that are distinct yet produce the same translation. Blunsom et al. (2008) present a latent variable model that describes the relationship between translation and derivation clearly. Given a source sentence f, the probability of a target sentence e being its translation is the sum over all possible derivations: Pr(e|f) = X d∈∆(e,f) Pr(d, e|f) (1) where ∆(e, f) is the set of all possible derivations that translate f into e and d is one such derivation. They use a log-linear model to define the conditional probability of a derivation d and corresponding translation e conditioned on a source sentence f: Pr(d, e|f) = exp P m λmhm(d, e, f) Z(f) (2) where hm is a feature function, λm is the associated feature weight, and Z(f) is a constant for normalization: Z(f) = X e X d∈∆(e,f) exp X m λmhm(d, e, f) (3) A feature value is usually decomposed as the product of decision probabilities: 2 h(d, e, f) = Y d∈d p(d) (4) where d is a decision in the derivation d. Although originally proposed for supporting large sets of non-independent and overlapping features, the latent variable model is actually a more general form of conventional linear model (Och and Ney, 2002). Accordingly, decoding for the latent variable model can be formalized as ˆe = argmax e ( X d∈∆(e,f) exp X m λmhm(d, e, f) ) (5) where Z(f) is not needed in decoding because it is independent of e. Most SMT systems approximate the summation over all possible derivations by using 1-best derivation for efficiency. They search for the 1best derivation and take its target yield as the best translation: ˆe ≈argmax e,d  X m λmhm(d, e, f)  (6) We refer to Eq. (5) as max-translation decoding and Eq. (6) as max-derivation decoding, which are first termed by Blunsom et al. (2008). By now, most current SMT systems, adopting either max-derivation decoding or max-translation decoding, have only used single models in decoding phase. We refer to them as individual decoders. 
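The practical difference between Eq. (5) and Eq. (6) is easy to see on a toy example: max-derivation keeps only the single best-scoring derivation, while max-translation pools the scores of all derivations that yield the same string. A minimal sketch (with made-up scores) follows:

    from collections import defaultdict
    from math import exp

    def max_derivation(derivations):
        # derivations: list of (target yield e, linear score sum_m lambda_m * h_m).
        # Eq. (6): return the yield of the single highest-scoring derivation.
        return max(derivations, key=lambda d: d[1])[0]

    def max_translation(derivations):
        # Eq. (5): sum exp(score) over all derivations sharing a yield
        # (a log-sum-exp would be used in practice to avoid overflow).
        mass = defaultdict(float)
        for e, score in derivations:
            mass[e] += exp(score)
        return max(mass, key=mass.get)

    derivs = [("give a talk", 1.2), ("give a talk", 1.0), ("give talks", 1.5)]
    print(max_derivation(derivs))   # give talks
    print(max_translation(derivs))  # give a talk, since exp(1.2) + exp(1.0) > exp(1.5)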
In the following section, we will present a new method called joint decoding that includes multiple models in one decoder. 3 Joint Decoding There are two major challenges for combining multiple models directly in decoding phase. First, they rely on different kinds of knowledge sources 2There are also features independent of derivations, such as language model and word penalty. 577 S give 0-1 talk 1-2 give a talk 0-2 give talks 0-2 S give 0-1 speech 1-2 give a talk 0-2 make a speech 0-2 S give 0-1 talk 1-2 speech 1-2 give a talk 0-2 give talks 0-2 make a speech 0-2 packing (a) (b) (c) Figure 2: (a) A translation hypergraph produced by one model; (b) a translation hypergraph produced by another model; (c) the packed translation hypergraph based on (a) and (b). Solid and dashed lines denote the translation rules of the two models, respectively. Shaded nodes occur in both (a) and (b), indicating that the two models produce the same translations. and thus need to collect different information during decoding. For example, taking a source parse as input, a tree-to-string decoder (e.g., (Liu et al., 2006)) pattern-matches the source parse with treeto-string rules and produces a string on the target side. On the contrary, a string-to-tree decoder (e.g., (Galley et al., 2006; Shen et al., 2008)) is a parser that applies string-to-tree rules to obtain a target parse for the source string. As a result, the hypothesis structures of the two models are fundamentally different. Second, translation models differ in decoding algorithms. Depending on the generating order of a target sentence, we distinguish between two major categories: left-to-right and bottom-up. Decoders that use rules with flat structures (e.g., phrase pairs) usually generate target sentences from left to right while those using rules with hierarchical structures (e.g., SCFG rules) often run in a bottom-up style. In response to the two challenges, we first argue that the search space of an arbitrary model can be structured as a translation hypergraph, which makes each model connectable to others (Section 3.1). Then, we show that a packed translation hypergraph that integrates the hypergraphs of individual models can be generated in a bottom-up topological order, either integrated at the translation level (Section 3.2) or the derivation level (Section 3.3). 3.1 Translation Hypergraph Despite the diversity of translation models, they all have to produce partial translations for substrings of input sentences. Therefore, we represent the search space of a translation model as a structure called translation hypergraph. Figure 2(a) demonstrates a translation hypergraph for one model, for example, a hierarchical phrase-based model. A node in a hypergraph denotes a partial translation for a source substring, except for the starting node “S”. For example, given the example source sentence 0 fabiao 1 yanjiang 2 the node ⟨“give talks”, [0, 2]⟩in Figure 2(a) denotes that “give talks” is one translation of the source string f 2 1 = “fabiao yanjiang”. The hyperedges between nodes denote the decision steps that produce head nodes from tail nodes. For example, the incoming hyperedge of the node ⟨“give talks”, [0, 2]⟩could correspond to an SCFG rule: X →⟨X1 yanjiang, X1 talks⟩ Each hyperedge is associated with a number of weights, which are the feature values of the corresponding translation rules. A path of hyperedges constitutes a derivation. 
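A minimal data-structure sketch of such a hypergraph, with nodes as (partial translation, span) pairs and hyper-edges as head/tails/weights triples, is given below; the field names and the example weight are illustrative rather than taken from any particular decoder.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass(frozen=True)
    class Node:
        translation: str           # partial translation t
        span: Tuple[int, int]      # source span [i, j]

    @dataclass
    class Hyperedge:
        head: Node                 # consequent node
        tails: List[Node]          # antecedent nodes (empty for purely lexical rules)
        weights: List[float]       # feature values of the rule applied

    @dataclass
    class Hypergraph:
        nodes: List[Node] = field(default_factory=list)
        edges: List[Hyperedge] = field(default_factory=list)

    give = Node("give", (0, 1))
    give_talks = Node("give talks", (0, 2))
    hg = Hypergraph([give, give_talks],
                    [Hyperedge(give_talks, [give], [0.25])])  # weight is made up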
578 Hypergraph Decoding node translation hyperedge rule path derivation Table 1: Correspondence between translation hypergraph and decoding. More formally, a hypergraph (Klein and Manning., 2001; Huang and Chiang, 2005) is a tuple ⟨V, E, R⟩, where V is a set of nodes, E is a set of hyperedges, and R is a set of weights. For a given source sentence f = f n 1 = f1 . . . fn, each node v ∈V is in the form of ⟨t, [i, j]⟩, which denotes the recognition of t as one translation of the source substring spanning from i through j (that is, fi+1 . . . fj). Each hyperedge e ∈E is a tuple e = ⟨tails(e), head(e), w(e)⟩, where head(e) ∈ V is the consequent node in the deductive step, tails(e) ∈V ∗is the list of antecedent nodes, and w(e) is a weight function from R|tails(e)| to R. As a general representation, a translation hypergraph is capable of characterizing the search space of an arbitrary translation model. Furthermore, it offers a graphic interpretation of decoding process. A node in a hypergraph denotes a translation, a hyperedge denotes a decision step, and a path of hyperedges denotes a derivation. A translation hypergraph is formally a semiring as the weight of a path is the product of hyperedge weights and the weight of a node is the sum of path weights. While max-derivation decoding only retains the single best path at each node, max-translation decoding sums up all incoming paths. Table 1 summarizes the relationship between translation hypergraph and decoding. 3.2 Translation-Level Combination The conventional interpretation of Eq. (1) is that the probability of a translation is the sum over all possible derivations coming from the same model. Alternatively, we interpret Eq. (1) as that the derivations could come from different models.3 This forms the theoretical basis of joint decoding. Although the information inside a derivation differs widely among translation models, the beginning and end points (i.e., f and e, respectively) must be identical. For example, a tree-to-string 3The same for all d occurrences in Section 2. For example, ∆(e, f) might include derivations from various models now. Note that we still use Z for normalization. model first parses f to obtain a source tree T(f) and then transforms T(f) to the target sentence e. Conversely, a string-to-tree model first parses f into a target tree T(e) and then takes the surface string e as the translation. Despite different inside, their derivations must begin with f and end with e. This situation remains the same for derivations between a source substring f j i and its partial translation t during joint decoding: Pr(t|f j i ) = X d∈∆(t,fj i ) Pr(d, t|f j i ) (7) where d might come from multiple models. In other words, derivations from multiple models could be brought together for computing the probability of one partial translation. Graphically speaking, joint decoding creates a packed translation hypergraph that combines individual hypergraphs by merging nodes that have identical translations. For example, Figure 2 (a) and (b) demonstrate two translation hypergraphs generated by two models respectively and Figure 2 (c) is the resulting packed hypergraph. The solid lines denote the hyperedges of the first model and the dashed lines denote those of the second model. The shaded nodes are shared by both models. Therefore, the two models are combined at the translation level. Intuitively, shared nodes should be favored in decoding because they offer consensus translations among different models. 
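A small sketch of this node-merging step may help: partial translations proposed by different models for the same span are pooled, derivation scores are summed per translation string as in Eq. (7), and translations produced by several models naturally accumulate more mass. The model identifiers and scores below are invented for illustration.

    from collections import defaultdict

    def merge_nodes(candidates):
        # candidates: (model id, partial translation t, exponentiated derivation score)
        # for one fixed source span [i, j].
        score = defaultdict(float)
        producers = defaultdict(set)
        for model, t, s in candidates:
            score[t] += s              # Eq. (7): sum over derivations from all models
            producers[t].add(model)
        return score, producers

    cands = [("hiero", "give a talk", 0.5),
             ("tree-to-string", "give a talk", 0.4),
             ("hiero", "give talks", 0.6)]
    score, producers = merge_nodes(cands)
    print(max(score, key=score.get))   # give a talk (0.9), shared by both models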
Now the question is how to decode with multiple models jointly in just one decoder. We believe that both left-to-right and bottom-up strategies can be used for joint decoding. Although phrase-based decoders usually produce translations from left to right, they can adopt bottom-up decoding in principle. Xiong et al. (2006) develop a bottom-up decoder for BTG (Wu, 1997) that uses only phrase pairs. They treat reordering of phrases as a binary classification problem. On the other hand, it is possible for syntax-based models to decode from left to right. Watanabe et al. (2006) propose leftto-right target generation for hierarchical phrasebased translation. Although left-to-right decoding might enable a more efficient use of language models and hopefully produce better translations, we adopt bottom-up decoding in this paper just for convenience. Figure 3 demonstrates the search algorithm of our joint decoder. The input is a source language sentence f n 1 , and a set of translation models M 579 1: procedure JOINTDECODING(f n 1 , M) 2: G ←∅ 3: for l ←1 . . . n do 4: for all i, j s.t. j −i = l do 5: for all m ∈M do 6: ADD(G, i, j, m) 7: end for 8: PRUNE(G, i, j) 9: end for 10: end for 11: end procedure Figure 3: Search algorithm for joint decoding. (line 1). After initializing the translation hypergraph G (line 2), the decoder runs in a bottomup style, adding nodes for each span [i, j] and for each model m. For each span [i, j] (lines 3-5), the procedure ADD(G, i, j, m) add nodes generated by the model m to the hypergraph G (line 6). Each model searches for partial translations independently: it uses its own knowledge sources and visits its own antecedent nodes, just running like a bottom-up individual decoder. After all models finishes adding nodes for span [i, j], the procedure PRUNE(G, i, j) merges identical nodes and removes less promising nodes to control the search space (line 8). The pruning strategy is similar to that of individual decoders, except that we require there must exist at least one node for each model to ensure further inference. Although translation-level combination will not offer new translations as compared to single models, it changes the way of selecting promising candidates in a combined search space and might potentially produce better translations than individual decoding. 3.3 Derivation-Level Combination In translation-level combination, different models interact with each other only at the nodes. The derivations of one model are unaccessible to other models. However, if two models produce the same structures on the target side, it is possible to combine two models within one derivation, which we refer to as derivation-level combination. For example, although different on the source side, both hierarchical phrase-based and tree-tostring models produce strings of terminals and nonterminals on the target side. Figure 4 shows a derivation composed of both hierarchical phrase IP(x1:VV, x2:NN) → x1 x2 X → ⟨fabiao, give⟩ X → ⟨yanjiang, a talk⟩ Figure 4: A derivation composed of both SCFG and tree-to-string rules. pairs and tree-to-string rules. Hierarchical phrase pairs are used for translating smaller units and tree-to-string rules for bigger ones. It is appealing to combine them in such a way because the hierarchical phrase-based model provides excellent rule coverage while the tree-to-string model offers linguistically motivated non-local reordering. 
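Returning to the search procedure of Figure 3, the following sketch spells out the bottom-up loop in Python. Real member decoders would do rule matching and feature scoring inside `proposals`; here each model is a stand-in function, chart entries keep only a single score, and the helper names, beam size, and toy models are illustrative assumptions, so this shows the control flow rather than the authors' decoder.

```python
def joint_decode(source, models, beam_size=20):
    """Bottom-up joint decoding over a shared chart.  `models` maps a model name
    to a function proposals(chart, i, j, source) that returns (translation, score)
    candidates for the span [i, j]."""
    n = len(source)
    chart = {}   # (i, j) -> {translation: {"score": best score, "models": proposers}}
    for length in range(1, n + 1):              # smallest spans first
        for i in range(0, n - length + 1):
            j = i + length
            cell = chart.setdefault((i, j), {})
            for name, proposals in models.items():   # ADD(G, i, j, m)
                for translation, score in proposals(chart, i, j, source):
                    entry = cell.setdefault(translation,
                                            {"score": float("-inf"), "models": set()})
                    entry["score"] = max(entry["score"], score)  # identical nodes merge
                    entry["models"].add(name)
            prune_cell(cell, beam_size, models)      # PRUNE(G, i, j)
    return chart

def prune_cell(cell, beam_size, models):
    """Keep the most promising merged nodes, but retain at least one node per
    model so that every decoder can keep building on this cell."""
    ranked = sorted(cell, key=lambda t: cell[t]["score"], reverse=True)
    keep = set(ranked[:beam_size])
    for name in models:
        best = next((t for t in ranked if name in cell[t]["models"]), None)
        if best is not None:
            keep.add(best)
    for t in ranked:
        if t not in keep:
            del cell[t]

# Toy usage: two stand-in "decoders" that propose one translation per span.
toy = {"m1": lambda chart, i, j, src: [(" ".join(src[i:j]), -1.0 * (j - i))],
       "m2": lambda chart, i, j, src: [(" ".join(src[i:j]), -0.5 * (j - i))]}
chart = joint_decode(["fabiao", "yanjiang"], toy, beam_size=5)
```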
Similarly, Blunsom and Osborne (2008) use both hierarchical phrase pairs and tree-to-string rules in decoding, where source parse trees serve as conditioning context rather than hard constraints. Depending on the target side output, we distinguish between string-targeted and tree-targeted models. String-targeted models include phrasebased, hierarchical phrase-based, and tree-tostring models. Tree-targeted models include string-to-tree and tree-to-tree models. All models can be combined at the translation level. Models that share with same target output structure can be further combined at the derivation level. The joint decoder usually runs as maxtranslation decoding because multiple derivations from various models are used. However, if all models involved belong to the same category, a joint decoder can also adopt the max-derivation fashion because all nodes and hyperedges are accessible now (Section 5.2). Allowing derivations for comprising rules from different models and integrating their strengths, derivation-level combination could hopefully produce new and better translations as compared with single models. 4 Extended Minimum Error Rate Training Minimum error rate training (Och, 2003) is widely used to optimize feature weights for a linear model (Och and Ney, 2002). The key idea of MERT is to tune one feature weight to minimize error rate each time while keep others fixed. Therefore, each 580 x f(x) t1 t2 t3 (0, 0) x1 x2 Figure 5: Calculation of critical intersections. candidate translation can be represented as a line: f(x) = a × x + b (8) where a is the feature value of current dimension, x is the feature weight being tuned, and b is the dotproduct of other dimensions. The intersection of two lines is where the candidate translation will change. Instead of computing all intersections, Och (2003) only computes critical intersections where highest-score translations will change. This method reduces the computational overhead significantly. Unfortunately, minimum error rate training cannot be directly used to optimize feature weights of max-translation decoding because Eq. (5) is not a linear model. However, if we also tune one dimension each time and keep other dimensions fixed, we obtain a monotonic curve as follows: f(x) = K X k=1 eak×x+bk (9) where K is the number of derivations for a candidate translation, ak is the feature value of current dimension on the kth derivation and bk is the dotproduct of other dimensions on the kth derivation. If we restrict that ak is always non-negative, the curve shown in Eq. (9) will be a monotonically increasing function. Therefore, it is possible to extend the MERT algorithm to handle situations where multiple derivations are taken into account for decoding. The key difference is the calculation of critical intersections. The major challenge is that two curves might have multiple intersections while two lines have at most one intersection. Fortunately, as the curve is monotonically increasing, we need only to find the leftmost intersection of a curve with other curves that have greater values after the intersection as a candidate critical intersection. Figure 5 demonstrates three curves: t1, t2, and t3. Suppose that the left bound of x is 0, we compute the function values for t1, t2, and t3 at x = 0 and find that t3 has the greatest value. As a result, we choose x = 0 as the first critical intersection. 
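Before the walkthrough of Figure 5 continues, a small sketch may help to picture the objects involved: Eq. (9) evaluated for each candidate translation at a given weight value, and the selection of the highest-valued curve at the left bound as the first critical intersection. The (a_k, b_k) pairs and the left bound below are made-up numbers, not values from the paper.

```python
import math

def curve_value(x, derivations):
    """Eq. (9): f(x) = sum_k exp(a_k * x + b_k).  a_k is the feature value of the
    dimension being tuned, b_k the dot-product of the fixed dimensions; with
    a_k >= 0 the curve is monotonically increasing in x."""
    return sum(math.exp(a * x + b) for a, b in derivations)

# Three candidate translations, each with the (a_k, b_k) pairs of its derivations.
candidates = {
    "t1": [(0.2, -1.0), (0.1, -0.5)],
    "t2": [(0.5, -2.0)],
    "t3": [(0.9, 0.5), (0.4, -0.5)],
}

# At the left bound of the tuning range, the candidate whose curve has the
# greatest value is chosen, giving the first critical intersection
# (the analogue of picking t3 at x = 0 in Figure 5).
left_bound = 0.0
first_choice = max(candidates, key=lambda t: curve_value(left_bound, candidates[t]))
assert first_choice == "t3"
```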
Then, we compute the leftmost intersections of t3 with t1 and t2 and choose the intersection closest to x = 0, that is x1, as our new critical intersection. Similarly, we start from x1 and find x2 as the next critical intersection. This iteration continues until it reaches the right bound. The bold curve denotes the translations we will choose over different ranges. For example, we will always choose t2 for the range [x1, x2]. To compute the leftmost intersection of two curves, we divide the range from current critical intersection to the right bound into many bins (i.e., smaller ranges) and search the bins one by one from left to right. We assume that there is at most one intersection in each bin. As a result, we can use the Bisection method for finding the intersection in each bin. The search process ends immediately once an intersection is found. We divide max-translation decoding into three phases: (1) build the translation hypergraphs, (2) generate n-best translations, and (3) generate n′best derivations. We apply Algorithm 3 of Huang and Chiang (2005) for n-best list generation. Extended MERT runs on n-best translations plus n′best derivations to optimize the feature weights. Note that feature weights of various models are tuned jointly in extended MERT. 5 Experiments 5.1 Data Preparation Our experiments were on Chinese-to-English translation. We used the FBIS corpus (6.9M + 8.9M words) as the training corpus. For language model, we used the SRI Language Modeling Toolkit (Stolcke, 2002) to train a 4-gram model on the Xinhua portion of GIGAWORD corpus. We used the NIST 2002 MT Evaluation test set as our development set, and used the NIST 2005 test set as test set. We evaluated the translation quality using case-insensitive BLEU metric (Papineni et al., 2002). Our joint decoder included two models. The 581 Max-derivation Max-translation Model Combination Time BLEU Time BLEU hierarchical N/A 40.53 30.11 44.87 29.82 tree-to-string N/A 6.13 27.23 6.69 27.11 translation N/A N/A 55.89 30.79 both derivation 48.45 31.63 54.91 31.49 Table 2: Comparison of individual decoding and joint decoding on average decoding time (seconds/sentence) and BLEU score (case-insensitive). first model was the hierarchical phrase-based model (Chiang, 2005; Chiang, 2007). We obtained word alignments of training data by first running GIZA++ (Och and Ney, 2003) and then applying the refinement rule “grow-diag-final-and” (Koehn et al., 2003). About 2.6M hierarchical phrase pairs extracted from the training corpus were used on the test set. Another model was the tree-to-string model (Liu et al., 2006; Liu et al., 2007). Based on the same word-aligned training corpus, we ran a Chinese parser on the source side to obtain 1-best parses. For 15,157 sentences we failed to obtain 1-best parses. Therefore, only 93.7% of the training corpus were used by the tree-to-string model. About 578K tree-to-string rules extracted from the training corpus were used on the test set. 5.2 Individual Decoding Vs. Joint Decoding Table 2 shows the results of comparing individual decoding and joint decoding on the test set. With conventional max-derivation decoding, the hierarchical phrase-based model achieved a BLEU score of 30.11 on the test set, with an average decoding time of 40.53 seconds/sentence. We found that accounting for all possible derivations in maxtranslation decoding resulted in a small negative effect on BLEU score (from 30.11 to 29.82), even though the feature weights were tuned with respect to BLEU score. 
One possible reason is that we only used n-best derivations instead of all possible derivations for minimum error rate training. Max-derivation decoding with the tree-to-string model yielded much lower BLEU score (i.e., 27.23) than the hierarchical phrase-based model. One reason is that the tree-to-string model fails to capture a large amount of linguistically unmotivated mappings due to syntactic constraints. Another reason is that the tree-to-string model only used part of the training data because of parsing failure. Similarly, accounting for all possible 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 0 1 2 3 4 5 6 7 8 9 10 11 percentage span width Figure 6: Node sharing in max-translation decoding with varying span widths. We retain at most 100 nodes for each source substring for each model. derivations in max-translation decoding failed to bring benefits for the tree-to-string model (from 27.23 to 27.11). When combining the two models at the translation level, the joint decoder achieved a BLEU score of 30.79 that outperformed the best result (i.e., 30.11) of individual decoding significantly (p < 0.05). This suggests that accounting for all possible derivations from multiple models will help discriminate among candidate translations. Figure 6 demonstrates the percentages of nodes shared by the two models over various span widths in packed translation hypergraphs during maxtranslation decoding. For one-word source strings, 89.33% nodes in the hypergrpah were shared by both models. With the increase of span width, the percentage decreased dramatically due to the diversity of the two models. However, there still exist nodes shared by two models even for source substrings that contain 33 words. When combining the two models at the derivation level using max-derivation decoding, the joint decoder achieved a BLEU score of 31.63 that outperformed the best result (i.e., 30.11) of individ582 Method Model BLEU hierarchical 30.11 individual decoding tree-to-string 27.23 system combination both 31.50 joint decoding both 31.63 Table 3: Comparison of individual decoding, system combination, and joint decoding. ual decoding significantly (p < 0.01). This improvement resulted from the mixture of hierarchical phrase pairs and tree-to-string rules. To produce the result, the joint decoder made use of 8,114 hierarchical phrase pairs learned from training data, 6,800 glue rules connecting partial translations monotonically, and 16,554 tree-to-string rules. While tree-to-string rules offer linguistically motivated non-local reordering during decoding, hierarchical phrase pairs ensure good rule coverage. Max-translation decoding still failed to surpass max-derivation decoding in this case. 5.3 Comparison with System Combination We re-implemented a state-of-the-art system combination method (Rosti et al., 2007). As shown in Table 3, taking the translations of the two individual decoders as input, the system combination method achieved a BLEU score of 31.50, slightly lower than that of joint decoding. But this difference is not significant statistically. 5.4 Individual Training Vs. Joint Training Table 4 shows the effects of individual training and joint training. By individual, we mean that the two models are trained independently. We concatenate and normalize their feature weights for the joint decoder. By joint, we mean that they are trained together by the extended MERT algorithm. 
We found that joint training outperformed individual training significantly for both max-derivation decoding and max-translation decoding. 6 Related Work System combination has benefited various NLP tasks in recent years, such as products-of-experts (e.g., (Smith and Eisner, 2005)) and ensemblebased parsing (e.g., (Henderson and Brill, 1999)). In machine translation, confusion-network based combination techniques (e.g., (Rosti et al., 2007; He et al., 2008)) have achieved the state-of-theart performance in MT evaluations. From a difTraining Max-derivation Max-translation individual 30.70 29.95 joint 31.63 30.79 Table 4: Comparison of individual training and joint training. ferent perspective, we try to combine different approaches directly in decoding phase by using hypergraphs. While system combination techniques manipulate only the final translations of each system, our method opens the possibility of exploiting much more information. Blunsom et al. (2008) first distinguish between max-derivation decoding and max-translation decoding explicitly. They show that max-translation decoding outperforms max-derivation decoding for the latent variable model. While they train the parameters using a maximum a posteriori estimator, we extend the MERT algorithm (Och, 2003) to take the evaluation metric into account. Hypergraphs have been successfully used in parsing (Klein and Manning., 2001; Huang and Chiang, 2005; Huang, 2008) and machine translation (Huang and Chiang, 2007; Mi et al., 2008; Mi and Huang, 2008). Both Mi et al. (2008) and Blunsom et al. (2008) use a translation hypergraph to represent search space. The difference is that their hypergraphs are specifically designed for the forest-based tree-to-string model and the hierarchical phrase-based model, respectively, while ours is more general and can be applied to arbitrary models. 7 Conclusion We have presented a framework for including multiple translation models in one decoder. Representing search space as a translation hypergraph, individual models are accessible to others via sharing nodes and even hyperedges. As our decoder accounts for multiple derivations, we extend the MERT algorithm to tune feature weights with respect to BLEU score for max-translation decoding. In the future, we plan to optimize feature weights for max-translation decoding directly on the entire packed translation hypergraph rather than on n-best derivations, following the latticebased MERT (Macherey et al., 2008). 583 Acknowledgement The authors were supported by National Natural Science Foundation of China, Contracts 60873167 and 60736014, and 863 State Key Project No. 2006AA010108. Part of this work was done while Yang Liu was visiting the SMT group led by Stephan Vogel at CMU. We thank the anonymous reviewers for their insightful comments. We are also grateful to Yajuan L¨u, Liang Huang, Nguyen Bach, Andreas Zollmann, Vamshi Ambati, and Kevin Gimpel for their helpful feedback. References Phil Blunsom and Mile Osborne. 2008. Probabilistic inference for machine translation. In Proc. of EMNLP08. Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In Proc. of ACL08. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL05. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2). Robert Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proc. of ANLP94. 
Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of ACL06. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-HMMbased hypothesis alignment for combining outputs from machine translation systems. In Proc. of EMNLP08. John C. Henderson and Eric Brill. 1999. Exploiting diversity in natural language processing: Combining parsers. In Proc. of EMNLP99. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proc. of IWPT05. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL07. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proc. of ACL08. Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In Proc. of ACL08. Phillip Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL03. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proc. of ACL06. Yang Liu, Yun Huang, Qun Liu, and Shouxun Lin. 2007. Forest-to-string statistical translation rules. In Proc. of ACL07. Wolfgang Macherey, Franz J. Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Proc. of EMNLP08. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proc. of EMNLP08. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proc. of ACL08. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of ACL02. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1). Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4). Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL03. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL02. Antti-Veikko Rosti, Spyros Matsoukas, and Richard Schwartz. 2007. Improved word-level system combination for machine translation. In Proc. of ACL07. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL08. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. of ACL05. Andreas Stolcke. 2002. Srilm - an extension language model modeling toolkit. In Proc. of ICSLP02. Taro Watanabe, Hajime Tsukada, and Hideki Isozaki. 2006. Left-to-right target generation for hierarchical phrase-based translation. In Proc. of ACL06. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proc. of ACL06. 584
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 585–592, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Collaborative Decoding: Partial Hypothesis Re-ranking Using Translation Consensus between Decoders Mu Li1, Nan Duan2, Dongdong Zhang1, Chi-Ho Li1, Ming Zhou1 1Microsoft Research Asia 2Tianjin University Beijing, China Tianjin, China {muli,v-naduan,dozhang,chl,mingzhou}@microsoft.com Abstract This paper presents collaborative decoding (co-decoding), a new method to improve machine translation accuracy by leveraging translation consensus between multiple machine translation decoders. Different from system combination and MBR decoding, which postprocess the n-best lists or word lattice of machine translation decoders, in our method multiple machine translation decoders collaborate by exchanging partial translation results. Using an iterative decoding approach, n-gram agreement statistics between translations of multiple decoders are employed to re-rank both full and partial hypothesis explored in decoding. Experimental results on data sets for NIST Chinese-to-English machine translation task show that the co-decoding method can bring significant improvements to all baseline decoders, and the outputs from co-decoding can be used to further improve the result of system combination. 1 Introduction Recent research has shown substantial improvements can be achieved by utilizing consensus statistics obtained from outputs of multiple machine translation systems. Translation consensus can be measured either at sentence level or at word level. For example, Minimum Bayes Risk (MBR) (Kumar and Byrne, 2004) decoding over n-best list tries to find a hypothesis with lowest expected loss with respect to all the other translations, which can be viewed as sentence-level consensus-based decoding. Word based methods proposed range from straightforward consensus voting (Bangalore et al., 2001; Matusov et al., 2006) to more complicated word-based system combination model (Rosti et al., 2007; Sim et al., 2007). Typically, the resulting systems take outputs of individual machine translation systems as input, and build a new confusion network for second-pass decoding. There have been many efforts dedicated to advance the state-of-the-art performance by combining multiple systems’ outputs. Most of the work focused on seeking better word alignment for consensus-based confusion network decoding (Matusov et al., 2006) or word-level system combination (He et al., 2008; Ayan et al., 2008). In addition to better alignment, Rosti et al. (2008) introduced an incremental strategy for confusion network construction; and Hildebrand and Vogel (2008) proposed a hypotheses reranking model for multiple systems’ outputs with more features including word translation probability and n-gram agreement statistics. A common property of all the work mentioned above is that the combination models work on the basis of n-best translation lists (full hypotheses) of existing machine translation systems. However, the n-best list only presents a very small portion of the entire search space of a Statistical Machine Translation (SMT) model while a majority of the space, within which there are many potentially good translations, is pruned away in decoding. In fact, due to the limitations of present-day computational resources, a considerable number of promising possibilities have to be abandoned at the early stage of the decoding process. 
It is therefore expected that exploring additional possibilities beyond n-best hypotheses lists for full sentences could bring improvements to consensus-based decoding. In this paper, we present collaborative decoding (or co-decoding), a new SMT decoding scheme to leverage consensus information between multiple machine translation systems. In this scheme, instead of using a post-processing step, multiple machine translation decoders collaborate during the decoding process, and translation consensus statistics are taken into account to improve ranking not only for full translations, but also for partial hypotheses. In this way, we 585 expect to reduce search errors caused by partial hypotheses pruning, maximize the contribution of translation consensus, and result in better final translations. We will discuss the general co-decoding model, requirements for decoders that enable collaborative decoding and describe the updated model structures. We will present experimental results on the data sets of NIST Chinese-to-English machine translation task, and demonstrate that co-decoding can bring significant improvements to baseline systems. We also conduct extensive investigations when different settings of codecoding are applied, and make comparisons with related methods such as word-level system combination of hypothesis selection from multiple n-best lists. The rest of the paper is structured as follows. Section 2 gives a formal description of the codecoding model, the strategy to apply consensus information and hypotheses ranking in decoding. In Section 3, we make detailed comparison between co-decoding and related work such as system combination and hypotheses selection out of multiple systems. Experimental results and discussions are presented in Section 4. Section 5 concludes the paper. 2 Collaborative Decoding 2.1 Overview Collaborative decoding does not present a full SMT model as other SMT decoders do such as Pharaoh (Koehn, 2004) or Hiero (Chiang, 2005). Instead, it provides a framework that accommodates and coordinates multiple MT decoders. Conceptually, collaborative decoding incorporates the following four constituents: 1. Co-decoding model. A co-decoding model consists of a set of member models, which are a set of augmented baseline models. We call decoders based on member models member decoders, and those based on baseline models baseline decoders. In our work, any Maximum A Posteriori (MAP) SMT model with log-linear formulation (Och, 2002) can be a qualified candidate for a baseline model. The requirement for a loglinear model aims to provide a natural way to integrate the new co-decoding features. 2. Co-decoding features. Member models are built by adding additional translation consensus -based co-decoding features to baseline models. A baseline model can be viewed as a special case of member model with all codecoding feature values set to 0. Accordingly, a baseline decoder can be viewed as a special setting of a member decoder. 3. Decoder coordinating. In co-decoding, each member decoder cannot proceed solely based on its own agenda. To share consensus statistics with others, the decoding must be performed in a coordinated way. 4. Model training. Since we use multiple interrelated decoders and introduce more features in member models, we also need to address the parameter estimation issue in the framework of co-decoding. 
In the following sub-sections we first establish a general model for co-decoding, and then present details of feature design and decoder implementation, as well as parameter estimation in the codecoding framework. We leave the investigation of using specific member models to the experiment section. 2.2 Generic Collaborative Decoding Model For a given source sentence f, a member model in co-decoding finds the best translation 𝑒∗ among the set of possible candidate translations ℋ(𝑓) based on a scoring function 𝐹: 𝑒∗= argmax𝑒∈ℋ(𝑓)𝐹(𝑒) (1) In the following, we will use 𝑑𝑘 to denote the 𝑘𝑡ℎ member decoder, and also use the notation ℋ𝑘(𝑓) for the translation hypothesis space of f determined by 𝑑𝑘. The 𝑚𝑡ℎ member model can be written as: 𝐹𝑚 𝑒 = Φ𝑚(𝑓, 𝑒) + Ψ𝑘(𝑒, ℋ𝑘(𝑓)) 𝑘,𝑘≠𝑚 (2) where Φ𝑚(𝑓, 𝑒) is the score function of the 𝑚𝑡ℎ baseline model, and each Ψ𝑘(𝑒, ℋ𝑘(𝑓)) is a partial consensus score function with respect to 𝑑𝑘 and is defined over e and ℋ𝑘 𝑓 : Ψ𝑘 𝑒, ℋ𝑘 𝑓 = 𝜆𝑘,𝑙 ℎ𝑘,𝑙(𝑒, ℋ𝑘 𝑓 ) 𝑙 (3) where each ℎ𝑘,𝑙(𝑒, ℋ𝑘 𝑓 ) is a feature function based on a consensus measure between e and ℋ𝑘 𝑓 , and 𝜆𝑘,𝑙 is the corresponding feature weight. Feature index l ranges over all consensus-based features in Equation 3. 2.3 Decoder Coordination Before discussing the design and computation of translation consensus -based features, we first 586 describe the multiple decoder coordination issue in co-decoding. Note that in Equation 2, though the baseline score function Φ𝑚 𝑓, 𝑒 can be computed inside each decoder, the case of Ψ𝑘(𝑒, ℋ𝑘(𝑓)) is more complicated. Because usually it is not feasible to enumerate the entire hypothesis space for machine translation, we approximate ℋ𝑘 𝑓 with n-best hypotheses by convention. Then there is a circular dependency between co-decoding features and ℋ𝑘(𝑓) : on one hand, searching for n-best approximation of ℋ𝑘(𝑓) requires using Equation 2 to select topranked hypotheses; while on the other hand, Equation 2 cannot be computed until every ℋ𝑘(𝑓) is available. We address this issue by employing a bootstrapping method, in which the key idea is that we can use baseline models’ n-best hypotheses as seeds, and iteratively refine member models’ n-best hypotheses with co-decoding. Similar to a typical phrase-based decoder (Koehn, 2004), we associate each hypothesis with a coverage vector c to track translated source words in it. We will use ℋ𝑘(𝑐, 𝑓) for the set of hypotheses associated with c, and we also denote with ℋ𝑘(𝑓) = ℋ𝑘(𝑐, 𝑓) 𝑐 the set of all hypotheses generated by member decoder 𝑑𝑘 in decoding. The codecoding process can be described as follows: 1. For each member decoder 𝑑𝑘, perform decoding with a baseline model, and memorize all translation hypotheses generated during decoding in ℋ𝑘(𝑓); 2. Re-group translation hypotheses in ℋ𝑘(𝑓) into a set of buckets ℋ𝑘 𝑐, 𝑓 by the coverage vector c associated with each hypothesis; 3. Use member decoders to re-decode source sentence 𝑓 with member models. For member decoder 𝑑𝑘, consensus-based features of any hypotheses associated with coverage vector c are computed based on current setting of ℋ𝑠 𝑐, 𝑓 for all s but k. New hypotheses generated by 𝑑𝑘 in re-decoding are cached in ℋ𝑘 ′ (𝑓); 4. Update all ℋ𝑘(𝑓) with ℋ𝑘 ′ (𝑓); 5. Iterate from step 2 to step 4 until a preset iteration limit is reached. In the iterative decoding procedure described above, hypotheses of different decoders can be mutually improved. 
For example, given two decoders 𝑑1 and 𝑑2 with hypotheses sets ℋ1 and ℋ2, improvements on ℋ1 enable 𝑑2 to improve ℋ2, and in turn ℋ1 benefits from improved ℋ2, and so forth. Step 2 is used to facilitate the computation of feature functions ℎ𝑘,𝑙(𝑒, ℋ𝑘 ∙ ), which require both e and every hypothesis in ℋ𝑘 ∙ should be translations of the same set of source words. This step seems to be redundant for CKY-style MT decoders (Liu et al., 2006; Xiong et al., 2006; Chiang, 2005) since the grouping is immediately available from decoders because all hypotheses spanning the same range of source sentence have been stacked together in the same chart cell. But to be a general framework, this step is necessary for some state-of-the-art phrase-based decoders (Koehn, 2007; Och and Ney, 2004) because in these decoders, hypotheses with different coverage vectors can co-exist in the same bin, or hypotheses associated with the same coverage vector might appear in different bins. Note that a member model does not enlarge the theoretical search space of its baseline model, the only change is hypothesis scoring. By rerunning a complete decoding process, member model can be applied to re-score all hypotheses explored by a decoder. Therefore step 3 can be viewed as full-scale hypothesis re-ranking because the re-ranking scope is beyond the limited n-best hypotheses currently cached in ℋ𝑘. In the implementation of member decoders, there are two major modifications compared to their baseline decoders. One is the support for co-decoding features, including computation of feature values and the use of augmented codecoding score function (Equation 2) for hypothesis ranking and pruning. The other is hypothesis grouping based on coverage vector and a mechanism to effectively access grouped hypotheses in step 2 and step 3. 2.4 Co-decoding Features We now present the consensus-based feature functions ℎ𝑘,𝑙(𝑒, ℋ𝑘 𝑓 ) introduced in Equation 3. In this work all the consensus-based features have the following formulation: ℎ𝑘,𝑙 𝑒, ℋ𝑘 𝑓 = 𝑃 𝑒′ 𝑑𝑘 𝐺𝑙(𝑒, 𝑒′) 𝑒′∈ℋ𝑘 𝑓 (4) where e is a translation of f by decoder 𝑑𝑚(𝑚≠ 𝑘), 𝑒′ is a translation in ℋ𝑘 𝑓 and 𝑃 𝑒′ 𝑑𝑘 is the posterior probability of translation 𝑒′ determined by decoder 𝑑𝑘 given source sentence f. 𝐺𝑙(𝑒, 𝑒′) is a consensus measure defined on e and 𝑒′, by varying which different feature functions can be obtained. 587 Referring to the log-linear model formulation, the translation posterior 𝑃 𝑒′ 𝑑𝑘 can be computed as: 𝑃 𝑒′ 𝑑𝑘 = exp 𝛼𝐹𝑘 𝑒′ exp 𝛼𝐹𝑘 𝑒′′ 𝑒′′ ∈ℋ𝑘 𝑓 (5) where 𝐹𝑘(∙) is the score function given in Equation 2, and 𝛼 is a scaling factor following the work of Tromble et al. (2008) To compute the consensus measures, we further decompose each 𝐺𝑙 𝑒, 𝑒′ into n-gram matching statistics between e and 𝑒′. Here we do not discriminate among different lexical n-grams and are only concerned with statistics aggregation of all n-grams of the same order. For each ngram of order n, we introduce a pair of complementary consensus measure functions 𝐺𝑛+ 𝑒, 𝑒′ and 𝐺𝑛− 𝑒, 𝑒′ described as follows: 𝐺𝑛+ 𝑒, 𝑒′ is the n-gram agreement measure function which counts the number of occurrences in 𝑒′of n-grams in e. So the corresponding feature value will be the expected number of occurrences in ℋ𝑘 𝑓 of all n-grams in e: 𝐺𝑛+ 𝑒, 𝑒′ = 𝜏(𝑒𝑖 𝑖+𝑛−1, 𝑒′) 𝑒 −𝑛+1 𝑖=1 where 𝜏(∙,∙) is a binary indicator function – 𝜏 𝑒𝑖 𝑖+𝑛−1, 𝑒′ is 1 if the n-gram 𝑒𝑖 𝑖+𝑛−1 occurs in 𝑒′ and 0 otherwise. 
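As a concrete illustration of Eq. (4) with the n-gram agreement measure just defined (the complementary disagreement measure introduced next can be derived from the same counts), the sketch below computes the expected agreement of one decoder's hypothesis against another decoder's hypothesis set. The helper names and the toy posteriors are illustrative assumptions, not the authors' code.

```python
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def agreement(e_tokens, other_tokens, n):
    """G+_n(e, e'): how many n-grams of e occur in e'."""
    other = set(ngrams(other_tokens, n))
    return sum(1 for g in ngrams(e_tokens, n) if g in other)

def consensus_feature(e_tokens, other_hyps, n):
    """Eq. (4): expected agreement of e against decoder d_k's hypotheses, where
    other_hyps is a list of (tokens, posterior P(e'|d_k)) pairs.  The disagreement
    counterpart is (len(e_tokens) - n + 1) minus the agreement."""
    return sum(p * agreement(e_tokens, h, n) for h, p in other_hyps)

# Toy example: bigram agreement against two hypotheses of another decoder
# with (already normalized) posteriors 0.7 and 0.3.
e = "give a talk".split()
others = [("give a speech".split(), 0.7), ("make a speech".split(), 0.3)]
assert abs(consensus_feature(e, others, n=2) - 0.7) < 1e-9
```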
𝐺𝑛− 𝑒, 𝑒′ is the n-gram disagreement measure function which is complementary to 𝐺𝑛+ 𝑒, 𝑒′ : 𝐺𝑛− 𝑒, 𝑒′ = 1 −𝜏 𝑒𝑖 𝑖+𝑛−1, 𝑒′ 𝑒 −𝑛+1 𝑖=1 This feature is designed because 𝐺𝑛+ 𝑒, 𝑒′ does not penalize long translation with low precision. Obviously we have the following: 𝐺𝑛+ 𝑒, 𝑒′ + 𝐺𝑛− 𝑒, 𝑒′ = 𝑒 −𝑛+ 1 So if the weights of agreement and disagreement features are equal, the disagreement-based features will be equivalent to the translation length features. Using disagreement measures instead of translation length there could be two potential advantages: 1) a length feature has been included in the baseline model and we do not need to add one; 2) we can scale disagreement features independently and gain more modeling flexibility. Similar to a language model score, n-gram consensus -based feature values cannot be summed up from smaller hypotheses. Instead, it must be re-computed when building each new hypothesis. 2.5 Model Training We adapt the Minimum Error Rate Training (MERT) (Och, 2003) algorithm to estimate parameters for each member model in co-decoding. Let 𝝀𝑚 be the feature weight vector for member decoder 𝑑𝑚, the training procedure proceeds as follows: 1. Choose initial values for 𝝀1, … , 𝝀𝑀 2. Perform co-decoding using all member decoders on a development set D with 𝝀1, … , 𝝀𝑀. For each decoder 𝑑𝑚, find a new feature weight vector 𝝀𝑚 ′ which optimizes the specified evaluation criterion L on D using the MERT algorithm based on the n-best list ℋ𝑚 generated by 𝑑𝑚: 𝝀𝑚 ′ = argmax𝝀 𝐿 (𝑇|𝝀, ℋ𝑚 , 𝐷)) where T denotes the translations selected by re-ranking the translations in ℋ𝑚 using a new feature weight vector 𝝀 3. Let 𝝀1 = 𝝀1 ′ , … , 𝝀𝑀= 𝝀𝑀 ′ and repeat step 2 until convergence or a preset iteration limit is reached. Figure 1. Model training for co-decoding In step 2, there is no global criterion to optimize the co-decoding parameters across member models. Instead, parameters of different member models are tuned to maximize the evaluation criteria on each member decoder’s own n-best output. Figure 1 illustrates the training process of co-decoding with 2 member decoders. Source sentence decoder1 decoder2 ℋ1 MERT ℋ2 MERT co-decoding ref 1   2   588 2.6 Output Selection Since there is more than one model in codecoding, we cannot rely on member model’s score function to choose one best translation from multiple decoders’ outputs because the model scores are not directly comparable. We will examine the following two system combination -based solutions to this task:  Word-level system combination (Rosti et al., 2007) of member decoders’ n-best outputs  Hypothesis selection from combined n-best lists as proposed in Hildebrand and Vogel (2008) 3 Experiments In this section we present experiments to evaluate the co-decoding method. We first describe the data sets and baseline systems. 3.1 Data and Metric We conduct our experiments on the test data from the NIST 2005 and NIST 2008 Chinese-toEnglish machine translation tasks. The NIST 2003 test data is used for development data to estimate model parameters. Statistics of the data sets are shown in Table 1. In our experiments all the models are optimized with case-insensitive NIST version of BLEU score and we report results using this metric in percentage numbers. 
Data set # Sentences # Words NIST 2003 (dev) 919 23,782 NIST 2005 (test) 1,082 29,258 NIST 2008 (test) 1,357 31,592 Table 1: Data set statistics We use the parallel data available for the NIST 2008 constrained track of Chinese-toEnglish machine translation task as bilingual training data, which contains 5.1M sentence pairs, 128M Chinese words and 147M English words after pre-processing. GIZA++ is used to perform word alignment in both directions with default settings, and the intersect-diag-grow method is used to generate symmetric word alignment refinement. The language model used for all models (include decoding models and system combination models described in Section 2.6) is a 5-gram model trained with the English part of bilingual data and xinhua portion of LDC English Gigaword corpus version 3. 3.2 Member Decoders We use three baseline decoders in the experiments. The first one (SYS1) is re-implementation of Hiero, a hierarchical phrase-based decoder. Phrasal rules are extracted from all bilingual sentence pairs, while rules with variables are extracted only from selected data sets including LDC2003E14, LDC2003E07, LDC2005T06 and LDC2005T10, which contain around 350,000 sentence pairs, 8.8M Chinese words and 10.3M English words. The second one (SYS2) is a BTG decoder with lexicalized reordering model based on maximum entropy principle as proposed by Xiong et al. (2006). We use all the bilingual data to extract phrases up to length 3. The third one (SYS3) is a string-to-dependency tree –based decoder as proposed by Shen et al. (2008). For rule extraction we use the same setting as in SYS1. We parsed the language model training data with Berkeley parser, and then trained a dependency language model based on the parsing output. All baseline decoders are extended with n-gram consensus –based co-decoding features to construct member decoders. By default, the beam size of 20 is used for all decoders in the experiments. We run two iterations of decoding for each member decoder, and hold the value of 𝛼 in Equation 5 as a constant 0.05, which is tuned on the test data of NIST 2004 Chinese-toEnglish machine translation task. 3.3 Translation Results We first present the overall results of codecoding on both test sets using the settings as we described. For member decoders, up to 4gram agreement and disagreement features are used. We also implemented the word-level system combination (Rosti et al., 2007) and the hypothesis selection method (Hildebrand and Vogel, 2008). 20-best translations from all decoders are used in the experiments for these two combination methods. Parameters for both system combination and hypothesis selection are also tuned on NIST 2003 test data. The results are shown in Table 2. NIST 2005 NIST 2008 SYS1 38.66/40.08 27.67/29.19 SYS2 38.04/39.93 27.25/29.14 SYS3 39.50/40.32 28.75/29.68 Word-level Comb 40.45/40.85 29.52/30.35 Hypo Selection 40.09/40.50 29.02/29.71 Table 2: Co-decoding results on test data 589 In the Table 2, the results of a member decoder and its corresponding baseline decoder are grouped together with the later one for the member decoders. On both test sets, every member decoder performs significantly better than its baseline decoder (using the method proposed in Koehn (2004) for statistical significance test). We apply system combination methods to the n-best outputs of both baseline decoders and member decoders. We notice that we can achieve even better performance by applying system combination methods to member decoders’ nbest outputs. 
However, the improvement margins are smaller than those of baseline decoders on both test sets. This could be the result of less diversified outputs from co-decoding than those from baseline decoders. In particular, the results for hypothesis selection are only slightly better than the best system in co-decoding. We also evaluate the performance of system combination using different n-best sizes, and the results on NIST 2005 data set are shown in Figure 2, where bl- and co- legends denote combination results of baseline decoding and co-decoding respectively. From the results we can see that combination based on co-decoding’s outputs performs consistently better than that based on baseline decoders’ outputs for all n-best sizes we experimented with. However, we did not observe any significant improvements for both combination schemes when n-best size is larger than 20. Figure 2. Performance of system combination with different sizes of n-best lists One interesting observation in Table 2 is that the performance gap between baseline decoders is narrowed through co-decoding. For example, the 1.5 points gap between SYS2 and SYS3 on NIST 2008 data set is narrowed to 0.5. Actually we find that the TER score between two member decoders’ outputs are significantly reduced (as shown in Table 3), which indicates that the outputs become more similar due to the use of consensus information. For example, the TER score between SYS2 and SYS3 of the NIST 2008 outputs are reduced from 0.4238 to 0.2665. NIST 2005 NIST 2008 SYS1 vs. SYS2 0.3190/0.2274 0.4016/0.2686 SYS1 vs. SYS3 0.3252/0.1840 0.4176/0.2469 SYS2 vs. SYS3 0.3498/0.2171 0.4238/0.2665 Table 3: TER scores between co-decoding translation outputs In the rest of this section we run a series of experiments to investigate the impacts of different factors in co-decoding. All the results are reported on NIST 2005 test set. We start with investigating the performance gain due to partial hypothesis re-ranking. Because Equation 3 is a general model that can be applied to both partial hypothesis and n-best (full hypothesis) re-scoring, we compare the results of both cases. Figure 3 shows the BLEU score curves with up to 1000 candidates used for reranking. In Figure 3, the suffix p denotes results for partial hypothesis re-ranking, and f for n-best re-ranking only. For partial hypothesis reranking, obtaining more top-ranked results requires increasing the beam size, which is not affordable for large numbers in experiments. We work around this issue by approximating beam sizes larger than 20 by only enlarging the beam size for the span covering the entire source sentence. From Figure 3 we can see that all decoders can gain improvements before the size of candidate set reaches 100. When the size is larger than 50, co-decoding performs consistently and significantly better than the re-ranking results on any baseline decoder’s n-best outputs. Figure 3. Partial hypothesis vs. n-best re-ranking results on NIST 2005 test data Figure 4 shows the BLEU scores of a twosystem co-decoding as a function of re-decoding iterations. From the results we can see that the results for both decoders converge after two iterations. In Figure 4, iteration 0 denotes decoding with baseline model. 
The setting of iteration 1 can be viewed as the case of partial co-decoding, in 39.5 39.8 40.0 40.3 40.5 40.8 41.0 41.3 10 20 50 100 200 bl-comb co-comb bl-hyposel co-hyposel 38.0 38.5 39.0 39.5 40.0 40.5 41.0 41.5 10 20 50 100 200 500 1000 SYS1f SYS2f SYS3f SYS1p SYS2p SYS3p 590 which one decoder uses member model and the other keeps using baseline model. The results show that member models help each other: although improvements can be made using a single member model, best BLEU scores can only be achieved when both member models are used as shown by the results of iteration 2. The results also help justify the independent parameter estimation of member decoders described in Section 2.5, since optimizing the performance of one decoder will eventually bring performance improvements to all member decoders. Figure 4. Results using incremental iterations in co-decoding Next we examine the impacts of different consensus-based features in co-decoding. Table 4 shows the comparison results of a two-system co-decoding using different settings of n-gram agreement and disagreement features. It is clearly shown that both n-gram agreement and disagreement types of features are helpful, and using them together is the best choice. SYS1 SYS2 Baseline 38.66 38.04 +agreement –disagreement 39.36 39.02 –agreement +disagreement 39.12 38.67 +agreement +disagreement 39.68 39.61 Table 4: Co-decoding with/without n-gram agreement and disagreement features In Table 5 we show in another dimension the impact of consensus-based features by restricting the maximum order of n-grams used to compute agreement statistics. SYS1 SYS2 1 38.75 38.27 2 39.21 39.10 3 39.48 39.25 4 39.68 39.61 5 39.52 39.36 6 39.58 39.47 Table 5: Co-decoding with varied n-gram agreement and disagreement features From the results we do not observe BLEU improvement for 𝑛> 4. One reason could be that the data sparsity for high-order n-grams leads to over fitting on development data. We also empirically investigated the impact of scaling factor 𝛼 in Equation 5. It is observed in Figure 5 that the optimal value is between 0.01 ~ 0.1 on both development and test data. Figure 5. Impact of scaling factor 𝛼 4 Discussion Word-level system combination (system combination hereafter) (Rosti et al., 2007; He et al., 2008) has been proven to be an effective way to improve machine translation quality by using outputs from multiple systems. Our method is different from system combination in several ways. System combination uses unigram consensus only and a standalone decoding model irrelevant to single decoders. Our method uses agreement information of n-grams, and consensus features are integrated into decoding models. By constructing a confusion network, system combination is able to generate new translations different from any one in the input n-best lists, while our method does not extend the search spaces of baseline decoding models. Member decoders only change the scoring and ranking of the candidates in the search spaces. Results in Table 2 show that these two approaches can be used together to obtain further improvements. The work on multi-system hypothesis selection of Hildebrand and Vogel (2008) bears more resemblance to our method in that both make use of n-gram agreement statistics. They also empirically show that n-gram agreement is the most important factor for improvement apart from language models. Lattice MBR decoding (Tromble et al., 2008) also uses n-gram agreement statistics. 
Their work focuses on exploring larger evidence space by using a translation lattice instead of the n-best list. They also show the connection between expected n-gram change and corpus Log-BLEU loss. 37.5 38.0 38.5 39.0 39.5 40.0 0 1 2 3 4 SYS1 SYS2 38.0 38.5 39.0 39.5 40.0 0 0.01 0.03 0.05 0.1 0.2 0.5 1 Dev SYS1 Dev SYS2 Test SYS1 Test SYS2 591 5 Conclusion Improving machine translation with multiple systems has been a focus in recent SMT research. In this paper, we present a framework of collaborative decoding, in which multiple MT decoders are coordinated to search for better translations by re-ranking partial hypotheses using augmented log-linear models with translation consensus -based features. An iterative approach is proposed to re-rank all hypotheses explored in decoding. Experimental results show that with collaborative decoding every member decoder performs significantly better than its baseline decoder. In the future, we will extend our method to use lattice or hypergraph to compute consensus statistics instead of n-best lists. References Necip Fazil Ayan, Jing Zheng, and Wen Wang. 2008. Improving alignments for better confusion networks for combining machine translation systems. In Proc. Coling, pages 33-40. Srinivas Bangalore, German Bordel and Giuseppe Riccardi. 2001. Computing consensus translation from multiple machine translation systems. In Proc. ASRU, pages 351-354. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263-270. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-hmmbased hypothesis for combining outputs from machine translation systems. In Proc. EMNLP, pages 98-107. Almut Silja Hildebrand and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. In 8th AMTA conference, pages 254-261. Philipp Koehn, 2004. Statistical significance tests for machine translation evaluation. In Proc. EMNLP. Philipp Koehn, 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation model. In Proc. 6th AMTA Conference, pages 115124. Philipp Koehn, Hieu Hoang, Alexandra Brich, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proc. ACL, demonstration session. Shankar Kumar and William Byrne 2004. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In HLT-NAACL, pages 169-176. Yang Liu, Qun Liu, Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proc. ACL-Coling, pages 609-616. Evgeny Matusov, Nicola Ueffi ng, and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enchanced hypotheses alignment. In Proc. EACL, pages 3340. Franz Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. ACL, pages 295302. Franz Och. 2003. Minimum error rate training in statistical machine translation. In Proc. ACL, pages 160-167. Franz Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4), pages 417449 Antti-Veikko Rosti, Necip Fazil Ayan, Bing Xiang, Spyros Matsoukas, Richard Schwartz, and Bonnie Dorr. 2007. Combining outputs from multiple machine translation systems. 
In HLT-NAACL, pages 228-235 Antti-Veikko Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz. 2008. Incremental hypothesis alignment for building confusion networks with application to machine translation system combination. In Proc. Of the Third ACL Workshop on Statistical Machine Translation, pages 183-186. K.C. Sim, W. Byrne, M. Gales, H. Sahbi, and P. Woodland. 2007. Consensus network decoding for statistical machine translation system combination. In ICASSP. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. HLT-ACL, pages 577-585. Roy W. Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice minimum bayes-risk decoding for statistical machine translation. In Proc. EMNLP, pages 620-629. Deyi Xiong, Qun Liu and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proc. ACL, pages 521-528. 592
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 593–601, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Variational Decoding for Statistical Machine Translation Zhifei Li and Jason Eisner and Sanjeev Khudanpur Department of Computer Science and Center for Language and Speech Processing Johns Hopkins University, Baltimore, MD 21218, USA [email protected], [email protected], [email protected] Abstract Statistical models in machine translation exhibit spurious ambiguity. That is, the probability of an output string is split among many distinct derivations (e.g., trees or segmentations). In principle, the goodness of a string is measured by the total probability of its many derivations. However, finding the best string (e.g., during decoding) is then computationally intractable. Therefore, most systems use a simple Viterbi approximation that measures the goodness of a string using only its most probable derivation. Instead, we develop a variational approximation, which considers all the derivations but still allows tractable decoding. Our particular variational distributions are parameterized as n-gram models. We also analytically show that interpolating these n-gram models for different n is similar to minimumrisk decoding for BLEU (Tromble et al., 2008). Experiments show that our approach improves the state of the art. 1 Introduction Ambiguity is a central issue in natural language processing. Many systems try to resolve ambiguities in the input, for example by tagging words with their senses or choosing a particular syntax tree for a sentence. These systems are designed to recover the values of interesting latent variables, such as word senses, syntax trees, or translations, given the observed input. However, some systems resolve too many ambiguities. They recover additional latent variables— so-called nuisance variables—that are not of interest to the user.1 For example, though machine translation (MT) seeks to output a string, typical MT systems (Koehn et al., 2003; Chiang, 2007) 1These nuisance variables may be annotated in training data, but it is more common for them to be latent even there, i.e., there is no supervision as to their “correct” values. will also recover a particular derivation of that output string, which specifies a tree or segmentation and its alignment to the input string. The competing derivations of a string are interchangeable for a user who is only interested in the string itself, so a system that unnecessarily tries to choose among them is said to be resolving spurious ambiguity. Of course, the nuisance variables are important components of the system’s model. For example, the translation process from one language to another language may follow some hidden tree transformation process, in a recursive fashion. Many features of the model will crucially make reference to such hidden structures or alignments. However, collapsing the resulting spurious ambiguity—i.e., marginalizing out the nuisance variables—causes significant computational difficulties. The goodness of a possible MT output string should be measured by summing up the probabilities of all its derivations. Unfortunately, finding the best string is then computationally intractable (Sima’an, 1996; Casacuberta and Higuera, 2000).2 Therefore, most systems merely identify the single most probable derivation and report the corresponding string. 
This corresponds to a Viterbi approximation that measures the goodness of an output string using only its most probable derivation, ignoring all the others. In this paper, we propose a variational method that considers all the derivations but still allows tractable decoding. Given an input string, the original system produces a probability distribution p over possible output strings and their derivations (nuisance variables). Our method constructs a second distribution q ∈Q that approximates p as well as possible, and then finds the best string according to q. The last step is tractable because each q ∈Q is defined (unlike p) without reference to nuisance variables. Notice that q here does not approximate the entire translation process, but only 2May and Knight (2006) have successfully used treeautomaton determinization to exactly marginalize out some of the nuisance variables, obtaining a distribution over parsed translations. However, they do not marginalize over these parse trees to obtain a distribution over translation strings. 593 the distribution over output strings for a particular input. This is why it can be a fairly good approximation even without using the nuisance variables. In practice, we approximate with several different variational families Q, corresponding to ngram (Markov) models of different orders. We geometrically interpolate the resulting approximations q with one another (and with the original distribution p), justifying this interpolation as similar to the minimum-risk decoding for BLEU proposed by Tromble et al. (2008). Experiments show that our approach improves the state of the art. The methods presented in this paper should be applicable to collapsing spurious ambiguity for other tasks as well. Such tasks include dataoriented parsing (DOP), applications of Hidden Markov Models (HMMs) and mixture models, and other models with latent variables. Indeed, our methods were inspired by past work on variational decoding for DOP (Goodman, 1996) and for latent-variable parsing (Matsuzaki et al., 2005). 2 Background 2.1 Terminology In MT, spurious ambiguity occurs both in regular phrase-based systems (e.g., Koehn et al. (2003)), where different segmentations lead to the same translation string (Figure 1), and in syntax-based systems (e.g., Chiang (2007)), where different derivation trees yield the same string (Figure 2). In the Hiero system (Chiang, 2007) we are using, each string corresponds to about 115 distinct derivations on average. We use x to denote the input string, and D(x) to consider the set of derivations then considered by the system. Each derivation d ∈D(x) yields some translation string y = Y(d) in the target language. We write D(x, y) def= {d ∈D(x) : Y(d) = y} to denote the set of all derivations that yield y. Thus, the set of translations permitted by the model is T(y) def= {y : D(x, y) ̸= ∅} (or equivalently, T(y) def= {Y(d) : d ∈D(x)}). We write y∗for the translation string that is actually output. 2.2 Maximum A Posterior (MAP) Decoding For a given input sentence x, a decoding method identifies a particular “best” output string y∗. The maximum a posteriori (MAP) decision rule is y∗ = argmax y∈T(x) p(y | x) (1) machine translation software ! " # $ % & machine translation software ! " # $ % & Figure 1: Segmentation ambiguity in phrase-based MT: two different segmentations lead to the same translation string. S->(! ", machine) S->(#$, translation) S->(%&, software) S->(! 
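To make Eqs. (2) and (3) concrete at toy scale, the sketch below scores an explicit list of derivations under a log-linear model, sums the probability mass of derivations that share the same yield, and returns the MAP string. On a real hypergraph exactly this enumeration over T(x) is what becomes intractable; the scores and γ here are made-up numbers.

```python
import math
from collections import defaultdict

def map_decode(derivations, gamma=1.0):
    """derivations: list of (yield string Y(d), model score s(x, y, d)).
    Implements y* = argmax_y sum_{d in D(x, y)} exp(gamma * s) / Z(x)."""
    mass = defaultdict(float)
    for y, score in derivations:
        mass[y] += math.exp(gamma * score)
    z = sum(mass.values())                 # Z(x); does not affect the argmax
    posterior = {y: m / z for y, m in mass.items()}
    return max(posterior, key=posterior.get), posterior

# Spurious ambiguity in miniature: two weaker derivations of one string can
# outweigh the single best (Viterbi) derivation of another string.
derivs = [("he gave a talk", 1.0),
          ("he gave a speech", 0.9),
          ("he gave a speech", 0.9)]
best, posterior = map_decode(derivs)
assert best == "he gave a speech"          # Viterbi alone would pick "he gave a talk"
```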
", machine) #$ S->(%&, software) S->(S0 S1, S0 S1) S->(S0 S1, S0 S1) S->(S0 #$ S1, S0 translation S1) Figure 2: Tree ambiguity in syntax-based MT: two derivation trees yield the same translation string. (An alternative decision rule, minimum Bayes risk (MBR), will be discussed in Section 4.) To obtain p(y | x) above, we need to marginalize over a nuisance variable, the derivation of y. Therefore, the MAP decision rule becomes y∗ = argmax y∈T(x) X d∈D(x,y) p(y, d | x) (2) where p(y, d | x) is typically derived from a loglinear model as follows, p(y, d | x) = eγ·s(x,y,d) Z(x) = eγ·s(x,y,d) P y,d eγ·s(x,y,d) (3) where γ is a scaling factor to adjust the sharpness of the distribution, the score s(x, y, d) is a learned linear combination of features of the triple (x, y, d), and Z(x) is a normalization constant. Note that p(y, d | x) = 0 if y ̸= Y(d). Our derivation set D(x) is encoded in polynomial space, using a hypergraph or lattice.3 However, both |D(x)| and |T(x)| may be exponential in |x|. Since the marginalization needs to be carried out for each member of T(x), the decoding problem of (2) turns out to be NP-hard,4 as shown by Sima’an (1996) for a similar problem. 3A hypergraph is analogous to a parse forest (Huang and Chiang, 2007). (A finite-state lattice is a special case.) It can be used to encode exponentially many hypotheses generated by a phrase-based MT system (e.g., Koehn et al. (2003)) or a syntax-based MT system (e.g., Chiang (2007)). 4Note that the marginalization for a particular y would be tractable; it is used at training time in certain training objective functions, e.g., maximizing the conditional likelihood of a reference translation (Blunsom et al., 2008). 594 2.3 Viterbi Approximation To approximate the intractable decoding problem of (2), most MT systems (Koehn et al., 2003; Chiang, 2007) use a simple Viterbi approximation, y∗ = argmax y∈T(x) pViterbi(y | x) (4) = argmax y∈T(x) max d∈D(x,y) p(y, d | x) (5) = Y argmax d∈D(x) p(y, d | x) ! (6) Clearly, (5) replaces the sum in (2) with a max. In other words, it approximates the probability of a translation string by the probability of its mostprobable derivation. (5) is found quickly via (6). The Viterbi approximation is simple and tractable, but it ignores most derivations. 2.4 N-best Approximation (or Crunching) Another popular approximation enumerates the N best derivations in D(x), a set that we call ND(x). Modifying (2) to sum over only these derivations is called crunching by May and Knight (2006): y∗ = argmax y∈T(x) pcrunch(y | x) (7) = argmax y∈T(x) X d∈D(x,y)∩ND(x) p(y, d | x) 3 Variational Approximate Decoding The Viterbi and crunching methods above approximate the intractable decoding of (2) by ignoring most of the derivations. In this section, we will present a novel variational approximation, which considers all the derivations but still allows tractable decoding. 3.1 Approximate Inference There are several popular approaches to approximate inference when exact inference is intractable (Bishop, 2006). Stochastic techniques such as Markov Chain Monte Carlo are exact in the limit of infinite runtime, but tend to be too slow for large problems. By contrast, deterministic variational methods (Jordan et al., 1999), including messagepassing (Minka, 2005), are inexact but scale up well. They approximate the original intractable distribution with one that factorizes better or has a specific parametric form (e.g., Gaussian). In our work, we use a fast variational method. Variational methods generally work as follows. 
When exact inference under a complex model p is intractable, one can approximate the posterior p(y | x) by a tractable model q(y), where q ∈Q is chosen to minimize some information loss such as the KL divergence KL(p ∥q). The simpler model q can then act as a surrogate for p during inference. 3.2 Variational Decoding for MT For each input sentence x, we assume that a baseline MT system generates a hypergraph HG(x) that compactly encodes the derivation set D(x) along with a score for each d ∈D(x),5 which we interpret as p(y, d | x) (or proportional to it). For any single y ∈T(x), it would be tractable using HG(x) to compute p(y | x) = P d p(y, d | x). However, as mentioned, it is intractable to find argmaxy p(y | x) as required by the MAP decoding (2), so we seek an approximate distribution q(y) ≈p(y | x).6 For a fixed x, we seek a distribution q ∈Q that minimizes the KL divergence from p to q (both regarded as distributions over y):7 q∗ = argmin q∈Q KL(p ∥q) (8) = argmin q∈Q X y∈T(x) (p log p −p log q) (9) = argmax q∈Q X y∈T(x) p log q (10) So far, in order to approximate the intractable optimization problem (2), we have defined another optimization problem (10). If computing p(y | x) during decoding is computationally intractable, one might wonder if the optimization problem (10) is any simpler. We will show this is the case. The trick is to parameterize q as a factorized distribution such that the estimation of q∗ and decoding using q∗are both tractable through efficient dynamic programs. In the next three subsections, we will discuss the parameterization, estimation, and decoding, respectively. 3.2.1 Parameterization of q In (10), Q is a family of distributions. If we select a large family Q, we can allow more complex distributions, so that q∗will better approximate p. If we select a smaller family Q, we can 5The baseline system may return a pruned hypergraph, which has the effect of pruning D(x) and T(x) as well. 6Following the convention in describing variational inference, we write q(y) instead of q(y | x), even though q(y) always depends on x implicitly. 7To avoid clutter, we denote p(y | x) by p, and q(y) by q. We drop p log p from (9) because it is constant with respect to q. We then flip the sign and change argmin to argmax. 595 guarantee that q∗will have a simple form with many conditional independencies, so that q∗(y) and y∗= argmaxy q∗(y) are easier to compute. Since each q(y) is a distribution over output strings, a natural choice for Q is the family of n-gram models. To obtain a small KL divergence (8), we should make n as large as possible. In fact, q∗→p as n →∞. Of course, this last point also means that our computation becomes intractable as n →∞.8 However, if p(y | x) is defined by a hypergraph HG(x) whose structure explicitly incorporates an m-gram language model, both training and decoding will be efficient when m ≥n. We will give algorithms for this case that are linear in the size of HG(x).9 Formally, each q ∈Q takes the form q(y) = Y w∈W q(r(w) | h(w))cw(y) (11) where W is a set of n-gram types. Each w ∈W is an n-gram, which occurs cw(y) times in the string y, and w may be divided into an (n −1)-gram prefix h(w) (the history) and a 1-gram suffix r(w) (the rightmost or current word). 8Blunsom et al. (2008) effectively do take n = ∞, by maintaining the whole translation string in the dynamic programming state. 
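Before turning to the estimation algorithms of Figures 3 and 4, the parameterization in (11) and the count-ratio estimate in (12) can be sketched directly. The sketch below assumes the expected n-gram and history counts have already been collected into plain dictionaries (in the paper they come from the hypergraph HG(x)); the function names and the numbers are purely illustrative:

```python
import math

def estimate_q(ngram_counts, history_counts):
    """Eq. (12): q*(r(w) | h(w)) = cbar(w) / cbar(h(w)).
    ngram_counts maps an n-gram tuple w to its expected count under p;
    history_counts maps the (n-1)-gram prefix h(w) to its expected count."""
    return {w: c / history_counts[w[:-1]] for w, c in ngram_counts.items()}

def log_q(y, q, n):
    """Eq. (11): log q(y) = sum over n-gram tokens w in y of log q(r(w) | h(w)).
    y is a list of tokens; boundary handling is simplified."""
    padded = ["<s>"] * (n - 1) + y + ["</s>"]
    return sum(math.log(q[tuple(padded[i:i + n])])
               for i in range(len(padded) - n + 1))

# Hypothetical bigram expected counts (n = 2), e.g. from a tiny hypergraph:
ngram_counts = {("<s>", "the"): 1.0, ("the", "mouse"): 0.7,
                ("the", "cat"): 0.3, ("mouse", "</s>"): 0.7,
                ("cat", "</s>"): 0.3}
history_counts = {("<s>",): 1.0, ("the",): 1.0, ("mouse",): 0.7, ("cat",): 0.3}

q2 = estimate_q(ngram_counts, history_counts)
print(math.exp(log_q(["the", "mouse"], q2, 2)))  # q("the mouse") = 0.7
```

The brute-force and dynamic-programming algorithms below (Figures 3 and 4) differ only in how these expected counts are gathered from HG(x).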
They alleviate the computation cost somehow by using aggressive beam pruning, which might be sensible for their relatively small task (e.g., input sentences of < 10 words). But, we are interested in improving the performance for a large-scale system, and thus their method is not a viable solution. Moreover, we observe in our experiments that using a larger n does not improve much over n = 2. 9A reviewer asks about the interaction with backed-off language models. The issue is that the most compact finitestate representations of these (Allauzen et al., 2003), which exploit backoff structure, are not purely m-gram for any m. They yield more compact hypergraphs (Li and Khudanpur, 2008), but unfortunately those hypergraphs might not be treatable by Fig. 4—since where they back off to less than an n-gram, e is not informative enough for line 8 to find w. We sketch a method that works for any language model given by a weighted FSA, L. The variational family Q can be specified by any deterministic weighted FSA, Q, with weights parameterized by φ. One seeks φ to minimize (8). Intersect HG(x) with an “unweighted” version of Q in which all arcs have weight 1, so that Q does not prefer any string to another. By lifting weights into an expectation semiring (Eisner, 2002), it is then possible to obtain expected transition counts in Q (where the expectation is taken under p), or other sufficient statistics needed to estimate φ. This takes only time O(|HG(x)|) when L is a left-to-right refinement of Q (meaning that any two prefix strings that reach the same state in L also reach the same state in Q), for then intersecting L or HG(x) with Q does not split any states. That is the case when L and Q are respectively pure m-gram and n-gram models with m ≥n, as assumed in (12) and Figure 4. It is also the case when Q is a pure n-gram model and L is constructed not to back off beyond n-grams; or when the variational family Q is defined by deliberately taking the FSA Q to have the same topology as L. The parameters that specify a particular q ∈Q are the (normalized) conditional probability distributions q(r(w) | h(w)). We will now see how to estimate these parameters to approximate p(· | x) for a given x at test time. 3.2.2 Estimation of q∗ Note that the objective function (8)–(10) asks us to approximate p as closely as possible, without any further smoothing. (It is assumed that p is already smoothed appropriately, having been constructed from channel and language models that were estimated with smoothing from finite training data.) In fact, if p were the empirical distribution over strings in a training corpus, then q∗of (10) is just the maximum-likelihood n-gram model—whose parameters, trivially, are just unsmoothed ratios of the n-gram and (n−1)-gram counts in the training corpus. That is, q∗(r(w) | h(w)) = c(w) c(h(w)). Our actual job is exactly the same, except that p is specified not by a corpus but by the hypergraph HG(x). The only change is that the n-gram counts ¯c(w) are no longer integers from a corpus, but are expected counts under p:10 q∗(r(w) | h(w)) = ¯c(w) ¯c(h(w)) = (12) P y cw(y)p(y | x) P y ch(w)(y)p(y | x) = P y,d cw(y)p(y, d | x) P y,d ch(w)(y)p(y, d | x) Now, the question is how to efficiently compute (12) from the hypergraph HG(x). To develop the intuition, we first present a brute-force algorithm in Figure 3. 
The algorithm is brute-force since it first needs to unpack the hypergraph and enumerate each possible derivation in the hypergraph (see line 1), which is computationally intractable. The algorithm then enumerates each n-gram and (n −1)-gram in y and accumulates its soft count into the expected count, and finally obtains the parameters of q∗by taking count ratios via (12). Figure 4 shows an efficient version that exploits the packed-forest structure of HG(x) in computing the expected counts. Specifically, it first runs the inside-outside procedure, which annotates each node (say v) with both an inside weight β(v) and an outside weight α(v). The inside-outside also finds Z(x), the total weight of all derivations. With these weights, the algorithm then explores the hypergraph once more to collect the expected 10One can prove (12) via Lagrange multipliers, with q∗(· | h) constrained to be a normalized distribution for each h. 596 Brute-Force-MLE(HG(x)) 1 for y, d in HG(x)  each derivation 2 for w in y  each n-gram type 3  accumulate soft count 4 ¯c(w) + = cw(y) · p(y, d | x) 5 ¯c(h(w)) + = cw(y) · p(y, d | x) 6 q∗←MLE using formula (12) 7 return q∗ Figure 3: Brute-force estimation of q∗. Dynamic-Programming-MLE(HG(x)) 1 run inside-outside on the hypergraph HG(x) 2 for v in HG(x)  each node 3 for e ∈B(v)  each incoming hyperedge 4 ce ←pe · α(v)/Z(x) 5 for u ∈T(e)  each antecedent node 6 ce ←ce · β(u) 7  accumulate soft count 8 for w in e  each n-gram type 9 ¯c(w) + = cw(e) · ce 10 ¯c(h(w)) + = cw(e) · ce 11 q∗←MLE using formula (12) 12 return q∗ Figure 4: Dynamic programming estimation of q∗. B(v) represents the set of incoming hyperedges of node v; pe represents the weight of the hyperedge e itself; T(e) represents the set of antecedent nodes of hyperedge e. Please refer to the text for the meanings of other notations. counts. For each hyperedge (say e), it first gets the posterior weight ce (see lines 4-6). Then, for each n-gram type (say w), it increments the expected count by cw(e) · ce, where cw(e) is the number of copies of n-gram w that are added by hyperedge e, i.e., that appear in the yield of e but not in the yields of any of its antecedents u ∈T(e). While there may be exponentially many derivations, the hypergraph data structure represents them in polynomial space by allowing multiple derivations to share subderivations. The algorithm of Figure 4 may be run over this packed forest in time O(|HG(x)|) where |HG(x)| is the hypergraph’s size (number of hyperedges). 3.2.3 Decoding with q∗ When translating x at runtime, the q∗constructed from HG(x) will be used as a surrogate for p during decoding. We want its most probable string: y∗ = argmax y q∗(y) (13) Since q∗is an n-gram model, finding y∗is equivalent to a shortest-path problem in a certain graph whose edges correspond to n-grams (weighted with negative log-probabilities) and whose vertices correspond to (n −1)-grams. However, because q∗only approximates p, y∗of (13) may be locally appropriate but globally inadequate as a translation of x. Observe, e.g., that an ngram model q∗(y) will tend to favor short strings y, regardless of the length of x. Suppose x = le chat chasse la souris (“the cat chases the mouse”) and q∗is a bigram approximation to p(y | x). Presumably q∗(the | START), q∗(mouse | the), and q∗(END | mouse) are all large in HG(x). So the most probable string y∗under q∗may be simply “the mouse,” which is short and has a high probability but fails to cover x. 
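A quick numeric check of this length bias, using made-up bigram probabilities rather than anything estimated from a real hypergraph:

```python
import math

# Hypothetical bigram model q* for x = "le chat chasse la souris";
# the probabilities below are invented for illustration only.
q = {("<s>", "the"): 0.9, ("the", "mouse"): 0.4, ("the", "cat"): 0.4,
     ("mouse", "</s>"): 0.5, ("cat", "chases"): 0.8, ("chases", "the"): 0.9}

def score(tokens):
    pairs = zip(["<s>"] + tokens, tokens + ["</s>"])
    return math.fsum(math.log(q.get(pair, 1e-9)) for pair in pairs)

print(score(["the", "mouse"]))                          # about -1.7
print(score(["the", "cat", "chases", "the", "mouse"]))  # about -3.0
# The short, inadequate string wins under q* alone.
```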
Therefore, a better way of using q∗is to restrict the search space to the original hypergraph, i.e.: y∗ = argmax y∈T(x) q∗(y) (14) This ensures that y∗is a valid string in the original hypergraph HG(x), which will tend to rule out inadequate translations like “the mouse.” If our sole objective is to get a good approximation to p(y | x), we should just use a single n-gram model q∗whose order n is as large as possible, given computational constraints. This may be regarded as favoring n-grams that are likely to appear in the reference translation (because they are likely in the derivation forest). However, in order to score well on the BLEU metric for MT evaluation (Papineni et al., 2001), which gives partial credit, we would also like to favor lower-order ngrams that are likely to appear in the reference, even if this means picking some less-likely highorder n-grams. For this reason, it is useful to interpolate different orders of variational models, y∗ = argmax y∈T(x) X n θn · log q∗ n(y) (15) where n may include the value of zero, in which case log q∗ 0(y) def= |y|, corresponding to a conventional word penalty feature. In the geometric interpolation above, the weight θn controls the relative veto power of the n-gram approximation and can be tuned using MERT (Och, 2003) or a minimum risk procedure (Smith and Eisner, 2006). Lastly, note that Viterbi and variational approximation are different ways to approximate the exact probability p(y | x), and each of them has pros and cons. Specifically, Viterbi approximation uses the correct probability of one complete 597 derivation, but ignores most of the derivations in the hypergraph. In comparison, the variational approximation considers all the derivations in the hypergraph, but uses only aggregate statistics of fragments of derivations. Therefore, it is desirable to interpolate further with the Viterbi approximation when choosing the final translation output:11 y∗= argmax y∈T(x) X n θn · log q∗ n(y) + θv · log pViterbi(y | x) (16) where the first term corresponds to the interpolated variational decoding of (15) and the second term corresponds to the Viterbi decoding of (4).12 Assuming θv > 0, the second term penalizes translations with no good derivation in the hypergraph.13 For n ≤ m, any of these decoders (14)– (16) may be implemented efficiently by using the n-gram variational approximations q∗to rescore HG(x)—preserving its hypergraph topology, but modifying the hyperedge weights.14 While the original weights gave derivation d a score of log p(d | x), the weights as modified for (16) will give d a score of P n θn · log q∗ n(Y(d)) + θv · log p(d | x). We then find the best-scoring derivation and output its target yield; that is, we find argmaxy∈T(x) via Y(argmaxd∈D(x)). 4 Variational vs. Min-Risk Decoding In place of the MAP decoding, another commonly used decision rule is minimum Bayes risk (MBR): y∗= argmin y R(y) = argmin y X y′ l(y, y′)p(y′ | x) (17) 11It would also be possible to interpolate with the N-best approximations (see Section 2.4), with some complications. 12Zens and Ney (2006) use a similar decision rule as here and they also use posterior n-gram probabilities as feature functions, but their model estimation and decoding are over an N-best, which is trivial in terms of computation. 13Already at (14), we explicitly ruled out translations y having no derivation at all in the hypergraph. However, suppose the hypergraph were very large (thanks to a large or smoothed translation model and weak pruning). 
Then (14)’s heuristic would fail to eliminate bad translations (“the mouse”), since nearly every string y ∈Σ∗would be derived as a translation with at least a tiny probability. The “soft” version (16) solves this problem, since unlike the “hard” (14), it penalizes translations that appear only weakly in the hypergraph. As an extreme case, translations not in the hypergraph at all are infinitely penalized (log pViterbi(y) = log 0 = −∞), making it natural for the decoder not to consider them, i.e., to do only argmaxy∈T(x) rather than argmaxy∈Σ∗. 14One might also want to use the q∗ n or smoothed versions of them to rescore additional hypotheses, e.g., hypotheses proposed by other systems or by system combination. where l(y, y′) represents the loss of y if the true answer is y′, and the risk of y is its expected loss.15 Statistical decision theory shows MBR is optimal if p(y′ | x) is the true distribution, while in practice p(y′ | x) is given by a model at hand. We now observe that our variational decoding resembles the MBR decoding of Tromble et al. (2008). They use the following loss function, of which a linear approximation to BLEU (Papineni et al., 2001) is a special case, l(y, y′) = −(θ0|y| + X w∈N θwcw(y)δw(y′)) (18) where w is an n-gram type, N is a set of n-gram types with n ∈[1, 4], cw(y) is the number of occurrence of the n-gram w in y, and δw(y′) is an indicator function to check if y′ contains at least one occurrence of w. With the above loss function, Tromble et al. (2008) derive the MBR rule16 y∗= argmax y (θ0|y| + X w∈N θwcw(y)g(w | x)) (19) where g(w | x) is a specialized “posterior” probability of the n-gram w, and is defined as g(w | x) = X y′ δw(y′)p(y′ | x) (20) Now, let us divide N, which contains n-gram types of different n, into several subsets Wn, each of which contains only the n-grams with a given length n. We can now rewrite (19) as follows, y∗= argmax y X n θn · gn(y | x) (21) by assuming θw = θ|w| and, gn(y | x)= ( |y| if n = 0 P w∈Wn g(w | x)cw(y) if n > 0 (22) Clearly, their rule (21) has a quite similar form to our rule (15), and we can relate (20) to (12) and (22) to (11). This justifies the use of interpolation in Section 3.2.3. However, there are several important differences. First, the n-gram “posterior” of (20) is very expensive to compute. In fact, it requires an intersection between each n-gram in the lattice and the lattice itself, as is done by Tromble 15The MBR becomes the MAP decision rule of (1) if a socalled zero-one loss function is used: l(y, y′) = 0 if y = y′; otherwise l(y, y′) = 1. 16Note that Tromble et al. (2008) only consider MBR for a lattice without hidden structures, though their method can be in principle applied in a hypergraph with spurious ambiguity. 598 et al. (2008). In comparison, the optimal n-gram probabilities of (12) can be computed using the inside-outside algorithm, once and for all. Also, g(w | x) of (20) is not normalized over the history of w, while q∗(r(w) | h(w)) of (12) is. Lastly, the definition of the n-gram model is different. While the model (11) is a proper probabilistic model, the function of (22) is simply an approximation of the average n-gram precisions of y. A connection between variational decoding and minimum-risk decoding has been noted before (e.g., Matsuzaki et al. (2005)), but the derivation above makes the connection formal. DeNero et al. (2009) concurrently developed an alternate to MBR, called consensus decoding, which is similar to ours in practice although motivated quite differently. 
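For concreteness, the interpolated decision rule of (16) can be sketched as follows. The sketch scores an explicit list of candidate derivations for clarity; the actual decoder instead rescores the packed hypergraph by modifying hyperedge weights, as described in Section 3.2.3. The per-order models q_models, the weights thetas and theta_v, and the candidate list are placeholders to be supplied:

```python
def interpolated_score(y_tokens, log_p_viterbi_d, q_models, thetas, theta_v):
    """Score of a derivation d under eq. (16):
       sum_n theta_n * log q*_n(Y(d)) + theta_v * log p(d | x).
    q_models[n] is a function returning log q*_n(y); n = 0 is the word penalty."""
    score = theta_v * log_p_viterbi_d
    for n, theta_n in thetas.items():
        if n == 0:
            score += theta_n * len(y_tokens)        # log q*_0(y) := |y|
        else:
            score += theta_n * q_models[n](y_tokens)
    return score

def decode(derivations, q_models, thetas, theta_v):
    """derivations: list of (tokens_of_Y(d), log p(d | x)) pairs."""
    best = max(derivations,
               key=lambda d: interpolated_score(d[0], d[1],
                                                q_models, thetas, theta_v))
    return best[0]   # output the target yield of the best-scoring derivation
```

As noted in Section 3.2.3, the weights θn and θv would be tuned with MERT or a minimum-risk procedure.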
5 Experimental Results We report results using an open source MT toolkit, called Joshua (Li et al., 2009), which implements Hiero (Chiang, 2007). 5.1 Experimental Setup We work on a Chinese to English translation task. Our translation model was trained on about 1M parallel sentence pairs (about 28M words in each language), which are sub-sampled from corpora distributed by LDC for the NIST MT evaluation using a sampling method based on the ngram matches between training and test sets in the foreign side. We also used a 5-gram language model with modified Kneser-Ney smoothing (Chen and Goodman, 1998), trained on a data set consisting of a 130M words in English Gigaword (LDC2007T07) and the English side of the parallel corpora. We use GIZA++ (Och and Ney, 2000), a suffix-array (Lopez, 2007), SRILM (Stolcke, 2002), and risk-based deterministic annealing (Smith and Eisner, 2006)17 to obtain word alignments, translation models, language models, and the optimal weights for combining these models, respectively. We use standard beam-pruning and cube-pruning parameter settings, following Chiang (2007), when generating the hypergraphs. The NIST MT’03 set is used to tune model weights (e.g. those of (16)) and the scaling factor 17We have also experimented with MERT (Och, 2003), and found that the deterministic annealing gave results that were more consistent across runs and often better. Decoding scheme MT’04 MT’05 Viterbi 35.4 32.6 MBR (K=1000) 35.8 32.7 Crunching (N=10000) 35.7 32.8 Crunching+MBR (N=10000) 35.8 32.7 Variational (1to4gram+wp+vt) 36.6 33.5 Table 1: BLEU scores for Viterbi, Crunching, MBR, and variational decoding. All the systems improve significantly over the Viterbi baseline (paired permutation test, p < 0.05). In each column, we boldface the best result as well as all results that are statistically indistinguishable from it. In MBR, K is the number of unique strings. For Crunching and Crunching+MBR, N represents the number of derivations. On average, each string has about 115 distinct derivations. The variational method “1to4gram+wp+vt” is our full interpolation (16) of four variational n-gram models (“1to4gram”), the Viterbi baseline (“vt”), and a word penalty feature (“wp”). γ of (3),18 and MT’04 and MT’05 are blind testsets. We will report results for lowercase BLEU-4, using the shortest reference translation in computing brevity penalty. 5.2 Main Results Table 1 presents the BLEU scores under Viterbi, crunching, MBR, and variational decoding. Both crunching and MBR show slight significant improvements over the Viterbi baseline; variational decoding gives a substantial improvement. The difference between MBR and Crunching+MBR lies in how we approximate the distribution p(y′ | x) in (17).19 For MBR, we take p(y′ | x) to be proportional to pViterbi(y′ | x) if y′ is among the K best distinct strings on that measure, and 0 otherwise. For Crunching+MBR, we take p(y′ | x) to be proportional to pcrunch(y′ | x), which is based on the N best derivations. 5.3 Results of Different Variational Decoding Table 2 presents the BLEU results under different ways in using the variational models, as discussed in Section 3.2.3. As shown in Table 2a, decoding with a single variational n-gram model (VM) as per (14) improves the Viterbi baseline (except the case with a unigram VM), though often not statistically significant. Moreover, a bigram (i.e., “2gram”) achieves the best BLEU scores among the four different orders of VMs. 
The interpolation between a VM and a word penalty feature (“wp”) improves over the unigram 18We found the BLEU scores are not very sensitive to γ, contrasting to the observations by Tromble et al. (2008). 19We also restrict T(x) to {y : p(y | x) > 0}, using the same approximation for p(y | x) as we did for p(y′ | x). 599 (a) decoding with a single variational model Decoding scheme MT’04 MT’05 Viterbi 35.4 32.6 1gram 25.9 24.5 2gram 36.1 33.4 3gram 36.0∗ 33.1 4gram 35.8∗ 32.9 (b) interpolation between a single variational model and a word penalty feature 1gram+wp 29.7 27.7 2gram+wp 35.5 32.6 3gram+wp 36.1∗ 33.1 4gram+wp 35.7∗ 32.8∗ (c) interpolation of a single variational model, the Viterbi model, and a word penalty feature 1gram+wp+vt 35.6∗ 32.8∗ 2gram+wp+vt 36.5∗ 33.5∗ 3gram+wp+vt 35.8∗ 32.9∗ 4gram+wp+vt 35.6∗ 32.8∗ (d) interpolation of several n-gram VMs, the Viterbi model, and a word penalty feature 1to2gram+wp+vt 36.6∗ 33.6∗ 1to3gram+wp+vt 36.6∗ 33.5∗ 1to4gram+wp+vt 36.6∗ 33.5∗ Table 2: BLEU scores under different variational decoders discussed in Section 3.2.3. A star ∗indicates a result that is significantly better than Viterbi decoding (paired permutation test, p < 0.05). We boldface the best system and all systems that are not significantly worse than it. The brevity penalty BP in BLEU is always 1, meaning that on average y∗is no shorter than the reference translation, except for the “1gram” systems in (a), which suffer from brevity penalties of 0.826 and 0.831. VM dramatically, but does not improve higherorder VMs (Table 2b). Adding the Viterbi feature (“vt”) into the interpolation further improves the lower-order models (Table 2c), and all the improvements over the Viterbi baseline become statistically significant. At last, interpolation of several variational models does not yield much further improvement over the best previous model, but makes the results more stable (Table 2d). 5.4 KL Divergence of Approximate Models While the BLEU scores reported show the practical utility of the variational models, it is also interesting to measure how well each individual variational model q(y) approximates the distribution p(y | x). Ideally, the quality of approximation should be measured by the KL divergence KL(p ∥q) def= H(p, q) −H(p), where the crossentropy H(p, q) def= −P y p(y | x) log q(y), and Measure H(p, ·) Hd(p) H(p) bits/word q∗ 1 q∗ 2 q∗ 3 q∗ 4 ≈ MT’04 2.33 1.68 1.57 1.53 1.36 1.03 MT’05 2.31 1.69 1.58 1.54 1.37 1.04 Table 3: Cross-entropies H(p, q) achieved by various approximations q. The notation H denotes the sum of crossentropies of all test sentences, divided by the total number of test words. A perfect approximation would achieve H(p), which we estimate using the true Hd(p) and a 10000-best list. the entropy H(p) def= −P y p(y | x) log p(y | x). Unfortunately H(p) (and hence KL = H(p, q) − H(p)) is intractable to compute. But, since H(p) is the same for all q, we can simply use H(p, q) to compare different models q. Table 3 reports the cross-entropies H(p, q) for various models q. We also report the derivational entropy Hd(p) def= −P d p(d | x) log p(d | x).20 From this, we obtain an estimate of H(p) by observing that the “gap” Hd(p) −H(p) equals Ep(y)[H(d | y)], which we estimate from our 10000-best list. Table 3 confirms that higher-order variational models (drawn from a larger family Q) approximate p better. This is necessarily true, but it is interesting to see that most of the improvement is obtained just by moving from a unigram to a bigram model. 
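As a sketch of how the gap Hd(p) − H(p) = Ep(y)[H(d | y)] might be estimated from the 10000-best list — assuming the listed derivation probabilities are simply renormalized over the list, a detail not spelled out in the text:

```python
import math
from collections import defaultdict

def estimate_entropy_gap(nbest):
    """Estimate E_{p(y)}[H(d | y)] from an N-best list of (y, prob) pairs,
    one entry per derivation d with y = Y(d) and prob proportional to p(d | x).
    Probabilities are renormalized over the list."""
    total = sum(p for _, p in nbest)
    by_string = defaultdict(list)
    for y, p in nbest:
        by_string[y].append(p / total)
    gap = 0.0
    for probs in by_string.values():
        p_y = sum(probs)
        # entropy (in bits) of p(d | y) for this particular string y
        h_d_given_y = -sum((p / p_y) * math.log2(p / p_y) for p in probs)
        gap += p_y * h_d_given_y
    return gap

# H(p) is then approximated as Hd(p) - gap, since Hd(p) = H(p) + E[H(d | y)].
```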
Indeed, although Table 3 shows that better approximations can be obtained by using higher-order models, the best BLEU score in Tables 2a and 2c was obtained by the bigram model. After all, p cannot perfectly predict the reference translation anyway, hence may not be worth approximating closely; but p may do a good job of predicting bigrams of the reference translation, and the BLEU score rewards us for those. 6 Conclusions and Future Work We have successfully applied the general variational inference framework to a large-scale MT task, to approximate the intractable problem of MAP decoding in the presence of spurious ambiguity. We also showed that interpolating variational models with the Viterbi approximation can compensate for poor approximations, and that interpolating them with one another can reduce the Bayes risk and improve BLEU. Our empirical results improve the state of the art. 20Both H(p, q) and Hd(p) involve an expectation over exponentially many derivations, but they can be computed in time only linear in the size of HG(x) using an expectation semiring (Eisner, 2002). In particular, H(p, q) can be found as −P d∈D(x) p(d | x) log q(Y(d)). 600 Many interesting research directions remain open. To approximate the intractable MAP decoding problem of (2), we can use different variational distributions other than the n-gram model of (11). Interpolation with other models is also interesting, e.g., the constituent model in Zhang and Gildea (2008). We might also attempt to minimize KL(q ∥p) rather than KL(p ∥q), in order to approximate the mode (which may be preferable since we care most about the 1-best translation under p) rather than the mean of p (Minka, 2005). One could also augment our n-gram models with non-local string features (Rosenfeld et al., 2001) provided that the expectations of these features could be extracted from the hypergraph. Variational inference can also be exploited to solve many other intractable problems in MT (e.g., word/phrase alignment and system combination). Finally, our method can be used for tasks beyond MT. For example, it can be used to approximate the intractable MAP decoding inherent in systems using HMMs (e.g. speech recognition). It can also be used to approximate a context-free grammar with a finite state automaton (Nederhof, 2005). References Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. Generalized algorithms for constructing statistical language models. In ACL, pages 40–47. Christopher M. Bishop. 2006. Pattern recognition and machine learning. Springer. Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. A discriminative latent variable model for statistical machine translation. In ACL, pages 200–208. Francisco Casacuberta and Colin De La Higuera. 2000. Computational complexity of problems on probabilistic grammars and transducers. In ICGI, pages 15–24. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. John DeNero, David Chiang, and Kevin Knight. 2009. Fast consensus decoding over translation forests. In ACL-IJCNLP. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In ACL, pages 1–8. Joshua Goodman. 1996. Efficient algorithms for parsing the DOP model. In EMNLP, pages 143–152. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In ACL, pages 144–151. M. I. Jordan, Z. 
Ghahramani, T. S. Jaakkola, and L. K. Saul. 1999. An introduction to variational methods for graphical models. In Learning in Graphical Models. MIT press. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL, pages 48–54. Zhifei Li and Sanjeev Khudanpur. 2008. A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. In ACL SSST, pages 10–18. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar. Zaidan. 2009. Joshua: An open source toolkit for parsingbased machine translation. In WMT09, pages 135– 139. Adam Lopez. 2007. Hierarchical phrase-based translation with suffix arrays. In EMNLP-CoNLL, pages 976–985. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL, pages 75–82. Jonathan May and Kevin Knight. 2006. A better n-best list: practical determinization of weighted finite tree automata. In NAACL, pages 351–358. Tom Minka. 2005. Divergence measures and message passing. In Microsoft Research Technical Report (MSR-TR-2005-173). Microsoft Research. Mark-Jan Nederhof. 2005. A general technique to train language models on language models. Comput. Linguist., 31(2):173–186. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In ACL, pages 440– 447. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL, pages 160– 167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. In ACL, pages 311– 318. Roni Rosenfeld, Stanley F. Chen, and Xiaojin Zhu. 2001. Whole-sentence exponential language models: A vehicle for linguistic-statistical integration. Computer Speech and Language, 15(1). Khalil Sima’an. 1996. Computational complexity of probabilistic disambiguation by means of treegrammars. In COLING, pages 1175–1180. David A. Smith and Jason Eisner. 2006. Minimum risk annealing for training log-linear models. In ACL, pages 787–794. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In ICSLP, pages 901–904. Roy Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice Minimum BayesRisk decoding for statistical machine translation. In EMNLP, pages 620–629. Richard Zens and Hermann Ney. 2006. N-gram posterior probabilities for statistical machine translation. In WMT06, pages 72–77. Hao Zhang and Daniel Gildea. 2008. Efficient multipass decoding for synchronous context free grammars. In ACL, pages 209–217. 601
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 602–610, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Unsupervised Learning of Narrative Schemas and their Participants Nathanael Chambers and Dan Jurafsky Stanford University, Stanford, CA 94305 {natec,jurafsky}@stanford.edu Abstract We describe an unsupervised system for learning narrative schemas, coherent sequences or sets of events (arrested(POLICE,SUSPECT), convicted( JUDGE, SUSPECT)) whose arguments are filled with participant semantic roles defined over words (JUDGE = {judge, jury, court}, POLICE = {police, agent, authorities}). Unlike most previous work in event structure or semantic role learning, our system does not use supervised techniques, hand-built knowledge, or predefined classes of events or roles. Our unsupervised learning algorithm uses coreferring arguments in chains of verbs to learn both rich narrative event structure and argument roles. By jointly addressing both tasks, we improve on previous results in narrative/frame learning and induce rich frame-specific semantic roles. 1 Introduction This paper describes a new approach to event semantics that jointly learns event relations and their participants from unlabeled corpora. The early years of natural language processing (NLP) took a “top-down” approach to language understanding, using representations like scripts (Schank and Abelson, 1977) (structured representations of events, their causal relationships, and their participants) and frames to drive interpretation of syntax and word use. Knowledge structures such as these provided the interpreter rich information about many aspects of meaning. The problem with these rich knowledge structures is that the need for hand construction, specificity, and domain dependence prevents robust and flexible language understanding. Instead, modern work on understanding has focused on shallower representations like semantic roles, which express at least one aspect of the semantics of events and have proved amenable to supervised learning from corpora like PropBank (Palmer et al., 2005) and Framenet (Baker et al., 1998). Unfortunately, creating these supervised corpora is an expensive and difficult multi-year effort, requiring complex decisions about the exact set of roles to be learned. Even unsupervised attempts to learn semantic roles have required a pre-defined set of roles (Grenager and Manning, 2006) and often a hand-labeled seed corpus (Swier and Stevenson, 2004; He and Gildea, 2006). In this paper, we describe our attempts to learn script-like information about the world, including both event structures and the roles of their participants, but without pre-defined frames, roles, or tagged corpora. Consider the following Narrative Schema, to be defined more formally later. The events on the left follow a set of participants through a series of connected events that constitute a narrative: A search B A arrest B D convict B B plead C D acquit B D sentence B A = Police B = Suspect C = Plea D = Jury Events Roles Being able to robustly learn sets of related events (left) and frame-specific role information about the argument types that fill them (right) could assist a variety of NLP applications, from question answering to machine translation. Our previous work (Chambers and Jurafsky, 2008) relied on the intuition that in a coherent text, any two events that are about the same participants are likely to be part of the same story or narrative. 
The model learned simple aspects of narrative structure (‘narrative chains’) by extracting events that share a single participant, the protagonist. In this paper we extend this work to represent sets of situation-specific events not unlike scripts, caseframes (Bean and Riloff, 2004), and FrameNet frames (Baker et al., 1998). This paper shows that verbs in distinct narrative chains can be merged into an improved single narrative schema, while the shared arguments across verbs can provide rich information for inducing semantic roles. 602 2 Background This paper addresses two areas of work in event semantics, narrative event chains and semantic role labeling. We begin by highlighting areas in both that can mutually inform each other through a narrative schema model. 2.1 Narrative Event Chains Narrative Event Chains are partially ordered sets of events that all involve the same shared participant, the protagonist (Chambers and Jurafsky, 2008). A chain contains a set of verbs representing events, and for each verb, the grammatical role filled by the shared protagonist. An event is a verb together with its constellation of arguments. An event slot is a tuple of an event and a particular argument slot (grammatical relation), represented as a pair ⟨v, d⟩where v is a verb and d ∈{subject, object, prep}. A chain is a tuple (L, O) where L is a set of event slots and O is a partial (temporal) ordering. We will write event slots in shorthand as (X pleads) or (pleads X) for ⟨pleads, subject⟩and ⟨pleads, object⟩. Below is an example chain modeling criminal prosecution. L = (X pleads), (X admits), (convicted X), (sentenced X) O = {(pleads, convicted), (convicted, sentenced), ...} A graphical view is often more intuitive: admits pleads sentenced convicted (X admits) (X pleads) (convicted X) (sentenced X) In this example, the protagonist of the chain is the person being prosecuted and the other unspecified event slots remain unfilled and unconstrained. Chains in the Chambers and Jurafsky (2008) model are ordered; in this paper rather than address the ordering task we focus on event and argument induction, leaving ordering as future work. The Chambers and Jurafsky (2008) model learns chains completely unsupervised, (albeit after parsing and resolving coreference in the text) by counting pairs of verbs that share coreferring arguments within documents and computing the pointwise mutual information (PMI) between these verb-argument pairs. The algorithm creates chains by clustering event slots using their PMI scores, and we showed this use of co-referring arguments improves event relatedness. Our previous work, however, has two major limitations. First, the model did not express any information about the protagonist, such as its type or role. Role information (such as knowing whether a filler is a location, a person, a particular class of people, or even an inanimate object) could crucially inform learning and inference. Second, the model only represents one participant (the protagonist). Representing the other entities involved in all event slots in the narrative could potentially provide valuable information. We discuss both of these extensions next. 2.1.1 The Case for Arguments The Chambers and Jurafsky (2008) narrative chains do not specify what type of argument fills the role of protagonist. Chain learning and clustering is based only on the frequency with which two verbs share arguments, ignoring any features of the arguments themselves. 
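Concretely, that pairwise relatedness score is a pointwise mutual information computed from how often two event slots share a coreferring argument. The sketch below assumes the co-occurrence counts have already been extracted from parsed, coreference-resolved documents; the exact counting and smoothing choices of the original model are not reproduced:

```python
import math
from collections import Counter

pair_counts = Counter()   # unordered pair of event slots -> shared-argument count
slot_counts = Counter()   # event slot (verb, dependency) -> occurrence count

def observe_shared_argument(slot_a, slot_b):
    """Called whenever two event slots in a document share a coreferring argument,
    e.g. slot_a = ("pleads", "subject"), slot_b = ("convicted", "object")."""
    pair_counts[tuple(sorted((slot_a, slot_b)))] += 1
    slot_counts[slot_a] += 1
    slot_counts[slot_b] += 1

def pmi(slot_a, slot_b):
    """PMI between two event slots; note that nothing about the arguments
    themselves (head words or types) enters the score."""
    p_joint = pair_counts[tuple(sorted((slot_a, slot_b)))] / sum(pair_counts.values())
    p_a = slot_counts[slot_a] / sum(slot_counts.values())
    p_b = slot_counts[slot_b] / sum(slot_counts.values())
    return math.log(p_joint / (p_a * p_b))
```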
Take this example of an actual chain from an article in our training data. Given this chain of five events, we want to choose other events most likely to occur in this scenario. hunt use accuse suspect search fly charge ? One of the top scoring event slots is (fly X). Narrative chains incorrectly favor (fly X) because it is observed during training with all five event slots, although not frequently with any one of them. An event slot like (charge X) is much more plausible, but is unfortunately scored lower by the model. Representing the types of the arguments can help solve this problem. Few types of arguments are shared between the chain and (fly X). However, (charge X) shares many arguments with (accuse X), (search X) and (suspect X) (e.g., criminal and suspect). Even more telling is that these arguments are jointly shared (the same or coreferent) across all three events. Chains represent coherent scenarios, not just a set of independent pairs, so we want to model argument overlap across all pairs. 2.1.2 The Case for Joint Chains The second problem with narrative chains is that they make judgments only between protagonist arguments, one slot per event. All entities and slots 603 in the space of events should be jointly considered when making event relatedness decisions. As an illustration, consider the verb arrest. Which verb is more related, convict or capture? A narrative chain might only look at the objects of these verbs and choose the one with the highest score, usually choosing convict. But in this case the subjects offer additional information; the subject of arrest (police) is different from that of convict (judge). A more informed decision prefers capture because both the objects (suspect) and subjects (police) are identical. This joint reasoning is absent from the narrative chain model. 2.2 Semantic Role Labeling The task of semantic role learning and labeling is to identify classes of entities that fill predicate slots; semantic roles seem like they’d be a good model for the kind of argument types we’d like to learn for narratives. Most work on semantic role labeling, however, is supervised, using Propbank (Palmer et al., 2005), FrameNet (Baker et al., 1998) or VerbNet (Kipper et al., 2000) as gold standard roles and training data. More recent learning work has applied bootstrapping approaches (Swier and Stevenson, 2004; He and Gildea, 2006), but these still rely on a hand labeled seed corpus as well as a pre-defined set of roles. Grenegar and Manning (2006) use the EM algorithm to learn PropBank roles from unlabeled data, and unlike bootstrapping, they don’t need a labeled corpus from which to start. However, they do require a predefined set of roles (arg0, arg1, etc.) to define the domain of their probabilistic model. Green and Dorr (2005) use WordNet’s graph structure to cluster its verbs into FrameNet frames, using glosses to name potential slots. We differ in that we attempt to learn frame-like narrative structure from untagged newspaper text. Most similar to us, Alishahi and Stevenson (2007) learn verb specific semantic profiles of arguments using WordNet classes to define the roles. We learn situation-specific classes of roles shared by multiple verbs. Thus, two open goals in role learning include (1) unsupervised learning and (2) learning the roles themselves rather than relying on pre-defined role classes. As just described, Chambers and Jurafsky (2008) offers an unsupervised approach to event learning (goal 1), but lacks semantic role knowledge (goal 2). 
The following sections describe a model that addresses both goals. 3 Narrative Schemas The next sections introduce typed narrative chains and chain merging, extensions that allow us to jointly learn argument roles with event structure. 3.1 Typed Narrative Chains The first step in describing a narrative schema is to extend the definition of a narrative chain to include argument types. We now constrain the protagonist to be of a certain type or role. A Typed Narrative Chain is a partially ordered set of event slots that share an argument, but now the shared argument is a role defined by being a member of a set of types R. These types can be lexical units (such as observed head words), noun clusters, or other semantic representations. We use head words in the examples below, but we also evaluate with argument clustering by mapping head words to member clusters created with the CBC clustering algorithm (Pantel and Lin, 2002). We define a typed narrative chain as a tuple (L, P, O) with L and O the set of event slots and partial ordering as before. Let P be a set of argument types (head words) representing a single role. An example is given here: L = {(hunt X), (X use), (suspect X), (accuse X), (search X)} P = {person, government, company, criminal, ...} O = {(use, hunt), (suspect, search), (suspect, accuse) ... } 3.2 Learning Argument Types As mentioned above, narrative chains are learned by parsing the text, resolving coreference, and extracting chains of events that share participants. In our new model, argument types are learned simultaneously with narrative chains by finding salient words that represent coreferential arguments. We record counts of arguments that are observed with each pair of event slots, build the referential set for each word from its coreference chain, and then represent each observed argument by the most frequent head word in its referential set (ignoring pronouns and mapping entity mentions with person pronouns to a constant PERSON identifier). As an example, the following contains four worker mentions: But for a growing proportion of U.S. workers, the troubles really set in when they apply for unemployment benefits. Many workers find their benefits challenged. 604 L = {X arrest, X charge, X raid, X seize, X confiscate, X detain, X deport } P = {police, agent, authority, government} Figure 1: A typed narrative chain. The four top arguments are given. The ordering O is not shown. The four bolded terms are coreferential and (hopefully) identified by coreference. Our algorithm chooses the head word of each phrase and ignores the pronouns. It then chooses the most frequent head word as the most salient mention. In this example, the most salient term is workers. If any pair of event slots share arguments from this set, we count workers. In this example, the pair (X find) and (X apply) shares an argument (they and workers). The pair ((X find),(X apply)) is counted once for narrative chain induction, and ((X find), (X apply), workers) once for argument induction. Figure 1 shows the top occurring words across all event slot pairs in a criminal scenario chain. This chain will be part of a larger narrative schema, described in section 3.4. 3.3 Event Slot Similarity with Arguments We now formalize event slot similarity with arguments. 
Narrative chains as defined in (Chambers and Jurafsky, 2008) score a new event slot ⟨f, g⟩ against a chain of size n by summing over the scores between all pairs: chainsim(C, ⟨f, g⟩) = n X i=1 sim(⟨ei, di⟩, ⟨f, g⟩) (1) where C is a narrative chain, f is a verb with grammatical argument g, and sim(e, e′) is the pointwise mutual information pmi(e, e′). Growing a chain by one adds the highest scoring event. We extend this function to include argument types by defining similarity in the context of a specific argument a: sim(⟨e, d⟩, ˙ e′, d′¸ , a) = pmi(⟨e, d⟩, ˙ e′, d′¸ ) + λ log freq(⟨e, d⟩, ˙ e′, d′¸ , a) (2) where λ is a constant weighting factor and freq(b, b′, a) is the corpus count of a filling the arguments of events b and b′. We then score the entire chain for a particular argument: score(C, a) = n−1 X i=1 n X j=i+1 sim(⟨ei, di⟩, ⟨ej, dj⟩, a) (3) Using this chain score, we finally extend chainsim to score a new event slot based on the argument that maximizes the entire chain’s score: chainsim′(C, ⟨f, g⟩) = max a (score(C, a) + n X i=1 sim(⟨ei, di⟩, ⟨f, g⟩, a)) (4) The argument is now directly influencing event slot similarity scores. We will use this definition in the next section to build Narrative Schemas. 3.4 Narrative Schema: Multiple Chains Whereas a narrative chain is a set of event slots, a Narrative Schema is a set of typed narrative chains. A schema thus models all actors in a set of events. If (push X) is in one chain, (Y push) is in another. This allows us to model a document’s entire narrative, not just one main actor. 3.4.1 The Model A narrative schema is defined as a 2-tuple N = (E, C) with E a set of events (here defined as verbs) and C a set of typed chains over the event slots. We represent an event as a verb v and its grammatical argument positions Dv ⊆ {subject, object, prep}. Thus, each event slot ⟨v, d⟩for all d ∈Dv belongs to a chain c ∈C in the schema. Further, each c must be unique for each slot of a single verb. Using the criminal prosecution domain as an example, a narrative schema in this domain is built as in figure 2. The three dotted boxes are graphical representations of the typed chains that are combined in this schema. The first represents the event slots in which the criminal is involved, the second the police, and the third is a court or judge. Although our representation uses a set of chains, it is equivalent to represent a schema as a constraint satisfaction problem between ⟨e, d⟩event slots. The next section describes how to learn these schemas. 3.4.2 Learning Narrative Schemas Previous work on narrative chains focused on relatedness scores between pairs of verb arguments (event slots). The clustering step which built chains depended on these pairwise scores. Narrative schemas use a generalization of the entire verb with all of its arguments. A joint decision can be made such that a verb is added to a schema if both its subject and object are assigned to chains in the schema with high confidence. For instance, it may be the case that (Y pull over) scores well with the ‘police’ chain in 605 police, agent criminal, suspect guilty, innocent judge, jury arrest charge convict sentence arrest charge convict plead sentence police,agent judge,jury arrest charge convict plead sentence criminal,suspect Figure 2: Merging typed chains into a single unordered Narrative Schema. figure 3. However, the object of (pull over A) is not present in any of the other chains. Police pull over cars, but this schema does not have a chain involving cars. 
In contrast, (Y search) scores well with the ‘police’ chain and (search X) scores well in the ‘defendant’ chain too. Thus, we want to favor search instead of pull over because the schema is already modeling both arguments. This intuition leads us to our event relatedness function for the entire narrative schema N, not just one chain. Instead of asking which event slot ⟨v, d⟩is a best fit, we ask if v is best by considering all slots at once: narsim(N, v) = X d∈Dv max(β, max c∈CN chainsim′(c, ⟨v, d⟩)) (5) where CN is the set of chains in our narrative N. If ⟨v, d⟩does not have strong enough similarity with any chain, it creates a new one with base score β. The β parameter balances this decision of adding to an existing chain in N or creating a new one. 3.4.3 Building Schemas We use equation 5 to build schemas from the set of events as opposed to the set of event slots that previous work on narrative chains used. In Chambers and Jurafsky (2008), narrative chains add the best ⟨e, d⟩based on the following: max j:0<j<m chainsim(c, ⟨vj, gj⟩) (6) where m is the number of seen event slots in the corpus and ⟨vj, gj⟩is the jth such possible event slot. Schemas are now learned by adding events that maximize equation 5: max j:0<j<|v| narsim(N, vj) (7) where |v| is the number of observed verbs and vj is the jth such verb. Verbs are incrementally added to a narrative schema by strength of similarity. arrest charge seize confiscate defendant, nichols, smith, simpson police, agent, authorities, government license immigrant, reporter, cavalo, migrant, alien detain deport raid Figure 3: Graphical view of an unordered schema automatically built starting from the verb ‘arrest’. A β value that encouraged splitting was used. 4 Sample Narrative Schemas Figures 3 and 4 show two criminal schemas learned completely automatically from the NYT portion of the Gigaword Corpus (Graff, 2002). We parse the text into dependency graphs and resolve coreferences. The figures result from learning over the event slot counts. In addition, figure 5 shows six of the top 20 scoring narrative schemas learned by our system. We artificially required the clustering procedure to stop (and sometimes continue) at six events per schema. Six was chosen as the size to enable us to compare to FrameNet in the next section; the mean number of verbs in FrameNet frames is between five and six. A low β was chosen to limit chain splitting. We built a new schema starting from each verb that occurs in more than 3000 and less than 50,000 documents in the NYT section. This amounted to approximately 1800 verbs from which we show the top 20. Not surprisingly, most of the top schemas concern business, politics, crime, or food. 5 Frames and Roles Most previous work on unsupervised semantic role labeling assumes that the set of possible 606 A produce B A sell B A manufacture B A *market B A distribute B A -develop B A ∈{company, inc, corp, microsoft, iraq, co, unit, maker, ...} B ∈{drug, product, system, test, software, funds, movie, ...} B trade C B fell C A *quote B B fall C B -slip C B rise C A ∈{} B ∈{dollar, share, index, mark, currency, stock, yield, price, pound, ...} C ∈{friday, most, year, percent, thursday monday, share, week, dollar, ...} A boil B A slice B A -peel B A saute B A cook B A chop B A ∈{wash, heat, thinly, onion, note} B ∈{potato, onion, mushroom, clove, orange, gnocchi } A detain B A confiscate B A seize B A raid B A search B A arrest B A ∈{police, agent, officer, authorities, troops, official, investigator, ... 
} B ∈{suspect, government, journalist, monday, member, citizen, client, ... } A *uphold B A *challenge B A rule B A enforce B A *overturn B A *strike down B A ∈{court, judge, justice, panel, osteen, circuit, nicolau, sporkin, majority, ...} B ∈{law, ban, rule, constitutionality, conviction, ruling, lawmaker, tax, ...} A own B A *borrow B A sell B A buy back B A buy B A *repurchase B A ∈{company, investor, trader, corp, enron, inc, government, bank, itt, ...} B ∈{share, stock, stocks, bond, company, security, team, funds, house, ... } Figure 5: Six of the top 20 scored Narrative Schemas. Events and arguments in italics were marked misaligned by FrameNet definitions. * indicates verbs not in FrameNet. - indicates verb senses not in FameNet. found convict acquit defendant, nichols, smith, simpson jury, juror, court, judge, tribunal, senate sentence deliberate deadlocked Figure 4: Graphical view of an unordered schema automatically built from the verb ‘convict’. Each node shape is a chain in the schema. classes is very small (i.e, PropBank roles ARG0 and ARG1) and is known in advance. By contrast, our approach induces sets of entities that appear in the argument positions of verbs in a narrative schema. Our model thus does not assume the set of roles is known in advance, and it learns the roles at the same time as clustering verbs into frame-like schemas. The resulting sets of entities (such as {police, agent, authorities, government} or {court, judge, justice}) can be viewed as a kind of schema-specific semantic role. How can this unsupervised method of learning roles be evaluated? In Section 6 we evaluate the schemas together with their arguments in a cloze task. In this section we perform a more qualitative evalation by comparing our schema to FrameNet. FrameNet (Baker et al., 1998) is a database of frames, structures that characterize particular situations. A frame consists of a set of events (the verbs and nouns that describe them) and a set of frame-specific semantic roles called frame elements that can be arguments of the lexical units in the frame. FrameNet frames share commonalities with narrative schemas; both represent aspects of situations in the world, and both link semantically related words into frame-like sets in which each predicate draws its argument roles from a frame-specific set. They differ in that schemas focus on events in a narrative, while frames focus on events that share core participants. Nonetheless, the fact that FrameNet defines frame-specific argument roles suggests that comparing our schemas and roles to FrameNet would be elucidating. We took the 20 learned narrative schemas described in the previous section and used FrameNet to perform qualitative evaluations on three aspects of schema: verb groupings, linking structure (the mapping of each argument role to syntactic subject or object), and the roles themselves (the set of entities that constitutes the schema roles). Verb groupings To compare a schema’s event selection to a frame’s lexical units, we first map the top 20 schemas to the FrameNet frames that have the largest overlap with each schema’s six verbs. We were able to map 13 of our 20 narratives to FrameNet (for the remaining 7, no frame contained more than one of the six verbs). The remaining 13 schemas contained 6 verbs each for a total of 78 verbs. 26 of these verbs, however, did not occur in FrameNet, either at all, or with the correct sense. 
Of the remaining 52 verb mappings, 35 (67%) occurred in the closest FrameNet frame or in a frame one link away. 17 verbs (33%) 607 occurred in a different frame than the one chosen. We examined the 33% of verbs that occurred in a different frame. Most occurred in related frames, but did not have FrameNet links between them. For instance, one schema includes the causal verb trade with unaccusative verbs of change like rise and fall. FrameNet separates these classes of verbs into distinct frames, distinguishing motion frames from caused-motion frames. Even though trade and rise are in different FrameNet frames, they do in fact have the narrative relation that our system discovered. Of the 17 misaligned events, we judged all but one to be correct in a narrative sense. Thus although not exactly aligned with FrameNet’s notion of event clusters, our induction algorithm seems to do very well. Linking structure Next, we compare a schema’s linking structure, the grammatical relation chosen for each verb event. We thus decide, e.g., if the object of the verb arrest (arrest B) plays the same role as the object of detain (detain B), or if the subject of detain (B detain) would have been more appropriate. We evaluated the clustering decisions of the 13 schemas (78 verbs) that mapped to frames. For each chain in a schema, we identified the frame element that could correctly fill the most verb arguments in the chain. The remaining arguments were considered incorrect. Because we assumed all verbs to be transitive, there were 156 arguments (subjects and objects) in the 13 schema. Of these 156 arguments, 151 were correctly clustered together, achieving 96.8% accuracy. The schema in figure 5 with events detain, seize, arrest, etc. shows some of these errors. The object of all of these verbs is an animate theme, but confiscate B and raid B are incorrect; people cannot be confiscated/raided. They should have been split into their own chain within the schema. Argument Roles Finally, we evaluate the learned sets of entities that fill the argument slots. As with the above linking evaluation, we first identify the best frame element for each argument. For example, the events in the top left schema of figure 5 map to the Manufacturing frame. Argument B was identified as the Product frame element. We then evaluate the top 10 arguments in the argument set, judging whether each is a reasonable filler of the role. In our example, drug and product are correct Product arguments. An incorrect argument is test, as it was judged that a test is not a product. We evaluated all 20 schemas. The 13 mapped schemas used their assigned frames, and we created frame element definitions for the remaining 7 that were consistent with the syntactic positions. There were 400 possible arguments (20 schemas, 2 chains each), and 289 were judged correct for a precision of 72%. This number includes Person and Organization names as correct fillers. A more conservative metric removing these classes results in 259 (65%) correct. Most of the errors appear to be from parsing mistakes. Several resulted from confusing objects with adjuncts. Others misattached modifiers, such as including most as an argument. The cooking schema appears to have attached verbal arguments learned from instruction lists (wash, heat, boil). Two schemas require situations as arguments, but the dependency graphs chose as arguments the subjects of the embedded clauses, resulting in 20 incorrect arguments in these schema. 
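As a concrete illustration of the verb-grouping comparison above, the sketch below maps a learned schema to the FrameNet frame whose lexical units overlap most with the schema's six verbs, leaving the schema unmapped when no frame covers more than one of them. The miniature frame inventory and helper names are hypothetical stand-ins, not actual FrameNet data.

```python
# Sketch: map a learned schema to the FrameNet frame whose lexical units
# overlap most with the schema's verbs, as in the verb-grouping
# comparison above. The miniature frame inventory is hypothetical.

def best_frame(schema_verbs, frames, min_overlap=2):
    """Pick the frame with the largest lexical-unit overlap; schemas whose
    best overlap is below min_overlap stay unmapped (cf. the 7 schemas
    above for which no frame contained more than one of the six verbs)."""
    best_name, best_size = None, 0
    for name, lexical_units in frames.items():
        size = len(set(schema_verbs) & set(lexical_units))
        if size > best_size:
            best_name, best_size = name, size
    return (best_name, best_size) if best_size >= min_overlap else None

if __name__ == "__main__":
    # Hypothetical miniature frame inventory: frame name -> verbal lexical units.
    frames = {
        "Arrest":        {"arrest", "detain", "apprehend"},
        "Manufacturing": {"produce", "manufacture", "fabricate"},
        "Commerce_sell": {"sell", "retail", "vend"},
    }
    # One learned schema (six verbs), as in Figure 3.
    schema = ["arrest", "charge", "seize", "confiscate", "detain", "raid"]
    print(best_frame(schema, frames))      # -> ('Arrest', 2)
```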
6 Evaluation: Cloze The previous section compared our learned knowledge to current work in event and role semantics. We now provide a more formal evaluation against untyped narrative chains. The two main contributions of schema are (1) adding typed arguments and (2) considering joint chains in one model. We evaluate each using the narrative cloze test as in (Chambers and Jurafsky, 2008). 6.1 Narrative Cloze The cloze task (Taylor, 1953) evaluates human understanding of lexical units by removing a random word from a sentence and asking the subject to guess what is missing. The narrative cloze is a variation on this idea that removes an event slot from a known narrative chain.Performance is measured by the position of the missing event slot in a system’s ranked guess list. This task is particularly attractive for narrative schemas (and chains) because it aligns with one of the original ideas behind Schankian scripts, namely that scripts help humans ‘fill in the blanks’ when language is underspecified. 6.2 Training and Test Data We count verb pairs and shared arguments over the NYT portion of the Gigaword Corpus (years 1994-2004), approximately one million articles. 608 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 1000 1050 1100 1150 1200 1250 1300 1350 Training Data from 1994−X Ranked Position Narrative Cloze Test Chain Typed Chain Schema Typed Schema Figure 6: Results with varying sizes of training data. We parse the text into typed dependency graphs with the Stanford Parser (de Marneffe et al., 2006), recording all verbs with subject, object, or prepositional typed dependencies. Unlike in (Chambers and Jurafsky, 2008), we lemmatize verbs and argument head words. We use the OpenNLP1 coreference engine to resolve entity mentions. The test set is the same as in (Chambers and Jurafsky, 2008). 100 random news articles were selected from the 2001 NYT section of the Gigaword Corpus. Articles that did not contain a protagonist with five or more events were ignored, leaving a test set of 69 articles. We used a smaller development set of size 17 to tune parameters. 6.3 Typed Chains The first evaluation compares untyped against typed narrative event chains. The typed model uses equation 4 for chain clustering. The dotted line ‘Chain’ and solid ‘Typed Chain’ in figure 6 shows the average ranked position over the test set. The untyped chains plateau and begin to worsen as the amount of training data increases, but the typed model is able to improve for some time after. We see a 6.9% gain at 2004 when both lines trend upwards. 6.4 Narrative Schema The second evaluation compares the performance of the narrative schema model against single narrative chains. We ignore argument types and use untyped chains in both (using equation 1 instead 1http://opennlp.sourceforge.net/ of 4). The dotted line ‘Chain’ and solid ‘Schema’ show performance results in figure 6. Narrative Schemas have better ranked scores in all data sizes and follow the previous experiment in improving results as more data is added even though untyped chains trend upward. We see a 3.3% gain at 2004. 6.5 Typed Narrative Schema The final evaluation combines schemas with argument types to measure overall gain. We evaluated with both head words and CBC clusters as argument representations. Not only do typed chains and schemas outperform untyped chains, combining the two gives a further performance boost. Clustered arguments improve the results further, helping with sparse argument counts (‘Typed Schema’ in figure 6 uses CBC arguments). 
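To make the evaluation metric concrete, the following sketch computes a narrative cloze score: one event slot is hidden from a chain, a scoring model ranks a candidate vocabulary, and the rank of the hidden slot is averaged over the test cases. The scoring function and data are dummy placeholders rather than the chain or schema models compared in Figure 6.

```python
# Sketch of the narrative cloze metric: hide one event slot, rank a
# candidate vocabulary with a scoring model, and report the rank of the
# hidden slot (lower is better). The scorer below is a dummy stand-in
# for the chain and schema models compared in Figure 6.

def cloze_rank(chain, hidden_index, candidates, score):
    """Rank of the hidden event slot among all candidate slots."""
    context = [s for i, s in enumerate(chain) if i != hidden_index]
    hidden = chain[hidden_index]
    ranked = sorted(candidates, key=lambda c: score(c, context), reverse=True)
    return ranked.index(hidden) + 1

def average_cloze(test_cases, candidates, score):
    """Average ranked position over all (chain, hidden_index) test cases."""
    ranks = [cloze_rank(chain, i, candidates, score) for chain, i in test_cases]
    return sum(ranks) / len(ranks)

# Hypothetical usage: event slots are (verb, dependency) pairs.
candidates = [("arrest", "obj"), ("charge", "obj"), ("convict", "obj"),
              ("sentence", "obj"), ("eat", "obj")]

def dummy_score(slot, context):            # stand-in for chainsim / narsim
    return 1.0 if slot[0] in {"charge", "convict"} else 0.0

tests = [([("arrest", "obj"), ("charge", "obj"), ("convict", "obj")], 1)]
print(average_cloze(tests, candidates, dummy_score))   # -> 1.0
```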
Overall, using all the data (by year 2004) shows a 10.1% improvement over untyped narrative chains. 7 Discussion Our significant improvement in the cloze evaluation shows that even though narrative cloze does not evaluate argument types, jointly modeling the arguments with events improves event clustering. Likewise, the FrameNet comparison suggests that modeling related events helps argument learning. The tasks mutually inform each other. Our argument learning algorithm not only performs unsupervised induction of situation-specific role classes, but the resulting roles and linking structures may also offer the possibility of (unsupervised) FrameNet-style semantic role labeling. Finding the best argument representation is an important future direction. The performance of our noun clusters in figure 6 showed that while the other approaches leveled off, clusters continually improved with more data. The exact balance between lexical units, clusters, or more general (traditional) semantic roles remains to be solved, and may be application specific. We hope in the future to show that a range of NLU applications can benefit from the rich inferential structures that narrative schemas provide. Acknowledgments This work is funded in part by NSF (IIS-0811974). We thank the reviewers and the Stanford NLP Group for helpful suggestions. 609 References Afra Alishahi and Suzanne Stevenson. 2007. A computational usage-based model for learning general properties of semantic roles. In The 2nd European Cognitive Science Conference, Delphi, Greece. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Christian Boitet and Pete Whitelock, editors, ACL-98, pages 86–90, San Francisco, California. Morgan Kaufmann Publishers. David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. Proc. of HLT/NAACL, pages 297– 304. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08, Hawaii, USA. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC-06, pages 449–454. David Graff. 2002. English Gigaword. Linguistic Data Consortium. Rebecca Green and Bonnie J. Dorr. 2005. Frame semantic enhancement of lexical-semantic resources. In ACL-SIGLEX Workshop on Deep Lexical Acquisition, pages 57–66. Trond Grenager and Christopher D. Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In EMNLP. Shan He and Daniel Gildea. 2006. Self-training and co-training for semantic role labeling: Primary report. Technical Report 891, University of Rochester. Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000. Class-based construction of a verb lexicon. In Proceedings of AAAI-2000, Austin, TX. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: A corpus annotated with semantic roles. Computational Linguistics, 31(1):71–106. Patrick Pantel and Dekang Lin. 2002. Document clustering with committees. In ACM Conference on Research and Development in Information Retrieval, pages 199–206, Tampere, Finland. Roger C. Schank and Robert P. Abelson. 1977. Scripts, plans, goals and understanding. Lawrence Erlbaum. Robert S. Swier and Suzanne Stevenson. 2004. Unsupervised semantic role labelling. In EMNLP. Wilson L. Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism Quarterly, 30:415–433. 610
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 611–619, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Learning a Compositional Semantic Parser using an Existing Syntactic Parser Ruifang Ge Raymond J. Mooney Department of Computer Sciences University of Texas at Austin Austin, TX 78712 {grf,mooney}@cs.utexas.edu Abstract We present a new approach to learning a semantic parser (a system that maps natural language sentences into logical form). Unlike previous methods, it exploits an existing syntactic parser to produce disambiguated parse trees that drive the compositional semantic interpretation. The resulting system produces improved results on standard corpora on natural language interfaces for database querying and simulated robot control. 1 Introduction Semantic parsing is the task of mapping a natural language (NL) sentence into a completely formal meaning representation (MR) or logical form. A meaning representation language (MRL) is a formal unambiguous language that supports automated inference, such as first-order predicate logic. This distinguishes it from related tasks such as semantic role labeling (SRL) (Carreras and Marquez, 2004) and other forms of “shallow” semantic analysis that do not produce completely formal representations. A number of systems for automatically learning semantic parsers have been proposed (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008). Given a training corpus of NL sentences annotated with their correct MRs, these systems induce an interpreter for mapping novel sentences into the given MRL. Previous methods for learning semantic parsers do not utilize an existing syntactic parser that provides disambiguated parse trees.1 However, accurate syntactic parsers are available for many 1Ge and Mooney (2005) use training examples with semantically annotated parse trees, and Zettlemoyer and Collins (2005) learn a probabilistic semantic parsing model which initially requires a hand-built, ambiguous CCG grammar template. (a) If our player 2 has the ball, then position our player 5 in the midfield. ((bowner (player our {2})) (do (player our {5}) (pos (midfield)))) (b) Which river is the longest? answer(x1,longest(x1,river(x1))) Figure 1: Sample NLs and their MRs in the ROBOCUP and GEOQUERY domains respectively. languages and could potentially be used to learn more effective semantic analyzers. This paper presents an approach to learning semantic parsers that uses parse trees from an existing syntactic analyzer to drive the interpretation process. The learned parser uses standard compositional semantics to construct alternative MRs for a sentence based on its syntax tree, and then chooses the best MR based on a trained statistical disambiguation model. The learning system first employs a word alignment method from statistical machine translation (GIZA++ (Och and Ney, 2003)) to acquire a semantic lexicon that maps words to logical predicates. Then it induces rules for composing MRs and estimates the parameters of a maximumentropy model for disambiguating semantic interpretations. After describing the details of our approach, we present experimental results on standard corpora demonstrating improved results on learning NL interfaces for database querying and simulated robot control. 2 Background In this paper, we consider two domains. The first is ROBOCUP (www.robocup.org). 
In the ROBOCUP Coach Competition, soccer agents compete on a simulated soccer field and receive coaching instructions in a formal language called CLANG (Chen et al., 2003). Figure 1(a) shows a sample instruction. The second domain is GEOQUERY, where a logical query language based on Prolog is used to query a database on U.S. geography (Zelle and Mooney, 1996). The logical lan611 CONDITION (bowner PLAYER ) (player TEAM our {UNUM}) 2 (a) P BOWNER P PLAYER P OUR P UNUM (b) S NP PRP$ our NP NN player CD 2 VP VB has NP DET the NN ball (c) Figure 2: Parses for the condition part of the CLANG in Figure 1(a): (a) The parse of the MR. (b) The predicate argument structure of (a). (c) The parse of the NL. PRODUCTION PREDICATE RULE→(CONDITION DIRECTIVE) P RULE CONDITION→(bowner PLAYER) P BOWNER PLAYER→(player TEAM {UNUM}) P PLAYER TEAM→our P OUR UNUM→2 P UNUM DIRECTIVE→(do PLAYER ACTION) P DO ACTION→(pos REGION) P POS REGION→(midfield) P MIDFIELD Table 1: Sample production rules for parsing the CLANG example in Figure 1(a) and their corresponding predicates. guage consists of both first-order and higher-order predicates. Figure 1(b) shows a sample query in this domain. We assume that an MRL is defined by an unambiguous context-free grammar (MRLG), so that MRs can be uniquely parsed, a standard requirement for computer languages. In an MRLG, each production rule introduces a single predicate in the MRL, where the type of the predicate is given in the left hand side (LHS), and the number and types of its arguments are defined by the nonterminals in the right hand side (RHS). Therefore, the parse of an MR also gives its predicate-argument structure. Figure 2(a) shows the parse of the condition part of the MR in Figure 1(a) using the MRLG described in (Wong, 2007), and its predicateargument structure is in Figure 2(b). Sample MRLG productions and their predicates for parsing this example are shown in Table 1, where the predicate P PLAYER takes two arguments (a1 and a2) of type TEAM and UNUM (uniform number). 3 Semantic Parsing Framework This section describes our basic framework, which is based on a fairly standard approach to computational semantics (Blackburn and Bos, 2005). The framework is composed of three components: 1) an existing syntactic parser to produce parse trees for NL sentences; 2) learned semantic knowledge (cf. Sec. 5), including a semantic lexicon to assign possible predicates (meanings) to words, and a set of semantic composition rules to construct possible MRs for each internal node in a syntactic parse given its children’s MRs; and 3) a statistical disambiguation model (cf. Sec. 6) to choose among multiple possible semantic constructs as defined by the semantic knowledge. The process of generating the semantic parse for an NL sentence is as follows. First, the syntactic parser produces a parse tree for the NL sentence. Second, the semantic lexicon assigns possible predicates to each word in the sentence. Third, all possible MRs for the sentence are constructed compositionally in a recursive, bottom-up fashion following its syntactic parse using composition rules. Lastly, the statistical disambiguation model scores each possible MR and returns the one with the highest score. Fig. 3(a) shows one possible semantically-augmented parse tree (SAPT) (Ge and Mooney, 2005) for the condition part of the example in Fig. 1(a) given its syntactic parse in Fig. 2(c). A SAPT adds a semantic label to each non-leaf node in the syntactic parse tree. 
The label specifies the MRL predicate for the node and its remaining (unfilled) arguments. The compositional process assumes a binary parse tree suitable for predicate-argument composition; parses in Penn-treebank style are binarized using Collins’ (1999) method. Consider the construction of the SAPT in Fig. 3(a). First, each word is assigned a semantic label. Most words are assigned an MRL predicate. For example, the word player is assigned the predicate P PLAYER with its two unbound arguments, a1 and a2, indicated using λ. Words that do not introduce a predicate are given the label NULL, like the and ball.2 Next, a semantic label is as2The words the and ball are not truly “meaningless” since the predicate P BOWNER (ball owner) is conveyed by the 612 P BOWNER P PLAYER P OUR our λa1P PLAYER ⟨λa1λa2⟩P PLAYER player P UNUM 2 λa1P BOWNER λa1P BOWNER has NULL NULL the NULL ball (a) SAPT (bowner (player our {2})) (player our {2}) our our λa1 (player a1 {2}) ⟨λa1λa2⟩(player a1 {a2} ) player 2 2 λa1(bowner a1) λa1(bowner a1) has NULL NULL the NULL ball (b) Semantic Derivation Figure 3: Semantic parse for the condition part of the example in Fig. 1(a) using the syntactic parse in Fig. 2(c): (a) A SAPT with syntactic labels omitted for brevity. (b) The semantic derivation of the MR. signed to each internal node using learned composition rules that specify how arguments are filled when composing two MRs (cf. Sec. 5). The label λa1P PLAYER indicates that the remaining argument a2 of the P PLAYER child is filled by the MR of the other child (labeled P UNUM). Finally, the SAPT is used to guide the composition of the sentence’s MR. At each internal node, an MR for the node is built from the MRs of its children by filling an argument of a predicate, as illustrated in the semantic derivation shown in Fig. 3(b). Semantic composition rules (cf. Sec. 5) are used to specify the argument to be filled. For the node spanning player 2, the predicate P PLAYER and its second argument P UNUM are composed to form the MR: λa1 (player a1 {2}). Composing an MR with NULL leaves the MR unchanged. An MR is said to be complete when it contains no remaining λ variables. This process continues up the phrase has the ball. For simplicity, predicates are introduced by a single word, but statistical disambiguation (cf. Sec. 6) uses surrounding words to choose a meaning for a word whose lexicon entry contains multiple possible predicates. tree until a complete MR for the entire sentence is constructed at the root. 4 Ensuring Meaning Composition The basic compositional method in Sec. 3 only works if the syntactic parse tree strictly follows the predicate-argument structure of the MR, since meaning composition at each node is assumed to combine a predicate with one of its arguments. However, this assumption is not always satisfied, for example, in the case of verb gapping and flexible word order. We use constructing the MR for the directive part of the example in Fig. 1(a) according to the syntactic parse in Fig. 4(b) as an example. Given the appropriate possible predicate attached to each word in Fig. 5(a), the node spanning position our player 5 has children, P POS and P PLAYER, that are not in a predicate-argument relation in the MR (see Fig. 4(a)). 
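Before turning to how such mismatches are handled, the ordinary composition step of Section 3, in which one unbound argument of a child's predicate is filled by the other child's MR and NULL children leave the MR unchanged, can be sketched as follows. The predicate representation and class names are simplified assumptions, not the system's actual data structures.

```python
# Sketch of the basic composition step from Section 3: an MR is a
# predicate with some arguments still unbound (lambda variables), and at
# an internal SAPT node one child's MR fills one unbound argument of the
# other child's MR, while NULL children leave the MR unchanged. This is
# a simplified illustration, not the system's actual implementation.

class MR:
    def __init__(self, pred, args=()):
        self.pred = pred              # e.g. "player"
        self.args = list(args)        # filled MRs, or None for unbound slots

    def complete(self):               # no remaining lambda variables
        return all(a is not None and a.complete() for a in self.args)

    def __repr__(self):
        if not self.args:
            return self.pred
        inner = " ".join("?" if a is None else repr(a) for a in self.args)
        return f"({self.pred} {inner})"

def compose(head, child, slot):
    """Fill argument `slot` of `head` with `child`; a NULL child (None)
    leaves the head untouched."""
    if child is None:
        return head
    assert head.args[slot] is None, "slot already filled"
    head.args[slot] = child
    return head

# Building (bowner (player our {2})) as in Fig. 3(b):
player = MR("player", [None, None])        # <lambda a1, lambda a2> P_PLAYER
player = compose(player, MR("2"), 1)       # node "player 2" fills a2
player = compose(player, MR("our"), 0)     # node "our player 2" fills a1
bowner = compose(MR("bowner", [None]), player, 0)
print(bowner, bowner.complete())           # (bowner (player our 2)) True
```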
To ensure meaning composition in this case, we automatically create macro-predicates that combine multiple predicates into one, so that the children’s MRs can be composed as argu613 P DO P PLAYER P OUR P UNUM P POS P MIDFIELD (a) VP ADVP RB then VP VP VB position NP our player 5 PP IN in NP DT the NN midfield (b) Figure 4: Parses for the directive part of the CLANG in Fig. 1(a): (a) The predicate-argument structure of the MR. (b) The parse of the NL (the parse of the phrase our player 5 is omitted for brevity). ments to a macro-predicate. Fig. 5(b) shows the macro-predicate P DO POS (DIRECTIVE→(do PLAYER (pos REGION))) formed by merging the P DO and P POS in Fig. 4(a). The macro-predicate has two arguments, one of type PLAYER (a1) and one of type REGION (a2). Now, P POS and P PLAYER can be composed as arguments to this macro-predicate as shown in Fig. 5(c). However, it requires assuming a P DO predicate that has not been formally introduced. To indicate this, a lambda variable, p1, is introduced that ranges over predicates and is provisionally bound to P DO, as indicated in Fig. 5(c) using the notation p1:do. Eventually, this predicate variable must be bound to a matching predicate introduced from the lexicon. In the example, p1:do is eventually bound to the P DO predicate introduced by the word then to form a complete MR. Macro-predicates are introduced as needed during training in order to ensure that each MR in the training set can be composed using the syntactic parse of its corresponding NL given reasonable assignments of predicates to words. For each SAPT node that does not combine a predicate with a legal argument, a macro-predicate is formed by merging all predicates on the paths from the child predicates to their lowest common ancestor (LCA) in the MR parse. Specifically, a child MR becomes an argument of the macro-predicate if it is complete (i.e. contains no λ variables); otherwise, it also becomes part of the macro-predicate and its λ variables become additional arguments of the macro-predicate. For the node spanning position our player 5 in the example, the LCA of the children P PLAYER and P POS is their immediate parent P DO, therefore P DO is included in the macro-predicate. The complete child P PLAYER becomes the first argument of the macro-predicate. The incomplete child P POS is added to the macropredicate P DO POS and its λ variable becomes another argument. For improved generalization, once a predicate in a macro-predicate becomes complete, it is removed from the corresponding macro-predicate label in the SAPT. For the node spanning position our player 5 in the midfield in Fig. 5(a), P DO POS becomes P DO once the arguments of pos are filled. In the following two sections, we describe the two subtasks of inducing semantic knowledge and a disambiguation model for this enhanced compositional framework. Both subtasks require a training set of NLs paired with their MRs. Each NL sentence also requires a syntactic parse generated using Bikel’s (2004) implementation of Collins parsing model 2. Note that unlike SCISSOR (Ge and Mooney, 2005), training our method does not require gold-standard SAPTs. 5 Learning Semantic Knowledge Learning semantic knowledge starts from learning the mapping from words to predicates. We use an approach based on Wong and Mooney (2006), which constructs word alignments between NL sentences and their MRs. 
Normally, word alignment is used in statistical machine translation to match words in one NL to words in another; here it is used to align words with predicates based on a ”parallel corpus” of NL sentences and MRs. We assume that each word alignment defines a possible mapping from words to predicates for building a SAPT and semantic derivation which compose the correct MR. A semantic lexicon and composition rules are then extracted directly from the 614 P DO ⟨λa1λa2⟩P DO then λp1P DO POS = λp1P DO ⟨λp1λa2⟩P DO POS λa1P POS position P PLAYER our player 5 P MIDFIELD NULL in P MIDFIELD NULL the P MIDFIELD midfield (a) SAPT P DO a1:PLAYER P POS a2:REGION (b) Macro-Predicate P DO POS (do (player our {5}) (pos (midfield))) ⟨λa1λa2⟩(do a1a2) then λp1(p1:do (player our {5}) (pos (midfield))) ⟨λp1λa2⟩(p1:do (player our {5}) (pos a2)) λa1(pos a1) position (player our {5}) our player 5 (midfield) NULL in (midfield) NULL the (midfield) midfield (c) Semantic Derivation Figure 5: Semantic parse for the directive part of the example in Fig. 1(a) using the syntactic parse in Fig. 4(b): (a) A SAPT with syntactic labels omitted for brevity. (b) The predicate-argument structure of macro-predicate P DO POS (c) The semantic derivation of the MR. nodes of the resulting semantic derivations. Generation of word alignments for each training example proceeds as follows. First, each MR in the training corpus is parsed using the MRLG. Next, each resulting parse tree is linearized to produce a sequence of predicates by using a topdown, left-to-right traversal of the parse tree. Then the GIZA++ implementation (Och and Ney, 2003) of IBM Model 5 is used to generate the five best word/predicate alignments from the corpus of NL sentences each paired with the predicate sequence for its MR. After predicates are assigned to words using word alignment, for each alignment of a training example and its syntactic parse, a SAPT is generated for composing the correct MR using the processes discussed in Sections 3 and 4. Specifically, a semantic label is assigned to each internal node of each SAPT, so that the MRs of its children are composed correctly according to the MR for this example. There are two cases that require special handling. First, when a predicate is not aligned to any word, the predicate must be inferred from context. For example, in CLANG, our player is frequently just referred to as player and the our must be inferred. When building a SAPT for such an alignment, the assumed predicates and arguments are simply bound to their values in the MR. Second, when a predicate is aligned to several words, i.e. it is represented by a phrase, the alignment is transformed into several alignments where each predicate is aligned to each single word in order to fit the assumptions of compositional semantics. Given the SAPTs constructed from the results of word-alignment, a semantic derivation for each training sentence is constructed using the methods described in Sections 3 and 4. Composition rules 615 are then extracted from these derivations. Formally, composition rules are of the form: Λ1.P1 + Λ2.P2 ⇒{Λp.Pp, R} (1) where P1, P2 and Pp are predicates for the left child, right child, and parent node, respectively. Each predicate includes a lambda term Λ of the form ⟨λpi1, . . . , λpim, λaj1, . . . , λajn⟩, an unordered set of all unbound predicate and argument variables for the predicate. The component R specifies how some arguments of the parent predicate are filled when composing the MR for the parent node. 
It is of the form: {ak1=R1, . . . , akl=Rl}, where Ri can be either a child (ci), or a child’s complete argument (ci, aj) if the child itself is not complete. For instance, the rule extracted for the node for player 2 in Fig. 3(b) is: ⟨λa1λa2⟩.P PLAYER + P UNUM ⇒{λa1.P PLAYER, a2=c2}, and for position our player 5 in Fig. 5(c): λa1.P POS + P PLAYER ⇒{⟨λp1λa2⟩.P DO POS, a1=c2}, and for position our player 5 in the midfield: ⟨λp1λa2⟩.P DO POS + P MIDFIELD ⇒{λp1.P DO POS, {a1=(c1,a1), a2=c2}}. The learned semantic knowledge is necessary for handling ambiguity, such as that involving word senses and semantic roles. It is also used to ensure that each MR is a legal string in the MRL. 6 Learning a Disambiguation Model Usually, multiple possible semantic derivations for an NL sentence are warranted by the acquired semantic knowledge, thus disambiguation is needed. To learn a disambiguation model, the learned semantic knowledge (see Section 5) is applied to each training example to generate all possible semantic derivations for an NL sentence given its syntactic parse. Here, unique word alignments are not required, and alternative interpretations compete for the best semantic parse. We use a maximum-entropy model similar to that of Zettlemoyer and Collins (2005) and Wong and Mooney (2006). The model defines a conditional probability distribution over semantic derivations (D) given an NL sentence S and its syntactic parse T: Pr(D|S, T; ¯θ) = exp P i θifi(D) Z¯θ(S, T) (2) where ¯f (f1, . . . , fn) is a feature vector parameterized by ¯θ, and Z¯θ(S, T) is a normalizing factor. Three simple types of features are used in the model. First, are lexical features which count the number of times a word is assigned a particular predicate. Second, are bilexical features which count the number of times a word is assigned a particular predicate and a particular word precedes or follows it. Last, are rule features which count the number of times a particular composition rule is applied in the derivation. The training process finds a parameter ¯θ∗that (approximately) maximizes the sum of the conditional log-likelihood of the MRs in the training set. Since no specific semantic derivation for an MR is provided in the training data, the conditional loglikelihood of an MR is calculated as the sum of the conditional probability of all semantic derivations that lead to the MR. Formally, given a set of NLMR pairs {(S1, M1), (S2, M2), ..., (Sn, Mn)} and the syntactic parses of the NLs {T1, T2, ..., Tn}, the parameter ¯θ∗is calculated as: ¯θ∗ = arg max ¯θ n X i=1 log Pr(Mi|Si, Ti; ¯θ) (3) = arg max ¯θ n X i=1 log X D∗ i Pr(D∗ i |Si, Ti; ¯θ) where D∗ i is a semantic derivation that produces the correct MR Mi. L-BFGS (Nocedal, 1980) is used to estimate the parameters ¯θ∗. The estimation requires statistics that depend on all possible semantic derivations and all correct semantic derivations of an example, which are not feasibly enumerated. A variant of the Inside-Outside algorithm (Miyao and Tsujii, 2002) is used to efficiently collect the necessary statistics. Following Wong and Mooney (2006), only candidate predicates and composition rules that are used in the best semantic derivations for the training set are retained for testing. No smoothing is used to regularize the model; We tried using a Gaussian prior (Chen and Rosenfeld, 1999), but it did not improve the results. 7 Experimental Evaluation We evaluated our approach on two standard corpora in CLANG and GEOQUERY. 
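For concreteness, a toy rendering of the disambiguation model in equation (2) is shown below: each candidate semantic derivation is scored by a weighted sum of lexical, bilexical, and rule features and the scores are normalized over the candidate set. The feature instances and weights are invented placeholders, not trained parameters.

```python
# Toy illustration of equation (2): a log-linear distribution over the
# candidate semantic derivations of one sentence. Feature counts and
# weights are hypothetical; in the real system they come from lexical,
# bilexical, and composition-rule features trained with L-BFGS.
import math
from collections import Counter

def derivation_features(derivation):
    """Count lexical, bilexical, and rule features of a derivation, given
    here as (word -> predicate assignments, composition rules used)."""
    feats = Counter()
    assignments, rules = derivation
    for i, (word, pred) in enumerate(assignments):
        feats[("lex", word, pred)] += 1
        if i > 0:
            feats[("bilex", assignments[i - 1][0], word, pred)] += 1
    for rule in rules:
        feats[("rule", rule)] += 1
    return feats

def derivation_probs(candidates, weights):
    scores = []
    for d in candidates:
        feats = derivation_features(d)
        scores.append(sum(weights.get(f, 0.0) * v for f, v in feats.items()))
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

# Two competing (made-up) derivations for "player 2 has the ball":
d1 = ([("player", "P_PLAYER"), ("2", "P_UNUM"), ("has", "P_BOWNER")],
      ["PLAYER+UNUM", "BOWNER+PLAYER"])
d2 = ([("player", "P_PLAYER"), ("2", "P_UNUM"), ("has", "P_DO")],
      ["PLAYER+UNUM", "DO+PLAYER"])
weights = {("lex", "has", "P_BOWNER"): 1.2, ("lex", "has", "P_DO"): 0.3}
print(derivation_probs([d1, d2], weights))   # d1 gets the higher probability
```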
For CLANG, 300 instructions were randomly selected from the log files of the 2003 ROBOCUP Coach 616 Competition and manually translated into English (Kuhlmann et al., 2004). For GEOQUERY, 880 English questions were gathered from various sources and manually translated into Prolog queries (Tang and Mooney, 2001). The average sentence lengths for the CLANG and GEOQUERY corpora are 22.52 and 7.48, respectively. Our experiments used 10-fold cross validation and proceeded as follows. First Bikel’s implementation of Collins parsing model 2 was trained to generate syntactic parses. Second, a semantic parser was learned from the training set augmented with their syntactic parses. Finally, the learned semantic parser was used to generate the MRs for the test sentences using their syntactic parses. If a test example contains constructs that did not occur in training, the parser may fail to return an MR. We measured the performance of semantic parsing using precision (percentage of returned MRs that were correct), recall (percentage of test examples with correct MRs returned), and F-measure (harmonic mean of precision and recall). For CLANG, an MR was correct if it exactly matched the correct MR, up to reordering of arguments of commutative predicates like and. For GEOQUERY, an MR was correct if it retrieved the same answer as the gold-standard query, thereby reflecting the quality of the final result returned to the user. The performance of a syntactic parser trained only on the Wall Street Journal (WSJ) can degrade dramatically in new domains due to corpus variation (Gildea, 2001). Experiments on CLANG and GEOQUERY showed that the performance can be greatly improved by adding a small number of treebanked examples from the corresponding training set together with the WSJ corpus. Our semantic parser was evaluated using three kinds of syntactic parses. Listed together with their PARSEVAL F-measures these are: gold-standard parses from the treebank (GoldSyn, 100%), a parser trained on WSJ plus a small number of in-domain training sentences required to achieve good performance, 20 for CLANG (Syn20, 88.21%) and 40 for GEOQUERY (Syn40, 91.46%), and a parser trained on no in-domain data (Syn0, 82.15% for CLANG and 76.44% for GEOQUERY). We compared our approach to the following alternatives (where results for the given corpus were Precision Recall F-measure GOLDSYN 84.73 74.00 79.00 SYN20 85.37 70.00 76.92 SYN0 87.01 67.00 75.71 WASP 88.85 61.93 72.99 KRISP 85.20 61.85 71.67 SCISSOR 89.50 73.70 80.80 LU 82.50 67.70 74.40 Table 2: Performance on CLANG. Precision Recall F-measure GOLDSYN 91.94 88.18 90.02 SYN40 90.21 86.93 88.54 SYN0 81.76 78.98 80.35 WASP 91.95 86.59 89.19 Z&C 91.63 86.07 88.76 SCISSOR 95.50 77.20 85.38 KRISP 93.34 71.70 81.10 LU 89.30 81.50 85.20 Table 3: Performance on GEOQUERY. available): SCISSOR (Ge and Mooney, 2005), an integrated syntactic-semantic parser; KRISP (Kate and Mooney, 2006), an SVM-based parser using string kernels; WASP (Wong and Mooney, 2006; Wong and Mooney, 2007), a system based on synchronous grammars; Z&C (Zettlemoyer and Collins, 2007)3, a probabilistic parser based on relaxed CCG grammars; and LU (Lu et al., 2008), a generative model with discriminative reranking. 
Note that some of these approaches require additional human supervision, knowledge, or engineered features that are unavailable to the other systems; namely, SCISSOR requires gold-standard SAPTs, Z&C requires hand-built template grammar rules, LU requires a reranking model using specially designed global features, and our approach requires an existing syntactic parser. The F-measures for syntactic parses that generate correct MRs in CLANG are 85.50% for syn0 and 91.16% for syn20, showing that our method can produce correct MRs even when given imperfect syntactic parses. The results of semantic parsers are shown in Tables 2 and 3. First, not surprisingly, more accurate syntactic parsers (i.e. ones trained on more in-domain data) improved our approach. Second, in CLANG, all of our methods outperform WASP and KRISP, which also require no additional information during training. In GEOQUERY, Syn0 has significantly worse results than WASP and our other systems using better syntactic parses. This is not surprising since Syn0’s F-measure for syntactic parsing is only 76.44% in GEOQUERY due to a lack 3These results used a different experimental setup, training on 600 examples, and testing on 280 examples. 617 Precision Recall F-measure GOLDSYN 61.14 35.67 45.05 SYN20 57.76 31.00 40.35 SYN0 53.54 22.67 31.85 WASP 88.00 14.37 24.71 KRISP 68.35 20.00 30.95 SCISSOR 85.00 23.00 36.20 Table 4: Performance on CLANG40. Precision Recall F-measure GOLDSYN 95.73 89.60 92.56 SYN20 93.19 87.60 90.31 SYN0 91.81 85.20 88.38 WASP 91.76 75.60 82.90 SCISSOR 98.50 74.40 84.77 KRISP 84.43 71.60 77.49 LU 91.46 72.80 81.07 Table 5: Performance on GEO250 (20 in-domain sentences are used in SYN20 to train the syntactic parser). of interrogative sentences (questions) in the WSJ corpus. Note the results for SCISSOR, KRISP and LU on GEOQUERY are based on a different meaning representation language, FUNQL, which has been shown to produce lower results (Wong and Mooney, 2007). Third, SCISSOR performs better than our methods on CLANG, but it requires extra human supervision that is not available to the other systems. Lastly, a detailed analysis showed that our improved performance on CLANG compared to WASP and KRISP is mainly for long sentences (> 20 words), while performance on shorter sentences is similar. This is consistent with their relative performance on GEOQUERY, where sentences are normally short. Longer sentences typically have more complex syntax, and the traditional syntactic analysis used by our approach results in better compositional semantic analysis in this situation. We also ran experiments with less training data. For CLANG, 40 random examples from the training sets (CLANG40) were used. For GEOQUERY, an existing 250-example subset (GEO250) (Zelle and Mooney, 1996) was used. The results are shown in Tables 4 and 5. Note the performance of our systems on GEO250 is higher than that on GEOQUERY since GEOQUERY includes more complex queries (Tang and Mooney, 2001). First, all of our systems gave the best F-measures (except SYN0 compared to SCISSOR in CLANG40), and the differences are generally quite substantial. This shows that our approach significantly improves results when limited training data is available. Second, in CLANG, reducing the training data increased the difference between SYN20 and SYN0. This suggests that the quality of syntactic parsing becomes more important when less training data is available. 
This demonstrates the advantage of utilizing existing syntactic parsers that are learned from large open domain treebanks instead of relying just on the training data. We also evaluated the impact of the word alignment component by replacing Giza++ by goldstandard word alignments manually annotated for the CLANG corpus. The results consistently showed that compared to using gold-standard word alignment, Giza++ produced lower semantic parsing accuracy when given very little training data, but similar or better results when given sufficient training data (> 160 examples). This suggests that, given sufficient data, Giza++ can produce effective word alignments, and that imperfect word alignments do not seriously impair our semantic parsers since the disambiguation model evaluates multiple possible interpretations of ambiguous words. Using multiple potential alignments from Giza++ sometimes performs even better than using a single gold-standard word alignment because it allows multiple interpretations to be evaluated by the global disambiguation model. 8 Conclusion and Future work We have presented a new approach to learning a semantic parser that utilizes an existing syntactic parser to drive compositional semantic interpretation. By exploiting an existing syntactic parser trained on a large treebank, our approach produces improved results on standard corpora, particularly when training data is limited or sentences are long. The approach also exploits methods from statistical MT (word alignment) and therefore integrates techniques from statistical syntactic parsing, MT, and compositional semantics to produce an effective semantic parser. Currently, our results comparing performance on long versus short sentences indicates that our approach is particularly beneficial for syntactically complex sentences. Follow up experiments using a more refined measure of syntactic complexity could help confirm this hypothesis. Reranking could also potentially improve the results (Ge and Mooney, 2006; Lu et al., 2008). Acknowledgments This research was partially supported by NSF grant IIS–0712097. 618 References Daniel M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4):479–511. Patrick Blackburn and Johan Bos. 2005. Representation and Inference for Natural Language: A First Course in Computational Semantics. CSLI Publications, Stanford, CA. Xavier Carreras and Luis Marquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proc. of 8th Conf. on Computational Natural Language Learning (CoNLL-2004), Boston, MA. Stanley F. Chen and Ronald Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy model. Technical Report CMU-CS-99-108, School of Computer Science, Carnegie Mellon University. Mao Chen, Ehsan Foroughi, Fredrik Heintz, Spiros Kapetanakis, Kostas Kostiadis, Johan Kummeneje, Itsuki Noda, Oliver Obst, Patrick Riley, Timo Steffens, Yi Wang, and Xiang Yin. 2003. Users manual: RoboCup soccer server manual for soccer server version 7.07 and later. Available at http:// sourceforge.net/projects/sserver/. Michael Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proc. of 9th Conf. on Computational Natural Language Learning (CoNLL-2005), pages 9–16. Ruifang Ge and Raymond J. Mooney. 2006. Discriminative reranking for semantic parsing. In Proc. of the 21st Intl. Conf. 
on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL-06), Sydney, Australia, July. Daniel Gildea. 2001. Corpus variation and parser performance. In Proc. of the 2001 Conf. on Empirical Methods in Natural Language Processing (EMNLP01), Pittsburgh, PA, June. Rohit J. Kate and Raymond J. Mooney. 2006. Using string-kernels for learning semantic parsers. In Proc. of the 21st Intl. Conf. on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL-06), pages 913–920, Sydney, Australia, July. Greg Kuhlmann, Peter Stone, Raymond J. Mooney, and Jude W. Shavlik. 2004. Guiding a reinforcement learner with natural language advice: Initial results in RoboCup soccer. In Proc. of the AAAI-04 Workshop on Supervisory Control of Learning and Adaptive Systems, San Jose, CA, July. Wei Lu, Hwee Tou Ng, Wee Sun Lee, and Luke S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Proc. of the Conf. on Empirical Methods in Natural Language Processing (EMNLP-08), Honolulu, Hawaii, October. Yusuke Miyao and Jun’ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proc. of Human Language Technology Conf.(HLT-2002), San Diego, CA, March. Jorge Nocedal. 1980. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782, July. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proc. of the 12th European Conf. on Machine Learning, pages 466–477, Freiburg, Germany. Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proc. of Human Language Technology Conf. / N. American Chapter of the Association for Computational Linguistics Annual Meeting (HLT-NAACL-2006), pages 439–446. Yuk Wah Wong and Raymond J. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proc. of the 45th Annual Meeting of the Association for Computational Linguistics (ACL-07), pages 960–967. Yuk Wah Wong. 2007. Learning for Semantic Parsing and Natural Language Generation Using Statistical Machine Translation Techniques. Ph.D. thesis, Department of Computer Sciences, University of Texas, Austin, TX, August. Also appears as Artificial Intelligence Laboratory Technical Report AI07343. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proc. of 13th Natl. Conf. on Artificial Intelligence (AAAI-96), pages 1050–1055. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proc. of the 21th Annual Conf. on Uncertainty in Artificial Intelligence (UAI-05). Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proc. of the 2007 Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-07), pages 678–687, Prague, Czech Republic, June. 619
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 55–63, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Cross Language Dependency Parsing using a Bilingual Lexicon∗ Hai Zhao(赵 赵 赵海 海 海)†‡, Yan Song(宋 宋 宋彦 彦 彦)†, Chunyu Kit†, Guodong Zhou‡ †Department of Chinese, Translation and Linguistics City University of Hong Kong 83 Tat Chee Avenue, Kowloon, Hong Kong, China ‡School of Computer Science and Technology Soochow University, Suzhou, China 215006 {haizhao,yansong,ctckit}@cityu.edu.hk, [email protected] Abstract This paper proposes an approach to enhance dependency parsing in a language by using a translated treebank from another language. A simple statistical machine translation method, word-by-word decoding, where not a parallel corpus but a bilingual lexicon is necessary, is adopted for the treebank translation. Using an ensemble method, the key information extracted from word pairs with dependency relations in the translated text is effectively integrated into the parser for the target language. The proposed method is evaluated in English and Chinese treebanks. It is shown that a translated English treebank helps a Chinese parser obtain a state-ofthe-art result. 1 Introduction Although supervised learning methods bring stateof-the-art outcome for dependency parser inferring (McDonald et al., 2005; Hall et al., 2007), a large enough data set is often required for specific parsing accuracy according to this type of methods. However, to annotate syntactic structure, either phrase- or dependency-based, is a costly job. Until now, the largest treebanks1 in various languages for syntax learning are with around one million words (or some other similar units). Limited data stand in the way of further performance enhancement. This is the case for each individual language at least. But, this is not the case as we observe all treebanks in different languages as a whole. For example, of ten treebanks for CoNLL2007 shared task, none includes more than 500K ∗The study is partially supported by City University of Hong Kong through the Strategic Research Grant 7002037 and 7002388. The first author is sponsored by a research fellowship from CTL, City University of Hong Kong. 1It is a tradition to call an annotated syntactic corpus as treebank in parsing community. tokens, while the sum of tokens from all treebanks is about two million (Nivre et al., 2007). As different human languages or treebanks should share something common, this makes it possible to let dependency parsing in multiple languages be beneficial with each other. In this paper, we study how to improve dependency parsing by using (automatically) translated texts attached with transformed dependency information. As a case study, we consider how to enhance a Chinese dependency parser by using a translated English treebank. What our method relies on is not the close relation of the chosen language pair but the similarity of two treebanks, this is the most different from the previous work. Two main obstacles are supposed to confront in a cross-language dependency parsing task. The first is the cost of translation. Machine translation has been shown one of the most expensive language processing tasks, as a great deal of time and space is required to perform this task. In addition, a standard statistical machine translation method based on a parallel corpus will not work effectively if it is not able to find a parallel corpus that right covers source and target treebanks. 
However, dependency parsing focuses on the relations of word pairs, this allows us to use a dictionarybased translation without assuming a parallel corpus available, and the training stage of translation may be ignored and the decoding will be quite fast in this case. The second difficulty is that the outputs of translation are hardly qualified for the parsing purpose. The most challenge in this aspect is morphological preprocessing. We regard that the morphological issue should be handled aiming at the specific language, our solution here is to use character-level features for a target language like Chinese. The rest of the paper is organized as follows. The next section presents some related existing work. Section 3 describes the procedure on tree55 bank translation and dependency transformation. Section 4 describes a dependency parser for Chinese as a baseline. Section 5 describes how a parser can be strengthened from the translated treebank. The experimental results are reported in Section 6. Section 7 looks into a few issues concerning the conditions that the proposed approach is suitable for. Section 8 concludes the paper. 2 The Related Work As this work is about exploiting extra resources to enhance an existing parser, it is related to domain adaption for parsing that has been draw some interests in recent years. Typical domain adaptation tasks often assume annotated data in new domain absent or insufficient and a large scale unlabeled data available. As unlabeled data are concerned, semi-supervised or unsupervised methods will be naturally adopted. In previous works, two basic types of methods can be identified to enhance an existing parser from additional resources. The first is usually focus on exploiting automatic generated labeled data from the unlabeled data (Steedman et al., 2003; McClosky et al., 2006; Reichart and Rappoport, 2007; Sagae and Tsujii, 2007; Chen et al., 2008), the second is on combining supervised and unsupervised methods, and only unlabeled data are considered (Smith and Eisner, 2006; Wang and Schuurmans, 2008; Koo et al., 2008). Our purpose in this study is to obtain a further performance enhancement by exploiting treebanks in other languages. This is similar to the above first type of methods, some assistant data should be automatically generated for the subsequent processing. The differences are what type of data are concerned with and how they are produced. In our method, a machine translation method is applied to tackle golden-standard treebank, while all the previous works focus on the unlabeled data. Although cross-language technique has been used in other natural language processing tasks, it is basically new for syntactic parsing as few works were concerned with this issue. The reason is straightforward, syntactic structure is too complicated to be properly translated and the cost of translation cannot be afforded in many cases. However, we empirically find this difficulty may be dramatically alleviated as dependencies rather than phrases are used for syntactic structure representation. Even the translation outputs are not so good as the expected, a dependency parser for the target language can effectively make use of them by only considering the most related information extracted from the translated text. The basic idea to support this work is to make use of the semantic connection between different languages. In this sense, it is related to the work of (Merlo et al., 2002) and (Burkett and Klein, 2008). 
The former showed that complementary information about English verbs can be extracted from their translations in a second language (Chinese) and the use of multilingual features improves classification performance of the English verbs. The latter iteratively trained a model to maximize the marginal likelihood of tree pairs, with alignments treated as latent variables, and then jointly parsing bilingual sentences in a translation pair. The proposed parser using features from monolingual and mutual constraints helped its log-linear model to achieve better performance for both monolingual parsers and machine translation system. In this work, cross-language features will be also adopted as the latter work. However, although it is not essentially different, we only focus on dependency parsing itself, while the parsing scheme in (Burkett and Klein, 2008) based on a constituent representation. Among of existing works that we are aware of, we regard that the most similar one to ours is (Zeman and Resnik, 2008), who adapted a parser to a new language that is much poorer in linguistic resources than the source language. However, there are two main differences between their work and ours. The first is that they considered a pair of sufficiently related languages, Danish and Swedish, and made full use of the similar characteristics of two languages. Here we consider two quite different languages, English and Chinese. As fewer language properties are concerned, our approach holds the more possibility to be extended to other language pairs than theirs. The second is that a parallel corpus is required for their work and a strict statistical machine translation procedure was performed, while our approach holds a merit of simplicity as only a bilingual lexicon is required. 3 Treebank Translation and Dependency Transformation 3.1 Data As a case study, this work will be conducted between the source language, English, and the target language, Chinese, namely, we will investigate 56 how a translated English treebank enhances a Chinese dependency parser. For English data, the Penn Treebank (PTB) 3 is used. The constituency structures is converted to dependency trees by using the same rules as (Yamada and Matsumoto, 2003) and the standard training/development/test split is used. However, only training corpus (sections 2-21) is used for this study. For Chinese data, the Chinese Treebank (CTB) version 4.0 is used in our experiments. The same rules for conversion and the same data split is adopted as (Wang et al., 2007): files 1-270 and 400-931 as training, 271-300 as testing and files 301-325 as development. We use the gold standard segmentation and part-of-speech (POS) tags in both treebanks. As a bilingual lexicon is required for our task and none of existing lexicons are suitable for translating PTB, two lexicons, LDC Chinese-English Translation Lexicon Version 2.0 (LDC2002L27), and an English to Chinese lexicon in StarDict2, are conflated, with some necessary manual extensions, to cover 99% words appearing in the PTB (the most part of the untranslated words are named entities.). This lexicon includes 123K entries. 3.2 Translation A word-by-word statistical machine translation strategy is adopted to translate words attached with the respective dependency information from the source language to the target one. 
In detail, a word-based decoding is used, which adopts a loglinear framework as in (Och and Ney, 2002) with only two features, translation model and language model, P(c|e) = exp[P2 i=1 λihi(c, e)] P c exp[P2 i=1 λihi(c, e)] Where h1(c, e) = log(pγ(c|e)) is the translation model, which is converted from the bilingual lexicon, and h2(c, e) = log(pθ(c)) is the language model, a word trigram model trained from the CTB. In our experiment, we set two weights λ1 = λ2 = 1. 2StarDict is an open source dictionary software, available at http://stardict.sourceforge.net/. The conversion process of the source treebank is completed by three steps as the following: 1. Bind POS tag and dependency relation of a word with itself; 2. Translate the PTB text into Chinese word by word. Since we use a lexicon rather than a parallel corpus to estimate the translation probabilities, we simply assign uniform probabilities to all translation options. Thus the decoding process is actually only determined by the language model. Similar to the “bag translation” experiment in (Brown et al., 1990), the candidate target sentences made up by a sequence of the optional target words are ranked by the trigram language model. The output sentence will be generated only if it is with maximum probability as follows, c = argmax{pθ(c)pγ(c|e)} = argmax pθ(c) = argmax Y pθ(wc) A beam search algorithm is used for this process to find the best path from all the translation options; As the training stage, especially, the most time-consuming alignment sub-stage, is skipped, the translation only includes a decoding procedure that takes about 4.5 hours for about one million words of the PTB in a 2.8GHz PC. 3. After the target sentence is generated, the attached POS tags and dependency information of each English word will also be transferred to each corresponding Chinese word. As word order is often changed after translation, the pointer of each dependency relationship, represented by a serial number, should be re-calculated. Although we try to perform an exact word-byword translation, this aim cannot be fully reached in fact, as the following case is frequently encountered, multiple English words have to be translated into one Chinese word. To solve this problem, we use a policy that lets the output Chinese word only inherits the attached information of the highest syntactic head in the original multiple English words. 4 Dependency Parsing: Baseline 4.1 Learning Model and Features According to (McDonald and Nivre, 2007), all data-driven models for dependency parsing that have been proposed in recent years can be described as either graph-based or transition-based. 57 Table 1: Feature Notations Notation Meaning s The word in the top of stack s′ The first word below the top of stack. s−1,s1... The first word before(after) the word in the top of stack. i, i+1,... The first (second) word in the unprocessed sequence, etc. dir Dependent direction h Head lm Leftmost child rm Rightmost child rn Right nearest child form word form pos POS tag of word cpos1 coarse POS: the first letter of POS tag of word cpos2 coarse POS: the first two POS tags of word lnverb the left nearest verb char1 The first character of a word char2 The first two characters of a word char−1 The last character of a word char−2 The last two characters of a word . ’s, i.e., ‘s.dprel’ means dependent label of character in the top of stack + Feature combination, i.e., ‘s.char+i.char’ means both s.char and i.char work as a feature function. 
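Returning to the translation step of Section 3.2, a minimal sketch of the decoding procedure is given below: with uniform translation probabilities, the output is the sequence of per-word lexicon options that maximizes a trigram language model, found by beam search. The toy lexicon, language-model scores, and beam width are assumptions for illustration only.

```python
# Sketch of the word-by-word decoding in Section 3.2: with uniform
# translation probabilities, the best Chinese output is the sequence of
# per-word translation options that maximizes a trigram language model,
# found with a beam search. Lexicon, LM scores, and beam width below are
# hypothetical toys, not the actual resources.
import math

def decode(english_words, lexicon, lm_logprob, beam_size=5):
    """Return the highest-LM-scoring word-by-word translation.
    lexicon: English word -> list of Chinese options (words missing from
    the lexicon, e.g. named entities, pass through unchanged).
    lm_logprob(prev2, prev1, w): log p(w | prev2 prev1)."""
    beam = [([], 0.0)]                      # (partial output, log score)
    for en in english_words:
        options = lexicon.get(en, [en])
        expanded = []
        for out, score in beam:
            prev2 = out[-2] if len(out) >= 2 else "<s>"
            prev1 = out[-1] if len(out) >= 1 else "<s>"
            for zh in options:
                expanded.append((out + [zh],
                                 score + lm_logprob(prev2, prev1, zh)))
        beam = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam_size]
    return beam[0][0]

# Toy usage; in the full procedure the POS tag and dependency pointer
# attached to each English word ride along with its translation and the
# pointers are re-numbered once the target word order is fixed.
lexicon = {"spring": ["春天", "弹簧"], "season": ["季节"]}
def lm_logprob(p2, p1, w):                  # made-up trigram scores
    return math.log(0.6) if (p1, w) == ("春天", "季节") else math.log(0.1)
print(decode(["spring", "season"], lexicon, lm_logprob))   # -> ['春天', '季节']
```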
Although the former will be also used as comparison, the latter is chosen as the main parsing framework by this study for the sake of efficiency. In detail, a shift-reduce method is adopted as in (Nivre, 2003), where a classifier is used to make a parsing decision step by step. In each step, the classifier checks a word pair, namely, s, the top of a stack that consists of the processed words, and, i, the first word in the (input) unprocessed sequence, to determine if a dependent relation should be established between them. Besides two dependency arc building actions, a shift action and a reduce action are also defined to maintain the stack and the unprocessed sequence. In this work, we adopt a left-to-right arc-eager parsing model, that means that the parser scans the input sequence from left to right and right dependents are attached to their heads as soon as possible (Hall et al., 2007). While memory-based and margin-based learning approaches such as support vector machines are popularly applied to shift-reduce parsing, we apply maximum entropy model as the learning model for efficient training and adopting overlapped features as our work in (Zhao and Kit, 2008), especially, those character-level ones for Chinese parsing. Our implementation of maximum entropy adopts L-BFGS algorithm for parameter optimization as usual. With notations defined in Table 1, a feature set as shown in Table 2 is adopted. Here, we explain some terms in Tables 1 and 2. We used a large scale feature selection approach as in (Zhao et al., 2009) to obtain the feature set in Table 2. Some feature notations in this paper are also borrowed from that work. The feature curroot returns the root of a partial parsing tree that includes a specified node. The feature charseq returns a character sequence whose members are collected from all identified children for a specified word. In Table 2, as for concatenating multiple substrings into a feature string, there are two ways, seq and bag. The former is to concatenate all substrings without do something special. The latter will remove all duplicated substrings, sort the rest and concatenate all at last. Note that we systemically use a group of character-level features. Surprisingly, as to our best knowledge, this is the first report on using this type of features in Chinese dependency parsing. Although (McDonald et al., 2005) used the prefix of each word form instead of word form itself as features, character-level features here for Chinese is essentially different from that. As Chinese is basically a character-based written language. Character plays an important role in many means, most characters can be formed as single-character words, and Chinese itself is character-order free rather than word-order free to some extent. In addition, there is often a close connection between the meaning of a Chinese word and its first or last character. 4.2 Parsing using a Beam Search Algorithm In Table 2, the feature preactn returns the previous parsing action type, and the subscript n stands for the action order before the current action. These are a group of Markovian features. Without this type of features, a shift-reduce parser may directly scan through an input sequence in linear time. Otherwise, following the work of (Duan et al., 2007) and (Zhao, 2009), the parsing algorithm is to search a parsing action sequence with the maximal probability. Sdi = argmax Y i p(di|di−1di−2...), where Sdi is the object parsing action sequence, p(di|di−1...) 
is the conditional probability, and di 58 Figure 1: A comparison before and after translation Table 2: Features for Parsing in.form, n = 0, 1 i.form + i1.form in.char2 + in+1.char2, n = −1, 0 i.char−1 + i1.char−1 in.char−2 n = 0, 3 i1.char−2 + i2.char−2 +i3.char−2 i.lnverb.char−2 i3.pos in.pos + in+1.pos, n = 0, 1 i−2.cpos1 + i−1.cpos1 i1.cpos1 + i2.cpos1 + i3.cpos1 s′ 2.char1 s′.char−2 + s′ 1.char−2 s′ −2.cpos2 s′ −1.cpos2 + s′ 1.cpos2 s′.cpos2 + s′ 1.cpos2 s’.children.cpos2.seq s’.children.dprel.seq s’.subtree.depth s′.h.form + s′.rm.cpos1 s′.lm.char2 + s′.char2 s.h.children.dprel.seq s.lm.dprel s.char−2 + i1.char−2 s.charn + i.charn, n = −1, 1 s−1.pos + i1.pos s.pos + in.pos, n = −1, 0, 1 s : i|linePath.form.bag s′.form + i.form s′.char2 + in.char2, n = −1, 0, 1 s.curroot.pos + i.pos s.curroot.char2 + i.char2 s.children.cpos2.seq + i.children.cpos2.seq s.children.cpos2.seq + i.children.cpos2.seq + s.cpos2 + i.cpos2 s′.children.dprel.seq + i.children.dprel.seq preact−1 preact−2 preact−2+preact−1 is i-th parsing action. We use a beam search algorithm to find the object parsing action sequence. 5 Exploiting the Translated Treebank As we cannot expect too much for a word-by-word translation, only word pairs with dependency relation in translated text are extracted as useful and reliable information. Then some features based on a query in these word pairs according to the current parsing state (namely, words in the current stack and input) will be derived to enhance the Chinese parser. A translation sample can be seen in Figure 1. Although most words are satisfactorily translated, to generate effective features, what we still have to consider at first is the inconsistence between the translated text and the target text. In Chinese, word lemma is always its word form itself, this is a convenient characteristic in computational linguistics and makes lemma features unnecessary for Chinese parsing at all. However, Chinese has a special primary processing task, i.e., word segmentation. Unfortunately, word definitions for Chinese are not consistent in various linguistical views, for example, seven segmentation conventions for computational purpose are formally proposed since the first Bakeoff3. Note that CTB or any other Chinese treebank has its own word segmentation guideline. Chinese word should be strictly segmented according to the guideline before POS tags and dependency relations are annotated. However, as we say the 3Bakeoff is a Chinese processing share task held by SIGHAN. 59 English treebank is translated into Chinese word by word, Chinese words in the translated text are exactly some entries from the bilingual lexicon, they are actually irregular phrases, short sentences or something else rather than words that follows any existing word segmentation convention. If the bilingual lexicon is not carefully selected or refined according to the treebank where the Chinese parser is trained from, then there will be a serious inconsistence on word segmentation conventions between the translated and the target treebanks. As all concerned feature values here are calculated from the searching result in the translated word pair list according to the current parsing state, and a complete and exact match cannot be always expected, our solution to the above segmentation issue is using a partial matching strategy based on characters that the words include. Above all, a translated word pair list, L, is extracted from the translated treebank. 
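Before turning to the structure of the word pair list L, the arc-eager shift-reduce machinery and the beam search over action sequences described above can be sketched as follows. The transition system follows (Nivre, 2003); the scoring function is only a stand-in for the maximum entropy classifier with Markovian preact features, and all names and toy scores are illustrative assumptions rather than the authors' code.

from collections import namedtuple

State = namedtuple("State", "stack buffer heads score actions")

def legal_actions(st):
    acts = []
    if st.buffer:
        acts.append("SHIFT")
        if st.stack:
            acts.append("RIGHT-ARC")
            if st.stack[-1] != 0 and st.heads.get(st.stack[-1]) is None:
                acts.append("LEFT-ARC")
    if st.stack and st.heads.get(st.stack[-1]) is not None:
        acts.append("REDUCE")
    return acts

def step(st, act):
    stack, buf, heads = list(st.stack), list(st.buffer), dict(st.heads)
    if act == "SHIFT":
        stack.append(buf.pop(0))
    elif act == "LEFT-ARC":          # top of stack becomes a dependent of the input front
        heads[stack.pop()] = buf[0]
    elif act == "RIGHT-ARC":         # input front becomes a dependent of the stack top
        heads[buf[0]] = stack[-1]
        stack.append(buf.pop(0))
    elif act == "REDUCE":
        stack.pop()
    return st._replace(stack=tuple(stack), buffer=tuple(buf), heads=heads,
                       actions=st.actions + (act,))

def action_logprob(st, act):
    # Stand-in for the maximum entropy classifier; the Markovian preact features
    # would condition on st.actions[-2:]. Toy scores only.
    return 0.0 if act == "SHIFT" else -0.1

def parse(words, beam_size=5):
    n = len(words)
    init = State(stack=(0,), buffer=tuple(range(1, n + 1)),
                 heads={i: None for i in range(1, n + 1)}, score=0.0, actions=())
    beam = [init]
    while any(st.buffer for st in beam):
        cand = []
        for st in beam:
            if not st.buffer:
                cand.append(st)
                continue
            for act in legal_actions(st):
                nst = step(st, act)
                cand.append(nst._replace(score=st.score + action_logprob(st, act)))
        beam = sorted(cand, key=lambda s: -s.score)[:beam_size]
    best = beam[0]
    # Words left unattached default to the artificial root (index 0).
    return {i: (h if h is not None else 0) for i, h in best.heads.items()}

print(parse(["我", "喜欢", "苹果"]))

In the real parser, action_logprob would be the maximum entropy model over the features of Tables 1 and 2.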
Each item in the list consists of three elements, dependant word (dp), head word (hd) and the frequency of this pair in the translated treebank, f. There are two basic strategies to organize the features derived from the translated word pair list. The first is to find the most matching word pair in the list and extract some properties from it, such as the matched length, part-of-speech tags and so on, to generate features. Note that a matching priority serial should be defined aforehand in this case. The second is to check every matching models between the current parsing state and the partially matched word pair. In an early version of our approach, the former was implemented. However, It is proven to be quite inefficient in computation. Thus we adopt the second strategy at last. Two matching model feature functions, φ(·) and ψ(·), are correspondingly defined as follows. The return value of φ(·) or ψ(·) is the logarithmic frequency of the matched item. There are four input parameters required by the function φ(·). Two parameters of them are about which part of the stack(input) words is chosen, and other two are about which part of each item in the translated word pair is chosen. These parameters could be set to full or charn as shown in Table 1, where n = ..., −2, −1, 1, 2, .... For example, a possible feature could be φ(s.full, i.char1, dp.full, hd.char1), it tries to find a match in L by comparing stack word and dp word, and the first character of input word Table 3: Features based on the translated treebank φ(i.char3, s′.full, dp.char3, hd.full)+i.char3 +s′.form φ(i.char3, s.char2, dp.char3, hd.char2)+s.char2 φ(i.char3, s.full, dp.char3, hd.char2)+s.form ψ(s′.char−2, hd.char−2, head)+i.pos+s′.pos φ(i.char3, s.full, dp.char3, hd.char2)+s.full φ(s′.full, i.char4, dp.full, hd.char4)+s′.pos+i.pos ψ(i.full, hd.char2, root)+i.pos+s.pos ψ(i.full, hd.char2, root)+i.pos+s′.pos ψ(s.full, dp.full, dependant)+i.pos pairscore(s′.pos, i.pos)+s′.form+i.form rootscore(s′.pos)+s′.form+i.form rootscore(s′.pos)+i.pos and the first character of hd word. If such a match item in L is found, then φ(·) returns log(f). There are three input parameters required by the function ψ(·). One parameter is about which part of the stack(input) words is chosen, and the other is about which part of each item in the translated word pair is chosen. The third is about the matching type that may be set to dependant, head, or root. For example, the function ψ(i.char1, hd.full, root) tries to find a match in L by comparing the first character of input word and the whole dp word. If such a match item in L is found, then ψ(·) returns log(f) as hd occurs as ROOT f times. As having observed that CTB and PTB share a similar POS guideline. A POS pair list from PTB is also extract. Two types of features, rootscore and pairscore are used to make use of such information. Both of them returns the logarithmic value of the frequency for a given dependent event. The difference is, rootscore counts for the given POS tag occurring as ROOT, and pairscore counts for two POS tag combination occurring for a dependent relationship. A full adapted feature list that is derived from the translated word pairs is in Table 3. 6 Evaluation Results The quality of the parser is measured by the parsing accuracy or the unlabeled attachment score (UAS), i.e., the percentage of tokens with correct head. 
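As a quick reference, UAS can be computed in a few lines; this is only a sketch, and the token representation and the punctuation test (the CTB tag PU) are assumptions rather than the authors' evaluation script.

def uas(gold_heads, pred_heads, pos_tags, ignore_punct=False):
    """Unlabeled attachment score: fraction of tokens whose predicted head
    matches the gold head, optionally skipping punctuation tokens."""
    correct = total = 0
    for g, p, t in zip(gold_heads, pred_heads, pos_tags):
        if ignore_punct and t == "PU":   # CTB punctuation tag (assumption)
            continue
        total += 1
        correct += (g == p)
    return correct / total if total else 0.0

# Example: 4 tokens, the third one is punctuation.
print(uas([2, 0, 2, 2], [2, 0, 1, 2], ["NN", "VV", "PU", "NN"], ignore_punct=True))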
Two types of scores are reported for comparison: "UAS without p" is the UAS score excluding all punctuation tokens and "UAS with p" is the one including them. The results with different feature sets are given in Table 4. When the preactn features are involved, a beam search algorithm with width 5 is used for parsing; otherwise, a simple shift-reduce decoding is used. The features derived from the translated text bring a significant performance improvement, as high as 1.3%.

Table 4: The results with different feature sets
  features           with p   without p
  baseline    -d     0.846    0.858
              +d(a)  0.848    0.860
  +T(b)       -d     0.859    0.869
              +d     0.861    0.870
  (a) +d: using the three Markovian preact features and beam search decoding.
  (b) +T: using features derived from the translated text, as in Table 3.

To compare our parser with state-of-the-art counterparts, we use the same test data as (Wang et al., 2005), selecting sentences of length up to 40. Table 5 shows the results achieved by other researchers and by us (UAS with p), which indicates that our parser outperforms the others.4 However, our result is only slightly better than that of (Chen et al., 2008) when only sentences of length up to 40 are considered. As our result on the full test set is much better than the latter, this comparison indicates that our approach improves the performance on longer sentences.

Table 5: Comparison against the state-of-the-art (UAS with p; full test set / sentences up to 40 words)
  (McDonald and Pereira, 2006)(a)  0.825
  (Wang et al., 2007)              0.866
  (Chen et al., 2008)              0.852 / 0.884
  Ours                             0.861 / 0.889
  (a) This result was reported in (Wang et al., 2007).

4 There is a slight exception: using the same data split, (Yu et al., 2008) reported a UAS without p of 0.873 versus ours, 0.870.

The experimental results in (McDonald and Nivre, 2007) show that overly long dependency relations have a negative impact on parsing accuracy. For the proposed method, the improvement relative to dependency length is shown in Figure 2. The figure shows that our method gives observably better performance when dependency lengths are larger than 4. Although word order is changed by the translation, these results show that the useful information from the translated treebank still helps on long-distance dependencies.

[Figure 2: Performance (F1) vs. dependency length, comparing the baseline +d and +T +d settings.]

7 Discussion

If a treebank in the source language can help improve parsing in the target language, then there must be something in common between the two languages, or more precisely, between the two corresponding treebanks. (Zeman and Resnik, 2008) assumed that the morphology and syntax of the language pair should be very similar, and that is the case for the pair they considered, Danish and Swedish, two very close North European languages. Thus it is somewhat surprising that a translated English treebank can help Chinese parsing, as English and Chinese belong to two different language families. However, it is less surprising once we recognize that PTB and CTB share very similar guidelines on POS and syntactic annotation. Since discussing the details of the annotation guidelines would be too abstract, we examine the similarity of the two treebanks through the matching degree of the two word pair lists. The reason is that the effectiveness of the proposed method relies on how many word pairs at each parsing state can find full or partial matching partners in the translated word pair list.
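A minimal sketch of the full/partial matching lookup against the translated word pair list discussed here and defined in Section 5; the toy pair list, the part selectors, and the function name are illustrative assumptions, not the authors' implementation.

import math

# Toy translated word pair list: (dependant, head) -> frequency in the translated treebank.
PAIR_LIST = {("该", "银行"): 3, ("银行", "关闭"): 5}

def select(word, part):
    """Pick a part of a word: 'full', or 'charN' / 'char-N' for the first/last N characters."""
    if part == "full":
        return word
    n = int(part.replace("char", ""))
    return word[:n] if n > 0 else word[n:]

def phi(stack_word, input_word, dp_part, hd_part, sw_part="full", iw_part="full"):
    """phi-style feature: log frequency of a (possibly partial) match of the
    (stack word, input word) pair against (dependant, head) entries in the list."""
    s_key, i_key = select(stack_word, sw_part), select(input_word, iw_part)
    total = sum(f for (dp, hd), f in PAIR_LIST.items()
                if select(dp, dp_part) == s_key and select(hd, hd_part) == i_key)
    return math.log(total) if total else 0.0

print(phi("该", "银行", "full", "full"))                        # full match of the pair
print(phi("该", "银行", "full", "char1", iw_part="char1"))      # partial match on the head's first character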
Table 6 shows such statistics on the matching degree distribution over all training samples for Chinese parsing. The statistics in the table suggest that most word pairs to be checked during parsing have a full or partial hit in the translated word pair list. The list thus has the opportunity to provide a great deal of useful guiding information to help determine how those pairs should be handled. Therefore we have grounds for attributing the effectiveness of the proposed method to the similarity of the two treebanks. From Table 6, we also find that the partial matching strategy defined in Section 5 plays a very important role in improving the overall matching degree. Note that our approach is not closely tied to the characteristics of the two languages. This discussion raises an interesting question: which difference matters more in cross-language processing, the difference between the two languages themselves or between the corresponding annotated corpora? This may be discussed extensively in future work.

Table 6: Matching degree distribution
  dependant-match  head-match  Percent (%)
  None             None         9.6
  None             Partial     16.2
  None             Full         9.9
  Partial          None        12.4
  Partial          Partial     42.6
  Partial          Full         7.3
  Full             None         3.7
  Full             Partial      7.0
  Full             Full         0.2

Note that only a bilingual lexicon is adopted in our approach, which we regard as one of its main merits: a lexicon is much easier to obtain than an annotated corpus. One remaining question about this work is whether the bilingual lexicon must be tailored specifically to this kind of task. In our experience, the method is not very sensitive to whether a highly refined lexicon is chosen. We once found that many words, mostly named entities, were outside the lexicon, so we collected a named entity translation dictionary to enhance the original one; however, this extra effort did not yield an observable performance improvement. We conclude that a lexicon that keeps the two word pair lists highly matched is sufficient for this work, and this requirement can be conveniently satisfied as long as the lexicon covers enough high-frequency words from the source treebank.

8 Conclusion and Future Work

We propose a method to enhance dependency parsing in one language by using a translated treebank from another language. A simple statistical machine translation technique, word-by-word decoding, for which only a bilingual lexicon is necessary, is used to translate the source treebank. As dependency parsing is concerned with relations between word pairs, only word pairs with dependency relations in the translated treebank are chosen to generate additional features that enhance the parser for the target language. The experimental results on the English and Chinese treebanks show that the proposed method is effective and helps the Chinese parser in this work achieve a state-of-the-art result. Note that our method is evaluated on two treebanks with a similar annotation style and avoids relying on many language-specific properties. We therefore expect the method to be applicable to other similarly annotated treebanks.5 As an immediate example, a translated Chinese treebank might be used to improve English parsing. Although there is still work to do, the key remaining step is determining the matching strategy for searching the translated word pair list in English within the framework of our method.

Acknowledgements

We would like to thank the three anonymous reviewers for their insightful comments, Dr.
Chen Wenliang for for helpful discussions and Mr. Liu Jun for helping us fix a bug in our scoring program. References Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Fredrick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85. David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In EMNLP-2008, pages 877–886, Honolulu, Hawaii, USA. Wenliang Chen, Daisuke Kawahara, Kiyotaka Uchimoto, Yujie Zhang, and Hitoshi Isahara. 2008. Dependency parsing with short dependency relations in unlabeled data. In Proceedings of IJCNLP-2008, Hyderabad, India, January 8-10. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Probabilistic parsing action models for multi-lingual dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 940–946, Prague, Czech, June 28-30. Johan Hall, Jens Nilsson, Joakim Nivre, G¨ulsen Eryiˇgit, Be´ata Megyesi, Mattias Nilsson, and Markus Saers. 2007. Single malt or 5For example, Catalan and Spanish treebanks from the AnCora(-Es/Ca) Multilevel Annotated Corpus that are annotated by the Universitat de Barcelona (CLiC-UB) and the Universitat Politècnica de Catalunya (UPC). 62 blended? a study in multilingual parser optimization. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 933–939, Prague, Czech, June. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, pages 595–603, Columbus, Ohio, USA, June. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of ACL-COLING 2006, pages 337–344, Sydney, Australia, July. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 122–131, Prague, Czech, June 28-30. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL-2006, pages 81–88, Trento, Italy, April. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL-2005, pages 91–98, Ann Arbor, Michigan, USA, June 2530. Paola Merlo, Suzanne Stevenson, Vivian Tsang, and Gianluca Allaria. 2002. A multilingual paradigm for automatic verb classification. In ACL-2002, pages 207–214, Philadelphia, Pennsylvania, USA. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The conll 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, page 915–932, Prague, Czech, June. Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of IWPT2003), pages 149–160, Nancy, France, April 23-25. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL2002, pages 295–302, Philadelphia, USA, July. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of ACL-2007, pages 616–623, Prague, Czech Republic, June. Kenji Sagae and Jun’ichi Tsujii. 2007. 
Dependency parsing and domain adaptation with lr models and parser ensembles. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, page 1044–1050, Prague, Czech, June 28-30. Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In Proceedings of ACL-COLING 2006, page 569–576, Sydney, Australia, July. Mark Steedman, Miles Osborne, Anoop Sarkar, Stephen Clark, Rebecca Hwa, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proceedings of EACL-2003, page 331–338, Budapest, Hungary, April. Qin Iris Wang and Dale Schuurmans. 2008. Semisupervised convex training for dependency parsing. In Proceedings of ACL-08: HLT, pages 532–540, Columbus, Ohio, USA, June. Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2005. Strictly lexical dependency parsing. In Proceedings of IWPT-2005, pages 152–159, Vancouver, BC, Canada, October. Qin Iris Wang, Dekang Lin, and Dale Schuurmans. 2007. Simple training of dependency parsers via structured boosting. In Proceedings of IJCAI 2007, pages 1756–1762, Hyderabad, India, January. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT-2003), page 195–206, Nancy, France, April. Kun Yu, Daisuke Kawahara, and Sadao Kurohashi. 2008. Chinese dependency parsing with large scale automatically constructed case structures. In Proceedings of COLING-2008, pages 1049–1056, Manchester, UK, August. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In Proceedings of IJCNLP 2008 Workshop on NLP for Less Privileged Languages, pages 35– 42, Hyderabad, India, January. Hai Zhao and Chunyu Kit. 2008. Parsing syntactic and semantic dependencies with two single-stage maximum entropy models. In Proceeding of CoNLL2008, pages 203–207, Manchester, UK. Hai Zhao, Wenliang Chen, Chunyu Kit, and Guodong Zhou. 2009. Multilingual dependency learning: A huge feature engineering method to semantic dependency parsing. In Proceedings of CoNLL-2009, Boulder, Colorado, USA. Hai Zhao. 2009. Character-level dependencies in chinese: Usefulness and learning. In EACL-2009, pages 879–887, Athens, Greece. 63
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 620–628, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Latent Variable Models of Concept-Attribute Attachment Joseph Reisinger∗ Department of Computer Sciences The University of Texas at Austin Austin, Texas 78712 [email protected] Marius Pas¸ca Google Inc. 1600 Amphitheatre Parkway Mountain View, California 94043 [email protected] Abstract This paper presents a set of Bayesian methods for automatically extending the WORDNET ontology with new concepts and annotating existing concepts with generic property fields, or attributes. We base our approach on Latent Dirichlet Allocation and evaluate along two dimensions: (1) the precision of the ranked lists of attributes, and (2) the quality of the attribute assignments to WORDNET concepts. In all cases we find that the principled LDA-based approaches outperform previously proposed heuristic methods, greatly improving the specificity of attributes at each concept. 1 Introduction We present a Bayesian approach for simultaneously extending Is-A hierarchies such as those found in WORDNET (WN) (Fellbaum, 1998) with additional concepts, and annotating the resulting concept graph with attributes, i.e., generic property fields shared by instances of that concept. Examples of attributes include “height” and “eyecolor” for the concept Person or “gdp” and “president” for Country. Identifying and extracting such attributes relative to a set of flat (i.e., nonhierarchically organized) labeled classes of instances has been extensively studied, using a variety of data, e.g., Web search query logs (Pas¸ca and Van Durme, 2008), Web documents (Yoshinaga and Torisawa, 2007), and Wikipedia (Suchanek et al., 2007; Wu and Weld, 2008). Building on the current state of the art in attribute extraction, we propose a model-based approach for mapping flat sets of attributes annotated with class labels into an existing ontology. This inference problem is divided into two main components: (1) identifying the appropriate parent concept for each labeled class and (2) learning ∗Contributions made during an internship at Google. the correct level of abstraction for each attribute in the extended ontology. For example, consider the task of annotating WN with the labeled class renaissance painters containing the class instances Pisanello, Hieronymus Bosch, and Jan van Eyck and associated with the attributes “famous works” and “style.” Since there is no WN concept for renaissance painters, the latter would need to be mapped into WN under, e.g., Painter. Furthermore, since “famous works” and “style” are not specific to renaissance painters (or even the WN concept Painter), they should be placed at the most appropriate level of abstraction, e.g., Artist. In this paper, we show that both of these goals can be realized jointly using a probabilistic topic model, namely hierarchical Latent Dirichlet Allocation (LDA) (Blei et al., 2003b). 
There are three main advantages to using a topic model as the annotation procedure: (1) Unlike hierarchical clustering (Duda et al., 2000), the attribute distribution at a concept node is not composed of the distributions of its children; attributes found specific to the concept Painter would not need to appear in the distribution of attributes for Person, making the internal distributions at each concept more meaningful as attributes specific to that concept; (2) Since LDA is fully Bayesian, its model semantics allow additional prior information to be included, unlike standard models such as Latent Semantic Analysis (Hofmann, 1999), improving annotation precision; (3) Attributes with multiple related meanings (i.e., polysemous attributes) are modeled implicitly: if an attribute (e.g., “style”) occurs in two separate input classes (e.g., poets and car models), then that attribute might attach at two different concepts in the ontology, which is better than attaching it at their most specific common ancestor (Whole) if that ancestor is too general to be useful. However, there is also a pressure for these two occurrences to attach to a single concept. We use WORDNET 3.0 as the specific test ontology for our annotation procedure, and evalu620 anticancer drugs: mechanism of action, uses, extravasation, solubility, contraindications, side effects, chemistry, molecular weight, history, mode of action bollywood actors: biography, filmography, age, biodata, height, profile, autobiography, new wallpapers, latest photos, family pictures citrus fruits: nutrition, health benefits, nutritional value, nutritional information, calories, nutrition facts, history european countries: population, flag, climate, president, economy, geography, currency, population density, topography, vegetation, religion, natural resources london boroughs: population, taxis, local newspapers, mp, lb, street map, renault connexions, local history microorganisms: cell structure, taxonomy, life cycle, reproduction, colony morphology, scientific name, virulence factors, gram stain, clipart renaissance painters: early life, bibliography, short biography, the david, bio, painting, techniques, homosexuality, birthplace, anatomical drawings, famous paintings Figure 1: Examples of labeled attribute sets extracted using the method from (Pas¸ca and Van Durme, 2008). ate three variants: (1) a fixed structure approach where each flat class is attached to WN using a simple string-matching heuristic, and concept nodes are annotated using LDA, (2) an extension of LDA allowing for sense selection in addition to annotation, and (3) an approach employing a nonparametric prior over tree structures capable of inferring arbitrary ontologies. The remainder of this paper is organized as follows: §2 describes the full ontology annotation framework, §3 introduces the LDA-based topic models, §4 gives the experimental setup, §5 gives results, §6 gives related work and §7 concludes. 2 Ontology Annotation Input to our ontology annotation procedure consists of sets of class instances (e.g., Pisanello, Hieronymus Bosch) associated with class labels (e.g., renaissance painters) and attributes (e.g., “birthplace”, “famous works”, “style” and “early life”). 
Clusters of noun phrases (instances) are constructed using distributional similarity (Lin and Pantel, 2002; Hearst, 1992) and are labeled by applying “such-as” surface patterns to raw Web text (e.g., “renaissance painters such as Hieronymous Bosch”), yielding 870K instances in more than 4500 classes (Pas¸ca and Van Durme, 2008). Attributes for each flat labeled class are extracted from anonymized Web search query logs using the minimally supervised procedure in (Pas¸ca, 2008)1. Candidate attributes are ranked based on their weighted Jaccard similarity to a set of 5 manually provided seed attributes for the 1Similar query data, including query strings and frequency counts, is available from, e.g., (Gao et al., 2007) LDA β θ z α D T w η β θ z α D T w η c Fixed Structure LDA β θ z α D ∞ w η T c γ nCRP T w w w Figure 2: Graphical models for the LDA variants; shaded nodes indicate observed quantities. class european countries. Figure 1 illustrates several such labeled attribute sets (the underlying instances are not depicted). Naturally, the attributes extracted are not perfect, e.g., “lb” and “renault connexions” as attributes for london boroughs. We propose a set of Bayesian generative models based on LDA that take as input labeled attribute sets generated using an extraction procedure such as the above and organize the attributes in WN according to their level of generality. Annotating WN with attributes proceeds in three steps: (1) attaching labeled attribute sets to leaf concepts in WN using string distance, (2) inferring an attribute model using one of the LDA variants discussed in § 3, and (3) generating ranked lists of attributes for each concept using the model probabilities (§ 4.3). 3 Hierarchical Topic Models 3.1 Latent Dirichlet Allocation The underlying mechanism for our annotation procedure is LDA (Blei et al., 2003b), a fully Bayesian extension of probabilistic Latent Semantic Analysis (Hofmann, 1999). Given D labeled attribute sets wd, d ∈D, LDA infers an unstructured set of T latent annotated concepts over which attribute sets decompose as mixtures.2 The latent annotated concepts represent semantically coherent groups of attributes expressed in the data, as shown in Example 1. The generative model for LDA is given by θd|α ∼ Dir(α), d ∈1 . . . D βt|η ∼ Dir(η), t ∈1 . . . T zi,d|θd ∼ Mult(θd), i ∈1 . . . |wd| wi,d|βzi,d ∼ Mult(βzi,d), i ∈1 . . . |wd| (1) where α and η are hyperparameters smoothing the per-attribute set distribution over concepts and per-concept attribute distribution respectively (see Figure 2 for the graphical model). We are interested in the case where w is known and we want 2In topic modeling literature, attributes are words and attribute sets are documents. 621 to compute the conditional posterior of the remaining random variables p(z, β, θ|w). This distribution can be approximated efficiently using Gibbs sampling. See (Blei et al., 2003b) and (Griffiths and Steyvers, 2002) for more details. 
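The Gibbs sampler mentioned above can be sketched as follows, treating each labeled attribute set as a document over attribute tokens. This is a generic collapsed Gibbs sampler for the model in Equation 1, not the authors' code; the toy attribute sets and all names are illustrative.

import random
from collections import defaultdict

def lda_gibbs(attr_sets, T, alpha, eta, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA; attr_sets is a list of lists of attributes."""
    rng = random.Random(seed)
    vocab = sorted({w for d in attr_sets for w in d})
    V = len(vocab)
    # Count tables: per-set concept counts, per-concept attribute counts, concept totals.
    n_dt = [[0] * T for _ in attr_sets]
    n_tw = [defaultdict(int) for _ in range(T)]
    n_t = [0] * T
    z = []
    for d, doc in enumerate(attr_sets):
        zd = []
        for w in doc:
            t = rng.randrange(T)
            zd.append(t); n_dt[d][t] += 1; n_tw[t][w] += 1; n_t[t] += 1
        z.append(zd)
    for _ in range(iters):
        for d, doc in enumerate(attr_sets):
            for i, w in enumerate(doc):
                t = z[d][i]
                n_dt[d][t] -= 1; n_tw[t][w] -= 1; n_t[t] -= 1
                # p(t | rest) ∝ (n_dt + alpha) * (n_tw + eta) / (n_t + V * eta)
                weights = [(n_dt[d][k] + alpha) * (n_tw[k][w] + eta) / (n_t[k] + V * eta)
                           for k in range(T)]
                r = rng.uniform(0, sum(weights))
                for k, wk in enumerate(weights):
                    r -= wk
                    if r <= 0:
                        t = k
                        break
                z[d][i] = t
                n_dt[d][t] += 1; n_tw[t][w] += 1; n_t[t] += 1
    return n_tw, n_dt

# Toy input: three tiny attribute sets.
sets = [["quotations", "teachings", "biography"],
        ["filmography", "biography", "official website"],
        ["bibliography", "writing style", "biography"]]
topics, mixtures = lda_gibbs(sets, T=2, alpha=1.0, eta=0.1)
print([dict(t) for t in topics])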
(Example 1) Given 26 labeled attribute sets falling into three broad semantic categories: philosophers, writers and actors (e.g., sets for contemporary philosophers, women writers, bollywood actors), LDA is able to infer a meaningful set of latent annotated concepts: quotations teachings virtue ethics philosophies biography sayings new movies filmography official website biography email address autobiography writing style influences achievements bibliography family tree short biography (philosopher) (writer) (actor) (concept labels manually added for the latent annotated concepts are shown in parentheses). Note that with a flat concept structure, attributes can only be separated into broad clusters, so the generality/specificity of attributes cannot be inferred. Parameters were α=1, η=0.1, T=3. 3.2 Fixed-Structure LDA In this paper, we extend LDA to model structural dependencies between latent annotated concepts (cf. (Li and McCallum, 2006; Sivic et al., 2008)); In particular, we fix the concept structure to correspond to the WN Is-A hierarchy. Each labeled attribute set is assigned to a leaf concept in WN based on the edit distance between the concept label and the attribute set label. Possible latent concepts for this set include the concepts along all paths from its attachment point to the WN root, following Is-A relation edges. Therefore, any two labeled attribute sets share a number of latent concepts based on their similarity in WN: all labeled attribute sets share at least the root concept, and may share more concepts depending on their most specific, common ancestor. Under such a model, more general attributes naturally attach to latent concept nodes closer to the root, and more specific attributes attach lower (Example 2). Formally, we introduce into LDA an extra set of random variables cd identifying the subset of concepts in T available to attribute set d, as shown in the diagram at the middle of Figure 2.3 For example, with a tree structure, cd would be constrained to correspond to the concept nodes in T on the path from the root to the leaf containing d. Equation 1 can be adapted to this case if the index t is taken to range over concepts applicable to attribute set d. 3Abusing notation, we use T to refer to a structured set of concepts and to refer to the number of concepts in flat LDA (Example 2 ) Fixing the latent concept structure to correspond to WN (dark/purple nodes), and attaching each labeled attribute set (examples depicted by light/orange nodes) yields the annotated hierarchy: works picture writings history biography philosophy natural rights criticism ethics law literary criticism books essays short stories novels tattoos funeral filmography biographies net worth person philosopher writer actor scholar intellectual performer entertainer literate communicator bollywood actors women writers contemporary philosophers Attribute distributions for the small nodes are not shown. Dotted lines indicate multiple paths from the root, which can be inferred using sense selection. Unlike with the flat annotated concept structure, with a hierarchical concept structure, attributes can be separated by their generality. Parameters were set at α=1 and η=0.1. 3.3 Sense-Selective LDA For each labeled attribute set, determining the appropriate parent concept in WN is difficult since a single class label may be found in many different synsets (for example, the class bollywood actors might attach to the “thespian” sense of Actor or the “doer” sense). 
Fixed-hierarchy LDA can be extended to perform automatic sense selection by placing a distribution over the leaf concepts c, describing the prior probability of each possible path through the concept tree. For WN, this amounts to fixing the set of concepts to which a labeled attribute set can attach (e.g., restricting it to a semantically similar subset) and assigning a probability to each concept (e.g., using the relative WN concept frequencies). The probability for each sense attachment cd becomes p(cd|w, c−d, z) ∝p(wd|c, w−d, z)p(cd|c−d), i.e., the complete conditionals for sense selection. p(cd|c−d) is the conditional probability for attaching attribute set d at cd (e.g., simply the prior p(cd|c−d) def = p(cd) in the WN case). A closed form expression for p(wd|c, w−d, z) is derived in (Blei et al., 2003a). 3.4 Nested Chinese Restaurant Process In the final model, shown in the diagram on the right side of Figure 2, LDA is extended hierarchically to infer arbitrary fixed-depth tree structures 622 from data. Unlike the fixed-structure and senseselective approaches which use the WN hierarchy directly, the nCRP generates its own annotated hierarchy whose concept nodes do not necessarily correspond to WN concepts (Example 3). Each node in the tree instead corresponds to a latent annotated concept with an arbitrary number of subconcepts, distributed according to a Dirichlet Process (Ferguson, 1973). Due to its recursive structure, the underlying model is called the nested Chinese Restaurant Process (nCRP). The model in Equation 1 is extended with cd|γ ∼nCRP(γ, L), d ∈D i.e., latent concepts for each attribute set are drawn from an nCRP. The hyperparameter γ controls the probability of branching via the per-node Dirichlet Process, and L is the fixed tree depth. An efficient Gibbs sampling procedure is given in (Blei et al., 2003a). (Example 3) Applying nCRP to the same three semantic categories: philosophers, writers and actors, yields the model: biography date of birth childhood picture family works books quotations critics poems teachings virtue ethics structuralism philosophies political theory criticism short stories style poems complete works accomplishments official website profile life story achievements filmography pictures new movies official site works (root) (philosopher) (writer) (actor) bollywood actors women writers contemporary philosophers (manually added labels are shown in parentheses). Unlike in WN, the inferred structure naturally places philosopher and writer under the same subconcept, which is also separate from actor. Hyperparameters were α=0.1, η=0.1, γ=1.0. 4 Experimental Setup 4.1 Data Analysis We employ two data sets derived using the procedure in (Pas¸ca and Van Durme, 2008): the full set of automatic extractions generated in § 2, and a subset consisting of all attribute sets that fall under the hierarchies rooted at the WN concepts living thing#1 (i.e., the first sense of living thing), substance#7, location#1, person#1, organization#1 and food#1, manually selected to cover a highprecision subset of labeled attribute sets. By comparing the results across the two datasets we can measure each model’s robustness to noise. In the full dataset, there are 4502 input attribute sets with a total of 225K attributes (24K unique), of which 8121 occur only once. The 10 attributes occurring in the most sets (history, definition, picture(s), images, photos, clipart, timeline, clip art, types) account for 6% of the total. 
For the subset, there are 1510 attribute sets with 76K attributes (11K unique), of which 4479 occur only once. 4.2 Model Settings Baseline: Each labeled attribute set is mapped to the most common WN concept with the closest label string distance (Pas¸ca, 2008). Attributes are propagated up the tree, attaching to node c if they are contained in a majority of c’s children. LDA: LDA is used to infer a flat set of T = 300 latent annotated concepts describing the data. The concept selection smoothing parameter is set as α=100. The smoother for the per-concept multinomial over words is set as η=0.1.4 The effects of concept structure on attribute precision can be isolated by comparing the structured models to LDA. Fixed-Structure LDA (fsLDA): The latent concept hierarchy is fixed based on WN (§ 3.2), and labeled attribute sets are mapped into it as in baseline. The concept graph for each labeled attribute set wd is decomposed into (possibly overlapping) chains, one for each unique path from the WN root to wd’s attachment point. Each path is assigned a copy wd, reducing the bias in attribute sets with many unique ancestor concepts.5 The final models contain 6566 annotated concepts on average. Sense-Selective LDA (ssLDA): For the sense selective approach (§ 3.3), the set of possible sense attachments for each attribute set is taken to be all WN concepts with the lowest edit distance to its label, and the conditional probability of each sense attachment p(cd) is set proportional to its relative frequency. This procedure results in 2 to 3 senses per attribute set on average, yielding models with 7108 annotated concepts. Arbitrary hierarchy (nCRP): For the arbitrary hierarchy model (§ 3.4), we set the maximum tree depth L=5, per-concept attribute smoother η=0.05, concept assignment smoother α=10 and nCRP branching proportion γ=1.0. The resulting 4(Parameter setting) Across all models, the main results in this paper are robust to changes in α. For nCRP, changes in η and γ affect the size of the learned model but have less effect on the final precision. Larger values for L give the model more flexibility, but take longer to train. 5Reducing the directed-acyclic graph to a tree ontology did not significantly affect precision. 623 models span 380 annotated concepts on average. 4.3 Constructing Ranked Lists of Attributes Given an inferred model, there are several ways to construct ranked lists of attributes: Per-Node Distribution: In fsLDA and ssLDA, attribute rankings can be constructed directly for each WN concept c, by computing the likelihood of attribute w attaching to c, L(c|w) = p(w|c) averaged over all Gibbs samples (discarding a fixed number of samples for burn-in). Since c’s attribute distribution is not dependent on the distributions of its children, the resulting distribution is biased towards more specific attributes. Class-Entropy (CE): In all models, the inferred latent annotated concepts can be used to smooth the attribute rankings for each labeled attribute set. Each sample from the posterior is composed of two components: (1) a multinomial distribution over a set of WN nodes, p(c|wd, α) for each attribute set wd, where the (discrete) values of c are WN concepts, and (2) a multinomial distribution over attributes p(w|c, η) for each WN concept c. To compute an attribute ranking for wd, we have p(w|wd) = X c p(w|c, η)p(c|wd, α). 
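In other words, p(w|w_d) is obtained by summing p(w|c, η) p(c|w_d, α) over concepts c. A small sketch of this smoothing (the array names and toy numbers are assumptions; in practice both factors are estimated from the Gibbs samples):

import numpy as np

# Hypothetical posterior estimates: p(w|c) per concept, p(c|w_d) per attribute set.
p_w_given_c = np.array([[0.6, 0.3, 0.1],    # concept 0 over a 3-attribute vocabulary
                        [0.1, 0.2, 0.7]])   # concept 1
p_c_given_d = np.array([[0.8, 0.2],          # attribute set 0 over the 2 concepts
                        [0.3, 0.7]])         # attribute set 1

# Class-entropy smoothing: p(w|w_d) = sum_c p(w|c) p(c|w_d).
p_w_given_d = p_c_given_d @ p_w_given_c
for d, row in enumerate(p_w_given_d):
    print(d, np.argsort(-row))   # attributes ranked by the smoothed probability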
Given this new ranking for each attribute set, we can compute new rankings for each WN concept c by averaging again over all the wd that appear as (possible indirect) descendants of c. Thus, this method uses LDA to first perform reranking on the raw extractions before applying the baseline ontology induction procedure (§ 4.2).6 CE ranking exhibits a “conservation of entropy” effect, whereby the proportion of general to specific attributes in each attribute set wd remains the same in the posterior. If set A contains 10 specific attributes and 30 generic ones, then the latter will be favored over the former in the resulting distribution 3 to 1. Conservation of entropy is a strong assumption, and in particular it hinders improving the specificity of attribute rankings. Class-Entropy+Prior: The LDA-based models do not inherently make use of any ranking information contained in the original extractions. However, such information can be incorporated in the form of a prior. The final ranking method combines CE with an exponential prior over the attribute rank in the baseline extraction. For each attribute set, we compute the probability of each 6One simple extension is to run LDA again on the CE ranked output, yielding an iterative procedure; however, this was not found to significantly affect precision. attribute p(w|wd) = plda(w|wd)pbase(w|wd), assuming a parametric form for pbase(w|wd) def = θr(w,wd). Here, r(w, wd) is the rank of w in attribute set d. In all experiments reported, θ=0.9. 4.4 Evaluating Attribute Attachment For the WN-based models, in addition to measuring the average precision of the reranked attributes, it is also useful to evaluate the assignment of attributes to WN concepts. For this evaluation, human annotators were asked to determine the most appropriate WN synset(s) for a set of gold attributes, taking into account polysemous usage. For each model, ranked lists of possible concept assignments C(w) are generated for each attribute w, using L(c|w) for ranking. The accuracy of a list C(w) for an attribute w is measured by a scoring metric that corresponds to a modification (Pas¸ca and Alfonseca, 2009) of the mean reciprocal rank score (Voorhees and Tice, 2000): DRR = max 1 rank(c) × (1 + PathToGold) where rank(c) is the rank (from 1 up to 10) of a concept c in C(w), and PathToGold is the length of the minimum path along Is-A edges in the conceptual hierarchies between the concept c, on one hand, and any of the gold-standard concepts manually identified for the attribute w, on the other hand. The length PathToGold is 0, if the returned concept is the same as the gold-standard concept. Conversely, a gold-standard attribute receives no credit (that is, DRR is 0) if no path is found in the hierarchies between the top 10 concepts of C(w) and any of the gold-standard concepts, or if C(w) is empty. The overalll precision of a given model is the average of the DRR scores of individual attributes, computed over the gold assignment set (Pas¸ca and Alfonseca, 2009). 5 Results 5.1 Attribute Precision Precision was manually evaluated relative to 23 concepts chosen for broad coverage.7 Table 1 shows precision at n and the Mean Average Precision (MAP); In all LDA-based models, the Bayes average posterior is taken over all Gibbs samples 7(Precision evaluation) Attributes were hand annotated using the procedure in (Pas¸ca and Van Durme, 2008) and numerical precision scores (1.0 for vital, 0.5 for okay and 0.0 for incorrect) were assigned for the top 50 attributes per concept. 
25 reference concepts were originally chosen, but 2 were not populated with attributes in any method, and hence were excluded from the comparison. 624 Model Precision @ MAP 5 10 20 50 Base (unranked) 0.45 0.48 0.47 0.44 0.46 Base (ranked) 0.77 0.77 0.69 0.58 0.67 LDA† -24 · 105 CE 0.64 0.53 0.52 0.56 0.55 CE+Prior 0.80 0.73 0.74 0.58 0.69 Fixed-structure (fsLDA) -22 · 105 Per-Node 0.43 0.41 0.42 0.41 0.42 CE 0.75 0.68 0.63 0.55 0.63 CE+Prior 0.78 0.77 0.71 0.59 0.69 Sense-selective (ssLDA) -18 · 105 Per-Node 0.37 0.44 0.42 0.41 0.42 CE 0.69 0.68 0.65 0.58 0.64 CE+Prior 0.81 0.80 0.72 0.60 0.70 nCRP† -14 · 105 CE 0.74 0.76 0.73 0.65 0.72 CE+Prior 0.88 0.85 0.81 0.68 0.78 Subset only Base (unranked) 0.61 0.62 0.62 0.60 0.62 Base (ranked) 0.79 0.82 0.72 0.65 0.72 –WN living thing 0.73 0.80 0.71 0.65 0.69 –WN substance 0.80 0.80 0.69 0.53 0.68 –WN location 0.95 0.93 0.84 0.75 0.84 –WN person 0.75 0.83 0.75 0.77 0.77 –WN organization 0.60 0.70 0.60 0.68 0.63 –WN food 0.90 0.85 0.58 0.45 0.64 Fixed-structure (fsLDA) -77 · 104 Per-Node 0.64 0.58 0.52 0.56 0.55 CE 0.90 0.83 0.78 0.73 0.78 CE+Prior 0.88 0.86 0.80 0.66 0.78 –WN living thing 0.83 0.88 0.78 0.63 0.77 –WN substance 0.85 0.83 0.78 0.66 0.76 –WN location 0.95 0.95 0.88 0.75 0.85 –WN person 1.00 0.93 0.91 0.76 0.87 –WN organization 0.80 0.70 0.80 0.76 0.75 –WN food 0.80 0.70 0.63 0.40 0.59 nCRP† -45 · 104 CE 0.88 0.88 0.78 0.71 0.79 CE+Prior 0.90 0.88 0.83 0.67 0.79 Table 1: Precision at n and mean-average precision for all models and data sets. Inset plots show log-likelihood of each Gibbs sample, indicating convergence except in the case of nCRP. † indicates models that do not generate annotated concepts corresponding to WN nodes and hence have no per-node scores. after burn-in.8 The improvements in average precision are important, given the amount of noise in the raw extracted data. When prior attribute rank information (PerNode and CE scores) from the baseline extractions is not incorporated, all LDA-based models outperform the unranked baseline (Table 1). In particular, LDA yields a 17% reduction in error (MAP) 8(Bayes average vs. maximum a-posteriori) The full Bayesian average posterior consistently yielded higher precision than the maximum a-posteriori model. For the per-node distributions, the fsLDA Bayes average model exhibits a 17% reduction in relative error over the maximum a-posteriori estimate and for ssLDA there was a 26% reduction. Model DRR Scores all (n) found (n) Base (unranked) 0.14 (150) 0.24 (91) Base (ranked) 0.17 (150) 0.21 (123) Fixed-structure (fsLDA) 0.31 (150) 0.37 (128) Sense-selective (ssLDA) 0.31 (150) 0.37 (128) Subset only Base (unranked) 0.15 (97) 0.27 (54) Base (ranked) 0.18 (97) 0.24 (74) WN living thing 0.29 (27) 0.35 (22) WN substance 0.21 (12) 0.32 (8) WN location 0.12 (30) 0.17 (20) WN person 0.37 (18) 0.44 (15) WN organization 0.15 (31) 0.17 (27) WN food 0.15 (6) 0.22 (4) Fixed-structure (fsLDA) 0.37 (97) 0.47 (77) WN living thing 0.45 (27) 0.55 (22) WN substance 0.48 (12) 0.64 (9) WN location 0.34 (30) 0.44 (23) WN person 0.44 (18) 0.52 (15) WN organization 0.44 (31) 0.71 (19) WN food 0.60 (6) 0.72 (5) Table 2: All measures the DRR score relative to the entire gold assignment set; found measures DRR only for attributes with DRR(w)>0; n is the number of scores averaged. over the baseline, fsLDA yields a 31% reduction, ssLDA yields a 33% reduction and nCRP yields a 48% reduction (24% reduction over fsLDA). 
Performance also improves relative to the ranked baseline when prior ranking information is incorporated in the LDA-based models, as indicated by CE+Prior scores in Table 1. LDA and fsLDA reduce relative error by 6%, ssLDA by 9% and nCRP by 33%. Furthermore, nCRP precision without ranking information surpasses the baseline with ranking information, indicating robustness to extraction noise. Precision curves for individual attribute sets are shown in Figure 3. Overall, learning unconstrained hierarchies (nCRP) increases precision, but as the inferred node distributions do not correspond to WN concepts they cannot be used for annotation. One benefit to using an admixture model like LDA is that each concept node in the resulting model contains a distribution over attributes specific only to that node (in contrast to, e.g., hierarchical agglomerative clustering). Although absolute precision is lower as more general attributes have higher average precision (Per-Node scores in Table 1), these distributions are semantically meaningful in many cases (Figure 4) and furthermore can be used to calculate concept assignment precision for each attribute.9 9Per-node distributions (and hence DRR) were not evalu625 Figure 3: Precision (%) vs. rank plots (log scale) of attributes broken down across 18 labeled test attribute sets. Ranked lists of attributes are generated using the CE+Prior method. 5.2 Concept Assignment Precision The precision of assigning attributes to various concepts is summarized in Table 2. Two scores are given: all measures DRR relative to the entire gold assignment set, and found measures DRR only for attributes with DRR(w)>0. Comparing the scores gives an estimate of whether coverage or precision is responsible for differences in scores. fsLDA and ssLDA both yield a 20% reduction in relative error (17.2% increase in absolute DRR) over the unranked baseline and a 17.2% reduction (14.2% absolute increase) over the ranked baseline. 5.3 Subset Precision and DRR Precision scores for the manually selected subset of extractions are given in the second half of Table 1. Relative to the unranked baseline, fsLDA and nCRP yield 42% and 44% reductions in error respectively, and relative to the ranked baseline they both yield a 21.4% reduction. In terms of absolute precision, there is no benefit to adding in prior ranking knowledge to fsLDA or nCRP, indicating diminishing returns as average baseline precision increases (Baseline vs. fsLDA/nCRP CE scores). Broken down across each of the subhierarchies, LDA helps in all cases except food. DRR scores for the subset are given in the lower half of Table 2. Averaged over all gold test attributes, DRR scores double when using fsLDA. These results can be misleading, however, due to artificially low coverage. Hence, Table 2 also shows DRR scores broken down over each subhierarchy, In this case fsLDA more than doubles the DRR relative to the baseline for substance and location, and triples it for organization and food. ated for LDA or nCRP, because they are not mapped to WN. 6 Related Work A large body of previous work exists on extending WORDNET with additional concepts and instances (Snow et al., 2006; Suchanek et al., 2007); these methods do not address attributes directly. Previous literature in attribute extraction takes advantage of a range of data sources and extraction procedures (Chklovski and Gil, 2005; Tokunaga et al., 2005; Pas¸ca and Van Durme, 2008; Yoshinaga and Torisawa, 2007; Probst et al., 2007; Van Durme et al., 2008; Wu and Weld, 2008). 
However these methods do not address the task of determining the level of specificity for each attribute. The closest studies to ours are (Pas¸ca, 2008), implemented as the baseline method in this paper; and (Pas¸ca and Alfonseca, 2009), which relies on heuristics rather than formal models to estimate the specificity of each attribute.

7 Conclusion

This paper introduced a set of methods based on Latent Dirichlet Allocation (LDA) for jointly extending the WORDNET ontology and annotating its concepts with attributes (see Figure 4 for the end result). LDA significantly outperformed a previous approach both in terms of the concept assignment precision (i.e., determining the correct level of generality for an attribute) and the mean-average precision of attribute lists at each concept (i.e., filtering out noisy attributes from the base extraction set). Also, relative precision of the attachment models was shown to improve significantly when the raw extraction quality increased, showing the long-term viability of the approach.

[Figure 4: Example per-node attribute distribution generated by fsLDA. Light/orange nodes represent labeled attribute sets attached to WN, and the full hypernym graph is given for each in dark/purple nodes. White nodes depict the top attributes predicted for each WN concept. These inferred annotations exhibit a high degree of concept specificity, naturally becoming more general at higher levels of the ontology. Some annotations, such as for the concepts Agent, Substance, Living Thing and Person, have high precision and specificity while others, such as Liquor and Actor, need improvement. Overall, the more general concepts yield better annotations as they are averaged over many labeled attribute sets, reducing noise.]

References

D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. 2003a. Hierarchical topic models and the nested Chinese restaurant process. In Proceedings of the 17th Conference on Neural Information Processing Systems (NIPS-2003), pages 17–24, Vancouver, British Columbia.
D. Blei, A. Ng, and M. Jordan. 2003b. Latent dirichlet allocation. Machine Learning Research, 3:993–1022.
T. Chklovski and Y. Gil. 2005. An analysis of knowledge collected from volunteer contributors. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), pages 564–571, Pittsburgh, Pennsylvania.
R. Duda, P. Hart, and D. Stork. 2000. Pattern Classification. John Wiley and Sons.
C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database and Some of its Applications. MIT Press.
T. Ferguson. 1973. A bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230.
W. Gao, C. Niu, J. Nie, M. Zhou, J. Hu, K. Wong, and H. Hon. 2007. Cross-lingual query suggestion using query logs of different languages. In Proceedings of the 30th ACM Conference on Research and Development in Information Retrieval (SIGIR-07), pages 463–470, Amsterdam, The Netherlands.
T. Griffiths and M. Steyvers. 2002. A probabilistic approach to semantic representation. In Proceedings of the 24th Conference of the Cognitive Science Society (CogSci02), pages 381–386, Fairfax, Virginia.
M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539–545, Nantes, France.
T. Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd ACM Conference on Research and Development in Information Retrieval (SIGIR-99), pages 50–57, Berkeley, California.
W. Li and A. McCallum. 2006. Pachinko allocation: DAG-structured mixture models of topic correlations. In Proceedings of the 23rd International Conference on Machine Learning (ICML-06), pages 577–584, Pittsburgh, Pennsylvania.
D. Lin and P. Pantel. 2002. Concept discovery from text.
In Proceedings of the 19th International Conference on Computational linguistics (COLING-02), pages 1–7, Taipei, Taiwan. M. Pas¸ca and E. Alfonseca. 2009. Web-derived resources for Web Information Retrieval: From conceptual hierarchies to attribute hierarchies. In Proceedings of the 32nd International Conference on Research and Development in Information Retrieval (SIGIR-09), Boston, Massachusetts. M. Pas¸ca and B. Van Durme. 2008. Weaklysupervised acquisition of open-domain classes and class attributes from web documents and query logs. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 19–27, Columbus, Ohio. M. Pas¸ca. 2008. Turning Web text and search queries into factual knowledge: Hierarchical class attribute extraction. In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI08), pages 1225–1230, Chicago, Illinois. K. Probst, R. Ghani, M. Krema, A. Fano, and Y. Liu. 2007. Semi-supervised learning of attribute-value pairs from product descriptions. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07), pages 2838–2843, Hyderabad, India. J. Sivic, B. Russell, A. Zisserman, W. Freeman, and A. Efros. 2008. Unsupervised discovery of visual object class hierarchies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR-08), pages 1–8, Anchorage, Alaska. R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-06), pages 801–808, Sydney, Australia. F. Suchanek, G. Kasneci, and G. Weikum. 2007. Yago: a core of semantic knowledge unifying WordNet and Wikipedia. In Proceedings of the 16th World Wide Web Conference (WWW-07), pages 697–706, Banff, Canada. K. Tokunaga, J. Kazama, and K. Torisawa. 2005. Automatic discovery of attribute words from Web documents. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 106–118, Jeju Island, Korea. B. Van Durme, T. Qian, and L. Schubert. 2008. Class-driven attribute extraction. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008), pages 921–928, Manchester, United Kingdom. E.M. Voorhees and D.M. Tice. 2000. Building a question-answering test collection. In Proceedings of the 23rd International Conference on Research and Development in Information Retrieval (SIGIR00), pages 200–207, Athens, Greece. F. Wu and D. Weld. 2008. Automatically refining the Wikipedia infobox ontology. In Proceedings of the 17th World Wide Web Conference (WWW-08), pages 635–644, Beijing, China. N. Yoshinaga and K. Torisawa. 2007. Open-domain attribute-value acquisition from semi-structured texts. In Proceedings of the 6th International Semantic Web Conference (ISWC-07), Workshop on Text to Knowledge: The Lexicon/Ontology Interface (OntoLex-2007), pages 55–66, Busan, South Korea. 628
2009
70
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 629–637, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP The Chinese Aspect Generation Based on Aspect Selection Functions Guowen Yang The Institute of linguistics Chinese Academy of Social Sciences 5 Jianguomennei Dajie, 100732 Beijing P.R.China John A. Bateman FB10, Sprach und Literaturwissenschaften Bremen University 28334 Germany [email protected] [email protected] Abstract This paper describes our system for generating Chinese aspect expressions. In the system, the semantics of different aspects is characterized by specific temporal and conceptual features. The semantic applicability conditions of each individual aspect are theoretically represented by an aspect selection function (ASF). The generation is realized by evaluating implemented inquiries which formally define the ASFs, traversing the grammatical network, and making aspect selections. 1 Introduction Aspect is one of the most controversial topics among linguists and philosophers. Unlike the function of tense, which relates the time of situation to a deictic center, aspects are different ways of viewing the states of a situation with respect to the situation’s internal temporal constituency (Yang, 2007). This paper describes our system for generating Chinese aspect expressions. The aspect forms covered in the present research were derived from a corpus analysis. The main task of the aspect research from a computational perspective is to implement computationally both the semantic interpretations and the grammatical realizations of aspects as formulated in theoretical work. The theoretical principle of this is, to a large extent, based on Montague’s intensional logic (Montague, 1970; Dowty, 1979; Bestougeff and Ligozat, 1992, Portner and Partee, 2002). It is held that the goal of semantics is to present the truth conditions for each well formed sentence. In previous studies there are some fruitful experiments on computationally processing temporal information in Chinese, e.g. Lee and Hsu’s Chinese to English machine translation system (1990), Li, Wong, and Yuan’s temporal information-extraction system (2001), Xue’s machine learning system (2008), and Xue, Zhong & Chen’s tense annotation system (2008). However, a systematic investigation, including the implementation of the semantics of aspects, has rarely been carried out before and is one of the main contributions of the present research. Aspects are determined by both situation types, which build specific situation structures, and particular viewpoints that construct specific temporal relations between the viewing points and the internal temporal constituencies of situations. These two kinds of factors, which influence aspect selections, can be characterized by aspectual features. This makes it possible for us to use a function which takes relevant time points and concepts as its parameters and “calculates” the truth value of the semantic applicability conditions of a specific aspect in order to make a corresponding aspect selection in language generation. We term this function the Aspect Selection Function (ASF). The ASFs are used for the theoretical descriptions of the aspects and, at the same time, they are the basis for our computational implementation of the semantics of the aspects. 
Our system has been implemented as a grammar for the KPML multilingual generator (Bateman, 1997a, 1997b) which is equipped with a large systemic grammar and all the technical components required in generation, including an input component, a traversal component, a realization component, and so on. This brings direct benefits for us in both theoretical and implementational respects since we could then focus on the linguistic treatment of the Chinese aspects. The paper is organized into five sections. In the next three sections the semantic features of the aspects, the aspect selection functions, and the detailed description of the generation of the 629 aspects will be given. Finally in Section 5, we make a brief conclusion. 2 The semantic features of the aspects One of the methods adopted in aspect studies is to use semantic features to characterize different situations (cf. Comrie, 1976; Smith, 1991, 1997; Olsen, 1997; and Dai, 1997). This is also taken as the basic principle in the present research. For the purpose of characterizing the semantics of an aspect, the features needed are not only those reflecting the properties of situation types, but also those reflecting the temporal relations between the viewing points and the internal temporal constituencies of the situations. When we establish a system of aspects, we say that the features used are necessary and sufficient if different aspects included can be distinguished from each other by means of these features. Consequently, the more aspect expressions are involved, the more aspectual features are needed. Two kinds of aspectual features are proposed in the present research. One kind of aspectual feature can be directly represented in terms of relations holding over time points. These are termed features of temporal relations (FTR). For example, the feature durative, which is used for situations extended in time, can be represented with the temporal relation t1<t2 where t1 and t2 denote two time points bounding the situation. Similarly, the feature punctual (momentary), which is used for situations theoretically taking a moment, can be formally represented by the temporal relation t1=t2. There is then a further kind of aspectual feature which cannot be directly represented by temporal relations, although they may also concern temporal properties of situations. This kind of feature can only be represented by parameters which serve to provide a conceptual classification of the situations involved; therefore, they are termed features of conceptual type (FCP), such as dynamic and repeatable. In addition, there is a special kind of aspectual feature which reflects qualitative properties of temporal relations: far-precede and shortlyprecede. These two features indicate qualitative distances between time points; the former means that one time point is linearly much before another time point on the time axis; the latter means that one time point is only a little before another time point. In specific context, these kinds of qualitative properties are reflected relatively in comparative or inclusive relations between temporal and spatial scopes of situations. Aspectual features are the basic elements to be used for aspect descriptions. The range of aspectual features is not held to be crosslinguistically valid. In the present research, the following aspectual features are used to describe Chinese aspects. The states of relational type formed by the verbs like 是 (shì, be), 有(yǒu, have), 等于(děngyú, equal) etc. 
are associated with relational processes (Halliday, 1985) and therefore not included in the features listed. In the following feature definitions, ti refers to the initial time of a situation, tt the terminating time of a situation, and tr the reference time of an aspect. In the present research, we define the reference time as the time from which the state of a situation with respect to the situation’s internal temporal constituency is contextually examined. (1) durative (FTR): describes situations which take time. It is represented by the temporal relation ti<tt. (2) punctual (FTR): describes situations which theoretically take only a moment of time. It is formally represented by the temporal relation ti=tt. (3) realized (FTR): describes situations which have occurred, have existed, or have shown the property of reality by some specific time. It is represented by the temporal relation tt≤tr. (4) dynamic-state (FCP): describes a durative changing situation. (5) stative-state (FCP): describes a durative unchanging situation associated with the activity meaning of an activity verb. (6) change-of-state (FCP): indicates either the inception or termination of a situation. (7) event (FCP): describes a dynamic situation viewed as a complete whole (Comrie, 1976, p.13) and is aspectually associated with the occurrence, taking place, or completion of the situation. (8) repeatable (FCP): describes situations which can occur repeatedly. (9) specific (FCP): when a time point is specific, it has a particular position on the time axis which can be determined from context. (10) unspecific (FCP): when a time point is unspecific, its position on the time axis is unknown. 630 (11) far-precede (FCP): indicates a qualitative distance, one end point of which is linearly much before another end point. (12) shortly-precede (FCP): indicates a qualitative distance, one end point of which is linearly a little before another end point. (13) excluded (FCP): when one of the end points of a time interval has the feature excluded, the time interval is open at that point. (14) included (FCP): when one of the end points of a time interval has the feature included, the time interval is closed at that point. Concerning the opening and closure of a time interval at its end points, two principles are proposed by the present research. The opening and closure of a time interval at its end points can be determined according to the following principles, which we term exclusiveness principles (ELPs): ELP (1) For the initial time ti: when the initial time ti of the situation is specific, then the time interval at the initial time ti is considered closed; when the initial time ti of the situation is unspecific, then the time interval at the initial time ti is considered open. ELP (2) For the terminating time tt: when the situation does not hold at the terminating time tt, the time interval is considered closed at the terminating time; when the situation still holds at the terminating time tt, the time interval is considered open at the terminating time. As far as the temporal structures of aspects are concerned, there is an extreme case: when the terminating time tt precedes the reference time tr, in which case the time period of the situation is definitely closed at the terminating time tt. The semantic feature telicity indicating that the situation referred to has an internal end point (cf. Vendler, 1967; Comrie, 1976) is not used in the present research for the Chinese aspect descriptions. 
The feature telicity is not an effective feature for characterizing Chinese aspects from a language generation point of view because there is no single aspect of the present aspect system that absolutely requires that the situations expressed be telic or atelic. 3 The aspect selection functions Specific features of temporal relations and the conceptual features together build semantic applicability conditions for each individual aspect. The semantic applicability conditions are represented by the aspect selection function (ASF) of the aspect. The ASF of a specific aspect assumed by the present research is, therefore, principally composed of two sets of predicates: one set of predicates for testing temporal relations (Allen, 1984; Yang & Bateman, 2002), another for testing the values, i.e. conceptual features, of parameters associated with the aspect. All the predicates are connected with conjunctions at the top level. At the lower levels, the logical relations among the predicates can be a conjunction and, a disjunction or and a negation not. To evaluate the truth condition of the ASF for a specific aspect, the values of all relevant temporal relations and parameters are evaluated. When all the predicates are true, i.e., all of the required conditions are met, the value of the ASF is true; otherwise, the value of the ASF is false. In the predicates of the ASFs, there are two kinds of parameters: temporal parameters associated with the time points involved in the temporal structures of the aspects and conceptual parameters associated with the specific conceptual features of the aspects. The conceptual features will be taken as values of the corresponding parameters and represented by EQUAL(p, c), in which ‘p’ refers to a parameter and ‘c’ refers to the conceptual feature associated with that parameter. Some of the parameters are given as follows: (1) STATE-ACTION-PROPERTYp (SAPp): this parameter indicates whether the property of the situation is dynamicstate, stative-state, state, or event. The subscript p denotes Process. (2) CHANGEABILITYp (CBp): this parameter indicates whether the situation has the feature change-of-state. (3) REPEATABILITYp: this parameter indicates whether the situation is repeatable. (4) RETRIEVALt (RTt): this parameter indicates whether the time point t is specific or unspecific. 631 (5) POSITIONt1-t2: this parameter indicates whether the time point t1, which precedes another time point t2, is much before (far-precede) or a little before (shortly-precede) time point t2. (6) EXCLUSIVENESSt (EXLt): this parameter indicates whether the time point t, which is one of the end points of a time interval, has the feature excluded or included. We now take the unmarked-realized (URE) aspect V+了 (V+le) as an example to illustrate the structure of the ASF. The URE aspect V+了 (V+le) is one of the perfective aspects, serving to indicate that the occurrence, development, or change of the situation is realized (not necessarily complete) by some specific time. The temporal structure of the aspect is shown in Figure 1. ti tt=tr Figure 1 The temporal structure of the URE aspect V+了(V+le): {ti, tt}, (ti<tt or ti=tt), tt=tr, RTtr=specific The temporal structure in Figure 1 is explained as follows: The situations expressed in the URE aspect can be either punctual, i.e. ti=tt, or durative, i.e. ti<tt. The time interval of the situation defined by “{ti, tt}” is either closed or open at its ends. 
The feature realized is represented by specifying that the terminating time equals the reference time, i.e. tt=tr, rather than that the terminating time either equals or precedes the reference time, i.e. tt≤tr ― the latter is the general condition for all perfective aspects. In Figure 1, the case of a punctual situation, i.e. ti=tt, is theoretically taken as a very short time period and not explicitly represented. RTtr indicates that the reference time tr is a specific time point. In addition to the temporal relations explained above, the URE aspect V+ 了 (V+le) has prominently three characteristics associated with the situation properties. When the URE aspect V+了(V+le) expresses a durative situation, the situation can be either a state or an event. When the process is of relational type, a change of state should be emphasized. When a change of state is involved in the situation, the URE aspect V+了 (V+le) focuses on the realization of the event, rather than the resultative state, unless current relevance is indicated in context. These three characteristics can be respectively represented by corresponding conceptual features associated with the parameters SAPp, CBp, and PROCESS involved in the predicates of the ASF of the URE aspect as shown in Figure 2. The ASFs are used for the purpose of theoretical descriptions, but also, as we shall see in the next section, give the basis for the implementation of the semantics of the aspects. Fure(ti, tt, tr, RTtr, CBp, SAPp, PROCESS, EXLti, EXLtt) → (AND(OR(SAME(ti, tt)) (PRECEDE(ti, tt))) (SAME(tt, tr)) (EQUAL(RTtr, specific)) (OR(EQUAL(EXLti, included)) (EQUAL(EXLti, excluded))) (OR(EQUAL(EXLtt, included)) (EQUAL(EXLtt, excluded))) (AND(PRECEDE(ti, tt)) (OR(EQUAL(SAPp, state)) (EQUAL(SAPp, event)))) (OR(NOT(EQUAL(PROCESS, relational-process))) (AND(EQUAL(PROCESS, relational-process)) (EQUAL(CBp, change-of- state)))) (OR(EQUAL(CBp, not-change-of-state)) (AND(EQUAL(CBp, change-of- state)) (EQUAL(SAPp, event))))) Figure 2 The ASF of the URE aspect V+了(V+le) 4 The generation of the aspect expressions 4.1 Inquiries, choosers, and the input specifi- cations The present research uses the multilingual generator KPML as its implementation platform and takes Systemic Functional Grammar (SFG) as its theoretical basis. Fourteen primary simple aspects, and twenty-six complex aspects are organized into a hierarchical system network. In a system network, grammatical units are constructed by corresponding traversals of that network. Each path through the network from the root to an end node corresponds to a specific language expression. If we need to produce a specific expression, semantically appropriate choices need to be made so as to follow a path 632 leading to the creation of that expression. The system is guided by the joint actions of the inquiries and choosers of the system (Fawcett, 1987; Matthiessen and Bateman, 1991; Bateman, 2000; Yang & Bateman, 2002). “A chooser is straightforwardly represented as a ‘decision tree’ with a particular kind of inquiry, called a branching inquiry, forming the decision points” (Bateman 1997c, p.20). Inquiries are responsible for finding the answers required by choosers by accessing semantic information represented in input specifications, written in the form of the Sentence Plan Language (SPL) (Kasper, 1989; Bateman, 1997a), and in the knowledge base of the system. The semantics of an aspect associated with a sentence to be generated is represented in the input specification. 
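To make the evaluation procedure concrete, the following is a minimal sketch, in Python rather than the LISP actually used by the implementation, of how the ASF of Figure 2 (written Fure there, i.e. the function for the URE aspect) could be tested against values supplied by an input specification. The predicate names and the structure of the conjunction follow Figure 2; the numeric encoding of time points, the string encoding of conceptual features, and the keyword-argument interface are assumptions of this sketch only.

# Sketch only: evaluating the ASF of the unmarked-realized aspect V+le,
# transcribed from the predicate structure shown in Figure 2.

def SAME(t1, t2):        # the two time points coincide
    return t1 == t2

def PRECEDE(t1, t2):     # t1 lies strictly before t2 on the time axis
    return t1 < t2

def EQUAL(param, value): # a conceptual parameter carries a given feature value
    return param == value

def f_ure(ti, tt, tr, RTtr, CBp, SAPp, PROCESS, EXLti, EXLtt):
    """Truth value of the semantic applicability conditions of V+le."""
    return (
        (SAME(ti, tt) or PRECEDE(ti, tt))                       # punctual or durative
        and SAME(tt, tr)                                        # realized: viewed from tt
        and EQUAL(RTtr, "specific")                             # reference time is specific
        and (EQUAL(EXLti, "included") or EQUAL(EXLti, "excluded"))
        and (EQUAL(EXLtt, "included") or EQUAL(EXLtt, "excluded"))
        # As printed in Figure 2, this conjunct presupposes a durative situation;
        # it is transcribed literally here rather than reinterpreted.
        and (PRECEDE(ti, tt) and (EQUAL(SAPp, "state") or EQUAL(SAPp, "event")))
        and (not EQUAL(PROCESS, "relational-process")
             or (EQUAL(PROCESS, "relational-process") and EQUAL(CBp, "change-of-state")))
        and (EQUAL(CBp, "not-change-of-state")
             or (EQUAL(CBp, "change-of-state") and EQUAL(SAPp, "event")))
    )

# Illustrative call in the spirit of the example of Section 4.2; the numeric
# time points and the "material-process" label are invented for this sketch.
print(f_ure(ti=1, tt=2, tr=2, RTtr="specific", CBp="change-of-state",
            SAPp="event", PROCESS="material-process",
            EXLti="included", EXLtt="included"))    # -> True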
The time points involved in the temporal structure of the aspect to be generated, i.e. the initial time, the terminating time, and the reference time(s), are presented with specific time values in the input specification. The speaking time has a default value corresponding to the present moment. All the parameters characterizing the conceptual features of the aspect to be generated are also included in the input specification. The implemented inquiries, written in LISP, play a crucial role in the generation of the aspect expressions. The implemented inquiries associated with different types of aspects formally define the semantic applicability conditions represented by the ASFs of the aspects. Each implemented inquiry has a set of parameters with specific values to represent temporal relations and conceptual features of a specific aspect. The inquiry is composed of a set of predicates which will have the value T when the conditions defined are satisfied. The truth condition of an inquiry will be met only when all the predicates involved have the value T. Hence, evaluating an implemented inquiry refers to the process of testing the truth conditions of all the predicates involved in the inquiry according to the semantic information represented in the corresponding input specification. In the implemented inquiries, two basic predicates PRECEDE(t1, t2) and SAME(t1, t2) are used to test temporal relations involved in the semantic applicability conditions of different aspects. In the generation, the parameters t1 and t2, are replaced with the values of the initial time ti, the terminating time tt, or the reference time tr, which are given in the input specifications. Logically, given a specific context, the precedence of two points can be determined in terms of concepts PAST, PRESENT, and FUTURE with reference to a relative deictic center. To evaluate the precedence between two time points, nine different time values are defined on the time axis as shown in Figure 3. In Figure 3, the values at-past-present, at-present, and at-future-present correspond to three time points. The other six values correspond to specific intervals on the time axis. The time points within each interval are given a specific time value, as shown below, where “-∞” stands for the infinite past, and “+∞” stands for the infinite future: (-∞, at-present) = at-past; (at-present, +∞) = at-future; (-∞, at-past-present) = at-past-past; (at-past-present, at-present) = at-past-future; (at-present, at-future-present) = at-future-past; (at-future-present, +∞) = at-future-future. The nine qualitative time values defined above build a calculating system for time comparison in the present research. To generate a specific aspect, i.e., from semantics to the surface expression of the aspect, what we need to do is to distribute each time point involved in the temporal structure of the aspect with one of the qualitative time values and to establish appropriate temporal relations between them as to be illustrated in the next section. at-past at-future at-past-past at-past-future at-future-past at-future-future at-past-present at-present at-future-present Figure 3 Nine qualitative values of time on the time axis 633 4.2 An example of generating the aspect expressions In this section we illustrate the generation process with an example. We focus on the generation of the aspect expressions and ignore the generation process for the other sentence constituents. 
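Before turning to the example, the comparison of the qualitative time values of Figure 3 can be sketched as follows. The sketch, again in Python rather than the implemented LISP inquiries, simply orders the seven fine-grained values; the coarse interval values at-past and at-future overlap several of them and would need additional rules that are not worked out here, so the ordering below is an assumption of the sketch rather than a description of the system.

# Sketch only: an ordinal encoding of the fine-grained qualitative time values.
_ORDER = ["at-past-past", "at-past-present", "at-past-future", "at-present",
          "at-future-past", "at-future-present", "at-future-future"]
_RANK = {value: i for i, value in enumerate(_ORDER)}

def same(t1, t2):
    return _RANK[t1] == _RANK[t2]

def precede(t1, t2):
    return _RANK[t1] < _RANK[t2]

# The temporal relations used in the example of Section 4.2:
print(precede("at-past-past", "at-past-future"))   # True: ti precedes tt
print(same("at-past-future", "at-past-future"))    # True: tt equals tr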
Because of the limitation of input associated with relevant files in the implementation, we use numbers 1, 2, 3, and 4 to refer to the four tones of Chinese characters in all the realization statements. For instance, wang1, wang2, wang3, and wang4 refer to wāng, wáng, wăng, and wàng respectively. The neutral tone is not marked by numbers. In the present case, the semantics represented in the input specification is set for the situation that “zhe4 sou1 chuan2 jin1tian1 zhuang1yun4 le yi1 liang4 you3 gu4zhang4 de ka3che1” (The ship loaded an inoperative truck today). The situation happened at a specific time today (jin1tian1) and was finished before the speaking time, i.e., the present moment. The situation refers to an event rather than a state. The process of loading the truck took a period of time; and the realization of the situation is focused. Our aim now is to generate an appropriate aspect expression for this particular loading situation by applying the semantic information represented in the input specification. The aspect-related semantic information in the input specification is as follows. Because the situation was finished before the present moment, we can consider that both the initial time ti and the terminating time tt precede the present moment. Because the situation took a period of time, the initial time ti is regarded as preceding the terminating time tt. Because the realization of the situation is focused, the reference time tr is considered as being located at the terminating time. Representing these temporal relations with our qualitative time values illustrated in Figure 3, we have the following: SPEAKINGTIME has the value at-present which is a default value. REFERENCETIME has the value at-past-future and is a specific time point. INITIALTIME, with respect to other relevant times, has the value atpast-past, indicating the occurrence time of the situation. The value of TERMINATINGTIME equals that of REFERENCETIME, i.e. at-pastfuture. This means that the state of the situation indicated by the aspect to be generated is viewed from the terminating time. Theoretically, the reference time of an aspect, i.e., the viewing point of the aspect, establishes specific temporal relations with the internal temporal constituency of the situation reflecting what the speaker focuses on when s/he views the state of a situation. In our present case, the reference time is placed at the terminating time of the situation, indicating that the speaker’s focus is on the termination or completion of the situation. If REFERENCE-TIME has other values, e.g., atpresent, which is after the terminating time, it indicates that the focus of the speaker is on either the recent past or experiential meaning of the situation, rather than on the termination or completion of the situation. Correspondingly, they show temporal structures of different aspects. In the input specification, several parameters are also used to represent the conceptual features of the aspect. The conceptual features and the corresponding parameters define the space of possible aspect-related semantic variation: this shows precisely which facets of aspectual semantics are grammaticised in the language; the particular grammatical consequences are then distributed over the grammatical choice points defined in the grammatical component. When using this for generation, any given situation to be expressed must be ‘re-conceptualized’ in terms of parameters provided. 
This should be done by the user interested in investigating the grammatical realizations of distinct temporal relations. The parameter CHANGEABLITY has the value change-of-state, indicating the completion of the situation. The value of the parameter STATE-ACTION-PROPERTY is event, indicating that the situation is not of type state and can be viewed as a whole. Both the parameter EXCLUSIVENESS-TI and the parameter EXCLUSIVENESS-TT have the value included, indicating that the time interval over which the situation holds is closed at its two end points. This means that the situation occurred at some specific time and finished. The parameter REPEATABILITY has the value irrelevant, indicating not being related to any particular conceptual feature. Referring to the semantics above, we follow the system traversal to generate an aspect expression by evaluating the relevant inquiries. The traversal starts from the system of WITHASPECT-MARKER-TYPE and needs to make a choice among its three options: imperfective, perfective, and imminent. Corresponding to the definition of aspect in the present research, perfective, imperfective, and imminent aspects are interpreted in the following ways: perfective 634 is the way of viewing the states of a situation with respect to its internal temporal constituency from outside the situation structure: the viewing point of the aspect is after or equal to the terminating time, i.e., tt≤tr; imperfective is the way of viewing the states of a situation with respect to its internal temporal constituency inside the situation structure: the viewing point of the aspect ranges from the initial time, including the initial time, to the terminating time of the situation, i.e., ti≤tr<tt; imminent is the way of viewing the occurrence of a situation from outside the situation structure and with the viewing point shortly before the initial time of the situation, i.e., tr<ti and Ptr-ti=shortly-precede. The temporal relations of the perfective, imperfective, and imminent aspects are captured by specifying appropriate values for the inquiries named perfective-q-code, imperfective-q-code, and imminent-q-code respectively. When operating within the context of a full generation system, these values would generally be provided via the results of text planning in the usual manner. The with-aspect-marker-type chooser, which takes the form of a decision tree as described in section 4.1, is in charge of making the selection by asking relevant inquiries to see what type of aspect has the semantic applicability conditions which match the semantic inputs represented in the input specification. The fine classification and distinct semantic descriptions of different aspects are sufficient to constrain choice regardless of their particular order of application. Therefore, alternative implementations of the choosers, such as specifications of feature vectors, could be envisioned. Possible consequences of such changes for the other components of the generation architecture would then need to be considered, however. Because in the present case both the terminating time tt and the reference time tr have the value at-pastfuture that meets the temporal condition required by the perfective aspects, the option perfective is chosen and the system of PERFECTIVE-PRIMARY-TYPE is entered. 
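A rough rendering of the aspect-related input for this example, together with the temporal tests that separate the three top-level options, might look as follows. The dictionary format and field names are invented for this sketch (the system itself reads SPL input specifications), and only the temporal conditions quoted above (tt≤tr for perfective, ti≤tr<tt for imperfective, and tr<ti with shortly-precede for imminent) are taken from the text.

# Sketch only: the example's aspect-related input and the top-level choice.
RANK = {"at-past-past": 0, "at-past-present": 1, "at-past-future": 2, "at-present": 3,
        "at-future-past": 4, "at-future-present": 5, "at-future-future": 6}

spec = {
    "speaking-time":          "at-present",      # default value
    "initial-time":           "at-past-past",    # ti
    "terminating-time":       "at-past-future",  # tt
    "reference-time":         "at-past-future",  # tr
    "retrieval-tr":           "specific",        # tr is a specific time point
    "changeability":          "change-of-state",
    "state-action-property":  "event",
    "exclusiveness-ti":       "included",
    "exclusiveness-tt":       "included",
    "repeatability":          "irrelevant",
}

def with_aspect_marker_type(spec):
    ti = RANK[spec["initial-time"]]
    tt = RANK[spec["terminating-time"]]
    tr = RANK[spec["reference-time"]]
    if tt <= tr:          # viewing point at or after the terminating time
        return "perfective"
    if ti <= tr < tt:     # viewing point inside the situation structure
        return "imperfective"
    if tr < ti:           # viewing point before the initial time; a full treatment
        return "imminent" # would also test the shortly-precede distance
    return None

print(with_aspect_marker_type(spec))   # -> 'perfective'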
After entering the system of PERFECTIVEPRIMARY-TYPE, a selection among three options recent-past (the REP aspect V+(NP)+lai2zhe), experiential (the unmarkedexperiential aspect V+guo and the markedexperiential aspect ceng2jing1+V+(guo)), and realized (the URE aspect V+le and the perfect aspect yi3jing+V+(le)) has to be made. The perfective-primary-type chooser is responsible for making this choice. Accordingly, the chooser firstly evaluates the inquiries named recent-pastq-code and experiential-q-code respectively. The recent-past (REP) aspect V+(NP)+lai2zhe serves to indicate that a durative situation existed not long ago. The semantic applicability conditions represented by the ASF of the REP aspect V+(NP)+lai2zhe include the following: the situation expressed by the aspect shows the feature durative which can be represented in the temporal relation ti<tt; the terminating time of the situation precedes the reference time, i.e., tt<tr; and the reference time tr is a specific time point. A further condition required is represented by the parameter Ptt-tr with the value shortly-precede, indicating the qualitative distance from tt to tr. The parameter EXL has the value excluded and included for the initial time ti and the terminating time tt respectively. After evaluating the inquiry of recent-past-qcode, the perfective-primary-type chooser gives a negative result, indicating that the semantics presented by the input specification does not match the semantic applicability conditions of the REP aspect. One obvious mismatch is reflected in the temporal relation between the terminating time tt and the reference time tr. The condition given by the input specification is tt=tr, while the condition required by the REP aspect is tt<tr. After failing to select the REP aspect, the perfective-primary-type chooser continues to evaluate the inquiry associated with the experiential aspects. The experiential aspects include the unmarked-experiential (UEX) aspect V+guo and the marked-experiential (MEX) aspect ceng2jing1+V+(guo). Although the two experiential aspects have some differences in usage (cf. Yang, 2007), they have the same aspectual function to indicate that a situation existed at least once in the past and was over, not having current relevance. The semantic applicability conditions shared by the two experiential aspects are: the terminating time tt precedes the reference time tr; the situation referred to has the feature repeatable; the parameter EXLti has the value either excluded or included; the parameter EXLtt has the value included. Similarly to the failure of selecting the REP aspect elaborated above, the evaluation of the 635 inquiry of experiential-q-code will also fail because the semantic applicability conditions of the experiential aspects do not meet the semantic information shown in the input specification. Except for the mismatch of the temporal relations, the conceptual feature repeatable required by the experiential aspects is also absent in the input specification. When both the REP aspect and the experiential aspects have been excluded, the perfective-primary-type chooser selects aspects of realized type, and then the traversal enters the system of REALIZED-TYE, then a further selection between the URE aspect V+le and the PEF aspect yi3jing+V+(le) has to be made. The realized-type chooser is responsible for making this selection. 
To make the selection, the realized-type chooser firstly evaluates the inquiry unmarked-realized-q-code to check whether the semantic applicability conditions of the URE aspect V+le can be met. The inquiry unmarkedrealized-q-code is defined according to the ASF of the URE aspect as shown in Figure 2. The realized-type chooser evaluates the inquiry unmarked-realized-q-code by comparing the input semantics with the semantic applicability conditions of the URE aspect. The evaluation of the unmarked-realized inquiry succeeds because all the predicates of the unmarked-realized-qcode give the value T (Due to the space limit, we will not describe the whole process of evaluation in detail here). Hence, according to the algorithm of the realized-type chooser, the URE aspect V+le should be chosen and the perfect-q-code does not need to be evaluated. The generated sentence, marked up to show its constituency, is then as follows: ((zhe4/这)(sou1/艘)(chuan2/船)) ((jin1tian1/今天)) this CL ship today (zhuang1yun4/装运) (le/了) ((yi1/一) (liang4/辆) load URE one CL ((you3 gu4zhang4/有故障)(de/的)) have problem of (ka3che1/卡车.)) truck (The ship loaded an inoperative truck today.) 5 Conclusion With the method elaborated above, a test-bed of forty aspect expressions of the Chinese aspect system has been correctly generated in the forms of both Chinese phonetic alphabet and characters. In the present research the application of the ASFs provides a formal way to represent semantic applicability conditions of the aspects; the grammatical network built on the basis of systemic functional grammar systematically organizes and distinguishes semantic functions of different aspects. The computational implementation verifies both grammatical organization and semantic descriptions of the Chinese aspects. The complete system files and the sentences generated are available on the website: “http://www.fb10.uni- bremen.de/anglistik/langpro/kpml/ genbank/chinese.htm”. Acknowledgement We thank Peter Lang Publisher for allowing us to use the relevant contents of the book (Yang, 2007) in this article. We also thank the anonymous reviewers for their valuable comments and revision suggestions for the manuscript. References Allen, J.F. (1984) Towards a General Theory of Action and Time, Artificial Intelligence, 1984, 23, p.123-154 Bateman, J.A. 1997a. Enabling technology for multilingual natural language generation: the KPML development. Natural Language Engineering, 3(1), pp.15-55. Bateman, J.A. 1997b. KPML Development Environment: multilingual linguistic resource development and sentence generation. (Release 1.1). GMD-Studie Number 304. German National Center for Information Technology (GMD), Sankt Augustin, Germany. Bateman, J.A. 1997c. Sentence generation and systemic grammar: an introduction. English draft written for: Iwanami Lecture Series: Language Sciences, Volume 8. Tokyo: Iwanami Shoten Publisher (in Japanese). Bateman, J.A. 2000. Multilinguality and multifunctionality in linguistic description and some possible applications. Sprachtypol. Univ. Forsch. (STUF), Berlin 53 (2000) 2, pp.131-154. Bestougeff, H. and G. Ligozat. Translator: I.D. Alexander-Craig. 1992. Logical Tools for Temporal Knowledge Representation. Ellis Horwood Limited. England. Comrie, Bernard. 1976. Aspect. Cambridge, England: Cambridge University Press. 636 Dai, Yaojing. 1997. 《现代汉语时体研究》 (The Study of Chinese Aspect). Zhejiang Education Press. Dowty, David R. 1979. Word Meaning and Montague Grammar. Dordrecht: Reidel. Fawcett, R.P. 1987. 
System networks in the lexicalgrammar. In Halliday, M.A.K., and Fawcett, R.P. (eds.) New developments in systemic linguistics Vol 1: Theory and description. London: Pinter. Halliday, M.A.K. 1994 (second edition of 1985). An Introduction to Functional Grammar (second edition). London: Edward Arnold. Kasper, Robert T. 1989. A flexible interface for linking applications to PENMAN’s sentence generator. In Proceedings of the DARPA Workshop on Speech and Natural Languages. Available from USC/Information Sciences Institute, Marina del Rey, CA. Lee, Hsi-Jian and Hsu, Ren-Rong. 1990. An ERS model for tense and aspect information in Chinese sentences. In Proceedings of ROCLING III, R.O.C. Computational Linguistics Conference III. Taipei: Tsing Hua University. pp.213-234. Li, Wenjie, Kam-Fai Wong, and Chunfa Yuan. 2001. A model for processing temporal references in Chinese. In Proceedings of ACL’2001 Workshop on Temporal and Spatial Information Processing, Toulouse, France. pp.33-40. Matthiessen, M.I.M. and Bateman, J.A. 1991. Text generation and Systemic-Functional Linguistics, Experiences from English and Japanese. Pinter Publishers, London. Montague, R. 1970. English as a formal language. In Richmond H. Thomason (ed.) 1974. Formal Philosophy, Selected Paper of Richard Montague. Yale University Press. New Haven and London. pp.188-221. Olsen, Mari B. 1997. A Semantic and Pragmatic Model of Lexical and Grammatical Aspect. Garland Publishing, Inc. Portner, P. and B.H. Partee. 2002. Formal Semantics. Blackwell Publishers Ltd. Smith, C.S. 1997 (second edition of 1991). The Parameter of Aspect. Kluwer Academic Publishers. Vendler, Zeno. 1967. Linguistics in Philosophy. Ithaca: Cornell University Press. Xue, Nianwen. 2008. Automatic inference of the temporal location of situations in Chinese text. In Proceedings of EMNLP-2008. Waikiki, Honolulu, Hawaii. Xue, Nianwen, Hua Zhong, and Kai-Yun Chen. 2008. Annotating “tense” in a tense-less language. In Proceedings of LREC 2008. Marrakesh. Morocco. Yang, Guowen and J.A. Bateman. 2002. The Chinese aspect system and its semantic interpretation. In Shu-Chuan Tseng (ed.) Proceedings of the 19th International Conference on Computational Linguistics (COLING-2002). August 26-30, Taipei. ISBN 1-55860-894-X, Morgan Kaufmann Publishers, Vol. 2, pp. 1128-1134. Yang, Guowen 2007 The Semantics of Chinese Aspects — Theoretical Descriptions and a Computational Implementation. Peter Lang. Frankfurt am Main. 637
2009
71
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 638–646, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Quantitative modeling of the neural representation of adjective-noun phrases to account for fMRI activation Kai-min K. Chang1 Vladimir L. Cherkassky2 Tom M. Mitchell3 Marcel Adam Just2 Language Technologies Institute1 Center for Cognitive Brain Imaging2 Machine Learning Department3 Carnegie Mellon University Pittsburgh, PA 15213, U.S.A. {kkchang,cherkassky,tom.mitchell,just}@cmu.edu Abstract Recent advances in functional Magnetic Resonance Imaging (fMRI) offer a significant new approach to studying semantic representations in humans by making it possible to directly observe brain activity while people comprehend words and sentences. In this study, we investigate how humans comprehend adjective-noun phrases (e.g. strong dog) while their neural activity is recorded. Classification analysis shows that the distributed pattern of neural activity contains sufficient signal to decode differences among phrases. Furthermore, vector-based semantic models can explain a significant portion of systematic variance in the observed neural activity. Multiplicative composition models of the two-word phrase outperform additive models, consistent with the assumption that people use adjectives to modify the meaning of the noun, rather than conjoining the meaning of the adjective and noun. 1 Introduction How humans represent meanings of individual words and how lexical semantic knowledge is combined to form complex concepts are issues fundamental to the study of human knowledge. There have been a variety of approaches from different scientific communities trying to characterize semantic representations. Linguists have tried to characterize the meaning of a word with feature-based approaches, such as semantic roles (Kipper et al., 2006), as well as word-relation approaches, such as WordNet (Miller, 1995). Computational linguists have demonstrated that a word’s meaning is captured to some extent by the distribution of words and phrases with which it commonly co-occurs (Church & Hanks, 1990). Psychologists have studied word meaning through feature-norming studies (Cree & McRae, 2003) in which human participants are asked to list the features they associate with various words. There are also efforts to recover the latent semantic structure from text corpora using techniques such as LSA (Landauer & Dumais, 1997) and topic models (Blei et al., 2003). Recent advances in functional Magnetic Resonance Imaging (fMRI) provide a significant new approach to studying semantic representations in humans by making it possible to directly observe brain activity while people comprehend words and sentences. fMRI measures the hemodynamic response (changes in blood flow and blood oxygenation) related to neural activity in the human brain. Images can be acquired at good spatial resolution and reasonable temporal resolution – the activity level of 15,000 - 20,000 brain volume elements (voxels) of about 50 mm3 each can be measured every 1 second. Recent multivariate analyses of fMRI activity have shown that classifiers can be trained to decode which of several visually presented objects or object categories a person is contemplating, given the person’s fMRImeasured neural activity (Cox and Savoy, 2003; O'Toole et al., 2005; Haynes and Rees, 2006; Mitchell et al., 2004). Furthermore, Mitchell et al. 
(2008) showed that word features computed from the occurrences of stimulus words (within a trillion-token Google text corpus that captures the typical use of words in English text) can predict the brain activity associated with the meaning of these words. They developed a generative model that is capable of predicting fMRI neural activity well enough that it can successfully match words it has not yet encountered to their previously unseen fMRI images with accuracies far above chance level. The distributed pattern of neural activity encodes the meanings of words, and the model's success indicates some initial access to the encoding. Given these early successes in using fMRI to discriminate categorial information and to model lexical semantic representations of individual words, it is interesting to ask whether a similar approach can be used to study the representation of adjective-noun phrases. In this study, we applied the vector-based models of semantic composition used in computational linguistics to model neural activation patterns obtained while subjects comprehended adjective-noun phrases. In an object-contemplation task, human participants were presented with 12 text labels of objects (e.g. dog) and were instructed to think of the same properties of the stimulus object consistently during multiple presentations of each item. The participants were also shown adjective-noun phrases, where adjectives were used to modify the meaning of nouns (e.g. strong dog). Mitchell and Lapata (2008) presented a framework for representing the meaning of phrases and sentences in vector space. They discussed how an additive model, a multiplicative model, a weighted additive model, a Kintsch (2001) model, and a model which combines multiplicative and additive models can be used to model human behavior in similarity judgements when human participants were presented with a reference containing a subject-verb phrase (e.g., horse ran) and two landmarks (e.g., galloped and dissolved) and asked to choose which landmark was most similar to the reference (in this case, galloped). They compared the composition models to human similarity ratings and found that all models were statistically significantly correlated with human judgements. Moreover, the multiplicative and combined models performed significantly better than the non-compositional models. Our approach is similar to that of Mitchell and Lapata (2008) in that we compared additive and multiplicative models to non-compositional models in terms of their ability to model human data. Our work differs from these efforts because we focus on modeling neural activity while people comprehend adjective-noun phrases. In section 2, we describe the experiment and how functional brain images were acquired. In section 3, we apply classifier analysis to see if the distributed pattern of neural activity contains sufficient signal to discriminate among phrases. In section 4, we discuss a vector-based approach to modeling the lexical semantic knowledge using word occurrence measures in a text corpus. Two composition models, namely the additive and the multiplicative models, along with two non-composition models, namely the adjective and the noun models, are used to explain the systematic variance in neural activation. Section 5 distinguishes between two types of adjectives that are used in our stimuli: attribute-specifying adjectives and object-modifying adjectives. Classifier analysis suggests people interpret the two types of adjectives differently.
Finally, we discuss some of the implications of our work and suggest some future studies. 2 Brain Imaging Experiments on Adjective-Noun Comprehension 2.1 Experimental Paradigm Nineteen right-handed adults (aged between 18 and 32) from the Carnegie Mellon community participated and gave informed consent approved by the University of Pittsburgh and Carnegie Mellon Institutional Review Boards. Four additional participants were excluded from the analysis due to head motion greater than 2.5 mm. The stimuli were text labels of 12 concrete nouns from 4 semantic categories with 3 exemplars per category. The 12 nouns were bear, cat, dog (animal); bottle, cup, knife (utensil); carrot, corn, tomato (vegetable); airplane, train, and truck (vehicle; see Table 1). The fMRI neural signatures of these objects have been found in previous studies to elicit different neural activity. The participants were also shown each of the 12 nouns paired with an adjective, where the adjectives are expected to emphasize certain semantic properties of the nouns. For instance, in the case of strong dog, the adjective is used to emphasize the visual or physical aspect (e.g. muscular) of a dog, as opposed to the behavioral aspects (e.g. play, eat, petted) that people more often associate with the term. Notice that the last three adjectives in Table 1 are marked by asterisks to denote they are object-modifying adjectives. These adjectives appear to behave differently from the ordinary attribute-specifying adjectives. Section 5 is devoted to discussing the different adjective types in more detail. 639 Adjective Noun Category Soft Bear Animal Large Cat Animal Strong Dog Animal Plastic Bottle Utensil Small Cup Utensil Sharp Knife Utensil Hard Carrot Vegetable Cut Corn Vegetable Firm Tomato Vegetable Paper* Airplane Vehicle Model* Train Vehicle Toy* Truck Vehicle Table 1. Word stimuli. Asterisks mark the object-modifying adjectives, as opposed to the ordinary attribute-specifying adjectives. To ensure that participants had a consistent set of properties to think about, they were each asked to generate and write a set of properties for each exemplar in a session prior to the scanning session (such as “4 legs, house pet, fed by me” for dog). However, nothing was done to elicit consistency across participants. The entire set of 24 stimuli was presented 6 times during the scanning session, in a different random order each time. Participants silently viewed the stimuli and were asked to think of the same item properties consistently across the 6 presentations of the items. Each stimulus was presented for 3s, followed by a 7s rest period, during which the participants were instructed to fixate on an X displayed in the center of the screen. There were two additional presentations of fixation, 31s each, at the beginning and end of each session, to provide a baseline measure of activity. 2.2 Data Acquisition and Processing Functional images were acquired on a Siemens Allegra 3.0T scanner (Siemens, Erlangen, Germany) at the Brain Imaging Research Center of Carnegie Mellon University and the University of Pittsburgh using a gradient echo EPI pulse sequence with TR = 1000 ms, TE = 30 ms, and a 60° flip angle. Seventeen 5-mm thick oblique-axial slices were imaged with a gap of 1mm between slices. The acquisition matrix was 64 x 64 with 3.125 x 3.125 x 5-mm voxels. Data processing were performed with Statistical Parametric Mapping software (SPM2, Wellcome Department of Cognitive Neurology, London, UK; Friston, 2005). 
The data were corrected for slice timing, motion, and linear trend, and were temporally smoothed with a high-pass filter using a 190s cutoff. The data were normalized to the MNI template brain image using a 12parameter affine transformation and resampled to 3 x 3 x 6-mm3 voxels. The percent signal change (PSC) relative to the fixation condition was computed for each item presentation at each voxel. The mean of the four images (mean PSC) acquired within a 4s window, offset 4s from the stimulus onset (to account for the delay in hemodynamic response), provided the main input measure for subsequent analysis. The mean PSC data for each word presentation were further normalized to have mean zero and variance one to equate the variation between participants over exemplars. Due to the inherent limitations in the temporal properties of fMRI data, we consider here only the spatial distribution of the neural activity after the stimuli are comprehended and do not attempt to model the cogntive process of comprehension. 3 Does the distribution of neural activity encode sufficient signal to classify adjective-noun phrases? 3.1 Classifier Analysis We are interested in whether the distribution of neural activity encodes sufficient signal to decode both nouns and adjective-noun phrases. Given the observed neural activity when participants comprehended the adjective-noun phrases, Gaussian Naïve Bayes classifiers were trained to identify cognitive states associated with viewing stimuli from the evoked patterns of functional activity (mean PSC). For instance, the classifier would predict which of the 24 exemplars the participant was viewing and thinking about. Separate classifiers were also trained for classifying the isolated nouns, the phrases, and the 4 semantic categories. Since fMRI acquires the neural activity at 15,000 – 20,000 distinct voxel locations, many of which might not exhibit neural activity that encodes word or phrase meaning, the classifier analysis selected the voxels whose responses to the 24 different items were most stable across presentations. Voxel stability was computed as the average pairwise correlation between 24 item vectors across presentations. The focus on the most stable voxels effectively increased the signal-to-noise ratio in the data and facilitated further analysis by classifiers. Many of our previous analyses have indicated that 120 voxels is a set size suitable for our purposes. 640 Classification results were evaluated using 6fold cross validation, where one of the 6 repetitions was left out for each fold. The voxel selection procedure was performed separately inside each fold, using only the training data. Since multiple classes were involved, rank accuracy was used (Mitchell et al., 2004) to evaluate the classifier. Given a new fMRI image to classify, the classifier outputs a rank-ordered list of possible class labels from most to least likely. The rank accuracy is defined as the percentile rank of the correct class in this ordered output list. Rank accuracy ranges from 0 to 1. Classification analysis was performed separately for each participant, and the mean rank accuracy was computed over the participants. 3.2 Results and Discussion Table 2 shows the results of the exemplar-level classification analysis. All classification accuracies were significantly higher than chance (p < 0.05), where the chance level for each classification is determined based on the empirical distribution of rank accuracies for randomly generated null models. 
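For readers who want a concrete picture of this evaluation machinery, the following is a small illustrative sketch, using numpy and synthetic data, of stability-based voxel selection and of one plausible way to compute rank accuracy. The exact normalization of the percentile rank, the array layout, and all variable names are assumptions of the sketch, not a description of the authors' code, and the same rank-accuracy function could be applied to permuted labels to obtain an empirical chance distribution.

import numpy as np

def voxel_stability(data):
    # data: presentations x items x voxels array of mean PSC values,
    # e.g. shape (6, 24, n_voxels) for this experiment.
    n_pres, n_items, n_vox = data.shape
    pairs = [(i, j) for i in range(n_pres) for j in range(i + 1, n_pres)]
    stability = np.zeros(n_vox)
    for v in range(n_vox):
        profiles = data[:, :, v]   # each row: the 24-item response on one presentation
        stability[v] = np.mean([np.corrcoef(profiles[i], profiles[j])[0, 1]
                                for i, j in pairs])
    return stability

def rank_accuracy(scores, true_labels):
    # scores: trials x classes, higher means more likely; returns the mean
    # percentile rank of the correct class, scaled so chance is about 0.5.
    n_trials, n_classes = scores.shape
    accs = []
    for t in range(n_trials):
        order = np.argsort(-scores[t])                       # best class first
        rank = int(np.where(order == true_labels[t])[0][0])  # position of true class
        accs.append(1.0 - rank / (n_classes - 1))
    return float(np.mean(accs))

# Toy usage with random data standing in for real fMRI images.
rng = np.random.default_rng(0)
demo = rng.standard_normal((6, 24, 50))
keep = np.argsort(-voxel_stability(demo))[:10]    # indices of the most stable voxels
scores = rng.standard_normal((5, 24))
labels = rng.integers(0, 24, size=5)
print(keep.shape, round(rank_accuracy(scores, labels), 2))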
One hundred null models were generated by permuting the class labels. The classifier was able to distinguish among the 24 exemplars with mean rank accuracies close to 70%. We also determined the classification accuracies separately for nouns only and phrases only, training distinct classifiers for each. Classification accuracies were significantly higher (p < 0.05) for the nouns, calculated with a paired t-test. For 3 participants, the classifier did not achieve reliable classification accuracies for the phrase stimuli. Moreover, we determined the classification accuracies separately for each semantic category of stimuli. There were no significant differences in accuracy across categories, except for the difference between vegetables and vehicles.

Classifier          Racc
All 24 exemplars    0.69
Nouns               0.71
Phrases             0.64
Animals             0.67
Tools               0.66
Vegetables          0.65
Vehicles            0.69

Table 2. Rank accuracies for classifiers. Distinct classifiers were trained to distinguish all 24 exemplars, nouns only, phrases only, and only words within each of the 4 semantic categories.

High classification accuracies indicate that the distributed pattern of neural activity does encode sufficient signal to discriminate differences among stimuli. The classification accuracy for the nouns was on par with previous research, providing a replication of previous findings (Mitchell et al., 2004). The classifiers performed better on the nouns than on the phrases, consistent with our expectation that characterizing phrases is more difficult than characterizing nouns in isolation. It is easier for participants to recall properties associated with a familiar object than to comprehend a noun whose meaning is further modified by an adjective. The classification analysis also helps us to identify participants whose mental representations for phrases are consistent across phrase presentations. Subsequent regression analysis on phrase activation will be based on subjects who perform the phrase task well.

4 Using vector-based models of semantic representation to account for the systematic variance in neural activity

4.1 Lexical Semantic Representation

Computational linguists have demonstrated that a word's meaning is captured to some extent by the distribution of words and phrases with which it commonly co-occurs (Church and Hanks, 1990). Consequently, Mitchell et al. (2008) encoded the meaning of a word as a vector of intermediate semantic features computed from its co-occurrences with stimulus words in the Google trillion-token text corpus, which captures the typical use of words in English text. Motivated by existing conjectures regarding the centrality of sensory-motor features in neural representations of objects (Caramazza and Shelton, 1998), they selected a set of 25 semantic features defined by 25 verbs: see, hear, listen, taste, smell, eat, touch, rub, lift, manipulate, run, push, fill, move, ride, say, fear, open, approach, near, enter, drive, wear, break, and clean. These verbs generally correspond to basic sensory and motor activities, actions performed on objects, and actions involving changes in spatial relationships. Because there are only 12 stimuli in our experiment, we consider only 5 sensory verbs (see, hear, smell, eat, and touch) to avoid overfitting with the full set of 25 verbs. Following the work of Bullinaria and Levy (2007), we consider the "basic semantic vector", which normalizes n(c,t), the count of times context word c occurs within a window of 5 words around the target word t.
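As an illustration of how such counts can be collected (the study used the Google trillion-token corpus; the tokenized-list interface and the example target/context sets below are our own simplification):

import numpy as np

def cooccurrence_counts(tokens, targets, contexts, window=5):
    # n(c, t): number of times context word c occurs within `window` tokens
    # of an occurrence of target word t.
    t_idx = {w: i for i, w in enumerate(targets)}
    c_idx = {w: i for i, w in enumerate(contexts)}
    counts = np.zeros((len(contexts), len(targets)))
    for i, w in enumerate(tokens):
        if w not in t_idx:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for c in tokens[lo:i] + tokens[i + 1:hi]:
            if c in c_idx:
                counts[c_idx[c], t_idx[w]] += 1
    return counts

# Hypothetical usage:
# counts = cooccurrence_counts(corpus_tokens, targets=["dog", "strong"],
#                              contexts=["see", "hear", "smell", "eat", "touch"])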
The basic semantic vector is thus the vector of conditional probabilities

p(c | t) = p(c, t) / p(t) = n(c, t) / Σ_c' n(c', t),

where all components are positive and sum to one. Table 3 shows the semantic representation for strong and dog. Notice that strong is heavily loaded on see and smell, whereas dog is heavily loaded on eat and see, consistent with the intuitive interpretation of these two words.

         See    Hear   Smell   Eat    Touch
Strong   0.63   0.06   0.26    0.03   0.03
Dog      0.34   0.06   0.05    0.54   0.02

Table 3. The lexical semantic representation for strong and dog.

4.2 Semantic Composition

We adopt the vector-based semantic composition models discussed in Mitchell and Lapata (2008). Let u and v denote the meaning of the adjective and noun, respectively, and let p denote the composition of the two words in vector space. We consider two non-composition models, the adjective model and the noun model, as well as two composition models, the additive model and the multiplicative model. The adjective model assumes that the meaning of the composition is the same as the adjective: p = u. The noun model assumes that the meaning of the composition is the same as the noun: p = v. The adjective model and the noun model correspond to the assumption that when people comprehend phrases, they focus exclusively on one of the two words; they serve as baselines for comparison with the other models. The additive model assumes that the meaning of the composition is a linear combination of the adjective and noun vectors: p = A · u + B · v, where A and B are vectors of weighting coefficients and · denotes element-wise multiplication. The multiplicative model assumes that the meaning of the composition is the element-wise product of the two vectors: p = C · u · v. Mitchell and Lapata (2008) fitted the parameters of the weighting vectors A, B, and C, whereas we assume A = B = C = 1, since we are interested in the model comparison. Also, there are no model complexity issues, since the number of parameters in the four models is the same.

More critically, the additive model and the multiplicative model correspond to different cognitive processes. On the one hand, the additive model assumes that people concatenate the meanings of the two words when comprehending phrases. On the other hand, the multiplicative model assumes that the contribution of u is scaled to its relevance to v, or vice versa. Notice that the former assumption of the multiplicative model corresponds to the modifier-head interpretation, where adjectives are used to modify the meaning of nouns. To foreshadow our results, we found the modifier-head interpretation of the multiplicative model to best account for the neural activity observed in adjective-noun phrase data.

Table 4 shows the semantic representation for strong dog under each of the four models. Although the multiplicative model appears to have small loadings on all features, the relative distribution of loadings still encodes sufficient information, as our later analysis will show. Notice how the additive model concatenates the meaning of the two words and is heavily loaded on see, eat, and smell, whereas the multiplicative model zeros out unshared features like eat and smell. As a result, the multiplicative model predicts that the visual aspects will be emphasized when a participant is thinking about strong dog, while the additive model predicts that, in addition, the behavioral aspects (e.g., eat, smell, and hear) of dog will be emphasized.
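A minimal sketch of these four composition operations, applied to the vectors of Table 3 (the inputs are the rounded values from the table, so the output reproduces Table 4 only up to rounding):

import numpy as np

# Semantic vectors over (see, hear, smell, eat, touch), from Table 3.
u = np.array([0.63, 0.06, 0.26, 0.03, 0.03])   # strong (adjective)
v = np.array([0.34, 0.06, 0.05, 0.54, 0.02])   # dog (noun)

compositions = {
    "Adj":   u,          # adjective model: p = u
    "Noun":  v,          # noun model: p = v
    "Add":   u + v,      # additive model with A = B = 1
    "Multi": u * v,      # multiplicative (element-wise) model with C = 1
}
for name, p in compositions.items():
    print(name, np.round(p, 2))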
         See    Hear   Smell   Eat    Touch
Adj      0.63   0.06   0.26    0.03   0.03
Noun     0.34   0.06   0.05    0.54   0.02
Add      0.96   0.12   0.31    0.57   0.04
Multi    0.21   0.00   0.01    0.01   0.00

Table 4. The semantic representation for strong dog under the adjective, noun, additive, and multiplicative models.

Notice that these 4 vector-based semantic composition models ignore word order. This corresponds to the bag-of-words assumption, such that the representation for strong dog will be the same as that of dog strong. The bag-of-words model is used as a simplifying assumption in several semantic models, including LSA (Landauer & Dumais, 1997) and topic models (Blei et al., 2003).

There were two main hypotheses that we tested. First, people usually regard the noun in the adjective-noun pair as the linguistic head; therefore, the meaning associated with the noun should be more strongly evoked. Thus, we predicted that the noun model would outperform the adjective model. Second, people more often make interpretations that use the adjective to modify the meaning of the noun, rather than disjunctive interpretations that add together or take the union of the semantic features of the two words. Thus, we predicted that the multiplicative model would outperform the additive model.

4.3 Regression Fit

In this analysis, we train a regression model to fit the activation profile for the 12 phrase stimuli. We focused on subjects for whom the classifier established reliable classification accuracies for the phrase stimuli. The regression model examined to what extent the semantic feature vectors (explanatory variables) can account for the variation in neural activity (response variable) across the 12 stimuli. All explanatory variables were entered into the regression model simultaneously. More precisely, the predicted activity a_v at voxel v in the brain for word w is given by

a_v = Σ_{i=1}^{n} β_vi f_i(w) + ε_v,

where f_i(w) is the value of the ith intermediate semantic feature for word w, β_vi is the regression coefficient that specifies the degree to which the ith intermediate semantic feature activates voxel v, and ε_v is the model's error term representing the unexplained variation in the response variable. Least squares estimates of β_vi were obtained to minimize the sum of squared errors in reconstructing the training fMRI images. An L2 regularization with lambda = 1.0 was added to prevent overfitting, given the high parameter-to-data-point ratio. A regression model was trained for each of the 120 voxels, and the reported R2 is the average across the 120 voxels. R2 measures the amount of systematic variance explained by the model. Regression results were evaluated using 6-fold cross validation, where one of the 6 repetitions was left out for each fold. Linear regression assumes a linear dependency among the variables and compares the variance due to the independent variables against the variance due to the residual errors. While the linearity assumption may be overly simplistic, it reflects the assumption that fMRI activity is often a superimposition of contributions from different sources, and it has provided a useful first-order approximation in the field (Mitchell et al., 2008).

4.4 Results and Discussion

The second column of Table 5 shows the R2 regression fit (averaged across 120 voxels) of the adjective, noun, additive, and multiplicative models to the neural activity observed in the adjective-noun phrase data. The noun model significantly (p < 0.05) outperformed the adjective model, estimated with a paired t-test.
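Before continuing with the model comparison in Table 5, here is a minimal sketch of the voxel-wise regularized regression of Section 4.3. It uses a closed-form ridge solution under the assumption that the activations are mean-centered (so no intercept term is needed); in the actual analysis the fit is evaluated on held-out presentations within the 6-fold cross-validation:

import numpy as np

def fit_ridge(F, Y, lam=1.0):
    # F: (n_stimuli, n_features) semantic feature vectors (here 12 x 5)
    # Y: (n_stimuli, n_voxels)   observed mean PSC at the selected voxels
    # Returns B: (n_features, n_voxels) weights minimizing ||Y - F B||^2 + lam * ||B||^2.
    n_feat = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(n_feat), F.T @ Y)

def mean_r_squared(F, Y, B):
    # Proportion of variance explained, averaged over voxels.
    resid = Y - F @ B
    ss_res = (resid ** 2).sum(axis=0)
    ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
    return float(np.mean(1.0 - ss_res / ss_tot))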
Moreover, the difference between the additive and adjective models was not significant, whereas the difference between the additive and noun models was significant (p < 0.05). The multiplicative model significantly (p < 0.05) outperformed both of the non-compositional models, as well as the additive model. More importantly, the two hypotheses that we were testing were both verified. Notice that Table 5 supports our hypothesis that the noun model should outperform the adjective model, based on the assumption that the noun is generally more central to the phrase meaning than is the adjective. Table 5 also supports our hypothesis that the multiplicative model should outperform the additive model, based on the assumption that adjectives are used to emphasize particular semantic features that are already represented in the semantic feature vector of the noun. Our findings here are largely consistent with Mitchell and Lapata (2008).

                 R2     Racc
Adjective        0.34   0.57
Noun             0.36   0.61
Additive         0.35   0.60
Multiplicative   0.42   0.62

Table 5. Regression fit and regression-based classification rank accuracy of the adjective, noun, additive, and multiplicative models for phrase stimuli.

Following Mitchell et al. (2008), the regression model can be used to decode mental states. Specifically, for each regression model, the estimated regression weights can be used to generate the predicted activity for each word. Then, a previously unseen neural activation vector is identified with the class of the predicted activation that has the highest correlation with the given observed neural activation vector. Notice that, unlike Mitchell et al. (2008), where the regression model was used to make predictions for items outside the training set, here we are simply showing that the regression model can be used for classification purposes. The third column of Table 5 shows the rank accuracies for classifying mental concepts using the predicted activation from the adjective, noun, additive, and multiplicative models. All rank accuracies were significantly higher (p < 0.05) than chance, where the chance level for each classification is again determined by permutation testing. More importantly, here we observe a ranking of these four models similar to that observed for the regression analysis. Namely, the noun model performs significantly better (p < 0.05) than the adjective model, and the multiplicative model performs significantly better (p < 0.05) than the additive model. However, the difference between the multiplicative model and the noun model is not statistically significant in this case.

5 Comparing the attribute-specifying adjectives with the object-modifying adjectives

Some of the phrases contained adjectives that changed the meaning of the noun. In the case of vehicle nouns, adjectives were chosen to modify the manipulability of the nouns (e.g., to make an airplane more manipulable, paper was chosen as the modifier). This type of modifier raises two issues. First, these modifiers (e.g. paper, model, toy) more typically assume the part-of-speech (POS) tag of nouns, unlike our other modifiers (e.g., soft, large, strong), whose typical POS tag is adjective. Second, these modifiers combine with the noun to denote a very different object from the noun in isolation (paper airplane, model train, toy truck), in comparison to other cases where the adjective simply specifies an attribute of the noun (soft bear, large cat, strong dog, etc.).
In order to study this difference, we performed classification analysis separately for the attribute-specifying adjectives and the object-modifying adjectives. Our hypothesis is that the phrases with attribute-specifying adjectives will be much more difficult to distinguish from the original nouns than the phrases whose adjectives change the referent. For instance, we hypothesize that it is much more difficult to distinguish the neural representation for strong dog versus dog than it is to distinguish the neural representation for paper airplane versus airplane. To verify this, Gaussian Naïve Bayes classifiers were trained to discriminate between each of the 12 pairs of nouns and adjective-noun phrases. The average classification accuracy for phrases with object-modifying adjectives is 0.76, whereas the classification accuracy for phrases with attribute-specifying adjectives is 0.68. The difference is statistically significant at p < 0.05. This result supports our hypothesis.

Furthermore, we performed regression-based classification separately for the two types of adjectives. Notice that the number of phrases with object-modifying adjectives is much smaller than the number of phrases with attribute-specifying adjectives (3 vs. 9). This affects the parameter-to-data-point ratio in our regression model. Consequently, an L2 regularization with lambda = 10.0 was used to prevent overfitting. Table 6 shows that a pattern similar to that seen in Section 4 is observed for the attribute-specifying adjectives. That is, the noun model outperformed the adjective model and the multiplicative model outperformed the additive model when using attribute-specifying adjectives. However, for the object-modifying adjectives, the noun model no longer outperformed the adjective model. Moreover, the additive model performed better than the noun model. Although neither difference is statistically significant, this clearly shows a pattern different from the attribute-specifying adjectives. This result suggests that when interpreting phrases like paper airplane, it is more important to consider contributions from the adjective, compared to when interpreting phrases like strong dog, where the contribution of the adjective is simply to specify a property of the item typically referred to by the noun in isolation.

                 Attribute-specifying   Object-modifying
Adjective        0.57                   0.65
Noun             0.62                   0.64
Additive         0.61                   0.65
Multiplicative   0.63                   0.67

Table 6. Separate regression-based classification rank accuracy for phrases with attribute-specifying or object-modifying adjectives.

In light of this observation, we plan to extend our analysis of adjective-noun phrases to noun-noun phrases, where participants will be shown noun phrases (e.g. carrot knife) and instructed to think of a likely meaning for the phrases. Unlike adjective-noun phrases, where a single interpretation often dominates, noun-noun combinations allow multiple interpretations (e.g., carrot knife can be interpreted as a knife that is specifically used to cut carrots or a knife carved out of carrots). There exists an extensive literature on the conceptual combination of noun-noun phrases. Costello and Keane (1997) provide extensive studies on the polysemy of conceptual combination. More importantly, they outline different rules of combination, including property mapping, relational mapping, hybrid mapping, etc. It will be interesting to see if different composition models better account for neural activation when different kinds of combination rules are used.
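For concreteness, the pairwise discrimination used earlier in this section (e.g. strong dog versus dog) can be sketched with a simple Gaussian Naïve Bayes classifier over the selected voxels. Equal class priors and a small variance floor are our own simplifications, not details taken from the study:

import numpy as np

def gnb_fit(X, y):
    # X: (n_images, n_voxels); y: binary labels (0 = noun, 1 = phrase).
    X, y = np.asarray(X), np.asarray(y)
    classes = np.unique(y)
    mu  = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-6 for c in classes])
    return classes, mu, var

def gnb_predict(x, classes, mu, var):
    # Return the class with the highest Gaussian log-likelihood under equal priors.
    ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=1)
    return classes[np.argmax(ll)]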
6 Contribution and Conclusion Experimental results have shown that the distributed pattern of neural activity while people are comprehending adjective-noun phrases does contain sufficient information to decode the stimuli with accuracies significantly above chance. Furthermore, vector-based semantic models can explain a significant portion of systematic variance in observed neural activity. Multiplicative composition models outperform additive models, a trend that is consistent with the assumption that people use adjectives to modify the meaning of the noun, rather than conjoining the meaning of the adjective and noun. In this study, we represented the meaning of both adjectives and nouns in terms of their cooccurrences with 5 sensory verbs. While this type of representation might be justified for concrete nouns (hypothesizing that their neural representations are largely grounded in sensorymotor features), it might be that a different representation is needed for adjectives. Further research is needed to investigate alternative representations for both nouns and adjectives. Moreover, the composition models that we presented here are overly simplistic in a number of ways. We look forward to future research to extend the intermediate representation and to experiment with different modeling methodologies. An alternative approach is to model the semantic representation as a hidden variable using a generative probabilistic model that describes how neural activity is generated from some latent semantic representation. We are currently exploring the infinite latent semantic feature model (ILFM; Griffiths & Ghahramani, 2005), which assumes a non-parametric Indian Buffet prior to the binary feature vector and models neural activation with a linear Gaussian model. The basic proposition of the model is that the human semantic knowledge system is capable of storing an infinite list of features (or semantic components) associated with a concept; however, only a subset is actively recalled during any given task (contextdependent). Thus, a set of latent indicator variables is introduced to indicate whether a feature is actively recalled at any given task. We are investigating if the compositional models also operate in the learned latent semantic space. The premise of our research relies on advancements in the fields of computational linguistics and cognitive neuroimaging. Indeed, we are at an especially opportune time in the history of the study of language, when linguistic corpora allow word meanings to be computed from the distribution of word co-occurrence in a trilliontoken text corpus, and brain imaging technology allows us to directly observe and model neural activity associated with the conceptual combination of lexical items. An improved understanding of language processing in the brain could yield a more biologically-informed model of semantic representation of lexical knowledge. We therefore look forward to further brain imaging studies shedding new light on the nature of human representation of semantic knowledge. Acknowledgements This research was supported by the National Science Foundation, Grant No. IIS-0835797, and by the W. M. Keck Foundation. We would like to thank Jennifer Moore for help in preparation of the manuscript. References Blei, D. M., Ng, A. Y., Jordan, and M. I.. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3, 993-1022. Bullinaria, J., and Levy, J. 2007. 
Extracting semantic representations from word co-occurrence statistics: A computational study. Behavioral Research Methods, 39:510-526. Caramazza, A., and Shelton, J. R. 1998. Domainspecific knowledge systems in the brain the animate inanimate distinction. Journal of Cognitive Neuroscience 10(1), 1-34. 645 Church, K. W., and Hanks, P. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16, 22-29. Cree, G. S., and McRae, K. 2003. Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Journal of Experimental Psychology: General 132(2), 163-201. Costello, F., and Keane, M. 2001. Testing two theories of conceptual combination: Alignment versus diagnosticity in the comprehension and production of combined concepts. Journal of Experimental Psychology: Learning, Memory & Cognition, 27(1): 255-271. Cox, D. D., and Savoy, R. L. 2003. Functioning magnetic resonance imaging (fMRI) "brain reading": Detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage 19, 261-270. Friston, K. J. 2005. Models of brain function in neuroimaging. Annual Review of Psychology 56, 57-87. Griffiths, T. L., and Ghahramani, Z. 2005. Infinite latent feature models and the Indian buffet process. Gatsby Unit Technical Report GCNU-TR-2005001. Haynes, J. D., and Rees, G. 2006. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience 7(7), 523-534. Kintsch, W. 2001. Prediction. Cognitive Science, 25(2):173-202. Landauer, T.K., and Dumais, S. T. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211240. Miller, G. A. 1995. WordNet: A lexical database for English. Communications of the ACM 38, 39-41. Mitchell, J., and Lapata, M. 2008. Vector-based models of semantic composition. Proceedings of ACL08: HLT, 236-244. Mitchell, T., Hutchinson, R., Niculescu, R. S., Pereira, F., Wang, X., Just, M. A., and Newman, S. D. 2004. Learning to decode cognitive states from brain images. Machine Learning 57, 145-175. Mitchell, T., Shinkareva, S.V., Carlson, A., Chang, K.M., Malave, V.L., Mason, R.A., and Just, M.A. 2008. Predicting human brain activity associated with the meanings of nouns. Science 320, 11911195. O'Toole, A. J., Jiang, F., Abdi, H., and Haxby, J. V. 2005. Partially distributed representations of objects and faces in ventral temporal cortex. Journal of Cognitive Neuroscience, 17, 580-590. 646
2009
72
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 647–655, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Capturing Salience with a Trainable Cache Model for Zero-anaphora Resolution Ryu Iida Department of Computer Science Tokyo Institute of Technology 2-12-1, ˆOokayama, Meguro, Tokyo 152-8552, Japan [email protected] Kentaro Inui Yuji Matsumoto Graduate School of Information Science Nara Institute of Science and Technology 8916-5, Takayama, Ikoma Nara 630-0192, Japan {inui,matsu}@is.naist.jp Abstract This paper explores how to apply the notion of caching introduced by Walker (1996) to the task of zero-anaphora resolution. We propose a machine learning-based implementation of a cache model to reduce the computational cost of identifying an antecedent. Our empirical evaluation with Japanese newspaper articles shows that the number of candidate antecedents for each zero-pronoun can be dramatically reduced while preserving the accuracy of resolving it. 1 Introduction There have been recently increasing concerns with the need for anaphora resolution to make NLP applications such as IE and MT more reliable. In particular, for languages such as Japanese, anaphora resolution is crucial for resolving a phrase in a text to its referent since phrases, especially nominative arguments of predicates, are frequently omitted by anaphoric functions in discourse (Iida et al., 2007b). Many researchers have recently explored machine learning-based methods using considerable amounts of annotated data provided by, for example, the Message Understanding Conference and Automatic Context Extraction programs (Soon et al., 2001; Ng and Cardie, 2002; Yang et al., 2008; McCallum and Wellner, 2003, etc.). These methods reach a level comparable to or better than the state-of-the-art rule-based systems (e.g. Baldwin (1995)) by recasting the task of anaphora resolution into classification or clustering problems. However, such approaches tend to disregard theoretical findings from discourse theories, such as Centering Theory (Grosz et al., 1995). Therefore, one of the challenging issues in this area is to incorporate such findings from linguistic theories into machine learning-based approaches. A typical machine learning-based approach to zero-anaphora resolution searches for an antecedent in the set of candidates appearing in all the preceding contexts. However, computational time makes this approach largely infeasible for long texts. An alternative approach is to heuristically limit the search space (e.g. the system deals with candidates only occurring in the N previous sentences). Various research such as Yang et al. (2008) has adopted this approach, but it also leads to problems when an antecedent is located far from its anaphor, causing it to be excluded from target candidate antecedents. On the other hand, rule-based methods derived from theoretical background such as Centering Theory (Grosz et al., 1995) only deal with the salient discourse entities at each point of the discourse status. By incrementally updating the discourse status, the set of candidates in question is automatically limited. Although these methods have a theoretical advantage, they have a serious drawback in that Centering Theory only retains information about the previous sentence. A few methods have attempted to overcome this fault (Suri and McCoy, 1994; Hahn and Strube, 1997), but they are overly dependent upon the restrictions fundamental to the notion of centering. 
We hope that by relaxing such restrictions it will be possible for an anaphora resolution system to achieve a good balance between accuracy and computational cost. From this background, we focus on the issue of reducing candidate antecedents (discourse entities) for a given anaphor. Inspired by Walker’s argument (Walker, 1996), we propose a machine learning-based caching mechanism that captures the most salient candidates at each point of the discourse for efficient anaphora resolution. More specifically, we choose salient candidates for each sentence from the set of candidates appearing in that sentence and the candidates which are already 647 in the cache. Searching only through the set of salient candidates, the computational cost of zeroanaphora resolution is effectively reduced. In the empirical evaluation, we investigate how efficiently this caching mechanism contributes to reducing the search space while preserving accuracy. This paper focuses on Japanese though the proposed cache mechanism may be applicable to any language. This paper is organized as follows. First, Section 2 presents the task of zero-anaphora resolution and then Section 3 gives an overview of previous work. Next, in Section 4 we propose a machine learning-based cache model. Section 5 presents the antecedent identification and anaphoricity determination models used in the experiments. To evaluate the model, we conduct several empirical evaluations and report their results in Section 6. Finally, we conclude and discuss the future direction of this research in Section 7. 2 Zero-anaphora resolution In this paper, we consider only zero-pronouns that function as an obligatory argument of a predicate. A zero-pronoun may or may not have its antecedent in the discourse; in the case it does, we say the zero-pronoun is anaphoric. On the other hand, a zero-pronoun whose referent does not explicitly appear in the discourse is called a non-anaphoric zero-pronoun. A zero-pronoun is typically nonanaphoric when it refers to an extralinguistic entity (e.g. the first or second person) or its referent is unspecified in the context. The task of zero-anaphora resolution can be decomposed into two subtasks: anaphoricity determination and antecedent identification. In anaphoricity determination, the model judges whether a zero-pronoun is anaphoric (i.e. a zeropronoun has an antecedent in a text) or not. If a zero-pronoun is anaphoric, the model must detect its antecedent. For example, in example (1) the model has to judge whether or not the zero-pronoun in the second sentence (i.e. the nominative argument of the predicate ‘to hate’) is anaphoric, and then identify its correct antecedent as ‘Mary.’ (1) Maryi-wa Johnj-ni (φj-ga) tabako-o Maryi-TOP Johnj-DAT (φj-NOM) smoking-OBJ yameru-youni it-ta . quit-COMP say-PAST PUNC Mary told John to quit smoking. (φi-ga) tabako-o kirai-dakarada . (φi-NOM) smoking-OBJ hate-BECAUSE PUNC Because (she) hates people smoking. 3 Previous work Early methods for zero-anaphora resolution were developed with rule-based approaches in mind. Theory-oriented rule-based methods (Kameyama, 1986; Walker et al., 1994), for example, focus on the Centering Theory (Grosz et al., 1995) and are designed to collect the salient candidate antecedents in the forward-looking center (Cf) list, and then choose the most salient candidate, Cp, as an antecedent of a zero-pronoun according to heuristic rules (e.g. topic > subject > indirect object > direct object > others1). 
(Footnote 1: ‘A > B’ means A is more salient than B.)

Although these methods have a theoretical advantage, they have a serious drawback in that the original Centering Theory is restricted to keeping information about the previous sentence only. In order to loosen this restriction, Centering-based methods have been extended to reach antecedents appearing further from their anaphors. For example, Suri and McCoy (1994) proposed a method for capturing two kinds of Cp, which correspond to the most salient discourse entities within the local transition and within the global focus of a text. Hahn and Strube (1997) estimate hierarchical discourse segments of a text by taking into account a series of Cp, and the resolution model then searches for an antecedent in the estimated segment. Although these methods remedy the drawback of Centering, they still depend heavily on notions of Centering such as Cp.

On the other hand, the existing machine learning-based methods (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Soon et al., 2001; Ng and Cardie, 2002; Seki et al., 2002; Isozaki and Hirao, 2003; Iida et al., 2005; Iida et al., 2007a, etc.) have been developed with little attention given to this problem. These methods exhaustively search for an antecedent within the list of all candidate antecedents back to the beginning of the text. Otherwise, the search for antecedents is heuristically carried out in a limited search space (e.g. the previous N sentences of an anaphor) (Yang et al., 2008).

4 Machine learning-based cache model

As mentioned in Section 2, the procedure for zero-anaphora resolution can be decomposed into two subtasks, namely anaphoricity determination and antecedent identification. In this paper, these two subtasks are carried out according to the selection-then-classification model (Iida et al., 2005), chosen because it has the advantage of using broader context information for determining the anaphoricity of a zero-pronoun. It does this by examining whether the context preceding the zero-pronoun in the discourse has a plausible candidate antecedent or not. With this model, antecedent identification is performed first and anaphoricity determination second; that is, the model identifies the most likely candidate antecedent for a given zero-pronoun and then judges whether or not the zero-pronoun is anaphoric.

As discussed by Iida et al. (2007a), intra-sentential and inter-sentential zero-anaphora resolution should be dealt with by taking into account different kinds of information. Syntactic patterns are useful clues for intra-sentential zero-anaphora resolution, whereas rhetorical clues such as connectives may be more useful for inter-sentential cases. Therefore, the intra-sentential and inter-sentential zero-anaphora resolution models are separately trained by exploiting the different feature sets shown in Table 2.

In addition, as mentioned in Section 3, inter-sentential cases have a serious problem in that the search space of zero-pronouns grows linearly with the length of the text. In order to avoid this problem, we incorporate a caching mechanism, originally addressed by Walker (1996), into the following procedure of zero-anaphora resolution, by limiting the search space at step 3 and by updating the cache at step 5.

Zero-anaphora resolution process:
1. Intra-sentential antecedent identification: For a given zero-pronoun ZP in a given sentence S, select the most likely candidate antecedent A1 from the candidates appearing in S by the intra-sentential antecedent identification model.

2. Intra-sentential anaphoricity determination: Estimate the plausibility p1 that A1 is the true antecedent, and return A1 if p1 ≥ θintra (a preselected threshold), or go to 3 otherwise.

3. Inter-sentential antecedent identification: Select the most likely candidate antecedent A2 from the candidates appearing in the cache, as explained in Section 4.1, by the inter-sentential antecedent identification model.

4. Inter-sentential anaphoricity determination: Estimate the plausibility p2 that A2 is the true antecedent, and return A2 if p2 ≥ θinter (a preselected threshold), or return non-anaphoric otherwise.

5. After processing all zero-pronouns in S, the cache is updated.

The resolution process is continued until the end of the discourse.

4.1 Dynamic cache model

Because the original work on the cache model by Walker (1996) is not fully specified for implementation, we specify how to retain the salient candidates based on machine learning in order to capture both local and global foci of discourse. In Walker (1996)'s discussion of the cache model in discourse processing, it was presumed to operate under a limited attention constraint. According to this constraint, only a limited number of candidates can be considered in processing. In the analogy with computer hardware, the cache represents working memory and the main memory represents long-term memory. The cache only holds the most salient entities, while the rest are moved to the main memory for possible later consideration as cache candidates. If a new candidate antecedent is retrieved from main memory and inserted into the cache, or enters the cache directly during processing, other candidates in the cache have to be displaced due to the limited capacity of the cache. Which candidate to displace is determined by a cache replacement policy. However, the best policy for this is still unknown.

In this paper, we recast the cache replacement policy as a ranking problem in machine learning. More precisely, we choose the N best candidates for each sentence from the set of candidates appearing in that sentence and the candidates that are already in the cache. Following this cache model, named the dynamic cache model, anaphora resolution is performed by repeating the following two processes.

1. Cache update: cache Ci for sentence Si is created from the candidates in the previous sentence Si−1 and the ones in the previous cache Ci−1.

2. Inter-sentential zero-anaphora resolution: cache Ci is used as the search space for inter-sentential zero-anaphora resolution in sentence Si (see Step 3 of the aforementioned zero-anaphora resolution process).

For each cache update (see Figure 1), the current cache Ci is created by choosing the N most salient candidates from the M candidates in Si−1 and the N candidates in the previous cache Ci−1.

Figure 1: Anaphora resolution using the dynamic cache model.

In order to implement this mechanism, we train the model so that it captures the salience of each candidate. To reflect this, each training instance is labeled as either retained or discarded.
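At test time, the cache update itself reduces to scoring and truncation; a minimal sketch is given below. The weight vector and the featurize function are hypothetical stand-ins for the trained Ranking SVM and the feature extraction of Table 1, not code from the system:

import numpy as np

def update_cache(prev_cache, sentence_candidates, weights, featurize, n=10):
    # Keep the n candidates that the linear ranker scores as most salient,
    # chosen from the previous cache plus the candidates of the sentence just processed.
    pool = list(prev_cache) + list(sentence_candidates)
    scores = np.array([weights @ featurize(c) for c in pool])
    keep = np.argsort(-scores)[:n]
    return [pool[i] for i in keep]

The labels used to train such a ranker are defined next.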
If an instance is referred to by a zero-pronoun appearing in any of the following sentences, it is labeled as retained; otherwise, it is labeled as discarded. Training instances are created by the algorithm detailed in Figure 2. The algorithm is designed with the following two points in mind.

First, the cache model must capture the salience of each discourse entity according to its recency at each point of the discourse, because typically the more recently an entity appears, the more salient it is. To reflect this, training instances are created from candidates as they appear in the text, and are labeled as retained from the point of their appearance until their referring zero-pronoun is reached, at which point they are labeled as discarded if they are not referred to by any further zero-pronoun in the succeeding context. Suppose the situation shown in Figure 3, where cij is the j-th candidate in sentence Si. In this situation, for example, candidate c12 is labeled as retained when creating training instances for sentence S1, but labeled as discarded from S2 onwards, because of the appearance of its zero-pronoun. Another candidate, c13, which is never referred to in the text, is labeled as discarded for all training instances.

Second, we need to capture the 'relative' salience of candidates appearing in the current discourse for each cache update, as also exploited in the tournament-based or ranking-based approaches to anaphora resolution (Iida et al., 2003; Yang et al., 2003; Denis and Baldridge, 2008). To address this, we use a ranker trained on the instances created as described above. In order to train the ranker, we adopt the Ranking SVM algorithm (Joachims, 2002), which learns a weight vector to rank candidates for a given partial ranking of each discourse entity. Each training instance is created from the set of retained candidates, Ri, paired with the set of discarded candidates, Di, in each sentence. To define the partial ranking of candidates, we simply rank candidates in Ri as first place and candidates in Di as second place.

Function makeTrainingInstances(T: input text)
  C := NULL  // set of preceding candidates
  S := NULL  // set of training instances
  i := 1     // init
  while (exists si)  // si: i-th sentence in T
    Ei := extractCandidates(si)
    Ri := extractRetainedInstances(Ei, T)
    Di := Ei \ Ri
    ri := extractRetainedInstances(C, T)
    Ri := Ri ∪ ri
    Di := Di ∪ (C \ ri)
    S := S ∪ {⟨Ri, Di⟩}
    C := updateSalienceInfo(C, si)
    C := C ∪ Ei
    i := i + 1
  endwhile
  return S
end

Function extractRetainedInstances(S, T)
  R := NULL  // init
  foreach (elm ∈ S)
    if (elm is anaphoric with a zero-pronoun located in the following sentences of T)
      R := R ∪ elm
    endif
  endforeach
  return R
end

Function updateSalienceInfo(C, si)
  foreach (c ∈ C)
    if (c is anaphoric with a zero-pronoun in si)
      c.position := i  // update the position information
    endif
  endforeach
  return C
end

Figure 2: Pseudo-code for creating training instances.

Figure 3: Creating training instances.

4.2 Static cache model

Other research on discourse, such as Grosz and Sidner (1986), has studied global focus, which generally refers to the entity or set of entities that are salient throughout the entire discourse. Since global focus may not be captured by Centering-based models, we also propose another cache model which directly captures the global salience of a text.
To train the model, all the candidates in a text which have an inter-sentential anaphoric relation with zero-pronouns are used as positive instances, and the others are used as negative ones. Unlike the dynamic cache model, this model does not update the cache dynamically, but simply selects, for each given zero-pronoun, the N most salient candidates from the preceding sentences according to the rank provided by the trained ranker. We call this model the static cache model.

4.3 Features used in the cache models

The feature set used in the cache models is shown in Table 1.

Table 1: Feature set used in the cache models

Feature       Description
POS           Part-of-speech of C, based on IPADIC (http://chasen.naist.jp/stable/ipadic/).
IN QUOTE      1 if C is located in a quoted sentence; otherwise 0.
BEGINNING     1 if C is located in the beginning of a text; otherwise 0.
CASE MARKER   Case marker of C, such as wa (TOPIC) and ga (SUBJECT).
DEP END       1 if C has a dependency relation with the last bunsetsu unit (i.e. a basic unit in Japanese) in a sentence; otherwise 0.
CONN*         The set of connectives intervening between C and Z. Each conjunction is encoded as a binary feature.
IN CACHE*     1 if C is currently stored in the cache; otherwise 0.
SENT DIST*    Distance between C and Z in terms of sentences.
CHAIN NUM     The number of antecedents of Z (i.e. the length of its anaphoric chain), assuming that the zero-pronouns in the preceding context have been completely resolved by the zero-anaphora resolution model.

C is a candidate antecedent, and Z stands for a target zero-pronoun. Features marked with an asterisk are only used in the dynamic cache model.

The CASE MARKER feature roughly captures the salience of the local transitions dealt with in Centering Theory and, coupled with the BEGINNING feature, is also intended to capture the global foci of a text. The CONN feature is expected to capture the transitions of discourse relations, because each connective functions as a marker of a discourse relation between two adjacent discourse segments. In addition, the recency of a candidate antecedent can be especially important when an entity occurs as a zero-pronoun in the discourse. For example, when a discourse entity e appearing in sentence si is referred to by a zero-pronoun later in sentence sj (i < j), entity e is considered salient again at the point of sj. To reflect this way of updating salience, we overwrite the information about the appearance position of candidate e in sj, which is performed by the function updateSalienceInfo in Figure 2. This allows the cache model to handle updated salience features such as CHAIN NUM in the cache updates that follow.

5 Antecedent identification and anaphoricity determination models

As the antecedent identification model, we adopt the tournament model (Iida et al., 2003) because, in a preliminary experiment, it achieved better performance than other state-of-the-art ranking-based models (Denis and Baldridge, 2008) in this task setting. To train the tournament model, training instances are created by pairing an antecedent with each of the other candidates, so as to learn a preference for which candidate is more likely to be an antecedent. At the test phase, the model conducts a tournament consisting of a series of matches in which candidate antecedents compete with one another. Note that in the case of inter-sentential zero-anaphora resolution the tournament is arranged between candidates in the cache.
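The test-time behaviour of the tournament model can be sketched as follows; match is a hypothetical wrapper around the trained pairwise classifier that returns whichever of two candidates it prefers as the antecedent of the zero-pronoun:

def tournament_antecedent(candidates, zero_pronoun, match):
    # Run a series of pairwise matches; the winner of each match meets the next candidate.
    if not candidates:
        return None
    winner = candidates[0]
    for challenger in candidates[1:]:
        winner = match(winner, challenger, zero_pronoun)
    return winner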
For learning the difference between two candidates in the cache, training instances are also created by extracting candidates only from the cache. For anaphoricity determination, the model has to judge whether a zero-pronoun is anaphoric or not. To create the training instances for this binary classifier, the most likely candidate for each given zero-pronoun is chosen by the tournament model and is then labeled as anaphoric (positive) if the chosen candidate is indeed the antecedent of the zero-pronoun, or otherwise labeled as non-anaphoric (negative). (In the original selection-then-classification model (Iida et al., 2005), positive instances are created from all correct pairs of a zero-pronoun and its antecedent; in this paper we use only the antecedents selected by the tournament model as the most likely candidates, because this method leads to better performance.)

To create the models for antecedent identification and anaphoricity determination, we use a Support Vector Machine (Vapnik, 1998) with a linear kernel and its default parameters (SVM-light, http://svmlight.joachims.org/). To obtain the feature set shown in Table 2, morpho-syntactic analysis of a text is performed by the Japanese morpheme analyzer ChaSen and the dependency parser CaboCha. In the tournament model, the features of the two competing candidates are distinguished from each other by adding the prefix 'left' or 'right.'

Table 2: Feature set used in zero-anaphora resolution

Feature Type   Feature       Description
Lexical        HEAD BF       Characters of the right-most morpheme in NP (PRED).
               PRED FUNC     Characters of the functional words following PRED.
Grammatical    PRED VOICE    1 if PRED contains auxiliaries such as '(ra)reru'; otherwise 0.
               POS           Part-of-speech of NP (PRED), based on IPADIC (Asahara and Matsumoto, 2003).
               PARTICLE      Particle following NP, such as 'wa (topic)', 'ga (subject)', 'o (object)'.
Semantic       NE            Named entity of NP: PERSON, ORGANIZATION, LOCATION, ARTIFACT, DATE, TIME, MONEY, PERCENT or N/A.
               SELECT PREF   The score of selectional preference, i.e. the mutual information estimated from a large number of triplets ⟨Noun, Case, Predicate⟩.
Positional     SENTNUM       Distance between NP and PRED.
               BEGINNING     1 if NP is located in the beginning of the sentence; otherwise 0.
               END           1 if NP is located in the end of the sentence; otherwise 0.
               PRED NP       1 if PRED precedes NP; otherwise 0.
               NP PRED       1 if NP precedes PRED; otherwise 0.
Discourse      CL RANK       The rank of NP in the forward-looking center list.
               CL ORDER      The order of NP in the forward-looking center list.
               CONN**        The connectives intervening between NP and PRED.
Path           PATH FUNC*    Characters of the functional words in the shortest path in the dependency tree between PRED and NP.
               PATH POS*     Part-of-speech of the functional words in the shortest path in the dependency tree between PRED and NP.

NP and PRED stand for a bunsetsu chunk of a candidate antecedent and a bunsetsu chunk of a predicate which has a target zero-pronoun, respectively. The features marked with an asterisk are used during intra-sentential zero-anaphora resolution. The feature marked with two asterisks is used during inter-sentential zero-anaphora resolution.

6 Experiments

We investigate how the cache model contributes to candidate reduction. More specifically, we explore the candidate reduction ratio of each cache model as well as its coverage, i.e. how often each cache model retains correct antecedents (Section 6.2).
We also evaluate the performance of antecedent identification on inter-sentential zero-anaphora resolution (Section 6.3) and of overall zero-anaphora resolution (Section 6.4).

6.1 Data set

In this experiment, we take the ellipsis of nominative arguments of predicates as the target zero-pronouns because they are the most frequently omitted arguments in Japanese; for example, 45.5% of the nominative arguments of predicates are omitted in the NAIST Text Corpus (Iida et al., 2007b). As the data set, we use part of the NAIST Text Corpus, which is publicly available, consisting of 287 newspaper articles in Japanese. The data set contains 1,007 intra-sentential zero-pronouns, 699 inter-sentential zero-pronouns and 593 exophoric zero-pronouns, totalling 2,299 zero-pronouns. We conduct 5-fold cross-validation using this data set. A development data set of 60 articles is used for setting the parameter of inter-sentential anaphoricity determination, θinter, in overall zero-anaphora resolution. It contains 417 intra-sentential, 298 inter-sentential and 174 exophoric zero-pronouns.

6.2 Evaluation of the caching mechanism

In this experiment, we directly compare the proposed static and dynamic cache models with the heuristic methods presented in Section 2.

Figure 4: Coverage of each cache model (coverage plotted against the number of classifications in the antecedent identification process, for n = 5, 10, 15, 20 and n = all). CM: centering-based cache model, SM: sentence-based cache model, SCM: static cache model, DCM (w/o ZAR): dynamic cache model disregarding updateSalienceInfo, DCM (with ZAR): dynamic cache model using the information of correct zero-anaphoric relations; n: cache size, s: # of sentences.

Note that the salience information (i.e. the function updateSalienceInfo) in the dynamic cache model is disregarded in this experiment because its performance crucially depends on the performance of the zero-anaphora resolution model. The performance of a cache model is evaluated by coverage, i.e. the percentage of zero-pronouns referring to an antecedent in a preceding sentence for which that antecedent is retained in the cache; in other words, we evaluate the inter-sentential cases. As baselines, we adopt the following two cache models. One is a Centering-derived model which only stores the preceding ‘wa’ (topic)-marked or
To integrate zero-anaphora resolution information, we create training instances of the dynamic cache model by updating the recency using the function ‘updateSalienceInfo’ shown in Figure 2 and also using an additional feature, CHAIN NUM, defined in Table 1. The results are shown in Figure 47. We can see the effect of the machine learning-based cache models in comparison to the other two heuristic models. The results demonstrate that the former achieves good coverage at each point compared to the latter. In addition, the difference between the static and dynamic cache models demonstrates that the dynamic one is always better then the static. It may be this way because the dynamic cache model simultaneously retains global focus of a given text and the locally salient entities in the current discourse. By comparing the dynamic cache model using correct zero-anaphora resolution (denoted by DCM (with ZAR) in Figure 4) and the one without it (DCM (w/o ZAR)), we can see that correct zeroanaphora resolution contributes to improving the caching for every cache size. However, in the practical setting the current zero-anaphora resolu7Expressions such as verbs were rarely annotated as antecedents, so these are not extracted as candidate antecedents in our current setting. This is the reason why the coverage of using all the candidates is less than 1.0. tion system sometimes chooses the wrong candidate as an antecedent or does not choose any candidate due to wrong anaphoricity determination, negatively impacting the performance of the cache model. For this reason, in the following two experiments we decided not to use zero-anaphora resolution in the dynamic cache model. 6.3 Evaluation of inter-sentential zeroanaphora resolution We next investigate the impact of the dynamic cache model shown in Section 4.1 on the antecedent identification task of inter-sentential zeroanaphora resolution altering the cache size from 5 to the number of all candidates. We compare the following three cache model within the task of inter-sentential antecedent identification: the centering-based cache model, the sentence-based cache model and the dynamic cache model disregarding updateSalienceInfo (i.e. DCM (w/o ZAR) in Figure 4). We also investigate the computational time of the process of inter-sentential antecedent identification with each cache model altering its parameter 8. The results are shown in Table 3. From these results, we can see the antecedent identification model using the dynamic cache model obtains almost the same accuracy for every cache size. It indicates that if the model can acquire a small number of the most salient discourse entities in the current discourse, the model achieves accuracy comparable to the model which searches all the preceding discourse entities, while drastically reducing the computational time. The results also show that the current antecedent identification model with the dynamic cache model does not necessarily outperform the model with the baseline cache models. For example, the sentence-based cache model using the preceding two sentences (SM (s=2)) achieved an accuracy comparable to the dynamic cache model with the cache size 15 (DCM (n=15)), both spending almost the same computational time. This is supposed to be due to the limited accuracy of the current antecedent identification model. Since the dynamic cache models provide much better search spaces than the baseline models as shown in Figure 4, there is presumably more room for improvement with the dynamic cache models. 
More investigations are to be concluded in our future 8All experiments were conducted on a 2.80 GHz Intel Xeon with 16 Gb of RAM. 653 Table 3: Results on antecedent identification model accuracy runtime coverage (Figure 4) CM 0.441 (308/699) 11m03s 0.651 SM(s=1) 0.381 (266/699) 6m54s 0.524 SM(s=2) 0.448 (313/699) 13m14s 0.720 SM(s=3) 0.466 (326/699) 19m01s 0.794 DCM(n=5) 0.446 (312/699) 4m39s 0.664 DCM(n=10) 0.441 (308/699) 8m56s 0.764 DCM(n=15) 0.442 (309/699) 12m53s 0.858 DCM(n=20) 0.443 (310/699) 16m35s 0.878 DCM(n=1000) 0.452 (316/699) 53m44s 0.928 CM: centering-based cache model, SM: sentence-based cache model, DCM: dynamic cache model, n: cache size, s: number of the preceding sentences. work. 6.4 Overall zero-anaphora resolution We finally investigate the effects of introducing the proposed model on overall zero-anaphora resolution including intra-sentential cases. The resolution is carried out according to the procedure described in Section 4. By comparing the zeroanaphora resolution model with different cache sizes, we can see whether or not the model using a small number of discourse entities in the cache achieves performance comparable to the original one in a practical setting. For intra-sentential zero-anaphora resolution, we adopt the model proposed by Iida et al. (2007a), which exploits syntactic patterns as features that appear in the dependency path of a zero-pronoun and its candidate antecedent. Note that for simplicity we use bag-of-functional words and their part-of-speech intervening between a zero-pronoun and its candidate antecedent as features instead of learning syntactic patterns with the Bact algorithm (Kudo and Matsumoto, 2004). We illustrated the recall-precision curve of each model by altering the threshold parameter of intrasentential anaphoricity determination, which is shown in Figure 5. The results show that all models achieved almost the same performance when decreasing the cache size. It indicates that it is enough to cache a small number of the most salient candidates in the current zero-anaphora resolution model, while coverage decreases when the cache size is smaller as shown in Figure 4. 7 Conclusion We propose a machine learning-based cache model in order to reduce the computational cost of zero-anaphora resolution. We recast discourse status updates as ranking problems of discourse entities by adopting the notion of caching originally 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 precision recall n=5 n=10 n=15 n=20 n=1000 Figure 5: Recall-precision curves on overall zero-anaphora resolution introduced by Walker (1996). More specifically, we choose the N most salient candidates for each sentence from the set of candidates appearing in that sentence and the candidates which are already in the cache. Using this mechanism, the computational cost of the zero-anaphora resolution process is reduced by searching only the set of salient candidates. Our empirical evaluation on Japanese zero-anaphora resolution shows that our learningbased cache model drastically reduces the search space while preserving accuracy. The procedure for zero-anaphora resolution adopted in our model assumes that resolution is carried out linearly, i.e. an antecedent is independently selected without taking into account any other zero-pronouns. However, trends in anaphora resolution have shifted from such linear approaches to more sophisticated ones which globally optimize the interpretation of all the referring expressions in a text. 
For example, Poon and Domingos (2008) has empirically reported that such global approaches achieve performance better than the ones based on incrementally processing a text. Because their work basically builds on inductive logic programing, we can naturally extend this to incorporate our caching mechanism into the global optimization by expressing cache constraints as predicate logic, which is one of our next challenges in this research area. References C. Aone and S. W. Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of 33th Annual Meeting of the Association for Computational Linguistics (ACL), pages 122–129. M. Asahara and Y. Matsumoto, 2003. IPADIC User Manual. Nara Institute of Science and Technology, Japan. B. Baldwin. 1995. CogNIAC: A Discourse Processing Engine. Ph.D. thesis, Department of Computer and Information Sciences, University of Pennsylvania. P. Denis and J. Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 660–669. 654 B. J. Grosz and C. L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12:175–204. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–226. U. Hahn and M. Strube. 1997. Centering in-the-large: computing referential discourse segments. In Proceedings of the 8th conference on European chapter of the Association for Computational Linguistics, pages 104–111. R. Iida, K. Inui, H. Takamura, and Y. Matsumoto. 2003. Incorporating contextual cues in trainable models for coreference resolution. In Proceedings of the 10th EACL Workshop on The Computational Treatment of Anaphora, pages 23–30. R. Iida, K. Inui, and Y. Matsumoto. 2005. Anaphora resolution by antecedent identification followed by anaphoricity determination. ACM Transactions on Asian Language Information Processing (TALIP), 4(4):417–434. R. Iida, K. Inui, and Y. Matsumoto. 2007a. Zero-anaphora resolution by learning rich syntactic pattern features. ACM Transactions on Asian Language Information Processing (TALIP), 6(4). R. Iida, M. Komachi, K. Inui, and Y. Matsumoto. 2007b. Annotating a japanese text corpus with predicate-argument and coreference relations. In Proceeding of the ACL Workshop ‘Linguistic Annotation Workshop’, pages 132–139. H. Isozaki and T. Hirao. 2003. Japanese zero pronoun resolution based on ranking rules and machine learning. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 184–191. T. Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pages 133–142. M. Kameyama. 1986. A property-sharing constraint in centering. In Proceedings of the 24th ACL, pages 200–206. T. Kudo and Y. Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proceedings of the 2004 EMNLP, pages 301–308. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In Proceedings of the IJCAI Workshop on Information Integration on the Web, pages 79–84. J. F. McCarthy and W. G. Lehnert. 1995. Using decision trees for coreference resolution. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 1050–1055. S. 
Nariyama. 2002. Grammar for ellipsis resolution in japanese. In Proceedings of the 9th International Conference on Theoretical and Methodological Issues in Machine Translation, pages 135–145. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th ACL, pages 104–111. H. Poon and P. Domingos. 2008. Joint unsupervised coreference resolution with Markov Logic. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 650–659. K. Seki, A. Fujii, and T. Ishikawa. 2002. A probabilistic method for analyzing japanese anaphora integrating zero pronoun detection and resolution. In Proceedings of the 19th COLING, pages 911–917. W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. L. Z. Suri and K. F. McCoy. 1994. Raft/rapr and centering: a comparison and discussion of problems related to processing complex sentences. Computational Linguistics, 20(2):301–317. V. N. Vapnik. 1998. Statistical Learning Theory. Adaptive and Learning Systems for Signal Processing Communications, and control. John Wiley & Sons. M. Walker, M. Iida, and S. Cote. 1994. Japanese discourse and the process of centering. Computational Linguistics, 20(2):193–233. M. A. Walker. 1996. Limited attention and discourse structure. Computational Linguistics, 22(2):255–264. X. Yang, G. Zhou, J. Su, and C. L. Tan. 2003. Coreference resolution using competition learning approach. In Proceedings of the 41st ACL, pages 176–183. X. Yang, J. Su, J. Lang, C. L. Tan, T. Liu, and S. Li. 2008. An entity-mention model for coreference resolution with inductive logic programming. In Proceedings of ACL-08: HLT, pages 843–851. 655
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 656–664, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Conundrums in Noun Phrase Coreference Resolution: Making Sense of the State-of-the-Art Veselin Stoyanov Cornell University Ithaca, NY [email protected] Nathan Gilbert University of Utah Salt Lake City, UT [email protected] Claire Cardie Cornell University Ithaca, NY [email protected] Ellen Riloff University of Utah Salt Lake City, UT [email protected] Abstract We aim to shed light on the state-of-the-art in NP coreference resolution by teasing apart the differences in the MUC and ACE task definitions, the assumptions made in evaluation methodologies, and inherent differences in text corpora. First, we examine three subproblems that play a role in coreference resolution: named entity recognition, anaphoricity determination, and coreference element detection. We measure the impact of each subproblem on coreference resolution and confirm that certain assumptions regarding these subproblems in the evaluation methodology can dramatically simplify the overall task. Second, we measure the performance of a state-of-the-art coreference resolver on several classes of anaphora and use these results to develop a quantitative measure for estimating coreference resolution performance on new data sets. 1 Introduction As is common for many natural language processing problems, the state-of-the-art in noun phrase (NP) coreference resolution is typically quantified based on system performance on manually annotated text corpora. In spite of the availability of several benchmark data sets (e.g. MUC-6 (1995), ACE NIST (2004)) and their use in many formal evaluations, as a field we can make surprisingly few conclusive statements about the state-of-theart in NP coreference resolution. In particular, it remains difficult to assess the effectiveness of different coreference resolution approaches, even in relative terms. For example, the 91.5 F-measure reported by McCallum and Wellner (2004) was produced by a system using perfect information for several linguistic subproblems. In contrast, the 71.3 F-measure reported by Yang et al. (2003) represents a fully automatic end-to-end resolver. It is impossible to assess which approach truly performs best because of the dramatically different assumptions of each evaluation. Results vary widely across data sets. Coreference resolution scores range from 85-90% on the ACE 2004 and 2005 data sets to a much lower 6070% on the MUC 6 and 7 data sets (e.g. Soon et al. (2001) and Yang et al. (2003)). What accounts for these differences? Are they due to properties of the documents or domains? Or do differences in the coreference task definitions account for the differences in performance? Given a new text collection and domain, what level of performance should we expect? We have little understanding of which aspects of the coreference resolution problem are handled well or poorly by state-of-the-art systems. Except for some fairly general statements, for example that proper names are easier to resolve than pronouns, which are easier than common nouns, there has been little analysis of which aspects of the problem have achieved success and which remain elusive. The goal of this paper is to take initial steps toward making sense of the disparate performance results reported for NP coreference resolution. 
For our investigations, we employ a state-of-the-art classification-based NP coreference resolver and focus on the widely used MUC and ACE coreference resolution data sets. We hypothesize that performance variation within and across coreference resolvers is, at least in part, a function of (1) the (sometimes unstated) assumptions in evaluation methodologies, and (2) the relative difficulty of the benchmark text corpora. With these in mind, Section 3 first examines three subproblems that play an important role in coreference resolution: named entity recognition, anaphoricity determination, and coreference element detection. We quantitatively measure the impact of each of these subproblems on coreference resolution performance as a whole. Our results suggest that the availability of accurate detectors for anaphoricity or coreference elements could substantially improve the performance of state-ofthe-art resolvers, while improvements to named entity recognition likely offer little gains. Our results also confirm that the assumptions adopted in 656 MUC ACE Relative Pronouns no yes Gerunds no yes Nested non-NP nouns yes no Nested NEs no GPE & LOC premod Semantic Types all 7 classes only Singletons no yes Table 1: Coreference Definition Differences for MUC and ACE. (GPE refers to geo-political entities.) some evaluations dramatically simplify the resolution task, rendering it an unrealistic surrogate for the original problem. In Section 4, we quantify the difficulty of a text corpus with respect to coreference resolution by analyzing performance on different resolution classes. Our goals are twofold: to measure the level of performance of state-of-the-art coreference resolvers on different types of anaphora, and to develop a quantitative measure for estimating coreference resolution performance on new data sets. We introduce a coreference performance prediction (CPP) measure and show that it accurately predicts the performance of our coreference resolver. As a side effect of our research, we provide a new set of much-needed benchmark results for coreference resolution under common sets of fully-specified evaluation assumptions. 2 Coreference Task Definitions This paper studies the six most commonly used coreference resolution data sets. Two of those are from the MUC conferences (MUC-6, 1995; MUC7, 1997) and four are from the Automatic Content Evaluation (ACE) Program (NIST, 2004). In this section, we outline the differences between the MUC and ACE coreference resolution tasks, and define terminology for the rest of the paper. Noun phrase coreference resolution is the process of determining whether two noun phrases (NPs) refer to the same real-world entity or concept. It is related to anaphora resolution: a NP is said to be anaphoric if it depends on another NP for interpretation. Consider the following: John Hall is the new CEO. He starts on Monday. Here, he is anaphoric because it depends on its antecedent, John Hall, for interpretation. The two NPs also corefer because each refers to the same person, JOHN HALL. As discussed in depth elsewhere (e.g. van Deemter and Kibble (2000)), the notions of coreference and anaphora are difficult to define precisely and to operationalize consistently. Furthermore, the connections between them are extremely complex and go beyond the scope of this paper. Given these complexities, it is not surprising that the annotation instructions for the MUC and ACE data sets reflect different interpretations and simplifications of the general coreference relation. 
We outline some of these differences below. Syntactic Types. To avoid ambiguity, we will use the term coreference element (CE) to refer to the set of linguistic expressions that participate in the coreference relation, as defined for each of the MUC and ACE tasks.1 At times, it will be important to distinguish between the CEs that are included in the gold standard — the annotated CEs — from those that are generated by the coreference resolution system — the extracted CEs. At a high level, both the MUC and ACE evaluations define CEs as nouns, pronouns, and noun phrases. However, the MUC definition excludes (1) “nested” named entities (NEs) (e.g. “America” in “Bank of America”), (2) relative pronouns, and (3) gerunds, but allows (4) nested nouns (e.g. “union” in “union members”). The ACE definition, on the other hand, includes relative pronouns and gerunds, excludes all nested nouns that are not themselves NPs, and allows premodifier NE mentions of geo-political entities and locations, such as “Russian” in “Russian politicians”. Semantic Types. ACE restricts CEs to entities that belong to one of seven semantic classes: person, organization, geo-political entity, location, facility, vehicle, and weapon. MUC has no semantic restrictions. Singletons. The MUC data sets include annotations only for CEs that are coreferent with at least one other CE. ACE, on the other hand, permits “singleton” CEs, which are not coreferent with any other CE in the document. These substantial differences in the task definitions (summarized in Table 1) make it extremely difficult to compare performance across the MUC and ACE data sets. In the next section, we take a closer look at the coreference resolution task, analyzing the impact of various subtasks irrespective of the data set differences. 1We define the term CE to be roughly equivalent to (a) the notion of markable in the MUC coreference resolution definition and (b) the structures that can be mentions in the descriptions of ACE. 657 3 Coreference Subtask Analysis Coreference resolution is a complex task that requires solving numerous non-trivial subtasks such as syntactic analysis, semantic class tagging, pleonastic pronoun identification and antecedent identification to name a few. This section examines the role of three such subtasks — named entity recognition, anaphoricity determination, and coreference element detection — in the performance of an end-to-end coreference resolution system. First, however, we describe the coreference resolver that we use for our study. 3.1 The RECONCILEACL09 Coreference Resolver We use the RECONCILE coreference resolution platform (Stoyanov et al., 2009) to configure a coreference resolver that performs comparably to state-of-the-art systems (when evaluated on the MUC and ACE data sets under comparable assumptions). This system is a classification-based coreference resolver, modeled after the systems of Ng and Cardie (2002b) and Bengtson and Roth (2008). First it classifies pairs of CEs as coreferent or not coreferent, pairing each identified CE with all preceding CEs. The CEs are then clustered into coreference chains2 based on the pairwise decisions. RECONCILE has a pipeline architecture with four main steps: preprocessing, feature extraction, classification, and clustering. We will refer to the specific configuration of RECONCILE used for this paper as RECONCILEACL09. Preprocessing. 
The RECONCILEACL09 preprocessor applies a series of language analysis tools (mostly publicly available software packages) to the source texts. The OpenNLP toolkit (Baldridge, J., 2005) performs tokenization, sentence splitting, and part-of-speech tagging. The Berkeley parser (Petrov and Klein, 2007) generates phrase structure parse trees, and the de Marneffe et al. (2006) system produces dependency relations. We employ the Stanford CRF-based Named Entity Recognizer (Finkel et al., 2004) for named entity tagging. With these preprocessing components, RECONCILEACL09 uses heuristics to correctly extract approximately 90% of the annotated CEs for the MUC and ACE data sets. Feature Set. To achieve roughly state-of-theart performance, RECONCILEACL09 employs a 2A coreference chain refers to the set of CEs that refer to a particular entity. dataset docs CEs chains CEs/ch tr/tst split MUC6 60 4232 960 4.4 30/30 (st) MUC7 50 4297 1081 3.9 30/20 (st) ACE-2 159 2630 1148 2.3 130/29 (st) ACE03 105 3106 1340 2.3 74/31 ACE04 128 3037 1332 2.3 90/38 ACE05 81 1991 775 2.6 57/24 Table 2: Dataset characteristics including the number of documents, annotated CEs, coreference chains, annotated CEs per chain (average), and number of documents in the train/test split. We use st to indicate a standard train/test split. fairly comprehensive set of 61 features introduced in previous coreference resolution systems (see Bengtson and Roth (2008)). We briefly summarize the features here and refer the reader to Stoyanov et al. (2009) for more details. Lexical (9): String-based comparisons of the two CEs, such as exact string matching and head noun matching. Proximity (5): Sentence and paragraph-based measures of the distance between two CEs. Grammatical (28): A wide variety of syntactic properties of the CEs, either individually or as a pair. These features are based on part-of-speech tags, parse trees, or dependency relations. For example: one feature indicates whether both CEs are syntactic subjects; another indicates whether the CEs are in an appositive construction. Semantic (19): Capture semantic information about one or both NPs such as tests for gender and animacy, semantic compatibility based on WordNet, and semantic comparisons of NE types. Classification and Clustering. We configure RECONCILEACL09 to use the Averaged Perceptron learning algorithm (Freund and Schapire, 1999) and to employ single-link clustering (i.e. transitive closure) to generate the final partitioning.3 3.2 Baseline System Results Our experiments rely on the MUC and ACE corpora. For ACE, we use only the newswire portion because it is closest in composition to the MUC corpora. Statistics for each of the data sets are shown in Table 2. When available, we use the standard test/train split. Otherwise, we randomly split the data into a training and test set following a 70/30 ratio. 3In trial runs, we investigated alternative classification and clustering models (e.g. C4.5 decision trees and SVMs; best-first clustering). The results were comparable. 658 Scoring Algorithms. We evaluate using two common scoring algorithms4 — MUC and B3. The MUC scoring algorithm (Vilain et al., 1995) computes the F1 score (harmonic mean) of precision and recall based on the identifcation of unique coreference links. We use the official MUC scorer implementation for the two MUC corpora and an equivalent implementation for ACE. 
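The MUC link-based measure is compact enough to sketch here. The code below follows the Vilain et al. (1995) formulation, assuming key (gold) and response (system) chains are given as sets of mention identifiers; it is an illustration only, not the official scorer used in our experiments.

def muc_recall(key_chains, response_chains):
    # Vilain et al. (1995): each key chain S contributes |S| - |p(S)| correct links,
    # where p(S) is the partition of S induced by the response chains
    numer = denom = 0
    for chain in key_chains:
        if len(chain) < 2:
            continue  # singleton chains carry no links
        intersecting = sum(1 for r in response_chains if chain & r)
        unaligned = sum(1 for m in chain if not any(m in r for r in response_chains))
        numer += len(chain) - (intersecting + unaligned)
        denom += len(chain) - 1
    return numer / denom if denom else 0.0

def muc_f1(key_chains, response_chains):
    r = muc_recall(key_chains, response_chains)
    p = muc_recall(response_chains, key_chains)  # precision = recall with roles swapped
    return 2 * p * r / (p + r) if p + r else 0.0

# toy example: key chain {A, B, C}, response links only A-B
# muc_f1([{"A", "B", "C"}], [{"A", "B"}, {"C"}])  -> recall 0.5, precision 1.0, F1 0.67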
The B3 algorithm (Bagga and Baldwin, 1998) computes a precision and recall score for each CE: precision(ce) = |Rce ∩Kce|/|Rce| recall(ce) = |Rce ∩Kce|/|Kce|, where Rce is the coreference chain to which ce is assigned in the response (i.e. the system-generated output) and Kce is the coreference chain that contains ce in the key (i.e. the gold standard). Precision and recall for a set of documents are computed as the mean over all CEs in the documents and the F1 score of precision and recall is reported. B3 Complications. Unlike the MUC score, which counts links between CEs, B3 presumes that the gold standard and the system response are clusterings over the same set of CEs. This, of course, is not the case when the system automatically identifies the CEs, so the scoring algorithm requires a mapping between extracted and annotated CEs. We will use the term twin(ce) to refer to the unique annotated/extracted CE to which the extracted/annotated CE is matched. We say that a CE is twinless (has no twin) if no corresponding CE is identified. A twinless extracted CE signals that the resolver extracted a spurious CE, while an annotated CE is twinless when the resolver fails to extract it. Unfortunately, it is unclear how the B3 score should be computed for twinless CEs. Bengtson and Roth (2008) simply discard twinless CEs, but this solution is likely too lenient — it doles no punishment for mistakes on twinless annotated or extracted CEs and it would be tricked, for example, by a system that extracts only the CEs about which it is most confident. We propose two different ways to deal with twinless CEs for B3. One option, B3all, retains all twinless extracted CEs. It computes the preci4We also experimented with the CEAF score (Luo, 2005), but excluded it due to difficulties dealing with the extracted, rather than annotated, CEs. CEAF assigns a zero score to each twinless extracted CE and weights all coreference chains equally, irrespective of their size. As a result, runs with extracted CEs exhibit very low CEAF precision, leading to unreliable scores. sion as above when ce has a twin, and computes the precision as 1/|Rce| if ce is twinless. (Similarly, recall(ce) = 1/|Kce| if ce is twinless.) The second option, B30, discards twinless extracted CEs, but penalizes recall by setting recall(ce) = 0 for all twinless annotated CEs. Thus, B30 presumes that all twinless extracted CEs are spurious. Results. Table 3, box 1 shows the performance of RECONCILEACL09 using a default (0.5) coreference classifier threshold. The MUC score is highest for the MUC6 data set, while the four ACE data sets show much higher B3 scores as compared to the two MUC data sets. The latter occurs because the ACE data sets include singletons. The classification threshold, however, can be gainfully employed to control the trade-off between precision and recall. This has not traditionally been done in learning-based coreference resolution research — possibly because there is not much training data available to sacrifice as a validation set. Nonetheless, we hypothesized that estimating a threshold from just the training data might be effective. Our results (BASELINE box in Table 3) indicate that this indeed works well.5 With the exception of MUC6, results on all data sets and for all scoring algorithms improve; moreover, the scores approach those for runs using an optimal threshold (box 3) for the experiment as determined by using the test set. 
In all remaining experiments, we learn the threshold from the training set as in the BASELINE system. Below, we resume our investigation of the role of three coreference resolution subtasks and measure the impact of each on overall performance. 3.3 Named Entities Previous work has shown that resolving coreference between proper names is relatively easy (e.g. Kameyama (1997)) because string matching functions specialized to the type of proper name (e.g. person vs. location) are quite accurate. Thus, we would expect a coreference resolution system to depend critically on its Named Entity (NE) extractor. On the other hand, state-of-the-art NE taggers are already quite good, so improving this component may not provide much additional gain. To study the influence of NE recognition, we replace the system-generated NEs of 5All experiments sample uniformly from 1000 threshold values. 659 ReconcileACL09 MUC6 MUC7 ACE-2 ACE03 ACE04 ACE05 1. DEFAULT THRESHOLD (0.5) MUC 70.40 58.20 65.76 66.73 56.75 64.30 B3all 69.91 62.88 77.25 77.56 73.03 72.82 B30 68.55 62.80 76.59 77.27 72.99 72.43 2. BASELINE MUC 68.50 62.80 65.99 67.87 62.03 67.41 = THRESHOLD ESTIMATION B3all 70.88 65.86 78.29 79.39 76.50 73.71 B30 68.43 64.57 76.63 77.88 75.41 72.47 3. OPTIMAL THRESHOLD MUC 71.20 62.90 66.83 68.35 62.11 67.41 B3all 72.31 66.52 78.50 79.41 76.53 74.25 B30 69.49 64.64 76.83 78.27 75.51 72.94 4. BASELINE with MUC 69.90 66.37 70.35 62.88 67.72 perfect NEs B3all 72.31 78.06 80.22 77.01 73.92 B30 67.91 76.55 78.35 75.22 72.90 5. BASELINE with MUC 85.80* 81.10* 76.39 79.68 76.18 79.42 perfect CEs B3all 76.14 75.88 78.65 80.58 77.79 76.49 B30 76.14 75.88 78.65 80.58 77.79 76.49 6. BASELINE with MUC 82.20* 71.90* 86.63 85.58 83.33 82.84 anaphoric CEs B3all 72.52 69.26 80.29 79.71 76.05 74.33 B30 72.52 69.26 80.29 79.71 76.05 74.33 Table 3: Impact of Three Subtasks on Coreference Resolution Performance. A score marked with a * indicates that a 0.5 threshold was used because threshold selection from the training data resulted in an extreme version of the system, i.e. one that places all CEs into a single coreference chain. RECONCILEACL09 with gold-standard NEs and retrain the coreference classifier. Results for each of the data sets are shown in box 4 of Table 3. (No gold standard NEs are available for MUC7.) Comparison to the BASELINE system (box 2) shows that using gold standard NEs leads to improvements on all data sets with the exception of ACE2 and ACE05, on which performance is virtually unchanged. The improvements tend to be small, however, between 0.5 to 3 performance points. We attribute this to two factors. First, as noted above, although far from perfect, NE taggers generally perform reasonably well. Second, only 20 to 25% of the coreference element resolutions required for these data sets involve a proper name (see Section 4). Conclusion #1: Improving the performance of NE taggers is not likely to have a large impact on the performance of state-of-the-art coreference resolution systems. 3.4 Coreference Element Detection We expect CE detection to be an important subproblem for an end-to-end coreference system. Results for a system that assumes perfect CEs are shown in box 5 of Table 3. For these runs, RECONCILEACL09 uses only the annotated CEs for both training and testing. Using perfect CEs solves a large part of the coreference resolution task: the annotated CEs divulge anaphoricity information, perfect NP boundaries, and perfect information regarding the coreference relation defined for the data set. 
We see that focusing attention on all and only the annotated CEs leads to (often substantial) improvements in performance on all metrics over all data sets, especially when measured using the MUC score. Conclusion #2: Improving the ability of coreference resolvers to identify coreference elements would likely improve the state-of-the-art immensely — by 10-20 points in MUC F1 score and from 2-12 F1 points for B3. This finding explains previously published results that exhibit striking variability when run with annotated CEs vs. system-extracted CEs. On the MUC6 data set, for example, the best published MUC score using extracted CEs is approximately 71 (Yang et al., 2003), while multiple systems have produced MUC scores of approximately 85 when using annotated CEs (e.g. Luo et al. (2004), McCallum and Wellner (2004)). We argue that providing a resolver with the annotated CEs is a rather unrealistic evaluation: determining whether an NP is part of an annotated coreference chain is precisely the job of a coreference resolver! Conclusion #3: Assuming the availability of CEs unrealistically simplifies the coreference resolution task. 3.5 Anaphoricity Determination Finally, several coreference systems have successfully incorporated anaphoricity determination 660 modules (e.g. Ng and Cardie (2002a) and Bean and Riloff (2004)). The goal of the module is to determine whether or not an NP is anaphoric. For example, pleonastic pronouns (e.g. it is raining) are special cases that do not require coreference resolution. Unfortunately, neither the MUC nor the ACE data sets include anaphoricity information for all NPs. Rather, they encode anaphoricity information implicitly for annotated CEs: a CE is considered anaphoric if is not a singleton.6 To study the utility of anaphoricity information, we train and test only on the “anaphoric” extracted CEs, i.e. the extracted CEs that have an annotated twin that is not a singleton. Note that for the MUC datasets all extracted CEs that have twins are considered anaphoric. Results for this experiment (box 6 in Table 3) are similar to the previous experiment using perfect CEs: we observe big improvements across the board. This should not be surprising since the experimental setting is quite close to that for perfect CEs: this experiment also presumes knowledge of when a CE is part of an annotated coreference chain. Nevertheless, we see that anaphoricity infomation is important. First, good anaphoricity identification should reduce the set of extracted CEs making it closer to the set of annotated CEs. Second, further improvements in MUC score for the ACE data sets over the runs using perfect CEs (box 5) reveal that accurately determining anaphoricity can lead to substantial improvements in MUC score. ACE data includes annotations for singleton CEs, so knowling whether an annotated CE is anaphoric divulges additional information. Conclusion #4: An accurate anaphoricity determination component can lead to substantial improvement in coreference resolution performance. 4 Resolution Complexity Different types of anaphora that have to be handled by coreference resolution systems exhibit different properties. In linguistic theory, binding mechanisms vary for different kinds of syntactic constituents and structures. And in practice, empirical results have confirmed intuitions that different types of anaphora benefit from different classifier features and exhibit varying degrees of difficulty (Kameyama, 1997). 
However, performance 6Also, the first element of a coreference chain is usually non-anaphoric, but we do not consider that issue here. evaluations rarely include analysis of where stateof-the-art coreference resolvers perform best and worst, aside from general conclusions. In this section, we analyze the behavior of our coreference resolver on different types of anaphoric expressions with two goals in mind. First, we want to deduce the strengths and weaknesses of state-of-the-art systems to help direct future research. Second, we aim to understand why current coreference resolvers behave so inconsistently across data sets. Our hypothesis is that the distribution of different types of anaphoric expressions in a corpus is a major factor for coreference resolution performance. Our experiments confirm this hypothesis and we use our empirical results to create a coreference performance prediction (CPP) measure that successfully estimates the expected level of performance on novel data sets. 4.1 Resolution Classes We study the resolution complexity of a text corpus by defining resolution classes. Resolution classes partition the set of anaphoric CEs according to properties of the anaphor and (in some cases) the antecedent. Previous work has studied performance differences between pronominal anaphora, proper names, and common nouns, but we aim to dig deeper into subclasses of each of these groups. In particular, we distinguish between proper and common nouns that can be resolved via string matching, versus those that have no antecedent with a matching string. Intuitively, we expect that it is easier to resolve the cases that involve string matching. Similarly, we partition pronominal anaphora into several subcategories that we expect may behave differently. We define the following nine resolution classes: Proper Names: Three resolution classes cover CEs that are named entities (e.g. the PERSON, LOCATION, ORGANIZATION and DATE classes for MUC and ACE) and have a prior referent7 in the text. These three classes are distinguished by the type of antecedent that can be resolved against the proper name. (1) PN-e: a proper name is assigned to this exact string match class if there is at least one preceding CE in its gold standard coreference chain that exactly matches it. (2) PN-p: a proper name is assigned to this partial string match class if there is at least one preceding CE in its gold standard chain that has some content words in common. (3) PN-n: a proper name is assigned to this no string match 7We make a rough, but rarely inaccurate, assumption that there are no cataphoric expressions in the data. 661 MUC6 MUC7 ACE2 ACE03 ACE04 ACE05 Avg # % scr # % scr # % scr # % scr # % scr # % scr % scr PN-e 273 17 .87 249 19 .79 346 24 .94 435 25 .93 267 16 .88 373 31 .92 22 .89 PN-p 157 10 .68 79 6 .59 116 8 .86 178 10 .87 194 11 .71 125 10 .71 9 .74 PN-n 18 1 .18 18 1 .28 85 6 .19 79 4 .15 66 4 .21 89 7 .27 4 .21 CN-e 292 18 .82 276 21 .65 84 6 .40 186 11 .68 165 10 .68 134 11 .79 13 .67 CN-p 229 14 .53 239 18 .49 147 10 .26 168 10 .24 147 9 .40 147 12 .43 12 .39 CN-n 194 12 .27 148 11 .15 152 10 .50 148 8 .90 266 16 .32 121 10 .20 11 .18 1+2Pr 48 3 .70 65 5 .66 122 8 .73 76 4 .73 158 9 .77 51 4 .61 6 .70 G3Pr 160 10 .73 50 4 .79 181 12 .83 237 13 .82 246 14 .84 69 60 .81 10 .80 U3Pr 175 11 .49 142 11 .49 163 11 .45 122 7 .48 153 9 .49 91 7 .49 9 .48 Table 4: Frequencies and scores for each resolution class. class if no preceding CE in its gold standard chain has any content words in common with it. 
Common NPs: Three analogous string match classes cover CEs that have a common noun as a head: (4) CN-e (5) CN-p (6) CN-n. Pronouns: Three classes cover pronouns: (7) 1+2Pr: The anaphor is a 1st or 2nd person pronoun. (8) G3Pr: The anaphor is a gendered 3rd person pronoun (e.g. “she”, “him”). (9) U3Pr: The anaphor is an ungendered 3rd person pronoun. As noted above, resolution classes are defined for annotated CEs. We use the twin relationship to match extracted CEs to annotated CEs and to evaluate performance on each resolution class. 4.2 Scoring Resolution Classes To score each resolution class separately, we define a new variant of the MUC scorer. We compute a MUC-RC score (for MUC Resolution Class) for class C as follows: we assume that all CEs that do not belong to class C are resolved correctly by taking the correct clustering for them from the gold standard. Starting with this correct partial clustering, we run our classifier on all ordered pairs of CEs for which the second CE is of class C, essentially asking our coreference resolver to determine whether each member of class C is coreferent with each of its preceding CEs. We then count the number of unique correct/incorrect links that the system introduced on top of the correct partial clustering and compute precision, recall, and F1 score. This scoring function directly measures the impact of each resolution class on the overall MUC score. 4.3 Results Table 4 shows the results of our resolution class analysis on the test portions of the six data sets. The # columns show the frequency counts for each resolution class, and the % columns show the distributions of the classes in each corpus (i.e. 17% MUC6 MUC7 ACE2 ACE03 ACE04 ACE05 0.92 0.95 0.91 0.98 0.97 0.96 Table 5: Correlations of resolution class scores with respect to the average. of all resolutions in the MUC6 corpus were in the PN-e class). The scr columns show the MUCRC score for each resolution class. The right-hand side of Table 4 shows the average distribution and scores across all data sets. These scores confirm our expectations about the relative difficulty of different types of resolutions. For example, it appears that proper names are easier to resolve than common nouns; gendered pronouns are easier than 1st and 2nd person pronouns, which, in turn, are easier than ungendered 3rd person pronouns. Similarly, our intuition is confirmed that many CEs can be accurately resolved based on exact string matching, whereas resolving against antecedents that do not have overlapping strings is much more difficult. The average scores in Table 4 show that performance varies dramatically across the resolution classes, but, on the surface, appears to be relatively consistent across data sets. None of the data sets performs exactly the same, of course, so we statistically analyze whether the behavior of each resolution class is similar across the data sets. For each data set, we compute the correlation between the vector of MUC-RC scores over the resolution classes and the average vector of MUC-RC scores for the remaining five data sets. Table 5 contains the results, which show high correlations (over .90) for all six data sets. These results indicate that the relative performance of the resolution classes is consistent across corpora. 
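Assigning a CE to one of the nine resolution classes requires only surface information. The sketch below is an approximation of the definitions in Section 4.1, not our exact implementation: exact string match is approximated by identical content-word sets, the pronoun lists are abbreviated placeholders, and each anaphoric CE is assumed to come with its type and with the content words of the preceding CEs in its gold-standard chain.

FIRST_SECOND_PRONOUNS = {"i", "me", "my", "we", "us", "our", "you", "your"}  # abbreviated
GENDERED_THIRD = {"he", "him", "his", "she", "her", "hers"}                  # abbreviated

def resolution_class(ce_type, ce_words, antecedent_word_sets):
    # ce_type: "PN", "CN" or "PRONOUN"; ce_words: lowercased content words of the CE;
    # antecedent_word_sets: content words of each preceding CE in the gold chain
    if ce_type == "PRONOUN":
        word = next(iter(ce_words))          # assume a single-token pronoun
        if word in FIRST_SECOND_PRONOUNS:
            return "1+2Pr"
        return "G3Pr" if word in GENDERED_THIRD else "U3Pr"
    prefix = "PN" if ce_type == "PN" else "CN"
    if any(ce_words == ant for ant in antecedent_word_sets):
        return prefix + "-e"                 # exact string match
    if any(ce_words & ant for ant in antecedent_word_sets):
        return prefix + "-p"                 # partial match (shared content words)
    return prefix + "-n"                     # no string match

# e.g. resolution_class("PN", {"bank", "america"}, [{"america"}]) returns "PN-p"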
4.4 Coreference Performance Prediction Next, we hypothesize that the distribution of resolution classes in a corpus explains (at least partially) why performance varies so much from cor662 MUC6 MUC7 ACE2 ACE03 ACE04 ACE05 P 0.59 0.59 0.62 0.65 0.59 0.62 O 0.67 0.61 0.66 0.68 0.62 0.67 Table 6: Predicted (P) vs Observed (O) scores. pus to corpus. To explore this issue, we create a Coreference Performance Prediction (CPP) measure to predict the performance on new data sets. The CPP measure uses the empirical performance of each resolution class observed on previous data sets and forms a predicton based on the make-up of resolution classes in a new corpus. The distribution of resolution classes for a new corpus can be easily determined because the classes can be recognized superficially by looking only at the strings that represent each NP. We compute the CPP score for each of our six data sets based on the average resolution class performance measured on the other five data sets. The predicted score for each class is computed as a weighted sum of the observed scores for each resolution class (i.e. the mean for the class measured on the other five data sets) weighted by the proportion of CEs that belong to the class. The predicted scores are shown in Table 6 and compared with the MUC scores that are produced by RECONCILEACL09.8 Our results show that the CPP measure is a good predictor of coreference resolution performance on unseen data sets, with the exception of one outlier – the MUC6 data set. In fact, the correlation between predicted and observed scores is 0.731 for all data sets and 0.913 excluding MUC6. RECONCILEACL09’s performance on MUC6 is better than predicted due to the higher than average scores for the common noun classes. We attribute this to the fact that MUC6 includes annotations for nested nouns, which almost always fall in the CN-e and CN-p classes. In addition, many of the features were first created for the MUC6 data set, so the feature extractors are likely more accurate than for other data sets. Overall, results indicate that coreference performance is substantially influenced by the mix of resolution classes found in the data set. Our CPP measure can be used to produce a good estimate of the level of performance on a new corpus. 8Observed scores for MUC6 and 7 differ slightly from Table 3 because this part of the work did not use the OPTIONAL field of the key, employed by the official MUC scorer. 5 Related Work The bulk of the relevant related work is described in earlier sections, as appropriate. This paper studies complexity issues for NP coreference resolution using a “good”, i.e. near state-of-the-art, system. For state-of-the-art performance on the MUC data sets see, e.g. Yang et al. (2003); for state-ofthe-art performance on the ACE data sets see, e.g. Bengtson and Roth (2008) and Luo (2007). While other researchers have evaluated NP coreference resolvers with respect to pronouns vs. proper nouns vs. common nouns (Ng and Cardie, 2002b), our analysis focuses on measuring the complexity of data sets, predicting the performance of coreference systems on new data sets, and quantifying the effect of coreference system subcomponents on overall performance. In the related area of anaphora resolution, researchers have studied the influence of subsystems on the overall performance (Mitkov, 2002) as well as defined and evaluated performance on different classes of pronouns (e.g. Mitkov (2002) and Byron (2001)). 
However, due to the significant differences in task definition, available datasets, and evaluation metrics, their conclusions are not directly applicable to the full coreference task. Previous work has developed methods to predict system performance on NLP tasks given data set characteristics, e.g. Birch et al. (2008) does this for machine translation. Our work looks for the first time at predicting the performance of NP coreference resolvers. 6 Conclusions We examine the state-of-the-art in NP coreference resolution. We show the relative impact of perfect NE recognition, perfect anaphoricity information for coreference elements, and knowledge of all and only the annotated CEs. We also measure the performance of state-of-the-art resolvers on several classes of anaphora and use these results to develop a measure that can accurately estimate a resolver’s performance on new data sets. Acknowledgments. We gratefully acknowledge technical contributions from David Buttler and David Hysom in creating the Reconcile coreference resolution platform. This research was supported in part by the Department of Homeland Security under ONR Grant N0014-07-1-0152 and Lawrence Livermore National Laboratory subcontract B573245. 663 References A. Bagga and B. Baldwin. 1998. Algorithms for Scoring Coreference Chains. In In Linguistic Coreference Workshop at LREC 1998. Baldridge, J. 2005. The OpenNLP project. http://opennlp.sourceforge.net/. D. Bean and E. Riloff. 2004. Unsupervised Learning of Contextual Role Knowledge for Coreference Resolution. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL 2004). Eric Bengtson and Dan Roth. 2008. Understanding the Value of Features for Coreference Resolution. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 294–303. Association for Computational Linguistics. Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting Success in Machine Translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 745–754. Association for Computational Linguistics. Donna Byron. 2001. The Uncommon Denominator: A Proposal for Consistent Reporting of Pronoun Resolution Results. Computational Linguistics, 27(4):569–578. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating Typed Dependency Parses from Phrase Structure Parses. In LREC. J. Finkel, S. Dingare, H. Nguyen, M. Nissim, and C. Manning. 2004. Exploiting Context for Biomedical Entity Recognition: From Syntax to the Web. In Joint Workshop on Natural Language Processing in Biomedicine and its Applications at COLING 2004. Yoav Freund and Robert E. Schapire. 1999. Large Margin Classification Using the Perceptron Algorithm. In Machine Learning, pages 277–296. Megumi Kameyama. 1997. Recognizing Referential Links: An Information Extraction Perspective. In Workshop On Operational Factors In Practical Robust Anaphora Resolution For Unrestricted Texts. Xiaoqiang Luo, Abe Ittycheriah, Hongyan Jing, Nanda Kambhatla, and Salim Roukos. 2004. A Mention-Synchronous Coreference Resolution Algorithm Based on the Bell Tree. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics. X. Luo. 2005. On Coreference Resolution Performance Metrics. In Proceedings of the 2005 Human Language Technology Conference / Conference on Empirical Methods in Natural Language Processing. Xiaoqiang Luo. 2007. 
Coreference or Not: A Twin Model for Coreference Resolution. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL 2007). A. McCallum and B. Wellner. 2004. Conditional Models of Identity Uncertainty with Application to Noun Coreference. In 18th Annual Conference on Neural Information Processing Systems. Ruslan Mitkov. 2002. Anaphora Resolution. Longman, London. MUC-6. 1995. Coreference Task Definition. In Proceedings of the Sixth Message Understanding Conference (MUC-6), pages 335–344. MUC-7. 1997. Coreference Task Definition. In Proceedings of the Seventh Message Understanding Conference (MUC-7). V. Ng and C. Cardie. 2002a. Identifying Anaphoric and Non-Anaphoric Noun Phrases to Improve Coreference Resolution. In Proceedings of the 19th International Conference on Computational Linguistics (COLING 2002). V. Ng and C. Cardie. 2002b. Improving Machine Learning Approaches to Coreference Resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. NIST. 2004. The ACE Evaluation Plan. S. Petrov and D. Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of the Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT/NAACL 2007). W. Soon, H. Ng, and D. Lim. 2001. A Machine Learning Approach to Coreference of Noun Phrases. Computational Linguistics, 27(4):521–541. Veselin Stoyanov, Nathan Gilbert, Claire Cardie, Ellen Riloff, David Buttler, and David Hysom. 2009. Reconcile: A Coreference Resolution Research Platform. Computer Science Technical Report, Cornell University, Ithaca, NY. Kees van Deemter and Rodger Kibble. 2000. On Coreferring: Coreference in MUC and Related Annotation Schemes. Computational Linguistics, 26(4):629–637. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A Model-Theoretic Coreference Scoring Theme. In Proceedings of the Sixth Message Understanding Conference (MUC-6). Xiaofeng Yang, Guodong Zhou, Jian Su, and Chew Lim Tan. 2003. Coreference Resolution Using Competition Learning Approach. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 176–183. 664
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 665–673, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Novel Discourse Parser Based on Support Vector Machine Classification David A. duVerle National Institute of Informatics Tokyo, Japan Pierre & Marie Curie University Paris, France [email protected] Helmut Prendinger National Institute of Informatics Tokyo, Japan [email protected] Abstract This paper introduces a new algorithm to parse discourse within the framework of Rhetorical Structure Theory (RST). Our method is based on recent advances in the field of statistical machine learning (multivariate capabilities of Support Vector Machines) and a rich feature space. RST offers a formal framework for hierarchical text organization with strong applications in discourse analysis and text generation. We demonstrate automated annotation of a text with RST hierarchically organised relations, with results comparable to those achieved by specially trained human annotators. Using a rich set of shallow lexical, syntactic and structural features from the input text, our parser achieves, in linear time, 73.9% of professional annotators’ human agreement F-score. The parser is 5% to 12% more accurate than current state-of-the-art parsers. 1 Introduction According to Mann and Thompson (1988), all well-written text is supported by a hierarchically structured set of coherence relations which reflect the authors intent. The goal of discourse parsing is to extract this high-level, rhetorical structure. Dependency parsing and other forms of syntactic analysis provide information on the grammatical structure of text at the sentential level. Discourse parsing, on the other hand, focuses on a higher-level view of text, allowing some flexibility in the choice of formal representation while providing a wide range of applications in both analytical and computational linguistics. Rhetorical Structure Theory (Mann and Thompson, 1988) provides a framework to analyze and study text coherence by defining and applying a set of structural relations to composing units (‘spans’) of text. Annotation of a text within the RST formalism will produce a tree-like structure that not only reflects text-coherence but also provides input for powerful algorithmic tools for tasks such as text regeneration (Piwek et al., 2007). RST parsing can be seen as a two-step process: 1. Segmentation of the input text into elementary discourse units (‘edus’). 2. Generation of the rhetorical structure tree based on ‘rhetorical relations’ (or ‘coherence relations’) as labels of the tree, with the edus constituting its terminal nodes. Mann and Thompson (1988) empirically established 110 distinct rhetorical relations, but pointed out that this set was flexible and open-ended. In addition to rhetorical relations, RST defines the notion of ‘nucleus’, the relatively more important part of the text, and ‘satellite’, which is subordinate to the nucleus. In Fig. 1, the leftmost edu constitutes the satellite (indicated by out-going arrow), and the right-hand statement constitutes the nucleus. Observe that the nucleus itself is a compound of nucleus and satellite. Several attempts to automate discourse parsing have been made. Marcu and Soricut focussed on sentence-level parsing and developed two probabilistic models that use syntactic and lexical information (Soricut and Marcu, 2003). 
Although their algorithm, called ‘SPADE’, does not produce full-text parse, it demonstrates a correlation between syntactic and discourse information, and their use to identify rhetorical relations even if no signaling cue words are present. 665 R TEMPORAL After plummeting 1.8% at one point during the day, CONTRAST the composite rebounded a little, but finished down 5.52, at 461.70. Figure 1: Example of a simple RST tree (Source: RST Discourse Treebank (Carlson et al., 2001), wsj0667). To the best of our knowledge, Reitter’s (2003b) was the only previous research based exclusively on feature-rich supervised learning to produce text-level RST discourse parse trees. However, his full outline for a working parser, using chartparsing-style techniques, was never implemented. LeThanh et al. (2004) proposed a multi-step algorithm to segment and organize text spans into trees for each successive level of text organization: first at sentence level, then paragraph and finally text. The multi-level approach taken by their algorithm mitigates the combinatorial explosion effect without treating it entirely. At the text-level, and despite the use of beam search to explore the solution space, the algorithm needs to produce and score a large number of trees in order to extract the best candidate, leading, in our experience, to impractical calculation times for large input. More recently, Baldridge and Lascarides (2005) successfully implemented a probabilistic parser that uses headed trees to label discourse relations. Restricting the scope of their research to texts in dialog form exclusively, they elected to use the more specific framework of Segmented Discourse Representation Theory (Asher and Lascarides, 2003) instead of RST. In this paper, we advanced the state-of-the-art in general discourse parsing, with an implemented solution that is computationally efficient and sufficiently accurate for use in real-time interactive applications. The rest of this paper is organized as follows: Section 2 describes the general architecture of our system along with the choices we made with regard to supervised learning. Section 3 explains the different characteristics of the input text used to train our system. Section 4 presents our results, and Section 5 concludes the paper. 2 Building a Discourse Parser 2.1 Assumptions and Restrictions In our work, we focused exclusively on the second step of the discourse parsing problem, i.e., constructing the RST tree from a sequence of edus that have been segmented beforehand. The motivation for leaving aside segmenting were both practical – previous discourse parsing efforts (Soricut and Marcu, 2003; LeThanh et al., 2004) already provide alternatives for standalone segmenting tools – and scientific, namely, the greater need for improvements in labeling. Current state-of-the-art results in automatic segmenting are much closer to human levels than full structure labeling (Fscore ratios of automatic performance over gold standard reported in LeThanh et al. (2004): 90.2% for segmentation, 70.1% for parsing). Another restriction is to use the reduced set of 18 rhetorical relations defined in Carlson et al. (2001) and previously used by Soricut and Marcu (2003). In this set, the 75 relations originally used in the RST Discourse Treebank (RST-DT) corpus (Carlson et al., 2001) are partitioned into 18 classes according to rhetorical similarity (e.g.: PROBLEMSOLUTION, QUESTION-ANSWER, STATEMENTRESPONSE, TOPIC-COMMENT and COMMENTTOPIC are all grouped under one TOPICCOMMENT relation). 
In accord with previous research (Soricut and Marcu, 2003; Reitter, 2003b; LeThanh et al., 2004), we turned all nary rhetorical relations into nested binary relations (a trivial graph transformation), resulting in more algorithmically manageable binary trees. Finally, we assumed full conformity to the ‘Principle of sequentiality’ (Marcu, 2000), which guarantees that only adjacent spans of text can be put in relation within an RST tree, and drastically reduces the size of the solution space. 2.2 Support Vector Machines At the core of our system is a set of classifiers, trained through supervised-learning, which, given two consecutive spans (atomic edus or RST sub-trees) in an input document, will score the likelihood of a direct structural relation as well as probabilities for such a relation’s label and nuclearity. Using these classifiers and a straightforward bottom-up tree-building algorithm, we can produce a valid tree close to human cross666 validation levels (our gold standard) in linear timecomplexity (see Fig. 2). SVM Classification Training Corpus (RST-TB) Test Corpus Segmentation (SPADE) Penn Treebank Tokenized EDUs EDUs Lexicalized Syntax Trees Syntax Parsing (Charniak's nlparse) Syntax Trees Lexicalization Lexicalization Lexicalized Syntax Trees Syntax Trees Alignment Feature Extraction Alignment Feature Extraction SVM Training SVM Models (Binary and Multiclass) Bottom-up Tree Construction Scored RS sub-trees Rhetorical Structure Tree Tokenization Tokenized EDUs Figure 2: Full system workflow. In order to improve classification accuracy, it is convenient to train two separate classifiers: • S: A binary classifier, for structure (existence of a connecting node between the two input sub-trees). • L: A multi-class classifier, for rhetorical relation and nuclearity labeling. Using our reduced set of 18 superrelations and considering only valid nuclearity options (e.g., (ATTRIBUTION, N, S) and (ATTRIBUTION, S, N), but not (ATTRIBUTION, N, N), as ATTRIBUTION is a purely hypotactic relation group), we come up with a set of 41 classes for our algorithm. Support Vector Machines (SVM) (Vapnik, 1995) are used to model classifiers S and L. SVM refers to a set of supervised learning algorithms that are based on margin maximization. Given our specific type of classification problem, SVMs offer many properties of particular interest. First, as maximum margin classifiers, they sidestep the common issue of overfitting (Scholkopf et al., 1995), and ensure a better control over the generalization error (limiting the impact of using homogeneous newspaper articles that could carry important biases in prose style and lexical content). Second, SVMs offer more resilience to noisy input. Third, depending on the parameters used (see the use of kernel functions below), training time complexity’s dependence on feature vector size is low, in some cases linear. This makes SVM well-fitted to treat classification problems involving relatively large feature spaces such as ours (≈ 105 features). Finally, while most probabilistic classifiers, such as Naive Bayes, strongly assume feature independence, SVMs achieve very good results regardless of input correlations, which is a desirable property for language-related tasks. SVM algorithms make use of the ‘kernel trick’ (Aizerman et al., 1964), a method for using linear classifiers to solve non-linear problems. Kernel methods essentially map input data to a higher-dimensional space before attempting to classify them. 
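As a toy illustration of this point (not part of our system; the scikit-learn library is used here purely for convenience), data that are not linearly separable in their original space, such as two concentric rings, are handled almost perfectly by an RBF kernel while a linear kernel stays near chance level, at the cost of longer training:

from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# two concentric rings: no linear separator exists in the original 2-d space
X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(kernel, "kernel: mean accuracy =", round(acc, 2))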
The choice of a fitting kernel function requires careful analysis of the data and must weigh the effects on both performance and training time. A compromise needs to be found during evaluation between the general efficiency of non-linear kernels (such as polynomial or Radial Basis Function) and low time-complexity of using a linear function (see Sect. 4). Because the original SVM algorithms build binary classifiers, multi-label classification requires some adaptation. A possible approach is to reduce the multi-classification problem through a set of binary classifiers, each trained either on a single class (“one vs. all”) or by pair (“one vs. one”). Recent research suggests keeping the classification whole, with a reformulation of the original optimization problem to accommodate multiple labels (“C & S”) (Crammer and Singer, 2002). 2.3 Input Data and Feature Extraction Both S and L classifiers are trained using manually annotated documents taken from the RST-DT corpus. Optimal parameters (when applicable) for each kernel function are obtained through automated grid search with n-fold crossvalidation (Staelin, 2003) on the training corpus, while a separate test set is used for performance evaluation. In training mode, classification instances are built by parsing manually annotated trees from the RST-DT corpus paired with lexicalized syntax trees (LS Trees) for each sentence (see Sect. 3). Syntax trees are taken 667 directly from the Penn Treebank corpus (which covers a superset of the RST-DT corpus), then “lexicalized” (i.e. tagged with lexical “heads” on each internal node of the syntactic tree) using a set of canonical head-projection rules (Magerman, 1995; Collins, 2003). Due to small differences in the way they were tokenized and pre-treated, rhetorical tree and LST are rarely a perfect match: optimal alignment is found by minimizing edit distances between word sequences. 2.4 Tree-building Algorithm By repeatedly applying the two classifiers and following a naive bottom-up tree-construction method, we are able to obtain a globally satisfying RST tree for the entire text with excellent timecomplexity. The algorithm starts with a list of all atomic discourse sub-trees (made of single edus in their text order) and recursively selects the best match between adjacent sub-trees (using binary classifier S), labels the newly created sub-tree (using multilabel classifier L) and updates scoring for S, until only one sub-tree is left: the complete rhetorical parse tree for the input text. It can be noted that, thanks to the principle of sequentiality (see Sect. 2.1), each time two sub-trees are merged into a new sub-tree, only connections with adjacent spans on each side are affected, and therefore, only two new scores need to be computed. Since our SVM classifiers work in linear time, the overall time-complexity of our algorithm is O(n). 3 Features Instrumental to our system’s performance is the choice of a set of salient characteristics (“features”) to be used as input to the SVM algorithm for training and classification. Once the features are determined, classification instances can be formally represented as a vector of values in R. We use n-fold validation on S and L classifiers to assess the impact of some sets of features on general performance and eliminate redundant features. However, we worked under the (verified) assumption that SVMs’ capacity to handle highdimensional data and resilience to input noise limit the negative impact of non-useful features. 
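Before turning to the individual features, the bottom-up procedure of Section 2.4 can be rendered schematically as follows. This is an illustrative sketch, not the actual implementation: the best adjacent pair is found here by a linear scan for brevity, while only the two scores adjacent to a merge ever need to be recomputed.

def build_rst_tree(edus, s_score, l_label):
    # edus: elementary discourse units in text order
    # s_score(left, right): likelihood that two adjacent spans are directly related
    # l_label(left, right): (relation, nuclearity) predicted for the new node
    subtrees = [("LEAF", e) for e in edus]
    scores = [s_score(subtrees[i], subtrees[i + 1]) for i in range(len(subtrees) - 1)]
    while len(subtrees) > 1:
        i = max(range(len(scores)), key=scores.__getitem__)      # best adjacent pair
        relation, nuclearity = l_label(subtrees[i], subtrees[i + 1])
        node = (relation, nuclearity, subtrees[i], subtrees[i + 1])
        subtrees[i:i + 2] = [node]                               # merge the two spans
        del scores[i]                                            # drop the used link
        if i > 0:                                                # re-score left neighbour
            scores[i - 1] = s_score(subtrees[i - 1], subtrees[i])
        if i < len(scores):                                      # re-score right neighbour
            scores[i] = s_score(subtrees[i], subtrees[i + 1])
    return subtrees[0]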
In the following list of features, obtained empirically by trial-and-error, features suffixed by ‘S[pan]’ are sub-tree-specific features, symmetrically extracted from both left and right candidate spans. Features suffixed by ‘F[ull]’ are a function of the two sub-trees considered as a pair. Multilabel features are turned into sets of binary values and trees use a trivial fixed-length binary encoding that assumes fixed depth. 3.1 Textual Organization As evidenced by a number of discourse-parsing efforts focusing on intra-sentential parsing (Marcu, 2000; Soricut and Marcu, 2003), there is a strong correlation between different organizational levels of textual units and sub-trees of the RST tree both at the sentence-level and the paragraph level. Although such correspondences are not a rule (sentences and particularly paragraphs, can often be found split across separate sub-trees), they provide valuable high-level clues, particularly in the task of scoring span relation priority (classifier S): Ex.: “Belong to same sentence”F, “Belong to same paragraph”F, “Number of paragraph boundaries”S, “Number of sentence boundaries”S... As pointed out by Reitter (Reitter, 2003a), we can hypothesize a correlation between span length and some relations (for example, the satellite in a CONTRAST relation will tend to be shorter than the nucleus). Therefore, it seems useful to encode different measures of span size and positioning, using either tokens or edus as a distance unit: Ex.: “Length in tokens”S, “Length in edus”S, “Distance to beginning of sentence in tokens”S, “Size of span over sentence in edus”S, “Distance to end of sentence in tokens”S... In order to better adjust to length variations between different types of text, some features in the above set are duplicated using relative, rather than absolute, values for positioning and distance. 3.2 Lexical Clues and Punctuation While not always present, discourse markers (connectives, cue-words or cue-phrases, etc) have been shown to give good indications on discourse structure and labeling, particularly at the sentencelevel (Marcu, 2000). We use an empirical ngram dictionary (for n ∈{1, 2, 3}) built from the training corpus and culled by frequency. As an advantage over explicit cue-words list, this method 668 also takes into account non-lexical signals such as punctuation and sentence/paragraph boundaries (inserted as artificial tokens in the original text during input formatting) which would otherwise necessitate a separate treatment. We counted and encoded n-gram occurrences while considering only the first and last n tokens of each span. While raising the encoding size compared to a “bag of words” approach, this gave us significantly better performance (classifier accuracy improved by more than 5%), particularly when combined with main constituent features (see Sect. 3.5 below). This is consistent with the suggestion that most meaningful rhetorical signals are located on the edge of the span (Schilder, 2002). We validated this approach by comparing it to results obtained with an explicit list of approximately 300 discourse-signaling cuephrases (Oberlander et al., 1999): performance when using the list of cue-phrases alone was substantially lower than n-grams. 3.3 Simple Syntactic Clues In order to complement signal detection and to achieve better generalization (smaller dependency on lexical content), we opted to add shallow syntactic clues by encoding part-of-speech (POS) tags for both prefix and suffix in each span. 
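A rough sketch of the span-level features of Sects. 3.1–3.2: length and position counts plus n-grams (n ≤ 3) restricted to the first and last three tokens of each span. The boundary markers, feature names and the absence of frequency culling are simplifying assumptions of this sketch.

```python
# Spans are assumed to be token lists in which sentence/paragraph boundaries
# appear as artificial tokens such as "<s>" and "<p>" (Sect. 3.2).
def span_features(tokens, prefix):
    feats = {prefix + "_len_tokens": len(tokens),
             prefix + "_n_sent_bounds": tokens.count("<s>"),
             prefix + "_n_para_bounds": tokens.count("<p>")}
    edges = {"first": tokens[:3], "last": tokens[-3:]}
    for side, window in edges.items():
        for n in (1, 2, 3):
            for i in range(len(window) - n + 1):
                gram = " ".join(window[i:i + n])
                feats["%s_%s_%dgram=%s" % (prefix, side, n, gram)] = 1
    return feats

def pair_features(left_tokens, right_tokens, same_sentence, same_paragraph):
    feats = {"same_sentence": int(same_sentence),
             "same_paragraph": int(same_paragraph)}
    feats.update(span_features(left_tokens, "L"))
    feats.update(span_features(right_tokens, "R"))
    return feats
```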
Using prefixes or suffixes of length higher than n = 3 did not seem to improve performance significantly. 3.4 Dominance Sets A promising concept introduced by Soricut and Marcu (2003) in their sentence-level parser is the identification of ‘dominance sets’ in the syntax parse trees associated with each input sentence. For example, it could be difficult to correctly identify the scope of the ATTRIBUTION relation in the example shown in Fig. 3:
[Shoney’s Inc. said]1A [it will report a write-off of $2.5 million, or seven cents a share, for its fourth quarter]1B [ended yesterday.]1C (wsj0667)
Figure 3: Two possible RST parses for a sentence (the two candidate trees, attaching ATTRIBUTION either above or below the ELABORATION relation, are omitted here).
By using the associated syntax tree and studying the sub-trees spanned by each edu (see Fig. 4), it is possible to quickly infer a logical nesting order (“dominance”) between them: 1A > 1B > 1C. This order allows us to favor the relation between 1B and 1C over a relation between 1A and 1B, and thus helps us to make the right structural decision and pick the right-hand tree in Fig. 3. In addition to POS tags around the frontier between each dominance set (see colored nodes in Fig. 4), Soricut and Marcu (2003) note that in order to achieve good results on relation labeling, it is necessary to also consider lexical information (obtained through head-word projection of terminal nodes to higher internal nodes). Based on this definition of dominance sets, we include a set of syntactic, lexical and tree-structural features that aim at a good approximation of Soricut and Marcu’s rule-based analysis of dominance sets while keeping parsing complexity low. Ex.: “Distance to root of the syntax tree”S, “Distance to common ancestor in the syntax tree”S, “Dominating node’s lexical head in span”S, “Common ancestor’s POS tag”F, “Common ancestor’s lexical head”F, “Dominating node’s POS tag”F (diamonds in Figure 4), “Dominated node’s POS tag”F (circles in Figure 4), “Dominated node’s sibling’s POS tag”F (rectangles in Figure 4), “Relative position of lexical head in sentence”S... 3.5 Strong Compositionality Criterion We make use of Marcu’s ‘Strong Compositionality Criterion’ (Marcu, 1996) through a very simple and limited set of features, replicating shallow lexical and syntactic features (previously described in Sections 3.2 and 3.3) on a single representative edu (dubbed main constituent) for each span. Main constituents are selected recursively using nuclearity information. We purposely keep the number of features extracted from main constituents comparatively low (therefore limiting the extra dimensionality cost), as we believe our use of rhetorical sub-structures ultimately encodes a variation of Marcu’s compositionality criterion (see Sect. 3.6). 3.6 Rhetorical Sub-structure A large majority of the features considered so far focus exclusively on sentence-level information.
Figure 4: Using dominance sets to prioritize structural relations (the lexicalized syntax tree for the example sentence is omitted here). Circled nodes define dominance sets, and studying the frontiers between circles and diamonds gives us a dominance order between the three sub-trees considered: 1A > 1B > 1C. Head words obtained through partial lexicalization are shown in parentheses.
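The head-word projection ("lexicalization") referred to in the Figure 4 caption and in Sect. 2.3 can be sketched as follows. The single head rule used here (rightmost verbal child, else leftmost child) and the tree encoding are deliberately crude stand-ins for the canonical head-projection tables of Magerman (1995) and Collins (2003).

```python
# Sketch of head-word projection over a constituency tree.  The tree is a
# nested list [label, child1, child2, ...]; leaves are (POS, word) tuples.
def lexicalize(node):
    """Return (label, head_word, lexicalized_children)."""
    if isinstance(node, tuple):              # leaf: (POS, word)
        pos, word = node
        return (pos, word, [])
    label, children = node[0], [lexicalize(c) for c in node[1:]]
    head = None
    for child in reversed(children):         # prefer a verbal child as head
        if child[0].startswith("V"):
            head = child[1]
            break
    if head is None:
        head = children[0][1]                # default: leftmost child's head
    return (label, head, children)

# Abridged fragment of the Shoney's example from Fig. 3/4.
example = ["S", ["NP", ("NNP", "Shoney"), ("NNP", "Inc.")],
                ["VP", ("VBD", "said"), ["SBAR", ("PRP", "it"), ("MD", "will")]]]
print(lexicalize(example)[1])   # -> "said"
```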
In order to efficiently label higher-level relations, we need more structural features that can guide good classification decisions on large spans. Hence the idea of encoding each span’s rhetorical sub-tree into the feature vector seems natural. Besides the role of nuclearity in the sub-structure implied by Marcu’s compositionality criterion (see Sect. 3.5), we expect to see certain correlations between the relation being classified and relation patterns in either sub-tree, based on theoretical considerations and practical observations. The original RST theory suggests the use of ‘schemas’ as higher-order patterns of relations motivated by linguistic theories and verified through empirical analysis of annotated trees (Mann and Thompson, 1988). In addition, some level of correlation between relations at different levels of the tree can be informally observed throughout the corpus. This is trivially the case for n-ary relations such as LIST, which have been binarized in our representation: the presence of several LIST relations in the rightmost nodes of a sub-tree greatly increases the probability that the parent relation is itself a LIST. 4 Evaluation 4.1 General Considerations In looking to evaluate the performance of our system, we had to work with a number of constraints and difficulties tied to variations in the methodologies used across past work, as well as a lack of consensus with regard to a common evaluation corpus. In order to accommodate these divergences while providing figures to evaluate both relative and absolute performance of our algorithm, we used three different test sets. Absolute performance is measured on the official test subset of the RST-DT corpus. A similarly available subset of doubly-annotated documents from the RST-DT is used to compare results with human agreement on the same task. Lastly, performance against past algorithms is evaluated with another subset of the RST-DT, such as used by LeThanh et al. (2004) in their own evaluation. 4.2 Raw SVM Classification Although our final goal is to achieve good performance on the entire tree-building task, a useful intermediate evaluation of our system can be conducted by measuring the raw performance of the SVM classifiers. Binary classifier S is trained on 52,683 instances (split approximately 1/3, 2/3 between positive and negative examples), extracted from 350 documents, and tested on 8,558 instances extracted from 50 documents. The feature space dimension is 136,987. Classifier L is trained on 17,742 instances (labeled across 41 classes) and tested on 2,887 instances, of the same dimension as for S.
Classifier:      Binary (S)                          Multi-label (L)               Reitter
Kernel           Linear      Polyn.     RBF          Linear           RBF          RBF
Software         liblinear   svmlight   svmlight     svmmulticlass    libsvm       svmlight
Multi-label      --          --         --           C&S              1 vs. 1      1 vs. all
Training time    21.4s       5m53s      12m          15m              23m          216m
Accuracy         82.2        85.0       82.9         65.8             66.8         61.0
Table 1: SVM classifier performance. Regarding ‘Multi-label’, see Sect. 2.2.
The noticeably good performance of linear kernel methods in Table 1, compared to the more complex polynomial and RBF kernels, indicates that our data separates fairly well linearly: a commonly observed effect of high-dimensional input (Chen et al., 2007) such as ours (> 100,000 features).
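The kernel and parameter choices summarized in Table 1 rest on the grid search with cross-validation mentioned in Sect. 2.3. A minimal sketch with scikit-learn's GridSearchCV is shown below; the parameter ranges, and the use of scikit-learn rather than the svmlight/libsvm/liblinear tools listed in Table 1, are assumptions made for illustration.

```python
# Sketch of kernel/parameter selection by cross-validated grid search.
# X: sparse feature matrix, y: class labels (either the S or the L task).
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]},
]
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy", n_jobs=-1)
# search.fit(X, y)
# print(search.best_params_, search.best_score_)
```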
A baseline for absolute comparison on the multi-label classification task is given by Reitter (2003a) with a similar classifier, which assumes perfect segmentation of the input, as ours does. Reitter’s accuracy of 61% corresponds to a smaller set of training instances (7,976 instances from 240 documents, compared to 17,742 instances in our case) but considerably fewer classes (16 rhetorical relation labels with no nuclearity, as opposed to our 41 nuclearized relation classes). Based on these differences, this sub-component of our system, with an accuracy of 66.8%, seems to perform well. Taking into account matters of performance and runtime complexity, we selected a linear kernel for S and an optimally parameterized RBF kernel for L, using modified versions of the liblinear and libsvm software packages. All further evaluations noted here were conducted with these. 4.3 Full System Performance A measure of our full system’s performance is obtained by comparing the structure and labeling of the RST tree produced by our algorithm to those obtained through manual annotation (our gold standard). Standard performance indicators for such a task are precision, recall and F-score as measured by the PARSEVAL metrics (Black et al., 1991), with the specific adaptations to the case of RST trees made by Marcu (2000, pages 143–144). Our first evaluation (see Table 2) was conducted using the standard test subset of 41 files provided by the RST-DT corpus. In order to more accurately compare our results to the gold standard (defined as manual agreement between human annotators), we also evaluated performance using the 52 doubly-annotated files present in the RST-DT as test set (see Table 3). In each case, the remaining 340–350 files are used for training. For each corpus evaluation, the system is run twice: once using perfectly segmented input (taken from the RST-DT), and once using the output of the SPADE segmenter (Soricut and Marcu, 2003). The first measure gives us a good idea of our system’s optimal performance (given optimal input), while the other gives us a more real-world evaluation, apt for comparison with other systems. In each case, parse trees are evaluated using the four following, increasingly complex, matching criteria: blank tree structure (‘S’), tree structure with nuclearity (‘N’), tree structure with rhetorical relations (‘R’) and, our final goal, fully labeled structure with both nuclearity and rhetorical relation labels (‘F’).
Segmentation     Manual                       SPADE
                 S      N      R      F       S      N      R      F
Precision        83.0   68.4   55.3   54.8    69.5   56.1   44.9   44.4
Recall           83.0   68.4   55.3   54.8    69.2   55.8   44.7   44.2
F-Score          83.0   68.4   55.3   54.8    69.3   56.0   44.8   44.3
Table 2: Discourse-parser evaluation depending on segmentation, using the standard test subset.
                 System performance                                                Human agreement
Segmentation     Manual                       SPADE
                 S      N      R      F       S      N      R      F       S      N      R      F
Precision        84.1   70.6   55.6   55.1    70.6   58.1   46.0   45.6    88.0   77.5   66.0   65.2
Recall           84.1   70.6   55.6   55.1    71.2   58.6   46.4   46.0    88.1   77.6   66.1   65.3
F-Score          84.1   70.6   55.6   55.1    70.9   58.3   46.2   45.8    88.1   77.5   66.0   65.3
Table 3: Comparison to human agreement depending on segmentation, using the doubly-annotated subset. Note: when using perfect segmentation, precision and recall are identical since both trees have the same number of constituents.
4.4 Comparison with other Algorithms To the best of our knowledge, only two fully functional text-level discourse parsing algorithms for general text have published their results: Marcu’s decision-tree-based parser (Marcu, 2000) and the multi-level rule-based system built by LeThanh et al. (2004).
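The PARSEVAL-style criteria just described, which are also reused for the comparison with these two systems below, can be sketched as follows. The sketch assumes each tree has been flattened into a set of constituents (start_edu, end_edu, nuclearity, relation); the precise constituent convention follows Marcu's adaptation and is simplified here.

```python
# Sketch of PARSEVAL-style scoring for RST trees under the four criteria.
def prf(gold, pred, project):
    g = {project(c) for c in gold}
    p = {project(c) for c in pred}
    tp = len(g & p)
    precision = tp / len(p) if p else 0.0
    recall = tp / len(g) if g else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

CRITERIA = {
    "S": lambda c: (c[0], c[1]),          # blank structure: spans only
    "N": lambda c: (c[0], c[1], c[2]),    # spans + nuclearity
    "R": lambda c: (c[0], c[1], c[3]),    # spans + relation
    "F": lambda c: c,                     # fully labeled
}

def evaluate(gold_constituents, pred_constituents):
    return {name: prf(gold_constituents, pred_constituents, proj)
            for name, proj in CRITERIA.items()}
```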
For each of these two systems, evaluation was conducted on a different corpus, using unavailable documents in Marcu’s case and a selection of 21 documents from the RST-DT (distinct from the RST-DT’s test set) in LeThanh’s. We therefore retrained and evaluated our classifier, using LeThanh’s set of 21 documents as the testing subset (and the rest for training), and compared performance (see Table 4). In order to achieve the most uniform conditions possible, we use LeThanh’s results on 14 classes (Marcu’s use 15, ours 18) and select SPADE segmentation figures for both our system and Marcu’s (LeThanh’s system uses its own segmenter and does not provide figures for perfectly segmented input).
                 Structure               Nuclearity              Relations
Algorithm        M      lT     dV        M      lT     dV        M      lT     dV
Precision        65.8   54.5   72.4      54.0   47.8   57.8      34.3   40.5   47.8
Recall           34.0   52.9   73.3      21.6   46.4   58.5      13.0   39.3   48.4
F-score          44.8   53.7   72.8      30.9   47.1   58.1      18.8   39.9   48.1
Table 4: Side-by-side comparison of text-level algorithms: Marcu (M), LeThanh et al. (lT) and ours (dV).
Some discrepancies between reported human-agreement F-scores suggest that, despite our best efforts, the evaluation metrics used by each author might differ. Another explanation may lie in discrepancies between the training/testing subsets used. In order to take into account possibly varying levels of difficulty between corpora, we therefore divided each F-score by the value for human agreement as measured by each author (see Table 5). This ratio should give a fairer measure of success for each algorithm, taking into account how well it succeeds in reaching near-human level.
                                 Structure               Nuclearity              Relations
Algorithm                        M      lT     dV        M      lT     dV        M      lT     dV
F-score(algo) / F-score(human)   56.0   73.9   83.0      42.9   71.8   75.6      25.7   70.1   73.9
Table 5: Performance scaled by human-agreement scores: Marcu (M), LeThanh et al. (lT) and ours (dV).
Table 5 shows 83%, 75.6% and 73.9% of human-agreement F-scores in structure, nuclearity and relation parsing, respectively. Qualified by the (practical) problems of establishing comparison conditions with scientific rigor, the scores indicate that our system outperforms the previous state of the art (LeThanh’s 73.9%, 71.8% and 70.1%). As suggested by previous research (Soricut and Marcu, 2003), these scores could likely be further improved with the use of better-performing segmentation algorithms. It can, however, be noted that our system seems considerably less sensitive to imperfect segmentation than previous efforts. For instance, when switching from manual segmentation to automatic, our performance decreases by 12.3% and 12.9% (respectively, for structure and relation F-scores), compared to 46% and 67% for Marcu’s system (LeThanh’s performance on perfect input is unknown). 5 Conclusions and Future Work In this paper, we have shown that it is possible to build an accurate automatic text-level discourse parser based on supervised machine-learning algorithms, using a feature-driven approach and a manually annotated corpus. Importantly, our system achieves its accuracy in linear complexity of the input size, with excellent runtime performance: the entire test subset of the RST-DT corpus could be fully annotated in a matter of minutes. This opens the way to many novel applications in real-time natural language processing and generation, such as the RST-based transformation of monological text into dialogues acted by virtual agents in real time (Hernault et al., 2008). Future directions for this work notably include a better tree-building algorithm, with improved exploration of the solution space.
Borrowing techniques from generic global optimization metaalgorithms such as simulated annealing (Kirkpatrick et al., 1983) should allow us to better deal with issues of local optimality while retaining acceptable time-complexity. A complete online discourse parser, incorporating the parsing tool presented above combined with a new segmenting method has since been made freely available at http://nlp. prendingerlab.net/hilda/. Acknowledgements This project was jointly funded by Prendinger Lab (NII, Tokyo) and the National Institute for Informatics (Tokyo), as part of a MOU (Memorandum of Understanding) program with Pierre & Marie Curie University (Paris). 672 References M.A. Aizerman, E.M. Braverman, and L.I. Rozonoer. 1964. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25(6):821–837. N. Asher and A. Lascarides. 2003. Logics of conversation. Cambridge University Press. J. Baldridge and A. Lascarides. 2005. Probabilistic head-driven parsing for discourse structure. In Proceedings of the Ninth Conference on Computational Natural Language Learning, volume 96, page 103. E. Black, S. Abney, S. Flickenger, C. Gdaniec, C. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, et al. 1991. Procedure for quantitatively comparing the syntactic coverage of English grammars. Proceedings of the workshop on Speech and Natural Language, pages 306–311. L. Carlson, D. Marcu, and M.E. Okurowski. 2001. Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. Proceedings of the Second SIGdial Workshop on Discourse and Dialogue-Volume 16, pages 1–10. D. Chen, Q. He, and X. Wang. 2007. On linear separability of data sets in feature space. Neurocomputing, 70(13-15):2441–2448. M. Collins. 2003. Head-Driven Statistical Models for Natural Language Parsing. Computational Linguistics, 29(4):589–637. K. Crammer and Y. Singer. 2002. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292. H. Hernault, P. Piwek, H. Prendinger, and M. Ishizuka. 2008. Generating dialogues for virtual agents using nested textual coherence relations. Proceedings of the 8th International Conference on Intelligent Virtual Agents (IVA’08), LNAI, 5208:139–145, Sept. S. Kirkpatrick, CD Gelatt, and MP Vecchi. 1983. Optimization by Simulated Annealing. Science, 220(4598):671–680. H. LeThanh, G. Abeysinghe, and C. Huyck. 2004. Generating discourse structures for written texts. Proceedings of the 20th international conference on Computational Linguistics. D.M. Magerman. 1995. Statistical decision-tree models for parsing. Proceedings of the 33rd annual meeting on Association for Computational Linguistics, pages 276–283. W.C. Mann and S.A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. D. Marcu. 1996. Building Up Rhetorical Structure Trees. Proceedings of the National Conference on Artificial Intelligence, pages 1069–1074. D. Marcu. 2000. The theory and practice of discourse parsing and summarization. MIT Press. J. Oberlander, J.D. Moore, J. Oberlander, A. Knott, and J. Moore. 1999. Cue phrases in discourse: further evidence for the core: contributor distinction. Proceedings of the 1999 Levels of Representation in Discourse Workshop (LORID’99), pages 87–93. P. Piwek, H. Hernault, H. Prendinger, and M. Ishizuka. 2007. Generating dialogues between virtual agents automatically from text. 
Proceedings of the 7th International Conference on Intelligent Virtual Agents (IVA ’07), LNCS, 4722:161. D. Reitter. 2003a. Rhetorical Analysis with RichFeature Support Vector Models. Unpublished Master’s thesis, University of Potsdam, Potsdam, Germany. D. Reitter. 2003b. Simple Signals for Complex Rhetorics: On Rhetorical Analysis with RichFeature Support Vector Models. Language, 18(52). F. Schilder. 2002. Robust discourse parsing via discourse markers, topicality and position. Natural Language Engineering, 8(2-3):235–255. B. Scholkopf, C. Burges, and V. Vapnik. 1995. Extracting Support Data for a Given Task. Knowledge Discovery and Data Mining, pages 252–257. R. Soricut and D. Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, 1:149–156. C. Staelin. 2003. Parameter selection for support vector machines. Hewlett-Packard Company, Tech. Rep. HPL-2002-354R1. V.N. Vapnik. 1995. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA. 673
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 674–682, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Genre distinctions for Discourse in the Penn TreeBank Bonnie Webber School of Informatics University of Edinburgh Edinburgh EH8 9LW, UK [email protected] Abstract Articles in the Penn TreeBank were identified as being reviews, summaries, letters to the editor, news reportage, corrections, wit and short verse, or quarterly profit reports. All but the latter three were then characterised in terms of features manually annotated in the Penn Discourse TreeBank — discourse connectives and their senses. Summaries turned out to display very different discourse features than the other three genres. Letters also appeared to have some different features. The two main findings involve (1) differences between genres in the senses associated with intra-sentential discourse connectives, inter-sentential discourse connectives and inter-sentential discourse relations that are not lexically marked; and (2) differences within all four genres between the senses of discourse relations not lexically marked and those that are marked. The first finding means that genre should be made a factor in automated sense labelling of non-lexically marked discourse relations. The second means that lexically marked relations provide a poor model for automated sense labelling of relations that are not lexically marked. 1 Introduction It is well-known that texts differ from each other in a variety of ways, including their topic, the reading level of their intended audience, and their intended purpose (eg, to instruct, to inform, to express an opinion, to summarize, to take issue with or disagree, to correct, to entertain, etc.). This paper considers differences in texts in the wellknown Penn TreeBank (hereafter, PTB) and in particular, how these differences show up in the Penn Discourse TreeBank (Prasad et al., 2008). It first describes ways in which texts can vary (Section 2). It then illustrates the variety of texts to be found in the the PTB and suggests their grouping into four broad genres (Section 3). After a brief introduction to the Penn Discourse TreeBank (hereafter, PDTB) in Section 4, Sections 5 and 6 show that these four genres display differences in connective frequency and in terms of the senses associated with intra-sentential connectives (eg, subordinating conjunctions), inter-sentential connectives (eg, inter-sentential coordinating conjunctions) and those inter-sentential relations that are not lexically marked. Section 7 considers recent efforts to induce effective procedures for automated sense labelling of discourse relations that are not lexically marked (Elwell and Baldridge, 2008; Marcu and Echihabi, 2002; Pitler et al., 2009; Wellner and Pustejovsky, 2007; Wellner, 2008). It makes two points. First, because genres differ from each other in the senses associated with such relations, genre should be made a factor in their automated sense labelling. Secondly, because different senses are being conveyed when a relation is lexically marked than when it isn’t, lexically marked relations provide a poor model for automated sense labelling of relations that are not lexically marked. 2 Two Perspectives on Genre The dimension of text variation of interest here is genre, which can be viewed externally, in terms of the communicative purpose of a text (Swales, 1990), or internally, in terms of features common to texts sharing a communicative purpose. 
(Kessler et al., 1997) combine these views by saying that a genre should not be so broad that the texts belonging to it don’t share any distinguishing properties — ...we would probably not use the term “genre” to describe merely the class of 674 texts that have the objective of persuading someone to do something, since that class – which would include editorials, sermons, prayers, advertisements, and so forth – has no distinguishing formal properties (Kessler et al., 1997, p. 33). A balanced corpus like the Brown Corpus of American English or the British National Corpus, will sample texts from different genres, to give a representative view of how the language is used. For example, the fifteen categories of published material sampled for the Brown Corpus include PRESS REPORTAGE, PRESS EDITORIALS, PRESS REVIEWS and five different types of FICTION. In contrast, experiments on what genres would be helpful in web search for particular types of information on a topic led (Rosso, 2008), to 18 class labels that his subjects could reliably apply to web pages (here, ones from an .edu domain) with over 50% agreement. These class labels included ARTICLE, COURSE DESCRIPTION, COURSE LIST, DIARY, WEBLOG OR BLOG, FAQ/HELP and FORM. In both Brown’s published material and Rosso’s web pages, the selected class labels (genres) reflect external purpose rather than distinctive internal features. Such features are, however, of great interest in both text analysis and text processing. Text analysts have shown that there are indeed interesting features that correlate more strongly with certain genres than with others. For example, (Biber, 1986) considered 41 linguistic features previously mentioned in the literature, including type/token ratio, average word length, and such frequencies as that of particular words (eg, I/you, it, the proverb do), particular word types (eg, place adverbs, hedges), particular parts-of-speech (eg, past tense verbs, adjectives), and particular syntactic constructions (eg, that-clauses, if-clauses, reduced relative clauses). He found certain clusters of these features (i.e. their presense or absense) correlated well with certain text types. For example, press reportage scored the highest with respect to high frequency of that-clauses and contractions, and low type-token ratio (i.e. a varied vocabulary for a given length of text), while general and romantic fiction scored much lower on these features. (Biber, 2003) showed significant differences in the internal structure of noun phrases used in fiction, news, academic writing and face-to-face conversations. Such features are of similar interest in text processing – in particular, automated genre classification (Dewdney et al., 2001; Finn and Kushmerick, 2006; Kessler et al., 1997; Stamatatos et al., 2000; Wolters and Kirsten, 1999) – which relies on there being reliably detectable features that can be used to distinguish one class from another. This is where the caveat from (Kessler et al., 1997) becomes relevant: A particular genre shouldn’t be taken so broadly as to have no distinguishing features, nor so narrowly as to have no general applicability. But this still allows variability in what is taken to be a genre. There is no one “right set”. 
3 Genre in the Penn TreeBank Although the files in the Penn TreeBank (PTB) lack any classificatory meta-data, leading the PTB to be treated as a single homogeneous collection of “news articles”, researchers who have manually examined it in detail have noted that it includes a variety of “financial reports, general interest stories, business-related news, cultural reviews, editorials and letters to the editor” (Carlson et al., 2002, p. 7). To date, ignoring this variety hasn’t really mattered since the PTB has primarily been used in developing word-level and sentence-level tools for automated language analysis such as widecoverage part-of-speech taggers, robust parsers and statistical sentence generators. Any genrerelated differences in word usage and/or syntax have just meant a wider variety of words and sentences shaping the covereage of these tools. However, ignoring this variety may actually hinder the development of robust language technology for analysing and/or generating multi-sentence text. As such, it is worth considering genre in the PTB, since doing so can allow texts from different genres to be weighted differently when tools are being developed. This is a start on such an undertaking. In lieu of any informative meta-data in the PTB files1, I looked at line-level patterns in the 2159 files that make up the Penn Discourse TreeBank subset of the PTB, and then manually confirmed the text types I found.2 The resulting set includes all the 1Subsequent to this paper, I discovered that the TIPSTER Collection (LDC Catalog entry LDC93T3B) contains a small amount of meta-data that can be projected onto the PTB files, to refine the semi-automatic, manually-verified analysis done here. This work is now in progress. 2Similar patterns can also be found among the 153 files in 675 genres noted by Carlson et al. (2002) and others as well: 1. Op-Ed pieces and reviews ending with a byline (73 files): wsj 0071, wsj 0087, wsj 0108, wsj 0186, wsj 0207, wsj 0239, wsj 0257, etc. 2. Sourced articles from another newspaper or magazine (8 files): wsj 1453, wsj 1569, wsj 1623, wsj 1635, wsj 1809, wsj 1970, wsj 2017, wsj 2153 3. Editorials and other reviews, similar to the above, but lacking a by-line or source (11 files): wsj 0039, wsj 0456, wsj 0765, wsj 0794, wsj 0819, wsj 0972, wsj 1259 wsj 1315, etc. 4. Essays on topics commemorating the WSJ’s centennial (12 files): wsj 0022, wsj 0339, wsj 0406, wsj 0676, wsj 0933, 2sj 1164, etc. 5. Daily summaries of offerings and pricings in U.S. and non-U.S. capital markets (13 files): wsj 0125, wsj 0271, wsj 0476, wsj 0612, wsj 0704, wsj 1001, wsj 1161, wsj 1312, wsj 1441, etc. 6. Daily summaries of financially significant events, ending with a summary of the day’s market figures (14 files): wsj 0178, wsj 0350, wsj 0493, wsj 0675, wsj 1043, wsj 1217, etc. 7. Daily summaries of interest rates (12 files): wsj 0219, wsj 0457, wsj 0602, wsj 0986, etc. 8. Summaries of recent SEC filings (4 files): wsj 0599, wsj 0770, wsj 1156, wsj 1247 9. Weekly market summaries (12 files): wsj 0137, wsj 0231, wsj 0374, wsj 0586, wsj 1015, wsj 1187, wsj 1337, wsj 1505, wsj 1723, etc. 10. Letters to the editor (49 files3): wsj 0091, wsj 0094, wsj 0095, wsj 0266, wsj 0268, wsj 0360, wsj 0411, wsj 0433, wsj 0508, wsj 0687, etc. 11. Corrections (24 files): wsj 0104, wsj 0200, wsj 0211, wsj 0410, wsj 0603, wsj 0605, etc. 12. Wit and short verse (14 files): wsj 0139, wsj 0312, wsj 0594, wsj 0403, wsj 0757, etc. 13. 
Quarterly profit reports – introductory paragraphs alone (11 files): wsj 0190, wsj 0364, wsj 0511, wsj 0696, wsj 1056, wsj 1228, etc. the Penn TreeBank that aren’t included in the PDTB. However, such files were excluded so that all further analyses could be carried out on the same set of files. 3The relation between letters and files is not one-to-one: 13 (26.5%) of these files contain between two and six letters. This is relevant at the end of this section when considering length as a potentially distinguishing feature of a text. 14. News reports (1902 files) A complete listing of these classes can be found in an electronic appendix to this article at the PDTB home page (http://www.seas.upenn.edu/˜pdtb). In order to consider discourse-level features distinctive to genres within the PTB, I have ignored, for the time being, both CORRECTIONS and WIT AND SHORT VERSE since they are so obviously different from the other texts, and also QUARTERLY PROFIT REPORTS, since they turn out to be multiple simply copies of the same text because the distinguishing company listings have been omitted. The remaining eleven classes have been aggregated into four broad genres: ESSAYS (104 files, classes 1-4), SUMMARIES (55 files, classes 5-9), LETTERS (49 files, class 10) and NEWS (1902 files, class 14). The latter corresponds to the Brown Corpus class PRESS REPORTAGE and the class NEWS in the New York Times annotated corpus (Evan Sandhaus, 2008), excluding CORRECTIONS and OBITUARIES. The LETTERS class here corresponds to the NYT class OPINION/LETTERS, while ESSAYS here spans both Brown Corpus classes PRESS REVIEWS and PRESS EDITORIALS, and the NYT corpus classes OPINION/EDITORIALS, OPINION/OPED, FEATURES/XXX/COLUMNS and FEATURES/XXX/REVIEWS, where XXX ranges over Arts, Books, Dining and Wine, Movies, Style, etc. The class called SUMMARIES has no corresponding class in Brown. In the NYT Corpus, it corresponds to those articles whose taxonomic classifiers field is NEWS/BUSINESS and whose types of material field is SCHEDULE. There are two things to note here. First, no claim is being made that these are the only classes to be found in the PTB. For example, the class labelled NEWS contains a subset of 80 short (1-3 sentence) articles announcing personnel changes – eg, promotions, appointments to supervisory boards, etc. (eg, wsj 0001, wsj 0014, wsj 0066, wsj 0069, wsj 0218, etc.) I have not looked for more specific classes because even classes at this level of specificity show that ignoring genrespecific discourse features can hinder the development of robust language technology for either analysing or generating multi-sentence text. Secondly, no claim is being made that the four selected classes comprise the “right” set of genres for future use of the PTB for discourse-related 676 language technology, just that some sensitivity to genre will lead to better performance. Some simple differences between the four broad genre can be seen in Figure 1, in terms of the average length of a file in words, sentences or paragraphs4, and the average number of sentences per paragraph. Figure 1 shows that essays are, on average, longer than texts from the other three classes, and have longer paragraphs. The relevance of the latter will become clear in the next section, when I describe PDTB annotation as background for genre differences related to this annotation. 
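The per-genre averages reported in Figure 1 amount to a simple aggregation once word, sentence and paragraph counts are available for each file. The sketch below shows one way to compute them; the data layout is illustrative and is not the format of the PTB or PDTB distributions.

```python
# Sketch of the aggregation behind Figure 1.
def genre_averages(files_by_genre):
    """files_by_genre: {genre: [(n_words, n_sentences, n_paragraphs), ...]}"""
    rows = {}
    for genre, files in files_by_genre.items():
        n = len(files)
        words = sum(f[0] for f in files)
        sents = sum(f[1] for f in files)
        paras = sum(f[2] for f in files)
        rows[genre] = {
            "files": n,
            "avg_words_per_file": words / n,
            "avg_sentences_per_file": sents / n,
            "avg_paragraphs_per_file": paras / n,
            "avg_sentences_per_paragraph": sents / paras,
        }
    return rows
```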
4 The Penn Discourse TreeBank Genre differences at the level of discourse in the PTB can be seen in the manual annotations of the Penn Discourse TreeBank (Prasad et al., 2008). There are several elements to PDTB annotation. First, the PDTB annotates the arguments of explicit discourse connectives: (1) Even so, according to Mr. Salmore, the ad was ”devastating” because it raised questions about Mr. Courter’s credibility. But it’s building on a long tradition. (0041) Here, the explicit connective (“but”) is underlined. Its first argument, ARG1, is shown in italics and its second, ARG2, in boldface. The number 0041 indicates that the example comes from subsection wsj 0041 of the PTB. Secondly, the PDTB annotates implicit discourse relations between adjacent sentences within the same paragraph, where the second does not contain an explicit inter-sentential connective: (2) The projects already under construction will increase Las Vegas’s supply of hotel rooms by 11,795, or nearly 20%, to 75,500. [Implicit “so”] By a rule of thumb of 1.5 new jobs for each new hotel room, Clark County will have nearly 18,000 new jobs. (0994) With implicit discourse relations, annotators were asked to identify one or more explicit connectives that could be inserted to lexicalize the relation between the arguments. Here, they have been identified as the connective “so”. Where annotators could not identify such an implicit connective, they were asked if they could identify a non-connective phrase in ARG2 (e.g. 4A file usually contains a single article, except (as noted earlier) files in the class LETTERS, which may contain more than one letter. “this means”) that realised the implicit discourse relation instead (ALTLEX), or a relation holding between the second sentence and an entity mentioned in the first (ENTREL), rather than the interpretation of the previous sentence itself: (3) Rated triple-A by Moody’s and S&P, the issue will be sold through First Boston Corp. The issue is backed by a 12% letter of credit from Credit Suisse. If the annotators couldn’t identify either, they would assert that no discourse relation held between the adjacent sentences (NOREL). Note that because resource limitations meant that implicit discourse relations (comprising implicit connectives, ALTLEX, ENTREL and NOREL) were only annotated within paragraphs, longer paragraphs (as there were in ESSAYS) could potentially mean more implicit discourse relations were annotated. The third element of PDTB annotation is that of the senses of connectives, both explicit and implicit. These have been manually annotated using the three-level sense hierarchy described in detail in (Miltsakaki et al., 2008). Briefly, there are four top-level classes: • TEMPORAL, where the situations described in the arguments are related temporally; • CONTINGENCY, where the situation described in one argument causally influences that described in the other; • COMPARISON, used to highlight some prominent difference that holds between the situations described in the two arguments; • EXPANSION, where one argument expands the situation described in the other and moves the narrative or exposition forward. TEMPORAL relations can be further specified to ASYNCHRONOUS and SYNCHRONOUS, depending on whether or not the situations described by the arguments are temporally ordered. CONTINGENCY can be further specified to CAUSE and CONDITION, depending on whether or not the existential status of the arguments depends on the connective (i.e. no for CAUSE, and yes for CONDITION). 
COMPARISON can be further specified to CONTRAST, where the two arguments share a predicate or property whose difference is being highlighted, and CONCESSION, where “the highlighted differences are related to expectations raised by one 677 Total Total Total Total Avg. words Avg. sentences Avg. ¶s Avg. sentences Genre files paragraphs sentences words per file per file per file per ¶ ESSAYS 104 1580 4774 98376 945.92 45.9 15.2 3.02 SUMMARIES 55 1047 2118 37604 683.71 38.5 19.1 2.02 LETTERS 49 339 739 15613 318.63 15.1 7.1 2.14 NEWS 1902 18437 40095 837367 440.26 21.1 9.7 2.17 Figure 1: Distribution of Words, Sentences and Paragraphs by Genre (¶ stands for “paragraph”.) argument which are then denied by the other” (Miltsakaki et al., 2008, p.282). Finally, EXPANSION has six subtypese, including CONJUNCTION, where the situation described in ARG2, provides new information related to the situation described in ARG1; RESTATEMENT, where ARG2 restates or redescribes the situation described in ARG1; and ALTERNATIVE, where the two arguments evoke situations taken to be alternatives. These two levels are sufficient to show significant differences between genres. The only other thing to note is that annotators could be as specific as they chose in annotating the sense of a connective: If they could not decide on the specific type of COMPARISON holding between the two arguments of a connective, or they felt that both subtypes of COMPARISON were being expressed, they could simply sense annotate the connective with the label COMPARISON. I will comment on this in Section 6. The fourth element of PDTB annotation is attribution (Prasad et al., 2007; Prasad et al., 2008). This was not considered in the current analysis, although here too, genre-related differences are likely. 5 Connective Frequency by Genre The analysis that follows distinguishes between two kinds of relations associated with explicit connectives in the PDTB: (1) intra-sentential discourse relations, which hold between clauses within the same sentence and are associated with subordinating conjunctions, intra-sentential coordinating conjunctions, and discourse adverbials whose arguments occur within the same sentence5); and (2) explicit inter-sentential discourse relations, which hold across sentences and are associated with explicit inter-sentential connectives (inter-sentential coordinating conjunctions and discourse adverbials whose arguments are not 5Limited resources meant that intra-sentential discourse relations associated with subordinators like “in order to” and “so that” or with free adjuncts were not annotated in the PDTB. in the same sentence). It is the latter that are effectively in complementary distribution with implicit discourse relations in the PDTB6, and Figures 2 and 3 show their distribution across the four genres.7 Figure 2 shows that among explicit inter-sentential connectives, S-initial coordinating conjunctions (“And”, “Or” and “But”) are a feature of ESSAYS, SUMMARIES and NEWS but not of LETTERS. LETTERS are written by members of the public, not by the journalists or editors working for the Wall Street Journal. This suggests that the use of S-initial coordinating conjunctions is an element of Wall Street Journal “house style”, as opposed to a common feature of modern writing. Figure 3 shows several things about the different patterning across genres of implicit discourse relations (Columns 4–7 for implicit connectives, ALTLEX, ENTREL and NOREL) and explicit inter-sentential connectives (Column 3). 
First, SUMMARIES are distinctive in two ways: While the ratio of implicit connectives to explicit inter-sentential connectives is around 3:1 in the other three genres, for SUMMARIES it is around 4:1 – there are just many fewer explicit intersentential connectives. Secondly, while the ratio of ENTREL relations to implicit connectives ranges from 0.19 to 0.32 in the other three genres, in SUMMARIES, ENTREL predominates (as in Example 3 from one of the daily summaries of offerings and pricings). In fact, there are nearly as 6This is not quite true for two reasons — first, because the first argument of a discourse adverbial is not restricted to the immediately adjacent sentence and secondly, because a sentence can have both an initial coordinating conjunction and a discourse adverbial, as in “So, for example, he’ll eat tofu with fried pork rinds.” But it’s a reasonable first approximation. 7Although annotated in the PDTB, throughout this paper I have ignored the S-medial discourse adverbial also, as in “John also eats fish”, since such instances are better regarded as presuppositional. That is, as well as a textual antecedent, they can be licensed through inference (e.g. “John claims to be a vegetarian, but he also eats fish.”) or accommodated by listeners with respect to the spatio-temporal context (e.g. Watching John dig into a bowl of tofu, one might remark “Don’t worry. He also eats fish.”) The other discourse adverbials annotated in the PDTB do not have this property. 678 Total Explicit Density of Explicit S-initial S-initial S-medial Total Inter-Sentential Inter-Sentential Coordinating Discourse Inter-Sentential Genre Sentences Connectives Connectives/Sentence Conjunctions Adverbials Disc Advs ESSAYS 4774 691 0.145 334 (48.3%) 244 (35.3%) 113 (16.4%) SUMMARIES 2118 95 0.045 46 (48.4%) 39 (41.1%) 10 (10.5%) LETTERS 739 85 0.115 26 (30.6%) 37 (43.5%) 18 (21.2%) NEWS 40095 4709 0.117 2389 (50.7%) 1610 (34.2%) 718 (15.3%) Figure 2: Distribution of Explicit Inter-Sentential Connectives. Total Total Explicit Inter-Sentential Inter-Sentential Implicit Genre Discourse Rels Connectives Connectives ENTREL ALTLEX NOREL ESSAYS 3302 691 (20.9%) 2112 (64.0%) 397 (12.0%) 86 (2.6%) 16 (0.5%) SUMMARIES 916 95 (10.4%) 363 (39.6%) 434 (47.4%) 12 (1.3%) 12 (1.3%) LETTERS 433 85 (19.6%) 267 (61.7%) 58 (13.4%) 22 (5.1%) 1 (0.2%) NEWS 23017 4709 (20.5%) 13287 (57.7%) 4293 (18.7%) 504 (2.2%) 224 (1%) Figure 3: Distribution of Inter-Sentential Discourse Relations, including Explicits from Figure 2. many ENTREL relations in summaries as the total of explicit and implicit connectives combined. Finally, it is possible that the higher frequency of alternative lexicalizations of discourse connectives (ALTLEX) in LETTERS than in the other three genres means that they are not part of Wall Street Journal “house style”. (Other elements of WSJ “house style” – or possibly, news style in general – are observable in the significantly higher frequency of direct and indirect quotations in news than in the other three genres. This property is not discussed further here, but is worth investigating in the future.) With respect to explicit intra-sentential connectives, the main point of interest in Figure 4 is that SUMMARIES display a significantly lower density of intra-sentential connectives overall than the other three genres, as well as a significantly lower relative frequency of intra-sentential discourse adverbials. 
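The counts, densities and ratios discussed in this section (Figures 2-4) are likewise simple aggregations over the annotated relations. A sketch is shown below; the record layout, with explicit connectives split into inter- and intra-sentential types, is an illustrative assumption rather than the PDTB's actual file format.

```python
# Sketch of the per-genre aggregation behind Figures 2-4.
from collections import Counter, defaultdict

def genre_distributions(relations, sentences_per_genre):
    """relations: iterable of dicts such as
    {"genre": "ESSAYS", "type": "Implicit"}, with types drawn from
    {"Explicit-inter", "Explicit-intra", "Implicit", "EntRel", "AltLex", "NoRel"}."""
    counts = defaultdict(Counter)
    for rel in relations:
        counts[rel["genre"]][rel["type"]] += 1
    report = {}
    for genre, c in counts.items():
        explicit_inter = c["Explicit-inter"]
        implicit = c["Implicit"]
        report[genre] = {
            "explicit_inter_per_sentence":
                explicit_inter / sentences_per_genre[genre],
            "implicit_to_explicit_inter":
                implicit / explicit_inter if explicit_inter else None,
            "entrel_to_implicit":
                c["EntRel"] / implicit if implicit else None,
        }
    return report
```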
As the next section will show, these intra-sentential connectives, while few, are selected most often to express CONTRAST and situations changing over time, reflecting the nature of SUMMARIES as regular periodic summaries of a changing world. 6 Connective Sense by Genre (Pitler et al., 2008) show a difference across Level 1 senses (COMPARISON, CONTINGENCY, TEMPORAL and EXPANSION) in the PDTB in terms of their tendency to be realised by explicit connectives (a tendency of COMPARISON and TEMPORAL relations) or by Implicit Connectives (a tendency of CONTINGENCY and EXPANSION). Here I show differences (focussing on Level 2 senses, which are more informative) in their frequency of occurance in the four genres, by type of connective: explicit intra-sentential connectives (Figure 5), explicit inter-sentential connectives (Figure 6), and implicit inter-sentential connectives (Figure 7). SUMMARIES and LETTERS are each distinctly different from ESSAYS and NEWS with respect to each type of connective. One difference in sense annotation across the four genres harkens back to a comment made in Section 4 – that annotators could be as specific as they chose in annotating the sense of a connective. If they could not decide between specific level n+1 labels for the sense of a connective, they could simply assign it a level n label. It is perhaps suggestive then of the relative complexity of ESSAYS and LETTERS, as compared to NEWS, that the top-level label COMPARISON was used approximately twice as often in labelling explicit inter-sentential connectives in ESSAYS (7.2%) and LETTERS (9.4%) than in news (4.3%). (The toplevel labels EXPANSION, TEMPORAL and CONTINGENCY were used far less often, as to be simply noise.) In any case, this aspect of readability may be worth further investigation (Pitler and Nenkova, 2008). 7 Automated Sense Labelling of Discourse Connectives The focus here is on automated sense labelling of discourse connectives (Elwell and Baldridge, 2008; Marcu and Echihabi, 2002; Pitler et al., 2009; Wellner and Pustejovsky, 2007; Wellner, 679 Total Density of Intra-Sentential Intra-Sentential Total Intra-Sentential Intra-Sentential Subordinating Coordinating Discourse Genre Sentences Connectives Connectives/Sentence Conjunctions Conjunctions Adverbials ESSAYS 4774 1397 0.293 808 (57.8%) 438 (31.4%) 151 (10.8%) SUMMARIES 2118 275 0.130 166 (60.4%) 99 (36.0%) 10 (3.6%) LETTERS 739 200 0.271 126 (63.0%) 56 (28.0%) 18 (9.0%) NEWS 40095 9336 0.233 5514 (59.1%) 3015 (32.3%) 807 (8.6%) Figure 4: Distribution of Explicit Intra-Sentential Connectives. 
Relation Essays Summaries Letters News Expansion.Conjunction 253 (18.1%) 50 (18.2%) 31 (15.5%) 1907 (20.4%) Contingency.Cause 208 (14.9%) 37 (13.5%) 32 (16%) 1354 (14.5%) Contingency.Condition 205 (14.7%) 15 (5.5%) 22 (11%) 1082 (11.6%) Temporal.Asynchronous 187 (13.4%) 54 (19.6%) 19 (9.5%) 1444 (15.5%) Comparison.Contrast 187 (13.4%) 56 (20.4%) 29 (14.5%) 1416 (15.2%) Temporal.Synchrony 165 (11.8%) 32 (11.6%) 27 (13.5%) 1061 (11.4%) Total 1397 275 200 9336 Figure 5: Explicit Intra-Sentential Connectives: Most common Level 2 Senses Relation Essays Summaries Letters News Comparison.Contrast 231 (33.4%) 47 (49.5%) 20 (23.5%) 1853 (39.4%) Expansion.Conjunction 156 (22.6%) 24 (25.3%) 20 (23.5%) 1144 (24.3%) Comparison.Concession 75 (10.9%) 11 (11.6%) 5 (5.9%) 462 (9.8%) Comparison 50 (7.2%) – 8 (9.4%) 204 (4.3%) Temporal.Asynchronous 40 (5.8%) 1 (1.1%) 5 (5.8%) 265 (5.6%) Expansion.Instantiation 37 (5.4%) 3 (3.2%) 3 (3.5%) 236 (5.0%) Contingency.Cause 32 (4.6%) 1 (1.1%) 12 (14.1%) 136 (2.9%) Expansion.Restatement 27 (3.9%) – 6 (7.1%) 93 (2.0%) Total 691 95 85 4709 Figure 6: Explicit Inter-Sentential Connectives: Most common Level 2 Senses Relation Essays Summaries Letters News Contingency.Cause 577 (27.3%) 70 (19.28%) 75 (28.1%) 3389 (25.5%) Expansion.Restatement 395 (18.7%) 62 (17.07%) 55 (20.6%) 2591 (19.5%) Expansion.Conjunction 362 (17.1%) 126 (34.7%) 40 (15.0%) 2908 (21.9%) Comparison.Contrast 254 (12.0%) 53 (14.60%) 42 (15.7%) 1704 (12.8%) Expansion.Instantiation 211 (10.0%) 18 (4.96%) 14 (5.2%) 1152 (8.7%) Temporal.Asynchronous 110 (5.2%) 7 (1.93%) 6 (2.3%) 524 (3.9%) Total 2112 363 267 13287 Figure 7: Implicit Connectives: Most common Level 2 Senses Essays Summaries Relation: Implicit Inter-Sent Intra-Sent Implicit Inter-Sent Intra-Sent Contingency.Cause 577 (27.3%) 32 (4.6%) 208 (14.9%) 70 (19.28%) 1 (1.1%) 37 (13.5%) Expansion.Restatement 395 (18.7%) 27 (3.9%) 4 (0.3%) 62 (17.07%) – – Expansion.Conjunction 362 (17.1%) 156 (22.6%) 253 (18.1%) 126 (34.7%) 24 (25.3%) 50 (18.2%) Comparison.Contrast 254 (12.0%) 231 (33.4%) 187 (13.4%) 53 (14.60%) 47 (49.5%) 56 (20.4%) Expansion.Instantiation 211 (10.0%) 37 (5.4%) 5 (0.3%) 18 (5.0%) 3 (3.2%) – Total: 2112 691 1397 363 95 275 Figure 8: Essays and Summaries: Connective sense frequency 680 Letters News Relation: Implicit Inter-Sent Intra-Sent Implicit Inter-Sent Intra-Sent Contingency.Cause 75 (28.1%) 12 (14.1%) 32 (16%) 3389 (25.5%) 136 (2.9%) 1354 (14.5%) Expansion.Restatement 55 (20.6%) 6 (7.1%) 4 (2%) 2591 (19.5%) 93 (2.0%) 20 (0.2%) Expansion.Conjunction 40 (15.0%) 20 (23.5%) 31 (15.5%) 2908 (21.9%) 1144 (24.3%) 1907 (20.4%) Comparison.Contrast 42 (15.7%) 20 (23.5%) 29 (14.5%) 1704 (12.8%) 1853 (39.4%) 1416 (15.2%) Expansion.Instantiation 14 (5.2%) 3 (3.5%) – 1152 (8.7%) 236 (5.0%) 18 (0.2%) Total 267 85 200 13287 4709 9336 Figure 9: Letters and News: Connective sense frequency 2008). There are two points to make. First, Figure 7 provides evidence (in terms of differences between genres in the senses associated with intersentential discourse relations that are not lexically marked) for taking genre as a factor in automated sense labelling of those relations. Secondly, Figures 8 and 9 summarize Figures 5, 6 and 7 with respect to the five senses that occur most frequently in the four genre with discourse relations that are not lexically marked, covering between 84% and 91% of those relations. 
These Figures show that, no matter what genre one considers, different senses tend to be expressed with (explicit) intra-sentential connectives, with explicit inter-sentential connectives and with implicit connectives. This means that lexically marked relations provide a poor model for automated sense labelling of relations that are not lexically marked. This is new evidence for the suggestion (Sporleder and Lascarides, 2008) that intrinsic differences between explicit and implicit discourse relations mean that the latter have to be learned independently of the former. 8 Conclusion This paper has, for the first time, provided genre information about the articles in the Penn TreeBank. It has characterised each genre in terms of features manually annotated in the Penn Discourse TreeBank, and used this to show that genre should be made a factor in automated sense labelling of discourse relations that are not explicitly marked. There are clearly other potential differences that one might usefully investigate: For example, following (Pitler et al., 2008), one might look at whether connectives with multiple senses occur with only one of those senses (or mainly so) in a particular genre. Or one might investigate how patterns of attribution vary in different genres, since this is relevant to subjectivity in text. Other aspects of genre may be even more significant for language technology. For example, whereas the first sentence of a news article might be an effective summary of its contents – e.g. (4) Singer Bette Midler won a $400,000 federal court jury verdict against Young & Rubicam in a case that threatens a popular advertising industry practice of using “sound-alike” performers to tout products. (wsj 0485) it might be less so in the case of an essay, even one of about the same length – e.g. (5) On June 30, a major part of our trade deficit went poof! (wsj 0447) Of course, to exploit these differences, it is important to be able to automatically identify what genre or genres a text belongs to. Fortunately, there is a growing body of work on genre-based text classification, including (Dewdney et al., 2001; Finn and Kushmerick, 2006; Kessler et al., 1997; Stamatatos et al., 2000; Wolters and Kirsten, 1999). Of particular interest in this regard is whether other news corpora, such as the New York Times Annotated Corpus (Linguistics Data Consortium Catalog Number: LDC2008T19) manifest similar properties to the WSJ in their different genres. If so, then genre-specific extrapolation from the WSJ Corpus may enable better performance on a wider range of corpora. Acknowledgments I thank my three anonymous reviewers for their useful comments. Additional thoughtful comments came from Mark Steedman, Alan Lee, Rashmi Prasad and Ani Nenkova. References Douglas Biber. 1986. Spoken and written textual dimensions in english. Language, 62(2):384–414. Douglas Biber. 2003. Compressed noun-phrase structures in newspaper discourse. In Jean Aitchison and Diana Lewis, editors, New Media Language, pages 169–181. Routledge. 681 Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2002. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the 2nd SIGdial Workshop on Discourse and Dialogue, Aalborg, Denmark. Nigel Dewdney, Carol VanEss-Dykema, and Richard MacMillan. 2001. The form is the substance: classification of genres in text. In Proceedings of the Workshop on Human Language Technology and Knowledge Management, pages 1–8. Robert Elwell and Jason Baldridge. 2008. 
Discourse connective argument identication with connective specic rankers. In Proceedings of the IEEE Conference on Semantic Computing. Evan Sandhaus. 2008. New york times corpus: Corpus overview. Provided with the corpus, LDC catalogue entry LDC2008T19. Aidan Finn and Nicholas Kushmerick. 2006. Learning to classify documents according to genre. Journal of the American Society for Information Science and Technology, 57. Brett Kessler, Geoffrey Numberg, and Hinrich Sch¨utze. 1997. Automatic detection of text genre. In Proceedings of the 35th Annual Meeting of the ACL, pages 32–38. Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the Association for Computational Linguistics. Eleni Miltsakaki, Livio Robaldo, Alan Lee, and Aravind Joshi. 2008. Sense annotation in the penn discourse treebank. In Computational Linguistics and Intelligent Text Processing, pages 275–286. Springer. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of EMNLP. Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In Proceedings of COLING, Manchester. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of ACL-IJCNLP, Singapore. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Aravind Joshi, and Bonnie Webber. 2007. Attribution and its annotation in the Penn Discourse TreeBank. TAL (Traitement Automatique des Langues), 42(2). Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In Proceedings, 6th International Conference on Language Resources and Evaluation, Marrakech, Morocco. Mark Rosso. 2008. User-based identification of web genres. J American Society for Information Science and Technology, 59(7):1053–1072. Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: an assessment. Natural Language Engineering, 14(3):369–416. Efstathios Stamatatos, Nikos Fakotakis, and George Kokkinakis. 2000. Text genre detection using common word frequencies. In Proceedings of the 18th Annual Conference of the ACL, pages 808–814. John Swales. 1990. Genre Analysis. Cambridge University Press, Cambridge. Ben Wellner and James Pustejovsky. 2007. Automatically identifying the arguments to discourse connectives. In Proceedings of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP), Prague CZ. Ben Wellner. 2008. Sequence Models and Ranking Methods for Discourse Parsing. Ph.D. thesis, Brandeis University. Maria Wolters and Mathias Kirsten. 1999. Exploring the use of linguistic features in domain and genre classification. In Proceedings of the 9th Meeting of the European Chapter of the Assoc. for Computational Linguistics, pages 142–149, Bergen, Norway. 682
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 683–691, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Automatic sense prediction for implicit discourse relations in text Emily Pitler, Annie Louis, Ani Nenkova Computer and Information Science University of Pennsylvania Philadelphia, PA 19104, USA epitler,lannie,[email protected] Abstract We present a series of experiments on automatically identifying the sense of implicit discourse relations, i.e. relations that are not marked with a discourse connective such as “but” or “because”. We work with a corpus of implicit relations present in newspaper text and report results on a test set that is representative of the naturally occurring distribution of senses. We use several linguistically informed features, including polarity tags, Levin verb classes, length of verb phrases, modality, context, and lexical features. In addition, we revisit past approaches using lexical pairs from unannotated text as features, explain some of their shortcomings and propose modifications. Our best combination of features outperforms the baseline from data intensive approaches by 4% for comparison and 16% for contingency. 1 Introduction Implicit discourse relations abound in text and readers easily recover the sense of such relations during semantic interpretation. But automatic sense prediction for implicit relations is an outstanding challenge in discourse processing. Discourse relations, such as causal and contrast relations, are often marked by explicit discourse connectives (also called cue words) such as “because” or “but”. It is not uncommon, though, for a discourse relation to hold between two text spans without an explicit discourse connective, as the example below demonstrates: (1) The 101-year-old magazine has never had to woo advertisers with quite so much fervor before. [because] It largely rested on its hard-to-fault demographics. In this paper we address the problem of automatic sense prediction for discourse relations in newspaper text. For our experiments, we use the Penn Discourse Treebank, the largest existing corpus of discourse annotations for both implicit and explicit relations. Our work is also informed by the long tradition of data intensive methods that rely on huge amounts of unannotated text rather than on manually tagged corpora (Marcu and Echihabi, 2001; Blair-Goldensohn et al., 2007). In our analysis, we focus only on implicit discourse relations and clearly separate these from explicits. Explicit relations are easy to identify. The most general senses (comparison, contingency, temporal and expansion) can be disambiguated in explicit relations with 93% accuracy based solely on the discourse connective used to signal the relation (Pitler et al., 2008). So reporting results on explicit and implicit relations separately will allow for clearer tracking of progress. In this paper we investigate the effectiveness of various features designed to capture lexical and semantic regularities for identifying the sense of implicit relations. Given two text spans, previous work has used the cross-product of the words in the spans as features. We examine the most informative word pair features and find that they are not the semantically-related pairs that researchers had hoped. We then introduce several other methods capturing the semantics of the spans (polarity features, semantic classes, tense, etc.) and evaluate their effectiveness. 
This is the first study which reports results on classifying naturally occurring implicit relations in text and uses the natural distribution of the various senses. 2 Related Work Experiments on implicit and explicit relations Previous work has dealt with the prediction of discourse relation sense, but often for explicits and at the sentence level. Soricut and Marcu (2003) address the task of 683 parsing discourse structures within the same sentence. They use the RST corpus (Carlson et al., 2001), which contains 385 Wall Street Journal articles annotated following the Rhetorical Structure Theory (Mann and Thompson, 1988). Many of the useful features, syntax in particular, exploit the fact that both arguments of the connective are found in the same sentence. Such features would not be applicable to the analysis of implicit relations that occur intersententially. Wellner et al. (2006) used the GraphBank (Wolf and Gibson, 2005), which contains 105 Associated Press and 30 Wall Street Journal articles annotated with discourse relations. They achieve 81% accuracy in sense disambiguation on this corpus. However, GraphBank annotations do not differentiate between implicits and explicits, so it is difficult to verify success for implicit relations. Experiments on artificial implicits Marcu and Echihabi (2001) proposed a method for cheap acquisition of training data for discourse relation sense prediction. Their idea is to use unambiguous patterns such as [Arg1, but Arg2.] to create synthetic examples of implicit relations. They delete the connective and use [Arg1, Arg2] as an example of an implicit relation. The approach is tested using binary classification between relations on balanced data, a setting very different from that of any realistic application. For example, a question-answering application that needs to identify causal relations (i.e. as in Girju (2003)), must not only differentiate causal relations from comparison relations, but also from expansions, temporal relations, and possibly no relation at all. In addition, using equal numbers of examples of each type can be misleading because the distribution of relations is known to be skewed, with expansions occurring most frequently. Causal and comparison relations, which are most useful for applications, are less frequent. Because of this, the recall of the classification should be the primary metric of success, while the Marcu and Echihabi (2001) experiments report only accuracy. Later work (Blair-Goldensohn et al., 2007; Sporleder and Lascarides, 2008) has discovered that the models learned do not perform as well on implicit relations as one might expect from the test accuracies on synthetic data. 3 Penn Discourse Treebank For our experiments, we use the Penn Discourse Treebank (PDTB; Prasad et al., 2008), the largest available annotated corpora of discourse relations. The PDTB contains discourse annotations over the same 2,312 Wall Street Journal (WSJ) articles as the Penn Treebank. For each explicit discourse connective (such as “but” or “so”), annotators identified the two text spans between which the relation holds and the sense of the relation. The PDTB also provides information about local implicit relations. For each pair of adjacent sentences within the same paragraph, annotators selected the explicit discourse connective which best expressed the relation between the sentences and then assigned a sense to the relation. 
In Example (1) above, the annotators identified “because” as the most appropriate connective between the sentences, and then labeled the implicit discourse relation Contingency. In the PDTB, explicit and implicit relations are clearly distinguished, allowing us to concentrate solely on the implicit relations. As mentioned above, each implicit and explicit relation is annotated with a sense. The senses are arranged in a hierarchy, allowing for annotations as specific as Contingency.Cause.reason. In our experiments, we use only the top level of the sense annotations: Comparison, Contingency, Expansion, and Temporal. Using just these four relations allows us to be theory-neutral; while different frameworks (Hobbs, 1979; McKeown, 1985; Mann and Thompson, 1988; Knott and Sanders, 1998; Asher and Lascarides, 2003) include different relations of varying specificities, all of them include these four core relations, sometimes under different names. Each relation in the PDTB takes two arguments. Example (1) can be seen as the predicate Contingency which takes the two sentences as arguments. For implicits, the span in the first sentence is called Arg1 and the span in the following sentence is called Arg2. 4 Word pair features in prior work Cross product of words Discourse connectives are the most reliable predictors of the semantic sense of the relation (Marcu, 2000; Pitler et al., 2008). However, in the absence of explicit markers, the most easily accessible features are the 684 words in the two text spans of the relation. Intuitively, one would expect that there is some relationship that holds between the words in the two arguments. Consider for example the following sentences: The recent explosion of country funds mirrors the ”closedend fund mania” of the 1920s, Mr. Foot says, when narrowly focused funds grew wildly popular. They fell into oblivion after the 1929 crash. The words “popular” and “oblivion” are almost antonyms, and one might hypothesize that their occurrence in the two text spans is what triggers the contrast relation between the sentences. Similarly, a pair of words such as (rain, rot) might be indicative of a causal relation. If this hypothesis is correct, pairs of words (w1, w2) such that w1 appears in the first sentence and w2 appears in the second sentence would be good features for identifying contrast relations. Indeed, word pairs form the basic feature of most previous work on classifying implicit relations (Marcu and Echihabi, 2001; BlairGoldensohn et al., 2007; Sporleder and Lascarides, 2008) or the simpler task of predicting which connective should be used to express a relation (Lapata and Lascarides, 2004). Semantic relations vs. function word pairs If the hypothesis for word pair triggers of discourse relations were true, the analysis of unambiguous relations can be used to discover pairs of words with causal or contrastive relations holding between them. Yet, feature analysis has not been performed in prior studies to establish or refute this possibility. At the same time, feature selection is always necessary for word pairs, which are numerous and lead to data sparsity problems. Here, we present a meta analysis of the feature selection work in three prior studies. One approach for reducing the number of features follows the hypothesis of semantic relations between words. Marcu and Echihabi (2001) considered only nouns, verbs and and other cue phrases in word pairs. 
They found that even with millions of training examples, prediction results using all words were superior to those based on only pairs of non-function words. However, since the learning curve is steeper when function words were removed, they hypothesize that using only non-function words will outperform using all words once enough training data is available. In a similar vein, Lapata and Lascarides (2004) used pairings of only verbs, nouns and adjectives for predicting which temporal connective is most suitable to express the relation between two given text spans. Verb pairs turned out to be one of the best features, but no useful information was obtained using nouns and adjectives. Blair-Goldensohn et al. (2007) proposed several refinements of the word pair model. They show that (i) stemming, (ii) using a small fixed vocabulary size consisting of only the most frequent stems (which would tend to be dominated by function words) and (iii) a cutoff on the minimum frequency of a feature, all result in improved performance. They also report that filtering stopwords has a negative impact on the results. Given these findings, we expect that pairs of function words are informative features helpful in predicting discourse relation sense. In our work that we describe next, we use feature selection to investigate the word pairs in detail. 5 Analysis of word pair features For the analysis of word pair features, we use a large collection of automatically extracted explicit examples from the experiments in BlairGoldensohn et al. (2007). The data, from now on referred to as TextRels, has explicit contrast and causal relations which were extracted from the English Gigaword Corpus (Graff, 2003) which contains over four million newswire articles. The explicit cue phrase is removed from each example and the spans are treated as belonging to an implicit relation. Besides cause and contrast, the TextRels data include a no-relation category which consists of sentences from the same text that are separated by at least three other sentences. To identify features useful for classifying comparison vs other relations, we chose a random sample of 5000 examples for Contrast and 5000 Other relations (2500 each of Cause and No-relation). For the complete set of 10,000 examples, word pair features were computed. After removing word pairs that appear less than 5 times, the remaining features were ranked by information gain using the MALLET toolkit1. Table 1 lists the word pairs with highest information gain for the Contrast vs. Other and Cause vs. Other classification tasks. All contain very frequent stop words, and interestingly for the Con1mallet.cs.umass.edu 685 trast vs. Other task, most of the word pairs contain discourse connectives. This is certainly unexpected, given that word pairs were formed by deleting the discourse connectives from the sentences expressing Contrast. Word pairs containing “but” as one of their elements in fact signal the presence of a relation that is not Contrast. Consider the example shown below: The government says it has reached most isolated townships by now, but because roads are blocked, getting anything but basic food supplies to people remains difficult. Following Marcu and Echihabi (2001), the pair [The government says it has reached most isolated townships by now, but] and [roads are blocked, getting anything but basic food supplies to people remains difficult.] is created as an example of the Cause relation. 
Because of examples like this, “but-but” is a very useful word pair feature indicating Cause, as the but would have been removed for the artifical Contrast examples. In fact, the top 17 features for classifying Contrast versus Other all contain the word “but”, and are indications that the relation is Other. These findings indicate an unexpected anomalous effect in the use of synthetic data. Since relations are created by removing connectives, if an unambiguous connective remains, its presence is a reliable indicator that the example should be classified as Other. Such features might work well and lead to high accuracy results in identifying synthetic implicit relations, but are unlikely to be useful in a realistic setting of actual implicits. Comparison vs. Other Contingency vs. Other the-but s-but the-in the-and in-the the-of of-but for-but but-but said-said to-of the-a in-but was-but it-but a-and a-the of-the to-but that-but the-it* to-and to-to the-in and-but but-the to-it* and-and the-the in-in a-but he-but said-in to-the of-and a-of said-but they-but of-in in-and in-of s-and Table 1: Word pairs with highest information gain. Also note that the only two features predictive of the comparison class (indicated by * in Table 1): the-it and to-it, contain only function words rather than semantically related nonfunction words. This ranking explains the observations reported in Blair-Goldensohn et al. (2007) where removing stopwords degraded classifier performance and why using only nouns, verbs or adjectives (Marcu and Echihabi, 2001; Lapata and Lascarides, 2004) is not the best option2. 6 Features for sense prediction of implicit discourse relations The contrast between the “popular”/“oblivion” example we started with above can be analyzed in terms of lexical relations (near antonyms), but also could be explained by different polarities of the two words: “popular” is generally a positive word, while “oblivion” has negative connotations. While we agree that the actual words in the arguments are quite useful, we also define several higher-level features corresponding to various semantic properties of the words. The words in the two text spans of a relation are taken from the gold-standard annotations in the PDTB. Polarity Tags: We define features that represent the sentiment of the words in the two spans. Each word’s polarity was assigned according to its entry in the Multi-perspective Question Answering Opinion Corpus (Wilson et al., 2005). In this resource, each sentiment word is annotated as positive, negative, both, or neutral. We use the number of negated and non-negated positive, negative, and neutral sentiment words in the two text spans as features. If a writer refers to something as “nice” in Arg1, that counts towards the positive sentiment count (Arg1Positive); “not nice” would count towards Arg1NegatePositive. A sentiment word is negated if a word with a General Inquirer (Stone et al., 1966) Negate tag precedes it. We also have features for the cross products of these polarities between Arg1 and Arg2. We expected that these features could help Comparison examples especially. Consider the following example: Executives at Time Inc. Magazine Co., a subsidiary of Time Warner, have said the joint venture with Mr. Lang wasn’t a good one. The venture, formed in 1986, was supposed to be Time’s low-cost, safe entry into women’s magazines. The word good is annotated with positive polarity, however it is negated. 
Safe is tagged as having positive polarity, so this opposition could indicate the Comparison relation between the two sentences. Inquirer Tags: To get at the meanings of the spans, we look up what semantic categories each 2In addition, an informal inspection of 100 word pairs with high information gain for Contrast vs. Other (the longest word pairs were chosen, as those are more likely to be content words) found only six semantically opposed pairs. 686 word falls into according to the General Inquirer lexicon (Stone et al., 1966). The General Inquirer has classes for positive and negative polarity, as well as more fine-grained categories such as words related to virtue or vice. The Inquirer even contains a category called “Comp” that includes words that tend to indicate Comparison, such as “optimal”, “other”, “supreme”, or “ultimate”. Several of the categories are complementary: Understatement versus Overstatement, Rise versus Fall, or Pleasure versus Pain. Pairs where one argument contains words that indicate Rise and the other argument indicates Fall might be good evidence for a Comparison relation. The benefit of using these tags instead of just the word pairs is that we see more observations for each semantic class than for any particular word, reducing the data sparsity problem. For example, the pair rose:fell often indicates a Comparison relation when speaking about stocks. However, occasionally authors refer to stock prices as “jumping” rather than “rising”. Since both jump and rise are members of the Rise class, new jump examples can be classified using past rise examples. Development testing showed that including features for all words’ tags was not useful, so we include the Inquirer tags of only the verbs in the two arguments and their cross-product. Just as for the polarity features, we include features for both each tag and its negation. Money/Percent/Num: If two adjacent sentences both contain numbers, dollar amounts, or percentages, it is likely that a comparison relation might hold between the sentences. We included a feature for the count of numbers, percentages, and dollar amounts in Arg1 and Arg2. We also included the number of times each combination of number/percent/dollar occurs in Arg1 and Arg2. For example, if Arg1 mentions a percentage and Arg2 has two dollar amounts, the feature Arg1Percent-Arg2Money would have a count of 2. This feature is probably genre-dependent. Numbers and percentages often appear in financial texts but would be less frequent in other genres. WSJ-LM: This feature represents the extent to which the words in the text spans are typical of each relation. For each sense, we created unigram and bigram language models over the implicit examples in the training set. We compute each example’s probability according to each of these language models. The features are the ranks of the spans’ likelihoods according to the various language models. For example, if of the unigram models, the most likely relation to generate this example was Contingency, then the example would include the feature ContingencyUnigram1. If the third most likely relation according to the bigram models was Expansion, then it would include the feature ExpansionBigram3. Expl-LM: This feature ranks the text spans according to language models derived from the explicit examples in the TextRels corpus. However, the corpus contains only Cause, Contrast and Norelation, hence we expect the WSJ language models to be more helpful. 
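As a rough illustration of how the polarity features described earlier in this section could be computed, the following sketch counts negated and non-negated sentiment words per argument and forms the Arg1 x Arg2 cross-product features. The tiny POLARITY and NEGATORS tables are stand-ins for the MPQA subjectivity lexicon and the General Inquirer Negate category, and the feature names and the three-token negation window are assumptions made for the example, not the authors' implementation.

```python
from collections import Counter
from itertools import product

# Toy stand-ins for the MPQA lexicon and the General Inquirer "Negate" list.
POLARITY = {"nice": "positive", "good": "positive", "safe": "positive",
            "oblivion": "negative", "difficult": "negative"}
NEGATORS = {"not", "never", "no", "n't"}

def polarity_counts(tokens, arg_label):
    """Count negated and non-negated sentiment words in one argument span."""
    feats = Counter()
    for i, tok in enumerate(tokens):
        pol = POLARITY.get(tok.lower())
        if pol is None:
            continue
        # Mark a sentiment word as negated if a negation cue appears shortly
        # before it (the window size here is an illustrative assumption).
        negated = any(t.lower() in NEGATORS for t in tokens[max(0, i - 3):i])
        feats[f"{arg_label}{'Negate' if negated else ''}{pol.capitalize()}"] += 1
    return feats

def polarity_features(arg1_tokens, arg2_tokens):
    """Per-argument sentiment counts plus the Arg1 x Arg2 cross-products."""
    feats = polarity_counts(arg1_tokens, "Arg1") + polarity_counts(arg2_tokens, "Arg2")
    arg1_keys = [k for k in feats if k.startswith("Arg1")]
    arg2_keys = [k for k in feats if k.startswith("Arg2")]
    for k1, k2 in product(arg1_keys, arg2_keys):
        feats[f"{k1}_x_{k2}"] = feats[k1] * feats[k2]
    return feats

print(polarity_features("the joint venture was not a good one".split(),
                        "it was supposed to be a safe entry".split()))
```

On the example above, the negated "good" in the first span and the positive "safe" in the second yield a cross-product feature of the kind the paper expected to help Comparison.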
Verbs: These features include the number of pairs of verbs in Arg1 and Arg2 from the same verb class. Two verbs are from the same verb class if each of their highest Levin verb class (Levin, 1993) levels (in the LCS Database (Dorr, 2001)) are the same. The intuition behind this feature is that the more related the verbs, the more likely the relation is an Expansion. The verb features also include the average length of verb phrases in each argument, as well as the cross product of this feature for the two arguments. We hypothesized that verb chunks that contain more words, such as “They [are allowed to proceed]” often contain rationales afterwards (signifying Contingency relations), while short verb phrases like “They proceed” might occur more often in Expansion or Temporal relations. Our final verb features were the part of speech tags (gold-standard from the Penn Treebank) of the main verb. One would expect that Expansion would link sentences with the same tense, whereas Contingency and Temporal relations would contain verbs with different tenses. First-Last, First3: The first and last words of a relation’s arguments have been found to be particularly useful for predicting its sense (Wellner et al., 2006). Wellner et al. (2006) suggest that these words are such predictive features because they are often explicit discourse connectives. In our experiments on implicits, the first and last words are not connectives. However, some implicits have been found to be related by connective-like expressions which often appear in the beginning of the second argument. In the PDTB, these are annotated as alternatively lexicalized relations (AltLexes). To capture such effects, we included the first and last words of Arg1 as features, the first 687 and last words of Arg2, the pair of the first words of Arg1 and Arg2, and the pair of the last words. We also add two additional features which indicate the first three words of each argument. Modality: Modal words, such as “can”, “should”, and “may”, are often used to express conditional statements (i.e. “If I were a wealthy man, I wouldn’t have to work hard.”) thus signaling a Contingency relation. We include a feature for the presence or absence of modals in Arg1 and Arg2, features for specific modal words, and their cross-products. Context: Some implicit relations appear immediately before or immediately after certain explicit relations far more often than one would expect due to chance (Pitler et al., 2008). We define a feature indicating if the immediately preceding (or following) relation was an explicit. If it was, we include the connective trigger of the relation and its sense as features. We use oracle annotations of the connective sense, however, most of the connectives are unambiguous. One might expect a different distribution of relation types in the beginning versus further in the middle of a paragraph. We capture paragraphposition information using a feature which indicates if Arg1 begins a paragraph. Word pairs Four variants of word pair models were used in our experiments. All the models were eventually tested on implicit examples from the PDTB, but the training set-up was varied. Wordpairs-TextRels In this setting, we trained a model on word pairs derived from unannotated text (TextRels corpus). Wordpairs-PDTBImpl Word pairs for training were formed from the cross product of words in the textual spans (Arg1 x Arg2) of the PDTB implicit relations. 
Wordpairs-selected Here, only word pairs from Wordpairs-PDTBImpl with non-zero information gain on the TextRels corpus were retained. Wordpairs-PDTBExpl In this case, the model was formed by using the word pairs from the explicit relations in the sections of the PDTB used for training. 7 Classification Results For all experiments, we used sections 2-20 of the PDTB for training and sections 21-22 for testing. Sections 0-1 were used as a development set for feature design. We ran four binary classification tasks to identify each of the main relations from the rest. As each of the relations besides Expansion are infrequent, we train using equal numbers of positive and negative examples of the target relation. The negative examples were chosen at random. We used all of sections 21 and 22 for testing, so the test set is representative of the natural distribution. The training sets contained: Comparison (1927 positive, 1927 negative), Contingency (3500 each), Expansion3 (6356 each), and Temporal (730 each). The test set contained: 151 examples of Comparison, 291 examples of Contingency, 986 examples of Expansion, 82 examples of Temporal, and 13 examples of No-relation. We used Naive Bayes, Maximum Entropy (MaxEnt), and AdaBoost (Freund and Schapire, 1996) classifiers implemented in MALLET. 7.1 Non-Wordpair Features The performance using only our semantically informed features is shown in Table 7. Only the Naive Bayes classification results are given, as space is limited and MaxEnt and AdaBoost gave slightly lower accuracies overall. The table lists the f-score for each of the target relations, with overall accuracy shown in brackets. Given that the experiments are run on natural distribution of the data, which are skewed towards Expansion relations, the f-score is the more important measure to track. Our random baseline is the f-score one would achieve by randomly assigning classes in proportion to its true distribution in the test set. The best results for all four tasks are considerably higher than random prediction, but still low overall. Our features provide 6% to 18% absolute improvements in f-score over the baseline for each of the four tasks. The largest gain was in the Contingency versus Other prediction task. The least improvement was for distinguishing Expansion versus Other. However, since Expansion forms the largest class of relations, its f-score is still the highest overall. We discuss the results per relation class next. Comparison We expected that polarity features would be especially helpful for identifying Com3The PDTB also contains annotations of entity relations, which most frameworks consider a subset of Expansion. Thus, we include relations annotated as EntRel as positive examples of Expansion. 688 Features Comp. vs. Not Cont. vs. Other Exp. vs. Other Temp. vs. 
Other Four-way Money/Percent/Num 19.04 (43.60) 18.78 (56.27) 22.01 (41.37) 10.40 (23.05) (63.38) Polarity Tags 16.63 (55.22) 19.82 (76.63) 71.29 (59.23) 11.12 (18.12) (65.19) WSJ-LM 18.04 (9.91) 0.00 (80.89) 0.00 (35.26) 10.22 (5.38) (65.26) Expl-LM 18.04 (9.91) 0.00 (80.89) 0.00 (35.26) 10.22 (5.38) (65.26) Verbs 18.55 (26.19) 36.59 (62.44) 59.36 (52.53) 12.61 (41.63) (65.33) First-Last, First3 21.01 (52.59) 36.75 (59.09) 63.22 (56.99) 15.93 (61.20) (65.40) Inquirer tags 17.37 (43.8) 15.76 (77.54) 70.21 (58.04) 11.56 (37.69) (62.21) Modality 17.70 (17.6) 21.83 (76.95) 15.38 (37.89) 11.17 (27.91) (65.33) Context 19.32 (56.66) 29.55 (67.42) 67.77 (57.85) 12.34 (55.22) (64.01) Random 9.91 19.11 64.74 5.38 Table 2: f-score (accuracy) using different features; Naive Bayes. parison relations. Surprisingly, polarity was actually one of the worst classes of features for Comparison, achieving an f-score of 16.33 (in contrast to using the first, last and first three words of the sentences as features, which leads to an f-score of 21.01). We examined the prevalence of positivenegative or negative-positive polarity pairs in our training set. 30% of the Comparison examples contain one of these opposite polarity pairs, while 31% of the Other examples contain an opposite polarity pair. To our knowledge, this is the first study to examine the prevalence of polarity words in the arguments of discourse relations in their natural distributions. Contrary to popular belief, Comparisons do not tend to have more opposite polarity pairs. The two most useful classes of features for recognizing Comparison relations were the first, last and first three words in the sentence and the context features that indicate the presence of a paragraph boundary or of an explicit relation just before or just after the location of the hypothesized implicit relation (19.32 f-score). Contingency The two best features for the Contingency vs. Other distinction were verb information (36.59 f-score) and first, last and first three words in the sentence (36.75 f-score). Context again was one of the features that led to improvement. This makes sense, as Pitler et al. (2008) found that implicit contingencies are often found immediately following explicit comparisons. We were surprised that the polarity features were helpful for Contingency but not Comparison. Again we looked at the prevalence of opposite polarity pairs. While for Comparison versus Other there was not a significant difference, for Contingency there are quite a few more opposite polarity pairs (52%) than for not Contingency (41%). The language model features were completely useless for distinguishing contingencies from other relations. Expansion As Expansion is the majority class in the natural distribution, recall is less of a problem than precision. The features that help achieve the best f-score are all features that were found to be useful in identifying other relations. Polarity tags, Inquirer tags and context were the best features for identifying expansions with f-scores around 70%. Temporal Implicit temporal relations are relatively rare, making up only about 5% of our test set. Most temporal relations are explicitly marked with a connective like “when” or “after”. Yet again, the first and last words of the sentence turned out to be useful indicators for temporal relations (15.93 f-score). The importance of the first and last words for this distinction is clear. 
It derives from the fact that temporal implicits often contain words like “yesterday” or “Monday” at the end of the sentence. Context is the next most helpful feature for temporal relations. 7.2 Which word pairs help? For Comparison and Contingency, we analyze the behavior of word pair features under several different settings. Specifically we want to address two important related questions raised in recent work by others: (i) is unannotated data from explicits useful for training models that disambiguate implicit discourse relations and (ii) are explicit and implicit relations intrinsically different from each other. Wordpairs-TextRels is the worst approach. The best use of word pair features is Wordpairsselected. This model gives 4% better absolute fscore for Comparison and 14% for Contingency over Wordpairs-TextRels. In this setting the TextRels data was used to choose the word pair features, but the probabilities for each feature were estimated using the training portion of the PDTB 689 Comp. vs. Other Wordpairs-TextRels 17.13 (46.62) Wordpairs-PDTBExpl 19.39 (51.41) Wordpairs-PDTBImpl 20.96 (42.55) First-last, first3 (best-non-wp) 21.01 (52.59) Best-non-wp + Wordpairs-selected 21.88 (56.40) Wordpairs-selected 21.96 (56.59) Cont. vs. Other Wordpairs-TextRels 31.10 (41.83) Wordpairs-PDTBExpl 37.77 (56.73) Wordpairs-PDTBImpl 43.79 (61.92) Polarity, verbs, first-last, first3, modality, context (best-non-wp) 42.14 (66.64) Wordpairs-selected 45.60 (67.10) Best-non-wp + Wordpairs-selected 47.13 (67.30) Expn. vs. Other Best-non-wp + wordpairs 62.39 (59.55) Wordpairs-PDTBImpl 63.84 (60.28) Polarity, inquirer tags, context (bestnon-wp) 76.42 (63.62) Temp. vs. Other First-last, first3 (best-non-wp) 15.93 (61.20) Wordpairs-PDTBImpl 16.21 (61.98) Best-non-wp + Wordpairs-PDTBImpl 16.76 (63.49) Table 3: f-score (accuracy) of various feature sets; Naive Bayes. implicit examples. We also confirm that even within the PDTB, information from annotated explicit relations (Wordpairs-PDTBExpl) is not as helpful as information from annotated implicit relations (Wordpairs-PDTBImpl). The absolute difference in f-score between the two models is close to 2% for Comparison, and 6% for Contingency. 7.3 Best results Adding other features to word pairs leads to improved performance for Contingency, Expansion and Temporal relations, but not for Comparison. For contingency detection, the best combination of our features included polarity, verb information, first and last words, modality, context with Wordpairs-selected. This combination led to a definite improvement, reaching an f-score of 47.13 (16% absolute improvement in f-score over Wordpairs-TextRels). For detecting expansions, the best combination of our features (polarity+Inquirer tags+context) outperformed Wordpairs-PDTBImpl by a wide margin, close to 13% absolute improvement (fscores of 76.42 and 63.84 respectively). 7.4 Sequence Model of Discourse Relations Our results from the previous section show that classification of implicits benefits from information about nearby relations, and so we expected improvements using a sequence model, rather than classifying each relation independently. We trained a CRF classifier (Lafferty et al., 2001) over the sequence of implicit examples from all documents in sections 02 to 20. The test set is the same as used for the 2-way classifiers. We compare against a 6-way4 Naive Bayes classifier. Only word pairs were used as features for both. 
Overall 6-way prediction accuracy is 43.27% for the Naive Bayes model and 44.58% for the CRF model. 8 Conclusion We have presented the first study that predicts implicit discourse relations in a realistic setting (distinguishing a relation of interest from all others, where the relations occur in their natural distributions). Also unlike prior work, we separate the task from the easier task of explicit discourse prediction. Our experiments demonstrate that features developed to capture word polarity, verb classes and orientation, as well as some lexical features are strong indicators of the type of discourse relation. We analyze word pair features used in prior work that were intended to capture such semantic oppositions. We show that the features in fact do not capture semantic relation but rather give information about function word co-occurrences. However, they are still a useful source of information for discourse relation prediction. The most beneficial application of such features is when they are selected from a large unannotated corpus of explicit relations, but then trained on manually annotated implicit relations. Context, in terms of paragraph boundaries and nearby explicit relations, also proves to be useful for the prediction of implicit discourse relations. It is helpful when added as a feature in a standard, instance-by-instance learning model. A sequence model also leads to over 1% absolute improvement for the task. 9 Acknowledgments This work was partially supported by NSF grants IIS-0803159, IIS-0705671 and IGERT 0504487. We would like to thank Sasha Blair-Goldensohn for providing us with the TextRels data and for the insightful discussion in the early stages of our work. 4the four main relations, EntRel, NoRel 690 References N. Asher and A. Lascarides. 2003. Logics of conversation. Cambridge University Press. S. Blair-Goldensohn, K.R. McKeown, and O.C. Rambow. 2007. Building and Refining RhetoricalSemantic Relation Models. In Proceedings of NAACL HLT, pages 428–435. L. Carlson, D. Marcu, and M.E. Okurowski. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue, pages 1–10. B.J. Dorr. 2001. LCS Verb Database. Technical Report Online Software Database, University of Maryland, College Park, MD. Y. Freund and R.E. Schapire. 1996. Experiments with a New Boosting Algorithm. In Machine Learning: Proceedings of the Thirteenth International Conference, pages 148–156. R. Girju. 2003. Automatic detection of causal relations for Question Answering. In Proceedings of the ACL 2003 workshop on Multilingual summarization and question answering-Volume 12, pages 76–83. D. Graff. 2003. English gigaword corpus. Corpus number LDC2003T05, Linguistic Data Consortium, Philadelphia. J. Hobbs. 1979. Coherence and coreference. Cognitive Science, 3:67–90. A. Knott and T. Sanders. 1998. The classification of coherence relations and their linguistic markers: An exploration of two languages. Journal of Pragmatics, 30(2):135–175. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In International Conference on Machine Learning 2001, pages 282–289. M. Lapata and A. Lascarides. 2004. Inferring sentence-internal temporal relations. In HLTNAACL 2004: Main Proceedings. B. Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. Chicago, IL. W.C. Mann and S.A. Thompson. 1988. 
Rhetorical structure theory: Towards a functional theory of text organization. Text, 8. D. Marcu and A. Echihabi. 2001. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 368–375. D. Marcu. 2000. The Theory and Practice of Discourse and Summarization. The MIT Press. K. McKeown. 1985. Text Generation: Using Discourse strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, Cambridge, England. E. Pitler, M. Raghupathy, H. Mehta, A. Nenkova, A. Lee, and A. Joshi. 2008. Easily identifiable discourse relations. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING08), short paper. R. Soricut and D. Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In HLT-NAACL. C. Sporleder and A. Lascarides. 2008. Using automatically labelled examples to classify rhetorical relations: An assessment. Natural Language Engineering, 14:369–416. P.J. Stone, J. Kirsh, and Cambridge Computer Associates. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press. B. Wellner, J. Pustejovsky, C. Havasi, A. Rumshisky, and R. Sauri. 2006. Classification of discourse coherence relations: An exploratory study using multiple knowledge sources. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue. T. Wilson, J. Wiebe, and P. Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 347–354. F. Wolf and E. Gibson. 2005. Representing discourse coherence: A corpus-based study. Computational Linguistics, 31(2):249–288. 691
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 692–700, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Framework of Feature Selection Methods for Text Categorization Shoushan Li1 Rui Xia2 Chengqing Zong2 Chu-Ren Huang1 1 Department of Chinese and Bilingual Studies The Hong Kong Polytechnic University {shoushan.li,churenhuang} @gmail.com 2 National Laboratory of Pattern Recognition Institute of Automation Chinese Academy of Sciences {rxia,cqzong}@nlpr.ia.ac.cn Abstract In text categorization, feature selection (FS) is a strategy that aims at making text classifiers more efficient and accurate. However, when dealing with a new task, it is still difficult to quickly select a suitable one from various FS methods provided by many previous studies. In this paper, we propose a theoretic framework of FS methods based on two basic measurements: frequency measurement and ratio measurement. Then six popular FS methods are in detail discussed under this framework. Moreover, with the guidance of our theoretical analysis, we propose a novel method called weighed frequency and odds (WFO) that combines the two measurements with trained weights. The experimental results on data sets from both topic-based and sentiment classification tasks show that this new method is robust across different tasks and numbers of selected features. 1 Introduction With the rapid growth of online information, text classification, the task of assigning text documents to one or more predefined categories, has become one of the key tools for automatically handling and organizing text information. The problems of text classification normally involve the difficulty of extremely high dimensional feature space which sometimes makes learning algorithms intractable. A standard procedure to reduce the feature dimensionality is called feature selection (FS). Various FS methods, such as document frequency (DF), information gain (IG), mutual information (MI), 2 χ -test (CHI), Bi-Normal Separation (BNS), and weighted log-likelihood ratio (WLLR), have been proposed for the tasks (Yang and Pedersen, 1997; Nigam et al., 2000; Forman, 2003) and make text classification more efficient and accurate. However, comparing these FS methods appears to be difficult because they are usually based on different theories or measurements. For example, MI and IG are based on information theory, while CHI is mainly based on the measurements of statistic independence. Previous comparisons of these methods have mainly depended on empirical studies that are heavily affected by the experimental sets. As a result, conclusions from those studies are sometimes inconsistent. In order to better understand the relationship between these methods, building a general theoretical framework provides a fascinating perspective. Furthermore, in real applications, selecting an appropriate FS method remains hard for a new task because too many FS methods are available due to the long history of FS studies. For example, merely in an early survey paper (Sebastiani, 2002), eight methods are mentioned. These methods are provided by previous work for dealing with different text classification tasks but none of them is shown to be robust across different classification applications. In this paper, we propose a framework with two basic measurements for theoretical comparison of six FS methods which are widely used in text classification. 
Moreover, a novel method is set forth that combines the two measurements and tunes their influences considering different application domains and numbers of selected features. The remainder of this paper is organized as follows. Section 2 introduces the related work on 692 feature selection for text classification. Section 3 theoretically analyzes six FS methods and proposes a new FS approach. Experimental results are presented and analyzed in Section 4. Finally, Section 5 draws our conclusions and outlines the future work. 2 Related Work FS is a basic problem in pattern recognition and has been a fertile field of research and development since the 1970s. It has been proven to be effective on removing irrelevant and redundant features, increasing efficiency in learning tasks, and improving learning performance. FS methods fall into two broad categories, the filter model and the wrapper model (John et al., 1994). The wrapper model requires one predetermined learning algorithm in feature selection and uses its performance to evaluate and determine which features are selected. And the filter model relies on general characteristics of the training data to select some features without involving any specific learning algorithm. There is evidence that wrapper methods often perform better on small scale problems (John et al, 1994), but on large scale problems, such as text classification, wrapper methods are shown to be impractical because of its high computational cost. Therefore, in text classification, filter methods using feature scoring metrics are popularly used. Below we review some recent studies of feature selection on both topic-based and sentiment classification. In the past decade, FS studies mainly focus on topic-based classification where the classification categories are related to the subject content, e.g., sport or education. Yang and Pedersen (1997) investigate five FS metrics and report that good FS methods improve the categorization accuracy with an aggressive feature removal using DF, IG, and CHI. More recently, Forman (2003) empirically compares twelve FS methods on 229 text classification problem instances and proposes a new method called 'Bi-Normal Separation' (BNS). Their experimental results show that BNS can perform very well in the evaluation metrics of recall rate and F-measure. But for the metric of precision, it often loses to IG. Besides these two comparison studies, many others contribute to this topic (Yang and Liu, 1999; Brank et al., 2002; Gabrilovich and Markovitch, 2004) and more and more new FS methods are generated, such as, Gini index (Shang et al., 2007), Distance to Transition Point (DTP) (Moyotl-Hernandez and Jimenez-Salazar, 2005), Strong Class Information Words (SCIW) (Li and Zong, 2005) and parameter tuning based FS for Rocchio classifier (Moschitti, 2003). Recently, sentiment classification has become popular because of its wide applications (Pang et al., 2002). Its criterion of classification is the attitude expressed in the text (e.g., recommended or not recommended, positive or negative) rather than some facts (e.g., sport or education). To our best knowledge, yet no related work has focused on comparison studies of FS methods on this special task. There are only some scattered reports in their experimental studies. Riloff et al. (2006) report that the traditional FS method (only using IG method) performs worse than the baseline in some cases. However, Cui et al. 
(2006) present the experiments on the sentiment classification for large-scale online product reviews to show that using the FS method of CHI does not degrade the performance but can significantly reduce the dimension of the feature vector. Moreover, Ng et al. (2006) examine the FS of the weighted log-likelihood ratio (WLLR) on the movie review dataset and achieves an accuracy of 87.1%, which is higher than the result reported by Pang and Lee (2004) with the same dataset. From the analysis above, we believe that the performance of the sentiment classification system is also dramatically affected by FS. 3 Our Framework In the selection process, each feature (term, or single word) is assigned with a score according to a score-computing function. Then those with higher scores are selected. These mathematical definitions of the score-computing functions are often defined by some probabilities which are estimated by some statistic information in the documents across different categories. For the convenience of description, we give some notations of these probabilities below. ( ) P t : the probability that a document x contains term t ; ( ) i P c : the probability that a document x does not belong to category ic ; ( , ) i P t c : the joint probability that a document x contains term t and also belongs to category ic ; ( | ) i P c t : the probability that a document x belongs to category ic ,under the condition that it contains term t. 693 ( | ) i P t c : the probability that, a document x does not contain term t with the condition that x belongs to category ic ; Some other probabilities, such as ( ) P t , ( ) i P c , ( | ) i P t c , ( | ) i P t c , ( | ) i P c t , and ( | ) i P c t , are similarly defined. In order to estimate these probabilities, statistical information from the training data is needed, and notations about the training data are given as follows: 1 { }m i i c = : the set of categories; iA : the number of the documents that contain the term t and also belong to category ic ; iB : the number of the documents that contain the term t but do not belong to category ic ; i N : the total number of the documents that belong to category ic ; all N : the total number of all documents from the training data. i C : the number of the documents that do not contain the term t but belong to category ic , i.e., i i N A − i D : the number of the documents that neither contain the term t nor belong to category ic , i.e., all i i N N B − − ; In this section, we would analyze theoretically six popular methods, namely DF, MI, IG, CHI, BNS, and WLLR. Although these six FS methods are defined differently with different scoring measurements, we believe that they are strongly related. In order to connect them, we define two basic measurements which are discussed as follows. The first measurement is to compute the document frequency in one category, i.e., iA . The second measurement is the ratio between the document frequencies in one category and the other categories, i.e., / i i A B . The terms with a high ratio are often referred to as the terms with high category information. These two measurements form the basis for all the measurements that are used by the FS methods throughout this paper. In particular, we show that DF and MI are using the first and second measurement respectively. Other complicated FS methods are combinations of these two measurements. Thus, we regard the two measurements as basic, which are referred to as the frequency measurement and ratio measurement. 
3.1 Document Frequency (DF)
DF is the number of documents in which a term occurs. It is defined as
$$DF(t) = \sum_{i=1}^{m} A_i$$
The terms with low or high document frequency are often referred to as rare or common terms, respectively. It is easy to see that this FS method is based on the first basic measurement. It assumes that the terms with higher document frequency are more informative for classification. But sometimes this assumption does not make any sense, for example, the stop words (e.g., the, a, an) hold very high DF scores, but they seldom contribute to classification. In general, this simple method performs very well in some topic-based classification tasks (Yang and Pedersen, 1997).

3.2 Mutual Information (MI)
The mutual information between term t and class $c_i$ is defined as
$$I(t, c_i) = \log \frac{P(t \mid c_i)}{P(t)}$$
And it is estimated as
$$MI = \log \frac{A_i \times N_{all}}{(A_i + C_i)(A_i + B_i)}$$
Let us consider the following formula (using Bayes theorem)
$$I(t, c_i) = \log \frac{P(t \mid c_i)}{P(t)} = \log \frac{P(c_i \mid t)}{P(c_i)}$$
Therefore,
$$I(t, c_i) = \log P(c_i \mid t) - \log P(c_i)$$
And it is estimated as
$$MI = \log \frac{A_i}{A_i + B_i} - \log \frac{N_i}{N_{all}} = \log A_i - \log(A_i + B_i) + \log N_{all} - \log N_i = -\log\Big(1 + \frac{B_i}{A_i}\Big) + \log \frac{N_{all}}{N_i}$$
From this formula, we can see that the MI score is based on the second basic measurement. This method assumes that the term with higher category ratio is more effective for classification. It is reported that this method is biased towards low frequency terms and the bias becomes extreme when $P(t)$ is near zero. It can be seen in the following formula (Yang and Pedersen, 1997)
$$I(t, c_i) = \log P(t \mid c_i) - \log P(t)$$
3.4 2 χ Statistic (CHI) The CHI measurement (Yang and Pedersen, 1997) is defined as 2 ( ) ( ) ( ) ( ) ( ) all i i i i i i i i i i i i N A D C B CHI A C B D A B C D ⋅ − = + ⋅ + ⋅ + ⋅ + In order to get the relationship between CHI and the two measurements, the above formula is rewritten as follows 2 [ ( ) ( ) ] ( ) ( ) [ ( )] all i all i i i i i i all i i i all i i N A N N B N A B CHI N N N A B N A B ⋅ − − − − = ⋅ − ⋅ + ⋅ − + For simplicity, we assume that there are two categories and the numbers of the training documents in the two categories are the same ( 2 all i N N = ). The CHI score then can be written as 2 2 2 ( ) ( ) [2 ( )] 2 ( / 1) 2 ( / 1) [ / ( / 1)] i i i i i i i i i i i i i i i i i i i N A B CHI A B N A B N A B N A B A B A B A − = + ⋅ − + − = + ⋅ ⋅ − + From the above formula, we see that the CHI score is related to both the frequency measurement iA and ratio measurement / i i A B . Also, when keeping the same ratio value, the terms with higher document frequencies will yield higher CHI scores. 3.5 Bi-Normal Separation (BNS) BNS method is originally proposed by Forman (2003) and it is defined as 1 1 ( , ) ( ( | )) ( ( | ) i i i BNS t c F P t c F P t c − − = − It is calculated using the following formula 1 1 ( ) ( ) i i i all i A B BNS F F N N N − − = − − where ( ) F x is the cumulative probability function of standard normal distribution. For simplicity, we assume that there are two categories and the numbers of the training documents in the two categories are the same, i.e., 2 all i N N = and we also assume that i i A B > . It should be noted that this assumption is only to allow easier analysis but will not be applied in our experiment implementation. In addition, we only consider the case when / 0.5 i i A N ≤ . In fact, most terms take the document frequency iA which is less than half of i N . Under these conditions, the BNS score can be shown in Figure 1 where the area of the shadow part represents ( / / ) i i i i A N B N − and the length of the projection to the x axis represents the BNS score. 695 From Figure 1, we can easily draw the two following conclusions: 1) Given the same value of iA , the BNS score increases with the increase of i i A B − . 2) Given the same value of i i A B − , BNS score increase with the decrease of iA . Figure 1. View of BNS using the normal probability distribution. Both the left and right graphs have shadowed areas of the same size. And the value of i i A B − can be rewritten as the following 1 (1 ) / i i i i i i i i i A B A B A A A A B − − = ⋅ = − ⋅ The above analysis gives the following conclusions regarding the relationship between BNS and the two basic measurements: 1) Given the same iA , the BNS score increases with the increase of / i i A B . 2) Given the same / i i A B , when iA increases, i i A B − also increase. It seems that the BNS score does not show a clear relationship with iA . In summary, the BNS FS method is biased towards the terms with the high category ratio but cannot be said to be sensitive to document frequency. 3.6 Weighted Log Likelihood Ratio (WLLR) WLLR method (Nigam et al., 2000) is defined as ( | ) ( , ) ( | )log ( | ) i i i i P t c WLLR t c P t c P t c = And it is estimated as ( ) log i i all i i i i A A N N WLLR N B N ⋅ − = ⋅ The formula shows WLLR is proportional to the frequency measurement and the logarithm of the ratio measurement. 
Clearly, WLLR is biased towards the terms with both high category ratio and high document frequency and the frequency measurement seems to take a more important place than the ratio measurement. 3.7 Weighed Frequency and Odds (WFO) So far in this section, we have shown that the two basic measurements constitute the six FS methods. The class prior probabilities, ( ), 1,2,..., i P c i m = , are also related to the selection methods except for the two basic measurements. Since they are often estimated according to the distribution of the documents in the training data and are identical for all the terms in a class, we ignore the discussion of their influence on the selection measurements. In the experiment, we consider the case when training data have equal class prior probabilities. When training data are unbalanced, we need to change the forms of the two basic measurements to / i i A N and ( ) / ( ) i all i i i A N N B N ⋅ − ⋅ . Because some methods are expressed in complex forms, it is difficult to explain their relationship with the two basic measurements, for example, which one prefers the category ratio most. Instead, we will give the preference analysis in the experiment by analyzing the features in real applications. But the following two conclusions are drawn without doubt according to the theoretical analysis given above. 1) Good features are features with high document frequency; 2) Good features are features with high category ratio. These two conclusions are consistent with the original intuition. However, using any single one does not provide competence in selecting the best set of features. For example, stop words, such as ‘a’, ‘the’ and ‘as’, have very high document frequency but are useless for the classification. In real applications, we need to mix these two measurements to select good features. Because of different distribution of features in different domains, the importance of each measurement may differ a lot in different applications. Moreover, even in a given domain, when different numbers of features are to be selected, different combinations of the two measurements are required to provide the best performance. Although a great number of FS methods is available, none of them can appropriately change the preference of the two measurements. A better way is to tune the importance according to the application rather than to use a predetermined combination. Therefore, we propose a new FS method called Weighed Frequency and Odds (WFO), which is defined as 696 ( | ) / ( | ) 1 i i when P t c P t c > 1 ( | ) ( , ) ( | ) [log ] ( | ) i i i i P t c WFO t c P t c P t c λ λ − = ( , ) 0 i else WFO t c = And it is estimated as 1 ( ) ( ) (log ) i i all i i i i A A N N WFO N B N λ λ − ⋅ − = ⋅ where λ is the parameter for tuning the weight between frequency and odds. The value of λ varies from 0 to 1. By assigning different value to λ we can adjust the preference of each measurement. Specially, when 0 λ = , the algorithm prefers the category ratio that is equivalent to the MI method; when 1 λ = , the algorithm is similar to DF; when 0.5 λ = , the algorithm is exactly the WLLR method. In real applications, a suitable parameter λ needs to be learned by using training data. 4 Experimental Studies 4.1 Experimental Setup Data Set: The experiments are carried out on both topic-based and sentiment text classification datasets. In topic-based text classification, we use two popular data sets: one subset of Reuters-21578 referred to as R2 and the 20 Newsgroup dataset referred to as 20NG. 
In detail, R2 consists of about 2,000 documents in two categories drawn from the standard Reuters-21578 corpus, and 20NG is a collection of approximately 20,000 documents in 20 categories.1 For sentiment text classification, we also use two data sets: the widely used Cornell movie-review dataset2 (Pang and Lee, 2004) and a dataset of product reviews from the DVD domain3 (Blitzer et al., 2007). Both are 2-category tasks and each consists of 2,000 reviews. In our experiments, the documents of every data set are (nearly) equally distributed across the categories.

1 http://people.csail.mit.edu/~jrennie/20Newsgroups/
2 http://www.cs.cornell.edu/People/pabo/movie-review-data/
3 http://www.seas.upenn.edu/~mdredze/datasets/sentiment/

Classification Algorithm: Many classification algorithms are available for text classification, such as Naïve Bayes, Maximum Entropy, k-NN, and SVM. Among these methods, SVM has been shown to perform better than the others (Yang and Pedersen, 1997; Pang et al., 2002). Hence we apply the SVM algorithm with the help of the LIBSVM tool.4 Almost all parameters are set to their default values, except the kernel function, which is changed from a polynomial kernel to a linear one because the linear kernel usually performs better for text classification tasks.

4 http://www.csie.ntu.edu.tw/~cjlin/libsvm/

Experiment Implementation: In the experiments, each dataset is randomly and evenly split into two subsets: 90% of the documents are used as training data and the remaining 10% as test data. The training data are used for training the SVM classifiers, learning the parameter in the WFO method, and selecting "good" features for each FS method. The features are single words with a Boolean weight (0 or 1) representing the presence or absence of a feature. In addition to the "principled" FS methods, terms occurring in fewer than three documents (DF ≤ 3) in the training set are removed.

4.2 Relationship between FS Methods and the Two Basic Measurements

To help understand the relationship between the FS methods and the two basic measurements, we present the following empirical study. Since the DF and MI methods utilize only the document frequency and the category information respectively, we use the DF scores and MI scores to represent the information of the two basic measurements. We therefore select the top 2% of terms with each method and then investigate the distribution of their DF and MI scores. First, for a clear comparison, we normalize the scores from all the methods using Min-Max normalization, which maps a score s to s' in the range [0, 1] by computing

$$s' = \frac{s - Min}{Max - Min}$$

where Min and Max denote the minimum and maximum values, respectively, over all terms' scores under one FS method. Table 1 shows the mean values of the top-2% terms' MI scores and DF scores for all six FS methods in each domain. From this table, we can clearly see the relationship between each method and the two basic measurements. For instance, BNS most distinctly prefers terms with high MI scores and low DF scores.

FS Method   20NG                R2                  Movie               DVD
            DF score  MI score  DF score  MI score  DF score  MI score  DF score  MI score
MI          0.004     0.870     0.047     0.959     0.003     0.888     0.004     0.881
BNS         0.005     0.864     0.117     0.922     0.008     0.881     0.006     0.880
CHI         0.015     0.814     0.211     0.748     0.092     0.572     0.055     0.676
IG          0.087     0.525     0.209     0.792     0.095     0.559     0.066     0.669
WLLR        0.026     0.764     0.206     0.805     0.168     0.414     0.127     0.481
DF          0.122     0.252     0.268     0.562     0.419     0.09      0.321     0.111

Table 1. The mean values of all top-2% terms' MI and DF scores using six FS methods in each domain.
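The numbers in Table 1 follow from a simple procedure: rank the vocabulary by each FS method, keep the top 2% of terms, and average their Min-Max-normalized DF and MI scores. A sketch of that procedure is given below; it is illustrative only, and the score dictionaries passed in are assumed to have been computed elsewhere with the estimators of Section 3.

```python
def min_max_normalize(scores):
    # Map every score s to (s - Min) / (Max - Min), as in Section 4.2.
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against a constant score list (our own guard)
    return {t: (s - lo) / span for t, s in scores.items()}

def mean_top_scores(method_scores, df_scores, mi_scores, top_frac=0.02):
    # method_scores, df_scores, mi_scores: dicts mapping term -> raw score.
    # Returns the mean normalized DF and MI scores of the top `top_frac` terms
    # ranked by the given FS method, i.e., one pair of cells in Table 1.
    df_n, mi_n = min_max_normalize(df_scores), min_max_normalize(mi_scores)
    ranked = sorted(method_scores, key=method_scores.get, reverse=True)
    top = ranked[: max(1, int(len(ranked) * top_frac))]
    mean = lambda xs: sum(xs) / len(xs)
    return mean([df_n[t] for t in top]), mean([mi_n[t] for t in top])
```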
According to the degree of this preference, we can rank the six methods as MI, BNS ≻ IG, CHI, WLLR ≻ DF, where x ≻ y means that method x prefers terms with higher MI scores (more category information) and lower DF scores (lower document frequency) than method y. This empirical finding agrees with the observation that WLLR is biased towards high-frequency terms and that BNS is biased towards high category information (cf. the theoretical analysis in Section 3). We also find that CHI and IG share a similar preference over the two measurements in the 2-category domains, i.e., R2, Movie, and DVD. This gives a good explanation of why CHI and IG perform similarly on 2-category tasks, as Forman (2003) found in his experimental studies. According to this preference, we roughly cluster the FS methods into three groups: the first group contains the methods that dramatically prefer category information, e.g., MI and BNS; the second contains those that prefer both kinds of information, e.g., CHI, IG, and WLLR; and the third contains those that strongly prefer frequency information, e.g., DF.

4.3 Performances of Different FS Methods

It is worth noting that learning the parameter in WFO is very important for its good performance. We use 9-fold cross-validation to learn the parameter λ so as to avoid over-fitting. Specifically, we run nine trials, each using eight folds as a new training set and the remaining fold as a development set. In each trial with a fixed feature number m, we obtain the best value λ_{i,m-best} (i = 1, ..., 9) by varying λ_{i,m} from 0 to 1 in steps of 0.1 and taking the value that achieves the best performance on the development set. The average value λ_{m-best}, i.e.,

$$\lambda_{m\text{-}best} = \frac{1}{9}\sum_{i=1}^{9}\lambda_{i,m\text{-}best}$$

is used for further testing. Figure 2 shows the experimental results for all FS methods with different numbers of selected features. The red line with star markers represents the results of WFO. At first glance, in the R2 domain, the performance differences across all FS methods are very noisy when the feature number is larger than 1,000, which makes the comparison meaningless. We believe this is because the performances themselves in this task are very high (nearly 98%), so the differences between two FS methods cannot be very large (less than one percent). Even so, the WFO method never gives the worst performance and achieves the top performance about half the time, e.g., when the feature numbers are 20, 50, 100, 500, and 3,000. Let us pay more attention to the other three domains and discuss the results in two cases. In the first case, when the feature number is low (roughly below 1,000), the FS methods in the second group, including IG, CHI, and WLLR, always perform better than those in the other two groups. WFO also performs well because its parameters λ_{m-best} are successfully learned to be around 0.5, which makes it consistently behave like a member of the second group. Taking 500 features for instance, the learned parameters λ_{500-best} are 0.42, 0.50, and 0.34 in these three domains, respectively. In the second case, when the feature number is large, among the six traditional methods MI and BNS take the lead in the 20NG and Movie domains, while IG and CHI appear to be better and more stable than the others in the DVD domain. As for WFO, its performance is excellent across all three of these domains and across feature numbers.
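The λ-tuning scheme described at the start of this subsection is a per-fold grid search followed by averaging. A sketch is given below; it is an illustrative reading, and the evaluate function is a stand-in (not defined in the paper) for selecting the top-m features with WFO(λ), training the SVM, and returning development-set accuracy.

```python
def learn_lambda(folds, m, evaluate, grid=tuple(i / 10 for i in range(11))):
    # folds: a list of 9 document folds; m: the fixed number of features to select.
    # evaluate(lam, train_docs, dev_docs, m) -> accuracy is assumed to be supplied
    # by the caller: it selects the top-m features by WFO with parameter lam,
    # trains the classifier on train_docs, and scores it on dev_docs.
    best_per_fold = []
    for i, dev in enumerate(folds):
        train = [doc for j, fold in enumerate(folds) if j != i for doc in fold]
        best_lam = max(grid, key=lambda lam: evaluate(lam, train, dev, m))
        best_per_fold.append(best_lam)                    # lambda_{i,m-best}
    return sum(best_per_fold) / len(best_per_fold)        # lambda_{m-best}
```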
In each domain, it performs as well as or better than the top methods owing to its well-learned parameters. For example, in 20NG, the parameters λ_{m-best} are 0.28, 0.20, 0.08, and 0.01 when the feature numbers are 10,000, 15,000, 20,000, and 30,000. These values are close to 0 (WFO equals MI when λ = 0), and MI is the top method in this domain.

Figure 2. The classification accuracies of the four domains using seven different FS methods while increasing the number of selected features. (Four panels: Topic R2, Topic 20NG, Sentiment Movie, and Sentiment DVD; each plots accuracy against the number of selected features for DF, MI, IG, BNS, CHI, WLLR, and WFO.)

From Figure 2, we can also see that FS does help sentiment classification. At the very least, it can dramatically decrease the number of features without losing classification accuracy (in the Movie domain, using only 500-4,000 features is as good as using all 15,176 features).

5 Conclusion and Future Work

In this paper, we propose a framework with two basic measurements and use it to theoretically analyze six FS methods. The differences among them lie mainly in how they use these two measurements. Moreover, guided by this analysis, we propose a novel method called WFO, which combines the two measurements with trained weights. The experimental results show that our framework helps us to better understand and compare different FS methods. Furthermore, the novel method WFO, derived from the framework, performs robustly across different domains and feature numbers. In our study, we use four data sets to test the new method; many more text categorization data sets could be used. In addition, we focus only on balanced samples in each category in our experiments. It is also necessary to compare the FS methods on unbalanced data sets, which are common in real-life applications (Forman, 2003; Mladeni and Marko, 1999). These matters will be dealt with in future work.

Acknowledgments

The research work described in this paper has been partially supported by a Start-up Grant for Newly Appointed Professors, No. 1-BBZM, at the Hong Kong Polytechnic University.

References

J. Blitzer, M. Dredze, and F. Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain adaptation for sentiment classification. In Proceedings of ACL-07, the 45th Meeting of the Association for Computational Linguistics.
J. Brank, M. Grobelnik, N. Milic-Frayling, and D. Mladenic. 2002. Interaction of feature selection methods and linear classification models. In Workshop on Text Learning held at ICML.
H. Cui, V. Mittal, and M. Datar. 2006. Comparative experiments on sentiment classification for online product reviews. In Proceedings of AAAI-06, the 21st National Conference on Artificial Intelligence.
G. Forman. 2003. An extensive empirical study of feature selection metrics for text classification. The Journal of Machine Learning Research, 3(1): 1289-1305.
E. Gabrilovich and S. Markovitch. 2004. Text categorization with many redundant features: using aggressive feature selection to make SVMs competitive with C4.5.
In Proceedings of ICML-04, the 21st International Conference on Machine Learning.
G. John, K. Ron, and K. Pfleger. 1994. Irrelevant features and the subset selection problem. In Proceedings of ICML-94, the 11th International Conference on Machine Learning.
S. Li and C. Zong. 2005. A new approach to feature selection for text categorization. In Proceedings of the IEEE International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE).
D. Mladeni and G. Marko. 1999. Feature selection for unbalanced class distribution and naive bayes. In Proceedings of ICML-99, the 16th International Conference on Machine Learning.
A. Moschitti. 2003. A study on optimal parameter tuning for Rocchio text classifier. In Proceedings of ECIR, Lecture Notes in Computer Science, vol. 2633, pp. 420-435.
E. Moyotl-Hernandez and H. Jimenez-Salazar. 2005. Enhancement of DTP feature selection method for text categorization. In Proceedings of CICLing, Lecture Notes in Computer Science, vol. 3406, pp. 719-722.
V. Ng, S. Dasgupta, and S. M. Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In Proceedings of the COLING/ACL Main Conference Poster Sessions.
K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3): 103-134.
B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP-02, the Conference on Empirical Methods in Natural Language Processing.
B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL-04, the 42nd Meeting of the Association for Computational Linguistics.
E. Riloff, S. Patwardhan, and J. Wiebe. 2006. Feature subsumption for opinion analysis. In Proceedings of EMNLP-06, the Conference on Empirical Methods in Natural Language Processing.
F. Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1): 1-47.
W. Shang, H. Huang, H. Zhu, Y. Lin, Y. Qu, and Z. Wang. 2007. A novel feature selection algorithm for text categorization. The Journal of Expert System with Applications, 33:1-5.
Y. Yang and J. Pedersen. 1997. A comparative study on feature selection in text categorization. In Proceedings of ICML-97, the 14th International Conference on Machine Learning.
Y. Yang and X. Liu. 1999. A re-examination of text categorization methods. In Proceedings of SIGIR-99, the 22nd annual international ACM Conference on Research and Development in Information Retrieval.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 701–709, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Mine the Easy, Classify the Hard: A Semi-Supervised Approach to Automatic Sentiment Classification Sajib Dasgupta and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {sajib,vince}@hlt.utdallas.edu Abstract Supervised polarity classification systems are typically domain-specific. Building these systems involves the expensive process of annotating a large amount of data for each domain. A potential solution to this corpus annotation bottleneck is to build unsupervised polarity classification systems. However, unsupervised learning of polarity is difficult, owing in part to the prevalence of sentimentally ambiguous reviews, where reviewers discuss both the positive and negative aspects of a product. To address this problem, we propose a semi-supervised approach to sentiment classification where we first mine the unambiguous reviews using spectral techniques and then exploit them to classify the ambiguous reviews via a novel combination of active learning, transductive learning, and ensemble learning. 1 Introduction Sentiment analysis has recently received a lot of attention in the Natural Language Processing (NLP) community. Polarity classification, whose goal is to determine whether the sentiment expressed in a document is “thumbs up” or “thumbs down”, is arguably one of the most popular tasks in document-level sentiment analysis. Unlike topic-based text classification, where a high accuracy can be achieved even for datasets with a large number of classes (e.g., 20 Newsgroups), polarity classification appears to be a more difficult task. One reason topic-based text classification is easier than polarity classification is that topic clusters are typically well-separated from each other, resulting from the fact that word usage differs considerably between two topically-different documents. On the other hand, many reviews are sentimentally ambiguous for a variety of reasons. For instance, an author of a movie review may have negative opinions of the actors but at the same time talk enthusiastically about how much she enjoyed the plot. Here, the review is ambiguous because she discussed both the positive and negative aspects of the movie, which is not uncommon in reviews. As another example, a large portion of a movie review may be devoted exclusively to the plot, with the author only briefly expressing her sentiment at the end of the review. In this case, the review is ambiguous because the objective material in the review, which bears no sentiment orientation, significantly outnumbers its subjective counterpart. Realizing the challenges posed by ambiguous reviews, researchers have explored a number of techniques to improve supervised polarity classifiers. For instance, Pang and Lee (2004) train an independent subjectivity classifier to identify and remove objective sentences from a review prior to polarity classification. Koppel and Schler (2006) use neutral reviews to help improve the classification of positive and negative reviews. More recently, McDonald et al. (2007) have investigated a model for jointly performing sentence- and document-level sentiment analysis, allowing the relationship between the two tasks to be captured and exploited. 
However, the increased sophistication of supervised polarity classifiers has also resulted in their increased dependence on annotated data. For instance, Koppel and Schler needed to manually identify neutral reviews to train their polarity classifier, and McDonald et al.’s joint model requires that each sentence in a review be labeled with polarity information. Given the difficulties of supervised polarity classification, it is conceivable that unsupervised polarity classification is a very challenging task. Nevertheless, a solution to unsupervised polarity classification is of practical significance. One reason is that the vast majority of supervised polarity 701 classification systems are domain-specific. Hence, when given a new domain, a large amount of annotated data from the domain typically needs to be collected in order to train a high-performance polarity classification system. As Blitzer et al. (2007) point out, this data collection process can be “prohibitively expensive, especially since product features can change over time”. Unfortunately, to our knowledge, unsupervised polarity classification is largely an under-investigated task in NLP. Turney’s (2002) work is perhaps one of the most notable examples of unsupervised polarity classification. However, while his system learns the semantic orientation of phrases in a review in an unsupervised manner, such information is used to heuristically predict the polarity of a review. At first glance, it may seem plausible to apply an unsupervised clustering algorithm such as kmeans to cluster the reviews according to their polarity. However, there is reason to believe that such a clustering approach is doomed to fail: in the absence of annotated data, an unsupervised learner is unable to identify which features are relevant for polarity classification. The situation is further complicated by the prevalence of ambiguous reviews, which may contain a large amount of irrelevant and/or contradictory information. In light of the difficulties posed by ambiguous reviews, we differentiate between ambiguous and unambiguous reviews in our classification process by addressing the task of semi-supervised polarity classification via a “mine the easy, classify the hard” approach. Specifically, we propose a novel system architecture where we first automatically identify and label the unambiguous (i.e., “easy”) reviews, then handle the ambiguous (i.e., “hard”) reviews using a discriminative learner to bootstrap from the automatically labeled unambiguous reviews and a small number of manually labeled reviews that are identified by an active learner. It is worth noting that our system differs from existing work on unsupervised/active learning in two aspects. First, while existing unsupervised approaches typically rely on clustering or learning via a generative model, our approach distinguishes between easy and hard instances and exploits the strengths of discriminative models to classify the hard instances. Second, while existing active learners typically start with manually labeled seeds, our active learner relies only on seeds that are automatically extracted from the data. Experimental results on five sentiment classification datasets demonstrate that our system can generate high-quality labeled data from unambiguous reviews, which, together with a small number of manually labeled reviews selected by the active learner, can be used to effectively classify ambiguous reviews in a discriminative fashion. The rest of the paper is organized as follows. 
Section 2 gives an overview of spectral clustering, which will facilitate the presentation of our approach to unsupervised sentiment classification in Section 3. We evaluate our approach in Section 4 and present our conclusions in Section 5. 2 Spectral Clustering In this section, we give an overview of spectral clustering, which is at the core of our algorithm for identifying ambiguous reviews. 2.1 Motivation When given a clustering task, an important question to ask is: which clustering algorithm should be used? A popular choice is k-means. Nevertheless, it is well-known that k-means has the major drawback of not being able to separate data points that are not linearly separable in the given feature space (e.g, see Dhillon et al. (2004)). Spectral clustering algorithms were developed in response to this problem with k-means clustering. The central idea behind spectral clustering is to (1) construct a low-dimensional space from the original (typically high-dimensional) space while retaining as much information about the original space as possible, and (2) cluster the data points in this lowdimensional space. 2.2 Algorithm Although there are several well-known spectral clustering algorithms in the literature (e.g., Weiss (1999), Meil˘a and Shi (2001), Kannan et al. (2004)), we adopt the one proposed by Ng et al. (2002), as it is arguably the most widely used. The algorithm takes as input a similarity matrix S created by applying a user-defined similarity function to each pair of data points. Below are the main steps of the algorithm: 1. Create the diagonal matrix G whose (i,i)th entry is the sum of the i-th row of S, and then construct the Laplacian matrix L = G−1/2SG−1/2. 2. Find the eigenvalues and eigenvectors of L. 702 3. Create a new matrix from the m eigenvectors that correspond to the m largest eigenvalues.1 4. Each data point is now rank-reduced to a point in the m-dimensional space. Normalize each point to unit length (while retaining the sign of each value). 5. Cluster the resulting data points using kmeans. In essence, each dimension in the reduced space is defined by exactly one eigenvector. The reason why eigenvectors with large eigenvalues are retained is that they capture the largest variance in the data. Therefore, each of them can be thought of as revealing an important dimension of the data. 3 Our Approach While spectral clustering addresses a major drawback of k-means clustering, it still cannot be expected to accurately partition the reviews due to the presence of ambiguous reviews. Motivated by this observation, rather than attempting to cluster all the reviews at the same time, we handle them in different stages. As mentioned in the introduction, we employ a “mine the easy, classify the hard” approach to polarity classification, where we (1) identify and classify the “easy” (i.e., unambiguous) reviews with the help of a spectral clustering algorithm; (2) manually label a small number of “hard” (i.e., ambiguous) reviews selected by an active learner; and (3) using the reviews labeled thus far, apply a transductive learner to label the remaining (ambiguous) reviews. In this section, we discuss each of these steps in detail. 3.1 Identifying Unambiguous Reviews We begin by preprocessing the reviews to be classified. Specifically, we tokenize and downcase each review and represent it as a vector of unigrams, using frequency as presence. In addition, we remove from the vector punctuation, numbers, words of length one, and words that occur in a single review only. 
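Before returning to the preprocessing details, the Ng et al.-style spectral clustering recipe of Section 2.2 can be summarized in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the small numerical guards and the use of scikit-learn's KMeans are our own choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clusters(S, m=2, n_clusters=2):
    # S: symmetric similarity matrix (the paper sets its diagonal entries to 0).
    # Step 1: diagonal degree matrix G and Laplacian L = G^{-1/2} S G^{-1/2}.
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard against isolated points
    L = (d_inv_sqrt[:, None] * S) * d_inv_sqrt[None, :]
    # Steps 2-3: eigen-decomposition; keep the m eigenvectors with largest eigenvalues.
    vals, vecs = np.linalg.eigh(L)                    # eigenvalues in ascending order
    X = vecs[:, np.argsort(vals)[::-1][:m]]
    # Step 4: normalize each row to unit length (signs are preserved).
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    # Step 5: cluster the rank-reduced points with k-means.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
```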
Finally, following the common practice in the information retrieval community, we remove words with high document frequency, many of which are stopwords or domainspecific general-purpose words (e.g., “movies” in the movie domain). A preliminary examination of our evaluation datasets reveals that these words 1For brevity, we will refer to the eigenvector with the n-th largest eigenvalue simply as the n-th eigenvector. typically comprise 1–2% of a vocabulary. The decision of exactly how many terms to remove from each dataset is subjective: a large corpus typically requires more removals than a small corpus. To be consistent, we simply sort the vocabulary by document frequency and remove the top 1.5%. Recall that in this step we use spectral clustering to identify unambiguous reviews. To make use of spectral clustering, we first create a similarity matrix, defining the similarity between two reviews as the dot product of their feature vectors, but following Ng et al. (2002), we set its diagonal entries to 0. We then perform an eigen-decomposition of this matrix, as described in Section 2.2. Finally, using the resulting eigenvectors, we partition the length-normalized reviews into two sets. As Ng et al. point out, “different authors still disagree on which eigenvectors to use, and how to derive clusters from them”. To create two clusters, the most common way is to use only the second eigenvector, as Shi and Malik (2000) proved that this eigenvector induces an intuitively ideal partition of the data — the partition induced by the minimum normalized cut of the similarity graph2, where the nodes are the data points and the edge weights are the pairwise similarity values of the points. Clustering in a one-dimensional space is trivial: since we have a linearization of the points, all we need to do is to determine a threshold for partitioning the points. A common approach is to set the threshold to zero. In other words, all points whose value in the second eigenvector is positive are classified as positive, and the remaining points are classified as negative. However, we found that the second eigenvector does not always induce a partition of the nodes that corresponds to the minimum normalized cut. One possible reason is that Shi and Malik’s proof assumes the use of a Laplacian matrix that is different from the one used by Ng et al. To address this problem, we use the first five eigenvectors: for each eigenvector, we (1) use each of its n elements as a threshold to independently generate n partitions, (2) compute the normalized cut value for each partition, and (3) find the minimum of the n cut values. We then select the eigenvector that corresponds to the smallest of the five minimum cut values. Next, we identify the ambiguous reviews from 2Using the normalized cut (as opposed to the usual cut) ensures that the size of the two clusters are relatively balanced, avoiding trivial cuts where one cluster is empty and the other is full. See Shi and Malik (2000) for details. 703 the resulting partition. To see how this is done, consider the example in Figure 1, where the goal is to produce two clusters from five data points. 1 1 1 0 0 1 1 1 0 0 0 0 1 1 0 0 0 0 1 1 0 0 0 1 1 ! −0.6983 0.7158 −0.6983 0.7158 −0.9869 −0.1616 −0.6224 −0.7827 −0.6224 −0.7827 ! Figure 1: Sample data and the top two eigenvectors of its Laplacian In the matrix on the left, each row is the feature vector generated for Di, the i-th data point. By inspection, one can identify two clusters, {D1, D2} and {D4, D5}. 
D3 is ambiguous, as it bears resemblance to the points in both clusters and therefore can be assigned to any of them. In the matrix on the right, the two columns correspond to the top two eigenvectors obtained via an eigendecomposition of the Laplacian matrix formed from the five data points. As we can see, the second eigenvector gives us a natural cluster assignment: all the points whose corresponding values in the second eigenvector are strongly positive will be in one cluster, and the strongly negative points will be in another cluster. Being ambiguous, D3 is weakly negative and will be assigned to the “negative” cluster. Before describing our algorithm for identifying ambiguous data points, we make two additional observations regarding D3. First, if we removed D3, we could easily cluster the remaining (unambiguous) points, since the similarity graph becomes more disconnected as we remove more ambiguous data points. The question then is: why is it important to produce a good clustering of the unambiguous points? Recall that the goal of this step is not only to identify the unambiguous reviews, but also to annotate them as POSITIVE or NEGATIVE, so that they can serve as seeds for semi-supervised learning in a later step. If we have a good 2-way clustering of the seeds, we can simply annotate each cluster (by sampling a handful of its reviews) rather than each seed. To reiterate, removing the ambiguous data points can help produce a good clustering of their unambiguous counterparts. Second, as an ambiguous data point, D3 can in principle be assigned to any of the two clusters. According to the second eigenvector, it should be assigned to the “negative” cluster; but if feature #4 were irrelevant, it should be assigned to the “positive” cluster. In other words, the ability to determine the relevance of each feature is crucial to the accurate clustering of the ambiguous data points. However, in the absence of labeled data, it is not easy to assess feature relevance. Even if labeled data were present, the ambiguous points might be better handled by a discriminative learning system than a clustering algorithm, as discriminative learners are more sophisticated, and can handle ambiguous feature space more effectively. Taking into account these two observations, we aim to (1) remove the ambiguous data points while clustering their unambiguous counterparts, and then (2) employ a discriminative learner to label the ambiguous points in a later step. The question is: how can we identify the ambiguous data points? To do this, we exploit an important observation regarding eigendecomposition. In the computation of eigenvalues, each data point factors out the orthogonal projections of each of the other data points with which they have an affinity. Ambiguous data points receive the orthogonal projections from both the positive and negative data points, and hence they have near-zero values in the pivot eigenvectors. Given this observation, our algorithm uses the eight steps below to remove the ambiguous points in an iterative fashion and produce a clustering of the unambiguous points. 1. Create a similarity matrix S from the data points D. 2. Form the Laplacian matrix L from S. 3. Find the top five eigenvectors of L. 4. Row-normalize the five eigenvectors. 5. Pick the eigenvector e for which we get the minimum normalized cut. 6. Sort D according to e and remove α points in the middle of D (i.e., the points indexed from |D| 2 −α 2 + 1 to |D| 2 + α 2 ). 7. If |D| = β, goto Step 8; else goto Step 1. 8. 
Run 2-means on e to cluster the points in D. This algorithm can be thought of as the opposite of self-training. In self-training, we iteratively train a classifier on the data labeled so far, use it to classify the unlabeled instances, and augment the labeled data with the most confidently labeled instances. In our algorithm, we start with an initial clustering of all of the data points, and then iteratively remove the α most ambiguous points from the dataset and cluster the remaining points. Given this analogy, it should not be difficult to see the advantage of removing the data points in an iterative fashion (as opposed to removing them in a 704 single iteration): the clusters produced in a given iteration are supposed to be better than those in the previous iterations, as subsequent clusterings are generated from less ambiguous points. In our experiments, we set α to 50 and β to 500.3 Finally, we label the two clusters. To do this, we first randomly sample 10 reviews from each cluster and manually label each of them as POSITIVE or NEGATIVE. Then, we label a cluster as POSITIVE if more than half of the 10 reviews from the cluster are POSITIVE; otherwise, it is labeled as NEGATIVE. For each of our evaluation datasets, this labeling scheme always produces one POSITIVE cluster and one NEGATIVE cluster. In the rest of the paper, we will refer to these 500 automatically labeled reviews as seeds. A natural question is: can this algorithm produce high-quality seeds? To answer this question, we show in the middle column of Table 1 the labeling accuracy of the 500 reviews produced by our iterative algorithm for our five evaluation datasets (see Section 4.1 for details on these datasets). To better understand whether it is indeed beneficial to remove the ambiguous points in an iterative fashion, we also show the results of a version of this algorithm in which we remove all but the 500 least ambiguous points in just one iteration (see the rightmost column). As we can see, for three datasets (Movie, Kitchen, and Electronics), the accuracy is above 80%. For the remaining two (Book and DVD), the accuracy is not particularly good. One plausible reason is that the ambiguous reviews in Book and DVD are relatively tougher to identify. Another reason can be attributed to the failure of the chosen eigenvector to capture the sentiment dimension. Recall that each eigenvector captures an important dimension of the data, and if the eigenvector that corresponds to the minimum normalized cut (i.e., the eigenvector that we chose) does not reveal the sentiment dimension, the resulting clustering (and hence the seed accuracy) will be poor. However, even with imperfectly labeled seeds, we will show in the next section how we exploit these seeds to learn a better classifier. 3.2 Incorporating Active Learning Spectral clustering allows us to focus on a small number of dimensions that are relevant as far as creating well-separated clusters is concerned, but 3Additional experiments indicate that the accuracy of our approach is not sensitive to small changes to these values. Dataset Iterative Single Step Movie 89.3 86.5 Kitchen 87.9 87.1 Electronics 80.4 77.6 Book 68.5 70.3 DVD 66.3 65.4 Table 1: Seed accuracies on five datasets. they are not necessarily relevant for creating polarity clusters. In fact, owing to the absence of labeled data, unsupervised clustering algorithms are unable to distinguish between useful and irrelevant features for polarity classification. 
Nevertheless, being able to distinguish between relevant and irrelevant information is important for polarity classification, as discussed before. Now that we have a small, high-quality seed set, we can potentially make better use of the available features by training a discriminative classifier on the seed set and having it identify the relevant and irrelevant features for polarity classification. Despite the high quality of the seed set, the resulting classifier may not perform well when applied to the remaining (unlabeled) points, as there is no reason to believe that a classifier trained solely on unambiguous reviews can achieve a high accuracy when classifying ambiguous reviews. We hypothesize that a high accuracy can be achieved only if the classifier is trained on both ambiguous and unambiguous reviews. As a result, we apply active learning (Cohn et al., 1994) to identify the ambiguous reviews. Specifically, we train a discriminative classifier using the support vector machine (SVM) learning algorithm (Joachims, 1999) on the set of unambiguous reviews, and then apply the resulting classifier to all the reviews in the training folds4 that are not seeds. Since this classifier is trained solely on the unambiguous reviews, it is reasonable to assume that the reviews whose labels the classifier is most uncertain about (and therefore are most informative to the classifier) are those that are ambiguous. Following previous work on active learning for SVMs (e.g., Campbell et al. (2000), Schohn and Cohn (2000), Tong and Koller (2002)), we define the uncertainty of a data point as its distance from the separating hyperplane. In other words, 4Following Dredze and Crammer (2008), we perform cross-validation experiments on the 2000 labeled reviews in each evaluation dataset, choosing the active learning points from the training folds. Note that the seeds obtained in the previous step were also acquired using the training folds only. 705 points that are closer to the hyperplane are more uncertain than those that are farther away. We perform active learning for five iterations. In each iteration, we select the 10 most uncertain points from each side of the hyperplane for human annotation, and then re-train a classifier on all of the points annotated so far. This yields a total of 100 manually labeled reviews. 3.3 Applying Transductive Learning Given that we now have a labeled set (composed of 100 manually labeled points selected by active learning and 500 unambiguous points) as well as a larger set of points that are yet to be labeled (i.e., the remaining unlabeled points in the training folds and those in the test fold), we aim to train a better classifier by using a weakly supervised learner to learn from both the labeled and unlabeled data. As our weakly supervised learner, we employ a transductive SVM. To begin with, note that the automatically acquired 500 unambiguous data points are not perfectly labeled (see Section 3.1). Since these unambiguous points significantly outnumber the manually labeled points, they could undesirably dominate the acquisition of the hyperplane and diminish the benefits that we could have obtained from the more informative and perfectly labeled active learning points otherwise. We desire a system that can use the active learning points effectively and at the same time is noise-tolerant to the imperfectly labeled unambiguous data points. 
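Before turning to the ensemble construction, here is a minimal sketch of the uncertainty-sampling loop of Section 3.2. It is illustrative only: LinearSVC stands in for the SVMlight classifier actually used, and request_label is a hypothetical stand-in for the human annotator.

```python
import numpy as np
from sklearn.svm import LinearSVC

def select_active_points(X_seed, y_seed, X_pool, request_label,
                         iterations=5, per_side=10):
    # X_seed, y_seed: the automatically labeled unambiguous reviews (the seeds).
    # X_pool: unlabeled candidate reviews from the training folds;
    # request_label(i) is assumed to return a human-provided label for pool item i.
    labeled_idx, labeled_y = [], []
    for _ in range(iterations):
        X_train = np.vstack([X_seed, X_pool[labeled_idx]]) if labeled_idx else X_seed
        y_train = np.concatenate([y_seed, labeled_y]) if labeled_idx else y_seed
        clf = LinearSVC().fit(X_train, y_train)
        margins = clf.decision_function(X_pool)
        # smaller |margin| = closer to the hyperplane = more uncertain
        candidates = [i for i in np.argsort(np.abs(margins)) if i not in labeled_idx]
        pos = [i for i in candidates if margins[i] >= 0][:per_side]
        neg = [i for i in candidates if margins[i] < 0][:per_side]
        for i in pos + neg:
            labeled_idx.append(i)
            labeled_y.append(request_label(i))
    return labeled_idx, labeled_y   # 5 iterations * 2 sides * 10 points = 100 labels
```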
Hence, instead of training just one SVM classifier, we aim to reduce classification errors by training an ensemble of five classifiers, each of which uses all 100 manually labeled reviews and a different subset of the 500 automatically labeled reviews. Specifically, we partition the 500 automatically labeled reviews into five equal-sized sets as follows. First, we sort the 500 reviews in ascending order of their corresponding values in the eigenvector selected in the last iteration of our algorithm for removing ambiguous points (see Section 3.1). We then put point i into set Li mod 5. This ensures that each set consists of not only an equal number of positive and negative points, but also a mix of very confidently labeled points and comparatively less confidently labeled points. Each classifier Ci will then be trained transductively, using the 100 manually labeled points and the points in Li as labeled data, and the remaining points (including all points in Lj, where i ̸= j) as unlabeled data. After training the ensemble, we classify each unlabeled point as follows: we sum the (signed) confidence values assigned to it by the five ensemble classifiers, labeling it as POSITIVE if the sum is greater than zero (and NEGATIVE otherwise). Since the points in the test fold are included in the unlabeled data, they are all classified in this step. 4 Evaluation 4.1 Experimental Setup For evaluation, we use five sentiment classification datasets, including the widely-used movie review dataset [MOV] (Pang et al., 2002) as well as four datasets that contain reviews of four different types of product from Amazon [books (BOO), DVDs (DVD), electronics (ELE), and kitchen appliances (KIT)] (Blitzer et al., 2007). Each dataset has 2000 labeled reviews (1000 positives and 1000 negatives). We divide the 2000 reviews into 10 equal-sized folds for cross-validation purposes, maintaining balanced class distributions in each fold. It is important to note that while the test fold is accessible to the transductive learner (Step 3), only the reviews in training folds (but not their labels) are used for the acquisition of seeds (Step 1) and the selection of active learning points (Step 2). We report averaged 10-fold cross-validation results in terms of accuracy. Following Kamvar et al. (2003), we also evaluate the clusters produced by our approach against the gold-standard clusters using Adjusted Rand Index (ARI). ARI ranges from −1 to 1; better clusterings have higher ARI values. 4.2 Baseline Systems Recall that our approach uses 100 hand-labeled reviews chosen by active learning. To ensure a fair comparison, each of our three baselines has access to 100 labeled points chosen from the training folds. Owing to the randomness involved in the choice of labeled data, all baseline results are averaged over ten independent runs for each fold. Semi-supervised spectral clustering. We implemented Kamvar et al.’s (2003) semi-supervised spectral clustering algorithm, which incorporates labeled data into the clustering framework in the form of must-link and cannot-link constraints. Instead of computing the similarity between each pair of points, the algorithm computes the similarity between a point and its k most similar points only. 
Since its performance is highly sensitive to 706 Accuracy Adjusted Rand Index System Variation MOV KIT ELE BOO DVD MOV KIT ELE BOO DVD 1 Semi-supervised spectral learning 67.3 63.7 57.7 55.8 56.2 0.12 0.08 0.01 0.02 0.02 2 Transductive SVM 68.7 65.5 62.9 58.7 57.3 0.14 0.09 0.07 0.03 0.02 3 Active learning 68.9 68.1 63.3 58.6 58.0 0.14 0.14 0.08 0.03 0.03 4 Our approach (after 1st step) 69.8 70.8 65.7 58.6 55.8 0.15 0.17 0.10 0.03 0.01 5 Our approach (after 2nd step) 73.5 73.0 69.9 60.6 59.8 0.22 0.21 0.16 0.04 0.04 6 Our approach (after 3rd step) 76.2 74.1 70.6 62.1 62.7 0.27 0.23 0.17 0.06 0.06 Table 2: Results in terms of accuracy and Adjusted Rand Index for the five datasets. k, we tested values of 10, 15, . . ., 50 for k and reported in row 1 of Table 2 the best results. As we can see, accuracy ranges from 56.2% to 67.3%, whereas ARI ranges from 0.02 to 0.12. Transductive SVM. We employ as our second baseline a transductive SVM5 trained using 100 points randomly sampled from the training folds as labeled data and the remaining 1900 points as unlabeled data. Results of this baseline are shown in row 2 of Table 3. As we can see, accuracy ranges from 57.3% to 68.7% and ARI ranges from 0.02 to 0.14, which are significantly better than those of semi-supervised spectral learning. Active learning. Our last baseline implements the active learning procedure as described in Tong and Koller (2002). Specifically, we begin by training an inductive SVM on one labeled example from each class, iteratively labeling the most uncertain unlabeled point on each side of the hyperplane and re-training the SVM until 100 points are labeled. Finally, we train a transductive SVM on the 100 labeled points and the remaining 1900 unlabeled points, obtaining the results in row 3 of Table 1. As we can see, accuracy ranges from 58% to 68.9%, whereas ARI ranges from 0.03 to 0.14. Active learning is the best of the three baselines, presumably because it has the ability to choose the labeled data more intelligently than the other two. 4.3 Our Approach Results of our approach are shown in rows 4–6 of Table 2. Specifically, rows 4 and 5 show the results of the SVM classifier when it is trained on the labeled data obtained after the first step (unsupervised extraction of unambiguous reviews) and the second step (active learning), respectively. After the first step, our approach can already achieve 5All the SVM classifiers in this paper are trained using the SVMlight package (Joachims, 1999). All SVM-related learning parameters are set to their default values, except in transductive learning, where we set p (the fraction of unlabeled examples to be classified as positive) to 0.5 so that the system does not have any bias towards any class. comparable results to the best baseline. Performance increases substantially after the second step, indicating the benefits of active learning. Row 6 shows the results of transductive learning with ensemble. Comparing rows 5 and 6, we see that performance rises by 0.7%-2.9% for all five datasets after “ensembled” transduction. This could be attributed to (1) the unlabeled data, which may have provided the transductive learner with useful information that are not accessible to the other learners, and (2) the ensemble, which is more noise-tolerant to the imperfect seeds. 4.4 Additional Experiments To gain insight into how the design decisions we made in our approach impact performance, we conducted the following additional experiments. Importance of seeds. 
Table 1 showed that for all but one dataset, the seeds obtained through multiple iterations are more accurate than those obtained in a single iteration. To envisage the importance of seeds, we conducted an experiment where we repeated our approach using the seeds learned in a single iteration. Results are shown in the first row of Table 3. In comparison to row 6 of Table 2, we can see that results are indeed better when we bootstrap from higher-quality seeds. To further understand the role of seeds, we experimented with a version of our approach that bootstraps from no seeds. Specifically, we used the 500 seeds to guide the selection of active learning points, but trained a transductive SVM using only the active learning points as labeled data (and the rest as unlabeled data). As can be seen in row 2 of Table 3, the results are poor, suggesting that our approach yields better performance than the baselines not only because of the way the active learning points were chosen, but also because of contributions from the imperfectly labeled seeds. We also experimented with training a transductive SVM using only the 100 least ambiguous seeds (i.e., the points with the largest unsigned 707 Accuracy Adjusted Rand Index System Variation MOV KIT ELE BOO DVD MOV KIT ELE BOO DVD 1 Single-step cluster purification 74.9 72.7 70.1 66.9 60.7 0.25 0.21 0.16 0.11 0.05 2 Using no seeds 58.3 55.6 59.7 54.0 56.1 0.04 0.04 0.02 0.01 0.01 3 Using the least ambiguous seeds 74.6 69.7 69.1 60.9 63.3 0.24 0.16 0.14 0.05 0.07 4 No Ensemble 74.1 72.7 68.8 61.5 59.9 0.23 0.21 0.14 0.05 0.04 5 Passive learning 74.1 72.4 68.0 63.7 58.6 0.23 0.20 0.13 0.07 0.03 6 Using 500 active learning points 82.5 78.4 77.5 73.5 73.4 0.42 0.32 0.30 0.22 0.22 7 Fully supervised results 86.1 81.7 79.3 77.6 80.6 0.53 0.41 0.34 0.30 0.38 Table 3: Additional results in terms of accuracy and Adjusted Rand Index for the five datasets. second eigenvector values) in combination with the active learning points as labeled data (and the rest as unlabeled data). Note that the accuracy of these 100 least ambiguous seeds is 4–5% higher than that of the 500 least ambiguous seeds shown in Table 1. Results are shown in row 3 of Table 3. As we can see, using only 100 seeds turns out to be less beneficial than using all of them via an ensemble. One reason is that since these 100 seeds are the most unambiguous, they may also be the least informative as far as learning is concerned. Remember that SVM uses only the support vectors to acquire the hyperplane, and since an unambiguous seed is likely to be far away from the hyperplane, it is less likely to be a support vector. Role of ensemble learning To get a better idea of the role of the ensemble in the transductive learning step, we used all 500 seeds in combination with the 100 active learning points to train a single transductive SVM. Results of this experiment (shown in row 4 of Table 3) are worse than those in row 6 of Table 2, meaning that the ensemble has contributed positively to performance. This should not be surprising: as noted before, since the seeds are not perfectly labeled, using all of them without an ensemble might overwhelm the more informative active learning points. Passive learning. To better understand the role of active learning in our approach, we replaced it with passive learning, where we randomly picked 100 data points from the training folds and used them as labeled data. Results, shown in row 5 of Table 3, are averaged over ten independent runs for each fold. 
In comparison to row 6 of Table 2, we see that employing points chosen by an active learner yields significantly better results than employing randomly chosen points, which suggests that the way the points are chosen is important. Using more active learning points. An interesting question is: how much improvement can we obtain if we employ more active learning points? In row 6 of Table 3, we show the results when the experiment in row 6 of Table 2 was repeated using 500 active learning points. Perhaps not surprisingly, the 400 additional labeled points yield a 4– 11% increase in accuracy. For further comparison, we trained a fully supervised SVM classifier using all of the training data. Results are shown in row 7 of Table 3. As we can see, employing only 500 active learning points enables us to almost reach fully-supervised performance for three datasets. 5 Conclusions We have proposed a novel semi-supervised approach to polarity classification. Our key idea is to distinguish between unambiguous, easy-tomine reviews and ambiguous, hard-to-classify reviews. Specifically, given a set of reviews, we applied (1) an unsupervised algorithm to identify and classify those that are unambiguous, (2) an active learner that is trained solely on automatically labeled unambiguous reviews to identify a small number of prototypical ambiguous reviews for manual labeling, and (3) an ensembled transductive learner to train a sophisticated classifier on the reviews labeled so far to handle the ambiguous reviews. Experimental results on five sentiment datasets demonstrate that our “mine the easy, classify the hard” approach, which only requires manual labeling of a small number of ambiguous reviews, can be employed to train a highperformance polarity classification system. We plan to extend our approach by exploring two of its appealing features. First, none of the steps in our approach is designed specifically for sentiment classification. This makes it applicable to other text classification tasks. Second, our approach is easily extensible. Since the semisupervised learner is discriminative, our approach can adopt a richer representation that makes use of more sophisticated features such as bigrams or manually labeled sentiment-oriented words. 708 Acknowledgments We thank the three anonymous reviewers for their invaluable comments on an earlier draft of the paper. This work was supported in part by NSF Grant IIS-0812261. References John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the ACL, pages 440–447. Colin Campbell, Nello Cristianini, , and Alex J. Smola. 2000. Query learning with large margin classifiers. In Proceedings of ICML, pages 111–118. David Cohn, Les Atlas, and Richard Ladner. 1994. Improving generalization with active learning. Machine Learning, 15(2):201–221. Inderjit Dhillon, Yuqiang Guan, and Brian Kulis. 2004. Kernel k-means, spectral clustering and normalized cuts. In Proceedings of KDD, pages 551–556. Mark Dredze and Koby Crammer. 2008. Active learning with confidence. In Proceedings of ACL-08:HLT Short Papers (Companion Volume), pages 233–236. Thorsten Joachims. 1999. Making large-scale SVM learning practical. In Bernhard Scholkopf and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 44–56. MIT Press. Sepandar Kamvar, Dan Klein, and Chris Manning. 2003. Spectral learning. In Proceedings of IJCAI, pages 561–566. 
Ravi Kannan, Santosh Vempala, and Adrian Vetta. 2004. On clusterings: Good, bad and spectral. Journal of the ACM, 51(3):497–515. Moshe Koppel and Jonathan Schler. 2006. The importance of neutral examples for learning sentiment. Computational Intelligence, 22(2):100–109. Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Proceedings of the ACL, pages 432–439. Marina Meil˘a and Jianbo Shi. 2001. A random walks view of spectral segmentation. In Proceedings of AISTATS. Andrew Ng, Michael Jordan, and Yair Weiss. 2002. On spectral clustering: Analysis and an algorithm. In Advances in NIPS 14. Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the ACL, pages 271–278. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86. Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proceedings of ICML, pages 839–846. Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888– 905. Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, 2:45–66. Peter Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the ACL, pages 417–424. Yair Weiss. 1999. Segmentation using eigenvectors: A unifying view. In Proceedings of ICCV, pages 975– 982. 709
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 64–72, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Topological Field Parsing of German Jackie Chi Kit Cheung Department of Computer Science University of Toronto Toronto, ON, M5S 3G4, Canada [email protected] Gerald Penn Department of Computer Science University of Toronto Toronto, ON, M5S 3G4, Canada [email protected] Abstract Freer-word-order languages such as German exhibit linguistic phenomena that present unique challenges to traditional CFG parsing. Such phenomena produce discontinuous constituents, which are not naturally modelled by projective phrase structure trees. In this paper, we examine topological field parsing, a shallow form of parsing which identifies the major sections of a sentence in relation to the clausal main verb and the subordinating heads. We report the results of topological field parsing of German using the unlexicalized, latent variable-based Berkeley parser (Petrov et al., 2006) Without any language- or model-dependent adaptation, we achieve state-of-the-art results on the T¨uBa-D/Z corpus, and a modified NEGRA corpus that has been automatically annotated with topological fields (Becker and Frank, 2002). We also perform a qualitative error analysis of the parser output, and discuss strategies to further improve the parsing results. 1 Introduction Freer-word-order languages such as German exhibit linguistic phenomena that present unique challenges to traditional CFG parsing. Topic focus ordering and word order constraints that are sensitive to phenomena other than grammatical function produce discontinuous constituents, which are not naturally modelled by projective (i.e., without crossing branches) phrase structure trees. In this paper, we examine topological field parsing, a shallow form of parsing which identifies the major sections of a sentence in relation to the clausal main verb and subordinating heads, when present. We report the results of parsing German using the unlexicalized, latent variable-based Berkeley parser (Petrov et al., 2006). Without any languageor model-dependent adaptation, we achieve stateof-the-art results on the T¨uBa-D/Z corpus (Telljohann et al., 2004), with a F1-measure of 95.15% using gold POS tags. A further reranking of the parser output based on a constraint involving paired punctuation produces a slight additional performance gain. To facilitate comparison with previous work, we also conducted experiments on a modified NEGRA corpus that has been automatically annotated with topological fields (Becker and Frank, 2002), and found that the Berkeley parser outperforms the method described in that work. Finally, we perform a qualitative error analysis of the parser output on the T¨uBa-D/Z corpus, and discuss strategies to further improve the parsing results. German syntax and parsing have been studied using a variety of grammar formalisms. Hockenmaier (2006) has translated the German TIGER corpus (Brants et al., 2002) into a CCG-based treebank to model word order variations in German. Foth et al. (2004) consider a version of dependency grammars known as weighted constraint dependency grammars for parsing German sentences. On the NEGRA corpus (Skut et al., 1998), they achieve an accuracy of 89.0% on parsing dependency edges. In Callmeier (2000), a platform for efficient HPSG parsing is developed. This parser is later extended by Frank et al. (2003) with a topological field parser for more efficient parsing of German. 
The system by Rohrer and Forst (2006) produces LFG parses using a manually designed grammar and a stochastic parse disambiguation process. They test on the TIGER corpus and achieve an F1-measure of 84.20%. In Dubey and Keller (2003), PCFG parsing of NEGRA is improved by using sister-head dependencies, which outperforms standard head lexicalization as well as an unlexicalized model. The best-performing model with gold tags achieves an F1 of 75.60%. Sister-head dependencies are useful in this case because of the flat structure of NEGRA’s trees. In contrast to the deeper approaches to parsing described above, topological field parsing identifies the major sections of a sentence in relation to the clausal main verb and subordinating heads, when present. Like other forms of shallow parsing, topological field parsing is useful as the first stage to further processing and eventual semantic analysis. As mentioned above, the output of a topological field parser is used as a guide to the search space of an HPSG parsing algorithm in Frank et al. (2003). In Neumann et al. (2000), topological field parsing is part of a divide-and-conquer strategy for shallow analysis of German text with the goal of improving an information extraction system. Existing work in identifying topological fields can be divided into chunkers, which identify the lowest-level non-recursive topological fields, and parsers, which also identify sentence and clausal structure. Veenstra et al. (2002) compare three approaches to topological field chunking based on finite state transducers, memory-based learning, and PCFGs respectively. It is found that the three techniques perform about equally well, with an F1 of 94.1% using POS tags from the TnT tagger, and 98.4% with gold tags. In Liepert (2003), a topological field chunker is implemented using a multi-class extension to the canonically two-class support vector machine (SVM) machine learning framework. Parameters to the machine learning algorithm are fine-tuned by a genetic search algorithm, with a resulting F1-measure of 92.25%. Tuning the SVM parameters does not have a large effect on performance, increasing the F1-measure on the test set by only 0.11%. The corpus-based, stochastic topological field parser of Becker and Frank (2002) is based on a standard treebank PCFG model, in which rule probabilities are estimated by frequency counts. This model includes several enhancements, which are also found in the Berkeley parser. First, they use parameterized categories, splitting nonterminals according to linguistically based intuitions, such as splitting different clause types (they do not distinguish different clause types as basic categories, unlike T¨uBa-D/Z). Second, they take into account punctuation, which may help identify clause boundaries. They also binarize the very flat topological tree structures, and prune rules that only occur once. They test their parser on a version of the NEGRA corpus, which has been annotated with topological fields using a semi-automatic method. Ule (2003) proposes a process termed Directed Treebank Refinement (DTR). The goal of DTR is to refine a corpus to improve parsing performance. DTR is comparable to the idea of latent variable grammars on which the Berkeley parser is based, in that both consider the observed treebank to be less than ideal and both attempt to refine it by splitting and merging nonterminals.
In this work, splitting and merging nonterminals are done by considering the nonterminals’ contexts (i.e., their parent nodes) and the distribution of their productions. Unlike in the Berkeley parser, splitting and merging are distinct stages, rather than parts of a single iteration. Multiple splits are found first, then multiple rounds of merging are performed. No smoothing is done. As an evaluation, DTR is applied to topological field parsing of the T¨uBa-D/Z corpus. We discuss the performance of these topological field parsers in more detail below. All of the topological parsing proposals predate the advent of the Berkeley parser. The experiments of this paper demonstrate that the Berkeley parser outperforms previous methods, many of which are specialized for the task of topological field chunking or parsing. 2 Topological Field Model of German Topological fields are high-level linear fields in an enclosing syntactic region, such as a clause (H¨ohle, 1983). These fields may have constraints on the number of words or phrases they contain, and do not necessarily form a semantically coherent constituent. Although it has been argued that a few languages have no word-order constraints whatsoever, most “free word-order” languages (even Warlpiri) have at the very least some sort of sentence- or clause-initial topic field followed by a second position that is occupied by clitics, a finite verb or certain complementizers and subordinating conjunctions. In a few Germanic languages, including German, the topology is far richer than that, serving to identify all of the components of the verbal head of a clause, except for some cases of long-distance dependen65 cies. Topological fields are useful, because while Germanic word order is relatively free with respect to grammatical functions, the order of the topological fields is strict and unvarying. Type Fields VL (KOORD) (C) (MF) VC (NF) V1 (KOORD) (LV) LK (MF) (VC) (NF) V2 (KOORD) (LV) VF LK (MF) (VC) (NF) Table 1: Topological field model of German. Simplified from T¨uBa-D/Z corpus’s annotation schema (Telljohann et al., 2006). In the German topological field model, clauses belong to one of three types: verb-last (VL), verbsecond (V2), and verb-first (V1), each with a specific sequence of topological fields (Table 1). VL clauses include finite and non-finite subordinate clauses, V2 sentences are typically declarative sentences and WH-questions in matrix clauses, and V1 sentences include yes-no questions, and certain conditional subordinate clauses. Below, we give brief descriptions of the most common topological fields. • VF (Vorfeld or ‘pre-field’) is the first constituent in sentences of the V2 type. This is often the topic of the sentence, though as an anonymous reviewer pointed out, this position does not correspond to a single function with respect to information structure. (e.g., the reviewer suggested this case, where VF contains the focus: –Wer kommt zur Party? –Peter kommt zur Party. –Who is coming to the Party? –Peter is coming to the party.) • LK (Linke Klammer or ‘left bracket’) is the position for finite verbs in V1 and V2 sentences. It is replaced by a complementizer with the field label C in VL sentences. • MF (Mittelfeld or ‘middle field’) is an optional field bounded on the left by LK and on the right by the verbal complex VC or by NF. Most verb arguments, adverbs, and prepositional phrases are found here, unless they have been fronted and put in the VF, or are prosodically heavy and postposed to the NF field. 
• VC is the verbal complex field. It includes infinite verbs, as well as finite verbs in VL sentences. • NF (Nachfeld or ‘post-field’) contains prosodically heavy elements such as postposed prepositional phrases or relative clauses. • KOORD1 (Koordinationsfeld or ‘coordination field’) is a field for clause-level conjunctions. • LV (Linksversetzung or ‘left dislocation’) is used for resumptive constructions involving left dislocation. For a detailed linguistic treatment, see (Frey, 2004). Exceptions to the topological field model as described above do exist. For instance, parenthetical constructions exist as a mostly syntactically independent clause inside another sentence. In our corpus, they are attached directly underneath a clausal node without any intervening topological field, as in the following example. In this example, the parenthetical construction is highlighted in bold print. Some clause and topological field labels under the NF field are omitted for clarity. (1) (a) (SIMPX “(VF Man) (LK muß) (VC verstehen) ” , (SIMPX sagte er), “ (NF daß diese Minderheiten seit langer Zeit massiv von den Nazis bedroht werden)). ” (b) Translation: “One must understand,” he said, “that these minorities have been massively threatened by the Nazis for a long time.” 3 A Latent Variable Parser For our experiments, we used the latent variablebased Berkeley parser (Petrov et al., 2006). Latent variable parsing assumes that an observed treebank represents a coarse approximation of an underlying, optimally refined grammar which makes more fine-grained distinctions in the syntactic categories. For example, the noun phrase category NP in a treebank could be viewed as a coarse approximation of two noun phrase categories corresponding to subjects and object, NPˆS, and NPˆVP. The Berkeley parser automates the process of finding such distinctions. It starts with a simple binarized X-bar grammar style backbone, and goes through iterations of splitting and merging nonterminals, in order to maximize the likelihood of the training set treebank. In the splitting stage, 1The T¨uBa-D/Z corpus distinguishes coordinating and non-coordinating particles, as well as clausal and field coordination. These distinctions need not concern us for this explanation. 66 Figure 1: “I could never have done that just for aesthetic reasons.” Sample T¨uBa-D/Z tree, with topological field annotations and edge labels. Topological field layer in bold. an Expectation-Maximization algorithm is used to find a good split for each nonterminal. In the merging stage, categories that have been oversplit are merged together to keep the grammar size tractable and reduce sparsity. Finally, a smoothing stage occurs, where the probabilities of rules for each nonterminal are smoothed toward the probabilities of the other nonterminals split from the same syntactic category. The Berkeley parser has been applied to the T¨uBaD/Z corpus in the constituent parsing shared task of the ACL-2008 Workshop on Parsing German (Petrov and Klein, 2008), achieving an F1measure of 85.10% and 83.18% with and without gold standard POS tags respectively2. We chose the Berkeley parser for topological field parsing because it is known to be robust across languages, and because it is an unlexicalized parser. Lexicalization has been shown to be useful in more general parsing applications due to lexical dependencies in constituent parsing (e.g. (K¨ubler et al., 2006; Dubey and Keller, 2003) in the case of German). 
However, topological fields explain a higher level of structure pertaining to clause-level word order, and we hypothesize that lexicalization is unlikely to be helpful. 4 Experiments 4.1 Data For our experiments, we primarily used the T¨uBaD/Z (T¨ubinger Baumbank des Deutschen / Schriftsprache) corpus, consisting of 26116 sentences (20894 training, 2611 development, 2089 test, with a further 522 sentences held out for future ex2This evaluation considered grammatical functions as well as the syntactic category. periments)3 taken from the German newspaper die tageszeitung. The corpus consists of four levels of annotation: clausal, topological, phrasal (other than clausal), and lexical. We define the task of topological field parsing to be recovering the first two levels of annotation, following Ule (2003). We also tested the parser on a version of the NEGRA corpus derived by Becker and Frank (2002), in which syntax trees have been made projective and topological fields have been automatically added through a series of linguistically informed tree modifications. All internal phrasal structure nodes have also been removed. The corpus consists of 20596 sentences, which we split into subsets of the same size as described by Becker and Frank (2002)4. The set of topological fields in this corpus differs slightly from the one used in T¨uBa-D/Z, making no distinction between clause types, nor consistently marking field or clause conjunctions. Because of the automatic annotation of topological fields, this corpus contains numerous annotation errors. Becker and Frank (2002) manually corrected their test set and evaluated the automatic annotation process, reporting labelled precision and recall of 93.0% and 93.6% compared to their manual annotations. There are also punctuation-related errors, including missing punctuation, sentences ending in commas, and sentences composed of single punctuation marks. We test on this data in order to provide a better comparison with previous work. Although we could have trained the model in Becker and Frank (2002) on the T¨uBa-D/Z corpus, it would not have 3These are the same splits into training, development, and test sets as in the ACL-08 Parsing German workshop. This corpus does not include sentences of length greater than 40. 416476 training sentences, 1000 development, 1058 testing, and 2062 as held-out data. We were unable to obtain the exact subsets used by Becker and Frank (2002). We will discuss the ramifications of this on our evaluation procedure. 67 Gold tags Edge labels LP% LR% F1% CB CB0% CB ≤2% EXACT% 93.53 93.17 93.35 0.08 94.59 99.43 79.50 + 95.26 95.04 95.15 0.07 95.35 99.52 83.86 + 92.38 92.67 92.52 0.11 92.82 99.19 77.79 + + 92.36 92.60 92.48 0.11 92.82 99.19 77.64 Table 2: Parsing results for topological fields and clausal constituents on the T¨uBa-D/Z corpus. been a fair comparison, as the parser depends quite heavily on NEGRA’s annotation scheme. For example, T¨uBa-D/Z does not contain an equivalent of the modified NEGRA’s parameterized categories; there exist edge labels in T¨uBaD/Z, but they are used to mark head-dependency relationships, not subtypes of syntactic categories. 4.2 Results We first report the results of our experiments on the T¨uBa-D/Z corpus. For the T¨uBa-D/Z corpus, we trained the Berkeley parser using the default parameter settings. The grammar trainer attempts six iterations of splitting, merging, and smoothing before returning the final grammar. Intermediate grammars after each step are also saved. 
There were training and test sentences without clausal constituents or topological fields, which were ignored by the parser and by the evaluation. As part of our experiment design, we investigated the effect of providing gold POS tags to the parser, and the effect of incorporating edge labels into the nonterminal labels for training and parsing. In all cases, gold annotations which include gold POS tags were used when training the parser. We report the standard PARSEVAL measures of parser performance in Table 2, obtained by the evalb program by Satoshi Sekine and Michael Collins. This table shows the results after five iterations of grammar modification, parameterized over whether we provide gold POS tags for parsing, and edge labels for training and parsing. The number of iterations was determined by experiments on the development set. In the evaluation, we do not consider edge labels in determining correctness, but do consider punctuation, as Ule (2003) did. If we ignore punctuation in our evaluation, we obtain an F1-measure of 95.42% on the best model (+ Gold tags, - Edge labels). Whether supplying gold POS tags improves performance depends on whether edge labels are considered in the grammar. Without edge labels, gold POS tags improve performance by almost two points, corresponding to a relative error reduction of 33%. In contrast, performance is negatively affected when edge labels are used and gold POS tags are supplied (i.e., + Gold tags, + Edge labels), making the performance worse than not supplying gold tags. Incorporating edge label information does not appear to improve performance, possibly because it oversplits the initial treebank and interferes with the parser’s ability to determine optimal splits for refining the grammar. Parser LP% LR% F1% T¨uBa-D/Z This work 95.26 95.04 95.15 Ule unknown unknown 91.98 NEGRA - from Becker and Frank (2002) BF02 (len. ≤40) 92.1 91.6 91.8 NEGRA - our experiments This work (len. ≤40) 90.74 90.87 90.81 BF02 (len. ≤40) 89.54 88.14 88.83 This work (all) 90.29 90.51 90.40 BF02 (all) 89.07 87.80 88.43 Table 3: BF02 = (Becker and Frank, 2002). Parsing results for topological fields and clausal constituents. Results from Ule (2003) and our results were obtained using different training and test sets. The first row of results of Becker and Frank (2002) are from that paper; the rest were obtained by our own experiments using that parser. All results consider punctuation in evaluation. To facilitate a more direct comparison with previous work, we also performed experiments on the modified NEGRA corpus. In this corpus, topological fields are parameterized, meaning that they are labelled with further syntactic and semantic information. For example, VF is split into VF-REL for relative clauses, and VF-TOPIC for those containing topics in a verb-second sentence, among others. All productions in the corpus have also been binarized. Tuning the parameter settings on the development set, we found that parameterized categories, binarization, and including punctuation gave the best F1 performance. First-order horizontal and zeroth order vertical markoviza68 tion after six iterations of splitting, merging, and smoothing gave the best F1 result of 91.78%. We parsed the corpus with both the Berkeley parser and the best performing model of Becker and Frank (2002). The results of these experiments on the test set for sentences of length 40 or less and for all sentences are shown in Table 3. We also show other results from previous work for reference. 
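To make the evaluation measures concrete, the labelled precision, recall and F1 scores reported by evalb can be illustrated with the minimal sketch below, which assumes trees are represented as lists of (label, start, end) constituent spans. This is an illustration only, not the evalb implementation, which additionally handles length cutoffs, punctuation options and further statistics.

```python
# Minimal sketch of PARSEVAL-style labelled bracketing scores over trees
# represented as lists of (label, start, end) spans; this representation is
# an assumption for illustration. evalb itself also reports crossing-bracket
# and exact-match statistics and supports length/punctuation options.
from collections import Counter

def labelled_prf(gold_trees, pred_trees):
    matched = gold_total = pred_total = 0
    for gold, pred in zip(gold_trees, pred_trees):
        g, p = Counter(gold), Counter(pred)   # multisets of labelled spans
        matched += sum((g & p).values())      # spans present in both trees
        gold_total += sum(g.values())
        pred_total += sum(p.values())
    lp = matched / pred_total if pred_total else 0.0
    lr = matched / gold_total if gold_total else 0.0
    f1 = 2 * lp * lr / (lp + lr) if (lp + lr) else 0.0
    return lp, lr, f1

# Example with hypothetical topological-field spans:
# gold = [[("SIMPX", 0, 8), ("VF", 0, 1), ("LK", 1, 2), ("MF", 2, 8)]]
# pred = [[("SIMPX", 0, 8), ("VF", 0, 1), ("LK", 1, 2), ("MF", 2, 7)]]
# print(labelled_prf(gold, pred))   # -> (0.75, 0.75, 0.75)
```

The crossing-bracket (CB, CB0) and exact-match columns in Table 2 are derived from the same span comparison, with additional bookkeeping not shown here.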
We find that we achieve results that are better than the model in Becker and Frank (2002) on the test set. The difference is statistically significant (p = 0.0029, Wilcoxon signed-rank). The results we obtain using the parser of Becker and Frank (2002) are worse than the results described in that paper. We suggest the following reasons for this discrepancy. While the test set used in the paper was manually corrected for evaluation, we did not correct our test set, because it would be difficult to ensure that we adhered to the same correction guidelines. No details of the correction process were provided in the paper, and descriptive grammars of German provide insufficient guidance on many of the examples in NEGRA on issues such as ellipses, short infinitival clauses, and expanded participial constructions modifying nouns. Also, because we could not obtain the exact sets used for training, development, and testing, we had to recreate the sets by randomly splitting the corpus. 4.3 Category Specific Results We now return to the T¨uBa-D/Z corpus for a more detailed analysis, and examine the categoryspecific results for our best performing model (+ Gold tags, - Edge labels). Overall, Table 4 shows that the best performing topological field categories are those that have constraints on the type of word that is allowed to fill it (finite verbs in LK, verbs in VC, complementizers and subordinating conjunctions in C). VF, in which only one constituent may appear, also performs relatively well. Topological fields that can contain a variable number of heterogeneous constituents, on the other hand, have poorer F1-measure results. MF, which is basically defined relative to the positions of fields on either side of it, is parsed several points below LK, C, and VC in accuracy. NF, which contains different kinds of extraposed elements, is parsed at a substantially worse level. Poorly parsed categories tend to occur infrequently, including LV, which marks a rare resumptive construction; FKOORD, which marks topological field coordination; and the discourse marker DM. The other clause-level constituents (PSIMPX for clauses in paratactic constructions, RSIMPX for relative clauses, and SIMPX for other clauses) also perform below average. Topological Fields Category # LP% LR% F1% PARORD 20 100.00 100.00 100.00 VCE 3 100.00 100.00 100.00 LK 2186 99.68 99.82 99.75 C 642 99.53 98.44 98.98 VC 1777 98.98 98.14 98.56 VF 2044 96.84 97.55 97.20 KOORD 99 96.91 94.95 95.92 MF 2931 94.80 95.19 94.99 NF 643 83.52 81.96 82.73 FKOORD 156 75.16 73.72 74.43 LV 17 10.00 5.88 7.41 Clausal Constituents Category # LP% LR% F1% SIMPX 2839 92.46 91.97 92.21 RSIMPX 225 91.23 92.44 91.83 PSIMPX 6 100.00 66.67 80.00 DM 28 59.26 57.14 58.18 Table 4: Category-specific results using grammar with no edge labels and passing in gold POS tags. 4.4 Reranking for Paired Punctuation While experimenting with the development set of T¨uBa-D/Z, we noticed that the parser sometimes returns parses, in which paired punctuation (e.g. quotation marks, parentheses, brackets) is not placed in the same clause–a linguistically implausible situation. In these cases, the high-level information provided by the paired punctuation is overridden by the overall likelihood of the parse tree. 
To rectify this problem, we performed a simple post-hoc reranking of the 50-best parses produced by the best parameter settings (+ Gold tags, - Edge labels), selecting the first parse that places paired punctuation in the same clause, or returning the best parse if none of the 50 parses satisfy the constraint. This procedure improved the F1measure to 95.24% (LP = 95.39%, LR = 95.09%). Overall, 38 sentences were parsed with paired punctuation in different clauses, of which 16 were reranked. Of the 38 sentences, reranking improved performance in 12 sentences, did not affect performance in 23 sentences (of which 10 already had a perfect parse), and hurt performance in three sentences. A two-tailed sign test suggests that rerank69 ing improves performance (p = 0.0352). We discuss below why sentences with paired punctuation in different clauses can have perfect parse results. To investigate the upper-bound in performance that this form of reranking is able to achieve, we calculated some statistics on our (+ Gold tags, Edge labels) 50-best list. We found that the average rank of the best scoring parse by F1-measure is 2.61, and the perfect parse is present for 1649 of the 2088 sentences at an average rank of 1.90. The oracle F1-measure is 98.12%, indicating that a more comprehensive reranking procedure might allow further performance gains. 4.5 Qualitative Error Analysis As a further analysis, we extracted the worst scoring fifty sentences by F1-measure from the parsed test set (+ Gold tags, - Edge labels), and compared them against the gold standard trees, noting the cause of the error. We analyze the parses before reranking, to see how frequently the paired punctuation problem described above severely affects a parse. The major mistakes made by the parser are summarized in Table 5. Problem Freq. Misidentification of Parentheticals 19 Coordination problems 13 Too few SIMPX 10 Paired punctuation problem 9 Other clause boundary errors 7 Other 6 Too many SIMPX 3 Clause type misidentification 2 MF/NF boundary 2 LV 2 VF/MF boundary 2 Table 5: Types and frequency of parser errors in the fifty worst scoring parses by F1-measure, using parameters (+ Gold tags, - Edge labels). Misidentification of Parentheticals Parenthetical constructions do not have any dependencies on the rest of the sentence, and exist as a mostly syntactically independent clause inside another sentence. They can occur at the beginning, end, or in the middle of sentences, and are often set off orthographically by punctuation. The parser has problems identifying parenthetical constructions, often positing a parenthetical construction when that constituent is actually attached to a topological field in a neighbouring clause. The following example shows one such misidentification in bracket notation. Clause internal topological fields are omitted for clarity. (2) (a) T¨uBa-D/Z: (SIMPX Weder das Ausmaß der Sch¨onheit noch der fr¨uhere oder sp¨atere Zeitpunkt der Geburt macht einen der Zwillinge f¨ur eine Mutter mehr oder weniger echt / authentisch / ¨uberlegen). (b) Parser: (SIMPX Weder das Ausmaß der Sch¨onheit noch der fr¨uhere oder sp¨atere Zeitpunkt der Geburt macht einen der Zwillinge f¨ur eine Mutter mehr oder weniger echt) (PARENTHETICAL / authentisch / ¨uberlegen.) 
(c) Translation: “Neither the degree of beauty nor the earlier or later time of birth makes one of the twins any more or less real/authentic/superior to a mother.” We hypothesized earlier that lexicalization is unlikely to give us much improvement in performance, because topological fields work on a domain that is higher than that of lexical dependencies such as subcategorization frames. However, given the locally independent nature of legitimate parentheticals, a limited form of lexicalization or some other form of stronger contextual information might be needed to improve identification performance. Coordination Problems The second most common type of error involves field and clause coordinations. This category includes missing or incorrect FKOORD fields, and conjunctions of clauses that are misidentified. In the following example, the conjoined MFs and following NF in the correct parse tree are identified as a single long MF. (3) (a) T¨uBa-D/Z: Auf dem europ¨aischen Kontinent aber hat (FKOORD (MF kein Land und keine Macht ein derartiges Interesse an guten Beziehungen zu Rußland) und (MF auch kein Land solche Erfahrungen im Umgang mit Rußland)) (NF wie Deutschland). (b) Parser: Auf dem europ¨aischen Kontinent aber hat (MF kein Land und keine Macht ein derartiges Interesse an guten Beziehungen zu Rußland und auch kein Land solche Erfahrungen im Umgang mit Rußland wie Deutschland). (c) Translation: “On the European continent, however, no land and no power has such an interest in good relations with Russia (as Germany), and also no land (has) such experience in dealing with Russia as Germany.” Other Clause Errors Other clause-level errors include the parser predicting too few or too many clauses, or misidentifying the clause type. Clauses are sometimes confused with NFs, and there is one case of a relative clause being misidentified as a 70 main clause with an intransitive verb, as the finite verb appears at the end of the clause in both cases. Some clause errors are tied to incorrect treatment of elliptical constructions, in which an element that is inferable from context is missing. Paired Punctuation Problems with paired punctuation are the fourth most common type of error. Punctuation is often a marker of clause or phrase boundaries. Thus, predicting paired punctuation incorrectly can lead to incorrect parses, as in the following example. (4) (a) “ Auch (SIMPX wenn der Krieg heute ein Mobilisierungsfaktor ist) ” , so Pau , “ (SIMPX die Leute sehen , daß man f¨ur die Arbeit wieder auf die Straße gehen muß) . ” (b) Parser: (SIMPX “ (LV Auch (SIMPX wenn der Krieg heute ein Mobilisierungsfaktor ist)) ” , so Pau , “ (SIMPX die Leute sehen , daß man f¨ur die Arbeit wieder auf die Straße gehen muß)) . ” (c) Translation: “Even if the war is a factor for mobilization,” said Pau, “the people see, that one must go to the street for employment again.” Here, the parser predicts a spurious SIMPX clause spanning the text of the entire sentence, but this causes the second pair of quotation marks to be parsed as belonging to two different clauses. The parser also predicts an incorrect LV field. Using the paired punctuation constraint, our reranking procedure was able to correct these errors. Surprisingly, there are cases in which paired punctuation does not belong inside the same clause in the gold parses. 
These cases are either extended quotations, in which each of the quotation mark pair occurs in a different sentence altogether, or cases where the second of the quotation mark pair must be positioned outside of other sentence-final punctuation due to orthographic conventions. Sentence-final punctuation is typically placed outside a clause in this version of T¨uBa-D/Z. Other Issues Other incorrect parses generated by the parser include problems with the infrequently occurring topological fields like LV and DM, inability to determine the boundary between MF and NF in clauses without a VC field separating the two, and misidentifying appositive constructions. Another issue is that although the parser output may disagree with the gold standard tree in T¨uBa-D/Z, the parser output may be a well-formed topological field parse for the same sentence with a different interpretation, for example because of attachment ambiguity. Each of the authors independently checked the fifty worstscoring parses, and determined whether each parse produced by the Berkeley parser could be a wellformed topological parse. Where there was disagreement, we discussed our judgments until we came to a consensus. Of the fifty parses, we determined that nine, or 18%, could be legitimate parses. Another five, or 10%, differ from the gold standard parse only in the placement of punctuation. Thus, the F1-measures we presented above may be underestimating the parser’s performance. 5 Conclusion and Future Work In this paper, we examined applying the latentvariable Berkeley parser to the task of topological field parsing of German, which aims to identify the high-level surface structure of sentences. Without any language or model-dependent adaptation, we obtained results which compare favourably to previous work in topological field parsing. We further examined the results of doing a simple reranking process, constraining the output parse to put paired punctuation in the same clause. This reranking was found to result in a minor performance gain. Overall, the parser performs extremely well in identifying the traditional left and right brackets of the topological field model; that is, the fields C, LK, and VC. The parser achieves basically perfect results on these fields in the T¨uBa-D/Z corpus, with F1-measure scores for each at over 98.5%. These scores are higher than previous work in the simpler task of topological field chunking. The focus of future research should thus be on correctly identifying the infrequently occuring fields and constructions, with parenthetical constructions being a particular concern. Possible avenues of future research include doing a more comprehensive discriminative reranking of the parser output. Incorporating more contextual information might be helpful to identify discourse-related constructions such as parentheses, and the DM and LV topological fields. Acknowledgements We are grateful to Markus Becker, Anette Frank, Sandra Kuebler, and Slav Petrov for their invaluable help in gathering the resources necessary for our experiments. This work is supported in part by the Natural Sciences and Engineering Research Council of Canada. 71 References M. Becker and A. Frank. 2002. A stochastic topological parser for German. In Proceedings of the 19th International Conference on Computational Linguistics, pages 71–77. S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER Treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, pages 24–41. U. Callmeier. 2000. 
PET–a platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering, 6(01):99–107. A. Dubey and F. Keller. 2003. Probabilistic parsing for German using sister-head dependencies. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 96–103. K.A. Foth, M. Daum, and W. Menzel. 2004. A broad-coverage parser for German based on defeasible constraints. Constraint Solving and Language Processing. A. Frank, M. Becker, B. Crysmann, B. Kiefer, and U. Schaefer. 2003. Integrated shallow and deep parsing: TopP meets HPSG. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 104–111. W. Frey. 2004. Notes on the syntax and the pragmatics of German Left Dislocation. In H. Lohnstein and S. Trissler, editors, The Syntax and Semantics of the Left Periphery, pages 203–233. Mouton de Gruyter, Berlin. J. Hockenmaier. 2006. Creating a CCGbank and a Wide-Coverage CCG Lexicon for German. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 505–512. T.N. H¨ohle. 1983. Topologische Felder. Ph.D. thesis, K¨oln. S. K¨ubler, E.W. Hinrichs, and W. Maier. 2006. Is it really that difficult to parse German? In Proceedings of EMNLP. M. Liepert. 2003. Topological Fields Chunking for German with SVM’s: Optimizing SVM-parameters with GA’s. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP), Bulgaria. G. Neumann, C. Braun, and J. Piskorski. 2000. A Divide-and-Conquer Strategy for Shallow Parsing of German Free Texts. In Proceedings of the sixth conference on Applied natural language processing, pages 239–246. Morgan Kaufmann Publishers Inc. San Francisco, CA, USA. S. Petrov and D. Klein. 2008. Parsing German with Latent Variable Grammars. In Proceedings of the ACL-08: HLT Workshop on Parsing German (PaGe08), pages 33–39. S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 433–440, Sydney, Australia, July. Association for Computational Linguistics. C. Rohrer and M. Forst. 2006. Improving coverage and parsing quality of a large-scale LFG for German. In Proceedings of the Language Resources and Evaluation Conference (LREC-2006), Genoa, Italy. W. Skut, T. Brants, B. Krenn, and H. Uszkoreit. 1998. A Linguistically Interpreted Corpus of German Newspaper Text. Proceedings of the ESSLLI Workshop on Recent Advances in Corpus Annotation. H. Telljohann, E. Hinrichs, and S. Kubler. 2004. The T¨uBa-D/Z treebank: Annotating German with a context-free backbone. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), pages 2229–2235. H. Telljohann, E.W. Hinrichs, S. Kubler, and H. Zinsmeister. 2006. Stylebook for the Tubingen Treebank of Written German (T¨uBa-D/Z). Seminar fur Sprachwissenschaft, Universitat Tubingen, Tubingen, Germany. T. Ule. 2003. Directed Treebank Refinement for PCFG Parsing. In Proceedings of Workshop on Treebanks and Linguistic Theories (TLT) 2003, pages 177–188. J. Veenstra, F.H. M¨uller, and T. Ule. 2002. Topological field chunking for German. In Proceedings of the Sixth Conference on Natural Language Learning, pages 56–62. 72
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 710–718, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Modeling Latent Biographic Attributes in Conversational Genres Nikesh Garera and David Yarowsky Department of Computer Science, Johns Hopkins University Human Language Technology Center of Excellence Baltimore MD, USA {ngarera,yarowsky}@cs.jhu.edu Abstract This paper presents and evaluates several original techniques for the latent classification of biographic attributes such as gender, age and native language, in diverse genres (conversation transcripts, email) and languages (Arabic, English). First, we present a novel partner-sensitive model for extracting biographic attributes in conversations, given the differences in lexical usage and discourse style such as observed between same-gender and mixedgender conversations. Then, we explore a rich variety of novel sociolinguistic and discourse-based features, including mean utterance length, passive/active usage, percentage domination of the conversation, speaking rate and filler word usage. Cumulatively up to 20% error reduction is achieved relative to the standard Boulis and Ostendorf (2005) algorithm for classifying individual conversations on Switchboard, and accuracy for gender detection on the Switchboard corpus (aggregate) and Gulf Arabic corpus exceeds 95%. 1 Introduction Speaker attributes such as gender, age, dialect, native language and educational level may be (a) stated overtly in metadata, (b) derivable indirectly from metadata such as a speaker’s phone number or userid, or (c) derivable from acoustic properties of the speaker, including pitch and f0 contours (Bocklet et al., 2008). In contrast, the goal of this paper is to model and classify such speaker attributes from only the latent information found in textual transcripts. In particular, we are interested in modeling and classifying biographic attributes such as gender and age based on lexical and discourse factors including lexical choice, mean utterance length, patterns of participation in the conversation and filler word usage. Furthermore, a speaker’s lexical choice and discourse style may differ substantially depending on the gender/age/etc. of the speaker’s interlocutor, and hence improvements may be achived via dyadic modeling or stacked classifiers. There has been substantial work in the sociolinguistics literature investigating discourse style differences due to speaker properties such as gender (Coates, 1997; Eckert, McConnell-Ginet, 2003). Analyzing such differences is not only interesting from the sociolinguistic and psycholinguistic point of view of language understanding, but also from an engineering perspective, given the goal of predicting latent author/speaker attributes in various practical applications such as user authenticaion, call routing, user and population profiling on social networking websites such as facebook, and gender/age conditioned language models for machine translation and speech recogntition. While most of the prior work in sociolinguistics has been approached from a non-computational perspective, Koppel et al. (2002) employed the use of a linear model for gender classification with manually assigned weights for a set of linguistically interesting words as features, focusing on a small development corpus. 
Another computational study for gender classification using approximately 30 weblog entries was done by Herring and Paolillo (2006), making use of a logistic regression model to study the effect of different features. While small-scale sociolinguistic studies on monologues have shed some light on important features, we focus on modeling attributes from spoken conversations, building upon the work of 710 Boulis and Ostendorf (2005) and show how gender and other attributes can be accurately predicted based on the following original contributions: 1. Modeling Partner Effect: A speaker may adapt his or her conversation style depending on the partner and we show how conditioning on the predicted partner class using a stacked model can provide further performance gains in gender classification. 2. Sociolinguistic features: The paper explores a rich set of lexical and non-lexical features motivated by the sociolinguistic literature for gender classification, and show how they can effectively augment the standard ngrambased model of Boulis and Ostendorf (2005). 3. Application to Arabic Language: We also report results for Arabic language and show that the ngram model gives reasonably high accuracy for Arabic as well. Furthmore, we also get consistent performance gains due to partner effect and sociolingusic features, as observed in English. 4. Application to Email Genre: We show how the models explored in this paper extend to email genre, showing the wide applicability of general text-based features. 5. Application to new attributes: We show how the lexical model of Boulis and Ostendorf (2005) can be extended to Age and Native vs. Non-native prediction, with further improvements gained from our partner-sensitive models and novel sociolinguistic features. 2 Related Work Much attention has been devoted in the sociolinguistics literature to detection of age, gender, social class, religion, education, etc. from conversational discourse and monologues starting as early as the 1950s, making use of morphological features such as the choice between the -ing and the -in variants of the present participle ending of the verb (Fisher, 1958), and phonological features such as the pronounciation of the “r” sound in words such as far, four, cards, etc. (Labov, 1966). Gender differences has been one of the primary areas of sociolinguistic research, including work such as Coates (1998) and Eckert and McConnell-Ginet (2003). There has also been some work in developing computational models based on linguistically interesting clues suggested by the sociolinguistic literature for detecting gender on formal written texts (Singh, 2001; Koppel et al., 2002; Herring and Paolillo, 2006) but it has been primarily focused on using a small number of manually selected features, and on a small number of formal written texts. Another relevant line of work has been on the blog domain, using a bag of words feature set to discriminate age and gender (Schler et al., 2006; Burger and Henderson, 2006; Nowson and Oberlander, 2006). Conversational speech presents a challenging domain due to the interaction of genders, recognition errors and sudden topic shifts. While prosodic features have been shown to be useful in gender/age classification (e.g. Shafran et al., 2003), their work makes use of speech transcripts along the lines of Boulis and Ostendorf (2005) in order to build a general model that can be applied to electronic conversations as well. 
While Boulis and Ostendorf (2005) observe that the gender of the partner can have a substantial effect on their classifier accuracy, given that same-gender conversations are easier to classify than mixed-gender classifications, they don’t utilize this observation in their work. In Section 5.3, we show how the predicted gender/age etc. of the partner/interlocutor can be used to improve overall performance via both dyadic modeling and classifier stacking. Boulis and Ostendorf (2005) have also constrained themselves to lexical n-gram features, while we show improvements via the incorporation of non-lexical features such as the percentage domination of the conversation, degree of passive usage, usage of subordinate clauses, speaker rate, usage profiles for filler words (e.g. ”umm”), mean-utterance length, and other such properties. We also report performance gains of our models for a new genre (email) and a new language (Arabic), indicating the robustness of the models explored in this paper. Finally, we also explore and evaluate original model performance on additional latent speaker attributes including age and native vs. non-native English speaking status. 3 Corpus Details Consistent with Boulis and Ostendorf (2005), we utilized the Fisher telephone conversation corpus (Cieri et al., 2004) and we also evaluated performance on the standard Switchboard conversational corpus (Godfrey et al., 1992), both collected and annotated by the Linguistic Data Consortium. In both cases, we utilized the provided metadata 711 (including true speaker gender, age, native language, etc.) as only class labels for both training and evaluation, but never as features in the classification. The primary task we employed was identical to Boulis and Ostendorf (2005), namely the classification of gender, etc. of each speaker in an isolated conversation, but we also evaluate performance when classifying speaker attributes given the combination of multiple conversations in which the speaker has participated. The Fisher corpus contains a total of 11971 speakers and each speaker participated in 1-3 conversations, resulting in a total of 23398 conversation sides (i.e. the transcript of a single speaker in a single conversation). We followed the preprocessing steps and experimental setup of Boulis and Ostendorf (2005) as closely as possible given the details presented in their paper, although some details such as the exact training/test partition were not currently obtainable from either the paper or personal communication. This resulted in a training set of 9000 speakers with 17587 conversation sides and a test set of 1000 speakers with 2008 conversation sides. The Switchboard corpus was much smaller and consisted of 543 speakers, with 443 speakers used for training and 100 speakers used for testing, resulting in a total of 4062 conversation sides for training and 808 conversation sides for testing. 4 Modeling Gender via Ngram features (Boulis and Ostendorf, 2005) As our reference algorithm, we used the current state-of-the-art system developed by Boulis and Ostendorf (2005) using unigram and bigram features in a SVM framework. We reimplemented this model as our reference for gender classification, further details of which are given below: 4.1 Training Vectors For each conversation side, a training example was created using unigram and bigram features with tf-idf weighting, as done in standard text classification approaches. 
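As a rough illustration of this feature extraction and the SVM model of Section 4.2 below, the following sketch uses scikit-learn as a stand-in for the SVMlight toolkit actually used in the paper; the tokenization, tf-idf variant and frequency cutoff shown are assumptions rather than the authors' exact setup (the stopword and frequency choices discussed next are reflected in the comments).

```python
# Sketch only: per-conversation-side gender classification from unigram and
# bigram tf-idf features with a linear SVM. The paper trains with SVMlight;
# scikit-learn's LinearSVC is used here as a stand-in, and the tokenization
# and weighting details are assumptions rather than the authors' exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def build_gender_classifier():
    return Pipeline([
        ("tfidf", TfidfVectorizer(
            ngram_range=(1, 2),    # unigrams and bigrams
            min_df=6,              # approximates the frequency > 5 cutoff
                                   # of Section 4.1 (as document frequency)
            stop_words=None,       # stopwords are deliberately retained
            lowercase=True)),
        ("svm", LinearSVC()),      # linear kernel, default settings (Sec. 4.2)
    ])

# sides  : list of transcript strings, one per conversation side
# labels : "male"/"female" from corpus metadata (used only for training)
# clf = build_gender_classifier().fit(train_sides, train_labels)
# predictions = clf.predict(test_sides)
```

The same kind of pipeline, with different training labels, is what Sections 8-10 reuse for Arabic, email and the age and native-language attributes.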
However, stopwords were retained in the feature set as various sociolinguistic studies have shown that use of some of the stopwords, for instance, pronouns and determiners, are correlated with age and gender. Also, only the ngrams with frequency greater than 5 were retained in the feature set following Boulis and Ostendorf (2005). This resulted in a total of 227,450 features for the Fisher corpus and 57,914 features for the Switchboard corpus. Female Male Fisher Corpus husband -0.0291 my wife 0.0366 my husband -0.0281 wife 0.0328 oh -0.0210 uh 0.0284 laughter -0.0186 ah 0.0248 have -0.0169 er 0.0222 mhm -0.0169 i i 0.0201 so -0.0163 hey 0.0199 because -0.0160 you doing 0.0169 and -0.0155 all right 0.0169 i know -0.0152 man 0.0160 hi -0.0147 pretty 0.0156 um -0.0141 i see 0.0141 boyfriend -0.0134 yeah i 0.0125 oh my -0.0124 my girlfriend 0.0114 i have -0.0119 thats thats 0.0109 but -0.0118 mike 0.0109 children -0.0115 guy 0.0109 goodness -0.0114 is that 0.0108 yes -0.0106 basically 0.0106 uh huh -0.0105 shit 0.0102 Switchboard Corpus oh -0.0122 wife 0.0078 laughter -0.0088 my wife 0.0077 my husband -0.0077 uh 0.0072 husband -0.0072 i i 0.0053 have -0.0069 actually 0.0051 uhhuh -0.0068 sort of 0.0041 and i -0.0050 yeah i 0.0041 feel -0.0048 got 0.0039 umhum -0.0048 a 0.0038 i know -0.0047 sort 0.0037 really -0.0046 yep 0.0036 women -0.0043 the 0.0036 um -0.0042 stuff 0.0035 would -0.0039 yeah 0.0034 children -0.0038 pretty 0.0033 too -0.0036 that that 0.0032 but -0.0035 guess 0.0031 and -0.0034 as 0.0029 wonderful -0.0032 is 0.0028 yeah yeah -0.0031 i guess 0.0028 Table 1: Top 20 ngram features for gender, ranked by the weights assigned by the linear SVM model 4.2 Model After extracting the ngrams, a SVM model was trained via the SVMlight toolkit (Joachims, 1999) using the linear kernel with the default toolkit settings. Table 1 shows the most discriminative ngrams for gender based on the weights assigned by the linear SVM model. It is interesting that some of the gender-correlated words proposed by sociolinguistics are also found by this empirical approach, including the frequent use of “oh” by females and also obvious indicators of gender such as “my wife” or “my husband”, etc. Also, named entity “Mike” shows up as a discriminative unigram, this maybe due to the self-introduction at the beginning of the conversations and “Mike” being a common male name. For compatibility with Boulis and Ostendorf (2005), no special pre712 Figure 1: The effect of varying the amount of each conversation side utilized for training, based on the utilized % of each conversation (starting from their beginning). processing for names is performed, and they are treated as just any other unigrams or bigrams1. Furthermore, the ngram-based approach scales well with varying the amount of conversation utilized in training the model as shown in Figure 1. The “Boulis and Ostendorf, 05” rows in Table 3 show the performance of this reimplemented algorithm on both the Fisher (90.84%) and Switchboard (90.22%) corpora, under the identical training and test conditions used elsewhere in our paper for direct comparison with subsequent results2. 5 Effect of Partner’s Gender Our original contribution in this section is the successful modeling of speaker properties (e.g. gender/age) based on the prior and joint modeling of the partner speaker’s gender/age in the same discourse. 
The motivation here is that people tend to use stronger gender-specific, age-specific or dialect-specific word/phrase usage and discourse properties when speaking with someone of a similar gender/age/dialect than when speaking with someone of a different gender/age/dialect, when they may adapt a more neutral speaking style. Also, discourse properties such as relative use of the passive and percentage of the conversation dominated may vary depending on the gender or age relationship with the speaking partner. We employ several varieties of classifier stacking and joint modeling to be effectively sensitive to these differences. To illustrate the significance of 1A natural extension of this work, however, would be to do explicit extraction of self introductions and then do tablelookup-based gender classification, although we did not do so for consistency with the reference algorithm. 2The modest differences with their reported results may be due to unreported details such as the exact training/test splits or SVM parameterizations, so for the purposes of assessing the relative gain of our subsequent enhancements we base all reported experiments on the internally-consistent configurations as (re-)implemented here. Fisher Corpus Same gender conversations 94.01 Mixed gender conversations 84.06 Switchboard Corpus Same gender conversations 93.22 Mixed gender conversations 86.84 Table 2: Difference in Gender classification accuracy between mixed gender and same gender conversations using the reference algorithm Classifying speaker’s and partner’s gender simultaneously Male-Male 84.80 Female-Female 81.96 Male-Female 15.58 Female-Male 27.46 Table 3: Performance for 4-way classification of the entire conversation into (mm, ff, mf, fm) classes using the reference algorithm on Switchboard corpus. the “partner effect”, Table 2 shows the difference in the standard algorithm performance between same-gender conversations (when gender-specific style flourishes) and mixed-gender conversations (where more neutral styles are harder to classify). Table 3 shows the classwise performance of classifying the entire conversation into four possible categories. We can see that the mixed-gender cases are also significantly harder to classify on a conversation level granularity. 5.1 Oracle Experiment To assess the potential gains from full exploitation of partner-sensitive modeling, we first report the result from an oracle experiment, where we assume we know whether the conversation is homogeneous (same gender) or heterogeneous (different gender). In order to effectively utilize this information, we classify both the test conversation side and the partner side, and if the classifier is more confident about the partner side then we choose the gender of the test conversation side based on the heterogeneous/homogeneous information. The overall accuracy improves to 96.46% on the Fisher corpus using this oracle (from 90.84%), leading us to the experiment where the oracle is replaced with a non-oracle SVM model trained on a subset of training data such that all test conversation sides (of the speaker and the partner) are excluded from the training set. 5.2 Replacing Oracle by a Homogeneous vs Heterogenous Classifier Given the substantial improvement using the Oracle information, we initially trained another bi713 nary classifier for classifying the conversation as mixed or single-gender. 
It turns out that this task is much harder than the single-side gender classification, task and achieved only a low accuracy value of 68.35% on the Fisher corpus. Intuitively, the homogeneous vs. hetereogeneous partition results in a much harder classification task because the two diverse classes of male-male and femalefemale conversations are grouped into one class (“homogeneous”) resulting in linearly inseparable classes3. This subsequently lead us to create two different classifiers for conversations, namely, male-male vs rest and female-female vs rest4 used in a classifier combination framework as follows: 5.3 Modeling partner via conditional model and whole-conversation model The following classifiers were trained and each of their scores was used as a feature in a meta SVM classifier: 1. Male-Male vs Rest: Classifying the entire conversation (using test speaker and partner’s sides) as male-male or other5. 2. Female-Female vs Rest: Classifying the entire conversation (using test speaker and partner’s sides) as female-female or other. 3. Conditional model of gender given most likely partner’s gender: Two separate classifiers were trained for classifying the gender of a given conversation side, one where the partner is male and other where the partner is female. Given a test conversation side, we first choose the most likely gender of the partner’s conversation side using the ngrambased model6 and then choose the gender of the test conversation side using the appropriate conditional model. 4. Ngram model as explained in Section 4. The row labeled “+ Partner Model” in Table 4 shows the performance gain obtained via this meta-classifier incorporating conversation type and partner-conditioned models. 3Even non-linear kernels were not able to find a good classification boundary 4We also explored training a 3-way classifier, male-male, female-female, mixed and the results were similar to that of the binarized setup 5For classifying the conversations as male-male vs rest or female-female vs rest, all the conversations with either the speaker or the partner present in any of the test conversations were eliminated from the training set, thus creating a disjoint training and test conversation partitions. 6All the partner conversation sides of test speakers were removed from the training data and the ngram-based model was retrained on the remaining subset. Figure 2: Empirical differences in sociolinguistic features for Gender on the Switchboard corpus 6 Incorporating Sociolinguistic Features The sociolinguistic literature has shown gender differences for speakers due to features such as speaking rate, pronoun usage and filler word usage. While ngram features are able to reasonably predict speaker gender due to their high detail and coverage and the overall importance of lexical choice in gender differences while speaking, the sociolinguistics literature suggests that other nonlexical features can further help improve performance, and more importantly, advance our understanding of gender differences in discourse. Thus, on top of the standard Boulis and Ostendorf (2005) model, we also investigated the following features motivated by the sociolinguistic literature on gender differences in discourse (Macaulay, 2005): 1. % of conversation spoken: We measured the speaker’s fraction of conversation spoken via three features extracted from the transcripts: % of words, utterances and time. 2. 
Speaker rate: Some studies have shown that males speak faster than females (Yuan et al., 2006) as can also be observed in Figure 2 showing empirical data obtained from Switchboard corpus. The speaker rate was measured in words/sec., using starting and ending time-stamps for the discourse. 3. % of pronoun usage: Macaulay (2005) argues that females tend to use more third-person male/female pronouns (he, she, him, her and his) as compared to males. 4. % of back-channel responses such as “(laughter)” and “(lipsmacks)”. 5. % of passive usage: Passives were detected by extracting a list of past-participle verbs from Penn Treebank and using occurences of “form of ”to be” + past participle”. 714 6. % of short utterances (<= 3 words). 7. % of modal auxiliaries, subordinate clauses. 8. % of “mm” tokens such as “mhm”, “um”, “uh-huh”, “uh”, “hm”, “hmm”,etc. 9. Type-token ratio 10. Mean inter-utterance time: Avg. time taken between utterances of the same speaker. 11. % of “yeah” occurences. 12. % of WH-question words. 13. % Mean word and utterance length. The above classes resulted in a total of 16 sociolinguistic features which were added based on feature ablation studies as features in the meta SVM classifier along with the 4 features as explained previously in Section 5.3. The rows in Table 4 labeled “+ (any sociolinguistic feature)” show the performance gain using the respective features described in this section. Each row indicates an additive effect in the feature ablation, showing the result of adding the current sociolinguistic feature with the set of features mentioned in the rows above. 7 Gender Classification Results Table 4 combines the results of the experiments reported in the previous sections, assessed on both the Fisher and Switchboard corpora for gender classification. The evaluation measure was the standard classifier accuracy, that is, the fraction of test conversation sides whose gender was correctly predicted. Baseline performance (always guessing female) yields 57.47% and 51.6% on Fisher and Switchboard respectively. As noted before, the standard reference algorithm is Boulis and Ostendorf (2005), and all cited relative error reductions are based on this established standard, as implemented in this paper. Also, as a second reference, performance is also cited for the popular “Gender Genie”, an online gender-detector7, based on the manually weighted word-level sociolinguistic features discussed in Argamon et al. (2003). The additional table rows are described in Sections 4-6, and cumulatively yield substantial improvements over the Boulis and Ostendorf (2005) standard. 7.1 Aggregating results over per-speaker via consensus voting While Table 4 shows results for classifying the gender of the speaker on a per conversation basis (to be consistent and enable fair comparison 7http://bookblog.net/gender/genie.php Model Acc. Error Reduc. Fisher Corpus (57.5% of sides are female) Gender Genie 55.63 -384% Ngram (Boulis & Ostendorf, 05) 90.84 Ref. + Partner Model 91.28 4.80% + % of “yeah” 91.33 + % of (laughter) 91.38 + % of short utt. 91.43 + % of auxiliaries 91.48 + % of subord-clauses, “mm” 91.58 + % of Participation (in utt.) 91.63 + % of Passive usage 91.68 9.17% Switchboard Corpus (51.6% of sides are female) Gender Genie 55.94 -350% Ngram (Boulis & Ostendorf, 05) 90.22 Ref. + Partner Model 91.58 13.91% + Speaker rate, % of fillers 91.71 + Mean utt. len., % of Ques. 
91.96 + % of Passive usage 92.08 + % of (laughter) 92.20 20.25% Table 4: Results showing improvement in accuracy of gender classifier using partner-model and sociolinguistic features Model Acc. Error Reduc. Fisher Corpus Ngram (Boulis & Ostendorf, 05) 90.50 Ref. + Partner Model 91.60 11.58% + Socioling. Features 91.70 12.63% Switchboard Corpus Ngram (Boulis & Ostendorf, 05) 92.78 Ref. + Partner Model 93.81 14.27% + Socioling. Features 96.91 57.20% Table 5: Aggregate results on a “per-speaker” basis via majority consensus on different conversations for the respective speaker. The results on Switchboard are significantly higher due to more conversations per speaker as compared to the Fisher corpus with the work reported by Boulis and Ostendorf (2005)), all of the above models can be easily extended to per-speaker evaluation by pooling in the predictions from multiple conversations of the same speaker. Table 5 shows the result of each model on a per-speaker basis using a majority vote of the predictions made on the individual conversations of the respective speaker. The consensus model when applied to Switchboard corpus show larger gains as it has 9.38 conversations per speaker on average as compared to 1.95 conversations per speaker on average in Fisher. The results 715 on Switchboard corpus show a very large reduction in error rate of more than 57% with respect to the standard algorithm, further indicating the usefulness of the partner-sensitive model and richer sociolinguistic features when more conversational evidence is available. 8 Application to Arabic Language It would be interesting to see how the Boulis and Ostendorf (2005) model along with the partnerbased model and sociolinguistic features would extend to a new language. We used the LDC Gulf Arabic telephone conversation corpus (Linguistic Data Consortium, 2006). The training set consisted of 499 conversations, and the test set consisted of 200 conversations. Each speaker participated in only one conversation, resulting in the same number of training/test speakers as conversations, and thus there was no overlap in speakers/partners between training and test sets. Only non-lexical sociolinguistic features were used for Arabic in addition to the ngram features. The results for Arabic are shown in table 6. Based on prior distribution, always guessing the most likely class for gender (“male”) yielded 52.5% accuracy. We can see that the Boulis and Ostendorf (2005) model gives a reasonably high accuracy in Arabic as well. More importantly, we also see consistent performance gains via partner modeling and sociolinguistic features, indicating the robustness of these models and achieving final accuracy of 96%. 9 Application to Email Genre A primary motivation for using only the speaker transcripts as compared to also using acoustic properties of the speaker (Bocklet et al., 2008) was to enable the application of the models to other new genres. In order to empirically support this motivation, we also tested the performance of the models explored in this paper on the Enron email corpus (Klimt and Yang, 2004). We manually annotated the sender’s gender on a random collection of emails taken from the corpus. The resulting training and test sets after preprocessing for header information, reply-to’s, forwarded messages consisted of 1579 and 204 emails respectively. In addition to ngram features, a subset of sociolinguistic features that could be extracted for email were also utilized. 
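To make the non-lexical feature set more concrete, the following is a minimal sketch of how a few of the transcript-level sociolinguistic features from Section 6 (speaking rate, type-token ratio, percentage of short utterances, mean word length, mean inter-utterance time) might be computed from a time-stamped, tokenized conversation side. The Utterance record and its field names are illustrative assumptions, not the actual format of the Fisher, Switchboard or Enron data used in the paper.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Utterance:
    # Hypothetical transcript record: one utterance of a single conversation side.
    start: float          # start time in seconds
    end: float            # end time in seconds
    words: List[str]      # tokenized words

def sociolinguistic_features(side: List[Utterance]) -> Dict[str, float]:
    """Compute a handful of the transcript-level features described in Section 6."""
    n_words = sum(len(u.words) for u in side)
    total_time = sum(u.end - u.start for u in side) or 1e-9
    vocab = {w.lower() for u in side for w in u.words}

    # Mean gap between consecutive utterances of the same speaker.
    gaps = [b.start - a.end for a, b in zip(side, side[1:]) if b.start >= a.end]

    return {
        "speaking_rate_wps": n_words / total_time,               # words per second
        "type_token_ratio": len(vocab) / max(n_words, 1),
        "pct_short_utterances": sum(len(u.words) <= 3 for u in side) / max(len(side), 1),
        "mean_word_length": sum(len(w) for u in side for w in u.words) / max(n_words, 1),
        "mean_inter_utterance_time": sum(gaps) / len(gaps) if gaps else 0.0,
    }
```

Scalar features of this kind (or the subset computable from email text, which lacks timing information) are then appended to the sub-classifier scores of Section 5.3 as input to the meta SVM.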
Based on the prior distribution, always guessing the most likely class (“male”) resulted in 63.2% accuracy. We can see from Table 7 that the Boulis and Ostendorf (2005) Model Acc. Error Reduc. Gulf Arabic (52.5% sides are male) Ngram (Boulis & Ostendorf, 05) 92.00 Ref. + Partner Model 95.00 + Mean word len. 95.50 + Mean utt. len. 96.00 50.00% Table 6: Gender classification results for a new language (Gulf Arabic) showing consistent improvement gains via partner-model and sociolinguistic features. Model Acc. Error Reduc. Enron Email Corpus (63.2% sides are male) Ngram (Boulis & Ostendorf, 05) 76.78 Ref. + % of subor-claus., Mean 80.19 word len., Type-token ratio + % of pronouns. 80.50 16.02% Table 7: Application of Ngram model and sociolinguistic features for gender classification in a new genre (Email) model based on lexical features yields a reasonable performance with further improvements due to the addition of sociolingustic features, resulting in 80.5% accuracy. 10 Application to New Attributes While gender has been studied heavily in the literature, other speaker attributes such as age and native/non-native status also correlate highly with lexical choice and other non-lexical features. We applied the ngram-based model of Boulis and Ostendorf (2005) and our improvements using our partner-sensitive model and richer sociolinguistic features for a binary classification of the age of the speaker, and classifying into native speaker of English vs non-native. Corpus details for Age and Native Language: For age, we used the same training and test speakers from Fisher corpus as explained for gender in section 3 and binarized into greater-than or lessthan-or-equal-to 40 for more parallel binary evaluation. For predicting native/non-native status, we used the 1156 non-native speakers in the Fisher corpus and pooled them with a randomly selected equal number of native speakers. The training and test partitions consisted of 2000 and 312 speakers respectively, resulting in 3267 conversation sides for training and 508 conversation sides for testing. 716 Age >= 40 Age < 40 well 0.0330 im thirty -0.0266 im forty 0.0189 actually -0.0262 thats right 0.0160 definitely -0.0226 forty 0.0158 like -0.0223 yeah well 0.0153 wow -0.0189 uhhuh 0.0148 as well -0.0183 yeah right 0.0144 exactly -0.0170 and um 0.0130 oh wow -0.0143 im fifty 0.0126 everyone -0.0137 years 0.0126 i mean -0.0132 anyway 0.0123 oh really -0.0128 isnt 0.0118 mom -0.0112 daughter 0.0117 im twenty -0.0110 well i 0.0116 cool -0.0108 in fact 0.0116 think that -0.0107 whether 0.0111 so -0.0107 my daughter 0.0111 mean -0.0106 pardon 0.0110 pretty -0.0106 gee 0.0109 thirty -0.0105 know laughter 0.0105 hey -0.0103 this 0.0102 right now -0.0100 oh 0.0102 cause -0.0096 young 0.0100 im actually -0.0096 in 0.0100 my mom -0.0096 when they 0.0100 kinda -0.0095 Table 8: Top 25 ngram features for Age ranked by weights assigned by the linear SVM model Results for Age and Native/Non-Native: Based on the prior distribution, always guessing the most likely class for age ( age less-than-orequal-to 40) results in 62.59% accuracy and always guessing the most likely class for native language (non-native) yields 50.59% accuracy. Table 9 shows the results for age and native/nonnative speaker status. 
We can see that the ngrambased approach for gender also gives reasonable performance on other speaker attributes, and more importantly, both the partner-model and sociolinguistic features help in reducing the error rate on age and native language substantially, indicating their usefulness not just on gender but also on other diverse latent attributes. Table 8 shows the most discriminative ngrams for binary classification of age, it is interesting to see the use of “well” right on top of the list for older speakers, also found in the sociolinguistic studies for age (Macaulay, 2005). We also see that older speakers talk about their children (“my daughter”) and younger speakers talk about their parents (“my mom”), the use of words such as “wow”, “kinda” and “cool” is also common in younger speakers. To give maximal consistency/benefit to the Boulis and Ostendorf (2005) n-gram-based model, we did not filter the self-reporting n-grams such as “im forty” and “im thirty”, putting our sociolinguisticliterature-based and discourse-style-based features at a relative disadvantage. Model Accuracy Age (62.6% of sides have age <= 40) Ngram Model 82.27 + Partner Model 82.77 + % of passive, mean inter-utt. time 83.02 , % of pronouns + % of “yeah” 83.43 + type/token ratio, + % of lipsmacks 83.83 + % of auxiliaries, + % of short utt. 83.98 + % of “mm” 84.03 (Reduction in Error) (9.93%) Native vs Non-native (50.6% of sides are non-native) Ngram 76.97 + Partner 80.31 + Mean word length 80.51 (Reduction in Error) (15.37%) Table 9: Results showing improvement in the accuracy of age and native language classification using partner-model and sociolinguistic features 11 Conclusion This paper has presented and evaluated several original techniques for the latent classification of speaker gender, age and native language in diverse genres and languages. A novel partner-sensitve model shows performance gains from the joint modeling of speaker attributes along with partner speaker attributes, given the differences in lexical usage and discourse style such as observed between same-gender and mixed-gender conversations. The robustness of the partner-model is substantially supported based on the consistent performance gains achieved in diverse languages and attributes. This paper has also explored a rich variety of novel sociolinguistic and discourse-based features, including mean utterance length, passive/active usage, percentage domination of the conversation, speaking rate and filler word usage. In addition to these novel models, the paper also shows how these models and the previous work extend to new languages and genres. Cumulatively up to 20% error reduction is achieved relative to the standard Boulis and Ostendorf (2005) algorithm for classifying individual conversations on Switchboard, and accuracy for gender detection on the Switchboard corpus (aggregate) and Gulf Arabic exceeds 95%. Acknowledgements We would like to thank Omar F. Zaidan for valuable discussions and feedback during the initial stages of this work. 717 References S. Argamon, M. Koppel, J. Fine, and A.R. Shimoni. 2003. Gender, genre, and writing style in formal written texts. Text-Interdisciplinary Journal for the Study of Discourse, 23(3):321–346. T. Bocklet, A. Maier, and E. N¨oth. 2008. Age Determination of Children in Preschool and Primary School Age with GMM-Based Supervectors and Support Vector Machines/Regression. In Proceedings of Text, Speech and Dialogue; 11th International Conference, volume 1, pages 253–260. C. Boulis and M. Ostendorf. 
2005. A quantitative analysis of lexical differences between genders in telephone conversations. Proceedings of ACL, pages 435–442. J.D. Burger and J.C. Henderson. 2006. An exploration of observable features related to blogger age. In Computational Approaches to Analyzing Weblogs: Papers from the 2006 AAAI Spring Symposium, pages 15–20. C. Cieri, D. Miller, and K. Walker. 2004. The Fisher Corpus: a resource for the next generations of speech-to-text. In Proceedings of LREC. J. Coates. 1998. Language and Gender: A Reader. Blackwell Publishers. Linguistic Data Consortium. 2006. Gulf Arabic Conversational Telephone Speech Transcripts. P. Eckert and S. McConnell-Ginet. 2003. Language and Gender. Cambridge University Press. J.L. Fischer. 1958. Social influences on the choice of a linguistic variant. Word, 14:47–56. JJ Godfrey, EC Holliman, and J. McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. Proceedings of ICASSP, 1. S.C. Herring and J.C. Paolillo. 2006. Gender and genre variation in weblogs. Journal of Sociolinguistics, 10(4):439–459. J. Holmes and M. Meyerhoff. 2003. The Handbook of Language and Gender. Blackwell Publishers. H. Jing, N. Kambhatla, and S. Roukos. 2007. Extracting social networks and biographical facts from conversational speech transcripts. Proceedings of ACL, pages 1040–1047. B. Klimt and Y. Yang. 2004. Introducing the Enron corpus. In First Conference on Email and AntiSpam (CEAS). M. Koppel, S. Argamon, and A.R. Shimoni. 2002. Automatically Categorizing Written Texts by Author Gender. Literary and Linguistic Computing, 17(4):401–412. W. Labov. 1966. The Social Stratification of English in New York City. Center for Applied Linguistics, Washington DC. H. Liu and R. Mihalcea. 2007. Of Men, Women, and Computers: Data-Driven Gender Modeling for Improved User Interfaces. In International Conference on Weblogs and Social Media. R.K.S. Macaulay. 2005. Talk that Counts: Age, Gender, and Social Class Differences in Discourse. Oxford University Press, USA. S. Nowson and J. Oberlander. 2006. The identity of bloggers: Openness and gender in personal weblogs. Proceedings of the AAAI Spring Symposia on Computational Approaches to Analyzing Weblogs. J. Schler, M. Koppel, S. Argamon, and J. Pennebaker. 2006. Effects of age and gender on blogging. Proceedings of the AAAI Spring Symposia on Computational Approaches to Analyzing Weblogs. I. Shafran, M. Riley, and M. Mohri. 2003. Voice signatures. Proceedings of ASRU, pages 31–36. S. Singh. 2001. A pilot study on gender differences in conversational speech on lexical richness measures. Literary and Linguistic Computing, 16(3):251–264. 718
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 719–727, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Graph-based Semi-Supervised Learning for Question-Answering Asli Celikyilmaz EECS Department University of California at Berkeley Berkeley, CA, 94720 [email protected] Marcus Thint Intelligent Systems Research Centre British Telecom (BT Americas) Jacksonville, FL 32256, USA [email protected] Zhiheng Huang EECS Department University of California at Berkeley Berkeley, CA, 94720 [email protected] Abstract We present a graph-based semi-supervised learning for the question-answering (QA) task for ranking candidate sentences. Using textual entailment analysis, we obtain entailment scores between a natural language question posed by the user and the candidate sentences returned from search engine. The textual entailment between two sentences is assessed via features representing high-level attributes of the entailment problem such as sentence structure matching, question-type named-entity matching based on a question-classifier, etc. We implement a semi-supervised learning (SSL) approach to demonstrate that utilization of more unlabeled data points can improve the answer-ranking task of QA. We create a graph for labeled and unlabeled data using match-scores of textual entailment features as similarity weights between data points. We apply a summarization method on the graph to make the computations feasible on large datasets. With a new representation of graph-based SSL on QA datasets using only a handful of features, and under limited amounts of labeled data, we show improvement in generalization performance over state-of-the-art QA models. 1 Introduction Open domain natural language question answering (QA) is a process of automatically finding answers to questions searching collections of text files. There are intensive research in this area fostered by evaluation-based conferences, such as the Text REtrieval Conference (TREC) (Voorhees, 2004), etc. One of the focus of these research, as well as our work, is on factoid questions in English, whereby the answer is a short string that indicates a fact, usually a named entity. A typical QA system has a pipeline structure starting from extraction of candidate sentences to ranking true answers. In order to improve QA systems’ performance many research focus on different structures such as question processing (Huang et al., 2008), information retrieval (Clarke et al., 2006), information extraction (Saggion and Gaizauskas, 2006), textual entailment (TE) (Harabagiu and Hickl, 2006) for ranking, answer extraction, etc. Our QA system has a similar pipeline structure and implements a new TE module for information extraction phase of the QA task. TE is a task of determining if the truth of a text entails the truth of another text (hypothesis). Harabagui and Hickl (2006) has shown that using TE for filtering or ranking answers can enhance the accuracy of current QA systems, where the answer of a question must be entailed by the text that supports the correctness of this answer. We derive information from pair of texts, i.e., question as hypothesis and candidate sentence as the text, potentially indicating containment of true answer, and cast the inference recognition as classification problem to determine if a question text follows candidate text. One of the challenges we face with is that we have very limited amount of labeled data, i.e., correctly labeled (true/false entailment) sentences. 
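Before moving on to the NER component, the sketch below gives a rough picture of the QC step just described. It is not the feature set of Huang et al. (2008), which additionally uses semantic headwords and hypernyms; a linear SVM over word n-grams (which capture the wh-word and some headword context) is shown instead as a simplified stand-in, and the four training questions and labels are placeholders rather than the actual training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training examples; a real model is trained on the
# 6 coarse / 50 fine question-type labels mentioned in the text.
questions = [
    "How many states are there in US ?",
    "What is the capital of France ?",
    "Who purchased Merrill Lynch ?",
    "When did Nixon die ?",
]
labels = ["NUM:Count", "LOC:City", "HUM:Ind", "NUM:Date"]

# Word unigrams and bigrams crudely approximate wh-word and headword features.
qc_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
qc_model.fit(questions, labels)
print(qc_model.predict(["How many moons does Mars have ?"]))
```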
Recent research indicates that using labeled and unlabeled data in semi-supervised learning (SSL) environment, with an emphasis on graph-based methods, can improve the performance of information extraction from data for tasks such as question classification (Tri et al., 2006), web classification (Liu et al., 2006), relation extraction (Chen et al., 2006), passage-retrieval (Otterbacher et al., 2009), various natural language processing tasks such as partof-speech tagging, and named-entity recognition (Suzuki and Isozaki, 2008), word-sense disam719 biguation (Niu et al., 2005), etc. We consider situations where there are much more unlabeled data, XU, than labeled data, XL, i.e., nL ≪nU. We construct a textual entailment (TE) module by extracting features from each paired question and answer sentence and designing a classifier with a novel yet feasible graphbased SSL method. The main contributions are: −construction of a TE module to extract matching structures between question and answer sentences, i.e., q/a pairs. Our focus is on identifying good matching features from q/a pairs, concerning different sentence structures in section 2, −representation of our linguistic system by a form of a special graph that uses TE scores in designing a novel affinity matrix in section 3, −application of a graph-summarization method to enable learning from a very large unlabeled and rather small labeled data, which would not have been feasible for most sophisticated learning tools in section 4. Finally we demonstrate the results of experiments with real datasets in section 5. 2 Feature Extraction for Entailment Implementation of different TE models has previously shown to improve the QA task using supervised learning methods (Harabagiu and Hickl, 2006). We present our recent work on the task of QA, wherein systems aim at determining if a text returned by a search engine contains the correct answer to the question posed by the user. The major categories of information extraction produced by our QA system characterizes features for our TE model based on analysis of q/a pairs. Here we give brief descriptions of only the major modules of our QA due to space limitations. 2.1 Pre-Processing for Feature Extraction We build the following pre-processing modules for feature extraction to be applied prior to our textual entailment analysis. Question-Type Classifier (QC): QC is the task of identifying the type of a given question among a predefined set of question types. The type of a question is used as a clue to narrow down the search space to extract the answer. We used our QC system presented in (Huang et al., 2008), which classifies each question into 6-coarse categories (i.e., abbr., entity, human, location, number, description) as well as 50-fine categories (i.e., color, food, sport, manner, etc.) with almost 90% accuracy. For instance, for question ”How many states are there in US?”, the question-type would be ’NUMBER’ as course category, and ’Count’ for the finer category, represented jointly as NUM:Count. The QC model is trained via support vector machines (SVM) (Vapnik, 1995) considering different features such as semantic headword feature based on variation of Collins rules, hypernym extraction via Lesk word disambiguation (Lesk, 1988), regular expressions for whword indicators, n-grams, word-shapes(capitals), etc. Extracted question-type is used in connection with our Named-Entity-Recognizer, to formulate question-type matching feature, explained next. 
Named-Entity Recognizer (NER): This component identifies and classifies basic entities such as proper names of person, organization, product, location; time and numerical expressions such as year, day, month; various measurements such as weight, money, percentage; contact information like address, web-page, phone-number, etc. This is one of the fundamental layers of information extraction of our QA system. The NER module is based on a combination of user defined rules based on Lesk word disambiguation (Lesk, 1988), WordNet (Miller, 1995) lookups, and many userdefined dictionary lookups, e.g. renown places, people, job types, organization names, etc. During the NER extraction, we also employ phrase analysis based on our phrase utility extraction method using Standford dependency parser ((Klein and Manning, 2003)). We can categorize entities up to 6 coarse and 50 fine categories to match them with the NER types from QC module. Phrase Identification(PI): Our PI module undertakes basic syntactic analysis (shallow parsing) and establishes simple, un-embedded linguistic structures such as noun-phrases (NN), basic prepositional phrases (PP) or verb groups (VG). In particular PI module is based on 56 different semantic structures identified in Standford dependency parser in order to extract meaningful compound words from sentences, e.g., ”They heard high pitched cries.”. Each phrase is identified with a head-word (cries) and modifiers (high pitched). Questions in Affirmative Form: To derive linguistic information from pair of texts (statements), we parse the question and turn into affirmative form by replacing the wh-word with a placeholder and associating the question word with the question-type from the QC module. For example: 720 ”What is the capital of France?” is written in affirmative form as ”[X]LOC:City is the capital of FranceLOC:Country.”. Here X is the answer text of LOC:City NER-type, that we seek. Sentence Semantic Component Analysis: Using shallow semantics, we decode the underlying dependency trees that embody linguistic relationships such as head-subject (H-S), head-modifier (complement) (H-M), head-object (H-O), etc. For instance, the sentence ”Bank of America acquired Merrill Lynch in 2008.” is partitioned as: −Head (H): acquired −Subject (S): Bank of America[Human:group] −Object (O): Merrill Lynch[Human:group] −Modifier (M): 2008[Num:Date] These are used as features to match components of questions like ”Who purchased Merrill Lynch?”. Sentence Structure Analysis: In our question analysis, we observed that 98% of affirmed questions did not contain any object and they are also in copula (linking) sentence form that is, they are only formed by subject and information about the subject as: {subject + linking-verb + subjectinfo.}. Thus, we investigate such affirmed questions different than the rest and call them copula sentences and the rest as non-copula sentences. 1 For instance our system recognizes affirmed question ” Fred Durst’s group name is [X]DESC:Def”. as copula-sentence, which consists of subject (underlined) and some information about it. 2.2 Features from Paired Sentence Analysis We extract the TE features based on the above lexical, syntactic and semantic analysis of q/a pairs and cast the QA task as a classification problem. 
Among many syntactic and semantic features we considered, here we present only the major ones: (1) (QTCF) Question-Type-Candidate Sentence NER match feature: Takes on the value ’1’ when the candidate sentence contains the fine NER of the question-type, ’0.5’ if it contains the coarse NER or ’0’ if no NER match is found. (2) (QComp) Question component match features: The sentence component analysis is applied on both the affirmed question and the candidate sentence pairs to characterize their semantic components including subject(S), object(O), head (H) and modifiers(M). We match each semantic component of a question to the best matching com1One option would have been to leave out the non-copula questions and build the model for only copula questions. ponent of a candidate sentence. For example for the given question, ”When did Nixon die?”, when the following candidate sentence, i.e., ”Richard Nixon, 37th President of USA, passed away of stroke on April 22, 1994.” is considered, we extract the following component match features: −Head-Match: die→pass away −Subject-Match: Nixon→Richard Nixon −Object-Match: − −Modifier-Match: [X]→April 22, 1994 In our experiments we observed that converted questions have at most one subject, head, object and a few modifiers. Thus, we used one feature for each and up to three for M-Match features. The feature values vary based on matching type, i.e., exact match, containment, synonym match, etc. For example, the S-Match feature will be ”1.0” due to head-match of the noun-phrase. (3) (LexSem) Lexico-Syntactic Alignment Features: They range from the ratio of consecutive word overlap between converted question (Q) and candidate sentence (S) including –Unigram/Bigram, selecting individual/pair of adjacent tokens in Q matching with the S –Noun and verb counts in common, separately. –When words don’t match we attempt matching synonyms in WordNet for most common senses. –Verb match statistics using WordNet’s cause and entailment relations. As a result, each q/a pair is represented as a feature vector xi ∈ℜd characterizing the entailment information between them. 3 Graph Based Semi-Supervised Learning for Entailment Ranking We formulate semi-supervised entailment rank scores as follows. Let each data point in X = {x1, ..., xn}, xi ∈ℜd represents information about a question and candidate sentence pair and Y = {y1, ..., yn} be their output labels. The labeled part of X is represented with XL = {x1, ..., xl} with associated labels YL = {y1, ..., yl}T . For ease of presentation we concentrate on binary classification, where yi can take on either of {−1, +1} representing entailment or non-entailment. X has also unlabeled part, XU = {x1, ..., xu}, i.e., X = XL ∪XU. The aim is to predict labels for XU. There are also other testing points, XTe, which has the same properties as X. Each node V in graph g = (V, E) represents a feature vector, xi ∈ℜd of a q/a pair, characteriz721 ing their entailment relation information. When all components of a hypothesis (affirmative question) have high similarity with components of text (candidate sentence), then entailment score between them would be high. Another pair of q/a sentences with similar structures would also have high entailment scores as well. So similarity between two q/a pairs xi, xj, is represented with wij ∈ℜn×n, i.e., edge weights, and is measured as: wij = 1 − dP q=1 |xiq−xjq| d (1) As total entailment scores get closer, the larger their edge weights would be. 
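The edge weight in Eq. (1) is simply one minus the normalized L1 distance between the two entailment feature vectors; since every feature value lies in [0, 1], the weight does too. A minimal vectorized sketch, assuming the feature vectors are stacked row-wise in a NumPy array:

```python
import numpy as np

def affinity_matrix(X: np.ndarray) -> np.ndarray:
    """Edge weights w_ij = 1 - (1/d) * sum_q |x_iq - x_jq|   (Eq. 1).

    X has shape (n, d): one entailment feature vector per q/a pair,
    with every feature value assumed to lie in [0, 1], so w_ij lies in [0, 1] too.
    """
    n, d = X.shape
    # Broadcasting builds an (n, n, d) tensor of absolute differences; this is
    # fine for moderate n, and the cost of large dense graphs is exactly what
    # the summarization step of Section 4 is meant to alleviate.
    l1 = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=2)
    return 1.0 - l1 / d   # note w_ii = 1 by construction
```

The diagonal degree matrix introduced next is then simply np.diag(W.sum(axis=1)).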
Based on our sentence structure analysis in section 2, given dataset can be further separated into two, i.e., Xcp containing q/a pairs in which affirmed questions are copula-type, and Xncp containing q/a pairs with non-copula-type affirmed questions. Since copula and non-copula sentences have different structures, e.g., copula sentences does not usually have objects, we used different sets of features for each type. Thus, we modify edge weights in (1) as follows: ˜wij =              0 xi ∈Xcp, xj ∈Xncp 1 − dcp P q=1 |xiq−xjq| dcp xi, xj ∈Xcp 1 − dncp P q=1 |xiq−xjq| dncp xi, xj ∈Xncp (2) The diagonal degree matrix D is defined for graph g by D=P j ˜wij. In general graph-based SSL, a function over the graph is estimated such that it satisfies two conditions: 1) close to the observed labels , and 2) be smooth on the whole graph by: arg minf X i⊂L (fi −yi)2+λ X i,j∈L∪U ˜wij(fi −fj)2 (3) The second term is a regularizer to represent the label smoothness, fT Lf, where L = D−W is the graph Laplacian. To satisfy the local and global consistency (Zhou et al., 2004), normalized combinatorial Laplacian is used such that the second term in (3) is replaced with normalized Laplacian, L = D−1/2LD−1/2, as follows: X i,j∈L∪U wij( fi √di − fj √ dj )2 = fT Lf (4) Setting gradient of loss function to zero, optimum f∗, where Y = {YL ∪YU} , YU =  yn l+1 = 0 ; f∗= (1 + λ (1 −L))−1 Y (5) Most graph-based SSLs are transductive, i.e., not easily expendable to new test points outside L∪U. In (Delalleau et al., 2005) an induction scheme is proposed to classify a new point xTe by ˆf(xTe) = P i∈L∪U wxifi P i∈L∪U wxi (6) Thus, we use induction, where we can, to avoid re-construction of the graph for new test points. 4 Graph Summarization Research on graph-based SSL algorithms point out their effectiveness on real applications, e.g., (Zhu et al., 2003), (Zhou and Sch¨olkopf, 2004), (Sindhwani et al., 2007). However, there is still a need for fast and efficient SSL methods to deal with vast amount of data to extract useful information. It was shown in (Delalleau et al., 2006) that the convergence rate of the propagation algorithms of SSL methods is O(kn2), which mainly depends on the form of eigenvectors of the graph Laplacian (k is the number of nearest neighbors). As the weight matrix gets denser, meaning there will be more data points with connected weighted edges, the more it takes to learn the classifier function via graph. Thus, the question is, how can one reduce the data points so that weight matrix is sparse, and it takes less time to learn? Our idea of summarization is to create representative vertices of data points that are very close to each other in terms of edge weights. Suffice to say that similar data points are likely to represent denser regions in the hyper-space and are likely to have same labels. If these points are close enough, we can characterize the boundaries of these group of similar data points with respect to graph and then capture their summary information by new representative vertices. We replace each data point within the boundary with their representative vertex, to form a summary graph. 4.1 Graph Summarization Algorithm Let each selected dataset be denoted as Xs = {xs i} , i = 1...m, s = 1, ..., q, where m is the number of data points in the sample dataset and q is the number of sample datasets drawn from X. The labeled data points, i.e., XL, are appended to each of these selected Xs datasets, Xs =  xs 1, ...xs m−l ∪XL. 
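Before continuing with the summarization algorithm, the solver implied by Eqs. (3) to (5) can be written compactly. The following is a minimal NumPy sketch of one standard closed form, in the Zhou et al. (2004) style where the target for unlabeled points is set to zero; W is an affinity matrix such as the one above (or its summarized counterpart from this section), and lam is an assumed regularization weight rather than a value reported in the paper.

```python
import numpy as np

def propagate_labels(W: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Closed-form graph SSL: minimize sum_i (f_i - y_i)^2 + lam * f^T L_norm f.

    W   : (n, n) symmetric affinity matrix, e.g. from Eq. (1)/(2).
    y   : (n,) vector with +1/-1 for labeled points and 0 for unlabeled ones.
    Returns real-valued scores f*; sign(f*) gives entailment labels, and f*
    itself can be used directly to rank candidate sentences.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Normalized Laplacian  L_norm = I - D^{-1/2} W D^{-1/2}
    S = (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    L_norm = np.eye(len(y)) - S
    # Setting the gradient of the objective to zero gives (I + lam * L_norm) f = y.
    return np.linalg.solve(np.eye(len(y)) + lam * L_norm, y)
```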
Using a separate learner, e.g., SVM (Vapnik, 1995), we obtain predicted outputs, ˆ Y s = ˆys 1, ..., ˆys m−l  of Xs and append observed labels ˆ Y s = ˆ Y s ∪YL. 722 Figure 1: Graph Summarization. (a) Actual data point with predicted class labels, (b) magnified view of a single node (black) and its boundaries (c) calculated representative vertex, (d) summary dataset. We define the weight W s and degree Ds matrices of Xs using (1). Diagonal elements of Ds is converted into a column vector and is sorted to find the high degree vertices that are surrounded with large number of close neighbors. The algorithm starts from the highest degree node xs i ∈Xs, where initial neighbor nodes have assumably the same labels. This is shown in Figure 1-(b) with the inner square around the middle black node, corresponding high degree node. If its immediate k neighbors, dark blue colored nodes, have the same label, the algorithm continues to search for the secondary k neighbors, the light blue colored nodes, i.e., the neighbors of the neighbors, to find out if there are any opposite labeled nodes around. For instance, for the corresponding node (black) in Figure 1-(b) we can only go up to two neighbors, because in the third level, there are a few opposite labeled nodes, in red. This indicates boundary Bs i for a corresponding node and unique nearest neighbors of same labels. Bs i = n xs i ∪  xs j nm j=1 o (7) In (7), nm denotes the maximum number of nodes of a Bs i and ∀xs j, xs j′ ∈Bs i , ys j = ys j′ = yBs i , where yBs i is the label of the selected boundary Bs i . We identify the edge weights ws ij between each node in the boundary Bs i via (1), thus the boundary is connected. We calculate the weighted average of the vertices to obtain the representative summary node of Bs i as shown in Figure 1-(c); X s Bi = Pnm i̸=j=1 1 2ws ij(xs i + xs j) Pnm i̸=j=1 ws ij (8) The boundaries of some nodes may only contain themselves because their immediate neighbors may have opposite class labels. Similarly some may have only k + 1 nodes, meaning only immediate neighbor nodes have the same labels. For instance in Fig. 1 the boundary is drawn after the secondary neighbors are identified (dashed outer boundary). This is an important indication that some representative data points are better indicators of class labels than the others due to the fact that they represent a denser region of same labeled points. We represent this information with the local density constraints. Each new vertex is associated with a local density constraint, 0 ≤δj ≤1, which is equal to the total number of neighboring nodes used to construct it. We use the normalized density constraints for ease of calculations. Thus, for a each sample summary dataset, a local density constraint vector is identified as δs = {δs 1, ..., δs nb}T . The local density constraints become crucial for inference where summarized labeled data are used instead of overall dataset. Algorithm 1 Graph Summary of Large Dataset 1: Given X = {x1, ..., xn} , X = XL ∪XU 2: Set q ←max number of subsets 3: for s ←1, ..., q do 4: Choose a random subset with repetitions 5: Xs = {xs 1, ..., xs m−l, xm−l+1, ..., xm} 6: Summarize Xs to obtain X s in (9) 7: end for 8: Obtain summary dataset X =  X s q s=1=  Xi p i=1 and local density constrains, δ = {δi}p i=1. After all data points are evaluated, the sample dataset Xs can now be represented with the summary representative vertices as X s =  X s B1, ..., X s Bnb . 
(9) and corresponding local density constraints as, δs = {δs 1, ..., δs nb}T , 0 < δs i ≤1 (10) 723 The summarization algorithm is repeated for each random subset Xs, s = 1, ..., q of very large dataset X = XL ∪XU, see Algorithm 1. As a result q number of summary datasets X s each of which with nb labeled data points are combined to form a representative sample of X, X =  X s q s=1 reducing the number of data from n to a much smaller number of data, p = q ∗nb ≪n. So the new summary of the X can be represented with X =  Xi p i=1. For example, an original dataset with 1M data points can be divided up to q = 50 random samples of m = 5000 data points each. Then using graph summarization each summarized dataset may be represented with nb ∼= 500 data points. After merging summarized data, final summarized samples compile to 500 ∗50 ∼= 25K ≪1M data points, reduced to 1/40 of its original size. Each representative data point in the summarized dataset X is associated with a local density constraints, a p = q ∗nb dimensional row vector as δ = {δi}p i=1. We can summarize a graph separately for different sentence structures, i.e., copula and noncopula sentences. Then representative data points from each summary dataset are merged to form final summary dataset. The Hybrid graph summary models in the experiments follow such approach. 4.2 Prediction of New Testing Dataset Instead of using large dataset, we now use summary dataset with predicted labels, and local density constraints to learn the class labels of nte number of unseen data points, i.e., testing data points, XTe = {x1, ..., xnte}. Using graph-based SSL method on the new representative dataset, X′ = X ∪XTe, which is comprised of summarized dataset, X =  Xi p i=1, as labeled data points, and the testing dataset, XTe as unlabeled data points. Since we do not know estimated local density constraints of unlabeled data points, we use constants to construct local density constraint column vector for X′ dataset as follows: δ′ = {1 + δi}p i=1 ∪[1 ... 1]T ∈ℜnte (11) 0 < δi ≤1. To embed the local density constraints, the second term in (3) is replaced with the constrained normalized Laplacian, Lc = δT Lδ, X i,j∈L∪T wij( fi p δ′ i ∗di − fj q δ′ j ∗dj )2 = fT Lcf (12) If any testing vector has an edge between a labeled vector, then with the usage of the local density constraints, the edge weights will not not only be affected by that labeled node, but also how dense that node is within that part of the graph. 5 Experiments We demonstrate the results from three sets of experiments to explore how our graph representation, which encodes textual entailment information, can be used to improve the performance of the QA systems. We show that as we increase the number of unlabeled data, with our graphsummarization, it is feasible to extract information that can improve the performance of QA models. We performed experiments on a set of 1449 questions from TREC-99-03. Using the search engine 2, we retrieved around 5 top-ranked candidate sentences from a large newswire corpus for each question to compile around 7200 q/a pairs. We manually labeled each candidate sentence as true or false entailment depending on the containment of the true answer string and soundness of the entailment to compile quality training set. We also used a set of 340 QA-type sentence pairs from RTE02-03 and 195 pairs from RTE04 by converting the hypothesis sentences into question form to create additional set of q/a pairs. 
In total, we created labeled training dataset XL of around 7600 q/a pairs . We evaluated the performance of graphbased QA system using a set of 202 questions from the TREC04 as testing dataset (Voorhees, 2003), (Prager et al., 2000). We retrieved around 20 candidate sentences for each of the 202 test questions and manually labeled each q/a pair as true/false entailment to compile 4037 test data. To obtain more unlabeled training data XU, we extracted around 100,000 document headlines from a large newswire corpus. Instead of matching headline and first sentence of the document as in (Harabagiu and Hickl, 2006), we followed a different approach. Using each headline as a query, we retrieved around 20 top-ranked sentences from search engine. For each headline, we picked the 1st and the 20th retrieved sentences. Our assumption is that the first retrieved sentence may have higher probability to entail the headline, whereas the last one may have lower probability. Each of these headline-candidate sentence pairs is used as additional unlabeled q/a pair. Since each head2http://lucene.apache.org/java/ 724 Features Model MRR Top1 Top5 Baseline − 42.3% 32.7% 54.5% QTCF SVM 51.9% 44.6% 63.4% SSL 49.5% 43.1% 60.9% LexSem SVM 48.2% 40.6% 61.4% SSL 47.9% 40.1% 58.4% QComp SVM 54.2% 47.5% 64.3% SSL 51.9% 45.5% 62.4% Table 1: MRR for different features and methods. line represents a converted question, in order to extract the question-type feature, we use a matching NER-type between the headline and candidate sentence to set question-type NER match feature. We applied pre-processing and feature extraction steps of section 2 to compile labeled and unlabeled training and labeled testing datasets. We use the rank scores obtained from the search engine as baseline of our system. We present the performance of the models using Mean Reciprocal Rank (MRR), top 1 (Top1) and top 5 prediction accuracies (Top5) as they are the most commonly used performance measures of QA systems (Voorhees, 2004). We performed manual iterative parameter optimization during training based on prediction accuracy to find the best k-nearest parameter for SSL, i.e., k = {3, 5, 10, 20, 50} , and best C =  10−2, .., 102 and γ =  2−2, .., 23 for RBF kernel SVM. Next we describe three different experiments and present individual results. Graph summarization makes it feasible to execute SSL on very large unlabeled datasets, which was otherwise impossible. This paper has no assumptions on the performance of the method in comparison to other SSL methods. Experiment 1. Here we test individual contribution of each set of features on our QA system. We applied SVM and our graph based SSL method with no summarization to learn models using labeled training and testing datasets. For SSL we used the training as labeled and testing as unlabeled dataset in transductive way to predict the entailment scores. The results are shown in Table 1. From section 2.2, QTCF represents question-type NER match feature, LexSem is the bundle of lexico-semantic features and QComp is the matching features of subject, head, object, and three complements. In comparison to the baseline, QComp have a significant effect on the accuracy of the QA system. In addition, QTCF has shown to improve the MRR performance by about 22%. Although the LexSem features have minimal semantic properties, they can improve MRR performance by 14%. Experiment 2. To evaluate the performance of graph summarization we performed two separate experiments. 
In the first part, we randomly selected subsets of labeled training dataset Xi L ⊂ XL with different sample sizes, ni L ={1% ∗nL, 5% ∗nL, 10% ∗nL, 25% ∗nL, 50% ∗nL, 100% ∗nL}, where nL represents the sample size of XL. At each random selection, the rest of the labeled dataset is hypothetically used as unlabeled data to verify the performance of our SSL using different sizes of labeled data. Table 2 reports the MRR performance of QA system on testing dataset using SVM and our graph-summary SSL (gSum SSL) method using the similarity function in (1). In the second part of the experiment, we applied graph summarization on copula and noncopula questions separately and merged obtained representative points to create labeled summary dataset. Then using similarity function in (2) we applied SSL on labeled summary and unlabeled testing via transduction. We call these models as Hybrid gSum SSL. To build SVM models in the same way, we separated the training dataset into two based on copula and non-copula questions, Xcp, Xncp and re-run the SVM method separately. The testing dataset is divided into two accordingly. Predicted models from copula sentence datasets are applied on copula sentences of testing dataset and vice versa for non- copula sentences. The predicted scores are combined to measure overall performance of Hybrid SVM models. We repeated the experiments five times with different random samples and averaged the results. Note from Table 2 that, when the number of labeled data is small (ni L < 10% ∗nL), graph based SSL, gSum SSL, has a better performance compared to SVM. As the percentage of labeled points in training data increase, the SVM performance increases, however graph summary SSL is still comparable with SVM. On the other hand, when we build separate models for copula and non-copula questions with different features, the performance of the overall model significantly increases in both methods. Especially in Hybrid graph-Summary SSL, Hybrid gSum SSL, when the number of labeled data is small (ni L < 25% ∗ nL) performance improvement is better than rest 725 % SVM gSum SSL Hybrid SVM Hybrid gSum SSL #Labeled MRR Top1 Top5 MRR Top1 Top5 MRR Top1 Top5 MRR Top1 Top5 1% 45.2 33.2 65.8 56.1 44.6 72.8 51.6 40.1 70.8 59.7 47.0 75.2 5% 56.5 45.1 73.0 57.3 46.0 73.7 54.2 40.6 72.3 60.3 48.5 76.7 10% 59.3 47.5 76.7 57.9 46.5 74.2 57.7 47.0 74.2 60.4 48.5 77.2 25% 59.8 49.0 78.7 58.4 45.0 79.2 61.4 49.5 78.2 60.6 49.0 76.7 50% 60.9 48.0 80.7 58.9 45.5 79.2 62.2 51.0 79.7 61.3 50.0 77.2 100% 63.5 55.4 77.7 59.7 47.5 79.7 67.6 58.0 82.2 61.9 51.5 78.2 Table 2: The MRR (%) results of graph-summary SSL (gSum SSL) and SVM as well as Hybrid gSum SSL and Hybrid SVM with different sizes of labeled data. #Unlabeled MRR Top1 Top5 25K 62.1% 52.0% 76.7% 50K 62.5% 52.5% 77.2% 100K 63.3% 54.0% 77.2% Table 3: The effect of number of unlabeled data on MRR from Hybrid graph Summarization SSL. of the models. As more labeled data is introduced, Hybrid SVM models’ performance increase drastically, even outperforming the state-of-the art MRR performance on TREC04 datasets presented in (Shen and Klakow, 2006) i.e., MRR=67.0%, Top1=62.0%, Top5=74.0%. This is due to the fact that we establish two seperate entailment models for copula and non-copula q/a sentence pairs that enables extracting useful information and better representation of the specific data. Experiment 3. Although SSL methods are capable of exploiting information from unlabeled data, learning becomes infeasible as the number of data points gets very large. 
There are various research on SLL to overcome the usage of large number of unlabeled dataset challenge (Delalleau et al., 2006). Our graph summarization method, Hybrid gsum SSL, has a different approach. which can summarize very large datasets into representative data points and embed the original spatial information of data points, namely local density constraints, within the SSL summarization schema. We demonstrate that as more labeled data is used, we would have a richer summary dataset with additional spatial information that would help to improve the the performance of the graph summary models. We gradually increase the number of unlabeled data samples as shown in Table 3 to demonstrate the effects on the performance of testing dataset. The results show that the number of unlabeled data has positive effect on performance of graph summarization SSL. 6 Conclusions and Discussions In this paper, we applied a graph-based SSL algorithm to improve the performance of QA task by exploiting unlabeled entailment relations between affirmed question and candidate sentence pairs. Our semantic and syntactic features for textual entailment analysis has individually shown to improve the performance of the QA compared to the baseline. We proposed a new graph representation for SSL that can represent textual entailment relations while embedding different question structures. We demonstrated that summarization on graph-based SSL can improve the QA task performance when more unlabeled data is used to learn the classifier model. There are several directions to improve our work: (1) The results of our graph summarization on very large unlabeled data is slightly less than best SVM results. This is largely due to using headlines instead of affirmed questions, wherein headlines does not contain question-type and some of them are not in proper sentence form. This adversely effects the named entity match of questiontype and the candidate sentence named entities as well as semantic match component feature extraction. We will investigate experiment 3 by using real questions from different sources and construct different test datasets. (2) We will use other distance measures to better explain entailment between q/a pairs and compare with other semisupervised and transductive approaches. 726 References Jinxiu Chen, Donghong Ji, C. Lim Tan, and Zhengyu Niu. 2006. Relation extraction using label propagation based semi-supervised learning. In Proceedings of the ACL-2006. Charles L.A. Clarke, Gordon V. Cormack, R. Thomas Lynam, and Egidio L. Terra. 2006. Question answering by passage selection. In In: Advances in open domain question answering, Strzalkowski, and Harabagiu (Eds.), pages 259–283. Springer. Oliver Delalleau, Yoshua Bengio, and Nicolas Le Roux. 2005. Efficient non-parametric function induction in semi-supervised learning. In Proceedings of AISTAT-2005. Oliver Delalleau, Yoshua Bengio, and Nicolas Le Roux. 2006. Large-scale algorithms. In In: SemiSupervised Learning, pages 333–341. MIT Press. Sandra Harabagiu and Andrew Hickl. 2006. Methods for using textual entailment in open-domain question answering. In In Proc. of ACL-2006, pages 905–912. Zhiheng Huang, Marcus Thint, and Zengchang Qin. 2008. Question classification using headwords and their hypernyms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-08), pages 927–936. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of the ACL-2003, pages 423–430. 
Michael Lesk. 1988. They said true things, but called them by wrong names - vocabulary problems in retrieval systems. In In Proc. 4th Annual Conference of the University of Waterloo Centre for the New OED. Rong Liu, Jianzhong Zhou, and Ming Liu. 2006. A graph-based semi-supervised learning algorithm for web page classification. In Proc. Sixth Int. Conf. on Intelligent Systems Design and Applications. George Miller. 1995. Wordnet: A lexical database for english. In Communications of the ACL-1995. Zheng-Yu Niu, Dong-Hong Ji, and Chew-Lim Tan. 2005. Word sense disambiguation using labeled propagation based semi-supervised learning. In Proceedings of the ACL-2005. Jahna Otterbacher, Gunes Erkan, and R. Radev Dragomir. 2009. Biased lexrank:passage retrieval using random walks with question-based priors. Information Processing and Management, 45:42–54. Eric W. Prager, John M.and Brown, Dragomir Radev, and Krzysztof Czuba. 2000. One search engine or two for question-answering. In Proc. 9th Text REtrieval conference. Horacio Saggion and Robert Gaizauskas. 2006. Experiments in passage selection and answer extraction for question answering. In Advances in natural language processing, pages 291–302. Springer. Dan Shen and Dietrich Klakow. 2006. Exploring correlation of dependency relation paths for answer extraction. In Proceedings of ACL-2006. Vikas Sindhwani, Wei Chu, and S. Sathiya Keerthi. 2007. Semi-supervised gaussian process classifiers. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-07), pages 1059– 1064. Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using gigaword scale unlabeled data. In Proceedings of the ACL-2008. Nguyen Thanh Tri, Nguyen Minh Le, and Akira Shimazu. 2006. Using semi-supervised learning for question classification. In ICCPOL, pages 31–41. LNCS 4285. Vilademir Vapnik. 1995. The nature of statistical learning theory. In Springer-Verlag, New York. Ellen M. Voorhees. 2003. Overview of the trec 2003 question answering track. In Proc. 12th Text REtrieval conference. Ellen M. Voorhees. 2004. Overview of trec2004 question answering track. Dengyong Zhou and Bernhard Sch¨olkopf. 2004. Learning from labeled and unlabeled data using random walks. In Proceedings of the 26th DAGM Symposium, (Eds.) Rasmussen, C.E., H.H. Blthoff, M.A. Giese and B. Schlkopf, pages 237–244, Berlin, Germany. Springer. Dengyong Zhou, Olivier Bousquet, Thomas N. Lal, Jason Weston, and Bernhard Sch¨olkopf. 2004. Learning with local and global consistency. Advances in Neural Information Processing Systems, 16:321– 328. Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. 2003. Semi-supervised learning: From Gaussian Fields to Gaussian processes. Technical Report CMU-CS-03-175, Carnegie Mellon University, Pittsburgh. 727
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 728–736, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Combining Lexical Semantic Resources with Question & Answer Archives for Translation-Based Answer Finding Delphine Bernhard and Iryna Gurevych Ubiquitous Knowledge Processing (UKP) Lab Computer Science Department Technische Universit¨at Darmstadt, Hochschulstraße 10 D-64289 Darmstadt, Germany http://www.ukp.tu-darmstadt.de/ Abstract Monolingual translation probabilities have recently been introduced in retrieval models to solve the lexical gap problem. They can be obtained by training statistical translation models on parallel monolingual corpora, such as question-answer pairs, where answers act as the “source” language and questions as the “target” language. In this paper, we propose to use as a parallel training dataset the definitions and glosses provided for the same term by different lexical semantic resources. We compare monolingual translation models built from lexical semantic resources with two other kinds of datasets: manually-tagged question reformulations and question-answer pairs. We also show that the monolingual translation probabilities obtained (i) are comparable to traditional semantic relatedness measures and (ii) significantly improve the results over the query likelihood and the vector-space model for answer finding. 1 Introduction The lexical gap (or lexical chasm) often observed between queries and documents or questions and answers is a pervasive problem both in Information Retrieval (IR) and Question Answering (QA). This problem arises from alternative ways of conveying the same information, due to synonymy or paraphrasing, and is especially severe for retrieval over shorter documents, such as sentence retrieval or question retrieval in Question & Answer archives. Several solutions to this problem have been proposed including query expansion (Riezler et al., 2007; Fang, 2008), query reformulation or paraphrasing (Hermjakob et al., 2002; Tomuro, 2003; Zukerman and Raskutti, 2002) and semantic information retrieval (M¨uller et al., 2007). Berger and Lafferty (1999) have formulated a further solution to the lexical gap problem consisting in integrating monolingual statistical translation models in the retrieval process. Monolingual translation models encode statistical word associations which are trained on parallel monolingual corpora. The major drawback of this approach lies in the limited availability of truly parallel monolingual corpora. In practice, training data for translation-based retrieval often consist in question-answer pairs, usually extracted from the evaluation corpus itself (Riezler et al., 2007; Xue et al., 2008; Lee et al., 2008). While collectionspecific translation models effectively encode statistical word associations for the target document collection, it also introduces a bias in the evaluation and makes it difficult to assess the quality of the translation model per se, independently from a specific task and document collection. In this paper, we propose new kinds of datasets for training domain-independent monolingual translation models. We use the definitions and glosses provided for the same term by different lexical semantic resources to automatically train the translation models. This approach has been very recently made possible by the emergence of new kinds of lexical semantic and encyclopedic resources such as Wikipedia and Wiktionary. 
These resources are freely available, up-to-date and have a broad coverage and good quality. Thanks to the combination of several resources, it is possible to obtain monolingual parallel corpora which are large enough to train domain-independent translation models. In addition, we collected question-answer pairs and manually-tagged question reformulations from a social Q&A site. We use these datasets to build further translation models. Translation-based retrieval models have been 728 widely used in practice by the IR and QA community. However, the quality of the semantic information encoded in the translation tables has never been assessed intrinsically. To do so, we compare translation probabilities with concept vector based semantic relatedness measures with respect to human relatedness rankings for reference word pairs. This study provides empirical evidence for the high quality of the semantic information encoded in statistical word translation tables. We then use the translation models in an answer finding task based on a new question-answer dataset which is totally independent from the resources used for training the translation models. This extrinsic evaluation shows that our translation models significantly improve the results over the query likelihood and the vector-space model. The remainder of the paper is organised as follows. Section 2 discusses related work on semantic relatedness and statistical translation models for retrieval. Section 3 presents the monolingual parallel datasets we used for obtaining monolingual translation probabilities. Semantic relatedness experiments are detailed in Section 4. Section 5 presents answer finding experiments. Finally, we conclude in Section 6. 2 Related Work 2.1 Statistical Translation Models for Retrieval Statistical translation models for retrieval have first been introduced by Berger and Lafferty (1999). These models attempt to address synonymy and polysemy problems by encoding statistical word associations trained on monolingual parallel corpora. This method offers several advantages. First, it bases upon a sound mathematical formulation of the retrieval model. Second, it is not as computationally expensive as other semantic retrieval models, since it only relies on a word translation table which can easily be computed before retrieval. The main drawback lies in the availability of suitable training data for the translation probabilities. Berger and Lafferty (1999) initially built synthetic training data consisting of queries automatically generated from documents. Berger et al. (2000) proposed to train translation models on question-answer pairs taken from Usenet FAQs and call-center dialogues, with answers corresponding to the “source” language and questions to the “target” language. Subsequent work in this area often used similar kinds of training data such as question-answer pairs from Yahoo! Answers (Lee et al., 2008) or from the Wondir site (Xue et al., 2008). Lee et al. (2008) tried to further improve translation models based on question-answer pairs by selecting the most important terms to build compact translation models. Other kinds of training data have also been proposed. Jeon et al. (2005) automatically clustered semantically similar questions based on their answers. Murdock and Croft (2005) created a first parallel corpus of synonym pairs extracted from WordNet, and an additional parallel corpus of English words translating to the same Arabic term in a parallel English-Arabic corpus. 
Similar work has also been performed in the area of query expansion using training data consisting of FAQ pages (Riezler et al., 2007) or queries and clicked snippets from query logs (Riezler et al., 2008). All in all, translation models have been shown to significantly improve the retrieval results over traditional baselines for document retrieval (Berger and Lafferty, 1999), question retrieval in Question & Answer archives (Jeon et al., 2005; Lee et al., 2008; Xue et al., 2008) and for sentence retrieval (Murdock and Croft, 2005). Many of the approaches previously described have used parallel data extracted from the retrieval corpus itself. The translation models obtained are therefore domain and collection-specific, which introduces a bias in the evaluation and makes it difficult to assess to what extent the translation model may be re-used for other tasks and document collections. We henceforth propose a new approach for building monolingual translation models relying on domain-independent lexical semantic resources. Moreover, we extensively compare the results obtained by these models with models obtained from a different type of dataset, namely Question & Answer archives. 2.2 Semantic Relatedness The rationale behind translation-based retrieval models is that monolingual translation probabilities encode some form of semantic knowledge. The semantic similarity and relatedness of words has traditionally been assessed through corpusbased and knowledge-based measures. Corpusbased measures include Hyperspace Analogue to 729 Language (HAL) (Lund and Burgess, 1996) and Latent Semantic Analysis (LSA) (Landauer et al., 1998). Knowledge-based measures rely on lexical semantic resources such as WordNet and comprise path length based measures (Rada et al., 1989) and concept vector based measures (Qiu and Frei, 1993). These measures have recently also been applied to new collaboratively constructed resources such as Wikipedia (Zesch et al., 2007) and Wiktionary (Zesch et al., 2008), with good results. While classical measures of semantic relatedness have been extensively studied and compared, based on comparisons with human relatedness judgements or word-choice problems, there is no comparable intrinsic study of the relatedness measures obtained through word translation probabilities. In this study, we use the correlation with human rankings for reference word pairs to investigate how word translation probabilities compare with traditional semantic relatedness measures. To our knowledge, this is the first time that word-toword translation probabilities are used for ranking word-pairs with respect to their semantic relatedness. 3 Parallel Datasets In order to obtain parallel training data for the translation models, we collected three different datasets: manually-tagged question reformulations and question-answer pairs from the WikiAnswers social Q&A site (Section 3.1), and glosses from WordNet, Wiktionary, Wikipedia and Simple Wikipedia (Section 3.2). 3.1 Social Q&A Sites Social Q&A sites, such as Yahoo! Answers and AnswerBag, provide portals where users can ask their own questions as well as answer questions from other users. For our experiments we collected a dataset of questions and answers, as well as question reformulations, from the WikiAnswers1 (WA) web site. WikiAnswers is a social Q&A site similar to Yahoo! Answers and AnswerBag. 
The main originality of WikiAnswers is that users might manually tag question reformulations in order to prevent the duplication of answers to questions asking the same thing in a different way. When a user enters a question that is not already part of the question repository, the web site displays a list of already 1http://wiki.answers.com/ existing questions similar to the one just asked by the user. The user may then freely select the question which paraphrases her question, if available. The question reformulations thus labelled by the users are stored in order to retrieve the same answer when a given question reformulation is asked again. We collected question-answer pairs and question reformulations from the WikiAnswers site. The resulting dataset contains 480,190 questions with answers.2 We use this dataset in order to train two different translation models: Question-Answer Pairs (WAQA) In this setting, question-answer pairs are considered as a parallel corpus. Two different forms of combinations are possible: (Q,A), where questions act as source and answers as target, and (A,Q), where answers act as source and questions as target. Recent work by Xue et al. (2008) has shown that the best results are obtained by pooling the questionanswer pairs {(q, a)1, ..., (q, a)n} and the answerquestion pairs {(a, q)1, ..., (a, q)n} for training, so that we obtain the following parallel corpus: {(q, a)1, ..., (q, a)n}∪{(a, q)1, ..., (a, q)n}. Overall, this corpus contains 1,227,362 parallel pairs and will be referred to as WAQA (WikiAnswers Question-Answers) in the rest of the paper. Question Reformulations (WAQ) In this setting, question and question reformulation pairs are considered as a parallel corpus, e.g. ‘How long do polar bears live?’ and ‘What is the polar bear lifespan?’. For a given user question q1, we retrieve its stored reformulations from the WikiAnswers dataset; q11, q12, .... The original question and reformulations are subsequently combined and pooled to obtain a parallel corpus of question reformulation pairs: {(q1, q11), (q1, q12), ..., (qn, qnm)} ∪ {(q11, q1), (q12, q1), ..., (qnm, qn)}. This corpus contains 4,379,620 parallel pairs and will be referred to as WAQ (WikiAnswers Questions) in the rest of the paper. 3.2 Lexical Semantic Resources Glosses and definitions for the same lexeme in different lexical semantic and encyclopedic resources can actually be considered as near-paraphrases, since they define the same terms and hence have 2A question may have more than one answer. 730 gem moon WAQ WAQA LSR ALLPool WAQ WAQA LSR ALLPool gem explorer gem gem moon moon moon moon 95 ford diamonds xlt land earth lunar land xlt gem gemstone 95 foot lunar sun earth module xlt diamond explorer armstrong apollo earth landed stones demand natural gemstone set landed tides armstrong expedition lists facets diamonds actually neil moons neil ring dash rare natural neil 1969 phase apollo gemstone center synthetic diamond landed armstrong crescent set modual play ruby ford apollo space astronomical foot crystal lights usage ruby walked surface occurs actually Table 1: Sample top translations for different training data. ALL corresponds to WAQ+WAQA+LSR. the same meaning, as shown by the following example for the lexeme “moon”: • Wordnet (sense 1): the natural satellite of the Earth. • English Wiktionary: The Moon, the satellite of planet Earth. • English Wikipedia: The Moon (Latin: Luna) is Earth’s only natural satellite and the fifth largest natural satellite in the Solar System. 
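For concreteness, such glosses can be paired across resources to form monolingual parallel training data; the sketch below is only illustrative (the gloss dictionaries are invented, and the sense-alignment filter that is actually applied to these pairs is described in the next paragraph):

```python
# Sketch of turning glosses from different resources into a monolingual parallel
# corpus: every pair of resources contributes one gloss pair per shared lexeme.
# The gloss dictionaries below are invented for illustration.
from itertools import combinations

glosses = {
    "wordnet":    {"moon": "the natural satellite of the Earth"},
    "wiktionary": {"moon": "The Moon, the satellite of planet Earth"},
    "wikipedia":  {"moon": "The Moon is Earth's only natural satellite"},
}

def build_gloss_pairs(resources):
    """Pair glosses of the same lexeme taken from two different resources."""
    pairs = []
    for res_a, res_b in combinations(resources, 2):
        shared = set(resources[res_a]) & set(resources[res_b])
        for lexeme in shared:
            pairs.append((resources[res_a][lexeme], resources[res_b][lexeme]))
    return pairs

for source, target in build_gloss_pairs(glosses):
    print(source, "|||", target)
```

In the full setup, the resulting pairs would presumably also be pooled in both directions before training, as done for the WikiAnswers question–answer and question–reformulation corpora above.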
We use glosses and definitions contained in the following resources to build a parallel corpus: • WordNet (Fellbaum, 1998). We use a freely available API for WordNet (JWNL3) to access WordNet 3.0. • English Wiktionary. We use the Wiktionary dump from January 11, 2009. • English and Simple English Wikipedia. We use the Wikipedia dump from February 6, 2007 and the Simple Wikipedia dump from July 24, 2008. The Simple English Wikipedia is an English Wikipedia targeted at non-native speakers of English which uses simpler words than the English Wikipedia. Wikipedia and Simple Wikipedia articles do not directly correspond to glosses such as those found in dictionaries, we therefore considered the first paragraph in articles as a surrogate for glosses. Given a list of 86,584 seed lexemes extracted from WordNet, we collected the glosses for each lexeme from the four English resources described 3http://sourceforge.net/projects/ jwordnet/ above. We then built pairs of glosses by considering each possible pair of resource. Given that a lexeme might have different senses, and hence different glosses, it is possible to extract several gloss pairs for one and the same lexeme and one and the same pair of resources. It is therefore necessary to perform word sense alignment. As we do not need perfect training data, but rather large amounts of training data, we used a very simple method consisting in eliminating gloss pairs which did not at least have one lemma in common (excluding stop words and the seed lexeme itself). The final pooled parallel corpus contains 307,136 pairs and is henceforth much smaller than the previous datasets extracted from WikiAnswers. This corpus will be referred to as LSR. 3.3 Translation Model Training We used the GIZA++ SMT Toolkit4 (Och and Ney, 2003) in order to obtain word-to-word translation probabilities from the parallel datasets described above. As is common practice in translation-based retrieval, we utilised the IBM translation model 1. The only pre-processing steps performed for all parallel datasets were tokenisation and stop word removal.5 3.4 Comparison of Word-to-Word Translations Table 1 gives some examples of word-to-word translations obtained for the different parallel corpora used (the column ALLPool will be described in the next section). As evidenced by this table, 4http://code.google.com/p/giza-pp/ 5For stop word removal we used the list available at: http://truereader.com/manuals/onix/ stopwords1.html. 731 the different kinds of data encode different types of information, including semantic relatedness and similarity, as well as morphological relatedness. As could be expected, the quality of the “translations” is variable and heavily dependent on the training data: the WAQ and WAQA models reveal the users’ interests, while the LSR model encodes lexicographic and encyclopedic knowledge. For instance, “gem” is an acronym for “generic electronic module”, which is found in Ford vehicles. Since many question-answer pairs in WA are related to cars, this very particular use of “gem” is predominant in the WAQ and WAQA translation tables. 3.5 Combination of the Datasets In order to investigate the role played by different kinds of training data, we combined the several translation models, using the two methods described by Xue et al. (2008). The first method consists in a linear combination of the word-to-word translation probabilities after training: PLin(wi|wj) = αPWAQ(wi|wj) + γPWAQA(wi|wj) + δPLSR(wi|wj) (1) where α + γ + δ = 1. 
This approach will be labelled with the Lin subscript. The second method consists in pooling the training datasets, i.e. concatenating the parallel corpora, before training. This approach will be labelled with the Pool subscript. Examples for word-to-word translations obtained with this type of combination can be found in the last column for each word in Table 1. The ALLPool setting corresponds to the pooling of all three parallel datasets: WAQ+WAQA+LSR. 4 Semantic Relatedness Experiments The aim of this first experiment is to perform an intrinsic evaluation of the word translation probabilities obtained by comparing them to traditional semantic relatedness measures on the task of ranking word pairs. Human judgements of semantic relatedness can be used to evaluate how well semantic relatedness measures reflect human rankings by correlating their ranking results with Spearman’s rank correlation coefficient. Several evaluation datasets are available for English, but we restrict our study to the larger dataset created by Finkelstein et al. (2002) due to the low coverage of many pairs in the word-to-word translation tables. This dataset comprises two subsets, which have been annotated by different annotators: Fin1–153, containing 153 word pairs, and Fin2–200, containing 200 word pairs. Word-to-word translation probabilities are compared with a concept vector based measure relying on Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007), since this approach has been shown to yield very good results (Zesch et al., 2008). The method consists in representing words as a concept vector, where concepts correspond to WordNet synsets, Wikipedia article titles or Wiktionary entry names. Concept vectors for each word are derived from the textual representation available for each concept, i.e. glosses in WordNet, the full article or the first paragraph of the article in Wikipedia or the full contents of a Wiktionary entry. We refer the reader to (Gabrilovich and Markovitch, 2007; Zesch et al., 2008) for technical details on how the concept vectors are built and used to obtain semantic relatedness values. Table 2 lists Spearman’s rank correlation coefficients obtained for concept vector based measures and translation probabilities. In order to ensure a fair evaluation, we limit the comparison to the word pairs which are contained in all resources and translation tables. Dataset Fin1-153 Fin2-200 Word pairs used 46 42 Concept vectors WordNet .26 .46 Wikipedia .27 .03 WikipediaFirst .30 .38 Wiktionary .39 .58 Translation probabilities WAQ .43 .65 WAQA .54 .37 LSR .51 .29 ALLPool .52 .57 Table 2: Spearman’s rank correlation coefficients on the Fin1-153 and Fin2-200 datasets. Best values for each dataset are in bold format. For WikipediaFirst, the concept vectors are based on the first paragraph of each article. The first observation is that the coverage over the two evaluation datasets is rather small: only 46 pairs have been evaluated for the Fin1-153 dataset and 42 for the Fin2-200 dataset. This is mainly 732 due to the natural absence of many word pairs in the translation tables. Indeed, translation probabilities can only be obtained from observed parallel pairs in the training data. Concept vector based measures are more flexible in that respect since the relatedness value is based on a common representation in a concept vector space. It is therefore possible to measure relatedness for a far greater number of word pairs, as long as they share some concept vector dimensions. 
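For reference, the correlation figures in Table 2 correspond to the following computation; the word pairs and scores in this sketch are invented, not taken from the evaluation data:

```python
# Minimal sketch of the intrinsic evaluation: correlate relatedness scores derived
# from word-to-word translation probabilities with human judgements using
# Spearman's rank correlation. All pairs and values below are made up.
from scipy.stats import spearmanr

# Human relatedness judgements for reference word pairs (hypothetical values).
human = {("tiger", "cat"): 7.35, ("moon", "planet"): 8.08, ("king", "cabbage"): 0.23}

# Relatedness derived from a (hypothetical) word-to-word translation table.
translation = {("tiger", "cat"): 0.021, ("moon", "planet"): 0.034,
               ("king", "cabbage"): 0.0001}

# Restrict the comparison to pairs covered by the translation table, as in Table 2.
covered = [p for p in human if p in translation]
rho, p_value = spearmanr([human[p] for p in covered],
                         [translation[p] for p in covered])
print(f"Spearman's rho = {rho:.2f} over {len(covered)} word pairs (p = {p_value:.3f})")
```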
The second observation is that, on the restricted subset of word pairs considered, the results obtained by word-to-word translation probabilities are most of the time better than those of concept vector measures. However, the differences are not statistically significant.6 5 Answer Finding Experiments 5.1 Retrieval based on Translation Models The second experiment aims at providing an extrinsic evaluation of the translation probabilities by employing them in an answer finding task. In order to perform retrieval, we use a ranking function similar to the one proposed by Xue et al. (2008), which builds upon previous work on translation-based retrieval models and tries to overcome some of their flaws: P(q|D) = Y w∈q P(w|D) (2) P(w|D) = (1 −λ)Pmx(w|D) + λP(w|C) (3) Pmx(w|D) = (1 −β)Pml(w|D) + β X t∈D P(w|t)Pml(t|D) (4) where q is the query, D the document, λ the smoothing parameter for the document collection C and P(w|t) is the probability of translating a document term t to the query term w. The only difference to the original model by Xue et al. (2008) is that we use Jelinek-Mercer smoothing for equation 3 instead of Dirichlet Smoothing, as it has been done by Jeon et al. (2005). In all our experiments, β was set to 0.8 and λ to 0.5. 5.2 The Microsoft Research QA Corpus We performed an extrinsic evaluation of monolingual word translation probabilities by integrating them in the retrieval model previously described for an answer finding task. To this aim, 6Fisher-Z transformation, two-tailed test with α=.05. we used the questions and answers contained in the Microsoft Research Question Answering Corpus.7 This corpus comprises approximately 1.4K questions collected from 10-13 year old schoolchildren, who were asked “If you could talk to an encyclopedia, what would you ask it?”. The answers to the questions have been manually identified in the full text of Encarta 98 and annotated with the following relevance judgements: exact answer (1), off topic (3), on topic - off target (4), partial answer (5). In order to use this dataset for an answer finding task, we consider the annotated answers as the documents to be retrieved and use the questions as the set of test queries. This corpus is particularly well suited to conduct experiments targeted at the lexical gap problem: only 28% of the question-answer pairs correspond to a strong match (two or more query terms in the same answer sentence), while about a half (52%) are a weak match (only one query term matched in the answer sentence) and 16 % are indirect answers which do not explicitly contain the answer but provide enough information for deducing it. Moreover, the Microsoft QA corpus is not limited to a specific topic and entirely independent from the datasets used to build our translation models. The original corpus contained some inconsistencies due to duplicated data and non-labelled entries. After cleaning, we obtained a corpus of 1,364 questions and 9,780 answers. Table 3 gives one example of a question with different answers and relevance judgements. We report the retrieval performance in terms of Mean Average Precision (MAP) and Mean RPrecision (R-prec), MAP being our primary evaluation metric. 
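Returning briefly to the ranking function of Section 5.1, Equations 2–4 can be sketched as follows; the toy collection, translation table and tokenisation are placeholders rather than the actual experimental setup:

```python
# Sketch of the translation-based ranking function (Equations 2-4):
# Jelinek-Mercer smoothing with the collection model and a word-to-word
# translation table P(w|t). All data structures below are illustrative only.
from collections import Counter

def p_ml(word, counts, total):
    """Maximum-likelihood unigram probability."""
    return counts[word] / total if total else 0.0

def p_mx(w, doc_counts, doc_len, trans_table, beta=0.8):
    """Equation 4: mix the document model with translations of document terms."""
    direct = p_ml(w, doc_counts, doc_len)
    translated = sum(trans_table.get(t, {}).get(w, 0.0) * p_ml(t, doc_counts, doc_len)
                     for t in doc_counts)
    return (1 - beta) * direct + beta * translated

def score(query, document, coll_counts, coll_len, trans_table, beta=0.8, lam=0.5):
    """Equations 2-3: query likelihood with Jelinek-Mercer smoothing."""
    doc_counts = Counter(document)
    doc_len = len(document)
    prob = 1.0
    for w in query:
        p_doc = p_mx(w, doc_counts, doc_len, trans_table, beta)
        p_col = p_ml(w, coll_counts, coll_len)
        prob *= (1 - lam) * p_doc + lam * p_col
    return prob

if __name__ == "__main__":
    # Toy collection of two tokenised, stopword-free "answers".
    docs = [["sun", "star", "nuclear", "reactions"], ["moon", "satellite", "earth"]]
    collection = [w for d in docs for w in d]
    coll_counts, coll_len = Counter(collection), len(collection)
    # Hypothetical translation probabilities P(query_word | document_word).
    trans = {"star": {"bright": 0.05}, "sun": {"bright": 0.02}}
    query = ["sun", "bright"]
    for d in docs:
        print(d, score(query, d, coll_counts, coll_len, trans))
```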
We consider the following relevance categories, corresponding to increasing levels of tolerance for inexact or partial answers: • MAP1, R-Prec1: exact answer (1) • MAP1,5, R-Prec1,5: exact answer (1) or partial answer (5) • MAP1,4,5, R-Prec1,4,5: exact answer (1) or partial answer (5) or on topic - off target (4) Similarly to the training data for translation models, the only pre-processing steps performed 7http://research.microsoft. com/en-us/downloads/ 88c0021c-328a-4148-a158-a42d7331c6cf/ default.aspx 733 Question Why is the sun bright? Exact answer Star, large celestial body composed of gravitationally contained hot gases emitting electromagnetic radiation, especially light, as a result of nuclear reactions inside the star. The sun is a star. Partial answer Solar Energy, radiant energy produced in the sun as a result of nuclear fusion reactions (see Nuclear Energy; Sun). On topic - off target The sun has a magnitude of -26.7, inasmuch as it is about 10 billion times as bright as Sirius in the earth’s sky. Table 3: Example relevance judgements in the Microsoft QA corpus. Model MAP1 R-Prec1 MAP1,5 R-Prec1,5 MAP1,4,5 R-Prec1,4,5 QLM 0.2679 0.1941 0.3179 0.2963 0.3215 0.3057 Lucene 0.2705 0.2002 0.3167 0.2956 0.3192 0.3030 WAQ 0.3002 0.2149* 0.3557 0.3269 0.3583 0.3375 WAQA 0.3000 0.2211 0.3640 0.3328 0.3664 0.3405 LSR 0.3046 0.2171* 0.3666 0.3327 0.3723 0.3464 WAQ+WAQAPool 0.3062 0.2259 0.3685 0.3339 0.3716 0.3454 WAQ+LSRPool 0.3117 0.2224 0.3736 0.3399 0.3766 0.3487 WAQA+LSRPool 0.3135 0.2267 0.3818 0.3444 0.3840 0.3515 WAQ+WAQA+LSRPool 0.3152 0.2286 0.3832 0.3495 0.3848 0.3569 WAQ+WAQA+LSRLin 0.3215 0.2343 0.3921 0.3536 0.3967 0.3673 Table 4: Answer retrieval results. The WAQ+WAQA+LSRLin results have been obtained with α=0.2 γ=0.2 and δ=0.6 (the parameter values have been determined empirically based on MAP and R-Prec). The performance gaps between the translation-based models and the baseline models are statistically significant, except for those marked with a ‘*’ (two-tailed paired t-test, p < 0.05). for this corpus were tokenisation and stop word removal. Due to the small size of the answer corpus, we built an open vocabulary background collection model to deal with out of vocabulary words by smoothing the unigram probabilities with Good-Turing discounting, using the SRILM toolkit8 (Stolcke, 2002). 5.3 Results As baselines, we consider the query-likelihood model (QLM), corresponding to equation 4 with β = 0, and Lucene.9 The results reported in Table 4 show that models incorporating monolingual translation probabilities perform consistently better than both baseline systems especially when they are used in combination. It is however difficult to provide a ranking of the different types of training data based on the retrieval results: it seems that LSR is slightly more performant than WAQ and WAQA, both alone and 8http://www.speech.sri.com/projects/ srilm/ 9http://lucene.apache.org in combination, but the improvement is minor. It is worth noticing that while the LSR training data are comparatively smaller than WAQ and WAQA, they however yield comparable results. The linear combination of datasets (WAQ+WAQA+LSRLin) yields statistically significant performance improvement when compared to the models without combinations (except when compared to WAQA for R-Prec1, p>0.05), which shows that the different datasets and resources used are complementary and each contribute to the overall result. Three answer retrieval examples are given in Figure 1. 
They provide further evidence for the results obtained. The correct answer to the first question “Who invented Halloween?” is retrieved by the WAQ+WAQA+LSRLin model, but not by the QLM. This is a case of a weak match with only “Halloween” as matching term. The WAQ+WAQA+LSRLin model is however able to establish the connection between the question term “invented” and the answer term “originated”. Questions 2 and 3 show that translation probabilities can also replace word normali734 QLM top answer WAQ+WAQA+LSRLin top answer Question 1: Who invented Halloween? Halloween occurs on October 31 and is observed in the U.S. and other countries with masquerading, bonfires, and games. The observances connected with Halloween are thought to have originated among the ancient Druids, who believed that on that evening, Saman, the lord of the dead, called forth hosts of evil spirits. Question 2: Can mosquito bites spread AIDS? Another species, the Asian tiger mosquito, has caused health experts concern since it was first detected in the United States in 1985. Probably arriving in shipments of used tire casings, this fierce biter can spread a type of encephalitis, dengue fever, and other diseases. Studies have shown no evidence of HIV transmission through insects – even in areas where there are many cases of AIDS and large populations of insects such as mosquitoes. Question 3: How do the mountains form into a shape? In 1985, scientists vaporized graphite to produce a stable form of carbon molecule consisting of 60 carbon atoms in a roughly spherical shape, looking like a soccer ball. Geologists believe that most mountains are formed by movements in the earth’s crust. Figure 1: Top answer retrieved by QLM and WAQ+WAQA+LSRLin. Lexical overlaps between question and answer are in bold, morphological relations are in italics. sation techniques such as stemming and lemmatisation, since the answers do not contain the question terms “mosquito” (for question 2) and “form” (for question 3), but only their inflected forms “mosquitoes” and “formed”. 6 Conclusion and Future Work We have presented three datasets for training statistical word translation models for use in answer finding: question-answer pairs, manually-tagged question reformulations and glosses for the same term extracted from several lexical semantic resources. It is the first time that the two latter types of datasets have been used for this task. We have also provided the first intrinsic evaluation of word translation probabilities with respect to human relatedness rankings for reference word pairs. This evaluation has shown that, despite the simplicity of the method, monolingual translation models are comparable to concept vector semantic relatedness measures for this task. Moreover, models based on translation probabilities yield significant improvement over baseline approaches for answer finding, especially when different types of training data are combined. The experiments bear strong evidence that several datasets encode different and complementary types of knowledge, which are all useful for retrieval. In order to integrate semantics in retrieval, it is therefore advisable to combine both knowledge specific to the task at hand, e.g. question-answer pairs, and external knowledge, as contained in lexical semantic resources. In the future, we would like to further evaluate the models presented in this paper for different tasks, such as question paraphrase retrieval, and larger datasets. 
We also plan to improve question analysis by automatically identifying question topic and question focus. Acknowledgments We thank Konstantina Garoufi, Nada Mimouni, Christof M¨uller and Torsten Zesch for contributions to this work. We also thank Mark-Christoph M¨uller and the anonymous reviewers for insightful comments. We are grateful to Bill Dolan for making us aware of the Microsoft Research QA Corpus. This work has been supported by the German Research Foundation (DFG) under the grant No. GU 798/3-1, and by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under the grant No. I/82806. References Adam Berger and John Lafferty. 1999. Information Retrieval as Statistical Translation. In Proceedings of the 22nd Annual International Conference on Re735 search and Development in Information Retrieval (SIGIR ’99), pages 222–229. Adam Berger, Rich Caruana, David Cohn, Dayne Freitag, and Vibhu Mittal. 2000. Bridging the Lexical Chasm: Statistical Approaches to Answer-Finding. In Proceedings of the 23rd Annual International Conference on Research and Development in Information Retrieval (SIGIR ’00), pages 192–199. Hui Fang. 2008. A Re-examination of Query Expansion Using Lexical Resources. In Proceedings of ACL-08: HLT, pages 139–147, Columbus, Ohio. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing Search in Context: the Concept Revisited. ACM Transactions on Information Systems (TOIS), 20(1):116–131, January. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing Semantic Relatedness using Wikipediabased Explicit Semantic Analysis. In Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 1606–1611. Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural Language Based Reformulation Resource and Wide Exploitation for Question Answering. In Proceedings of the Eleventh Text Retrieval Conference (TREC 2002). Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding Similar Questions in Large Question and Answer Archives. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management (CIKM ’05), pages 84–90. Thomas K. Landauer, Darrell Laham, and Peter Foltz. 1998. Learning Human-like Knowledge by Singular Value Decomposition: A Progress Report. Advances in Neural Information Processing Systems, 10:45–51. Jung-Tae Lee, Sang-Bum Kim, Young-In Song, and Hae-Chang Rim. 2008. Bridging Lexical Gaps between Queries and Questions on Large Online Q&A Collections with Compact Translation Models. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 410–418, Honolulu, Hawaii. Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments & Computers, 28(2):203–208. Christof M¨uller, Iryna Gurevych, and Max M¨uhlh¨auser. 2007. Integrating Semantic Knowledge into Text Similarity and Information Retrieval. In Proceedings of the First IEEE International Conference on Semantic Computing (ICSC), pages 257–264. Vanessa Murdock and W. Bruce Croft. 2005. A Translation Model for Sentence Retrieval. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP’05), pages 684–691. Franz J. Och and Hermann Ney. 2003. 
A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19–51. Yonggang Qiu and Hans-Peter Frei. 1993. Concept Based Query Expansion. In Proceedings of the 16th Annual International Conference on Research and Development in Information Retrieval (SIGIR ’93), pages 160–169. Roy Rada, Hafedh Mili, Ellen Bicknell, and Maria Blettner. 1989. Development and Application of a Metric on Semantic Nets. IEEE Transactions on Systems, Man and Cybernetics, 19(1):17–30. Stefan Riezler, Alexander Vasserman, Ioannis Tsochantaridis, Vibhu Mittal, and Yi Liu. 2007. Statistical Machine Translation for Query Expansion in Answer Retrieval. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL’ 07), pages 464–471. Stefan Riezler, Yi Liu, and Alexander Vasserman. 2008. Translating Queries into Snippets for Improved Query Expansion. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 737–744. Andreas Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), volume 2, pages 901–904. Noriko Tomuro. 2003. Interrogative Reformulation Patterns and Acquisition of Question Paraphrases. In Proceedings of the International Workshop on Paraphrasing, pages 33–40. Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval Models for Question and Answer Archives. In Proceedings of the 31st Annual International Conference on Research and Development in Information Retrieval (SIGIR ’08), pages 475– 482. Torsten Zesch, Iryna Gurevych, and Max M¨uhlh¨auser. 2007. Analyzing and Accessing Wikipedia as a Lexical Semantic Resource. In Data Structures for Linguistic Resources and Applications, pages 197–205. Gunter Narr, T¨ubingen. Torsten Zesch, Christof M¨uller, and Iryna Gurevych. 2008. Using Wiktionary for Computing Semantic Relatedness. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (AAAI 2008), pages 861–867. Ingrid Zukerman and Bhavani Raskutti. 2002. Lexical Query Paraphrasing for Document Retrieval. In Proceedings of the 19th International Conference on Computational linguistics, pages 1177– 1183, Taipei, Taiwan. 736
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 737–745, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Answering Opinion Questions with Random Walks on Graphs Fangtao Li, Yang Tang, Minlie Huang, and Xiaoyan Zhu State Key Laboratory on Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Sci. and Tech., Tsinghua University, Beijing 100084, China {fangtao06,tangyang9}@gmail.com,{aihuang,zxy-dcs}@tsinghua.edu.cn Abstract Opinion Question Answering (Opinion QA), which aims to find the authors’ sentimental opinions on a specific target, is more challenging than traditional factbased question answering problems. To extract the opinion oriented answers, we need to consider both topic relevance and opinion sentiment issues. Current solutions to this problem are mostly ad-hoc combinations of question topic information and opinion information. In this paper, we propose an Opinion PageRank model and an Opinion HITS model to fully explore the information from different relations among questions and answers, answers and answers, and topics and opinions. By fully exploiting these relations, the experiment results show that our proposed algorithms outperform several state of the art baselines on benchmark data set. A gain of over 10% in F scores is achieved as compared to many other systems. 1 Introduction Question Answering (QA), which aims to provide answers to human-generated questions automatically, is an important research area in natural language processing (NLP) and much progress has been made on this topic in previous years. However, the objective of most state-of-the-art QA systems is to find answers to factual questions, such as “What is the longest river in the United States?” and “Who is Andrew Carnegie?” In fact, rather than factual information, people would also like to know about others’ opinions, thoughts and feelings toward some specific objects, people and events. Some examples of these questions are: “How is Bush’s decision not to ratify the Kyoto Protocol looked upon by Japan and other US allies?”(Stoyanov et al., 2005) and “Why do people like Subway Sandwiches?” from TAC 2008 (Dang, 2008). Systems designed to deal with such questions are called opinion QA systems. Researchers (Stoyanov et al., 2005) have found that opinion questions have very different characteristics when compared with fact-based questions: opinion questions are often much longer, more likely to represent partial answers rather than complete answers and vary much more widely. These features make opinion QA a harder problem to tackle than fact-based QA. Also as shown in (Stoyanov et al., 2005), directly applying previous systems designed for fact-based QA onto opinion QA tasks would not achieve good performances. Similar to other complex QA tasks (Chen et al., 2006; Cui et al., 2007), the problem of opinion QA can be viewed as a sentence ranking problem. The Opinion QA task needs to consider not only the topic relevance of a sentence (to identify whether this sentence matches the topic of the question) but also the sentiment of a sentence (to identify the opinion polarity of a sentence). Current solutions to opinion QA tasks are generally in ad hoc styles: the topic score and the opinion score are usually separately calculated and then combined via a linear combination (Varma et al., 2008) or just filter out the candidate without matching the question sentiment (Stoyanov et al., 2005). 
However, topic and opinion are not independent in reality. The opinion words are closely associated with their contexts. Another problem is that existing algorithms compute the score for each answer candidate individually, in other words, they do not consider the relations between answer candidates. The quality of a answer candidate is not only determined by the relevance to the question, but also by other candidates. For example, the good answer may be mentioned by many candidates. In this paper, we propose two models to address the above limitations of previous sentence 737 ranking models. We incorporate both the topic relevance information and the opinion sentiment information into our sentence ranking procedure. Meanwhile, our sentence ranking models could naturally consider the relationships between different answer candidates. More specifically, our first model, called Opinion PageRank, incorporates opinion sentiment information into the graph model as a condition. The second model, called Opinion HITS model, considers the sentences as authorities and both question topic information and opinion sentiment information as hubs. The experiment results on the TAC QA data set demonstrate the effectiveness of the proposed Random Walk based methods. Our proposed method performs better than the best method in the TAC 2008 competition. The rest of this paper is organized as follows: Section 2 introduces some related works. We will discuss our proposed models in Section 3. In Section 4, we present an overview of our opinion QA system. The experiment results are shown in Section 5. Finally, Section 6 concludes this paper and provides possible directions for future work. 2 Related Work Few previous studies have been done on opinion QA. To our best knowledge, (Stoyanov et al., 2005) first created an opinion QA corpus OpQA. They find that opinion QA is a more challenging task than factual question answering, and they point out that traditional fact-based QA approaches may have difficulty on opinion QA tasks if unchanged. (Somasundaran et al., 2007) argues that making finer grained distinction of subjective types (sentiment and arguing) further improves the QA system. For non-English opinion QA, (Ku et al., 2007) creates a Chinese opinion QA corpus. They classify opinion questions into six types and construct three components to retrieve opinion answers. Relevant answers are further processed by focus detection, opinion scope identification and polarity detection. Some works on opinion mining are motivated by opinion question answering. (Yu and Hatzivassiloglou, 2003) discusses a necessary component for an opinion question answering system: separating opinions from fact at both the document and sentence level. (Soo-Min and Hovy, 2005) addresses another important component of opinion question answering: finding opinion holders. More recently, TAC 2008 QA track (evolved from TREC) focuses on finding answers to opinion questions (Dang, 2008). Opinion questions retrieve sentences or passages as answers which are relevant for both question topic and question sentiment. Most TAC participants employ a strategy of calculating two types of scores for answer candidates, which are the topic score measure and the opinion score measure (the opinion information expressed in the answer candidate). However, most approaches simply combined these two scores by a weighted sum, or removed candidates that didn’t match the polarity of questions, in order to extract the opinion answers. 
Algorithms based on Markov Random Walk have been proposed to solve different kinds of ranking problems, most of which are inspired by the PageRank algorithm (Page et al., 1998) and the HITS algorithm (Kleinberg, 1999). These two algorithms were initially applied to the task of Web search and some of their variants have been proved successful in a number of applications, including fact-based QA and text summarization (Erkan and Radev, 2004; Mihalcea and Tarau, 2004; Otterbacher et al., 2005; Wan and Yang, 2008). Generally, such models would first construct a directed or undirected graph to represent the relationship between sentences and then certain graph-based ranking methods are applied on the graph to compute the ranking score for each sentence. Sentences with high scores are then added into the answer set or the summary. However, to the best of our knowledge, all previous Markov Random Walk-based sentence ranking models only make use of topic relevance information, i.e. whether this sentence is relevant to the fact we are looking for, thus they are limited to fact-based QA tasks. To solve the opinion QA problems, we need to consider both topic and sentiment in a non-trivial manner. 3 Our Models for Opinion Sentence Ranking In this section, we formulate the opinion question answering problem as a topic and sentiment based sentence ranking task. In order to naturally integrate the topic and opinion information into the graph based sentence ranking framework, we propose two random walk based models for solving the problem, i.e. an Opinion PageRank model and an Opinion HITS model. 738 3.1 Opinion PageRank Model In order to rank sentence for opinion question answering, two aspects should be taken into account. First, the answer candidate is relevant to the question topic; second, the answer candidate is suitable for question sentiment. Considering Question Topic: We first introduce how to incorporate the question topic into the Markov Random Walk model, which is similar as the Topic-sensitive LexRank (Otterbacher et al., 2005). Given the set Vs = {vi} containing all the sentences to be ranked, we construct a graph where each node represents a sentence and each edge weight between sentence vi and sentence vj is induced from sentence similarity measure as follows: p(i →j) = f(i→j) P|Vs| k=1 f(i→k), where f(i →j) represents the similarity between sentence vi and sentence vj, here is cosine similarity (Baeza-Yates and Ribeiro-Neto, 1999). We define f(i →i) = 0 to avoid self transition. Note that p(i →j) is usually not equal to p(j →i). We also compute the similarity rel(vi|q) of a sentence vi to the question topic q using the cosine measure. This relevance score is then normalized as follows to make the sum of all relevance values of the sentences equal to 1: rel′(vi|q) = rel(vi|q) P|Vs| k=1 rel(vk|q). The saliency score Score(vi) for sentence vi can be calculated by mixing topic relevance score and scores of all other sentences linked with it as follows: Score(vi) = µ P j̸=i Score(vj) · p(j → i)+(1−µ)rel′(vi|q), where µ is the damping factor as in the PageRank algorithm. The matrix form is: ˜p = µ ˜ MT ˜p + (1 − µ)⃗α, where ˜p = [Score(vi)]|Vs|×1 is the vector of saliency scores for the sentences; ˜ M = [p(i →j)]|Vs|×|Vs| is the graph with each entry corresponding to the transition probability; ⃗α = [rel′(vi|q)]|Vs|×1 is the vector containing the relevance scores of all the sentences to the question. 
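A minimal sketch of this topic-sensitive random walk is given below; the similarity and relevance values are toy examples, and the damping factor follows the notation above:

```python
# Sketch of the topic-sensitive Markov random walk described above:
# Score = mu * M^T * Score + (1 - mu) * alpha, where M holds row-normalised
# inter-sentence similarities and alpha the normalised relevance to the question.
import numpy as np

def topic_sensitive_rank(sim, rel, mu=0.5, tol=1e-6, max_iter=200):
    """sim[i, j]: similarity f(i -> j) with zero diagonal (no self transition);
    rel[i]: relevance of sentence i to the question topic."""
    n = sim.shape[0]
    row_sums = sim.sum(axis=1, keepdims=True)
    M = np.divide(sim, row_sums, out=np.zeros_like(sim), where=row_sums > 0)
    alpha = rel / rel.sum() if rel.sum() > 0 else np.full(n, 1.0 / n)
    score = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_score = mu * M.T @ score + (1 - mu) * alpha
        if np.abs(new_score - score).sum() < tol:
            return new_score
        score = new_score
    return score

if __name__ == "__main__":
    # Toy cosine similarities between three candidate sentences (zero diagonal)
    # and their relevance to the question topic.
    sim = np.array([[0.0, 0.6, 0.1],
                    [0.6, 0.0, 0.3],
                    [0.1, 0.3, 0.0]])
    rel = np.array([0.5, 0.3, 0.2])
    print(topic_sensitive_rank(sim, rel))
```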
The above process can be considered as a Markov chain by taking the sentences as the states and the corresponding transition matrix is given by A ′ = µ ˜ MT + (1 −µ)⃗e⃗αT . Considering Topics and Sentiments Together: In order to incorporate the opinion information and topic information for opinion sentence ranking in an unified framework, we propose an Opinion PageRank model (Figure 1) based on a two-layer link graph (Liu and Ma, 2005; Wan and Yang, 2008). In our opinion PageRank model, the Figure 1: Opinion PageRank first layer contains all the sentiment words from a lexicon to represent the opinion information, and the second layer denotes the sentence relationship in the topic sensitive Markov Random Walk model discussed above. The dashed lines between these two layers indicate the conditional influence between the opinion information and the sentences to be ranked. Formally, the new representation for the twolayer graph is denoted as G∗= ⟨Vs, Vo, Ess, Eso⟩, where Vs = {vi} is the set of sentences and Vo = {oj} is the set of sentiment words representing the opinion information; Ess = {eij|vi, vj ∈Vs} corresponds to all links between sentences and Eso = {eij|vi ∈Vs, oj ∈Vo} corresponds to the opinion correlation between a sentence and the sentiment words. For further discussions, we let π(oj) ∈[0, 1] denote the sentiment strength of word oj, and let ω(vi, oj) ∈[0, 1] denote the strength of the correlation between sentence vi and word oj. We incorporate the two factors into the transition probability from vi to vj and the new transition probability p(i →j|Op(vi), Op(vj)) is defined as f(i→j|Op(vi),Op(vj)) P|Vs| k=1 f(i→k|Op(vi),Op(vk)) when P f ̸= 0, and defined as 0 otherwise, where Op(vi) is denoted as the opinion information of sentence vi, and f(i →j|Op(vi), Op(vj)) is the new similarity score between two sentences vi and vj, conditioned on the opinion information expressed by the sentiment words they contain. We propose to compute the conditional similarity score by linearly combining the scores conditioned on the source opinion (i.e. f(i →j|Op(vi))) and the destination opinion (i.e. f(i →j|Op(vj))) as follows: f(i →j|Op(vi), Op(vj)) = λ · f(i →j|Op(vi)) + (1 −λ) · f(i →j|Op(vj)) = λ · X ok∈Op(vi) f(i →j) · π(ok) · ω(ok, vi) + (1 −λ) · X ok′ ∈Op(vj)) (i →j) · π(ok′ ) · ω(ok′ , vj) (1) where λ ∈[0, 1] is the combination weight controlling the relative contributions from the source 739 opinion and the destination opinion. In this study, for simplicity, we define π(oj) as 1, if oj exists in the sentiment lexicon, otherwise 0. And ω(vi, oj) is described as an indicative function. In other words, if word oj appears in the sentence vi, ω(vi, oj) is equal to 1. Otherwise, its value is 0. Then the new row-normalized matrix ˜ M∗is defined as follows: ˜ M∗ ij = p(i →j|Op(i), Opj). The final sentence score for Opinion PageRank model is then denoted by: Score(vi) = µ · P j̸=i Score(vj) · ˜ M∗ ji + (1 −µ) · rel′(si|q). The matrix form is: ˜p = µ ˜ M∗T ˜p + (1 −µ) · ⃗α. The final transition matrix is then denoted as: A∗= µ ˜ M∗T +(1−µ)⃗e⃗αT and the sentence scores are obtained by the principle eigenvector of the new transition matrix A∗. 3.2 Opinion HITS Model The word’s sentiment score is fixed in Opinion PageRank. This may encounter problem when the sentiment score definition is not suitable for the specific question. We propose another opinion sentence ranking model based on the popular graph ranking algorithm HITS (Kleinberg, 1999). 
This model can dynamically learn the word sentiment score towards a specific question. HITS algorithm distinguishes the hubs and authorities in the objects. A hub object has links to many authorities, and an authority object has high-quality content and there are many hubs linking to it. The hub scores and authority scores are computed in a recursive way. Our proposed opinion HITS algorithm contains three layers. The upper level contains all the sentiment words from a lexicon, which represent their opinion information. The lower level contains all the words, which represent their topic information. The middle level contains all the opinion sentences to be ranked. We consider both the opinion layer and topic layer as hubs and the sentences as authorities. Figure 2 gives the bipartite graph representation, where the upper opinion layer is merged with lower topic layer together as the hubs, and the middle sentence layer is considered as the authority. Formally, the representation for the bipartite graph is denoted as G# = ⟨Vs, Vo, Vt, Eso, Est⟩, where Vs = {vi} is the set of sentences. Vo = {oj} is the set of all the sentiment words representing opinion information, Vt = {tj} is the set of all the words representing topic information. Eso = {eij|vi ∈Vs, oj ∈Vo} corresponds to the Figure 2: Opinion HITS model correlations between sentence and opinion words. Each edge eij is associated with a weight owij denoting the strength of the relationship between the sentence vi and the opinion word oj. The weight owij is 1 if the sentence vi contains word oj, otherwise 0. Est denotes the relationship between sentence and topic word. Its weight twij is calculated by tf · idf (Otterbacher et al., 2005). We define two matrixes O = (Oij)|Vs|×|Vo| and T = (Tij)|Vs|×|Vt| as follows, for Oij = owij, and if sentence i contains word j, therefore owij is assigned 1, otherwise owij is 0. Tij = twij = tfj · idfj (Otterbacher et al., 2005). Our new opinion HITS model is different from the basic HITS algorithm in two aspects. First, we consider the topic relevance when computing the sentence authority score based on the topic hub level as follows: Authsen(vi) ∝P twij>0 twij · topic score(j)·hubtopic(j), where topic score(j) is empirically defined as 1, if the word j is in the topic set (we will discuss in next section), and 0.1 otherwise. Second, in our opinion HITS model, there are two aspects to boost the sentence authority score: we simultaneously consider both topic information and opinion information as hubs. The final scores for authority sentence, hub topic and hub opinion in our opinion HITS model are defined as: Auth(n+1) sen (vi) = (2) γ · X twij >0 twij · topic score(j) · Hub(n) topic(tj) + (1 −γ) · X owij>0 owij · Hub(n) opinion(oj) Hub(n+1) topic (ti) = X twki>0 twki · Auth(n) sen(vi) (3) Hub(n+1) opinion(oi) = X owki>0 owki · Auth(n) sen(vi) (4) 740 Figure 3: Opinion Question Answering System The matrix form is: a(n+1) = γ · T · e · tT s · I · h(n) t + (1 −γ) · O · h(n) o (5) h(n+1) t = T T · a(n) (6) h(n+1) o = OT · a(n) (7) where e is a |Vt|×1 vector with all elements equal to 1 and I is a |Vt| × |Vt| identity matrix, ts = [topic score(j)]|Vt|×1 is the score vector for topic words, a(n) = [Auth(n) sen(vi)]|Vs|×1 is the vector authority scores for the sentence in the nth iteration, and the same as h(n) t = [Hub(n) topic(tj)]|Vt|×1, h(n) o = [Hub(n) opinion(tj)]|Vo|×1. In order to guarantee the convergence of the iterative form, authority score and hub score are normalized after each iteration. 
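A compact sketch of the iterative updates in Equations 5–7, with normalisation after each step as described above, is given below; the matrices are tiny hand-made examples rather than values from the actual system:

```python
# Sketch of the Opinion HITS iteration (Equations 5-7): sentence authority scores
# are reinforced by both topic-word hubs (weighted by topic_score) and
# opinion-word hubs. All matrices below are toy examples.
import numpy as np

def opinion_hits(T, O, topic_score, gamma=0.5, tol=1e-6, max_iter=500):
    """T: |Vs| x |Vt| tf-idf weights; O: |Vs| x |Vo| sentence/opinion-word
    incidence; topic_score: |Vt| vector (1 for topic-set words, 0.1 otherwise)."""
    n_sen, n_top = T.shape
    n_op = O.shape[1]
    auth = np.ones(n_sen)
    hub_t = np.ones(n_top)
    hub_o = np.ones(n_op)
    for _ in range(max_iter):
        new_auth = gamma * (T * topic_score) @ hub_t + (1 - gamma) * O @ hub_o
        new_hub_t = T.T @ auth
        new_hub_o = O.T @ auth
        # Normalise authority and hub scores after each iteration.
        new_auth /= np.linalg.norm(new_auth) or 1.0
        new_hub_t /= np.linalg.norm(new_hub_t) or 1.0
        new_hub_o /= np.linalg.norm(new_hub_o) or 1.0
        converged = np.abs(new_auth - auth).max() < tol
        auth, hub_t, hub_o = new_auth, new_hub_t, new_hub_o
        if converged:
            break
    return auth, hub_t, hub_o

if __name__ == "__main__":
    # Two sentences, three topic words, two opinion words (toy values).
    T = np.array([[0.5, 0.2, 0.0],
                  [0.1, 0.0, 0.4]])
    O = np.array([[1.0, 0.0],
                  [1.0, 1.0]])
    topic_score = np.array([1.0, 1.0, 0.1])
    auth, _, _ = opinion_hits(T, O, topic_score)
    print("sentence authority scores:", auth)
```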
For computation of the final scores, the initial scores of all nodes, including sentences, topic words and opinion words, are set to 1 and the above iterative steps are used to compute the new scores until convergence. Usually the convergence of the iteration algorithm is achieved when the difference between the scores computed at two successive iterations for any nodes falls below a given threshold (10e-6 in this study). We use the authority scores as the saliency scores in the Opinion HITS model. The sentences are then ranked by their saliency scores. 4 System Description In this section, we introduce the opinion question answering system based on the proposed graph methods. Figure 3 shows five main modules: Question Analysis: It mainly includes two components. 1).Sentiment Classification: We classify all opinion questions into two categories: positive type or negative type. We extract several types of features, including a set of pattern features, and then design a classifier to identify sentiment polarity for each question (similar as (Yu and Hatzivassiloglou, 2003)). 2).Topic Set Expansion: The opinion question asks opinions about a particular target. Semantic role labeling based (Carreras and Marquez, 2005) and rule based techniques can be employed to extract this target as topic word. We also expand the topic word with several external knowledge bases: Since all the entity synonyms are redirected into the same page in Wikipedia (Rodrigo et al., 2007), we collect these redirection synonym words to expand topic set. We also collect some related lists as topic words. For example, given question “What reasons did people give for liking Ed Norton’s movies?”, we collect all the Norton’s movies from IMDB as this question’s topic words. Document Retrieval: The PRISE search engine, supported by NIST (Dang, 2008), is employed to retrieve the documents with topic word. Answer Candidate Extraction: We split retrieved documents into sentences, and extract sentences containing topic words. In order to improve recall, we carry out the following process to handle the problem of coreference resolution: We classify the topic word into four categories: male, female, group and other. Several pronouns are defined for each category, such as ”he”, ”him”, ”his” for male category. If a sentence is determined to contain the topic word, and its next sentence contains the corresponding pronouns, then the next sentence is also extracted as an answer candidate, similar as (Chen et al., 2006). Answer Ranking: The answer candidates are ranked by our proposed Opinion PageRank method or Opinion HITS method. Answer Selection by Removing Redundancy: We incrementally add the top ranked sentence into the answer set, if its cosine similarity with every extracted answer doesn’t exceed a predefined threshold, until the number of selected sentence (here is 40) is reached. 5 Experiments 5.1 Experiment Step 5.1.1 Dataset We employ the dataset from the TAC 2008 QA track. The task contains a total of 87 squishy 741 opinion questions.1 These questions have simple forms, and can be easily divided into positive type or negative type, for example “Why do people like Mythbusters?” and “What were the specific actions or reasons given for a negative attitude towards Mahmoud Ahmadinejad?”. The initial topic word for each question (called target in TAC) is also provided. Since our work in this paper focuses on sentence ranking for opinion QA, these characteristics of TAC data make it easy to process question analysis. 
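The answer selection step of Section 4 amounts to a greedy cosine-similarity filter over the ranked candidates; a minimal sketch is given below, with the similarity threshold and word-level tokenisation as assumptions rather than the system's exact settings:

```python
# Sketch of the greedy redundancy-removal step from Section 4: walk down the
# ranked candidates and keep a sentence only if its cosine similarity with every
# already-selected answer stays below a threshold.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_answers(ranked_sentences, max_answers=40, threshold=0.7):
    """ranked_sentences: candidate sentences ordered by saliency score."""
    selected = []
    for sent in ranked_sentences:
        tokens = sent.lower().split()
        if all(cosine(tokens, s.lower().split()) <= threshold for s in selected):
            selected.append(sent)
        if len(selected) >= max_answers:
            break
    return selected

if __name__ == "__main__":
    ranked = ["People like Zillow because it is free and accurate.",
              "Zillow is liked because it is accurate and free.",
              "The site gives home value estimates easily."]
    print(select_answers(ranked, max_answers=2, threshold=0.8))
```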
Answers for all questions must be retrieved from the TREC Blog06 collection (Craig Macdonald and Iadh Ounis, 2006). The collection is a large sample of the blog sphere, crawled over an eleven-week period from December 6, 2005 until February 21, 2006. We retrieve the top 50 documents for each question. 5.1.2 Evaluation Metrics We adopt the evaluation metrics used in the TAC squishy opinion QA task (Dang, 2008). The TAC assessors create a list of acceptable information nuggets for each question. Each nugget will be assigned a normalized weight based on the number of assessors who judged it to be vital. We use these nuggets and corresponding weights to assess our approach. Three human assessors complete the evaluation process. Every question is scored using nugget recall (NR) and an approximation to nugget precision (NP) based on length. The final score will be calculated using F measure with TAC official value β = 3 (Dang, 2008). This means recall is 3 times as important as precision: F(β = 3) = (32 + 1) · NP · NR 32 · NP + NR where NP is the sum of weights of nuggets returned in response over the total sum of weights of all nuggets in nugget list, and NP = 1 − (length −allowance)/(length) if length is no less than allowance and 0 otherwise. Here allowance = 100 × (♯nuggets returned) and length equals to the number of non-white characters in strings. We will use average F Score to evaluate the performance for each system. 5.1.3 Baseline The baseline combines the topic score and opinion score with a linear weight for each answer candidate, similar to the previous ad-hoc algorithms: final score = (1 −α) × opinion score + α × topic score (8) 13 questions were dropped from the evaluation due to no correct answers found in the corpus The topic score is computed by the cosine similarity between question topic words and answer candidate. The opinion score is calculated using the number of opinion words normalized by the total number of words in candidate sentence. 5.2 Performance Evaluation 5.2.1 Performance on Sentimental Lexicons Lexicon Neg Pos Description Name Size Size 1 HowNet 2700 2009 English translation of positive/negative Chinese words 2 Senti4800 2290 Words with a positive WordNet or negative score above 0.6 3 Intersec640 518 Words appeared in tion both 1 and 2 4 Union 6860 3781 Words appeared in 1 or 2 5 All 10228 10228 All words appeared in 1 or 2 without distinguishing pos or neg Table 1: Sentiment lexicon description For lexicon-based opinion analysis, the selection of opinion thesaurus plays an important role in the final performance. HowNet2 is a knowledge database of the Chinese language, and provides an online word list with tags of positive and negative polarity. We use the English translation of those sentiment words as the sentimental lexicon. SentiWordNet (Esuli and Sebastiani, 2006) is another popular lexical resource for opinion mining. Table 1 shows the detail information of our used sentiment lexicons. In our models, the positive opinion words are used only for positive questions, and negative opinion words just for negative questions. We initially set parameter λ in Opinion PageRank as 0 as (Liu and Ma, 2005), and other parameters simply as 0.5, including µ in Opinion PageRank, γ in Opinion HITS, and α in baseline. The experiment results are shown in Figure 4. We can make three conclusions from Figure 4: 1. Opinion PageRank and Opinion HITS are both effective. 
The best results of Opinion PageRank and Opinion HITS respectively achieve around 35.4% (0.199 vs 0.145), and 34.7% (0.195 vs 0.145) improvements in terms of F score over the best baseline result. We believe this is because our proposed models not only incorporate the topic information and opinion information, but also con2http://www.keenage.com/zhiwang/e zhiwang.html 742 0 15 0.2 0.25 HowNet SentiWordNet Intersection Union All 0 0.05 0.1 0.15 Baseline Opinion PageRank Opinion HITS Figure 4: Sentiment Lexicon Performance sider the relationship between different answers. The experiment results demonstrate the effectiveness of these relations. 2. Opinion PageRank and Opinion HITS are comparable. Among five sentimental lexicons, Opinion PageRank achieves the best results when using HowNet and Union lexicons, and Opinion HITS achieves the best results using the other three lexicons. This may be because when the sentiment lexicon is defined appropriately for the specific question set, the opinion PageRank model performs better. While when the sentiment lexicon is not suitable for these questions, the opinion HITS model may dynamically learn a temporal sentiment lexicon and can yield a satisfied performance. 3. Hownet achieves the best overall performance among five sentiment lexicons. In HowNet, English translations of the Chinese sentiment words are annotated by nonnative speakers; hence most of them are common and popular terms, which maybe more suitable for the Blog environment (Zhang and Ye, 2008). We will use HowNet as the sentiment thesaurus in the following experiments. In baseline, the parameter α shows the relative contributions for topic score and opinion score. We vary α from 0 to 1 with an interval of 0.1, and find that the best baseline result 0.170 is achieved when α=0.1. This is because the topic information has been considered during candidate extraction, the system considering more opinion information (lower α) achieves better. We will use this best result as baseline score in following experiments. Since F(3) score is more related with recall, F score and recall will be demonstrated. In the next two sections, we will present the performances of the parameters in each model. For simplicity, we denote Opinion PageRank as PR, Opinion HITS as HITS, baseline as Base, Recall as r, F score as F. 0.22 0.24 0.26 PR_r PR_F Base_r Base_F F(3) 0.12 0.14 0.16 0.18 0.2 0 0.2 0.4 0.6 0.8 1 Figure 5: Opinion PageRank Performance with varying parameter λ (µ = 0.5) 0.22 0.24 0.26 PR_r PR_F Base_r Base_F F(3) 0.12 0.14 0.16 0.18 0.2 0 0.2 0.4 0.6 0.8 1 Figure 6: Opinion PageRank Performance with varying parameter µ (λ = 0.2) 5.2.2 Opinion PageRank Performance In Opinion PageRank model, the value λ combines the source opinion and the destination opinion. Figure 5 shows the experiment results on parameter λ. When we consider lower λ, the system performs better. This demonstrates that the destination opinion score contributes more than source opinion score in this task. The value of µ is a trade-off between answer reinforcement relation and topic relation to calculate the scores of each node. For lower value of µ, we give more importance to the relevance to the question than the similarity with other sentences. The experiment results are shown in Figure 6. The best result is achieved when µ = 0.8. This figure also shows the importance of reinforcement between answer candidates. If we don’t consider the sentence similarity(µ = 0), the performance drops significantly. 
5.2.3 Opinion HITS Performance
The parameter γ combines the opinion hub score and the topic hub score in the Opinion HITS model: the higher γ is, the more weight is given to the topic hub level and the less to the opinion hub level. The experiment results are shown in Figure 7. As with the baseline parameter α, since the answer candidates are extracted based on topic information, the systems that weight opinion information heavily (α = 0.1 in the baseline, γ = 0.2 here) perform best.

Figure 7: Opinion HITS performance (HITS_r, HITS_F vs. Base_r, Base_F) with varying parameter γ.

The Opinion HITS model ranks sentences by their authority scores. It can also rank, for a specific question, the popular topic words and popular opinion words from the topic hub layer and the opinion hub layer. Take question 1024.3, "What reasons do people give for liking Zillow?", as an example: its topic word is "Zillow" and its sentiment polarity is positive. Based on the final hub scores, the top 10 topic words and opinion words are shown in Table 2.

Table 2: Question-specific popular topic words and opinion words generated by Opinion HITS
Opinion words: real, like, accurate, rich, right, interesting, better, easily, free, good
Topic words: zillow, estate, home, house, data, value, site, information, market, worth

Zillow is a real estate site that lets users see the value of houses or homes. People like it because it is easy to use, accurate, and sometimes free. From Table 2, we can see that the top topic words are closely related to the question topic, and the top opinion words are question-specific sentiment words such as "accurate", "easily", and "free", not just general opinion words like "great", "excellent", and "good".

5.2.4 Comparisons with TAC Systems
We are also interested in comparing our performance with the systems in TAC QA 2008.

Table 3: Comparison with the three top-ranked systems in TAC 2008 (Systems 1-3)
OpPageRank: Precision 0.109, Recall 0.242, F(3) 0.200
OpHITS: Precision 0.102, Recall 0.256, F(3) 0.205
System 1: Precision 0.079, Recall 0.235, F(3) 0.186
System 2: Precision 0.053, Recall 0.262, F(3) 0.173
System 3: Precision 0.109, Recall 0.216, F(3) 0.172

From Table 3, we can see that Opinion PageRank and Opinion HITS each achieve around a 10% improvement over the best result in TAC 2008, which demonstrates that our algorithms indeed perform much better than state-of-the-art opinion QA methods.

6 Conclusion and Future Work
In this paper, we proposed two graph-based sentence ranking methods for opinion question answering. Our models, Opinion PageRank and Opinion HITS, naturally incorporate topic relevance information and opinion sentiment information, and furthermore take the relationships between different answer candidates into account. We demonstrated the usefulness of these relations through our experiments, which also show that the proposed methods outperform the top-ranked systems of the TAC 2008 QA task by about 10% in terms of F score. Our random-walk-based graph methods integrate topic and sentiment information in a unified framework. They are not limited to sentence ranking for opinion question answering: they can be used in general opinion document search, and can be further generalized to any ranking task with two types of influencing factors.

Acknowledgments: Special thanks to Derek Hao Hu and Qiang Yang for their valuable comments and great help in preparing this paper.
We also thank Hongning Wang, Min Zhang, Xiaojun Wan, and the anonymous reviewers for their useful comments, and Hoa Trang Dang for providing the TAC evaluation results. The work was supported by the 973 project in China (2007CB311003), the NSFC project (60803075), the Microsoft joint project "Opinion Summarization toward Opinion Search", and a grant from the International Development Research Center, Canada.

References
Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. Addison Wesley, May.
Xavier Carreras and Lluis Marquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling.
Yi Chen, Ming Zhou, and Shilong Wang. 2006. Reranking answers for definitional QA using language modeling. In ACL-COLING, pages 1081–1088.
Hang Cui, Min-Yen Kan, and Tat-Seng Chua. 2007. Soft pattern matching models for definitional question answering. ACM Trans. Inf. Syst., 25(2):8.
Hoa Trang Dang. 2008. Overview of the TAC 2008 opinion question answering and summarization tasks (draft). In TAC.
Günes Erkan and Dragomir R. Radev. 2004. LexPageRank: Prestige in multi-document text summarization. In EMNLP.
Andrea Esuli and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In LREC.
Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46(5):604–632.
Soo-Min Kim and Eduard Hovy. 2005. Identifying opinion holders for question answering in opinion texts. In AAAI 2005 Workshop.
Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2007. Question analysis and answer passage retrieval for opinion question answering systems. In ROCLING.
Tie-Yan Liu and Wei-Ying Ma. 2005. Webpage importance analysis using conditional Markov random walk. In Web Intelligence, pages 515–521.
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into texts. In EMNLP.
Jahna Otterbacher, Günes Erkan, and Dragomir R. Radev. 2005. Using random walks for question-focused sentence retrieval. In HLT/EMNLP.
Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1998. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University.
Swapna Somasundaran, Theresa Wilson, Janyce Wiebe, and Veselin Stoyanov. 2007. QA with attitude: Exploiting opinion type analysis for improving question answering in online discussions and the news. In ICWSM.
Veselin Stoyanov, Claire Cardie, and Janyce Wiebe. 2005. Multi-perspective question answering using the OpQA corpus. In HLT/EMNLP.
Vasudeva Varma, Prasad Pingali, Rahul Katragadda, et al. 2008. IIIT Hyderabad at TAC 2008. In Text Analysis Conference.
X. Wan and J. Yang. 2008. Multi-document summarization using cluster-based link analysis. In SIGIR, pages 299–306.
Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In EMNLP.
Min Zhang and Xingyao Ye. 2008. A generation model to unify topic relevance and lexicon-based sentiment for opinion retrieval. In SIGIR, pages 411–418.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 746–754, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP What lies beneath: Semantic and syntactic analysis of manually reconstructed spontaneous speech Erin Fitzgerald Johns Hopkins University Baltimore, MD, USA [email protected] Frederick Jelinek Johns Hopkins University Baltimore, MD, USA [email protected] Robert Frank Yale University New Haven, CT, USA [email protected] Abstract Spontaneously produced speech text often includes disfluencies which make it difficult to analyze underlying structure. Successful reconstruction of this text would transform these errorful utterances into fluent strings and offer an alternate mechanism for analysis. Our investigation of naturally-occurring spontaneous speaker errors aligned to corrected text with manual semanticosyntactic analysis yields new insight into the syntactic and structural semantic differences between spoken and reconstructed language. 1 Introduction In recent years, natural language processing tasks such as machine translation, information extraction, and question answering have been steadily improving, but relatively little of these systems besides transcription have been applied to the most natural form of language input: spontaneous speech. Moreover, there has historically been little consideration of how to analyze the underlying semantico-syntactic structure of speech. A system would accomplish reconstruction of its spontaneous speech input if its output were to represent, in flawless, fluent, and contentpreserved English, the message that the speaker intended to convey (Fitzgerald and Jelinek, 2008; Fitzgerald et al., 2009). Examples of such reconstructions are seen in the following sentence-like units (SUs). EX1: that’s uh that’s a relief becomes that’s a relief EX2: how can you do that without + it’s a catch-22 becomes how can you do that without <ARG> it’s a catch-22 EX3: they like video games some kids do becomes some kids like video games In EX1, reconstruction requires only the deletion of a simple filled pause and speaker repetition (or reparandum (Shriberg, 1994)). The second example shows a restart fragment, where an utterance is aborted by the speaker and then restarted with a new train of thought. Reconstruction here requires 1. Detection of an interruption point (denoted + in the example) between the abandoned thought and its replacement, 2. Determination that the abandoned portion contains unique and preservable content and should be made a new sentence rather than be deleted (which would alter meaning) 3. Analysis showing that a required argument must be inserted in order to complete the sentence. Finally, in the third example EX3, in order to produce one of the reconstructions given, a system must 1. Detect the anaphoric relationship between “they” and “some kids” 2. Detect the referral of “do” to “like video games” 3. Make the necessary word reorderings and deletion of the less informative lexemes. These examples show varying degrees of difficulty for the task of automatic reconstruction. In each case, we also see that semantic analysis of the reconstruction is more straightforward than of the 746 original string directly. Such analysis not only informs us of what the speaker intended to communicate, but also reveals insights into the types of errors speakers make when speaking spontaneously and where these errors occur. 
The semantic labeling of reconstructed sentences, when combined with the reconstruction alignments, may yield new quantifiable insights into the structure of disfluent natural speech text. In this paper, we will investigate this relationship further. Generally, we seek to answer two questions: • What generalizations about the underlying structure of errorful and reconstructed speech utterances are possible? • Are these generalizations sufficiently robust as to be incorporated into statistical models in automatic systems? We begin by reviewing previous work in the automatic labeling of structural semantics and motivating the analysis not only in terms of discovery but also regarding its potential application to automatic speech reconstruction research. In Section 2 we describe the Spontaneous Speech Reconstruction (SSR) corpus and the manual semantic role labeling it includes. Section 3 analyzes structural differences between verbatim and reconstructed text in the SSR as evaluated by a combination of manual and automatically generated phrasal constituent parses, while Section 4 combines syntactic structure and semantic label annotations to determine the consistency of patterns and their comparison to similar patterns in the Wall Street Journal (WSJ)-based Proposition Bank (PropBank) corpus (Palmer et al., 2005). We conclude by offering a high level analysis of discoveries made and suggesting areas for continued analysis in the future. Expanded analysis of these results is described in (Fitzgerald, 2009). 1.1 Semantic role labeling Every verb can be associated with a set of core and optional argument roles, sometimes called a roleset. For example, the verb “say” must have a sayer and an utterance which is said, along with an optionally defined hearer and any number of locative, temporal, manner, etc. adjunctival arguments. The task of predicate-argument labeling (sometimes called semantic role labeling or SRL) assigns a simple who did what to whom when, where, some kids | {z } ARG0 like |{z} predicate video games | {z } ARG1 Figure 1: Semantic role labeling for the sentence “some kids like video games”. According to PropBank specifications, core arguments for each predicate are assigned a corresponding label ARG0ARG5 (where ARG0 is a proto-agent, ARG1 is a proto-patient, etc. (Palmer et al., 2005)). why, how, etc. structure to sentences (see Figure 1), often for downstream processes such as information extraction and question answering. Reliably identifying and assigning these roles to grammatical text is an active area of research (Gildea and Jurafsky, 2002; Pradhan et al., 2004; Pradhan et al., 2008), using training resources like the Linguistic Data Consortium’s Proposition Bank (PropBank) (Palmer et al., 2005), a 300k-word corpus with semantic role relations labeled for verbs in the WSJsection of the Penn Treebank. A common approach for automatic semantic role labeling is to separate the process into two steps: argument identification and argument labeling. For each task, standard cue features in automatic systems include verb identification, analysis of the syntactic path between that verb and the prospective argument, and the direction (to the left or to the right) in which the candidate argument falls in respect to its predicate. In (Gildea and Palmer, 2002), the effect of parser accuracy on semantic role labeling is quantified, and consistent quality parses were found to be essential when automatically identifying semantic roles on WSJ text. 
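The syntactic path feature mentioned above can be made concrete with a small sketch: starting from the predicate's preterminal, climb to the lowest common ancestor and descend to the candidate argument, recording the labels traversed. The tree encoding, the ^/v path notation, and the toy parse below are our own illustrative assumptions, not the format of PropBank or the SSR.

```python
class Node:
    def __init__(self, label, children=None):
        self.label, self.children, self.parent = label, children or [], None
        for child in self.children:
            child.parent = self

def ancestors(node):
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    return chain

def pred_arg_path(pred, arg):
    """Label path from predicate to argument; '^' marks upward steps, 'v' downward."""
    up, down = ancestors(pred), ancestors(arg)
    common = next(n for n in up if n in down)          # lowest common ancestor
    up_labels = [n.label for n in up[:up.index(common) + 1]]
    down_labels = [n.label for n in reversed(down[:down.index(common)])]
    return "^".join(up_labels) + "".join("v" + lbl for lbl in down_labels)

# toy parse of "some kids like video games"
subj, verb, obj = Node("NP"), Node("VBP"), Node("NP")
vp = Node("VP", [verb, obj])
s = Node("S", [subj, vp])
print(pred_arg_path(verb, subj))  # VBP^VP^SvNP  (candidate to the left: ARG0-like)
print(pred_arg_path(verb, obj))   # VBP^VPvNP    (candidate to the right: ARG1-like)
```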
1.2 Potential benefit of semantic analysis to speech reconstruction With an adequate amount of appropriately annotated conversational text, methods such as those referred to in Section 1.1 may be adapted for transcriptions of spontaneous speech in future research. Furthermore, given a set of semantic role labels on an ungrammatical string, and armed with the knowledge of a set of core semanticosyntactic principles which constrain the set of possible grammatical sentences, we hope to discover and take advantage of new cues for construction errors in the field of automatic spontaneous speech reconstruction. 747 2 Data We conducted our experiments on the Spontaneous Speech Reconstruction (SSR) corpus (Fitzgerald and Jelinek, 2008), a 6,000 SU set of reconstruction annotations atop a subset of Fisher conversational telephone speech data (Cieri et al., 2004), including • manual word alignments between corresponding original and cleaned sentence-like units (SUs) which are labeled with transformation types (Section 2.1), and • annotated semantic role labels on predicates and their arguments for all grammatical reconstructions (Section 2.2). The fully reconstructed portion of the SSR corpus consists of 6,116 SUs and 82,000 words total. While far smaller than the 300,000-word PropBank corpus, we believe that this data will be adequate for an initial investigation to characterize semantic structure of verbatim and reconstructed speech. 2.1 Alignments and alteration labels In the SSR corpus, words in each reconstructed utterance were deleted, inserted, substituted, or moved as required to make the SU as grammatical as possible without altering the original meaning and without the benefit of extrasentential context. Alignments between the original words and their reconstructed “source” words (i.e. in the noisy channel paradigm) are explicitly defined, and for each alteration a corresponding alteration label has been chosen from the following. - DELETE words: fillers, repetitions/revisions, false starts, co-reference, leading conjugation, and extraneous phrases - INSERT neutral elements, such as function words like “the”, auxiliary verbs like “is”, or undefined argument placeholders, as in “he wants <ARG>” - SUBSTITUTE words to change tense or number, correct transcriber errors, and replace colloquial phrases (such as: “he was like...” → “he said...”) - REORDER words (within sentence boundaries) and label as adjuncts, arguments, or other structural reorderings Unchanged original words are aligned to the corresponding word in the reconstruction with an arc marked BASIC. 2.2 Semantic role labeling in the SSR corpus One goal of speech reconstruction is to develop machinery to automatically reduce an utterance to its underlying meaning and then generate clean text. To do this, we would like to understand how semantic structure in spontaneous speech text varies from that of written text. Here, we can take advantage of the semantic role labeling included in the SSR annotation effort. Rather than attempt to label incomplete utterances or errorful phrases, SSR annotators assigned semantic annotation only to those utterances which were well-formed and grammatical post-reconstruction. Therefore, only these utterances (about 72% of the annotated SSR data) can be given a semantic analysis in the following sections. 
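One way to picture the annotations of Sections 2.1 and 2.2 is as a list of links between verbatim and reconstructed tokens, each carrying an alteration label, with role labels attached only to the reconstructed side. The representation and example below are our own illustration (using EX1 from the introduction), not the SSR file format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlignmentLink:
    verbatim_index: Optional[int]       # None for pure insertions
    reconstructed_index: Optional[int]  # None when no reconstructed counterpart is kept
    label: str                          # BASIC, DELETE, INSERT, SUBSTITUTE, REORDER
    note: str = ""                      # e.g. deletion subtype: filler, repetition, ...

# EX1: "that's uh that's a relief"  ->  "that's a relief"
verbatim = ["that's", "uh", "that's", "a", "relief"]
reconstructed = ["that's", "a", "relief"]
links = [
    AlignmentLink(0, None, "DELETE", "repetition/revision"),
    AlignmentLink(1, None, "DELETE", "filler"),
    AlignmentLink(2, 0, "BASIC"),
    AlignmentLink(3, 1, "BASIC"),
    AlignmentLink(4, 2, "BASIC"),
]
deleted = [verbatim[link.verbatim_index] for link in links if link.label == "DELETE"]
print(deleted)  # ["that's", 'uh']
```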
For each well-formed and grammatical sentence, all (non-auxiliary and non-modal) verbs were identified by annotators and the corresponding predicate-argument structure was labeled according to the role-sets defined in the PropBank annotation effort1. We believe the transitive bridge between the aligned original and reconstructed sentences and the predicate-argument labels for those reconstructions (described further in Section 4) may yield insight into the structure of speech errors and how to extract these verb-argument relationships in verbatim and errorful speech text. 3 Syntactic variation between original and reconstructed strings As we begin our analysis, we first aim to understand the types of syntactic changes which occur during the course of spontaneous speech reconstruction. These observations are made empirically given automatic analysis of the SSR corpus annotations. Syntactic evaluation of speech and reconstructed structure is based on the following resources: 1. the manual parse Pvm for each verbatim original SU (from SSR) 2. the automatic parse Pva of each verbatim original SU 1PropBank roleset definitions for given verbs can be reviewed at http://www.cs.rochester.edu/∼gildea/Verbs/. 748 3. the automatic parse Pra of each reconstructed SU We note that automatic parses (using the state of the art (Charniak, 1999) parser) of verbatim, unreconstructed strings are likely to contain many errors due to the inconsistent structure of verbatim spontaneous speech (Harper et al., 2005). While this limits the reliability of syntactic observations, it represents the current state of the art for syntactic analysis of unreconstructed spontaneous speech text. On the other hand, automatically obtained parses for cleaned reconstructed text are more likely to be accurate given the simplified and more predictable structure of these SUs. This observation is unfortunately not evaluable without first manually parsing all reconstructions in the SSR corpus, but is assumed in the course of the following syntax-dependent analysis. In reconstructing from errorful and disfluent text to clean text, a system makes not only surface changes but also changes in underlying constituent dependencies and parser interpretation. We can quantify these changes in part by comparing the internal context-free structure between the two sets of parses. We compare the internal syntactic structure between sets Pva and Pra of the SSR check set. Statistics are compiled in Table 1 and analyzed below. • 64.2% of expansion rules in parses Pva also occur in reconstruction parses Pra, and 92.4% (86.8%) of reconstruction parse Pra expansions come directly from the verbatim parses Pva (from columns one and two of Table 1). • Column three of Table 1 shows the rule types most often dropped from the verbatim string parses Pva in the transformation to reconstruction. The Pva parses select full clause non-terminals (NTs) for the verbatim parses which are not in turn selected for automatic parses of the reconstruction (e.g. [SBAR → S] or [S →VP]). This suggests that these rules may be used to handle errorful structures not seen by the trained grammar. • Rule types in column four of Table 1 are the most often “generated” in Pra (as they are unseen in the automatic parse Pva). 
Since rules like [S →NP VP], [PP →IN NP], and [SBAR →IN S] appear in a reconstruction parse but not corresponding verbatim parse at similar frequencies regardless of whether Pvm or Pva are being compared, we are more confident that these patterns are effects of the verbatim-reconstruction comparison and not the specific parser used in analysis. The fact that these patterns occur indicates that it is these common rules which are most often confounded by spontaneous speaker errors. • Given a Levenshtein alignment between altered rules, the most common changes within a given NT phrase are detailed in column five of Table 1. We see that the most common aligned rule changes capture the most basic of errors: a leading coordinator (#1 and 2) and rules proceeded by unnecessary filler words (#3 and 5). Complementary rules #7 and 9 (e.g. VP →[rule]/[rule SBAR] and VP →[rule SBAR]/[rule]) show that complementing clauses are both added and removed, possibly in the same SU (i.e. a phrase shift), during reconstruction. 4 Analysis of semantics for speech Figure 2: Manual semantic role labeling for the sentence “some kids like video games” and SRL mapped onto its verbatim source string “they like video games and stuff some kids do” To analyze the semantic and syntactic patterns found in speech data and its corresponding reconstructions, we project semantic role labels from strings into automatic parses, and moreover from their post-reconstruction source to the verbatim original speech strings via the SSR manual word alignments, as shown in Figures 2. The automatic SRL mapping procedure from the reconstructed string Wr to related parses Pra and Pva and the verbatim original string Wv is as follows. 749 Pva rules Pra rules Pva rules most Pra rules most Levenshtein-aligned expansion in Pra in Pva frequently dropped frequently added changes (Pva/Pra) 1. NP →PRP 1. S →NP VP 1. S →[ CC rule] / [rule] 2. ROOT →S 2. PP →IN NP 2. S →[ CC NP VP] / [ NP VP] 3. S →NP VP 3. ROOT →S 3. S →[ INTJ rule] / [rule] 4. INTJ →UH 4. ADVP →RB 4. S →[ NP rule] / [rule] 64.2% 92.4% 5. PP →IN NP 5. S →NP ADVP VP 5. S →[ INTJ NP VP] / [ NP VP] 6. ADVP →RB 6. SBAR →IN S 6. S →[ NP NP VP] / [ NP VP] 7. SBAR →S 7. SBAR →S 7. VP →[rule] / [rule SBAR] 8. NP →DT NN 8. S →ADVP NP VP 8. S →[ RB rule] / [rule] 9. S →VP 9. S →VP 9. VP →[rule SBAR] / [rule] 10. PRN →S 10. NP →NP SBAR 10. S →[rule] / [ ADVP rule] Table 1: Internal syntactic structure removed and gained during reconstruction. This table compares the rule expansions for each verbatim string automatically parsed Pva and the automatic parse of the corresponding reconstruction in the SSR corpus (Pra). 1. Tag each reconstruction word wr ∈string Wr with the annotated SRL tag twr. (a) Tag each verbatim word wv ∈string Wv aligned to wr via a BASIC, REORDER, or SUBSTITUTE alteration label with the SRL tag twr as well. (b) Tag each verbatim word wv aligned to wr via a DELETE REPETITION or DELETE CO-REFERENCE alignment with a shadow of that SRL tag twr (see the lower tags in Figure 2 for an example) Any verbatim original word wv with any other alignment label is ignored in this semantic analysis as SRL labels for the aligned reconstruction word wr do not directly translate to them. 2. Overlay tagged words of string Wv and Wr with the automatic (or manual) parse of the same string. 3. Propagate labels. 
For each constituent in the parse, if all of the children within a syntactic constituent expansion (or all but EDITED or INTJ) have a given SRL tag for a given predicate, we tag that NT (rather than its children) with the semantic label information.

4.1 Labeled verbs and their arguments
In the 3,626 well-formed and grammatical SUs labeled with semantic roles in the SSR, 895 distinct verb types were labeled with core and adjunct arguments as defined in Section 1.1. The most frequent of these verbs was the orthographic form "'s", which was labeled 623 times, or in roughly 5% of the analyzed sentences. Other forms of the verb "to be", including "is", "was", "be", "are", "'re", "'m", and "being", were labeled over 1,500 times, a rate of nearly one occurrence for every two well-formed reconstructed sentences. The verb type frequencies roughly follow a Zipfian distribution (Zipf, 1949): most verb words appear only once (49.9%) or twice (16.0%). On average, 1.86 core arguments (ARG[0-4]) are labeled per verb, but the specific argument types and typical argument numbers are verb-specific. For example, the ditransitive verb "give" has an average of 2.61 core arguments over its 18 occurrences, while the verb "divorced" (whose core arguments "initiator of end of marriage" and "ex-spouse" are often combined, as in "we divorced two years ago") was labeled 11 times with an average of 1.00 core arguments per occurrence. In the larger PropBank corpus, annotated atop WSJ news text, the most frequently reported verb root is "say", with over ten thousand labeled appearances in various tenses; this is primarily explained by the genre difference between WSJ text and telephone speech. (The reported PropBank analysis ignores past and present participle (passive) usage; we do not do this in our analysis.) Again, most verbs occur two or fewer times.

4.2 Structural semantic statistics in cleaned speech
A reconstruction of a verbatim spoken utterance can be considered an underlying form, analogous to that of Chomskian theory or Harris's conception of transformation (Harris, 1957). In this view, the original verbatim string is the surface form of the sentence and, as in linguistic theory, should be constrained in some manner similar to the constraints between Logical Form (LF) and Surface Structure (SS).

Table 2: Most frequent phrasal categories for common arguments in the SSR (mapping SRLs onto Pva parses); counts in parentheses are totals. PB05 refers to the PropBank data described in Palmer et al. (2005).
ARG1 | Pva (10110): NP (50%), PP (6%) | Pra (8341): NP (58%), SBAR (9%) | PB05: Obj-NP (52%), S (22%)
ARG0 | Pva (4319): NP (90%), WHNP (3%) | Pra (4518): NP (93%), WHNP (3%) | PB05: Subj-NP (97%), NP (2%)
ARG2 | Pva (3836): NP (28%), PP (13%) | Pra (3179): NP (29%), PP (18%) | PB05: NP (36%), Obj-NP (29%)
TMP | Pva (931): ADVP (25%), NP (20%) | Pra (872): ADVP (27%), PP (18%) | PB05: ADVP (26%), PP-in (16%)
MOD | Pva (562): MD (58%), TO (18%) | Pra (642): MD (57%), TO (19%) | PB05: MD (99%), ADVP (1%)
LOC | Pva (505): PP (47%), ADVP (16%) | Pra (489): PP (54%), ADVP (17%) | PB05: PP-in (59%), PP-on (10.0%)
Table 3: Most frequent argument categories for common syntactic phrases in the SSR (mapping SRLs onto Pva parses); counts in parentheses are totals. PB05 refers to the PropBank data of Palmer et al. (2005).
NP | Pva (10541): ARG1 (48%), ARG0 (37%) | Pra (10218): ARG1 (47%), ARG0 (41%) | PB05: ARG2 (34%), ARG1 (24%) | PB05 Subj-NP: ARG0 (79%), ARG1 (17%) | PB05 Obj-NP: ARG1 (84%), ARG2 (10%)
PP | Pva (1714): ARG1 (34%), ARG2 (30%) | Pra (1777): ARG1 (31%), ARG2 (30%) | PB05 PP-in: LOC (48%), TMP (35%) | PB05 PP-at: EXT (36%), LOC (27%)
ADVP | Pva (1519): ARG2 (21%), ARG1 (19%) | Pra (1444): ARG2 (22%), ADV (20%) | PB05: TMP (30%), MNR (22%)
SBAR | Pva (930): ARG1 (61%), ARG2 (14%) | Pra (1241): ARG1 (62%), ARG2 (12%) | PB05: ADV (36%), TMP (30%)
S | Pva (523): ARG1 (70%), ARG2 (16%) | Pra (526): ARG1 (72%), ARG2 (17%) | PB05: ARG1 (76%), ADV (9%)
MD | Pva (449): MOD (73%), ARG1 (18%) | Pra (427): MOD (86%), ARG1 (11%) | PB05: MOD (97%), Adjuncts (3%)

In this section, we identify additional trends which may help us better understand these constraints, such as the most common phrasal category for common arguments in common contexts (listed in Table 2) and the most frequent semantic argument type for NTs in the SSR (listed in Table 3).

4.3 Structural semantic differences between verbatim speech and reconstructed speech
We now compare the placement of semantic role labels with the reconstruction-type labels assigned in the SSR annotations. These analyses were conducted on the Pra parses of the reconstructed strings, the strings onto which semantic labels were directly assigned.

Reconstructive deletions. Q: Is there a relationship between speaker error types requiring deletions and the argument shadows contained within?
Only two deletion types, repetitions/revisions and co-references, have direct alignments between deleted text and preserved text and thus can have argument shadows from the reconstruction marked onto the verbatim text. Of the 9,082 propagated deleted repetition/revision phrase nodes from Pva, 31.0% of the arguments within were ARG1, 22.7% were ARG0, 8.6% of the nodes were predicates labeled with semantic roles of their own, and 8.4% of the argument nodes were ARG2. Just 8.4% of "delete repetition/revision" nodes were modifier (vs. core) arguments, with TMP and CAU the most common labels.
Far fewer (353) nodes from Pva represented deleted co-reference words. Of these, 57.2% of the argument nodes were ARG1, 26.6% were ARG0, and 13.9% were ARG2; 7.6% of the "argument" nodes here were SRL-labeled predicates, and 10.2% were in modifier rather than core arguments, the most prevalent being TMP and LOC.
These observations indicate that redundant co-references are most likely to occur for ARG1 roles (most often objects, though also subjects for copular verbs such as "to be") and appear more likely than chance to occur in core argument regions of an utterance rather than in optional modifying material.

Reconstructive insertions. Q: When null arguments are inserted into reconstructions of errorful speech, what semantic role do they typically fill?
Three types of insertions were made by annotators during the reconstruction of the SSR corpus. Inserted function words, the most common type, were also the most varied. Analyzing the automatic parses of the reconstructions Pra, we find that the most commonly assigned parts of speech (POS) for these elements were, fittingly, IN (21.5%, preposition), DT (16.7%, determiner), and CC (14.3%, conjunction). Interestingly, the next most common POS assignments were noun labels, which may indicate errors in SSR labeling.
Other inserted word types were auxiliary or otherwise neutral verbs, and, as expected, most POS labels assigned by the parses were verb types, mostly VBP (non-third person present singular). About half of these were labeled as predicates with corresponding semantic roles; the rest were unlabeled which makes sense as true auxiliary verbs were not labeled in the process. Finally, around 147 insertion types made were neutral arguments (given the orthographic form <ARG>). 32.7% were common nouns and 18.4% of these were labeled personal pronouns PRP. Another 11.6% were adjectives JJ. We found that 22 (40.7%) of 54 neutral argument nodes directly assigned as semantic roles were ARG1, and another 33.3% were ARG0. Nearly a quarter of inserted arguments became part of a larger phrase serving as a modifier argument, the most common of which were CAU and LOC. Reconstructive substitutions Q: How often do substitutions occur in the analyzed data, and is there any semantic consistency in the types of words changed? 230 phrase tense substitutions occurred in the SSR corpus. Only 13 of these were directly labeled as predicate arguments (as opposed to being part of a larger argument), 8 of which were ARG1. Morphology changes generally affect verb tense rather than subject number, and with no real impact on semantic structure. Colloquial substitutions of verbs, such as “he was like...” →“he said...”, yield more unusual semantic analysis on the unreconstructed side as nonverbs were analyzed as verbs. Reconstructive word re-orderings Q: How is the predicate-argument labeling affected? If reorderings occur as a phrase, what type of phrase? Word reorderings labeled as argument movements occurred 136 times in the 3,626 semantics-annotated SUs in the SSR corpus. Of these, 81% were directly labeled as arguments to some sentence-internal predicate. 52% of those arguments were ARG1, 17% were ARG0, and 13% were predicates. 11% were labeled as modifying arguments rather than core arguments, which may indicate confusion on the part of the annotators and possibly necessary cleanup. More commonly labeled than argument movement was adjunct movement, assigned to 206 phrases. 54% of these reordered adjuncts were not directly labeled as predicate arguments but were within other labeled arguments. The most commonly labeled adjunct types were TMP (19% of all arguments), ADV (13%), and LOC (11%). Syntactically, 25% of reordered adjuncts were assigned ADVP by the automatic parser, 19% were assigned NP, 18% were labeled PP, and remaining common NT assignments included IN, RB, and SBAR. Finally, 239 phrases were labeled as being reordered for the general reason of fixing the grammar, the default change assignment given by the annotation tool when a word was moved. This category was meant to encompass all movements not included in the previous two categories (arguments and adjuncts), including moving “I guess” from the middle or end of a sentence to the beginning, determiner movement, etc. Semantically, 63% of nodes were directly labeled as predicates or predicate arguments. 34% of these were PRED, 28% were ARG1, 27% were ARG0, 8% were ARG2, and 8% were roughly evenly distributed across the adjunct argument types. Syntactically, 31% of these changes were NPs, 16% were ADVPs, and 14% were VBPs (24% were verbs in general). The remaining 30% of changes were divided amongst 19 syntactic categories from CC to DT to PP. 
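Analyses like those above reduce to joint counts over (alteration type, semantic role) pairs once the labels have been projected onto the verbatim side; the sketch below tallies such pairs, using fabricated toy records rather than SSR data.

```python
from collections import Counter, defaultdict

def role_distribution(records):
    """records: iterable of (alteration_type, semantic_role) pairs for annotated nodes."""
    counts = defaultdict(Counter)
    for alteration, role in records:
        counts[alteration][role] += 1
    # relative frequency of each role within each alteration type
    return {alt: {role: round(n / sum(c.values()), 3) for role, n in c.most_common()}
            for alt, c in counts.items()}

toy_records = [
    ("DELETE_REPETITION", "ARG1"), ("DELETE_REPETITION", "ARG0"),
    ("DELETE_REPETITION", "ARG1"), ("DELETE_COREFERENCE", "ARG1"),
    ("REORDER_ARGUMENT", "ARG0"), ("REORDER_ADJUNCT", "TMP"),
]
for alteration, distribution in role_distribution(toy_records).items():
    print(alteration, distribution)
```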
4.4 Testing the generalizations required for automatic SRL for speech The results described in (Gildea and Palmer, 2002) show that parsing dramatically helps during the course of automatic SRL. We hypothesize that the current state-of-art for parsing speech is adequate to generally identify semantic roles in spon752 taneously produced speech text. For this to be true, features for which SRL is currently dependent on such as consistent predicate-to-parse paths within automatic constituent parses must be found to exist in data such as the SSR corpus. The predicate-argument path is defined as the number of steps up and down a parse tree (and through which NTs) which are taken to traverse the tree from the predicate (verb) to its argument. For example, the path from predicate VBP →“like” to the argument ARG0 (NP →“some kids”) might be [VBP ↑VP ↑S ↓NP]. As trees grow more complex, as well as more errorful (as expected for the automatic parses of verbatim speech text), the paths seen are more sparsely observed (i.e. the probability density is less concentrated at the most common paths than similar paths seen in the PropBank annotations). We thus consider two path simplifications as well: • compressed: only the source, target, and root nodes are preserved in the path (so the path above becomes [VBP ↑S ↓NP]) • POS class clusters: rather than distinguish, for example, between different tenses of verbs in a path, we consider only the first letter of each NT. Thus, clustering compressed output, the new path from predicate to ARG0 becomes [V ↑S ↓N]. The top paths were similarly consistent regardless of whether paths are extracted from Pra, Pvm, or Pva (Pva results shown in Table 4), but we see that the distributions of paths are much flatter (i.e. a greater number and total relative frequency of path types) going from manual to automatic parses and from parses of verbatim to parses of reconstructed strings. 5 Discussion In this work, we sought to find generalizations about the underlying structure of errorful and reconstructed speech utterances, in the hopes of determining semantic-based features which can be incorporated into automatic systems identifying semantic roles in speech text as well as statistical models for reconstruction itself. We analyzed syntactic and semantic variation between original and reconstructed utterances according to manually and automatically generated parses and manually labeled semantic roles. Argument Path from Predicate Freq VBP ↑VP ↑S ↓NP 4.9% PredicateVB ↑VP ↑VP ↑S ↓NP 3.9% Argument VB ↑VP ↓NP 3.8% Paths VBD ↑VP ↑S ↓NP 2.8% 944 more path types 84.7% VB ↑S ↓NP 7.3% VB ↑VP ↓NP 5.8% Compressed VBP ↑S ↓NP 5.3% VBD ↑S ↓NP 3.5% 333 more path types 77.1% V ↑S ↓N 25.8% V ↑V ↓N 17.5% POS class+ V ↑V ↓A 8.2% compressed V ↑V ↓V 7.7% 60 more path types 40.8% Table 4: Frequent Pva predicate-argument paths Syntactic paths from predicates to arguments were similar to those presented for WSJ data (Palmer et al., 2005), though these patterns degraded when considered for automatically parsed verbatim and errorful data. We believe that automatic models may be trained, but if entirely dependent on automatic parses of verbatim strings, an SRL-labeled resource much bigger than the SSR and perhaps even PropBank may be required. 6 Conclusions and future work This work is an initial proof of concept that automatic semantic role labeling (SRL) of verbatim speech text may be produced in the future. 
This is supported by the similarity of common predicateargument paths between this data and the PropBank WSJ annotations (Palmer et al., 2005) and the consistency of other features currently emphasized in automatic SRL work on clean text data. To automatically semantically label speech transcripts, however, is expected to require additional annotated data beyond the 3k utterances annotated for SRL included in the SSR corpus, though it may be adequate for initial adaptation studies. This new ground work using available corpora to model speaker errors may lead to new intelligent feature design for automatic systems for shallow semantic labeling and speech reconstruction. Acknowledgments Support for this work was provided by NSF PIRE Grant No. OISE-0530118. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the supporting agency. 753 References Eugene Charniak. 1999. A maximum-entropyinspired parser. In Proceedings of the Annual Meeting of the North American Association for Computational Linguistics. Christopher Cieri, Stephanie Strassel, Mohamed Maamouri, Shudong Huang, James Fiumara, David Graff, Kevin Walker, and Mark Liberman. 2004. Linguistic resource creation and distribution for EARS. In Rich Transcription Fall Workshop. Erin Fitzgerald and Frederick Jelinek. 2008. Linguistic resources for reconstructing spontaneous speech text. In Proceedings of the Language Resources and Evaluation Conference. Erin Fitzgerald, Keith Hall, and Frederick Jelinek. 2009. Reconstructing false start errors in spontaneous speech text. In Proceedings of the Annual Meeting of the European Association for Computational Linguistics. Erin Fitzgerald. 2009. Reconstructing Spontaneous Speech. Ph.D. thesis, The Johns Hopkins University. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Daniel Gildea and Martha Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. Mary Harper, Bonnie Dorr, John Hale, Brian Roark, Izhak Shafran, Matthew Lease, Yang Liu, Matthew Snover, Lisa Yung, Anna Krasnyanskaya, and Robin Stewart. 2005. Structural metadata and parsing speech. Technical report, JHU Language Engineering Workshop. Zellig S. Harris. 1957. Co-occurrence and transformation in linguistic structure. Language, 33:283–340. Martha Palmer, Paul Kingsbury, and Daniel Gildea. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106, March. Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proceedings of the Human Language Technology Conference/North American chapter of the Association of Computational Linguistics (HLT/NAACL), Boston, MA. Sameer Pradhan, James Martin, and Wayne Ward. 2008. Towards robust semantic role labeling. Computational Linguistics, 34(2):289–310. Elizabeth Shriberg. 1994. Preliminaries to a Theory of Speech Disfluencies. Ph.D. thesis, University of California, Berkeley. George K. Zipf. 1949. Human Behavior and the Principle of Least-Effort. Addison-Wesley. 754
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 755–763, Suntec, Singapore, 2-7 August 2009. ©2009 ACL and AFNLP

Discriminative Lexicon Adaptation for Improved Character Accuracy – A New Direction in Chinese Language Modeling
Yi-cheng Pan, Speech Processing Laboratory, National Taiwan University, Taipei, Taiwan 10617, [email protected]
Lin-shan Lee, Speech Processing Laboratory, National Taiwan University, Taipei, Taiwan 10617, [email protected]
Sadaoki Furui, Furui Laboratory, Tokyo Institute of Technology, Tokyo 152-8552, Japan, [email protected]

Abstract
While OOV words are a problem for most languages in ASR, in Chinese the problem can be avoided by using character n-grams, which yield moderate performance. However, character n-grams have their own limitations, and the proper addition of new words can increase ASR performance. Here we propose a discriminative lexicon adaptation approach for improved character accuracy, which not only adds new words but also deletes some words from the current lexicon. Unlike other lexicon adaptation approaches, we consider the acoustic features and make our lexicon adaptation criterion consistent with that of the decoding process. The proposed approach not only improves ASR character accuracy but also significantly enhances the performance of a character-based spoken document retrieval system.

1 Introduction
Generally, an automatic speech recognition (ASR) system requires a lexicon. The lexicon defines the possible set of output words and also the building units of the language model (LM). Lexical words offer local constraints that combine phonemes into short chunks, while the language model combines them into longer chunks through more global constraints. However, it is almost impossible to include every word in a lexicon, both because of the practical difficulty and because new words are created continuously. Words missing from the lexicon can never be recognized, which is the well-known OOV problem. Using graphemes for OOV handling has been proposed for English (Bisani and Ney, 2005); although this sacrifices some lexical constraints and introduces the further difficulty of combining graphemes back into words, it is compensated by the ability to perform open-vocabulary ASR. Morphs, which are longer than graphemes but shorter than words, are another possibility in other western languages (Hirsimäki et al., 2005).

The Chinese language, on the other hand, is quite different from western languages: there are no blanks between words, and the definition of a word is vague. Since almost all Chinese characters have their own meanings and words are composed of characters, there is an obvious solution to the OOV problem: simply use all characters as the lexicon. Table 1 shows the difference in character recognition accuracy between using only the 5.8K characters and using the full 61.5K-word lexicon; the training and testing sets are the same as those introduced in Section 4.1.

Table 1: Character recognition accuracy under different lexicons and language model orders.
           5.8K characters | 61.5K full lexicon
bigram     63.55%          | 73.8%
trigram    74.27%          | 79.28%

It is clear that characters alone provide moderate recognition accuracy, while augmenting them with words significantly improves performance. If we set aside the semantic functionality of words, which characters certainly cannot replace, we can treat words simply as a means to enhance character recognition accuracy.
Such arguments stand at least for Chinese ASR since they evaluate on character error rate and do not add explicit blanks between words. Here we formulate a lexicon adaptation problem and try to discriminatively find out not only OOV words beneficial for ASR but also those existing words that can be deleted. Unlike previous lexicon adaptation or construction approaches (Chien, 1997; Fung, 1998; Deligne and Sagisaka, 2000; Saon and Padmanabhan, 2001; Gao et al., 2002; Federico and Bertoldi, 2004), we 755 consider the acoustic signals and also the whole speech decoding structure. We propose to use a simple approximation for the character posterior probabilities (PPs), which combines acoustic model and language model scores after decoding. Based on the character PPs, we adapt the current lexicon. The language model is then re-trained according the new lexicon. Such procedure can be iterated until convergence. Characters, are not only the output units in Chinese ASR but also have their roles in spoken document retrieval (SDR). It has been shown that characters are good indexing units. Generally, characters can at least help OOV query handling; in the subword-based confusion network (S-CN) proposed by Pan et al. (2007), characters are even better than words for in-vocabulary (IV) queries. In addition to evaluating the proposed approach on ASR performance, we investigate its helpfulness when integrated with an S-CN framework. 2 Related Work Previous works for lexicon adaptation were focused on OOV rate reduction. Given an adaptation corpus, the standard way is to first identify OOV words. These OOV words are selected into the current lexicon based on the criterion of frequency or recency (Federico and Bertoldi, 2004). The language model is also re-estimated according to the new corpus and new derived words. For Chinese, it is more difficult to follow the same approach since OOV words are not readily identifiable. Several methods have been proposed to extract OOV words from the new corpus based on different statistics, which include associate norm and context dependency (Chien, 1997), mutual information (Gao et al., 2002), morphological and statistical rules (Chen and Ma, 2002), and strength and spread measure (Fung, 1998). The used statistics generally help find sequences of characters that are consistent to the general concept of words. However, if we focus on ASR performance, the constraint of the extracted character strings to be word-like is unnecessary. Yang et al. (1998) proposed a way to select new character strings based on average character perplexity reduction. The word-like constraint is not required and they show a significant improvement on character-based perplexity. Similar ideas were found to use mutual probability as an effective measure to combine two existing lexicon words into a new word (Saon and Padmanabhan, 2001). Though proposed for English, this method is effective for Chinese ASR (Chen et al., 2004). Gao et al. (2002) combined an information gain-like metric and the perplexity reduction criterion for lexicon word selection. The application is on Chinese pinyin-tocharacter conversion, which has very good correlation with the underlying language model perplexity. The above works actually are all focused on the text level and only consider perplexity effect. However, as pointed by Rosenfeld (2000), lower perplexity does not always imply lower ASR error rate. 
Here we try to approach the lexicon adaptation problem from another angle and take into account the acoustic signals involved in the decoding procedure.

3 Proposed Approach
3.1 Overall Picture

Figure 1: The flow chart of the proposed approach.

We show the complete flow chart in Figure 1. At the beginning we are given an adaptation spoken corpus and its manual transcriptions. Based on a baseline lexicon (Lex0) and language model (LM0), we perform ASR on the adaptation corpus and construct the corresponding word lattices. We then build character-based confusion networks (CCNs) (Fu et al., 2006; Qian et al., 2008). On the CCNs we perform the proposed algorithm to add words to and delete words from the current lexicon, giving Lex1. The LM training corpora, joined with the adaptation corpus, are then segmented using Lex1 and the language model is re-trained, giving LM1. This procedure can be iterated to produce Lexi and LMi until convergence.

3.2 Character Posterior Probability and Character-based Confusion Network (CCN)
Consider a word W, as shown in Figure 2, with characters {c1 c2 c3} corresponding to an edge e starting at time τ and ending at time t in a word lattice. During decoding, the boundaries between c1 and c2 and between c2 and c3 are recorded as t1 and t2, respectively.

Figure 2: An edge e of word W composed of characters c1 c2 c3, starting at time τ and ending at time t.

The posterior probability (PP) of the edge e given the acoustic features A, P(e|A), is (Wessel et al., 2001):

P(e|A) = [ α(τ) · P(x_τ^t | W) · P_LM(W) · β(t) ] / β_start ,   (1)

where α(τ) and β(t) denote the forward and backward probability masses accumulated up to times τ and t, obtained by the standard forward-backward algorithm, P(x_τ^t | W) is the acoustic likelihood, P_LM(W) is the language model score, and β_start is the sum of all path scores in the lattice. Equation (1) can be extended to the PP of a character of W, say c1 with edge e1:

P(e1|A) = [ α(τ) · P(x_τ^{t1} | c1) · P_LM(c1) · β(t1) ] / β_start .   (2)

Here we need two new probabilities, P_LM(c1) and β(t1). Since neither is easy to estimate, we make some approximations. First, we assume P_LM(c1) ≈ P_LM(W). This is of course not exact; the actual relation is P_LM(c1) ≥ P_LM(W), since the set of events containing c1 given its history includes the set of events containing W given the same history. We use this approximation for easier implementation. Second, we assume that after c1 there is only one path from t1 to t, namely through c2 and c3. This is more reasonable, since we restrict the hypothesis space to the word lattice and pruned paths are simply neglected. With this approximation we have β(t1) = P(x_{t1}^t | c2 c3) · β(t). Substituting these two approximate values for P_LM(c1) and β(t1) into Equation (2), the result turns out to be very simple: P(e1|A) ≈ P(e|A). With similar assumptions for the character edges e2 and e3, we have P(e2|A) ≈ P(e3|A) ≈ P(e|A).
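The computation in Equations (1) and (2) can be sketched on a toy lattice as below: a forward-backward pass gives each word edge its posterior, which is then assigned unchanged to each component character, following the approximation just derived. The lattice structure, the letter-for-character convention (as in Figure 3), and the edge scores are illustrative assumptions.

```python
from collections import defaultdict

# Toy lattice: (start_node, end_node, word, characters, combined acoustic*LM score).
# Letters stand in for Chinese characters; scores are made up.
edges = [
    (0, 2, "op", ["o", "p"], 0.6),
    (0, 2, "oq", ["o", "q"], 0.2),
    (0, 1, "o",  ["o"],      0.2),
    (1, 2, "p",  ["p"],      0.5),
    (2, 3, "r",  ["r"],      1.0),
]
START, END = 0, 3

def character_posteriors(edges, start, end):
    alpha = defaultdict(float); alpha[start] = 1.0
    beta = defaultdict(float); beta[end] = 1.0
    for s, t, w, chars, score in sorted(edges):                # forward pass (nodes ordered by time)
        alpha[t] += alpha[s] * score
    for s, t, w, chars, score in sorted(edges, reverse=True):  # backward pass
        beta[s] += score * beta[t]
    total = alpha[end]                                         # beta_start in Equation (1)
    posts = {}
    for s, t, w, chars, score in edges:
        p_edge = alpha[s] * score * beta[t] / total            # Equation (1)
        for c in chars:
            posts[(w, s, t, c)] = p_edge                       # Equation (2): P(e_i|A) ~= P(e|A)
    return posts

for key, p in character_posteriors(edges, START, END).items():
    print(key, round(p, 3))
```

On this toy lattice the per-character posteriors of "o" over its three competing edges sum to 1, since every path passes through some edge containing "o".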
Similar results were obtained by Yao et al. (2008) from a different point of view. The result that P(e_i|A) ≈ P(e|A) may seem to diverge from the intuition of approximating an n-segment word by splitting the probability of the entire edge over its segments, i.e., P(e_i|A) ≈ P(e|A)^(1/n). The basic meaning of Equation (1) is to calculate the ratio of the properly weighted paths going through a specific edge to the total weighted path mass. Of course there are more paths going through a sub-edge e_i than through the corresponding full edge e, so P(e_i|A) should usually be greater than P(e|A), as the intuition implies. However, the inter-connectivity among all sub-edges and the proper weighting of the resulting paths are not easy to handle well. Here we constrain the inter-connectivity of sub-edges to lie only within their own word edge and also simplify the calculation of the path weights. This offers a tractable solution, and the performance is quite acceptable.

After we obtain the PPs of the character arcs in the lattice, such as P(e_i|A) above, we can apply the clustering method proposed by Mangu et al. (2000) to convert the word lattice into a strict linear sequence of clusters, each consisting of a set of alternative character hypotheses, i.e., a character-based confusion network (CCN) (Fu et al., 2006; Qian et al., 2008). In the CCN we collect the PPs of all character arcs c with beginning time τ and end time t as P([c; τ, t]|A), based on the above approximation:

P([c; τ, t]|A) = Σ_{H = w1...wN ∈ lattice : ∃ i ∈ {1...N} : wi contains [c; τ, t]} P(H) · P(A|H)  /  Σ_{H′ ∈ lattice} P(H′) · P(A|H′) ,   (3)

where H stands for a path in the word lattice, P(H) is the language model score of H (after proper scaling), and P(A|H) is the acoustic model score. The CCN is known to be very helpful in reducing character error rate (CER), since it minimizes the expected CER (Fu et al., 2006; Qian et al., 2008). Given a CCN, we simply choose the character with the highest PP in each cluster as the recognition result.

3.3 Lexicon Adaptation with Improved Character Accuracy (LAICA)
Figure 3 shows a piece of a character-based confusion network (CCN) aligned with the corresponding manual transcription characters; such an alignment can be computed with an efficient dynamic programming method.

Figure 3: A character-based confusion network (CCN) and the corresponding reference (manual transcription) characters. R_m denotes the character at the m-th position in the reference, C_align(m) denotes the CCN cluster aligned with the m-th reference character, and the letters n through u are symbols for specific Chinese characters.

The CCN is composed of strictly linearly ordered clusters of character alternatives. In the figure, C_align(m) is a specific cluster aligned with the m-th reference character, containing characters {s, ..., o, ...}. The characters in each cluster of the CCN are sorted by PP, and each cluster contains a special null character ε whose PP equals 1 minus the sum of the PPs of all character hypotheses in that cluster. Clusters in which ε is ranked first are neglected in the alignment. After the alignment, there are only three possibilities for each reference character. (1) The reference character is ranked first in the corresponding cluster (R_{m−1} and the cluster C_align(m−1)); in this case the reference character is correctly recognized.
(2) The reference character is included in the corresponding cluster but not ranked first ([R_m ... R_{m+2}] and {C_align(m), ..., C_align(m+2)}). (3) The reference character is not included in the corresponding cluster (R_{m+3} and C_align(m+3)). In cases (2) and (3), the reference character is incorrectly recognized.

The basic idea of the proposed lexicon adaptation with improved character accuracy (LAICA) approach is to enhance the PPs of the incorrectly recognized characters by adding new words to, and deleting existing words from, the lexicon. Here we focus only on the characters of case (2). This is primarily motivated by the minimum classification error (MCE) discriminative training approach of Juang et al. (1997), in which a sigmoid function suppresses the impact of both perfectly recognized and very poorly recognized training samples; in our setting, case (1) is the perfect case and case (3) the very poor one. A further motivation is that the characters of case (1) are already correctly recognized, so we need not enhance their PPs.

The LAICA procedure is then simple. Among the aligned reference characters and CCN clusters, cases (1) and (3) serve as anchors, and the reference characters between two anchors form a focus segment whose PPs should be enhanced. Inspecting Equation (3), to enhance the PP of a specific character we can adjust the language model (P(H)), adjust the acoustic model (P(A|H)), or simply modify the lexicon (the constraint under the summation). We add new words to cover the case-(2) characters, which enlarges the numerator of Equation (3), and at the same time delete some existing words, which suppresses the denominator. In Figure 3, the reference characters [R_m R_{m+1} R_{m+2}] = [opq] and the clusters {C_align(m), ..., C_align(m+2)} form an example focus segment. For each such segment, we add at most one new word and delete at most one existing word. From the string [opq] we choose its longest OOV part as a new word. To select a word for deletion, we choose the longest in-vocabulary (IV) part of the top-ranked competitors of [opq], which here are [stu] in the clusters {C_align(m), ..., C_align(m+2)}; this too is motivated by MCE, in that we suppress only the strongest competitors' probabilities. Note that we never delete single-character words. The "at most one" constraint is motivated by previous language model adaptation work (Federico, 1999), which typically tries to introduce the new evidence in the adaptation corpus with the least possible modification of the original model. The change to the language model caused by adding and deleting words is hard to quantify, so we simply choose to add and delete as few words as possible, a simple heuristic. Adding fewer words also means that longer words are added, and longer words have been shown to be more helpful for ASR (Gao et al., 2004; Saon and Padmanabhan, 2001). The proposed LAICA approach can be regarded as discriminative, since it considers not only the reference characters but also the wrongly recognized competitors; this is beneficial because it reduces potential ambiguities in the lexicon.
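The add/delete decision just described can be sketched as follows: scan the reference-to-CCN alignment for maximal runs of case-(2) characters, propose the longest OOV substring of each run as a word to add, and the longest multi-character in-vocabulary substring of the run's top-ranked competitors as a word to delete. The alignment is assumed to be given, one cluster per reference character, and the toy lexicon and clusters (in the letter notation of Figure 3) are illustrative.

```python
def longest_substring(s, predicate):
    """Longest contiguous substring of s satisfying predicate ('' if none)."""
    best = ""
    for a in range(len(s)):
        for b in range(a + 1, len(s) + 1):
            if b - a > len(best) and predicate(s[a:b]):
                best = s[a:b]
    return best

def laica_proposals(reference, clusters, lexicon):
    """reference: string of characters; clusters[i]: (character, posterior) hypotheses
    for position i, sorted best-first. Returns (words_to_add, words_to_delete)."""
    def case2(i):  # reference char present in its cluster but not ranked first
        hyps = [c for c, _ in clusters[i]]
        return reference[i] in hyps and hyps[0] != reference[i]

    to_add, to_delete = set(), set()
    i = 0
    while i < len(reference):
        if not case2(i):
            i += 1
            continue
        j = i
        while j < len(reference) and case2(j):
            j += 1
        segment = reference[i:j]                                  # focus segment
        competitors = "".join(clusters[k][0][0] for k in range(i, j))
        new_word = longest_substring(segment, lambda w: w not in lexicon)
        old_word = longest_substring(competitors, lambda w: len(w) > 1 and w in lexicon)
        if len(new_word) > 1:        # at most one new word per segment
            to_add.add(new_word)
        if old_word:                 # at most one deletion; never a single character
            to_delete.add(old_word)
        i = j
    return to_add, to_delete

# reference [opq] against top-ranked competitors [stu], as in Figure 3
lexicon = {"o", "p", "q", "s", "t", "u", "st", "stu"}
clusters = [[("s", 0.5), ("o", 0.3)], [("t", 0.6), ("p", 0.2)], [("u", 0.4), ("q", 0.3)]]
print(laica_proposals("opq", clusters, lexicon))  # ({'opq'}, {'stu'})
```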
Expectation: Re-segment according to the n-gram LM 6. Go to step 4 until convergence Table 2: EM algorithm for word segmentation and LM estimation 3.4 Word Segmentation and Language Model Training If we regard the word segmentation process as a hidden variable, then we can apply EM algorithm (Dempster et al., 1977) to train the underlying ngram language model. The procedure is described in Table 2. In the algorithm we can see two expectation phases. This is natural since at the beginning the bootstrap segmentation can not give reliable statistics for higher order n-gram and we choose to only use the unigram marginal probabilities. The procedure was well established by Hwang et al. (2006). Actually, the EM algorithm proposed here is similar to the n-multigram model training procedure proposed by Deligne and Sagisaka (2000). The role of multigrams can be regarded as the words here, except that multigrams begin from scratch while here we have an initial lexicon and use maximummatching algorithm to offer an acceptable initial unigram probability distributions. If the initial lexicon is not available, the procedure proposed by Deligne and Sagisaka (2000) is preferred. 4 Experimental Results 4.1 Baseline Lexicon, Corpora and Language Models The baseline lexicon was automatically constructed from a 300 MB Chinese news text corpus ranging from 1997 to 1999 using the widely applied PATtree-based word extraction method (Chien, 1997). It includes 61521 words in total, of which 5856 are single-characters. The key principles of the PAT-tree-based approach to extract a sequence of characters as a word are: (1) high enough frequency count; (2) high enough mutual information between component characters; (3) large enough number of context variations on both sides; (4) not dominated by the most frequent context among all context variations. In general the words extracted have high frequencies and clear boundaries, thus very often they have good semantic meanings. Since all the above statistics of all possible character sequences in a raw corpus are combinatorially too many, we need an efficient data structure such as the PAT-tree to record and access all such information. With the baseline lexicon, we performed the EM algorithm as in Table 2 to train the trigram LM. Here we used a 313 MB LM training corpus, which contains text news articles in 2000 and 2001. Note that in the following Sections, the pronunciations of the added words were automatically labeled by exhaustively generating all possible pronunciations from all component characters’ canonical pronunciations. 4.2 ASR Character Accuracy Results A set of broadcast news corpus collected from a Chinese radio station from January to September, 2001 was used as the speech corpus. It contained 10K utterances. We separated these utterances into two parts randomly: 5K as the adaptation corpus and 5K as the testing set. We show the ASR character accuracy results after lexicon adaptation by the proposed approach in Table 3. LAICA-1 LAICA-2 A D A+D A D A+D Baseline +1743 -1679 +1743 +409 -112 +314 -1679 -88 79.28 80.48 79.31 80.98 80.58 79.33 81.21 Table 3: ASR character accuracies for the baseline and the proposed LAICA approach. Two iterations are performed, each with three versions. A: only add new words, D: only delete words and A+D: simultaneously add and delete words. + and - means the number of words added and deleted, respectively. 
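To make the segmentation-and-estimation loop of Table 2 concrete, the following sketch implements its unigram stage: Viterbi re-segmentation of a character corpus under the current word probabilities, followed by re-estimation from the resulting segmentation. This is an illustration rather than the implementation used in the paper; the maximum-matching bootstrap is replaced by a crude uniform initialization, and the add-one smoothing, the maximum word length of four characters, and the function names are all assumptions made for the example (the higher-order n-gram steps are omitted).

```python
import math
from collections import Counter

def viterbi_segment(chars, logprob, max_word_len=4):
    """Best segmentation of a character sequence under a unigram word model."""
    n = len(chars)
    best = [0.0] + [-math.inf] * n        # best[i]: best log-probability of chars[:i]
    back = [0] * (n + 1)                  # back[i]: start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            score = best[j] + logprob("".join(chars[j:i]))
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append("".join(chars[back[i]:i]))
        i = back[i]
    return list(reversed(words))

def em_segment_and_estimate(corpus, lexicon, iterations=3):
    """corpus: list of character lists; lexicon: the (fixed) word list.
    Alternates unigram estimation and Viterbi re-segmentation (steps 2-3 of Table 2)."""
    lexicon = set(lexicon)
    counts = Counter(lexicon)             # crude stand-in for the maximum-matching bootstrap
    for _ in range(iterations):
        total, vocab = sum(counts.values()), len(lexicon)
        def logprob(w, counts=counts, total=total, vocab=vocab):
            if w in lexicon or len(w) == 1:   # single characters are always allowed
                return math.log((counts.get(w, 0) + 1) / (total + vocab))
            return -math.inf                  # multi-character strings outside the lexicon
        counts = Counter()
        for sentence in corpus:
            counts.update(viterbi_segment(sentence, logprob))
    return counts                         # statistics of the final segmentation
```

In the actual procedure, the counts produced by the final re-segmentation would then be used to train the higher-order (trigram) language model described in Section 4.1.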
For the proposed LAICA approach, we show the results for one (LAICA-1) and two (LAICA2) iterations respectively, each of which has three different versions: (A) only add new words into the current lexicon, (D) only delete words, (A+D) simultaneously add and delete words. The number of added or deleted words are also included in Table 3. There are some interesting observations. First, we see that deletion of current words brought much 759 less benefits than adding new words. We try to give some explanations. Deleting existing words in the lexicon actually is a passive assistance for recognizing reference characters correctly. Of course we eliminate some strong competitive characters in this way but we can not guarantee that reference characters will then have high enough PP to be ranked first in its own cluster. Adding new words into the lexicon, on the other hand, offers explicit reinforcement in PP of the reference characters. Such reinforcement offers the main positive boosting for the PP of reference characters. These boosted characters are under some specific contexts which normally correspond to OOV words and sometimes in-vocabulary (IV) words that are hard to be recognized. From the model training aspect, adding new words gives the maximum-likelihood flavor while deleting existing words provides discriminant ability. It has been shown that discriminative training does not necessarily outperform maximumlikelihood training until we have enough training data (Ng and Jordan, 2001). So it is possible that discriminatively trained model performs worse than that trained by maximum likelihood. In our case, adding and deleting words seem to compliment each other well. This is an encouraging result. Another good property is that the proposed approach converged quickly. The number of words to be added or deleted dropped significantly in the second iteration, compared to the first one. Generally the fewer words to be changed the fewer recognition improvement can be expected. Actually we have tried the third iteration and simply obtained dozens of words to be added and no words to be deleted, which resulted in negligible changes in ASR recognition accuracy. 4.3 Comparison with other Lexicon Adaptation Methods In this section we compare our method with two other traditionally used approaches: one is the PATtree-based as introduced in Section 4.1 and the other is based on mutual probability (Saon and Padmanabhan, 2001), which is the geometrical average of the direct and reverse bigram: PM(wi, wj) = q Pf(wj|wi)Pr(wi|wj), where the direct (Pf(·) and reverse bigram (Pr(·)) can be estimated as: Pf(wj|wi) = P(Wt+1 = wj, Wt = wi) P(Wt = wi) , Pr(wj|wi) = P(Wt+1 = wj, Wt = wi) P(Wt+1 = wj) . PM(wi, wj) is used as a measure about whether to combine wi and wj as a new word. By properly setting a threshold, we may iteratively combine existing characters and/or words to produce the required number of new words. For both the PAT-treeand mutual-information-based approaches, we use the manual transcriptions of the development 5K utterances to collect the required statistics and we extract 2159 and 2078 words respectively to match the number of added words by the proposed LAICA approach after 2 iterations (without word deletion). The language model is also re-trained as described in Section 3.4. The results are shown in Table 4, where we also include the results of our approach with 2 iterations and adding words only for reference. 
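Before turning to the results in Table 4, the mutual-probability measure of Saon and Padmanabhan (2001) used in this comparison can be illustrated with a short sketch that estimates PM(wi, wj) from the unigram and bigram counts of a segmented corpus. The simple relative-frequency estimation and the merging threshold below are illustrative assumptions, not the exact settings used in the experiments.

```python
import math
from collections import Counter

def mutual_probability_scores(segmented_sentences):
    """PM(wi, wj) = sqrt(Pf(wj|wi) * Pr(wi|wj)) for every adjacent word pair."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in segmented_sentences:        # each sentence is a list of words
        unigrams.update(sentence)
        bigrams.update(zip(sentence, sentence[1:]))
    scores = {}
    for (wi, wj), c in bigrams.items():
        forward = c / unigrams[wi]              # Pf(wj | wi), relative-frequency estimate
        reverse = c / unigrams[wj]              # Pr(wi | wj), relative-frequency estimate
        scores[(wi, wj)] = math.sqrt(forward * reverse)
    return scores

def propose_new_words(scores, threshold):
    """Adjacent pairs whose mutual probability exceeds the threshold become candidate words."""
    return {wi + wj for (wi, wj), s in scores.items() if s >= threshold}
```

In the comparison reported here, the threshold would be adjusted until the required number of new words (2,078 in this section) is extracted.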
PATtree Mutual Probability LAICA-2(A) Character Accuracy 79.33 80.11 80.58 Table 4: ASR character accuracies on the lexicon adapted by different approaches. From the results we observe that the PAT-treebased approach did not give satisfying improvements while the mutual probability-based one worked well. This may be due to the sparse adaptation data, which includes only 81K characters. PAT-tree-based approach relies on the frequency count, and some terms which occur only once in the adaptation data will not be extracted. Mutual probability-based approach, on the other hand, considers two simple criterion: the components of a new word occur often together and rarely in conjunction with other words (Saon and Padmanabhan, 2001). Compared with the proposed approach, both PAT-tree and mutual probability do not consider the decoding structure. Some new words are clearly good for human sense and definitely convey novel semantic information, but they can be useless for speech recognition. That is, character n-gram may handle these words equally well due to the low ambiguities with other words. The proposed LAICA approach tries to focus on those new words which can not be handled well by simple character n-grams. Moreover, the two methods discussed here do not offer possible ways to delete current words, which can be considered as a further advantage of the proposed LAICA approach. 760 4.4 Application: Character-based Spoken Document Indexing and Retrieval Pan et al. (2007) recently proposed a new Subwordbased Confusion Network (S-CN) indexing structure for SDR, which significantly outperforms word-based methods for IV or OOV queries. Here we apply S-CN structure to investigate the effectiveness of improved character accuracy for SDR. Here we choose characters as the subword units, and then the S-CN structure is exactly the same as CCN, which was introduced in Section 3.2. For the SDR back-end corpus, the same 5K test utterances as used for the ASR experiment in Section 4.2 were used. The previously mentioned lexicon adaptation approaches and corresponding language models were used in the same speech recognizer for the spoken document indexing. We automatically choose 139 words and terms as queries according to the frequency (at least six times in the 5K utterances). The SDR performance is evaluated by mean average precision (MAP) calculated by the trec eval1 package. The results are shown in Table 5. Character Accuracy MAP Baseline 79.28 0.8145 PAT-tree 79.33 0.8203 Mutual Probability 80.11 0.8378 LAICA-2(A+D) 81.21 0.8628 Table 5: ASR character accuracies and SDR MAP performances under S-CN structure. From the results, we see that generally the increasing of character recognition accuracy improves the SDR MAP performance. This seems trivial but we have to note the relative improvements. Actually the transformation ratios from the relative increased character accuracy to the relative increased MAP for the three lexicon adaptation approaches are different. A key factor making the proposed LAICA approach advantageous is that we try to extensively raise the incorrectly recognized character posterior probabilities, by means of adding effective OOV words and deleting ambiguous words. Actually S-CN is relying on the character posterior probability for indexing, which is consistent with our criterion and makes our approach beneficial. The degree of the raise of character posterior probabilities can be visualized more clearly in the following experiment. 
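Since the SDR results in Table 5 are reported as mean average precision computed with the trec_eval package, the following sketch shows the standard MAP computation for reference. It is a simplified stand-in for trec_eval, and the data structures (ranked utterance lists per query, sets of relevant utterances) are assumptions made for the example.

```python
def average_precision(ranked_docs, relevant):
    """Average precision of one query: mean of precision@k taken at each relevant hit."""
    hits, precisions = 0, []
    for k, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(results, judgements):
    """results: query -> ranked utterance ids; judgements: query -> set of relevant ids."""
    aps = [average_precision(results[q], judgements.get(q, set())) for q in results]
    return sum(aps) / len(aps) if aps else 0.0
```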
1http://trec.nist.gov/ 4.5 Further Investigation: the Improved Rank in Character-based Confusion Networks In this experiment, we have the same setup as in Section 4.2. After decoding, we have characterbased confusion networks (CCNs) for each test utterance. Rather than taking the top ranked characters in each cluster as the recognition result, we investigate the ranks of the reference characters in these clusters. This can be achieved by the same alignment as we did in Section 3.3. The results are shown in Table 6. # of ranked reference characters Average Rank baseline 70993 1.92 PAT-tree 71038 1.89 Mutual Probability 71054 1.81 LAICA-2(A+D) 71083 1.67 Table 6: Average ranks of reference characters in the confusion networks constructed by different lexicons and corresponding language models In Table 6 we only evaluate ranks on those reference characters that can be found in its corresponding confusion network cluster (case (1) and (2) as described in Section 3.3). The number of those evaluated reference characters depends on the actual CCN and is also included in the results. Generally, over 93% of reference characters are included (the total number is 75541). Such ranks are critical for lattice-based spoken document indexing approaches such as S-CN since they directly affect retrieval precision. The advantage of the proposed LAICA approach is clear. The results here provide a more objective point of view since SDR evaluation is inevitably effected by the selected queries. 5 Conclusion and Future Work Characters together is an interesting and distinct language unit for Chinese. They can be simultaneously viewed as words and subwords, which offer a special means for OOV handling. While relying only on characters gives moderate performances in ASR, properly augmenting new words significantly increases the accuracy. An interesting question would then be how to choose words to augment. Here we formulate the problem as an adaptation one and try to find the best way to alter the current 761 lexicon for improved character accuracy. This is a new perspective for lexicon adaptation. Instead of identifying OOV words from adaptation corpus to reduce OOV rate, we try to pick out word fragments hidden in the adaptation corpus that help ASR. Furthermore, we delete some existing words which may result in ambiguities. Since we directly match our criterion with that in decoding, the proposed approach is expected to have more consistent improvements than perplexity based criterions. Characters also play an important role in spoken document retrieval. This extends the applicability of the proposed approach and we found that the S-CN structure proposed by Pan et al. for spoken document indexing fitted well with the proposed LAICA approach. However, there still remain lots to be improved. For example, considering Equation 3, the language model score and the summation constraint are not independent. After we alter the lexicon, the LM is different accordingly and there is no guarantee that the obtained posterior probabilities for those incorrectly recognized characters would be increased. We increased the path alternatives for those reference characters but this can not guarantee to increase total path probability mass. This can be amended by involving the discriminative language model adaptation in the iteration, which results in a unified language model and lexicon adaptation framework. This can be our future work. Moreover, the same procedure can be used in the construction. 
That is, we could begin with only the characters in the lexicon and use the training data to alter the current lexicon in each iteration. This is also an interesting direction. References Maximilian Bisani and Hermann Ney. 2005. Open vocabulary speech recognition with flat hybrid models. In Interspeech, pages 725–728. Keh-Jiann Chen and Wei-Yun Ma. 2002. Unknown word extraction for Chinese documents. In COLING, pages 169–175. Berlin Chen, Jen-Wei Kuo, and Wen-Hung Tsai. 2004. Lightly supervised and data-driven approaches to Mandarin broadcast news transcription. In ICASSP, pages 777–780. Lee-Feng Chien. 1997. PAT-tree-based keyword extraction for Chinese information retrieval. In SIGIR, pages 50–58. Sabine Deligne and Yoshinori Sagisaka. 2000. Statistical language modeling with a class-based n-multigram model. Comp. Speech and Lang., 14(3):261–279. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38. Marcello Federico and Nicola Bertoldi. 2004. Broadcast news LM adaptation over time. Comp. Speech Lang., 18:417–435. Marcello Federico. 1999. Efficient language model adaptation through MDI estimation. In Interspeech, pages 1583–1586. Yi-Sheng Fu, Yi-Cheng Pan, and Lin-Shan Lee. 2006. Improved large vocabulary continuous Chinese speech recognition by character-based consensus networks. In ISCSLP, pages 422–434. Pascale Fung. 1998. Extracting key terms from Chinese and Japanese texts. Computer Processing of Oriental Languages, 12(1):99–121. Jianfeng Gao, Joshua Goodman, Mingjing Li, and Kai-Fu Lee. 2002. Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing, 1(1):3–33. Jianfeng Gao, Mu Li, Andi Wu, and Chang-Ning Huang. 2004. Chinese word segmentation: A pragmatic approach. Technical Report MSR-TR-2004-123. Teemu Hirsimäki, Mathias Creutz, Vesa Siivola, Mikko Kurimo, Sami Virpioja, and Janne Pylkkönen. 2005. Unlimited vocabulary speech recognition with morph language models applied to Finnish. Comp. Speech Lang. Mei-Yuh Hwang, Xin Lei, Wen Wang, and Takahiro Shinozaki. 2006. Investigation on Mandarin broadcast news speech recognition. In Interspeech-ICSLP, pages 1233–1236. Bing-Hwang Juang, Wu Chou, and Chin-Hui Lee. 1997. Minimum classification error rate methods for speech recognition. IEEE Trans. Speech Audio Process., 5(3):257–265. Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: Word error minimization and other applications of confusion networks. Comp. Speech Lang., 14(2):373–400. Andrew Y. Ng and Michael I. Jordan. 2001. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems (14), pages 841–848. Yi-Cheng Pan, Hung-Lin Chang, and Lin-Shan Lee. 2007. Analytical comparison between position specific posterior lattices and confusion networks based on words and subword units for spoken document indexing. In ASRU. Yao Qian, Frank K. Soong, and Tan Lee. 2008. Tone-enhanced generalized character posterior probability (GCPP) for Cantonese LVCSR. Comp. Speech Lang., 22(4):360–373. Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE, 88(8):1270–1278. George Saon and Mukund Padmanabhan. 2001. Data-driven approach to designing compound words for continuous speech recognition. IEEE Trans. Speech and Audio Process., 9(4):327–332, May.
Frank Wessel, Ralf Schlüter, Klaus Macherey, and Hermann Ney. 2001. Confidence measures for large vocabulary continuous speech recognition. IEEE Trans. Speech Audio Process., 9(3):288–298, Mar. Pak-kwong Wong and Chorkin Chan. 1996. Chinese word segmentation based on maximum matching and word binding force. In Proceedings of the 16th International Conference on Computational Linguistics, pages 200–203. Kae-Cherng Yang, Tai-Hsuan Ho, Lee-Feng Chien, and Lin-Shan Lee. 1998. Statistics-based segment pattern lexicon: A new direction for Chinese language modeling. In ICASSP, pages 169–172.
2009
85
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 764–772, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Improving Automatic Speech Recognition for Lectures through Transformation-based Rules Learned from Minimal Data Cosmin Munteanu∗† ∗National Research Council Canada 46 Dineen Drive Fredericton E3B 9W4, CANADA [email protected] Gerald Penn† †University of Toronto Dept. of Computer Science Toronto M5S 3G4, CANADA {gpenn,xzhu}@cs.toronto.edu Xiaodan Zhu† Abstract We demonstrate that transformation-based learning can be used to correct noisy speech recognition transcripts in the lecture domain with an average word error rate reduction of 12.9%. Our method is distinguished from earlier related work by its robustness to small amounts of training data, and its resulting efficiency, in spite of its use of true word error rate computations as a rule scoring function. 1 Introduction Improving access to archives of recorded lectures is a task that, by its very nature, requires research efforts common to both Automatic Speech Recognition (ASR) and Human-Computer Interaction (HCI). One of the main challenges to integrating text transcripts into archives of webcast lectures is the poor performance of ASR systems on lecture transcription. This is in part caused by the mismatch between the language used in a lecture and the predictive language models employed by most ASR systems. Most ASR systems achieve Word Error Rates (WERs) of about 40-45% in realistic and uncontrolled lecture conditions (Leeuwis et al., 2003; Hsu and Glass, 2006). Progress in ASR for this genre requires both better acoustic modelling (Park et al., 2005; F¨ugen et al., 2006) and better language modelling (Leeuwis et al., 2003; Kato et al., 2000; Munteanu et al., 2007). In contrast to some unsupervised approaches to language modelling that require large amounts of manual transcription, either from the same instructor or on the same topic (Nanjo and Kawahara, 2003; Niesler and Willett, 2002), the solution proposed by Glass et al. (2007) uses half of the lectures in a semester course to train an ASR system for the other half or for when the course is next offered, and still results in significant WER reductions. And yet even in this scenario, the business case for manually transcribing half of the lecture material in every recorded course is difficult to make, to say the least. Manually transcribing a one-hour recorded lecture requires at least 5 hours in the hands of qualified transcribers (Hazen, 2006) and roughly 10 hours by students enrolled in the course (Munteanu et al., 2008). As argued by Hazen (2006), any ASR improvements that rely on manual transcripts need to offer a balance between the cost of producing those transcripts and the amount of improvement (i.e. WER reductions). There is some work that specializes in adaptive language modelling with extremely limited amounts of manual transcripts. Klakow (2000) filters the corpus on which language models are trained in order to retain the parts that are more similar to the correct transcripts on a particular topic. This technique resulted in relative WER reductions of between 7% and 10%. Munteanu et al. (2007) use an information retrieval technique that exploits lecture presentation slides, automatically mining the World Wide Web for documents related to the topic as attested by text on the slides, and using these to build a bettermatching language model. 
This yields about an 11% relative WER reduction for lecture-specific language models. Following upon other applications of computer-supported collaborative work to address shortcomings of other systems in artificial intelligence (von Ahn and Dabbish, 2004), a wikibased technique for collaboratively editing lecture transcripts has been shown to produce entirely cor764 rected transcripts, given the proper motivation for students to participate (Munteanu et al., 2008). Another approach is active learning, where the goal is to select or generate a subset of the available data that would be the best candidate for ASR adaptation or training (Riccardi and Hakkani-Tur, 2005; Huo and Li, 2007).1 Even with all of these, however, there remains a significant gap between this WER and the threshold of 25%, at which lecture transcripts have been shown with statistical significance to improve student performance on a typical lecture browsing task (Munteanu et al., 2006). People have also tried to correct ASR output in a second pass. Ringger and Allen (1996) treated ASR errors as noise produced by an auxiliary noisy channel, and tried to decode back to the perfect transcript. This reduced WER from 41% to 35% on a corpus of train dispatch dialogues. Others combine the transcripts or word lattices (from which transcripts are extracted) of two complementary ASR systems, a technique first proposed in the context of NIST’s ROVER system (Fiscus, 1997) with a 12% relative error reduction (RER), and subsequently widely employed in many ASR systems. This paper tries to correct ASR output using transformation-based learning (TBL). This, too, has been attempted, although on a professional dictation corpus with a 35% initial WER (Peters and Drexel, 2004). They had access to a very large amount of manually transcribed data — so large, in fact, that the computation of true WER in the TBL rule selection loop was computationally infeasible, and so they used a set of faster heuristics instead. Mangu and Padmanabhan (2001) used TBL to improve the word lattices from which the transcripts are decoded, but this method also has efficiency problems (it begins with a reduction of the lattice to a confusion network), is poorly suited to word lattices that have already been heavily domain-adapted because of the language model’s low perplexity, and even with higher perplexity models (the SWITCHBOARD corpus using a lan1This work generally measures progress by reduction in the size of training data rather than relative WER reduction. Riccardi and Hakkani-Tur (2005) achieved a 30% WER with 68% less training data than their baseline. Huo and Li (2007) worked on a small-vocabulary name-selection task that combined active learning with acoustic model adaptation. They reduced the WER from 15% to 3% with 70 syllables of acoustic adaptation, relative to a baseline that reduced the WER to 3% with 300 syllables of acoustic adaptation. guage model trained over a diverse range of broadcast news and telephone conversation transcripts), was reported to produce only a 5% WER reduction. What we show in this paper is that a true WER calculation is so valuable that a manual transcription of only about 10 minutes of a one-hour lecture is necessary to learn the TBL rules, and that this smaller amount of transcribed data in turn makes the true WER calculation computationally feasible. 
With this combination, we achieve a greater average relative error reduction (12.9%) than that reported by Peters and Drexel (2004) on their dictation corpus (9.6%), and an RER over three times greater than that of our reimplementation of their heuristics on our lecture data (3.6%). This is on top of the average 11% RER from language model adaptation on the same data. We also achieve the RER from TBL without the obligatory round of development-set parameter tuning required by their heuristics, and in a manner that is robust to perplexity. Less is more. Section 2 briefly introduces TransformationBased Learning (TBL), a method used in various Natural Language Processing tasks to correct the output of a stochastic model, and then introduces a TBL-based solution for improving ASR transcripts for lectures. Section 3 describes our experimental setup, and Section 4 analyses its results. 2 Transformation-Based Learning Brill’s tagger introduced the concept of Transformation-Based Learning (TBL) (Brill, 1992). The fundamental principle of TBL is to employ a set of rules to correct the output of a stochastic model. In contrast to traditional rule-based approaches where rules are manually developed, TBL rules are automatically learned from training data. The training data consist of sample output from the stochastic model, aligned with the correct instances. For example, in Brill’s tagger, the system assigns POSs to words in a text, which are later corrected by TBL rules. These rules are learned from manually-tagged sentences that are aligned with the same sentences tagged by the system. Typically, rules take the form of context-dependent transformations, for example “change the tag from verb to noun if one of the two preceding words is tagged as a determiner.” An important aspect of TBL is rule scoring/ranking. While the training data may suggest 765 a certain transformation rule, there is no guarantee that the rule will indeed improve the system’s accuracy. So a scoring function is used to rank rules. From all the rules learned during training, only those scoring higher than a certain threshold are retained. For a particular task, the scoring function ideally reflects an objective quality function. Since Brill’s tagger was first introduced, TBL has been used for other NLP applications, including ASR transcript correction (Peters and Drexel, 2004). A graphical illustration of this task is presented in Figure 1. Here, the rules consist of Figure 1: General TBL algorithm. Transformation rules are learned from the alignment of manuallytranscribed text (T) with automatically-generated transcripts (TASR) of training data, ranked according to a scoring function (S) and applied to the ASR output (T ′ ASR) of test data. word-level transformations that correct n-gram sequences. A typical challenge for TBL is the heavy computational requirements of the rule scoring function (Roche and Schabes, 1995; Ngai and Florian, 2001). This is no less true in largevocabulary ASR correction, where large training corpora are often needed to learn good rules over a much larger space (larger than POS tagging, for example). The training and development sets are typically up to five times larger than the evaluation test set, and all three sets must be sampled from the same cohesive corpus. While the objective function for improving the ASR transcript is WER reduction, the use of this for scoring TBL rules can be computationally prohibitive over large data-sets. 
Peters and Drexel (2004) address this problem by using an heuristic approximation to WER instead, and it appears that their approximation is indeed adequate when large amounts of training data are available. Our approach stands at the opposite side of this tradeoff — restrict the amount of training data to a bare minimum so that true WER can be used in the rule scoring function. As it happens, the minimum amount of data is so small that we can automatically develop highly domain-specific language models for single 1-hour lectures. We show below that the rules selected by this function lead to a significant WER reduction for individual lectures even if a little less than the first ten minutes of the lecture are manually transcribed. This combination of domain-specificity with true WER leads to the superior performance of the present method, at least in the lecture domain (we have not experimented with a dictation corpus). Another alternative would be to change the scope over which TBL rules are ranked and evaluated, but it is well known that globally-scoped ranking over the entire training set at once is so useful to TBL-based approaches that this is not a feasible option — one must either choose an heuristic approach, such as that of Peters and Drexel (2004) or reduce the amount of training data to learn sufficiently robust rules. 2.1 Algorithm and Rule Discovery As our proposed TBL adaptation operates directly on ASR transcripts, we employ an adaptation of the specific algorithm proposed by Peters and Drexel (2004), which is schematically represented in Figure 1. This in turn was adapted from the general-purpose algorithm introduced by Brill (1992). The transformation rules are contextual wordreplacement rules to be applied to ASR transcripts, and are learned by performing a wordlevel alignment between corresponding utterances in the manual and ASR transcripts of training data, and then extracting the mismatched word sequences, anchored by matching words. The matching words serve as contexts for the rules’ application. The rule discovery algorithm is outlined in Figure 2; it is applied to every mismatching word sequence between the utterance-aligned manual and ASR transcripts. For every mismatching sequence of words, a set 766 ⋄for every sequence of words c0w1 . . . wnc1 in the ASR output that is deemed to be aligned with a corresponding sequence c0w′ 1 . . . w′ mc1 in the manual transcript: ⋄add the following contextual replacements to the set of discovered rules: / c0w1 . . . wnc1 / c0w′ 1 . . . w′ mc1 / / c0w1 . . . wn / c0w′ 1 . . . w′ m / / w1 . . . wnc1 / w′ 1 . . . w′ mc1 / / w1 . . . wn / w′ 1 . . . w′ m / ⋄for each i such that 1 ≤i < min(n, m), add the following contextual replacements to the set of discovered rules: / c0w1 . . . wi / c0w′ 1 . . . w′ a(i) / / wi+1 . . . wnc1 / w′ a(i+1) . . . w′ mc1 / / w1 . . . wi / w′ 1 . . . w′ a(i) / / wi+1 . . . wn / w′ a(i+1) . . . w′ m / Figure 2: The discovery of transformation rules. of contextual replacement rules is generated. The set contains the mismatched pair, by themselves and together with three contexts formed from the left, right, and both anchor context words. In addition, all possible splices of the mismatched pair and the surrounding context words are also considered.2 Rules are shown here as replacement expressions in a sed-like syntax. Given the rule r = /w1 . . . wn/w′ 1 . . . w′ m/, every instance of the n-gram w1 . . . wn appearing in the current transcript is replaced with the n-gram w′ 1 . . . w′ m. 
Rules cannot apply to their own output. Rules that would result in arbitrary insertions of single words (e.g. / /w1/) are discarded. An example of a rule learned from transcripts is presented in Figure 3. 2.2 Scoring Function and Rule Application The scoring function that ranks rules is the main component of any TBL algorithm. Assuming a relatively small size for the available training data, a TBL scoring function that directly correlates with WER can be conducted globally over the entire training set. In keeping with TBL tradition, however, rule selection itself is still greedily approximated. Our scoring function is defined as: SW ER(r, TASR, T) = WER(TASR, T) −WER(ρ(r, TASR), T), 2The splicing preserves the original order of the wordlevel utterance alignment, i.e., the output of a typical dynamic programming implementation of the edit distance algorithm (Gusfield, 1997). For this, word insertion and deletion operations are treated as insertions of blanks in either the manual or ASR transcript. Utterance-align ASR output and correct transcripts: ASR: the okay one and you come and get your seats Correct: ok why don’t you come and get your seats ⇓ Insert sentence delimiters (to serve as possible anchors for the rules): ASR: <s> the okay one and you come and get your seats </s> Correct: <s> ok why don’t you come and get your seats </s> ⇓ Extract the mismatching sequence, enclosed by matching anchors: ASR: <s> the okay one and you Correct: <s> ok why don’t you ⇓ Output all rules for replacing the incorrect ASR sequence with the correct text, using the entire sequence (a) or splices (b), with or without surrounding anchors: (a) the okay one and / ok why don’t (a) the okay one and you / ok why don’t you (a) <s> the okay one and / <s> ok why don’t (a) <s> the okay one and you / <s> ok why don’t you (b) the okay / ok (b) <s> the okay / <s> ok (b) one and / why don’t (b) one and you / why don’t you (b) the okay one / ok why (b) <s> the okay one / <s> ok why (b) and / don’t (b) and you / don’t you Figure 3: An example of rule discovery. where ρ(r, TASR) is the result of applying rule r on text TASR. As outlined in Figure 1, rules that occur in the training sample more often than an established threshold are ranked according to the scoring function. The ranking process is iterative: in each iteration, the highest-scoring rule rbest is selected. In subsequent iterations, the training data TASR are replaced with the result of applying the selected rule on them (TASR ←ρ(rbest, TASR)) and the remaining rules are scored on the transformed training text. This ensures that the scoring and ranking of remaining rules takes into account the changes brought by the application of the previously selected rules. The iterations stop when the scoring function reaches zero: none of the remaining rules improves the WER on the training data. On testing data, rules are applied to ASR tran767 scripts in the same order in which they were selected. 3 Experimental Design Several combinations of TBL parameters were tested with no tuning or modifications between tests. As the proposed method was not refined during the experiments, and since one of the goals of our proposed approach is to eliminate the need for developmental data sets, the available data were partitioned only into training and test sets, with one additional hour set aside for code development and debugging. 
It can be assumed that a one-hour lecture given by the same instructor will exhibit a strong cohesion, both in topic and in speaking style, between its parts. Therefore, in contrast to typical TBL solutions, we have evaluated our TBL-based approach by partitioning each 50 minute lecture into a training and a test set, where the training set is smaller than the test set. As mentioned in the introduction, it is feasible to obtain manual transcripts for the first 10 to 15 minutes of a lecture. As such, the evaluation was carried out with two values for the training size: the first fifth (TS = 20%) and the first third (TS = 33%) of the lecture being manually transcribed. Besides the training size parameter, during all experimental tests a second parameter was also considered: the rule pruning threshold (RT). As described in Section 2.2, of all the rules learned during the rule discovery step, only those that occur more often than the threshold are scored and ranked. This parameter can be set as low as 1 (consider all rules) or 2 (consider all rules that occur at least twice over the training set). For largerscale tasks, the threshold serves as a pruning alternative to the computational burden of scoring several thousand rules. A large threshold could potentially lead to discrediting low-frequency but high-scoring rules. Due to the intentionally small size of our training data for lecture TBL, the lowest threshold was set to RT = 2. When a development set is available, several values for the RT parameter could be tested and the optimal one chosen for the evaluation task. Since we used no development set, we tested two more values for the rule pruning threshold: RT = 5 and RT = 10. Since our TBL solution is an extension of the solution proposed in Peters and Drexel (2004), their heuristic is our baseline. Their scoring function is the expected error reduction: XER = ErrLen · (GoodCnt −BadCnt), a WER approximation computed over all instances of rules applicable to the training set which reflects the difference between true positives (the number of times a rule is correctly applied to errorful transcripts – GoodCnt) and false positives (the instances of correct text being unnecessarily “corrected” by a rule – BadCnt). These are weighted by the length in words (ErrLen) of the text area that matches the left-hand side of the replacement. 3.1 Acoustic Model The experiments were conducted using the SONIC toolkit (Pellom, 2001). We used the acoustic model distributed with the toolkit, which was trained on 30 hours of data from 283 speakers from the WSJ0 and WSJ1 subsets of the 1992 development set of the Wall Street Journal (WSJ) Dictation Corpus. Our own lectures consist of eleven lectures of approximately 50 minutes each, recorded in three separate courses, each taught by a different instructor. For each course, the recordings were performed in different weeks of the same term. They were collected in a large, amphitheatre-style, 200-seat lecture hall using the AKG C420 head-mounted directional microphone. The recordings were not intrusive, and no alterations to the lecture environment or proceedings were made. The 1-channel recordings were digitized using a TASCAM US-122 audio interface as uncompressed audio files with a 16KHz sampling rate and 16-bit samples. The audio recordings were segmented at pauses longer than 200ms, manually for one instructor and automatically for the other two, using the silence detection algorithm described in Placeway et al. (1997). 
Our implementation was manually finetuned for every instructor in order to detect all pauses longer than 200ms while allowing a maximum of 20 seconds in between pauses. The evaluation data are described in Table 1. Four evaluations tasks were carried out; for instructor R, two separate evaluation sessions, R-1 and R-2, were conducted, using two different language models. The pronunciation dictionary was custom-built to include all words appearing in the corpus on which the language model was trained. Pronunciations were extracted from the 5K-word WSJ dictionary included with the SONIC toolkit and from 768 Evaluation task name R-1 R-2 G-1 K-1 Instructor R. G. K. Gender Male Male Female Age Early 60s Mid 40s Early 40s Segmentation manual automatic automatic # lectures 4 3 4 Lecture topic Interactive Software Unix promedia design design gramming Language model WSJ-5K WEB ICSISWB WSJ-5K Table 1: The evaluation data. the 100K-word CMU pronunciation dictionary. For all models, we allowed one non-dictionary word per utterance, but only for lines longer than four words. For allowable non-dictionary words, SONIC’s sspell lexicon access tool was used to generate pronunciations using letter-to-sound predictions. The language models were trained using the CMU-CAM Language Modelling Toolkit (Clarkson and R., 1997) with a training vocabulary size of 40K words. 3.2 Language Models The four evaluations were carried out using the language models given in Table 1, either custombuilt for a particular topic or the baseline models included in the SONIC toolkit, as follows: WSJ-5K is the baseline model of the SONIC toolkit. It is a 5K-word model built using the same corpus as the base acoustic model included in the toolkit. ICSISWB is a 40K-word model created through the interpolation of language models built on the entire transcripts of the ICSI Meeting corpus and the Switchboard corpus. The ICSI Meeting corpus consists of recordings of universitybased multi-speaker research meetings, totaling about 72 hours from 75 meetings (Janin et al., 2003). The Switchboard (SWB) corpus (Godfrey et al., 1992) is a large collection of about 2500 scripted telephone conversations between approximately 500 English-native speakers, suitable for the conversational style of lectures, as also suggested in (Park et al., 2005). WEB is a language model built for each particular lecture, using information retrieval techniques that exploit the lecture slides to automatically mine the World Wide Web for documents related to the presented topic. WEB adapts ICSISWB using these documents to build a language model that better matches the lecture topic. It is also a 40K-word model built on training corpora with an average file size of approximately 200 MB per lecture, and an average of 35 million word tokens per lecture. It is appropriate to take the difference between ICSISWB and WSJ-5K to be one of greater genre specificity, whereas the difference between WEB and ICSISWB is one of greater topic-specificity. Our experiments on these three models (Munteanu et al., 2007) shows that the topic adaptation provides nearly all of the benefit. 
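Before presenting the results, the scoring and selection loop of Section 2.2 can be summarized in a small sketch: true WER is computed by edit distance, each candidate rule r is scored by SWER(r) = WER(TASR, T) − WER(ρ(r, TASR), T) over the training portion, and rules are selected greedily until no rule improves WER. This is a simplified illustration rather than the actual implementation; the training set is represented here as a single token sequence, rule discovery (Figure 2) and the rule pruning threshold RT are omitted, and the function names are choices made for the example.

```python
def wer(hyp, ref):
    """Word error rate of a hypothesis token list against a reference token list."""
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)] / max(len(ref), 1)

def apply_rule(rule, tokens):
    """Replace every occurrence of the left-hand n-gram with the right-hand n-gram."""
    lhs, rhs = rule
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i + len(lhs)] == list(lhs):
            out.extend(rhs)              # a rule never applies to its own output
            i += len(lhs)
        else:
            out.append(tokens[i])
            i += 1
    return out

def select_rules(candidate_rules, asr, ref):
    """Greedy TBL rule selection with the true-WER scoring function S_WER."""
    selected = []
    while True:
        base = wer(asr, ref)
        scored = [(base - wer(apply_rule(r, asr), ref), r) for r in candidate_rules]
        gain, best = max(scored, key=lambda x: x[0], default=(0.0, None))
        if best is None or gain <= 0:
            break                        # no remaining rule reduces WER on training data
        selected.append(best)
        asr = apply_rule(best, asr)      # re-score remaining rules on the transformed text
        candidate_rules = [r for r in candidate_rules if r != best]
    return selected
```

At test time, the selected rules would simply be applied to the ASR transcript in the order in which they were selected.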
4 Results Tables 2, 3 and 43 present the evaluation results ICSISWB Lecture 1 Lecture 2 Lecture 3 TS = % 20 33 20 33 20 33 Initial WER 50.93 50.75 54.10 53.93 48.79 49.35 XER RT = 10 46.63 49.38 49.93 48.61 49.52 50.43 RT = 5 48.34 49.75 49.32 48.81 49.58 49.26 RT = 2 54.05 56.84 52.01 49.11 50.37 51.66 XER-NoS RT = 10 49.54 49.38 54.10 53.93 48.79 48.24 RT = 5 49.54 49.31 56.70 55.50 48.51 48.42 RT = 2 59.00 59.28 57.61 55.03 50.41 52.67 SW ER RT = 10 46.63 46.53 49.80 48.44 45.83 45.42 RT = 5 46.63 45.60 47.75 47.23 44.76 44.44 RT = 2 44.48 44.30 47.46 47.02 43.60 44.13 Table 4: Experimental evaluation: WER values for instructor G using the ICSISWB language model. for instructors R and G. The transcripts were obtained through ASR runs using three different language models. The TBL implementation with our scoring function SW ER brings relative WER reductions ranging from 10.5% to 14.9%, with an average of 12.9%. These WER reductions are greater than those produced by the XER baseline approach. It is not possible to provide confidence intervals since the proposed method does not tune parameters from sampled data (which we regard as a very positive quality for such a method to have). Our speculative experimentation with several values for TS and RT, however, leads us to conclude that this method is significantly less sensitive to variations in both the training size TS and the rule pruning threshold RT than earlier work, making it suitable for application to tasks with limited training data – a result somewhat expected since rules are validated through direct WER reductions over the entire training set. 3Although WSJ-5K and ICSISWB exhibited nearly the same WER in our earlier experiments on all lecturers, we did find upon inspection of the transcripts in question that ICSISWB was better interpretable on speakers that had more casual speaking styles, whereas WSJ-5K was better on speakers with more rehearsed styles. We have used whichever of these baselines was the best interpretable in our experiments here (WSJ-5K for R and K, ICSISWB for G). 769 WSJ-5K Lecture 1 Lecture 2 Lecture 3 Lecture 4 TS = % 20 33 20 33 20 33 20 33 Initial WER 50.48 50.93 51.31 51.90 50.28 49.23 54.39 54.04 XER RT = 10 49.97 49.82 49.27 49.77 46.85 48.08 52.17 50.58 RT = 5 50.01 50.07 49.99 51.13 48.39 47.37 50.91 49.62 RT = 2 49.87 51.75 49.52 51.13 47.13 47.31 52.70 50.56 XER-NoS RT = 10 47.25 46.82 49.98 48.72 48.44 45.21 51.37 49.73 RT = 5 49.03 48.78 47.37 51.25 47.84 44.07 49.54 48.97 RT = 2 52.21 53.47 49.31 52.29 50.85 49.41 50.63 51.81 SW ER RT = 10 45.18 44.58 49.06 45.97 46.49 45.30 49.60 47.95 RT = 5 44.82 43.82 46.73 45.52 45.64 43.18 47.79 46.74 RT = 2 44.04 43.99 45.81 45.16 44.35 41.49 46.89 44.28 Table 2: Experimental evaluation: WER values for instructor R using the WSJ-5K language model. 
WEB Lecture 1 Lecture 2 Lecture 3 Lecture 4 TS = % 20 33 20 33 20 33 20 33 Initial WER 45.54 45.85 43.36 43.87 46.69 47.14 49.78 49.38 XER RT = 10 42.91 43.90 42.44 43.81 46.78 45.35 46.92 49.65 RT = 5 43.45 43.81 42.65 44.37 46.90 42.12 47.34 46.04 RT = 2 43.26 45.46 44.19 44.66 43.77 45.12 61.54 60.40 XER-NoS RT = 10 43.51 42.97 42.11 41.98 44.66 46.59 47.24 46.30 RT = 5 44.96 42.98 40.01 40.52 44.66 41.74 47.23 44.35 RT = 2 46.72 48.16 44.79 45.87 40.44 44.32 61.84 64.40 SW ER RT = 10 41.98 41.44 42.11 40.75 44.66 45.27 47.24 45.85 RT = 5 40.97 40.56 38.85 39.08 44.66 40.84 45.27 42.39 RT = 2 40.67 40.47 38.00 38.07 40.00 40.08 43.31 41.52 Table 3: Experimental evaluation: WER values for instructor R using the WEB language models. As for how the transcripts improve, words with lower information content (e.g., a lower tf.idf score) are corrected more often and with more improvement than words with higher information content. The topic-specific language model adaptation that the TBL follows upon benefits words with higher information content more. It is possible that the favour observed in TBL with SW ER towards lower information content is a bias produced by the preceding round of language model adaptation, but regardless, it provides a muchneeded complementary effect. This can be observed in Tables 2 and 3, in which TBL produces nearly the same RER in either table for any lecture. We have also extensively experimented with the usability of lecture transcripts on human subjects (Munteanu et al., 2006), and have found that taskbased usability varies in linear relation to WER. An analysis of the rules selected by both TBL implementations revealed that using the XER approximation leads to several single-word rules being selected, such as rules removing all instances of frequent stop-words such as “the” and “for” or pronouns such as “he.” Therefore, an empirical improvement (XER −NoS) of the baseline was implemented that, beside pruning rules below the RT threshold, omits such single-word rules from being selected. As shown in Tables 2, 3 and 4, this restriction slightly improves the performance of the approximation-based TBL for some values of the RT and TS parameters, although it still does not consistently match the WER reductions of our scoring function. Although the experimental evaluation shows positive improvements in transcript quality through TBL, in particular when using the SW ER scoring function, an exception is illustrated in Table 5. The recordings for this evaluation were collected from a course on Unix programming, and lectures were highly interactive. Instructor K used numerous examples of C or Shell code, many of them being developed and tested in class. While the keywords from a programming language can be easily added to the ASR lexicon, the pronunciation of such abbreviated forms (especially for Shell programming) and of mostly all variable and custom function names proved to be a significant difficulty for the ASR system. This, combined with a high speaking rate and often inconsistently truncated words, led to few TBL rules occurring even above the lowest RT = 2 threshold (despite many TBL rules being initially discovered). As previously mentioned, one of the drawbacks of global TBL rule scoring is the heavy computational burden. 
The experiments conducted here, however, showed an average learning time of one hour per one-hour lecture, reaching at most three 770 WSJ-5K Lecture 1 Lecture 2 Lecture 3 Lecture 4 TS = % 20 33 20 33 20 33 20 33 Initial WER 44.31 44.06 46.12 45.80 51.10 51.19 53.92 54.89 XER RT = 10 44.31 44.06 46.12 46.55 51.10 51.19 53.92 54.89 RT = 5 44.31 44.87 46.82 47.47 51.10 51.19 53.96 55.56 RT = 2 47.46 55.21 50.54 51.01 52.60 54.93 57.48 60.46 XER-NoS RT = 10 44.31 44.06 46.12 46.55 51.10 51.19 53.92 54.89 RT = 5 44.31 44.87 46.82 47.47 51.10 51.19 53.96 55.56 RT = 2 46.43 54.41 50.54 51.01 53.01 55.02 57.47 60.02 SW ER RT = 10 44.31 44.06 46.12 45.80 51.10 51.19 53.92 54.89 RT = 5 44.31 44.05 46.11 45.88 51.10 51.19 53.92 54.89 RT = 2 44.34 44.07 46.03 45.89 50.96 50.93 54.01 55.16 Table 5: Experimental evaluation: WER values for instructor K using the WSJ-5K language model. hours4 for a threshold of 2 when training over transcripts for one third of a lecture. Therefore, it can be concluded that, despite being computationally more intensive than a heuristic approximation (for which the learning time is on the order of just a few minutes), a TBL system using a global, WERcorrelated scoring function not only produces better transcripts, but also produces them in a feasible amount of time with only a small amount of manual transcription for each lecture. 5 Summary and Discussion One of the challenges to reducing the WER of ASR transcriptions of lecture recordings is the lack of manual transcripts on which to train various ASR improvements. In particular, for onehour lectures given by different lecturers (such as, for example, invited presentations), it is often impractical to manually transcribe parts of the lecture that would be useful as training or development data. However, transcripts for the first 10-15 minutes of a particular lecture can be easily obtained. In this paper, we presented a solution that improves the quality of ASR transcripts for lectures. WER is reduced by 10% to 14%, with an average reduction of 12.9%, relative to initial values. This is achieved by making use of manual transcripts from as little as the first 10 minutes of a one-hour lecture. The proposed solution learns word-level transformation-based rules that attempt to replace parts of the ASR transcript with possible corrections. The experimental evaluation carried out over eleven lectures from three different courses and instructors shows that this amount of manual transcription can be sufficient to further improve a lecture-specific ASR system. 4It should be noted that, in order to preserve compatibility with other software tools, the code developed for these experiments was not optimized for speed. It is expected that a dedicated implementation would result in even lower runtimes. In particular, we demonstrated that a true WERbased scoring function for the TBL algorithm is both feasible and effective with a limited amount of training data and no development data. The proposed function assigns scores to TBL rules that directly correlate with reductions in the WER of the entire training set, leading to a better performance than that of a heuristic approximation. Furthermore, a scoring function that directly optimizes for WER reductions is more robust to variations in training size as well as to the value of the rule pruning threshold. 
As little as a value of 2 can be used for the threshold (scoring all rules that occur at least twice), with limited impact on the computational burden of learning the transformation rules. References E. Brill. 1992. A simple rule-based part of speech tagger. In Proc. 3rd Conf. on Applied NLP (ANLP), pages 152 – 155. P.R. Clarkson and Rosenfeld R. 1997. Statistical language modeling using the CMU-Cambridge Toolkit. In Proc. Eurospeech, volume 1, pages 2707–2710. J.G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (ROVER). In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 347–354. C. F¨ugen, M. Kolss, D. Bernreuther, M. Paulik, S. St¨uker, S. Vogel, and A. Waibel. 2006. Open domain speech recognition & translation: Lectures and speeches. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 569–572. J. Glass, T.J. Hazen, S. Cyphers, I. Malioutov, D. Huynh, and R. Barzilay. 2007. Recent progress in the MIT spoken lecture processing project. In Proc. 10th EuroSpeech / 8th InterSpeech, pages 2553–2556. 771 J. J. Godfrey, E. C. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Proc. IEEE Conf. Acoustics, Speech, and Signal Processing (ICASSP), pages 517–520. D. Gusfield. 1997. Algorithms on Strings, Trees, and Sequences. Cambridge University Press. T.J. Hazen. 2006. Automatic alignment and error correction of human generated transcripts for long speech recordings. In Proc. 9th Intl. Conf. on Spoken Language Processing (ICSLP) / InterSpeech, pages 1606–1609. B-J. Hsu and J. Glass. 2006. Style & topic language model adaptation using HMM-LDA. In Proc. ACL Conf. on Empirical Methods in NLP (EMNLP), pages 373–381. Q. Huo and W. Li. 2007. An active approach to speaker and task adaptation based on automatic analysis of vocabulary confusability. In Proc. 10th EuroSpeech / 8th InterSpeech, pages 1569–1572. A. Janin, Baron D., J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The ICSI meeting corpus. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pages 364–367. K. Kato, H. Nanjo, and T. Kawahara. 2000. Automatic transcription of lecture speech using topicindependent language modeling. In Proc. Intl. Conf. on Spoken Language Processing (ICSLP), volume 1, pages 162–165. D. Klakow. 2000. Selecting articles from the language model training corpus. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pages 1695–1698. E. Leeuwis, M. Federico, and M. Cettolo. 2003. Language modeling and transcription of the TED corpus lectures. In Proc. Intl. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 232–235. L. Mangu and M. Padmanabhan. 2001. Error corrective mechanisms for speech recognition. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pages 29–32. C. Munteanu, R. Baecker, and G. Penn. 2008. Collaborative editing for improved usefulness and usability of transcript-enhanced webcasts. In Proc. ACM SIGCHI Conf. (CHI), pages 373–382. C. Munteanu, R. Baecker, G. Penn, E. Toms, and D. James. 2006. The effect of speech recognition accuracy rates on the usefulness and usability of webcast archives. In Proc. ACM SIGCHI Conf. (CHI), pages 493–502. C. Munteanu, G. Penn, and R. Baecker. 2007. Webbased language modelling for automatic lecture transcription. In Proc. 
10th EuroSpeech / 8th InterSpeech, pages 2353–2356. H. Nanjo and T. Kawahara. 2003. Unsupervised language model adaptation for lecture speech recognition. In Proc. ISCA / IEEE Workshop on Spontaneous Speech Processing and Recognition (SSPR). G. Ngai and R. Florian. 2001. Transformation-based learning in the fast lane. In Proc. 2nd NAACL, pages 1–8. T. Niesler and D. Willett. 2002. Unsupervised language model adaptation for lecture speech transcription. In Proc. Intl. Conf. on Spoken Language Processing (ICSLP/Interspeech), pages 1413–1416. A. Park, T. J. Hazen, and J. R. Glass. 2005. Automatic processing of audio lectures for information retrieval: Vocabulary selection and language modeling. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP). B. L. Pellom. 2001. SONIC: The university of colorado continuous speech recognizer. Technical Report #TR-CSLR-2001-01, University of Colorado. J. Peters and C. Drexel. 2004. Transformation-based error correction for speech-to-text systems. In Proc. Intl. Conf. on Spoken Language Processing (ICSLP/Interspeech), pages 1449–1452. P. Placeway, S. Chen, M. Eskenazi, U. Jain, V. Parikh, B. Raj, M. Ravishankar, R. Rosenfeld, K. Seymore, and M. Siegler. 1997. The 1996 HUB-4 Sphinx-3 system. In Proc. DARPA Speech Recognition Workshop. G. Riccardi and D. Hakkani-Tur. 2005. Active learning: Theory and applications to automatic speech recognition. IEEE Trans. Speech and Audio Processing, 13(4):504–511. E. K. Ringger and J. F. Allen. 1996. Error correction via a post-processor for continuous speech recognition. In Proc. IEEE Conf. on Acoustics, Speech, and Signal Processing (ICASSP), pages 427–430. E. Roche and Y. Schabes. 1995. Deterministic part-ofspeech tagging with finite-state transducers. Computational Linguistics, 21(2):227–253. L. von Ahn and L. Dabbish. 2004. Labeling images with a computer game. In Proc. ACM SIGCHI Conf. (CHI), pages 319–326. 772
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 773–781, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Quadratic-Time Dependency Parsing for Machine Translation Michel Galley Computer Science Department Stanford University Stanford, CA 94305-9020 [email protected] Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305-9010 [email protected] Abstract Efficiency is a prime concern in syntactic MT decoding, yet significant developments in statistical parsing with respect to asymptotic efficiency haven’t yet been explored in MT. Recently, McDonald et al. (2005b) formalized dependency parsing as a maximum spanning tree (MST) problem, which can be solved in quadratic time relative to the length of the sentence. They show that MST parsing is almost as accurate as cubic-time dependency parsing in the case of English, and that it is more accurate with free word order languages. This paper applies MST parsing to MT, and describes how it can be integrated into a phrase-based decoder to compute dependency language model scores. Our results show that augmenting a state-ofthe-art phrase-based system with this dependency language model leads to significant improvements in TER (0.92%) and BLEU (0.45%) scores on five NIST Chinese-English evaluation test sets. 1 Introduction Hierarchical approaches to machine translation have proven increasingly successful in recent years (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008), and often outperform phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) on target-language fluency and adequacy. However, their benefits generally come with high computational costs, particularly when chart parsing, such as CKY, is integrated with language models of high orders (Wu, 1996). Indeed, synchronous CFG parsing with m-grams runs in O(n3m) time, where n is the length of the sentence.1 Furthermore, synchronous CFG approaches often only marginally outperform the most com1The algorithmic complexity of (Wu, 1996) is O(n3+4(m−1)), though Huang et al. (2005) present a more efficient factorization inspired by (Eisner and Satta, 1999) that yields an overall complexity of O(n3+3(m−1)), i.e., O(n3m). In comparison, phrase-based decoding can run in linear time if a distortion limit is imposed. Of course, this comparison holds only for approximate algorithms. Since exact MT decoding is NP complete (Knight, 1999), there is no exact search algorithm for either phrase-based or syntactic MT that runs in polynomial time (unless P = NP). petitive phrase-based systems in large-scale experiments such as NIST evaluations.2 This lack of significant difference may not be completely surprising. Indeed, researchers have shown that gigantic language models are key to state-ofthe-art performance (Brants et al., 2007), and the ability of phrase-based decoders to handle large-size, high-order language models with no consequence on asymptotic running time during decoding presents a compelling advantage over CKY decoders, whose time complexity grows prohibitively large with higher-order language models. While context-free decoding algorithms (CKY, Earley, etc.) may sometimes appear too computationally expensive for high-end statistical machine translation, there are many alternative parsing algorithms that have seldom been explored in the machine translation literature. 
The parsing literature presents faster alternatives for both phrasestructure and dependency trees, e.g., O(n) shiftreduce parsers and variants ((Ratnaparkhi, 1997; Nivre, 2003), inter alia). While deterministic parsers are often deemed inadequate for dealing with ambiguities of natural language, highly accurate O(n2) algorithms exist in the case of dependency parsing. Building upon the theoretical work of (Chu and Liu, 1965; Edmonds, 1967), McDonald et al. (2005b) present a quadratic-time dependency parsing algorithm that is just 0.7% less accurate than “full-fledged” chart parsing (which, in the case of dependency parsing, runs in time O(n3) (Eisner, 1996)). In this paper, we show how to exploit syntactic dependency structure for better machine translation, under the constraint that the depen2Results of the 2008 NIST Open MT evaluation (http://www.itl.nist.gov/iad/mig/tests/mt/2008/doc/ mt08_official_results_v0.html) reveal that, while many of the best systems in the Chinese-English and Arabic-English tasks incorporate synchronous CFG models, score differences with the best phrase-based system were insignificantly small. 773 dency structure is built as a by-product of phrasebased decoding, without reliance on a dynamicprogramming or chart parsing algorithm such as CKY or Earley. Adapting the approach of McDonald et al. (2005b) for machine translation, we incrementally build dependency structure left-toright in time O(n2) during decoding. Most interestingly, the time complexity of non-projective dependency parsing remains quadratic as the order of the language model increases. This provides a compelling advantage over previous dependency language models for MT (Shen et al., 2008), which use a 5-gram LM only during reranking. In our experiments, we build a competitive baseline (Koehn et al., 2007) incorporating a 5-gram LM trained on a large part of Gigaword and show that our dependency language model provides improvements on five different test sets, with an overall gain of 0.92 in TER and 0.45 in BLEU scores. These results are found to be statistically very significant (p ≤.01). 2 Dependency parsing for machine translation In this section, we review dependency parsing formulated as a maximum spanning tree problem (McDonald et al., 2005b), which can be solved in quadratic time, and then present its adaptation and novel application to phrase-based decoding. Dependency models have recently gained considerable interest in many NLP applications, including machine translation (Ding and Palmer, 2005; Quirk et al., 2005; Shen et al., 2008). Dependency structure provides several compelling advantages compared to other syntactic representations. First, dependency links are close to the semantic relationships, which are more likely to be consistent across languages. Indeed, Fox (2002) found inter-lingual phrasal cohesion to be greater than for a CFG when using a dependency representation, for which she found only 12.6% of head crossings and 9.2% modifier crossings. Second, dependency trees contain exactly one node per word, which contributes to cutting down the search space during parsing: indeed, the task of the parser is merely to connect existing nodes rather than hypothesizing new ones. Finally, dependency models are more flexible and account for (non-projective) head-modifier relations that CFG models fail to represent adequately, which is problematic with certain types of grammatical constructions and with free word order languages, who do you think they hired ? WP VB PRP VB PRP VBD . 
1 2 3 4 5 6 7 <root> <root> 0 Figure 1: A dependency tree with directed edges going from heads to modifiers. The edge between who and hired causes this tree to be non-projective. Such a head-modifier relationship is difficult to represent with a CFG, since all words directly or indirectly headed by hired (i.e., who, think, they, and hired) do not constitute a contiguous sequence of words. as we will see later in this section. The most standardly used algorithm for parsing with dependency grammars is presented in (Eisner, 1996; Eisner and Satta, 1999). It runs in time O(n3), where n is the length of the sentence. Their algorithm exploits the special properties of dependency trees to reduce the worst-case complexity of bilexical parsing, which otherwise requires O(n4) for bilexical constituency-based parsing. While it seems difficult to improve the asymptotic running time of the Eisner algorithm beyond what is presented in (Eisner and Satta, 1999), McDonald et al. (2005b) show O(n2)-time parsing is possible if trees are not required to be projective. This relaxation entails that dependencies may cross each other rather than being required to be nested, as shown in Fig. 1. More formally, a non-projective tree is any tree that does not satisfy the following definition of a projective tree: Definition. Let x = x1 ···xn be an input sentence, and let y be a rooted tree represented as a set in which each element (i, j) ∈y is an ordered pair of word indices of x that defines a dependency relation between a head xi and a modifier xj. By definition, the tree y is said to be projective if each dependency (i, j) satisfies the following property: each word in xi+1 ···xj−1 (if i < j) or in xj+1 ···xi−1 (if j < i) is a descendent of head word xi. This relaxation is key to computational efficiency, since the parser does not need to keep track of whether dependencies assemble into contiguous spans. It is also linguistically desirable in the case of free word order languages such as Czech, Dutch, and German. Non-projective dependency structures are sometimes even needed for languages like English, e.g., in the case of the wh-movement shown in Fig. 1. For languages 774 with relatively rigid word order such as English, there may be some concern that searching the space of non-projective dependency trees, which is considerably larger than the space of projective dependency trees, would yield poor performance. That is not the case: dependency accuracy for nonprojective parsing is 90.2% for English (McDonald et al., 2005b), only 0.7% lower than a projective parser (McDonald et al., 2005a) that uses the same set of features and learning algorithm. In the case of dependency parsing for Czech, (McDonald et al., 2005b) even outperforms projective parsing, and was one of the top systems in the CoNLL-06 shared task in multilingual dependency parsing. 2.1 O(n2)-time dependency parsing for MT We now formalize weighted non-projective dependency parsing similarly to (McDonald et al., 2005b) and then describe a modified and more efficient version that can be integrated into a phrasebased decoder. Given the single-head constraint, parsing an input sentence x = (x0,x1,··· ,xn) is reduced to labeling each word xj with an index i identifying its head word xi. We include the dummy root symbol x0 = ⟨root⟩so that each word can be a modifier. 
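To make the projectivity definition concrete, the following sketch (illustrative code, not part of the authors' system) tests whether a set of head attachments forms a projective tree; applied to the wh-question of Figure 1 it returns False, since 'who' attaches to 'hired' across intervening words that are not descendants of 'hired'.

```python
def is_projective(heads):
    """Check whether a dependency tree is projective.

    `heads` maps each modifier index j (1-based) to its head index i,
    with 0 denoting the dummy <root> symbol.  The tree is projective iff
    for every dependency (i, j), every word strictly between i and j is
    a descendant of the head x_i.
    """
    def is_descendant_of(word, ancestor):
        # Follow head links from `word` up towards the root.
        while word != 0:
            if word == ancestor:
                return True
            word = heads[word]
        return ancestor == 0  # every word is a descendant of <root>

    for j, i in heads.items():
        lo, hi = (i, j) if i < j else (j, i)
        for k in range(lo + 1, hi):
            if not is_descendant_of(k, i):
                return False
    return True


# "who do you think they hired ?" with who <- hired (cf. Figure 1):
heads = {1: 6, 2: 4, 3: 4, 4: 0, 5: 6, 6: 4, 7: 4}
print(is_projective(heads))  # False: the tree is non-projective
```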
We score each dependency relation using a standard linear model s(i, j) = λ ·f(i, j) (1) whose weight vector λ is trained using MIRA (Crammer and Singer, 2003) to optimize dependency parsing accuracy (McDonald et al., 2005a). As is commonly the case in statistical parsing, the score of the full tree is decomposed as the sum of the score of all edges: s(x,y) = ∑ (i,j)∈y λ ·f(i, j) (2) When there is no need to ensure projectivity, one can independently select the highest scoring edge (i, j) for each modifier xj, yet we generally want to ensure that the resulting structure is a tree, i.e., that it does not contain any circular dependencies. This optimization problem is a known instance of the maximum spanning tree (MST) problem. In our case, the graph is directed—indeed, the equality s(i, j) = s(j,i) is generally not true and would be linguistically aberrant—so the problem constitutes an instance of the less-known MST problem for directed graphs. This problem is solved with the Chu-Liu-Edmonds (CLE) algorithm (Chu and Liu, 1965; Edmonds, 1967). Formally, we represent the graph G = (V,E) with a vertex set V = x = {x0,··· ,xn} and a set of directed edges E = [0,n]×[1,n], in which each edge (i, j), representing the dependency xi →xj, is assigned a score s(i, j). Finding the spanning tree y ⊂E rooted at x0 that maximizes s(x,y) as defined in Equation 2 has a straightforward solution in O(n2 log(n)) time for dense graphs such as G, though Tarjan (1977) shows that the problem can be solved in O(n2). Hence, non-projective dependency parsing is solved in quadratic time. The main idea behind the CLE algorithm is to first greedily select for each word xj the incoming edge (i, j) with highest score, then to successively repeat the following two steps: (a) identify a loop in the graph, and if there is none, halt; (b) contract the loop into a single vertex, and update scores for edges coming in and out of the loop. Once all loops have been eliminated, the algorithm maps back the maximum spanning tree of the contracted graph onto the original graph G, and it can be shown that this yields a spanning tree that is optimal with respect to G and s (Georgiadis, 2003). The greedy approach of selecting the highest scoring edge (i, j) for each modifier xj can easily be applied left-to-right during phrase-based decoding, which proceeds in the same order. For each hypothesis expansion, our decoder generates the following information for the new hypothesis h: • a partial translation x; • a coverage set of input words c; • a translation score σ. In the case of non-projective dependency parsing, we need to maintain additional information for each word xj of the partial translation x: • a predicted POS tag tj; • a dependency score sj. Dependency scores sj are initialized to −∞. Each time a new word is added to a partial hypothesis, the decoder executes the routine shown in Table 1. To avoid cluttering the pseudo-code, we make here the simplifying assumption that each hypothesis expansion adds exactly one word, though the real implementation supports the case of phrases of any length. Line 3 determines whether the translation hypothesis is complete, in which case it explicitly builds the graph G and 775 Decoding: hypothesis expansion step. 1. Inferer generates new hypothesis h = (x,c,σ) 2. j ←|x|−1 3. tj ←tagger(xj−3,··· ,xj) 4. if complete(c) 5. Chu-Liu-Edmonds(h) 6. else 7. for i = 1 to j 8. sj = max(sj,s(i, j)) 9. si = max(si,s(j,i)) Table 1: Hypothesis expansion with dependency scoring. 
finds the maximum spanning tree. Note that it is impractical to identify loops each time a new word is added to a translation hypothesis, since this requires explicitly storing the dense graph G, which would require an O(n2) copy operation during each hypothesis expansion; this would of course increase time and space complexity (the max operation in lines 8 and 9 only keeps the current best scoring edges). If there is any loop, the dependency score is adjusted in the last hypothesis expansion. In practice, we delay the computation of dependency scores involving word xj until tag tj+1 is generated, since dependency parsing accuracy is particularly low (−0.8%) when the next tag is unknown. We found that dependency scores with or without loop elimination are generally close and highly correlated, and that MT performance without final loop removal was about the same (generally less than 0.2% BLEU). While it seems that loopy graphs are undesirable when the goal is to obtain a syntactic analysis, that is not necessarily the case when one just needs a language modeling score. 2.2 Features for dependency parsing In our experiments, we use sets of features that are similar to the ones used in the McDonald parser, though we make a key modification that yields an asymptotic speedup that ensures a genuine O(n2) running time. The three feature sets that were used in our experiments are shown in Table 2. We write h-word, h-pos, m-word, m-pos to refer to head and modifier words and POS tags, and append a numerical value to shift the word offset either to the left or to the right (e.g., h-pos+1 is the POS to the right of the head word). We use the symbol ∧to represent feature conjunctions. Each feature in the table has a distinct identifier, so that, e.g., the POS features Unigram features: h-word, h-pos, h-word ∧h-pos, m-word, m-pos, m-word ∧m-pos Bigram features: h-word ∧m-word, h-pos ∧m-pos, h-word ∧h-pos ∧m-word, h-word ∧h-pos ∧m-pos, m-word ∧m-pos ∧h-word, m-word ∧m-pos ∧h-pos, h-word ∧h-pos ∧m-word ∧m-pos Adjacent POS features: h-pos ∧h-pos+1 ∧m-pos−1 ∧m-pos, h-pos ∧h-pos+1 ∧m-pos ∧m-pos+1, h-pos−1 ∧h-pos ∧m-pos−1 ∧m-pos, h-pos−1 ∧h-pos ∧m-pos ∧m-pos+1 In-between POS features: if i < j: h-pos ∧h-pos+k ∧m-pos k ∈[i,min(i+5, j)] h-pos ∧m-pos−k ∧m-pos k ∈[max(i, j −5), j] if i > j: m-pos ∧m-pos+k ∧h-pos k ∈[ j,min(j +5,i)] m-pos ∧h-pos−k ∧h-pos k ∈[max(j,i−5),i] Table 2: Features for dependency parsing. It is quite similar to the McDonald (2005a) feature set, except that it does not include the set of all POS tags that appear between each candidate head-modifier pair (i, j). This modification is essential in order to make our parser run in true O(n2) time, as opposed to (McDonald et al., 2005b). SOURCE IDS GENRE SENTENCES English CTB 050–325 newswire 3027 English ATB all newswire 13628 OntoNotes all broadcast news 14056 WSJ 02–21 financial news 39832 Total 70543 Table 3: Characteristics of our training data. The second column identifies documents and sections selected for training. h-pos are all distinct from m-pos features.3 The primary difference between our feature sets and the ones of McDonald et al. is that their set of “in between POS features” includes the set of all tags appearing between each pair of words. Extracting all these tags takes time O(n) for any arbitrary pair (i, j). Since i and j are both free variables, feature computation in (McDonald et al., 2005b) takes time O(n3), even though parsing itself takes O(n2) time. 
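To illustrate the cost issue and the window-based restriction adopted below, the following sketch (a hypothetical helper, simplified relative to the templates of Table 2 and not the released feature code) extracts in-between POS tags only within five positions of either endpoint and conjoins each with its offset from the nearer endpoint, so each candidate pair touches at most a constant number of tags.

```python
def in_between_pos_features(pos, i, j, window=5):
    """Sketch of window-limited 'in-between POS' features.

    `pos` is the tag sequence, `i` the head index, `j` the modifier index.
    Conjoining the head/modifier tags with *every* tag between them costs
    O(n) per pair and O(n^3) overall; restricting attention to tags within
    `window` words of either endpoint bounds the cost per pair by a constant.
    Each in-between tag is also conjoined with its offset from the nearer
    endpoint, which helps recover the accuracy lost by the restriction.
    """
    lo, hi = (i, j) if i < j else (j, i)
    # Positions within `window` words of either endpoint: at most 2*window.
    candidates = set(range(lo + 1, min(lo + 1 + window, hi)))
    candidates |= set(range(max(hi - window, lo + 1), hi))
    feats = []
    for k in sorted(candidates):
        offset = min(k - lo, hi - k)  # distance to the nearer endpoint
        feats.append(('between', pos[i], pos[k], pos[j], offset))
    return feats
```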
To make our parser genuinely O(n2), we modified the set of in-between POS features in two ways. First, we restrict extraction of in-between POS tags to those words that appear within a window of five words relative to either the head or the modifier. While this change alone ensures that feature extraction is now O(1) for each word pair, this causes a fairly high drop of performance (dependency accuracy 3In addition to these basic features, we follow McDonald in conjoining most features with two extra pieces of information: a boolean variable indicating whether the modifier attaches to the left or to the right, and the binned distance between the two words. 776 ALGORITHM TIME SETUP TRAINING TESTING ACCURACY Projective O(n3) Parsing WSJ(02-21) WSJ(23) 90.60 Chu-Liu-Edmonds O(n3) Parsing WSJ(02-21) WSJ(23) 89.64 Chu-Liu-Edmonds O(n2) Parsing WSJ(02-21) WSJ(23) 89.32 Local classifier O(n2) Parsing WSJ(02-21) WSJ(23) 89.15 Projective O(n3) MT CTB(050-325) CTB(001-049) 86.33 Chu-Liu-Edmonds O(n3) MT CTB(050-325) CTB(001-049) 85.68 Chu-Liu-Edmonds O(n2) MT CTB(050-325) CTB(001-049) 85.43 Local classifier O(n2) MT CTB(050-325) CTB(001-049) 85.22 Projective O(n3) MT CTB(050-325), WSJ(02-21), ATB, OntoNotes CTB(001-049) 87.40(**) Chu-Liu-Edmonds O(n3) MT CTB(050-325), WSJ(02-21), ATB, OntoNotes CTB(001-049) 86.79 Chu-Liu-Edmonds O(n2) MT CTB(050-325), WSJ(02-21), ATB, OntoNotes CTB(001-049) 86.45(*) Local classifier O(n2) MT CTB(050-325), WSJ(02-21), ATB, OntoNotes CTB(001-049) 86.29 Table 4: Dependency parsing experiments on test sentences of any length. The projective parsing algorithm is the one implemented as in (McDonald et al., 2005a), which is known as one of the top performing dependency parsers for English. The O(n3) non-projective parser of (McDonald et al., 2005b) is slightly more accurate than our version, though ours runs in O(n2) time. “Local classifier” refers to non-projective dependency parsing without removing loops as a post-processing step. The result marked with (*) identifies the parser used for our MT experiments, which is only about 1% less accurate than a state-of-the-art dependency parser (**). on our test was down 0.9%). To make our genuinely O(n2) parser almost as accurate as the nonprojective parser of McDonald et al., we conjoin each in-between POS with its position relative to (i, j). This relatively simple change reduces the drop in accuracy to only 0.34%.4 3 Dependency parsing experiments In this section, we compare the performance of our parsing model to the ones of McDonald et al. Since our MT test sets include newswire, web, and audio, we trained our parser on different genres. Our training data includes newswire from the English translation treebank (LDC2007T02) and the English-Arabic Treebank (LDC2006T10), which are respectively translations of sections of the Chinese treebank (CTB) and Arabic treebank (ATB). We also trained the parser on the broadcastnews treebank available in the OntoNotes corpus (LDC2008T04), and added sections 02-21 of the WSJ Penn treebank. Documents 001-040 of the English CTB data were set aside to constitute a test set for newswire texts. Our other test set is the standard Section 23 of the Penn treebank. The splits and amounts of data used for training are displayed in Table 3. Parsing experiments are shown in Table 4. We 4We need to mention some practical considerations that make feature computation fast enough for MT. Most features are precomputed before actual decoding. 
All target-language words to appear during beam search can be determined in advance, and all their unigram feature scores are precomputed. For features conditioned on both head and modifier, scores are cached whenever possible. The only features that are not cached are the ones that include contextual POS tags, since their miss rate is relatively high. distinguish two experimental conditions: Parsing and MT. For Parsing, sentences are cased and tokenization abides to the PTB segmentation as used in the Penn treebank version 3. For the MT setting, texts are all lower case, and tokenization was changed to improve machine translation (e.g., most hyphenated words were split). For this setting, we also had to harmonize the four treebanks. The most crucial modification was to add NP internal bracketing to the WSJ (Vadas and Curran, 2007), since the three other treebanks contain that information. Treebanks were also transformed to be consistent with MT tokenization. We evaluate MT parsing models on CTB rather than on WSJ, since CTB contains newswire and is thus more representative of MT evaluation conditions. To obtain part-of-speech tags, we use a state-of-the-art maximum-entropy (CMM) tagger (Toutanova et al., 2003). In the Parsing setting, we use its best configuration, which reaches a tagging accuracy of 97.25% on standard WSJ test data. In the MT setting, we need to use a less effective tagger, since we cannot afford to perform Viterbi inference as a by-product of phrase-based decoding. Hence, we use a simpler tagging model that assigns tag ti to word xi by only using features of words xi−3 ···xi, and that does not condition any decision based on any preceding or next tags (ti−1, etc.). Its performance is 95.02% on the WSJ, and 95.30% on the English CTB. Additional experiments reveal two main contributing factors to this drop on WSJ: tagging uncased texts reduces tagging accuracy by about 1%, and using only wordbased features further reduces it by 0.6%. Table 4 shows that the accuracy of our truly 777 O(n2) parser is only .25% to .34% worse than the O(n3) implementation of (McDonald et al., 2005b).5 Compared to the state-of-the-art projective parser as implemented in (McDonald et al., 2005a), performance is 1.28% lower on WSJ, but only 0.95% when training on all our available data and using the MT setting. Overall, we believe that the drop of performance is a reasonable price to pay considering the computational constraints imposed by integrating the dependency parser into an MT decoder. The table also shows a gain of more than 1% in dependency accuracy by adding ATB, OntoNotes, and WSJ to the English CTB training set. The four sources were assigned non-uniform weights: we set the weight of the CTB data to be 10 times larger than the other corpora, which seems to work best in our parsing experiments. While this improvement of 1% may seem relatively small considering that the amount of training data is more than 20 times larger in the latter case, it is quite consistent with previous findings in domain adaptation, which is known to be a difficult task. For example, (Daume III, 2007) shows that training a learning algorithm on the weighted union of different data sets (which is basically what we did) performs almost as well as more involved domain adaptation approaches. 4 Machine translation experiments In our experiments, we use a re-implementation of the Moses phrase-based decoder (Koehn et al., 2007). 
We use the standard features implemented almost exactly as in Moses: four translation features (phrase-based translation probabilities and lexically-weighted probabilities), word penalty, phrase penalty, linear distortion, and language model score. We also incorporated the lexicalized reordering features of Moses, in order to experiment with a baseline that is stronger than the default Moses configuration. The language pair for our experiments is Chinese-to-English. The training data consists of about 28 million English words and 23.3 million 5Note that our results on WSJ are not exactly the same as those reported in (McDonald et al., 2005b), since we used slightly different head finding rules. To extract dependencies from treebanks, we used the LTH Penn Converter (http:// nlp.cs.lth.se/pennconverter/), which extracts dependencies that are almost identical to those used for the CoNLL-2008 Shared Task. We constrain the converter not to use functional tags found in the treebanks, in order to make it possible to use automatically parsed texts (i.e., perform selftraining) in future work. Chinese words drawn from various news parallel corpora distributed by the Linguistic Data Consortium (LDC). In order to provide experiments comparable to previous work, we used the same corpora as (Wang et al., 2007): LDC2002E18, LDC2003E07, LDC2003E14, LDC2005E83, LDC2005T06, LDC2006E26, LDC2006E8, and LDC2006G05. Chinese words were automatically segmented with a conditional random field (CRF) classifier (Chang et al., 2008) that conforms to the Chinese Treebank (CTB) standard. In order to train a competitive baseline given our computational resources, we built a large 5-gram language model using the Xinhua and AFP sections of the Gigaword corpus (LDC2007T40) in addition to the target side of the parallel data. This data represents a total of about 700 million words. We manually removed documents of Gigaword that were released during periods that overlap with those of our development and test sets. The language model was smoothed with the modified Kneser-Ney algorithm as implemented in (Stolcke, 2002), and we only kept 4-grams and 5-grams that occurred at least three times in the training data.6 For tuning and testing, we use the official NIST MT evaluation data for Chinese from 2002 to 2008 (MT02 to MT08), which all have four English references for each input sentence. We used the 1082 sentences of MT05 for tuning and all other sets for testing. Parameter tuning was done with minimum error rate training (Och, 2003), which was used to maximize BLEU (Papineni et al., 2001). Since MERT is prone to search errors, especially with large numbers of parameters, we ran each tuning experiment three times with different initial conditions. We used n-best lists of size 200 and a beam size of 200. In the final evaluations, we report results using both TER (Snover et al., 2006) and the original BLEU metric as described in (Papineni et al., 2001). All our evaluations are performed on uncased texts. The results for our translation experiments are shown in Table 5. We compared two systems: one with the set of features described earlier in this section. The second system incorporates one additional feature, which is the dependency language 6We found that sections of Gigaword other than Xinhua and AFP provide almost no improvement in our experiments. By leaving aside the other sections, we were able to increase the order of the language model to 5-gram and perform relatively little pruning. 
This LM required 16GB of RAM during training. 778 BLEU[%] DEP. LM MT05 (tune) MT02 MT03 MT04 MT06 MT08 no 33.42 33.38 33.13 36.21 32.16 24.83 yes 34.19 (+.77**) 33.85 (+.47) 33.73 (+.6*) 36.67 (+.46*) 32.84 (+.68**) 24.91 (+.08) TER[%] DEP. LM MT05 (tune) MT02 MT03 MT04 MT06 MT08 no 57.41 58.07 57.32 56.09 57.24 61.96 yes 56.27 (−1.14**) 57.15 (−.92**) 56.09 (−1.23**) 55.30 (−.79**) 56.05 (−1.19**) 61.41 (−.55*) MT05 (tune) MT02 MT03 MT04 MT06 MT08 Sentences 1082 878 919 1788 1664 1357 Table 5: MT experiments with and without a dependency language model. We use randomization tests (Riezler and Maxwell, 2005) to determine significance: differences marked with a (*) are significant at the p ≤.05 level, and those marked as (**) are significant at the p ≤.01 level. model score computed with the dependency parsing algorithm described in Section 2. We used the dependency model trained on the English CTB and ATB treebank, WSJ, and OntoNotes. We see that the Moses decoder with integrated dependency language model systematically outperforms the Moses baseline. For BLEU evaluations, differences are significant in four out of six cases, and in the case of TER, all differences are significant. Regarding the small difference in BLEU scores on MT08, we would like to point out that tuning on MT05 and testing on MT08 had a rather adverse effect with respect to translation length: while the two systems are relatively close in terms of BLEU scores (24.83 and 24.91, respectively), the dependency LM provides a much bigger gain when evaluated with BLEU precision (27.73 vs. 28.79), i.e., by ignoring the brevity penalty. On the other hand, the difference on MT08 is significant in terms of TER. Table 6 provides experimental results on the NIST test data (excluding the tuning set MT05) for each of the three genres: newswire, web data, and speech (broadcast news and conversation). The last column displays results for all test sets combined. Results do not suggest any noticeable difference between genres, and the dependency language model provides significant gains on all genres, despite the fact that this model was primarily trained on news data. We wish to emphasize that our positive results are particularly noteworthy because they are achieved over a baseline incorporating a competitive 5-gram language model. As is widely acknowledged in the speech community, it can be difficult to outperform high-order n-gram models in large-scale experiments. Finally, we quantified the effective running time of our phrase-based decoder with and without our dependency language BLEU[%] DEP. LM newswire web speech all no 32.86 21.75 36.88 32.29 yes 33.19 22.64 37.51 32.74 (+0.33) (+0.89) (+0.63) (+0.45) TER[%] DEP. LM newswire web speech all no 57.73 62.64 55.16 58.02 yes 56.73 61.97 54.26 57.10 (−1) (−0.67) (−0.9) (−0.92) newswire web speech all Sentences 4006 1149 1451 6606 Table 6: Test set performances on MT02-MT04 and MT06MT08, where the data was broken down by genre. Given the large amount of test data involved in this table, all these results are statistically highly significant (p ≤.01). 10 20 30 40 50 60 70 80 90 0 20 40 60 80 100 120 140 160 sentence length seconds depLM baseline Figure 2: Running time of our phrase-based decoder with and without quadratic-time dependency LM scoring. model using MT05 (Fig. 2). In both settings, we selected the best tuned model, which yield the performance shown in the first column of Table 5. 
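The significance levels reported in Tables 5 and 6 come from randomization tests; as a rough illustration, the following sketch (a generic approximate randomization test with a user-supplied corpus_score aggregator, not the authors' implementation) estimates a p-value for a corpus-level metric difference between two systems.

```python
import random

def approximate_randomization(stats_a, stats_b, corpus_score,
                              trials=10000, seed=0):
    """Sketch of an approximate randomization test for comparing two MT systems.

    `stats_a[i]` / `stats_b[i]` hold per-sentence sufficient statistics of
    systems A and B on test sentence i (e.g. n-gram match counts and lengths
    for BLEU), and `corpus_score` aggregates a list of such statistics into a
    corpus-level score.  Under the null hypothesis the systems are
    interchangeable, so randomly swapping their outputs sentence by sentence
    should often produce differences at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(corpus_score(stats_a) - corpus_score(stats_b))
    hits = 0
    for _ in range(trials):
        shuf_a, shuf_b = [], []
        for sa, sb in zip(stats_a, stats_b):
            if rng.random() < 0.5:
                sa, sb = sb, sa  # swap the two systems on this sentence
            shuf_a.append(sa)
            shuf_b.append(sb)
        if abs(corpus_score(shuf_a) - corpus_score(shuf_b)) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)  # smoothed p-value estimate
```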
Our decoder was run on an AMD Opteron Processor 2216 with 16GB of memory, and without resorting to any rescoring method such as cube pruning. In the case of English translations of 40 words and shorter, the baseline system took 6.5 seconds per sentence, whereas the dependency LM system spent 15.6 seconds per sentence, i.e., 2.4 times the baseline running time. In the case of translations 779 longer than 40 words, average speeds were respectively 17.5 and 59.5 seconds per sentence, i.e., the dependency was only 3.4 times slower.7 5 Related work Perhaps due to the high computational cost of synchronous CFG decoding, there have been various attempts to exploit syntactic knowledge and hierarchical structure in other machine translation experiments that do not require chart parsing. Using a reranking framework, Och et al. (2004) found that various types of syntactic features provided only minor gains in performance, suggesting that phrase-based systems (Och and Ney, 2004) should exploit such information during rather than after decoding. Wang et al. (2007) sidestep the need to operate large-scale word order changes during decoding (and thus lessening the need for syntactic decoding) by rearranging input words in the training data to match the syntactic structure of the target language. Finally, Birch et al. (2007) exploit factored phrase-based translation models to associate each word with a supertag, which contains most of the information needed to build a full parse. When combined with a supertag n-gram language model, it helps enforce grammatical constraints on the target side. There have been various attempts to reduce the computational expense of syntactic decoding, including multi-pass decoding approaches (Zhang and Gildea, 2008; Petrov et al., 2008) and rescoring approaches (Huang and Chiang, 2007). In the latter paper, Huang and Chiang introduce rescoring methods named “cube pruning” and “cube growing”, which first use a baseline decoder (either synchronous CFG or a phrase-based system) and no LM to generate a hypergraph, and then rescoring this hypergraph with a language model. Huang and Chiang show significant speed increases with little impact on translation quality. We believe that their approach is orthogonal (and possibly complementary) to our work, since our paper proposes a new model for fully-integrated decoding that increases MT performance, and does not rely on rescoring. 7We note that our Java-based decoder is research rather than industrial-strength code and that it could be substantially optimized. Hence, we think the reader should pay more attention to relative speed differences between the two systems rather than absolute timings. 6 Conclusion and future work In this paper, we presented a non-projective dependency parser whose time-complexity of O(n2) improves upon the cubic time implementation of (McDonald et al., 2005b), and does so with little loss in dependency accuracy (.25% to .34%). Since this parser does not need to enforce projectivity constraints, it can easily be integrated into a phrase-based decoder during search (rather than during rescoring). We use dependency scores as an extra feature in our MT experiments, and found that our dependency model provides significant gains over a competitive baseline that incorporates a large 5-gram language model (0.92% TER and 0.45% BLEU absolute improvements). We plan to pursue other research directions using dependency models discussed in this paper. 
While we use a dependency language model to exemplify the use of hierarchical structure within phrase based decoders, we could extend this work to incorporate dependency features of both sourceand target side. Since parsing of the source is relatively inexpensive compared to the target side, it would be relatively easy to condition headmodifier dependencies not only on the two target words, but also on their corresponding Chinese words and their relative positions in the Chinese tree. This would enable the decoder to capture syntactic reordering without requiring trees to be isomorphic or even projective. It would also be interesting to apply these models to target languages that have free word order, which would presumably benefit more from the flexibility of non-projective dependency models. Acknowledgements The authors wish to thank the anonymous reviewers for their helpful comments on an earlier draft of this paper, and Daniel Cer for his implementation of Phrasal, a phrase-based decoder similar to Moses. This paper is based on work funded by the Defense Advanced Research Projects Agency through IBM. The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred. References A. Birch, M. Osborne, and P. Koehn. 2007. CCG supertags in factored statistical machine translation. In Proc. of the Workshop on Statistical Machine Translation, pages 9–16. 780 T. Brants, A. Popat, P. Xu, F. Och, and J. Dean. 2007. Large language models in machine translation. In Proc. of EMNLP-CoNLL, pages 858–867. P. Chang, M. Galley, and C. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proc. of the ACL Workshop on Statistical Machine Translation, pages 224–232. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL, pages 263–270. Y. J. Chu and T. H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396– 1400. K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. H. Daume III. 2007. Frustratingly easy domain adaptation. In Proc. of ACL, pages 256–263. Y. Ding and M. Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proc. of ACL, pages 541–548. J. Edmonds. 1967. Optimum branchings. Research of the National Bureau of Standards, 71B:233–240. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and headautomaton grammars. In Proc. of ACL, pages 457– 464. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. of COLING, pages 340–345. H. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proc. of EMNLP, pages 304–311. L. Georgiadis. 2003. Arborescence optimization problems solvable by Edmonds’ algorithm. Theoretical Computer Science, 301(1-3):427–437. L. Huang and D. Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL, pages 144–151. L. Huang, H. Zhang, and D. Gildea. 2005. Machine translation as lexicalized parsing with hooks. In Proc. of the International Workshop on Parsing Technology, pages 65–73. K. Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, 25(4):607–615. P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of NAACL. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. 
Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL, Demonstration Session. D. Marcu, W. Wang, A. Echihabi, and K. Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. of EMNLP, pages 44–52. R. McDonald, K. Crammer, and F. Pereira. 2005a. Online large-margin training of dependency parsers. In Proc. of ACL, pages 91–98. R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. of HLT-EMNLP, pages 523–530. J. Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proc. of the International Workshop on Parsing Technologies (IWPT 03), pages 149–160. F. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. F. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev. 2004. A smorgasbord of features for statistical machine translation. In Proceedings of HLT-NAACL. F. Och. 2003. Minimum error rate training for statistical machine translation. In Proc. of ACL. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. S. Petrov, A. Haghighi, and D. Klein. 2008. Coarseto-fine syntactic machine translation using language projections. In Proc. of EMNLP, pages 108–116. C. Quirk, A. Menezes, and C. Cherry. 2005. Dependency treelet translation: syntactically informed phrasal SMT. In Proc. of ACL, pages 271–279. A. Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proc. of EMNLP. S. Riezler and J. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for MT. In Proc. of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 57–64. L. Shen, J. Xu, and R. Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proc. of ACL, pages 577–585. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proc. of AMTA, pages 223–231. A. Stolcke. 2002. SRILM – an extensible language modeling toolkit. In Proc. Intl. Conf. on Spoken Language Processing (ICSLP–2002). R. Tarjan. 1977. Finding optimum branchings. Networks, 7:25–35. K. Toutanova, D. Klein, C. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc. of NAACL, pages 173–180. D. Vadas and J. Curran. 2007. Adding noun phrase structure to the Penn treebank. In Proc. of ACL, pages 240–247. C. Wang, M. Collins, and P. Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proc. of EMNLP-CoNLL, pages 737–745. D. Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proc. of ACL. H. Zhang and D. Gildea. 2008. Efficient multi-pass decoding for synchronous context free grammars. In Proc. of ACL, pages 209–217. 781
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 782–790, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Gibbs Sampler for Phrasal Synchronous Grammar Induction Phil Blunsom∗ [email protected] Chris Dyer† [email protected] Trevor Cohn∗ [email protected] Miles Osborne∗ [email protected] ∗Department of Informatics University of Edinburgh Edinburgh, EH8 9AB, UK †Department of Linguistics University of Maryland College Park, MD 20742, USA Abstract We present a phrasal synchronous grammar model of translational equivalence. Unlike previous approaches, we do not resort to heuristics or constraints from a word-alignment model, but instead directly induce a synchronous grammar from parallel sentence-aligned corpora. We use a hierarchical Bayesian prior to bias towards compact grammars with small translation units. Inference is performed using a novel Gibbs sampler over synchronous derivations. This sampler side-steps the intractability issues of previous models which required inference over derivation forests. Instead each sampling iteration is highly efficient, allowing the model to be applied to larger translation corpora than previous approaches. 1 Introduction The field of machine translation has seen many advances in recent years, most notably the shift from word-based (Brown et al., 1993) to phrasebased models which use token n-grams as translation units (Koehn et al., 2003). Although very few researchers use word-based models for translation per se, such models are still widely used in the training of phrase-based models. These wordbased models are used to find the latent wordalignments between bilingual sentence pairs, from which a weighted string transducer can be induced (either finite state (Koehn et al., 2003) or synchronous context free grammar (Chiang, 2007)). Although wide-spread, the disconnect between the translation model and the alignment model is artificial and clearly undesirable. Word-based models are incapable of learning translational equivalences between non-compositional phrasal units, while the algorithms used for inducing weighted transducers from word-alignments are based on heuristics with little theoretical justification. A model which can fulfil both roles would address both the practical and theoretical short-comings of the machine translation pipeline. The machine translation literature is littered with various attempts to learn a phrase-based string transducer directly from aligned sentence pairs, doing away with the separate word alignment step (Marcu and Wong, 2002; Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008). Unfortunately none of these approaches resulted in an unqualified success, due largely to intractable estimation. Large training sets with hundreds of thousands of sentence pairs are common in machine translation, leading to a parameter space of billions or even trillions of possible bilingual phrase-pairs. Moreover, the inference procedure for each sentence pair is non-trivial, proving NP-complete for learning phrase based models (DeNero and Klein, 2008) or a high order polynomial (O(|f|3|e|3))1 for a sub-class of weighted synchronous context free grammars (Wu, 1997). Consequently, for such models both the parameterisation and approximate inference techniques are fundamental to their success. In this paper we present a novel SCFG translation model using a non-parametric Bayesian formulation. 
The model includes priors to impose a bias towards small grammars with few rules, each of which is as simple as possible (e.g., terminal productions consisting of short phrase pairs). This explicitly avoids the degenerate solutions of maximum likelihood estimation (DeNero et al., 2006), without resort to the heuristic estimator of Koehn et al. (2003). We develop a novel Gibbs sampler to perform inference over the latent synchronous derivation trees for our training instances. The sampler reasons over the infinite space of possible translation units without recourse to arbitrary restrictions (e.g., constraints drawn from a wordalignment (Cherry and Lin, 2007; Zhang et al., 2008b) or a grammar fixed a priori (Blunsom et al., 1f and e are the input and output sentences respectively. 782 2008)). The sampler performs local edit operations to nodes in the synchronous trees, each of which is very fast, leading to a highly efficient inference technique. This allows us to train the model on large corpora without resort to punitive length limits, unlike previous approaches which were only applied to small data sets with short sentences. This paper is structured as follows: In Section 3 we argue for the use of efficient sampling techniques over SCFGs as an effective solution to the modelling and scaling problems of previous approaches. We describe our Bayesian SCFG model in Section 4 and a Gibbs sampler to explore its posterior. We apply this sampler to build phrase-based and hierarchical translation models and evaluate their performance on small and large corpora. 2 Synchronous context free grammar A synchronous context free grammar (SCFG, (Lewis II and Stearns, 1968)) generalizes contextfree grammars to generate strings concurrently in two (or more) languages. A string pair is generated by applying a series of paired rewrite rules of the form, X →⟨e, f, a⟩, where X is a nonterminal, e and f are strings of terminals and nonterminals and a specifies a one-to-one alignment between non-terminals in e and f. In the context of SMT, by assigning the source and target languages to the respective sides of a probabilistic SCFG it is possible to describe translation as the process of parsing the source sentence, which induces a parallel tree structure and translation in the target language (Chiang, 2007). Figure 1 shows an example derivation for Japanese to English translation using an SCFG. For efficiency reasons we only consider binary or ternary branching rules and don’t allow rules to mix terminals and nonterminals. This allows our sampler to more efficiently explore the space of grammars (Section 4.2), however more expressive grammars would be a straightforward extension of our model. 3 Related work Most machine translation systems adopt the approach of Koehn et al. (2003) for ‘training’ a phrase-based translation model.2 This method starts with a word-alignment, usually the latent state of an unsupervised word-based aligner such 2We include grammar based transducers, such as Chiang (2007) and Marcu et al. (2006), in our definition of phrasebased models. Grammar fragment: X → ⟨X1 X2 X3, X1 X3 X2⟩ X → ⟨John-ga, John⟩ X → ⟨ringo-o, an apple⟩ X → ⟨tabeta, ate⟩ Sample derivation: ⟨S1, S1⟩⇒⟨X2, X2⟩ ⇒ ⟨X3 X4 X5, X3 X5 X4⟩ ⇒ ⟨John-ga X4 X5, John X5 X4⟩ ⇒ ⟨John-ga ringo-o X5, John X5 an apple⟩ ⇒ ⟨John-ga ringo-o tabeta, John ate an apple⟩ Figure 1: A fragment of an SCFG with a ternary non-terminal expansion and three terminal rules. as GIZA++. 
Various heuristics are used to combine source-to-target and target-to-source alignments, after which a further heuristic is used to read off phrase pairs which are ‘consistent’ with the alignment. Although efficient, the sheer number of somewhat arbitrary heuristics makes this approach overly complicated. A number of authors have proposed alternative techniques for directly inducing phrase-based translation models from sentence aligned data. Marcu and Wong (2002) proposed a phrase-based alignment model which suffered from a massive parameter space and intractable inference using expectation maximisation. Taking a different tack, DeNero et al. (2008) presented an interesting new model with inference courtesy of a Gibbs sampler, which was better able to explore the full space of phrase translations. However, the efficacy of this model is unclear due to the small-scale experiments and the short sampling runs. In this work we also propose a Gibbs sampler but apply it to the polynomial space of derivation trees, rather than the exponential space of the DeNero et al. (2008) model. The restrictions imposed by our tree structure make sampling considerably more efficient for long sentences. Following the broad shift in the field from finite state transducers to grammar transducers (Chiang, 2007), recent approaches to phrase-based alignment have used synchronous grammar formalisms permitting polynomial time inference (Wu, 1997; 783 Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008). However this asymptotic time complexity is of high enough order (O(|f|3|e|3)) that inference is impractical for real translation data. Proposed solutions to this problem include imposing sentence length limits, using small training corpora and constraining the search space using a word-alignment model or parse tree. None of these limitations are particularly desirable as they bias inference. As a result phrase-based alignment models are not yet practical for the wider machine translation community. 4 Model Our aim is to induce a grammar from a training set of sentence pairs. We use Bayes’ rule to reason under the posterior over grammars, P(g|x) ∝P(x|g)P(g), where g is a weighted SCFG grammar and x is our training corpus. The likelihood term, P(x|g), is the probability of the training sentence pairs under the grammar, while the prior term, P(g), describes our initial expectations about what consitutes a plausible grammar. Specifically we incorporate priors encoding our preference for a briefer and more succinct grammar, namely that: (a) the grammar should be small, with few rules rewriting each non-terminal; and (b) terminal rules which specify phrasal translation correspondence should be small, with few symbols on their right hand side. Further, Bayesian non-parametrics allow the capacity of the model to grow with the data. Thereby we avoid imposing hard limits on the grammar (and the thorny problem of model selection), but instead allow the model to find a grammar appropriately sized for its training data. 4.1 Non-parametric form Our Bayesian model of SCFG derivations resembles that of Blunsom et al. (2008). Given a grammar, each sentence is generated as follows. Starting with a root non-terminal (z1), rewrite each frontier non-terminal (zi) using a rule chosen from our grammar expanding zi. Repeat until there are no remaining frontier non-terminals. This gives rise to the following derivation probability: p(d) = p(z1) Y ri∈d p(ri|zi) where the derivation is a sequence of rules d = (r1, . . . 
, rn), and zi denotes the root node of ri. We allow two types of rules: non-terminal and terminal expansions. The former rewrites a nonterminal symbol as a string of two or three nonterminals along with an alignment, specifying the corresponding ordering of the child trees in the source and target language. Terminal expansions rewrite a non-terminal as a pair of terminal n-grams, representing a phrasal translation pair, where either but not both may be empty. Each rule in the grammar, ri, is generated from its root symbol, zi, by first choosing a rule type ti ∈{TERM, NON-TERM} from a Bernoulli distribution, ri ∼Bernoulli(γ). We treat γ as a random variable with its own prior, γ ∼Beta(αR, αR) and integrate out the parameters, γ. This results in the following conditional probability for ti: p(ti|r−i, zi, αR) = n−i ti,zi + αR n−i ·,zi + 2αR where n−i ri,zi is the number of times ri has been used to rewrite zi in the set of all other rules, r−i, and n−i ·,zi = P r n−i r,zi is the total count of rewriting zi. The Dirichlet (and thus Beta) distribution are exchangeable, meaning that any permutation of its events are equiprobable. This allows us to reason about each event given previous and subsequent events (i.e., treat each item as the ‘last’.) When ti = NON-TERM, we generate a binary or ternary non-terminal production. The nonterminal sequence and alignment are drawn from (z, a) ∼φN zi and, as before, we define a prior over the parameters, φN zi ∼Dirichlet(αT ), and integrate out φN zi. This results in the conditional probability: p(ri|ti = NON-TERM, r−i, zi, αN) = nN,−i ri,zi + αN nN,−i ·,zi + |N|αN where nN,−i ri,zi is the count of rewriting zi with nonterminal rule ri, nN,−i ·,zi the total count over all nonterminal rules and |N| is the number of unique non-terminal rules. For terminal productions (ti = TERM) we first decide whether to generate a phrase in both languages or in one language only, according to a fixed probability pnull.3 Contingent on this decision, the terminal strings are then drawn from 3To discourage null alignments, we used pnull = 10−10 for this value in the experiments we report below. 784 either φP zi for phrase pairs or φnull for single language phrases. We choose Dirichlet process (DP) priors for these parameters: φP zi ∼DP(αP , P P 1 ) φnull zi ∼DP(αnull, P null 1 ) where the base distributions, P P 1 and P null 1 , range over phrase pairs or monolingual phrases in either language, respectively. The most important choice for our model is the priors on the parameters of these terminal distributions. Phrasal SCFG models are subject to a degenerate maximum likelihood solution in which all probability mass is placed on long, or whole sentence, phrase translations (DeNero et al., 2006). Therefore, careful consideration must be given when specifying the P1 distribution on terminals in order to counter this behavior. To construct a prior over string pairs, first we define the probability of a monolingual string (s): P X 0 (s) = PPoisson(|s|; 1) × 1 V |s| X where the PPoisson(k; 1) is the probability under a Poisson distribution of length k given an expected length of 1, while VX is the vocabulary size of language X. This distribution has a strong bias towards short strings. In particular note that generally a string of length k will be less probable than two of length k 2, a property very useful for finding ‘minimal’ translation units. This contrasts with a geometric distribution in which a string of length k will be more probable than its segmentations. 
We define P null 1 as the string probability of the non-null part of the rule: P null 1 (z →⟨e, f⟩) =  1 2P E 0 (e) if |f| = 0 1 2P F 0 (f) if |e| = 0 The terminal translation phrase pair distribution is a hierarchical Dirichlet Process in which each phrase are independently distributed according to DPs:4 P P 1 (z →⟨e, f⟩) = φE z (e) × φF z (f) φE z ∼DP(αPE, P E 0 ) 4This prior is similar to one used by DeNero et al. (2008), who used the expected table count approximation presented in Goldwater et al. (2006). However, Goldwater et al. (2006) contains two major errors: omitting P0, and using the truncated Taylor series expansion (Antoniak, 1974) which fails for small αP0 values common in these models. In this work we track table counts directly. and φF z is defined analogously. This prior encourages frequent phrases to participate in many different translation pairs. Moreover, as longer strings are likely to be less frequent in the corpus this has a tendency to discourage long translation units. 4.2 A Gibbs sampler for derivations Markov chain Monte Carlo sampling allows us to perform inference for the model described in 4.1 without restricting the infinite space of possible translation rules. To do this we need a method for sampling a derivation for a given sentence pair from p(d|d−). One possible approach would be to first build a packed chart representation of the derivation forest, calculate the inside probabilities of all cells in this chart, and then sample derivations top-down according to their inside probabilities (analogous to monolingual parse tree sampling described in Johnson et al. (2007)). A problem with this approach is that building the derivation forest would take O(|f|3|e|3) time, which would be impractical for long sentences. Instead we develop a collapsed Gibbs sampler (Teh et al., 2006) which draws new samples by making local changes to the derivations used in a previous sample. After a period of burn in, the derivations produced by the sampler will be drawn from the posterior distribution, p(d|x). The advantage of this algorithm is that we only store the current derivation for each training sentence pair (together these constitute the state of the sampler), but never need to reason over derivation forests. By integrating over (collapsing) the parameters we only store counts of rules used in the current sampled set of derivations, thereby avoiding explicitly representing the possibly infinite space of translation pairs. We define two operators for our Gibbs sampler, each of which re-samples local derivation structures. Figures 2 and 4 illustrate the permutations these operators make to derivation trees. The omitted tree structure in these figures denotes the Markov blanket of the operator: the structure which is held constant when enumerating the possible outcomes for an operator. The Split/Join operator iterates through the positions between each source word sampling whether a terminal boundary should exist at that position (Figure 2). If the source position 785 ... ... ... ... ... ... ... ... ... Figure 2: Split/Join sampler applied between a pair of adjacent terminals sharing the same parent. The dashed line indicates the source position being sampled, boxes indicate source and target tokens, while a solid line is a null alignment. ... ... ... ... ... ... ... ... Figure 4: Rule insert/delete sampler. A pair of adjacent nodes in a ternary rule can be re-parented as a binary rule, or vice-versa. 
falls between two existing terminals whose target phrases are adjacent, then any new target segmentation within those target phrases can be sampled, including null alignments. If the two existing terminals also share the same parent, then any possible re-ordering is also a valid outcome, as is removing the terminal boundary to form a single phrase pair. Otherwise, if the visited boundary point falls within an existing terminal, then all target split and re-orderings are possible outcomes. The probability for each of these configurations is evaluated (see Figure 3) from which the new configuration is sampled. While the first operator is theoretically capable of exploring the entire derivation forest (by flattening the tree into a single phrase and then splitting), the series of moves required would be highly improbable. To allow for faster mixing we employ the Insert/Delete operator which adds and deletes the parent non-terminal of a pair of adjacent nodes. This is illustrated in Figure 4. The update equations are analogous to those used for the Split/Join operator in Figure 3. In order for this operator to be effective we need to allow greater than binary branching nodes, otherwise deleting a nodes would require sampling from a much larger set of outcomes. Hence our adoption of a ternary branching grammar. Although such a grammar would be very inefficient for a dynamic programming algorithm, it allows our sampler to permute the internal structure of the trees more easily. 4.3 Hyperparameter Inference Our model is parameterised by a vector of hyperparameters, α = (αR, αN, αP , αPE, αPF , αnull), which control the sparsity assumption over various model parameters. We could optimise each concentration parameter on the training corpus by hand, however this would be quite an onerous task. Instead we perform inference over the hyperparameters following Goldwater and Griffiths (2007) by defining a vague gamma prior on each concentration parameter, αx ∼Gamma(10−4, 104). This hyper-prior is relatively benign, allowing the model to consider a wide range of values for the hyperparameter. We sample a new value for each αx using a log-normal distribution with mean αx and variance 0.3, which is then accepted into the distribution p(αx|d, α−) using the MetropolisHastings algorithm. Unlike the Gibbs updates, this calculation cannot be distributed over a cluster (see Section 4.4) and thus is very costly. Therefore for small corpora we re-sample the hyperparameter after every pass through the corpus, for larger experiments we only re-sample every 20 passes. 4.4 A Distributed approximation While employing a collapsed Gibbs sampler allows us to efficiently perform inference over the 786 p(JOIN) ∝p(ti = TERM|zi, r−) × p(ri = (zi →⟨e, f⟩)|zi, r−) (1) p(SPLIT) ∝p(ti = NON-TERM|zi, r−) × p(ri = (zi →⟨zl, zr, ai⟩)|zi, r−) (2) × p(tl = TERM|ti, zi, r−) × p(rl = (zl →⟨el, fl⟩)|zl, r−) × p(tr = TERM|ti, tl, zi, r−) × p(rr = (zr →⟨er, fr⟩)|zl, r−∪(zl →⟨el, fl⟩)) Figure 3: Gibbs sampling equations for the competing configurations of the Split/Join sampler, shown in Figure 2. Eq. (1) corresponds to the top-left configuration, and (2) the remaining configurations where the choice of el, fl, er, fr and ai specifies the string segmentation and the alignment (monotone or reordered). massive space of possible grammars, it induces dependencies between all the sentences in the training corpus. These dependencies make it difficult to scale our approach to larger corpora by distributing it across a number of processors. 
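To make the hyperparameter inference of Section 4.3 above more concrete, the sketch below shows one Metropolis-Hastings update of a single concentration parameter. This is our reading, not the authors' code: we interpret the log-normal proposal as a random walk in log space with variance 0.3, and the likelihood term is a placeholder standing in for the collapsed probability of the current derivations under the proposed value.

```python
import math
import random

def log_gamma_pdf(x, shape=1e-4, scale=1e4):
    """Log density of the vague Gamma(10^-4, 10^4) prior on a concentration parameter."""
    return ((shape - 1.0) * math.log(x) - x / scale
            - math.lgamma(shape) - shape * math.log(scale))

def mh_resample_concentration(alpha, log_likelihood, sigma2=0.3):
    """One Metropolis-Hastings step for alpha_x.

    `log_likelihood(alpha)` must return log p(d | alpha) under the current
    derivations (the terms of the collapsed model that depend on this alpha);
    it is a placeholder here, supplied by the caller.
    """
    sigma = math.sqrt(sigma2)
    proposal = math.exp(random.gauss(math.log(alpha), sigma))  # log-normal walk
    log_accept = (log_likelihood(proposal) + log_gamma_pdf(proposal)
                  - log_likelihood(alpha) - log_gamma_pdf(alpha)
                  # Hastings correction for the asymmetric log-normal proposal:
                  + math.log(proposal) - math.log(alpha))
    if log_accept >= 0 or random.random() < math.exp(log_accept):
        return proposal
    return alpha

# Toy usage with a stand-in likelihood that prefers alpha near 2.
toy_log_likelihood = lambda a: -(a - 2.0) ** 2
alpha = 1.0
for _ in range(100):
    alpha = mh_resample_concentration(alpha, toy_log_likelihood)
print(alpha)
```

Because the Gamma prior is very diffuse, the accepted values are driven almost entirely by the likelihood term, which matches the intent of letting the model choose its own sparsity level.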
Recent work (Newman et al., 2007; Asuncion et al., 2008) suggests that good practical parallel performance can be achieved by having multiple processors independently sample disjoint subsets of the corpus. Each process maintains a set of rule counts for the entire corpus and communicates the changes it has made to its section of the corpus only after sampling every sentence in that section. In this way each process is sampling according to a slightly ‘out-of-date’ distribution. However, as we confirm in Section 5 the performance of this approximation closely follows the exact collapsed Gibbs sampler. 4.5 Extracting a translation model Although we could use our model directly as a decoder to perform translation, its simple hierarchical reordering parameterisation is too weak to be effective in this mode. Instead we use our sampler to sample a distribution over translation models for state-of-the-art phrase based (Moses) and hierarchical (Hiero) decoders (Koehn et al., 2007; Chiang, 2007). Each sample from our model defines a hierarchical alignment on which we can apply the standard extraction heuristics of these models. By extracting from a sequence of samples we can directly infer a distribution over phrase tables or Hiero grammars. 5 Evaluation Our evaluation aims to determine whether the phrase/SCFG rule distributions created by sampling from the model described in Section 4 impact upon the performance of state-of-theart translation systems. We conduct experiments translating both Chinese (high reordering) and Arabic (low reordering) into English. We use the GIZA++ implementation of IBM Model 4 (Brown et al., 1993; Och and Ney, 2003) coupled with the phrase extraction heuristics of Koehn et al. (2003) and the SCFG rule extraction heuristics of Chiang (2007) as our benchmark. All the SCFG models employ a single X non-terminal, we leave experiments with multiple non-terminals to future work. Our hypothesis is that our grammar based induction of translation units should benefit language pairs with significant reordering more than those with less. While for mostly monotone translation pairs, such as Arabic-English, the benchmark GIZA++-based system is well suited due to its strong monotone bias (the sequential Markov model and diagonal growing heuristic). We conduct experiments on both small and large corpora to allow a range of alignment qualities and also to verify the effectiveness of our distributed approximation of the Bayesian inference. The samplers are initialised with trees created from GIZA++ Model 4 alignments, altered such that they are consistent with our ternary grammar. This is achieved by using the factorisation algorithm of Zhang et al. (2008a) to first create initial trees. Where these factored trees contain nodes with mixed terminals and non-terminals, or more than three non-terminals, we discard alignment points until the node factorises correctly. As the alignments contain many such non-factorisable nodes, these trees are of poor quality. However, all samplers used in these experiments are first ‘burnt-in’ for 1000 full passes through the data. This allows the sampler to diverge from its initialisation condition, and thus gives us confidence that subsequent samples will be drawn from the posterior. An expectation over phrase tables and Hiero grammars is built from every 50th sample after the burn-in, up until the 1500th sample. We evaluate the translation models using IBM BLEU (Papineni et al., 2001). Table 1 lists the statistics of the corpora used in these experiments. 
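Before the corpora statistics, here is a minimal sketch of the stale-counts-then-merge bookkeeping behind the distributed approximation of Section 4.4. It is illustrative only: the Gibbs moves over each shard are omitted (the placeholder returns an empty delta), and the sequential loop stands in for what would be separate worker processes.

```python
import copy

def gibbs_pass_over_shard(shard, stale_counts):
    """Placeholder for one pass of the Split/Join and Insert/Delete operators
    over the sentence pairs in `shard`, sampling against `stale_counts`.
    Returns the net change in rule counts made by this worker, e.g.
    {rule: +2, other_rule: -1}. The sampling itself is omitted here."""
    return {}

def distributed_pass(shards, global_counts):
    """Approximate distributed Gibbs sampling: each worker samples its shard
    against a stale snapshot of the global rule counts; the deltas are only
    merged at the end of the pass (cf. Newman et al., 2007)."""
    snapshots = [copy.deepcopy(global_counts) for _ in shards]
    deltas = [gibbs_pass_over_shard(shard, snap)
              for shard, snap in zip(shards, snapshots)]
    for delta in deltas:                       # synchronisation barrier
        for rule, change in delta.items():
            global_counts[rule] = global_counts.get(rule, 0) + change
    return global_counts

# In practice each worker runs in its own process; the loop above only shows
# the bookkeeping that makes every worker sample from a slightly stale model.
counts = distributed_pass(shards=[["pair 1"], ["pair 2"]], global_counts={})
```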
Table 1: Corpora statistics.

                     IWSLT               NIST                NIST
                     English←Chinese     English←Chinese     English←Arabic
  Sentences          40k                 300k                290k
  Segs./Words        380k / 340k         11.0M / 8.6M        9.3M / 8.5M
  Av. Sent. Len.     9 / 8               36 / 28             32 / 29
  Longest Sent.      75 / 64             80 / 80             80 / 80

Table 2: IWSLT Chinese to English translation.

  System                 Test 05
  Moses (Heuristic)      47.3
  Moses (Bayes SCFG)     49.6
  Hiero (Heuristic)      48.3
  Hiero (Bayes SCFG)     51.8

5.1 Small corpus

Firstly we evaluate models trained on a small Chinese-English corpus using a Gibbs sampler on a single CPU. This corpus consists of transcribed utterances made available for the IWSLT workshop (Eck and Hori, 2005). The sparse counts and high reordering for this corpus mean the GIZA++ model produces very poor alignments. Table 2 shows the results for the benchmark Moses and Hiero systems on this corpus using both the heuristic phrase estimation and our proposed Bayesian SCFG model. We can see that our model has a slight advantage. When we look at the grammars extracted by the two models we note that the SCFG model creates considerably more translation rules. Normally this would suggest the alignments of the SCFG model are a lot sparser (more unaligned tokens) than those of the heuristic; however, this is not the case. The projected SCFG derivations actually produce more alignment points. However, these alignments are much more locally consistent, containing fewer spurious off-diagonal alignments than the heuristic (see Figure 5), and thus produce far more valid phrases/rules.

5.2 Larger corpora

We now test our model's performance on a larger corpus, representing a realistic SMT experiment with millions of words and long sentences. The Chinese-English training data consists of the FBIS corpus (LDC2003E14) and the first 100k sentences from the Sinorama corpus (LDC2005E47). The Arabic-English training data consists of the eTIRR corpus (LDC2004E72), the Arabic news corpus (LDC2004T17), the Ummah corpus (LDC2004T18), and the sentences with confidence c > 0.995 in the ISI automatically extracted web parallel corpus (LDC2006T02). The Chinese text was segmented with a CRF-based Chinese segmenter optimized for MT (Chang et al., 2008). The Arabic text was preprocessed according to the D2 scheme of Habash and Sadat (2006), which was identified as optimal for corpora this size. The parameters of the NIST systems were tuned using Och's algorithm to maximize BLEU on the MT02 test set (Och, 2003).

Figure 6: The posterior for the single CPU sampler and the distributed approximation are roughly equivalent over a sampling run (negative log-posterior plotted against the number of sampling passes, for the single (exact) and distributed samplers).

To evaluate whether the approximate distributed inference algorithm described in Section 4.4 is effective, we compare the posterior probability of the training corpus when using a single machine, and when the inference is distributed on an eight-core machine. Figure 6 plots the mean posterior and standard error for five independent runs for each scenario. Both sets of runs performed hyperparameter inference every twenty passes through the data. It is clear from the training curves that the distributed approximation tracks the corpus probability of the correct sampler sufficiently closely.
This concurs with the findings of Newman et al. (2007), who also observed very little empirical difference between the sampler and its distributed approximation.

Table 3: NIST Chinese to English translation.

  System                 MT03    MT04    MT05
  Moses (Heuristic)      26.2    30.0    25.3
  Moses (Bayes SCFG)     26.4    30.2    25.8
  Hiero (Heuristic)      26.4    30.8    25.4
  Hiero (Bayes SCFG)     26.7    30.9    26.0

Table 4: NIST Arabic to English translation.

  System                 MT03    MT04    MT05
  Moses (Heuristic)      48.5    43.9    49.2
  Moses (Bayes SCFG)     48.5    43.5    48.7
  Hiero (Heuristic)      48.1    43.5    48.4
  Hiero (Bayes SCFG)     48.4    43.4    47.7

Tables 3 and 4 show the results on the two NIST corpora when running the distributed sampler on a single 8-core machine (producing the 1.5K samples for each experiment took approximately one day). These scores tally with our initial hypothesis: that the hierarchical structure of our model suits languages that exhibit less monotone reordering.

Figure 5: Alignment example for the headline 权利 与 义务 平衡 是 世贸 组织 的 重要 特点 / 'balance of rights and obligations an important wto characteristic', aligned by (a) Giza++ and (b) Gibbs. The synchronous tree structure is shown for (b) using brackets to indicate constituent spans; these are omitted for single-token constituents. The right alignment is roughly correct, except that 'of' and 'an' should be left unaligned (是 'to be' is missing from the English translation).

Figure 5 shows the projected alignment of a headline from the thousandth sample on the NIST Chinese data set. The effect of the grammar-based alignment can clearly be seen. Where the combination of GIZA++ and the heuristics creates outlier alignments that impede rule extraction, the SCFG imposes a more rigid hierarchical structure on the alignments. We hypothesise that this property may be particularly useful for syntactic translation models, which often have difficulty with inconsistent word alignments not corresponding to syntactic structure. The combined evidence of the ability of our Gibbs sampler to improve the posterior likelihood (Figure 6) and our translation experiments demonstrates that we have developed a scalable and effective method for performing inference over phrasal SCFG, without compromising the strong theoretical underpinnings of our model.

6 Discussion and Conclusion

We have presented a Bayesian model of SCFG induction capable of capturing phrasal units of translational equivalence. Our novel Gibbs sampler over synchronous derivation trees can efficiently draw samples from the posterior, overcoming the limitations of previous models when dealing with long sentences. This avoids explicitly representing the full derivation forest required by dynamic programming approaches, and thus we are able to perform inference without resorting to heuristic restrictions on the model. Initial experiments suggest that this model performs well on languages for which the monotone bias of existing alignment and heuristic phrase extraction approaches fails. These results open the way for the development of more sophisticated models employing grammars capable of capturing a wide range of translation phenomena. In future we envision it will be possible to use the techniques developed here to directly induce grammars which match state-of-the-art decoders, such as Hiero grammars or tree substitution grammars of the form used by Galley et al. (2004).
789 Acknowledgements The authors acknowledge the support of the EPSRC (Blunsom & Osborne, grant EP/D074959/1; Cohn, grant GR/T04557/01) and the GALE program of the Defense Advanced Research Projects Agency, Contract No. HR001106-2-001 (Dyer). References C. E. Antoniak. 1974. Mixtures of dirichlet processes with applications to bayesian nonparametric problems. The Annals of Statistics, 2(6):1152–1174. A. Asuncion, P. Smyth, M. Welling. 2008. Asynchronous distributed learning of topic models. In NIPS. MIT Press. P. Blunsom, T. Cohn, M. Osborne. 2008. Bayesian synchronous grammar induction. In Proceedings of NIPS 21, Vancouver, Canada. P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. P.-C. Chang, D. Jurafsky, C. D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proc. of the Third Workshop on Machine Translation, Prague, Czech Republic. C. Cherry, D. Lin. 2007. Inversion transduction grammar for joint phrasal translation modeling. In Proc. of the HLTNAACL Workshop on Syntax and Structure in Statistical Translation (SSST 2007), Rochester, USA. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. J. DeNero, D. Klein. 2008. The complexity of phrase alignment problems. In Proceedings of ACL-08: HLT, Short Papers, 25–28, Columbus, Ohio. Association for Computational Linguistics. J. DeNero, D. Gillick, J. Zhang, D. Klein. 2006. Why generative phrase models underperform surface heuristics. In Proc. of the HLT-NAACL 2006 Workshop on Statistical Machine Translation, 31–38, New York City. J. DeNero, A. Bouchard-Cˆot´e, D. Klein. 2008. Sampling alignment structure under a Bayesian translation model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, 314–323, Honolulu, Hawaii. Association for Computational Linguistics. M. Eck, C. Hori. 2005. Overview of the IWSLT 2005 evaluation campaign. In Proc. of the International Workshop on Spoken Language Translation, Pittsburgh. M. Galley, M. Hopkins, K. Knight, D. Marcu. 2004. What’s in a translation rule? In Proc. of the 4th International Conference on Human Language Technology Research and 5th Annual Meeting of the NAACL (HLT-NAACL 2004), Boston, USA. S. Goldwater, T. Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Proc. of the 45th Annual Meeting of the ACL (ACL-2007), 744–751, Prague, Czech Republic. S. Goldwater, T. Griffiths, M. Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proc. of the 44th Annual Meeting of the ACL and 21st International Conference on Computational Linguistics (COLING/ACL-2006), Sydney. N. Habash, F. Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In Proc. of the 6th International Conference on Human Language Technology Research and 7th Annual Meeting of the NAACL (HLT-NAACL 2006), New York City. Association for Computational Linguistics. M. Johnson, T. Griffiths, S. Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proc. of the 7th International Conference on Human Language Technology Research and 8th Annual Meeting of the NAACL (HLT-NAACL 2007), 139–146, Rochester, New York. P. Koehn, F. J. Och, D. Marcu. 2003. Statistical phrasebased translation. In Proc. 
of the 3rd International Conference on Human Language Technology Research and 4th Annual Meeting of the NAACL (HLT-NAACL 2003), 81–88, Edmonton, Canada. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of the 45th Annual Meeting of the ACL (ACL-2007), Prague. P. M. Lewis II, R. E. Stearns. 1968. Syntax-directed transduction. J. ACM, 15(3):465–488. D. Marcu, W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP-2002), 133–139, Philadelphia. Association for Computational Linguistics. D. Marcu, W. Wang, A. Echihabi, K. Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP2006), 44–52, Sydney, Australia. D. Newman, A. Asuncion, P. Smyth, M. Welling. 2007. Distributed inference for latent dirichlet allocation. In NIPS. MIT Press. F. J. Och, H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–52. F. J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the 41st Annual Meeting of the ACL (ACL-2003), 160–167, Sapporo, Japan. K. Papineni, S. Roukos, T. Ward, W. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation, 2001. Y. W. Teh, M. I. Jordan, M. J. Beal, D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. H. Zhang, D. Gildea, D. Chiang. 2008a. Extracting synchronous grammar rules from word-level alignments in linear time. In Proc. of the 22th International Conference on Computational Linguistics (COLING-2008), 1081–1088, Manchester, UK. H. Zhang, C. Quirk, R. C. Moore, D. Gildea. 2008b. Bayesian learning of non-compositional phrases with synchronous parsing. In Proc. of the 46th Annual Conference of the Association for Computational Linguistics: Human Language Technologies (ACL-08:HLT), 97–105, Columbus, Ohio. 790
2009
88
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 791–799, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Source-Language Entailment Modeling for Translating Unknown Terms Shachar Mirkin§, Lucia Specia†, Nicola Cancedda†, Ido Dagan§, Marc Dymetman†, Idan Szpektor§ § Computer Science Department, Bar-Ilan University † Xerox Research Centre Europe {mirkins,dagan,szpekti}@cs.biu.ac.il {lucia.specia,nicola.cancedda,marc.dymetman}@xrce.xerox.com Abstract This paper addresses the task of handling unknown terms in SMT. We propose using source-language monolingual models and resources to paraphrase the source text prior to translation. We further present a conceptual extension to prior work by allowing translations of entailed texts rather than paraphrases only. A method for performing this process efficiently is presented and applied to some 2500 sentences with unknown terms. Our experiments show that the proposed approach substantially increases the number of properly translated texts. 1 Introduction Machine Translation systems frequently encounter terms they are not able to translate due to some missing knowledge. For instance, a Statistical Machine Translation (SMT) system translating the sentence “Cisco filed a lawsuit against Apple for patent violation” may lack words like filed and lawsuit in its phrase table. The problem is especially severe for languages for which parallel corpora are scarce, or in the common scenario when the SMT system is used to translate texts of a domain different from the one it was trained on. A previously suggested solution (CallisonBurch et al., 2006) is to learn paraphrases of source terms from multilingual (parallel) corpora, and expand the phrase table with these paraphrases1. Such solutions could potentially yield a paraphrased sentence like “Cisco sued Apple for patent violation”, although their dependence on bilingual resources limits their utility. In this paper we propose an approach that consists in directly replacing unknown source terms, 1As common in the literature, we use the term paraphrases to refer to texts of equivalent meaning, of any length from single words (synonyms) up to complete sentences. using source-language resources and models in order to achieve two goals. The first goal is coverage increase. The availability of bilingual corpora, from which paraphrases can be learnt, is in many cases limited. On the other hand, monolingual resources and methods for extracting paraphrases from monolingual corpora are more readily available. These include manually constructed resources, such as WordNet (Fellbaum, 1998), and automatic methods for paraphrases acquisition, such as DIRT (Lin and Pantel, 2001). However, such resources have not been applied yet to the problem of substituting unknown terms in SMT. We suggest that by using such monolingual resources we could provide paraphrases for a larger number of texts with unknown terms, thus increasing the overall coverage of the SMT system, i.e. the number of texts it properly translates. Even with larger paraphrase resources, we may encounter texts in which not all unknown terms are successfully handled through paraphrasing, which often results in poor translations (see Section 2.1). To further increase coverage, we therefore propose to generate and translate texts that convey a somewhat more general meaning than the original source text. For example, using such approach, the following text could be generated: “Cisco accused Apple of patent violation”. 
Although less informative than the original, a translation for such texts may be useful. Such non-symmetric relationships (as between filed a lawsuit and accused) are difficult to learn from parallel corpora and therefore monolingual resources are more appropriate for this purpose. The second goal we wish to accomplish by employing source-language resources is to rank the alternative generated texts. This goal can be achieved by using context-models on the source language prior to translation. This has two advantages. First, the ranking allows us to prune some 791 candidates before supplying them to the translation engine, thus improving translation efficiency. Second, the ranking may be combined with target language information in order to choose the best translation, thus improving translation quality. We position the problem of generating alternative texts for translation within the Textual Entailment (TE) framework (Giampiccolo et al., 2007). TE provides a generic way for handling language variability, identifying when the meaning of one text is entailed by the other (i.e. the meaning of the entailed text can be inferred from the meaning of the entailing one). When the meanings of two texts are equivalent (paraphrase), entailment is mutual. Typically, a more general version of a certain text is entailed by it. Hence, through TE we can formalize the generation of both equivalent and more general texts for the source text. When possible, a paraphrase is used. Otherwise, an alternative text whose meaning is entailed by the original source is generated and translated. We assess our approach by applying an SMT system to a text domain that is different from the one used to train the system. We use WordNet as a source language resource for entailment relationships and several common statistical contextmodels for selecting the best generated texts to be sent to translation. We show that the use of source language resources, and in particular the extension to non-symmetric textual entailment relationships, is useful for substantially increasing the amount of texts that are properly translated. This increase is observed relative to both using paraphrases produced by the same resource (WordNet) and using paraphrases produced from multilingual parallel corpora. We demonstrate that by using simple context-models on the source, efficiency can be improved, while translation quality is maintained. We believe that with the use of more sophisticated context-models further quality improvement can be achieved. 2 Background 2.1 Unknown Terms A very common problem faced by machine translation systems is the need to translate terms (words or multi-word expressions) that are not found in the system’s lexicon or phrase table. The reasons for such unknown terms in SMT systems include scarcity of training material and the application of the system to text domains that differ from the ones used for training. In SMT, when unknown terms are found in the source text, the systems usually omit or copy them literally into the target. Though copying the source words can be of some help to the reader if the unknown word has a cognate in the target language, this will not happen in the most general scenario where, for instance, languages use different scripts. In addition, the presence of a single unknown term often affects the translation of wider portions of text, inducing errors in both lexical selection and ordering. 
This phenomenon is demonstrated in the following sentences, where the translation of the English sentence (1) is acceptable only when the unknown word (in bold) is replaced with a translatable paraphrase (3): 1. “..., despite bearing the heavy burden of the unemployed 10% or more of the labor force.” 2. “..., malgr´e la lourde charge de compte le 10% ou plus de chˆomeurs labor la force .” 3. “..., malgr´e la lourde charge des chˆomeurs de 10% ou plus de la force du travail.” Several approaches have been proposed to deal with unknown terms in SMT systems, rather than omitting or copying the terms. For example, (Eck et al., 2008) replace the unknown terms in the source text by their definition in a monolingual dictionary, which can be useful for gisting. To translate across languages with different alphabets approaches such as (Knight and Graehl, 1997; Habash, 2008) use transliteration techniques to tackle proper nouns and technical terms. For translation from highly inflected languages, certain approaches rely on some form of lexical approximation or morphological analysis (Koehn and Knight, 2003; Yang and Kirchhoff, 2006; Langlais and Patry, 2007; Arora et al., 2008). Although these strategies yield gain in coverage and translation quality, they only account for unknown terms that should be transliterated or are variations of known ones. 2.2 Paraphrasing in MT A recent strategy to broadly deal with the problem of unknown terms is to paraphrase the source text with terms whose translation is known to the system, using paraphrases learnt from multilingual corpora, typically involving at least one “pivot” language different from the target language of immediate interest (Callison-Burch et 792 al., 2006; Cohn and Lapata, 2007; Zhao et al., 2008; Callison-Burch, 2008; Guzm´an and Garrido, 2008). The procedure to extract paraphrases in these approaches is similar to standard phrase extraction in SMT systems, and therefore a large amount of additional parallel corpus is required. Moreover, as discussed in Section 5, when unknown texts are not from the same domain as the SMT training corpus, it is likely that paraphrases found through such methods will yield misleading translations. Bond et al. (2008) use grammars to paraphrase the whole source sentence, covering aspects like word order and minor lexical variations (tenses etc.), but not content words. The paraphrases are added to the source side of the corpus and the corresponding target sentences are duplicated. This, however, may yield distorted probability estimates in the phrase table, since these were not computed from parallel data. The main use of monolingual paraphrases in MT to date has been for evaluation. For example, (Kauchak and Barzilay, 2006) paraphrase references to make them closer to the system translation in order to obtain more reliable results when using automatic evaluation metrics like BLEU (Papineni et al., 2002). 2.3 Textual Entailment and Entailment Rules Textual Entailment (TE) has recently become a prominent paradigm for modeling semantic inference, capturing the needs of a broad range of text understanding applications (Giampiccolo et al., 2007). Yet, its application to SMT has been so far limited to MT evaluation (Pado et al., 2009). TE defines a directional relation between two texts, where the meaning of the entailed text (hypothesis, h) can be inferred from the meaning of the entailing text, t. 
Under this paradigm, paraphrases are a special case of the entailment relation, when the relation is symmetric (the texts entail each other). Otherwise, we say that one text directionally entails the other. A common practice for proving (or generating) h from t is to apply entailment rules to t. An entailment rule, denoted LHS ⇒RHS, specifies an entailment relation between two text fragments (the Left- and Right- Hand Sides), possibly with variables (e.g. build X in Y ⇒X is completed in Y ). A paraphrasing rule is denoted with ⇔. When a rule is applied to a text, a new text is inferred, where the matched LHS is replaced with the RHS. For example, the rule skyscraper ⇒building is applied to “The world’s tallest skyscraper was completed in Taiwan” to infer “The world’s tallest building was completed in Taiwan”. In this work, we employ lexical entailment rules, i.e. rules without variables. Various resources for lexical rules are available, and the prominent one is WordNet (Fellbaum, 1998), which has been used in virtually all TE systems (Giampiccolo et al., 2007). Typically, a rule application is valid only under specific contexts. For example, mouse ⇒rodent should not be applied to “Use the mouse to mark your answers”. Context-models can be exploited to validate the application of a rule to a text. In such models, an explicit Word Sense Disambiguation (WSD) is not necessarily required; rather, an implicit sense-match is sought after (Dagan et al., 2006). Within the scope of our paper, rule application is handled similarly to Lexical Substitution (McCarthy and Navigli, 2007), considering the contextual relationship between the text and the rule. However, in general, entailment rule application addresses other aspects of context matching as well (Szpektor et al., 2008). 3 Textual Entailment for Statistical Machine Translation Previous solutions for handling unknown terms in a source text s augment the SMT system’s phrase table based on multilingual corpora. This allows indirectly paraphrasing s, when the SMT system chooses to use a paraphrase included in the table and produces a translation with the corresponding target phrase for the unknown term. We propose using monolingual paraphrasing methods and resources for this task to obtain a more extensive set of rules for paraphrasing the source. These rules are then applied to s directly to produce alternative versions of the source text prior to the translation step. Moreover, further coverage increase can be achieved by employing directional entailment rules, when paraphrasing is not possible, to generate more general texts for translation. Our approach, based on the textual entailment framework, considers the newly generated texts as entailed from the original one. Monolingual semantic resources such as WordNet can provide entailment rules required for both these symmetric and asymmetric entailment relations. 793 Input: A text t with one or more unknown terms; a monolingual resource of entailment rules; k - maximal number of source alternatives to produce Output: A translation of either (in order of preference): a paraphrase of t OR a text entailed by t OR t itself 1. For each unknown term - fetch entailment rules: (a) Fetch rules for paraphrasing; disregard rules whose RHS is not in the phrase table (b) If the set of rules is empty: fetch directional entailment rules; disregard rules whose RHS is not in the phrase table 2. Apply a context-model to compute a score for each rule application 3. 
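As an illustration of how such lexical rules might be fetched in practice, the sketch below uses NLTK's WordNet interface (our choice of tooling; the paper only specifies WordNet 3.0) to collect synonym-based paraphrasing rules and, when none survives a phrase-table filter, hypernym-based directional rules. The phrase-table lookup is a stand-in callable, and the morphological re-inflection of the RHS performed by the full method is omitted.

```python
# Requires: pip install nltk, plus the WordNet data (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def candidate_rules(unknown_term, pos, in_phrase_table):
    """Fetch lexical entailment rules LHS => RHS for an unknown term.
    Synonyms give paraphrasing rules; if none passes the phrase-table
    filter, hypernyms give directional (more general) rules."""
    synsets = wn.synsets(unknown_term, pos=pos)
    synonyms = {lemma.name().replace("_", " ")
                for s in synsets for lemma in s.lemmas()} - {unknown_term}
    rules = [(unknown_term, rhs, "paraphrase")
             for rhs in synonyms if in_phrase_table(rhs)]
    if not rules:
        hypernyms = {lemma.name().replace("_", " ")
                     for s in synsets for h in s.hypernyms()
                     for lemma in h.lemmas()}
        rules = [(unknown_term, rhs, "entailment")
                 for rhs in hypernyms if in_phrase_table(rhs)]
    rules.append((unknown_term, unknown_term, "identity"))  # keep the term as-is
    return rules

# Toy usage with a stand-in phrase table: recovers rules like skyscraper => building.
known = {"building", "edifice", "structure"}
print(candidate_rules("skyscraper", wn.NOUN, lambda w: w in known))
```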
Compute total source score for each entailed text as a combination of individual rule scores 4. Generate and translate the top-k entailed texts 5. If k > 1 (a) Apply target model to score the translation (b) Compute final source-target score 6. Pick highest scoring translation Figure 1: Scheme for handling unknown terms by using monolingual resources through textual entailment Through the process of applying entailment rules to the source text, multiple alternatives of entailed texts are generated. To rank the candidate texts we employ monolingual context-models to provide scores for rule applications over the source sentence. This can be used to (a) directly select the text with the highest score, which can then be translated, or (b) to select a subset of top candidates to be translated, which will then be ranked using the target language information as well. This pruning reduces the load of the SMT system, and allows for potential improvements in translation quality by considering both source- and target-language information. The general scheme through which we achieve these goals, which can be implemented using different context-models and scoring techniques, is detailed in Figure 1. Details of our concrete implementation are given in Section 4. Preliminary analysis confirmed (as expected) that readers prefer translations of paraphrases, when available, over translations of directional entailments. This consideration is therefore taken into account in the proposed method. The input is a text unit to be translated, such as a sentence or paragraph, with one or more unknown terms. For each unknown term we first fetch a list of candidate rules for paraphrasing (e.g. synonyms), where the unknown term is the LHS. For example, if our unknown term is dodge, a possible candidate might be dodge ⇔circumvent. We inflect the RHS to keep the original morphological information of the unknown term and filter out rules where the inflected RHS does not appear in the phrase table (step 1a in Figure 1). When no applicable rules for paraphrasing are available (1b), we fetch directional entailment rules (e.g. hypernymy rules such as dodge ⇒ avoid), and filter them in the same way as for paraphrasing rules. To each set of rules for a given unknown term we add the “identity-rule”, to allow leaving the unknown term unchanged, the correct choice in cases of proper names, for example. Next, we apply a context-model to compute an applicability score of each rule to the source text (step 2). An entailed text’s total score is the combination (e.g. product, see Section 4) of the scores of the rules used to produce it (3). A set of the top-k entailed texts is then generated and sent for translation (4). If more than one alternative is produced by the source model (and k > 1), a target model is applied on the selected set of translated texts (5a). The combined source-target model score is a combination of the scores of the source and target models (5b). The final translation is selected to be the one that yields the highest combined sourcetarget score (6). Note that setting k = 1 is equivalent to using the source-language model alone. Our algorithm validates the application of the entailment rules at two stages – before and after translation, through context-models applied at each end. As the experiments will show in Section 4, a large number of possible combinations of entailment rules is a common scenario, and therefore using the source context models to reduce this number plays an important role. 
4 Experimental Setting To assess our approach, we conducted a series of experiments; in each experiment we applied the scheme described in 3, changing only the models being used for scoring the generated and translated texts. The setting of these experiments is described in what follows. SMT data To produce sentences for our experiments, we use Matrax (Simard et al., 2005), a standard phrase-based SMT system, with the exception that it allows gaps in phrases. We use approximately 1M sentence pairs from the English-French 794 Europarl corpus for training, and then translate a test set of 5,859 English sentences from the News corpus into French. Both resources are taken from the shared translation task in WMT-2008 (Callison-Burch et al., 2008). Hence, we compare our method in a setting where the training and test data are from different domains, a common scenario in the practical use of MT systems. Of the 5,859 translated sentences, 2,494 contain unknown terms (considering only sequences with alphabetic symbols), summing up to 4,255 occurrences of unknown terms. 39% of the 2,494 sentences contain more than a single unknown term. Entailment resource We use WordNet 3.0 as a resource for entailment rules. Paraphrases are generated using synonyms. Directionally entailed texts are created using hypernyms, which typically conform with entailment. We do not rely on sense information in WordNet. Hence, any other semantic resource for entailment rules can be utilized. Each sentence is tagged using the OpenNLP POS tagger2. Entailment rules are applied for unknown terms tagged as nouns, verbs, adjectives and adverbs. The use of relations from WordNet results in 1,071 sentences with applicable rules (with phrase table entries) for the unknown terms when using synonyms, and 1,643 when using both synonyms and hypernyms, accounting for 43% and 66% of the test sentences, respectively. The number of alternative sentences generated for each source text varies from 1 to 960 when paraphrasing rules were applied, and reaches very large numbers, up to 89,700 at the “worst case”, when all TE rules are employed, an average of 456 alternatives per sentence. Scoring source texts We test our proposed method using several context-models shown to perform reasonably well in previous work: • FREQ: The first model we use is a contextindependent baseline. A common useful heuristic to pick an entailment rule is to select the candidate with the highest frequency in the corpus (Mccarthy et al., 2004). In this model, a rule’s score is the normalized number of occurrences of its RHS in the training corpus, ignoring the context of the LHS. • LSA: Latent Semantic Analysis (Deerwester et al., 1990) is a well-known method for rep2http://opennlp.sourceforge.net resenting the contextual usage of words based on corpus statistics. We represented each term by a normalized vector of the top 100 SVD dimensions, as described in (Gliozzo, 2005). This model measures the similarity between the sentence words and the RHS in the LSA space. • NB: We implemented the unsupervised Na¨ıve Bayes model described in (Glickman et al., 2006) to estimate the probability that the unknown term entails the RHS in the given context. The estimation is based on corpus co-occurrence statistics of the context words with the RHS. • LMS: This model generates the Language Model probability of the RHS in the source. We use 3-grams probabilities as produced by the SRILM toolkit (Stolcke, 2002). 
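To make two of these scorers concrete, here is a small sketch (ours, not the paper's code) of the context-independent FREQ score and an LSA-style contextual score. The toy counts and two-dimensional vectors are stand-ins for the corpus statistics and the 100-dimensional SVD vectors described above; in the full method each score is normalised to (0, 1] before being combined with the scores of other rule applications.

```python
import math
from collections import Counter

def freq_score(rhs, corpus_counts, max_count):
    """FREQ baseline: normalised corpus frequency of the RHS, ignoring context."""
    return corpus_counts.get(rhs, 0) / max_count if max_count else 0.0

def lsa_score(rhs, context_words, vectors):
    """LSA-style score: mean cosine similarity between the RHS vector and the
    vectors of the sentence's context words."""
    if rhs not in vectors:
        return 0.0
    def cos(u, v):
        nu = math.sqrt(sum(x * x for x in u))
        nv = math.sqrt(sum(x * x for x in v))
        return sum(a * b for a, b in zip(u, v)) / (nu * nv) if nu and nv else 0.0
    sims = [cos(vectors[rhs], vectors[w]) for w in context_words if w in vectors]
    return sum(sims) / len(sims) if sims else 0.0

# Toy usage: scoring the candidate RHS 'avoid' for an unknown verb in context.
counts = Counter({"avoid": 120, "dodge": 3})
print(freq_score("avoid", counts, max_count=120))
vecs = {"avoid": [0.1, 0.9], "tax": [0.2, 0.8], "audit": [0.3, 0.7]}
print(lsa_score("avoid", ["tax", "audit"], vecs))
```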
Finally, as a simple baseline, we generated a random score for each rule application, RAND. The score of each rule application by any of the above models is normalized to the range (0,1]. To combine individual rule applications in a given sentence, we use the product of their scores. The monolingual data used for the models above is the source side of the training parallel corpus. Target-language scores On the target side we used either a standard 3-gram language-model, denoted LMT, or the score assigned by the complete SMT log-linear model, which includes the language model as one of its components (SMT). A pair of a source:target models comprises a complete model for selecting the best translated sentence, where the overall score is the product of the scores of the two models. We also applied several combinations of source models, such as LSA combined with LMS, to take advantage of their complementary strengths. Additionally, we assessed our method with sourceonly models, by setting the number of sentences to be selected by the source model to one (k = 1). 5 Results 5.1 Manual Evaluation To evaluate the translations produced using the various source and target models and the different rule-sets, we rely mostly on manual assessment, since automatic MT evaluation metrics like BLEU do not capture well the type of semantic variations 795 Model Precision (%) Coverage (%) PARAPH. TE PARAPH. TE 1 –:SMT 75.8 73.1 32.5 48.1 2 NB:SMT 75.2 71.5 32.3 47.1 3 LSA:SMT 74.9 72.4 32.1 47.7 4 NB:– 74.7 71.1 32.1 46.8 5 LMS:LMT 73.8 70.2 31.7 46.3 6 FREQ:– 72.5 68.0 31.2 44.8 7 RAND 57.2 63.4 24.6 41.8 Table 1: Translation acceptance when using only paraphrases and when using all entailment rules. “:” indicates which model is applied to the source (left side) and which to the target language (right side). generated in our experiments, particularly at the sentence level. In the manual evaluation, two native speakers of the target language judged whether each translation preserves the meaning of its reference sentence, marking it as acceptable or unacceptable. From the sentences for which rules were applicable, we randomly selected a sample of sentences for each annotator, allowing for some overlapping for agreement analysis. In total, the translations of 1,014 unique source sentences were manually annotated, of which 453 were produced using only hypernyms (no paraphrases were applicable). When a sentence was annotated by both annotators, one annotation was picked randomly. Inter-annotator agreement was measured by the percentage of sentences the annotators agreed on, as well as via the Kappa measure (Cohen, 1960). For different models, the agreement rate varied from 67% to 78% (72% overall), and the Kappa value ranged from 0.34 to 0.55, which is comparable to figures reported for other standard SMT evaluation metrics (Callison-Burch et al., 2008). Translation with TE For each model m, we measured Precisionm, the percentage of acceptable translations out of all sampled translations. Precisionm was measured both when using only paraphrases (PARAPH.) and when using all entailment rules (TE). We also measured Coveragem, the percentage of sentences with acceptable translations, Am, out of all sentences (2,494). As our annotators evaluated only a sample of sentences, Am is estimated as the model’s total number of sentences with applicable rules, Sm, multiplied by the model’s Precision (Sm was 1,071 for paraphrases and 1,643 for entailment rules): Coveragem = Sm·Precisionm 2,494 . 
Table 1 presents the results of several sourcetarget combinations when using only paraphrases and when also using directional entailment rules. When all rules are used, a substantial improvement in coverage is consistently obtained across all models, reaching a relative increase of 50% over paraphrases only, while just a slight decrease in precision is observed (see Section 5.3 for some error analysis). This confirms our hypothesis that directional entailment rules can be very useful for replacing unknown terms. For the combination of source-target models, the value of k is set depending on which rule-set is used. Preliminary analysis showed that k = 5 is sufficient when only paraphrases are used and k = 20 when directional entailment rules are also considered. We measured statistical significance between different models for precision of the TE results according to the Wilcoxon signed ranks test (Wilcoxon, 1945). Models 1-6 in Table 1 are significantly better than the RAND baseline (p < 0.03), and models 1-3 are significantly better than model 6 (p < 0.05). The difference between –:SMT and NB:SMT or LSA:SMT is not statistically significant. The results in Table 1 therefore suggest that taking a source model into account preserves the quality of translation. Furthermore, the quality is maintained even when source models’ selections are restricted to a rather small top-k ranks, at a lower computational cost (for the models combining source and target, like NB:SMT or LSA:SMT). This is particularly relevant for on-demand MT systems, where time is an issue. For such systems, using this source-language based pruning methodology will yield significant performance gains as compared to target-only models. We also evaluated the baseline strategy where unknown terms are omitted from the translation, resulting in 25% precision. Leaving unknown words untranslated also yielded very poor translation quality in an analysis performed on a similar dataset. Comparison to related work We compared our algorithm with an implementation of the algorithm proposed by (Callison-Burch et al., 2006) (see Section 2.2), henceforth CB, using the Spanish side of Europarl as the pivot language. Out of the tested 2,494 sentences with unknown terms, CB found paraphrases for 706 sentences (28.3%), while with any of our models, including 796 Model Precision (%) Coverage (%) Better (%) NB:SMT (TE) 85.3 56.2 72.7 CB 85.3 24.2 12.7 Table 2: Comparison between our top model and the method by Callison-Burch et al. (2006), showing the percentage of times translations were considered acceptable, the model’s coverage and the percentage of times each model scored better than the other (in the 14% remaining cases, both models produced unacceptable translations). NB:SMT, our algorithm found applicable entailment rules for 1,643 sentences (66%). The quality of the CB translations was manually assessed for a sample of 150 sentences. Table 2 presents the precision and coverage on this sample for both CB and NB:SMT, as well as the number of times each model’s translation was preferred by the annotators. While both models achieve equally high precision scores on this sample, the NB:SMT model’s translations were undoubtedly preferred by the annotators, with a considerably higher coverage. With the CB method, given that many of the phrases added to the phrase table are noisy, the global quality of the sentences seem to have been affected, explaining why the judges preferred the NB:SMT translations. 
One reason for the lower coverage of CB is the fact that paraphrases were acquired from a corpus whose domain is different from that of the test sentences. The entailment rules in our models are not limited to paraphrases and are derived from WordNet, which has broader applicability. Hence, utilizing monolingual resources has proven beneficial for the task. 5.2 Automatic MT Evaluation Although automatic MT evaluation metrics are less appropriate for capturing the variations generated by our method, to ensure that there was no degradation in the system-level scores according to such metrics we also measured the models’ performance using BLEU and METEOR (Agarwal and Lavie, 2007). The version of METEOR we used on the target language (French) considers the stems of the words, instead of surface forms only, but does not make use of WordNet synonyms. We evaluated the performance of the top models of Table 1, as well as of a baseline SMT system that left unknown terms untranslated, on the sample of 1,014 manually annotated sentences. As shown in Table 3, all models resulted in improvement with respect to the original sentences (baseModel BLEU (TE) METEOR (TE) –:SMT 15.50 0.1325 NB:SMT 15.37 0.1316 LSA:SMT 15.51 0.1318 NB:– 15.37 0.1311 CB 15.33 0.1299 Baseline SMT 15.29 0.1294 Table 3: Performance of the best models according to automatic MT evaluation metrics at the corpus level. The baseline refers to translation of the text without applying any entailment rules. line). The difference in METEOR scores is statistically significant (p < 0.05) for the three top models against the baseline. The generally low scores may be attributed to the fact that training and test sentences are from different domains. 5.3 Discussion The use of entailed texts produced using our approach clearly improves the quality of translations, as compared to leaving unknown terms untranslated or omitting them altogether. While it is clear that textual entailment is useful for increasing coverage in translation, further research is required to identify the amount of information loss incurred when non-symmetric entailment relations are being used, and thus to identify the cases where such relations are detrimental to translation. Consider, for example, the sentence: “Conventional military models are geared to decapitate something that, in this case, has no head.”. In this sentence, the unknown term was replaced by kill, which results in missing the point originally conveyed in the text. Accordingly, the produced translation does not preserve the meaning of the source, and was considered unacceptable: “Les mod`eles militaires visent `a faire quelque chose que, dans ce cas, n’est pas responsable.”. In other cases, the selected hypernyms were too generic words, such as entity or attribute, which also fail to preserve the sentence’s meaning. On the other hand, when the unknown term was a very specific word, hypernyms played an important role. For example, “Bulgaria is the most sought-after east European real estate target, with its low-cost ski chalets and oceanfront homes”. Here, chalets are replaced by houses or units (depending on the model), providing a translation that would be acceptable by most readers. Other incorrect translations occurred when the unknown term was part of a phrase, for example, troughs replaced with depressions in peaks 797 and troughs, a problem that also strongly affects paraphrasing. 
In another case, movement was the hypernym chosen to replace labor in labor movement, yielding an awkward text for translation. Many of the cases which involved ambiguity were resolved by the applied context-models, and can be further addressed, together with the above mentioned problems, with better source-language context models. We suggest that other types of entailment rules could be useful for the task beyond the straightforward generalization using hypernyms, which was demonstrated in this work. This includes other types of lexical entailment relations, such as holonymy (e.g. Singapore ⇒Southeast Asia) as well as lexical syntactic rules (X cure Y ⇒treat Y with X). Even syntactic rules, such as clause removal, can be recruited for the task: “Obama, the 44th president, declared Monday . . . ” ⇒“Obama declared Monday . . . ”. When the system is unable to translate a term found in the embedded clause, the translation of the less informative sentence may still be acceptable by readers. 6 Conclusions and Future Work In this paper we propose a new entailment-based approach for addressing the problem of unknown terms in machine translation. Applying this approach with lexical entailment rules from WordNet, we show that using monolingual resources and textual entailment relationships allows substantially increasing the quality of translations produced by an SMT system. Our experiments also show that it is possible to perform the process efficiently by relying on source language contextmodels as a filter prior to translation. This pipeline maintains translation quality, as assessed by both human annotators and standard automatic measures. For future work we suggest generating entailed texts with a more extensive set of rules, in particular lexical-syntactic ones. Combining rules from monolingual and bilingual resources seems appealing as well. Developing better context-models to be applied on the source is expected to further improve our method’s performance. Specifically, we suggest taking into account the prior likelihood that a rule is correct as part of the model score. Finally, some researchers have advocated recently the use of shared structures such as parse forests (Mi and Huang, 2008) or word lattices (Dyer et al., 2008) in order to allow a compact representation of alternative inputs to an SMT system. This is an approach that we intend to explore in future work, as a way to efficiently handle the different source language alternatives generated by entailment rules. However, since most current MT systems do not accept such type of inputs, we consider the results on pruning by source-side context models as broadly relevant. Acknowledgments This work was supported in part by the ICT Programme of the European Community, under the PASCAL 2 Network of Excellence, ICT-216886 and The Israel Science Foundation (grant No. 1112/08). We wish to thank Roy Bar-Haim and the anonymous reviewers of this paper for their useful feedback. This publication only reflects the authors’ views. References Abhaya Agarwal and Alon Lavie. 2007. METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments. In Proceedings of WMT-08. Karunesh Arora, Michael Paul, and Eiichiro Sumita. 2008. Translation of Unknown Words in PhraseBased Statistical Machine Translation for Languages of Rich Morphology. In Proceedings of SLTU. Francis Bond, Eric Nichols, Darren Scott Appling, and Michael Paul. 2008. Improving Statistical Machine Translation by Paraphrasing the Training Data. 
In Proceedings of IWSLT. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved Statistical Machine Translation Using Paraphrases. In Proceedings of HLTNAACL. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further Meta-Evaluation of Machine Translation. In Proceedings of WMT. Chris Callison-Burch. 2008. Syntactic Constraints on Paraphrases Extracted from Parallel Corpora. In Proceedings of EMNLP. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1):37–46. Trevor Cohn and Mirella Lapata. 2007. Machine Translation by Triangulation: Making Effective Use of Multi-Parallel Corpora. In Proceedings of ACL. 798 Ido Dagan, Oren Glickman, Alfio Massimiliano Gliozzo, Efrat Marmorshtein, and Carlo Strapparava. 2006. Direct Word Sense Matching for Lexical Substitution. In Proceedings of ACL. Scott Deerwester, S.T. Dumais, G.W. Furnas, T.K. Landauer, and R.A. Harshman. 1990. Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science, 41. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing Word Lattice Translation. In Proceedings of ACL-HLT. Matthias Eck, Stephan Vogel, and Alex Waibel. 2008. Communicating Unknown Words in Machine Translation. In Proceedings of LREC. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The Third PASCAL Recognising Textual Entailment Challenge. In Proceedings of ACL-WTEP Workshop. Oren Glickman, Ido Dagan, Mikaela Keller, Samy Bengio, and Walter Daelemans. 2006. Investigating Lexical Substitution Scoring for Subtitle Generation. In Proceedings of CoNLL. Alfio Massimiliano Gliozzo. 2005. Semantic Domains in Computational Linguistics. Ph.D. thesis, University of Trento. Francisco Guzm´an and Leonardo Garrido. 2008. Translation Paraphrases in Phrase-Based Machine Translation. In Proceedings of CICLing. Nizar Habash. 2008. Four Techniques for Online Handling of Out-of-Vocabulary Words in ArabicEnglish Statistical Machine Translation. In Proceedings of ACL-HLT. David Kauchak and Regina Barzilay. 2006. Paraphrasing for Automatic Evaluation. In Proceedings of HLT-NAACL. Kevin Knight and Jonathan Graehl. 1997. Machine Transliteration. In Proceedings of ACL. Philipp Koehn and Kevin Knight. 2003. Empirical Methods for Compound Splitting. In Proceedings of EACL. Philippe Langlais and Alexandre Patry. 2007. Translating Unknown Words by Analogical Learning. In Proceedings of EMNLP-CoNLL. Dekang Lin and Patrick Pantel. 2001. DIRT – Discovery of Inference Rules from Text. In Proceedings of SIGKDD. Diana McCarthy and Roberto Navigli. 2007. SemEval-2007 Task 10: English Lexical Substitution Task. In Proceedings of SemEval. Diana Mccarthy, Rob Koeling, Julie Weeds, and John Carroll. 2004. Finding Predominant Word Senses in Untagged Text. In Proceedings of ACL. Haitao Mi and Liang Huang. 2008. Forest-based Translation Rule Extraction. In Proceedings of EMNLP. Sebastian Pado, Michel Galley, Daniel Jurafsky, and Christopher D. Manning. 2009. Textual Entailment Features for Machine Translation Evaluation. In Proceedings of WMT. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL. M. Simard, N. Cancedda, B. Cavestro, M. Dymetman, E. Gaussier, C. Goutte, and K. Yamada. 
2005. Translating with Non-contiguous Phrases. In Proceedings of HLT-EMNLP. Andreas Stolcke. 2002. SRILM – An Extensible Language Modeling Toolkit. In Proceedings of ICSLP. Idan Szpektor, Ido Dagan, Roy Bar-Haim, and Jacob Goldberger. 2008. Contextual Preferences. In Proceedings of ACL-HLT. Frank Wilcoxon. 1945. Individual Comparisons by Ranking Methods. Biometrics Bulletin, 1(6):80–83. Mei Yang and Katrin Kirchhoff. 2006. Phrase-Based Backoff Models for Machine Translation of Highly Inflected Languages. In Proceedings of EACL. Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2008. Pivot Approach for Extracting Paraphrase Patterns from Bilingual Corpora. In Proceedings of ACL-HLT. 799
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 73–81, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Unsupervised Multilingual Grammar Induction Benjamin Snyder, Tahira Naseem, and Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {bsnyder, tahira, regina}@csail.mit.edu Abstract We investigate the task of unsupervised constituency parsing from bilingual parallel corpora. Our goal is to use bilingual cues to learn improved parsing models for each language and to evaluate these models on held-out monolingual test data. We formulate a generative Bayesian model which seeks to explain the observed parallel data through a combination of bilingual and monolingual parameters. To this end, we adapt a formalism known as unordered tree alignment to our probabilistic setting. Using this formalism, our model loosely binds parallel trees while allowing language-specific syntactic structure. We perform inference under this model using Markov Chain Monte Carlo and dynamic programming. Applying this model to three parallel corpora (Korean-English, Urdu-English, and Chinese-English) we find substantial performance gains over the CCM model, a strong monolingual baseline. On average, across a variety of testing scenarios, our model achieves an 8.8 absolute gain in F-measure. 1 1 Introduction In this paper we investigate the task of unsupervised constituency parsing when bilingual parallel text is available. Our goal is to improve parsing performance on monolingual test data for each language by using unsupervised bilingual cues at training time. Multilingual learning has been successful for other linguistic induction tasks such as lexicon acquisition, morphological segmentation, and part-of-speech tagging (Genzel, 2005; Snyder and Barzilay, 2008; Snyder et al., 2008; Snyder 1Code and the outputs of our experiments are available at http://groups.csail.mit.edu/rbg/code/multiling induction. et al., 2009). We focus here on the unsupervised induction of unlabeled constituency brackets. This task has been extensively studied in a monolingual setting and has proven to be difficult (Charniak and Carroll, 1992; Klein and Manning, 2002). The key premise of our approach is that ambiguous syntactic structures in one language may correspond to less uncertain structures in the other language. For instance, the English sentence I saw [the student [from MIT]] exhibits the classic problem of PP-attachment ambiguity. However, its Urdu translation, literally glossed as I [[MIT of] student] saw, uses a genitive phrase that may only be attached to the adjacent noun phrase. Knowing the correspondence between these sentences should help us resolve the English ambiguity. One of the main challenges of unsupervised multilingual learning is to exploit cross-lingual patterns discovered in data, while still allowing a wide range of language-specific idiosyncrasies. To this end, we adapt a formalism known as unordered tree alignment (Jiang et al., 1995) to a probabilistic setting. Under this formalism, any two trees can be embedded in an alignment tree. This alignment tree allows arbitrary parts of the two trees to diverge in structure, permitting language-specific grammatical structure to be preserved. Additionally, a computational advantage of this formalism is that the marginalized probability over all possible alignments for any two trees can be efficiently computed with a dynamic program in linear time. 
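To make the formalism concrete, the sketch below shows one possible encoding of an alignment-tree node; the class and field names are illustrative assumptions on our part and are not taken from the authors' implementation. Each node either couples a constituent from each tree or pairs a constituent with the empty label, represented here by None.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AlignNode:
    """A node of an alignment tree embedding two parse trees.

    Each node carries a constituent from the first tree, from the second
    tree, or from both; a missing side (the lambda of the formalism) is
    encoded as None.
    """
    left: Optional[int] = None    # index of a node in the first tree
    right: Optional[int] = None   # index of a node in the second tree
    children: List["AlignNode"] = field(default_factory=list)

    def is_coupled(self) -> bool:
        # True for bilingual node pairs, False for language-specific nodes.
        return self.left is not None and self.right is not None
```

In this encoding, full structural divergence corresponds to an alignment tree with no coupled nodes at all, while two isomorphic trees can be aligned with every node coupled.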
We formulate a generative Bayesian model which seeks to explain the observed parallel data through a combination of bilingual and monolingual parameters. Our model views each pair of sentences as having been generated as follows: First an alignment tree is drawn. Each node in this alignment tree contains either a solitary monolingual constituent or a pair of coupled bilingual constituents. For each solitary mono73 lingual constituent, a sequence of part-of-speech tags is drawn from a language-specific distribution. For each pair of coupled bilingual constituents, a pair of part-of-speech sequences are drawn jointly from a cross-lingual distribution. Word-level alignments are then drawn based on the tree alignment. Finally, parallel sentences are assembled from these generated part-of-speech sequences and word-level alignments. To perform inference under this model, we use a Metropolis-Hastings within-Gibbs sampler. We sample pairs of trees and then compute marginalized probabilities over all possible alignments using dynamic programming. We test the effectiveness of our bilingual grammar induction model on three corpora of parallel text: English-Korean, English-Urdu and EnglishChinese. The model is trained using bilingual data with automatically induced word-level alignments, but is tested on purely monolingual data for each language. In all cases, our model outperforms a state-of-the-art baseline: the Constituent Context Model (CCM) (Klein and Manning, 2002), sometimes by substantial margins. On average, over all the testing scenarios that we studied, our model achieves an absolute increase in F-measure of 8.8 points, and a 19% reduction in error relative to a theoretical upper bound. 2 Related Work The unsupervised grammar induction task has been studied extensively, mostly in a monolingual setting (Charniak and Carroll, 1992; Stolcke and Omohundro, 1994; Klein and Manning, 2002; Seginer, 2007). While PCFGs perform poorly on this task, the CCM model (Klein and Manning, 2002) has achieved large gains in performance and is among the state-of-the-art probabilistic models for unsupervised constituency parsing. We therefore use CCM as our basic model of monolingual syntax. While there has been some previous work on bilingual CFG parsing, it has mainly focused on improving MT systems rather than monolingual parsing accuracy. Research in this direction was pioneered by (Wu, 1997), who developed Inversion Transduction Grammars to capture crosslingual grammar variations such as phrase reorderings. More general formalisms for the same purpose were later developed (Wu and Wong, 1998; Chiang, 2005; Melamed, 2003; Eisner, 2003; Zhang and Gildea, 2005; Blunsom et al., 2008). We know of only one study which evaluates these bilingual grammar formalisms on the task of grammar induction itself (Smith and Smith, 2004). Both our model and even the monolingual CCM baseline yield far higher performance on the same Korean-English corpus. Our approach is closer to the unsupervised bilingual parsing model developed by Kuhn (2004), which aims to improve monolingual performance. Assuming that trees induced over parallel sentences have to exhibit certain structural regularities, Kuhn manually specifies a set of rules for determining when parsing decisions in the two languages are inconsistent with GIZA++ wordlevel alignments. By incorporating these constraints into the EM algorithm he was able to improve performance over a monolingual unsupervised PCFG. 
Still, the performance falls short of state-of-the-art monolingual models such as the CCM. More recently, there has been a body of work attempting to improve parsing performance by exploiting syntactically annotated parallel data. In one strand of this work, annotations are assumed only in a resource-rich language and are projected onto a resource-poor language using the parallel data (Hwa et al., 2005; Xi and Hwa, 2005). In another strand of work, syntactic annotations are assumed on both sides of the parallel data, and a model is trained to exploit the parallel data at test time as well (Smith and Smith, 2004; Burkett and Klein, 2008). In contrast to this work, our goal is to explore the benefits of multilingual grammar induction in a fully unsupervised setting. We finally note a recent paper which uses parameter tying to improve unsupervised dependency parse induction (Cohen and Smith, 2009). While the primary performance gains occur when tying related parameters within a language, some additional benefit is observed through bilingual tying, even in the absence of a parallel corpus. 3 Model We propose an unsupervised Bayesian model for learning bilingual syntactic structure using parallel corpora. Our key premise is that difficult-tolearn syntactic structures of one language may correspond to simpler or less uncertain structures in the other language. We treat the part-of-speech tag sequences of parallel sentences, as well as their 74 (i) (ii) (iii) Figure 1: A pair of trees (i) and two possible alignment trees. In (ii), no empty spaces are inserted, but the order of one of the original tree’s siblings has been reversed. In (iii), only two pairs of nodes have been aligned (indicated by arrows) and many empty spaces inserted. word-level alignments, as observed data. We obtain these word-level alignments using GIZA++ (Och and Ney, 2003). Our model seeks to explain this observed data through a generative process whereby two aligned parse trees are produced jointly. Though they are aligned, arbitrary parts of the two trees are permitted to diverge, accommodating languagespecific grammatical structure. In effect, our model loosely binds the two trees: node-to-node alignments need only be used where repeated bilingual patterns can be discovered in the data. 3.1 Tree Alignments We achieve this loose binding of trees by adapting unordered tree alignment (Jiang et al., 1995) to a probabilistic setting. Under this formalism, any two trees can be aligned using an alignment tree. The alignment tree embeds the original two trees within it: each node is labeled by a pair (x, y), (λ, y), or (x, λ) where x is a node from the first tree, y is a node from the second tree, and λ is an empty space. The individual structure of each tree must be preserved under the embedding with the exception of sibling order (to allow variations in phrase and word order). The flexibility of this formalism can be demonstrated by two extreme cases: (1) an alignment between two trees may actually align none of their individual nodes, instead inserting an empty space λ for each of the original two trees’ nodes. (2) if the original trees are isomorphic to one another, the alignment may match their nodes exactly, without inserting any empty spaces. See Figure 1 for an example. 3.2 Model overview As our basic model of syntactic structure, we adopt the Constituent-Context Model (CCM) of Klein and Manning (2002). 
Under this model, the part-of-speech sequence of each span in a sentence is generated either as a constituent yield — if it is dominated by a node in the tree — or otherwise as a distituent yield. For example, in the bracketed sentence [John/NNP [climbed/VB [the/DT tree/NN]]], the sequence VB DT NN is generated as a constituent yield, since it constitutes a complete bracket in the tree. On the other hand, the sequence VB DT is generated as a distituent, since it does not. Besides these yields, the contexts (two surrounding POS tags) of constituents and distituents are generated as well. In this example, the context of the constituent VB DT NN would be (NNP, #), while the context of the distituent VB DT would be (NNP, NN). The CCM model employs separate multinomial distributions over constituents, distituents, constituent contexts, and distituent contexts. While this model is deficient — each observed subsequence of part-of-speech tags is generated many times over — its performance is far higher than that of unsupervised PCFGs. Under our bilingual model, each pair of sentences is assumed to have been generated jointly in the following way: First, an unlabeled alignment tree is drawn uniformly from the set of all such trees. This alignment tree specifies the structure of each of the two individual trees, as well as the pairs of nodes which are aligned and those which are not aligned (i.e. paired with a λ). For each pair of aligned nodes, a corresponding pair of constituents and contexts are jointly drawn from a bilingual distribution. For unaligned nodes (i.e. nodes paired with a λ in the alignment 75 tree), a single constituent and context are drawn, from language-specific distributions. Distituents and their contexts are also drawn from languagespecific distributions. Finally, word-level alignments are drawn based on the structure of the alignment tree. In the next two sections, we describe our model in more formal detail by specifying the parameters and generative process by which sentences are formed. 3.3 Parameters Our model employs a number of multinomial distributions: • πC i : over constituent yields of language i, • πD i : over distituent yields of language i, • φC i : over constituent contexts of language i, • φD i : over distituent contexts of language i, • ω : over pairs of constituent yields, one from the first language and the other from the second language, • Gzpair : over a finite set of integer values {−m, . . . , −2, −1, 0, 1, 2, . . . , m}, measuring the Giza-score of aligned tree node pairs (see below), • Gznode : over a finite set of integer values {−m, . . . , −2, −1, 0}, measuring the Gizascore of unaligned tree nodes (see below). The first four distributions correspond exactly to the parameters of the CCM model. Parameter ω is a “coupling parameter” which measures the compatibility of tree-aligned constituent yield pairs. The final two parameters measure the compatibility of syntactic alignments with the observed lexical GIZA++ alignments. Intuitively, aligned nodes should have a high density of word-level alignments between them, and unaligned nodes should have few lexical alignments. More formally, consider a tree-aligned node pair (n1, n2) with corresponding yields (y1, y2). We call a word-level alignment good if it aligns a word in y1 with a word in y2. We call a wordlevel alignment bad if it aligns a word in y1 with a word outside y2, or vice versa. The Gizascore for (n1, n2) is the number of good word alignments minus the number of bad word alignments. 
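A small sketch of this computation follows; the function and argument names are ours rather than the authors', and constituent yields are represented simply as sets of word positions. The worked example that follows exercises exactly this count.

```python
def giza_score(yield1, yield2, word_alignments):
    """Giza-score of a tree-aligned node pair (n1, n2).

    yield1, yield2  -- sets of word positions covered by the two constituents
    word_alignments -- iterable of (i, j) GIZA++ links, i indexing the first
                       sentence and j the second

    Good links join a word inside yield1 to a word inside yield2; bad links
    join a word inside one yield to a word outside the other. The score is
    the number of good links minus the number of bad links.
    """
    good = bad = 0
    for i, j in word_alignments:
        inside1, inside2 = i in yield1, j in yield2
        if inside1 and inside2:
            good += 1
        elif inside1 or inside2:   # the link crosses the constituent boundary
            bad += 1
        # links touching neither constituent do not affect this node pair
    return good - bad
```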
For example, suppose the constituent my long name is node-aligned to its Urdu translation mera lamba naam. If only the word-pairs my/mera and name/naam are aligned, then the Giza-score for this node-alignment would be 2. If however, the English word long were (incorrectly) aligned under GIZA++ to some Urdu word outside the corresponding constituent, then the score would drop to 1. This score could even be negative if the number of bad alignments exceeds those that are good. Distribution Gzpair provides a probability for these scores (up to some fixed absolute value). For an unaligned node n with corresponding yield y, only bad GIZA++ alignments are possible, thus the Giza-score for these nodes will always be zero or negative. Distribution Gznode provides a probability for these scores (down to some fixed value). We want our model to find tree alignments such that both aligned node pairs and unaligned nodes have high Giza-score. 3.4 Generative Process Now we describe the stochastic process whereby the observed parallel sentences and their wordlevel alignments are generated, according to our model. As the first step in the Bayesian generative process, all the multinomial parameters listed in the previous section are drawn from their conjugate priors — Dirichlet distributions of appropriate dimension. Then, each pair of word-aligned parallel sentences is generated through the following process: 1. A pair of binary trees T1 and T2 along with an alignment tree A are drawn according to P(T1, T2, A). A is an alignment tree for T1 and T2 if it can be obtained by the following steps: First insert blank nodes (labeled by λ) into T1 and T2. Then permute the order of sibling nodes such that the two resulting trees T ′ 1 and T ′ 2 are identical in structure. Finally, overlay T ′ 1 and T ′ 2 to obtain A. We additionally require that A contain no extraneous nodes – that is no nodes with two blank labels (λ, λ). See Figure 1 for an example. We define the distribution P(T1, T2, A) to be uniform over all pairs of binary trees and their alignments. 2. For each node in A of the form (n1, λ) (i.e. nodes in T1 left unaligned by A), draw (i) a constituent yield according to πC 1 , 76 (ii) a constituent context according to φC 1 , (iii) a Giza-score according to Gznode. 3. For each node in A of the form (λ, n2) (i.e. nodes in T2 left unaligned by A), draw (i) a constituent yield according to πC 2 , (ii) a constituent context according to φC 2 , (iii) a Giza-score according to Gznode. 4. For each node in A of the form (n1, n2) (i.e. tree-aligned node pairs), draw (i) a pair of constituent yields (y1, y2) according to: φC 1 (y1) · φC 2 (y2) · ω(y1, y2) Z (1) which is a product of experts combining the language specific context-yield distributions as well as the coupling distribution ω with normalization constant Z, (ii) a pair of contexts according to the appropriate language-specific parameters, (iii) a Giza-score according to Gzpair. 5. For each span in Ti not dominated by a node (for each language i ∈{1, 2}), draw a distituent yield according to πD i and a distituent context according to φD i . 6. Draw actual word-level alignments consistent with the Giza-scores, according to a uniform distribution. In the next section we turn to the problem of inference under this model when only the partof-speech tag sequences of parallel sentences and their word-level alignments are observed. 
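Before that, the coupled draw in step 4 can be made concrete. The sketch below normalizes the product of experts in Equation 1 over an explicitly enumerated set of candidate yield pairs; the enumeration and all names are illustrative assumptions on our part, since a real implementation would not tabulate this distribution exhaustively.

```python
def coupled_yield_probs(p_yield_lang1, p_yield_lang2, omega):
    """Normalized product-of-experts distribution of Equation (1).

    p_yield_lang1, p_yield_lang2 -- language-specific multinomials over
                                    constituent yields (dicts: yield -> prob)
    omega                        -- coupling multinomial over yield pairs,
                                    keyed by (yield1, yield2) tuples
    Returns a dict mapping each candidate pair to its probability; the
    normalization constant Z is assumed to be positive.
    """
    scores = {
        (y1, y2): p_yield_lang1[y1] * p_yield_lang2[y2] * omega.get((y1, y2), 0.0)
        for y1 in p_yield_lang1
        for y2 in p_yield_lang2
    }
    z = sum(scores.values())            # the normalization constant Z
    return {pair: s / z for pair, s in scores.items()}
```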
3.5 Inference Given a corpus of paired part-of-speech tag sequences (s1, s2) and their GIZA++ alignments g, we would ideally like to predict the set of tree pairs (T1, T2) which have highest probability when conditioned on the observed data: P T1, T2 s1, s2, g  . We could rewrite this by explicitly integrating over the yield, context, coupling, Giza-score parameters as well as the alignment trees. However, since maximizing this integral directly would be intractable, we resort to standard Markov chain sampling techniques. We use Gibbs sampling (Hastings, 1970) to draw trees for each sentence conditioned on those drawn for all other sentences. The samples form a Markov chain which is guaranteed to converge to the true joint distribution over all sentences. In the monolingual setting, there is a wellknown tree sampling algorithm (Johnson et al., 2007). This algorithm proceeds in top-down fashion by sampling individual split points using the marginal probabilities of all possible subtrees. These marginals can be efficiently pre-computed and form the “inside” table of the famous InsideOutside algorithm. However, in our setting, trees come in pairs, and their joint probability crucially depends on their alignment. For the ith parallel sentence, we wish to jointly sample the pair of trees (T1, T2)i together with their alignment Ai. To do so directly would involve simultaneously marginalizing over all possible subtrees as well as all possible alignments between such subtrees when sampling upper-level split points. We know of no obvious algorithm for computing this marginal. We instead first sample the pair of trees (T1, T2)i from a simpler proposal distribution Q. Our proposal distribution assumes that no nodes of the two trees are aligned and therefore allows us to use the recursive topdown sampling algorithm mentioned above. After a new tree pair T ∗= (T ∗ 1 , T ∗ 2 )i is drawn from Q, we accept the pair with the following probability: min  1, P(T ∗|T−i, A−i) Q(T|T−i, A−i) P(T|T−i, A−i) Q(T ∗|T−i, A−i)  where T is the previously sampled tree-pair for sentence i, P is the true model probability, and Q is the probability under the proposal distribution. This use of a tractable proposal distribution and acceptance ratio is known as the MetropolisHastings algorithm and it preserves the convergence guarantee of the Gibbs sampler (Hastings, 1970). To compute the terms P(T ∗|T−i, A−i) and P(T|T−i, A−i) in the acceptance ratio above, we need to marginalize over all possible alignments between tree pairs. Fortunately, for any given pair of trees T1 and T2 this marginalization can be computed using a dynamic program in time O(|T1||T2|). Here we provide a very brief sketch. For every pair of nodes n1 ∈T1, n2 ∈T2, a table stores the marginal probability of the subtrees rooted at n1 and n2, respectively. A dynamic program builds this table from the bottom up: For each node pair n1, n2, we sum the probabilities of all local alignment configurations, each multiplied by the appro77 priate marginals already computed in the table for lower-level node pairs. This algorithm is an adaptation of the dynamic program presented in (Jiang et al., 1995) for finding minimum cost alignment trees (Fig. 5 of that publication). Once a pair of trees (T1, T2) has been sampled, we can proceed to sample an alignment tree A|T1, T2.2 We sample individual alignment decisions from the top down, at each step using the alignment marginals for the remaining subtrees (already computed using the afore-mentioned dynamic program). 
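The acceptance test described above can be sketched compactly in log space; the function and argument names are illustrative rather than taken from the authors' code.

```python
import math
import random

def mh_accept(log_p_new, log_q_new, log_p_old, log_q_old, rng=random):
    """Metropolis-Hastings acceptance test for a proposed tree pair.

    log_p_* -- log of the true model probability P(. | T_-i, A_-i), with the
               alignment marginalized out by the dynamic program
    log_q_* -- log of the proposal probability under the simpler,
               alignment-free distribution Q
    The proposal is accepted with probability
    min(1, (P_new * Q_old) / (P_old * Q_new)), matching the ratio above.
    """
    log_ratio = (log_p_new + log_q_old) - (log_p_old + log_q_new)
    return rng.random() < math.exp(min(0.0, log_ratio))
```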
Once the triple (T1, T2, A) has been sampled, we move on to the next parallel sentence. We avoid directly sampling parameter values, instead using the marginalized closed forms for multinomials with Dirichlet conjugate-priors using counts and hyperparameter pseudo-counts (Gelman et al., 2004). Note that in the case of yield pairs produced according to Distribution 1 (in step 4 of the generative process) conjugacy is technically broken, since the yield pairs are no longer produced by a single multinomial distribution. Nevertheless, we count the produced yields as if they had been generated separately by each of the distributions involved in the numerator of Distribution 1. 4 Experimental setup We test our model on three corpora of bilingual parallel sentences: English-Korean, EnglishUrdu, and English-Chinese. Though the model is trained using parallel data, during testing it has access only to monolingual data. This set-up ensures that we are testing our model’s ability to learn better parameters at training time, rather than its ability to exploit parallel data at test time. Following (Klein and Manning, 2002), we restrict our model to binary trees, though we note that the alignment trees do not follow this restriction. Data The Penn Korean Treebank (Han et al., 2002) consists of 5,083 Korean sentences translated into English for the purposes of language training in a military setting. Both the Korean and English sentences are annotated with syntactic trees. We use the first 4,000 sentences for training and the last 1,083 sentences for testing. We note that in the Korean data, a separate tag is given for 2Sampling the alignment tree is important, as it provides us with counts of aligned constituents for the coupling parameter. each morpheme. We simply concatenate all the morpheme tags given for each word and treat the concatenation as a single tag. This procedure results in 199 different tags. The English-Urdu parallel corpus3 consists of 4,325 sentences from the first three sections of the Penn Treebank and their Urdu translations annotated at the part-of-speech level. The Urdu side of this corpus does not provide tree annotations so here we can test parse accuracy only on English. We use the remaining sections of the Penn Treebank for English testing. The English-Chinese treebank (Bies et al., 2007) consists of 3,850 Chinese newswire sentences translated into English. Both the English and Chinese sentences are annotated with parse trees. We use the first 4/5 for training and the final 1/5 for testing. During preprocessing of the corpora we remove all punctuation marks and special symbols, following the setup in previous grammar induction work (Klein and Manning, 2002). To obtain lexical alignments between the parallel sentences we employ GIZA++ (Och and Ney, 2003). We use intersection alignments, which are one-to-one alignments produced by taking the intersection of oneto-many alignments in each direction. These oneto-one intersection alignments tend to have higher precision. We initialize the trees by making uniform split decisions recursively from the top down for sentences in both languages. Then for each pair of parallel sentences we randomly sample an initial alignment tree for the two sampled trees. Baseline We implement a Bayesian version of the CCM as a baseline. This model uses the same inference procedure as our bilingual model (Gibbs sampling). 
In fact, our model reduces to this Bayesian CCM when it is assumed that no nodes between the two parallel trees are ever aligned and when word-level alignments are ignored. We also reimplemented the original EM version of CCM and found virtually no difference in performance when using EM or Gibbs sampling. In both cases our implementation achieves F-measure in the range of 69-70% on WSJ10, broadly in line with the performance reported by Klein and Manning (2002). Hyperparameters Klein (2005) reports using smoothing pseudo-counts of 2 for constituent 3http://www.crulp.org 78 Figure 2: The F-measure of the CCM baseline (dotted line) and bilingual model (solid line) plotted on the y-axis, as the maximum sentence length in the test set is increased (x-axis). Results are averaged over all training scenarios given in Table 1. yields and contexts and 8 for distituent yields and contexts. In our Bayesian model, these similar smoothing counts occur as the parameters of the Dirichlet priors. For Korean we found that the baseline performed well using these values. However, on our English and Chinese data, we found that somewhat higher smoothing values worked best, so we utilized values of 20 and 80 for constituent and distituent smoothing counts, respectively. Our model additionally requires hyperparameter values for ω (the coupling distribution for aligned yields), Gzpair and Gznode (the distributions over Giza-scores for aligned nodes and unaligned nodes, respectively). For ω we used a symmetric Dirichlet prior with parameter 1. For Gzpair and Gznode, in order to create a strong bias towards high Giza-scores, we used non-symmetric Dirichlet priors. In both cases, we capped the absolute value of the scores at 3, to prevent count sparsity. In the case of Gzpair we gave pseudocounts of 1,000 for negative values and zero, and pseudo-counts of 1,000,000 for positive scores. For Gznode we gave a pseudo-count of 1,000,000 for a score of zero, and 1,000 for all negative scores. This very strong prior bias encodes our intuition that syntactic alignments which respect lexical alignments should be preferred. Our method is not sensitive to these exact values and any reasonably strong bias gave similar results. In all our experiments, we consider the hyperparameters fixed and observed values. Testing and evaluation As mentioned above, we test our model only on monolingual data, where the parallel sentences are not provided to the model. To predict the bracketings of these monolingual test sentences, we take the smoothed counts accumulated in the final round of sampling over the training data and perform a maximum likelihood estimate of the monolingual CCM parameters. These parameters are then used to produce the highest probability bracketing of the test set. To evaluate both our model as well as the baseline, we use (unlabeled) bracket precision, recall, and F-measure (Klein and Manning, 2002). Following previous work, we include the wholesentence brackets but ignore single-word brackets. We perform experiments on different subsets of training and testing data based on the sentencelength. In particular we experimented with sentence length limits of 10, 20, and 30 for both the training and testing sets. We also report the upper bound on F-measure for binary trees. We average the results over 10 separate sampling runs. 5 Results Table 1 reports the full results of our experiments. In all testing scenarios the bilingual model outperforms its monolingual counterpart in terms of both precision and recall. 
On average, the bilingual model gains 10.2 percentage points in precision, 7.7 in recall, and 8.8 in F-measure. The gap between monolingual performance and the binary tree upper bound is reduced by over 19%. The extent of the gain varies across pairings. For instance, the smallest improvement is observed for English when trained with Urdu. The Korean-English pairing results in substantial improvements for Korean and quite large improvements for English, for which the absolute gain reaches 28 points in F-measure. In the case of Chinese and English, the gains for English are fairly minimal whereas those for Chinese are quite sub79 Max Sent. Length Monolingual Bilingual Upper Bound Test Train Precision Recall F1 Precision Recall F1 F1 EN with KR 10 10 52.74 39.53 45.19 57.76 43.30 49.50 85.6 20 41.87 31.38 35.87 61.66 46.22 52.83 85.6 30 33.43 25.06 28.65 64.41 48.28 55.19 85.6 20 20 35.12 25.12 29.29 56.96 40.74 47.50 83.3 30 26.26 18.78 21.90 60.07 42.96 50.09 83.3 30 30 23.95 16.81 19.76 58.01 40.73 47.86 82.4 KR with EN 10 10 71.07 62.55 66.54 75.63 66.56 70.81 93.6 20 71.35 62.79 66.80 77.61 68.30 72.66 93.6 30 71.37 62.81 66.82 77.87 68.53 72.91 93.6 20 20 64.28 54.73 59.12 70.44 59.98 64.79 91.9 30 64.29 54.75 59.14 70.81 60.30 65.13 91.9 30 30 63.63 54.17 58.52 70.11 59.70 64.49 91.9 EN with CH 10 10 50.09 34.18 40.63 37.46 25.56 30.39 81.0 20 58.86 40.17 47.75 50.24 34.29 40.76 81.0 30 64.81 44.22 52.57 68.24 46.57 55.36 81.0 20 20 41.90 30.52 35.31 38.64 28.15 32.57 84.3 30 52.83 38.49 44.53 58.50 42.62 49.31 84.3 30 30 46.35 33.67 39.00 51.40 37.33 43.25 84.1 CH with EN 10 10 39.87 27.71 32.69 40.62 28.23 33.31 81.9 20 43.44 30.19 35.62 47.54 33.03 38.98 81.9 30 43.63 30.32 35.77 54.09 37.59 44.36 81.9 20 20 29.80 23.46 26.25 36.93 29.07 32.53 88.0 30 30.05 23.65 26.47 43.99 34.63 38.75 88.0 30 30 24.46 19.41 21.64 39.61 31.43 35.05 88.4 EN with UR 10 10 57.98 45.68 51.10 73.43 57.85 64.71 88.1 20 70.57 55.60 62.20 80.24 63.22 70.72 88.1 30 75.39 59.40 66.45 79.04 62.28 69.67 88.1 20 20 57.78 43.86 49.87 67.26 51.06 58.05 86.3 30 63.12 47.91 54.47 64.45 48.92 55.62 86.3 30 30 57.36 43.02 49.17 57.97 43.48 49.69 85.7 Table 1: Unlabeled precision, recall and F-measure for the monolingual baseline and the bilingual model on several test sets. We report results for different combinations of maximum sentence length in both the training and test sets. The right most column, in all cases, contains the maximum F-measure achievable using binary trees. The best performance for each test-length is highlighted in bold. stantial. This asymmetry should not be surprising, as Chinese on its own seems to be quite a bit more difficult to parse than English. We also investigated the impact of sentence length for both the training and testing sets. For our model, adding sentences of greater length to the training set leads to increases in parse accuracy for short sentences. For the baseline, however, adding this additional training data degrades performance in the case of English paired with Korean. Figure 2 summarizes the performance of our model for different sentence lengths on several of the test-sets. As shown in the figure, the largest improvements tend to occur at longer sentence lengths. 6 Conclusion We have presented a probabilistic model for bilingual grammar induction which uses raw parallel text to learn tree pairs and their alignments. Our formalism loosely binds the two trees, using bilingual patterns when possible, but allowing substantial language-specific variation. 
We tested our model on three test sets and showed substantial improvement over a state-of-the-art monolingual baseline.4 4The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, and grant IIS0835652). Thanks to Amir Globerson and members of the MIT NLP group for their helpful suggestions. Any opinions, findings, or conclusions are those of the authors, and do not necessarily reflect the views of the funding organizations 80 References Ann Bies, Martha Palmer, Justin Mott, and Colin Warner. 2007. English Chinese translation treebank v 1.0. LDC2007T02. Phil Blunsom, Trevor Cohn, and Miles Osborne. 2008. Bayesian synchronous grammar induction. In Proceedings of NIPS. David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Proceedings of EMNLP, pages 877–886. Eugene Charniak and Glen Carroll. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Proceedings of the AAAI Workshop on Statistically-Based NLP Techniques, pages 1–13. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the ACL, pages 263–270. Shay B. Cohen and Noah A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In Proceedings of the NAACL/HLT. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In The Companion Volume to the Proceedings of the ACL, pages 205–208. Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. 2004. Bayesian data analysis. Chapman and Hall/CRC. Dmitriy Genzel. 2005. Inducing a multilingual dictionary from a parallel multitext in related languages. In Proceedings of EMNLP/HLT, pages 875–882. C. Han, N.R. Han, E.S. Ko, H. Yi, and M. Palmer. 2002. Penn Korean Treebank: Development and evaluation. In Proc. Pacific Asian Conf. Language and Comp. W. K. Hastings. 1970. Monte carlo sampling methods using Markov chains and their applications. Biometrika, 57:97–109. R. Hwa, P. Resnik, A. Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Journal of Natural Language Engineering, 11(3):311–325. T. Jiang, L. Wang, and K. Zhang. 1995. Alignment of trees – an alternative to tree edit. Theoretical Computer Science, 143(1):137–148. M. Johnson, T. Griffiths, and S. Goldwater. 2007. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proceedings of the NAACL/HLT, pages 139–146. Dan Klein and Christopher D. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the ACL, pages 128–135. D. Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford University. Jonas Kuhn. 2004. Experiments in parallel-text based grammar induction. In Proceedings of the ACL, pages 470–477. I. Dan Melamed. 2003. Multitext grammars and synchronous parsers. In Proceedings of the NAACL/HLT, pages 79–86. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Yoav Seginer. 2007. Fast unsupervised incremental parsing. In Proceedings of the ACL, pages 384–391. David A. Smith and Noah A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proceeding of EMNLP, pages 49– 56. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised multilingual learning for morphological segmentation. 
In Proceedings of the ACL/HLT, pages 737–745. Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised multilingual learning for POS tagging. In Proceedings of EMNLP, pages 1041–1050. Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2009. Adding more languages improves unsupervised multilingual part-of-speech tagging: A Bayesian non-parametric approach. In Proceedings of the NAACL/HLT. Andreas Stolcke and Stephen M. Omohundro. 1994. Inducing probabilistic grammars by Bayesian model merging. In Proceedings of ICGI, pages 106–118. Dekai Wu and Hongsing Wong. 1998. Machine translation with a stochastic grammatical channel. In Proceedings of the ACL/COLING, pages 1408– 1415. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Chenhai Xi and Rebecca Hwa. 2005. A backoff model for bootstrapping resources for non-english languages. In Proceedings of EMNLP, pages 851 – 858. Hao Zhang and Daniel Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In Proceedings of the ACL, pages 475–482. 81
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 800–808, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Case markers and Morphology: Addressing the crux of the fluency problem in English-Hindi SMT Ananthakrishnan Ramanathan, Hansraj Choudhary Avishek Ghosh, Pushpak Bhattacharyya Department of Computer Science and Engineering Indian Institute of Technology Bombay Powai, Mumbai-400076 India {anand, hansraj, avis, pb}@cse.iitb.ac.in Abstract We report in this paper our work on accurately generating case markers and suffixes in English-to-Hindi SMT. Hindi is a relatively free word-order language, and makes use of a comparatively richer set of case markers and morphological suffixes for correct meaning representation. From our experience of large-scale English-Hindi MT, we are convinced that fluency and fidelity in the Hindi output get an order of magnitude facelift if accurate case markers and suffixes are produced. Now, the moot question is: what entity on the English side encodes the information contained in case markers and suffixes on the Hindi side? Our studies of correspondences in the two languages show that case markers and suffixes in Hindi are predominantly determined by the combination of suffixes and semantic relations on the English side. We, therefore, augment the aligned corpus of the two languages, with the correspondence of English suffixes and semantic relations with Hindi suffixes and case markers. Our results on 400 test sentences, translated using an SMT system trained on around 13000 parallel sentences, show that suffix + semantic relation →case marker/suffix is a very useful translation factor, in the sense of making a significant difference to output quality as indicated by subjective evaluation as well as BLEU scores. 1 Introduction Two fundamental problems in applying statistical machine translation (SMT) techniques to EnglishHindi (and generally to Indian language) MT are: i) the wide syntactic divergence between the language pairs, and ii) the richer morphology and case marking of Hindi compared to English. The first problem manifests itself in poor word-order in the output translations, while the second one leads to incorrect inflections (word-endings) and case marking. Being a free word-order language, Hindi suffers badly when morphology and case markers are incorrect. To solve the former, word-order related, problem, we use a preprocessing technique, which we have discussed in (Ananthakrishnan et al., 2008). This procedure is similar to what is suggested in (Collins et al., 2005) and (Wang, 2007), and results in the input sentence being reordered to follow Hindi structure. The focus of this paper, however, is on the thorny problem of generating case markers and morphology. It is recognized that translating from poor to rich morphology is a challenge (Avramidis and Koehn, 2008) that calls for deeper linguistic analysis to be part of the translation process. Such analysis is facilitated by factored models (Koehn et al., 2007), which provide a framework for incorporating lemmas, suffixes, POS tags, and any other linguistic factors in a log-linear model for phrasebased SMT. In this paper, we motivate a factorization well-suited to English-Hindi translation. The factorization uses semantic relations and suffixes to generate inflections and case markers. 
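To make the factored setup concrete before it is described in detail, the sketch below renders a sentence in the pipe-separated factored-corpus format consumed by factored phrase-based systems such as Moses; the function is our own illustration, and the 'empty' placeholder follows the convention used in the worked examples later in the paper.

```python
def to_factored_line(tokens):
    """Render one sentence as a factored-corpus line: lemma|suffix|relation.

    tokens -- list of (lemma, suffix, relation) triples; missing factors are
              written as 'empty'.
    """
    return " ".join(
        "|".join(part if part else "empty" for part in triple)
        for triple in tokens
    )

# For "The boys ate apples.":
print(to_factored_line([("the", "", "det"), ("boy", "s", "subj"),
                        ("eat", "ed", ""), ("apple", "s", "obj")]))
# -> the|empty|det boy|s|subj eat|ed|empty apple|s|obj
```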
Our experiments include two different kinds of semantic relations, namely, dependency relations provided by the Stanford parser, and the deeper semantic roles (agent, patient, etc.) provided by the universal networking language (UNL). Our experiments show that the use of semantic relations and syntactic reordering leads to substantially better quality translation. The use of even moderately accurate semantic relations has an especially salubrious effect on fluency. 800 2 Related Work There have been quite a few attempts at including morphological information within statistical MT. Nießen and Ney (2004) show that the use of morpho-syntactic information drastically reduces the need for bilingual training data. Popovic and Ney (2006) report the use of morphological and syntactic restructuring information for SpanishEnglish and Serbian-English translation. Koehn and Hoang (2007) propose factored translation models that combine feature functions to handle syntactic, morphological, and other linguistic information in a log-linear model. This work also describes experiments in translating from English to German, Spanish, and Czech, including the use of morphological factors. Avramidis and Koehn (2008) report work on translating from poor to rich morphology, namely, English to Greek and Czech translation. They use factored models with case and verb conjugation related factors determined by heuristics on parse trees. The factors are used only on the source side, and not on the target side. To handle syntactic differences, Melamed (2004) proposes methods based on tree-to-tree mappings. Imamura et al. (2005) present a similar method that achieves significant improvements over a phrase-based baseline model for Japanese-English translation. Another method for handling syntactic differences is preprocessing, which is especially pertinent when the target language does not have parsing tools. These algorithms attempt to reconcile the word-order differences between the source and target language sentences by reordering the source language data prior to the SMT training and decoding cycles. Nießen and Ney (2004) propose some restructuring steps for German-English SMT. Popovic and Ney (2006) report the use of simple local transformation rules for SpanishEnglish and Serbian-English translation. Collins et al. (2005) propose German clause restructuring to improve German-English SMT, while Wang et al. (2007) present similar work for ChineseEnglish SMT. Our earlier work (Ananthakrishnan et al., 2008) describes syntactic reordering and morphological suffix separation for English-Hindi SMT. 3 Motivation The fundamental differences between English and Hindi are: • English follows SVO order, whereas Hindi follows SOV order • English uses post-modifiers, whereas Hindi uses pre-modifiers • Hindi allows greater freedom in word-order, identifying constituents through case marking • Hindi has a relatively richer system of morphology We resolve the first two syntactic differences by reordering the English sentence to conform to Hindi word-order in a preprocessing step as described in (Ananthakrishnan et al., 2008). The focus of this paper, however, is on the last two of these differences, and here we dwell a bit on why this focus on case markers and morphology is crucial to the quality of translation. 3.1 Case markers While in English, the major constituents of a sentence (subject, object, etc.) can usually be identified by their position in the sentence, Hindi is a relatively free word-order language. 
Constituents can be moved around in the sentence without impacting the core meaning. For example, the following sentence pair conveys the same meaning (John saw Mary), albeit with different emphases. jAn n mrF ko dKA John ne Mary ko dekhaa John-nom Mary-acc saw mrF ko jAn n dKA Mary ko John ne dekhaa Mary-acc John-nom saw The identity of John as the subject and Mary as the object in both sentences comes from the case markers n (ne – nominative) and ko (ko – accusative). Therefore, even though Hindi is predominantly SOV in its word-order, correct case marking is a crucial part of making translations convey the right meaning. 801 3.2 Morphology The following examples illustrate the richer morphology of Hindi compared to English: Oblique case: The plural-marker in the word “boys” in English is translated as e (e – plural direct) or ao\ (on – plural oblique): The boys went to school. lXk pAWfAlA gy ladake paathashaalaa gaye The boys ate apples. lXko\ n sb KAy ladokon ne seba khaaye Future tense: Future tense in Hindi is marked on the verb. In the following example, “will go” is translated as jAy\g (jaaenge), with e\g (enge) as the future tense marker: The boys will go to school. lXk pAWfAlA jAy\g ladake paathashaalaa jayenge Causative constructions: The aAyA (aayaa) suffix indicates causativity: The boys made them cry. lXko\ n uh zlAyA ladakon ne unhe rulaayaa 3.3 Sparsity Using a standard SMT system for English-Hindi translation will cause severe data sparsity with respect to case marking and morphology. For example, the fact that the word boys in oblique case (say, when followed by n (ne)) should take the form lXko\ (ladakon) will be learnt only if the correspondence between boys and lXko\ n (ladakon ne) exists in the training corpus. The more general rule that n (ne) should be preceded by the oblique case ending ao\ (on) cannot be learnt. Similarly, the plural form of boys will be produced only if that form exists in the training corpus. Essentially, all morphological forms of a word and its translations have to exist in the training corpus, and every word has to appear with every possible case marker, which will require an impossible amount of training data. Therefore, it is imperative to make it possible for the system to learn general rules for morphology and case marking. The next section describes our approach to facilitating the learning of such rules. 4 Approach While translating from a language of moderate case marking and morphology (English) to one with relatively richer case marking and morphology (Hindi), we are faced with the problem of extracting information from the source language sentence, transferring the information onto the target side, and translating this information into the appropriate case markers and morphological affixes. The key bits of information for us are suffixes and semantic relations, and the vehicle that transfers and translates the information is the factored model for phrase based SMT (Koehn 2007). 4.1 Factored Model Factored models allow the translation to be broken down into various components, which are combined using a log-linear model: p(e|f) = 1 Z exp n X i=1 λihi(e, f) (1) Each hi is a feature function for a component of the translation (such as the language model), and the λ values are weights for the feature functions. 4.2 Our Factorization Our factorization, which is illustrated in figure 1, consists of: 1. a lemma to lemma translation factor (boy → lXk^ (ladak)) 2. 
a suffix + semantic relation to suffix/case marker factor (-s + subj →e (e)) 3. a lemma + suffix to surface form generation factor (lXk^ + e (ladak + e) →lXk (ladake)) The above factorization is motivated by the following: • Case markers are decided by semantic relations and tense-aspect information in suffixes. For example, if a clause has an object, and has a perfective form, the subject usually requires the case marker n (ne). John ate an apple. John|empty|subj eat|ed|empty an|empty|det apple|empty|obj 802 Figure 1: Semantic and Suffix Factors: the combination of English suffixes and semantic relations is aligned with Hindi suffixes and case markers jAn n sb KAyA john ne seba khaayaa Thus, the combination of the suffix and semantic relation generates the right case marker (ed|empty + empty|obj →n (ne)). • Target language suffixes are largely determined by source language suffixes and case markers (which in turn are determined by the semantic relations) The boys ate apples. The|empty|det boy|s|subj eat|ed|empty apple|s|obj lXko\ n sb KAy ladakon ne seba khaaye Here, the plural suffix on boys leads to two possibilities – lXk (ladake – plural direct) and lXko\ (ladakon – plural oblique). The case marker n (ne) requires the oblique case. • Our factorization provides the system with two sources to determine the case markers and suffixes. While the translation steps discussed above are one source, the language model over the suffix/case marker factor reinforces the decisions made. For example, the combination lXkA n (ladakaa ne) is impossible, while lXko\ n (ladakon ne) is very likely. The separation of the lemma and suffix helps in tiding over the data sparsity problem by allowing the system to reason about the suffix-case marker combination rather than the combination of the specific word and the case marker. 5 Semantic Relations The experiments have been conducted with two kinds of semantic relations. One of them is the relations from the Universal Networking Language (UNL), and the other is the grammatical relations produced by the Stanford parser. The relations in both UNL and the Stanford dependency parser are strictly binary and form a directed graph. These relations express the semantic dependencies among the various words in the sentence. Stanford: The Stanford dependency parser (Marie-Catherine and Manning, 2008) uses 55 relations to express the dependencies among the various words in a sentence. These relations form a hierarchical structure with the most general relation at the root. There are various argument relations like subject, object, objects of prepositions, and clausal complements, modifier relations like adjectival, adverbial, participial, and infinitival modifiers, and other relations like coordination, conjunct, expletive, and punctuation. UNL: The 44 UNL relations1 include relations such as agent, object, co-agent, and partner, temporal relations, locative relations, conjunctive and disjunctive relations, comparative relations and also hierarchical relationships like part-of and aninstance-of. Comparison: Unlike the Stanford parser which expresses the semantic relationships through grammatical relations, UNL uses attributes and universal words, in addition to the semantic roles, to express the same. Universal words are used to disambiguate words, while attributes are used to express the speaker’s point of view in the sentence. UNL relations, compared to the relations in the Stanford parser, are more semantic than grammatical. 
For instance, in the Stanford parser, the agent relation is the complement of a passive verb introduced by the preposition by, whereas in UNL it 1http://www.undl.org/unlsys/unl/unl2005/ 803 Figure 2: UNL and Stanford semantic relation graphs for the sentence “John said that he was hit by Jack” #sentences #words Training 12868 316508 Tuning 600 15279 Test 400 8557 Table 1: Corpus Statistics signifies the doer of an action. Consider the following sentence: John said that he was hit by Jack. In this sentence, the Stanford parser produces the relation agent(hit, Jack) and nsubj(said, John) as shown in figure 2. In UNL, however, both the cases use the agent relation. The other distinguishing aspect of UNL is the hyper-node that represents scope. In the example sentence, the whole clause “that he was hit by Jack” forms the object of the verb said, and hence is represented in a scope. The Stanford dependency parser on the other hand represents these dependencies with the help of the clausal complement relation, which links said with hit, and uses the complementizer relation to introduce the subordinating conjunction. The pre-dependency accuracy of the Stanford dependency parser is around 80% (MarieCatherine et al., 2006), while the accuracy achieved by the UNL generating system is 64.89%. 6 Experiments 6.1 Setup The corpus described in table 1 was used for the experiments. The SRILM toolkit 2 was used to create Hindi language models using the target side of the training corpus. Training, tuning, and decoding were performed using the Moses toolkit 3. Tuning (learning the λ values discussed in section 4.1) was done using minimum error rate training (Och, 2003). The Stanford parser 4 was used for parsing the English text for syntactic reordering and to generate “stanford” semantic relations. The program for syntactic reordering used the parse trees generated by the Stanford parser, and was written in perl using the module Parse::RecDescent. English morphological analysis was performed using morpha (Minnen et al., 2001), while Hindi suffix separation was done using the stemmer described in (Ananthakrishnan and Rao, 2003). Syntactic and morphological transformations, in the models where they were employed, were applied at every phase: training, tuning, and testing. Evaluation Criteria: Automatic evaluation was performed using BLEU and NIST on the entire test set of 400 sentences. Subjective evaluation was performed on 125 sentences from the test set. • BLEU (Papineni et al., 2001): measures the precision of n-grams with respect to the reference translations, with a brevity penalty. A higher BLEU score indicates better translation. • NIST 5: measures the precision of n-grams. This metric is a variant of BLEU, which was 2http://www.speech.sri.com/projects/srilm/ 3http://www.statmt.org/moses/ 4http://nlp.stanford.edu/software/lex-parser.shtml 5www.nist.gov/speech/tests/mt/doc/ngram-study.pdf 804 shown to correlate better with human judgments. Again, a higher score indicates better translation. • Subjective: Human evaluators judged the fluency and adequacy, and counted the number of errors in case markers and morphology. 6.2 Results Table 2 shows the impact of suffix and semantic factors. The models experimented with are described below: baseline: The default settings of Moses were used for this model. lemma + suffix: This uses the lemma and suffix factors on the source side, and the lemma and suffix/case marker on the target side. 
The translation steps are i) lemma to lemma and ii) suffix to suffix/case marker, and the generation step is lemma+suffix/case marker to surface form. lemma + suffix + unl: This model uses, in addition to the factors in the lemma+suffix model, a semantic relation factor (UNL relations). The translation steps are i) lemma to lemma and ii) suffix+semantic relation to suffix/case marker, and the generation step again is lemma+suffix/case marker to surface form. lemma + suffix + stanford: This is identical to the previous model, except that stanford dependency relations are used instead of UNL relations. We can see a substantial improvement in scores when semantic relations are used. Table 5 shows the impact of syntactic reordering. The surface form with distortion-based, lexicalized, and syntactic reordering were experimented with. The model with the suffix and semantic factors was used with syntactic reordering. For subjective evaluation, sentences were judged on fluency, adequacy and the number of errors in case marking/morphology. To judge fluency, the judges were asked to look at how well-formed the output sentence is according to Hindi grammar, without considering what the translation is supposed to convey. The fivepoint scale in table 3 was used for evaluation. To judge adequacy, the judges were asked to compare each output sentence to the reference translation and judge how well the meaning conveyed by the reference was also conveyed by the output sentence. The five-point scale in table 4 was used. Table 6 shows the average fluency and adequacy scores, and the average number of errors per sentence. All differences are significant at the 99% level, except the difference in adequacy between the surface-syntactic model and the lemma+suffix+stanford syntactic model, which is significant at the 95% level. 7 Discussion We can see from the results that better fluency and adequacy are achieved with the use of semantic relations. The improvement in fluency is especially noteworthy. Figure 3 shows the distribution of fluency and adequacy scores. What is worth noting is that the number of sentences at levels 4 and 5 in terms of fluency and adequacy are much higher in case of the model that uses semantic relations. That is, the use of semantic relations, in combination with syntactic reordering, produces many more sentences that are reasonably or even perfectly fluent and convey most or all of the meaning. Table 7 shows the impact of sentence length on translation quality. We can see that with smaller sentences the improvements using syntactic reordering and semantic relations are much more pronounced. All models find long sentences difficult to handle, which contributes to bringing the mean performances closer. However, it is clear that many more useful translations are being produced due to syntactic reordering and semantic relations. The following is an example of the kind of improvements achieved: Input: Inland waterway is one of the most popular picnic spots in Alappuzha. Baseline: m\ ek at,-TlFy jlmAg k sbs pEsd EpkEnk -Tl m\ jlo\ m\ dOXtF h{ men eka antahsthaliiya jalamaarga ke sabase prasiddha pikanika sthala men jalon men daudatii hai gloss: in a waterway of most popular picnic spot in waters runs. 
Reorder: at,-TlFy jlmAg aAlp yA k sbs pEsd EpkEnk -Tl m\ s ek h{ antahsthaliiya jalamaarga aalapuzaa ke sabase prasiddha pikanika sthala men se eka hai 805 Model BLEU NIST Baseline (surface) 24.32 5.85 lemma + suffix 25.16 5.87 lemma + suffix + unl 27.79 6.05 lemma + suffix + stanford 28.21 5.99 Table 2: Results: The impact of suffix and semantic factors Level Interpretation 5 Flawless Hindi, with no grammatical errors whatsoever 4 Good Hindi, with a few minor errors in morphology 3 Non-native Hindi, with possibly a few minor grammatical errors 2 Disfluent Hindi, with most phrases correct, but ungrammatical overall 1 Incomprehensible Table 3: Subjective Evaluation: Fluency Scale Level Interpretation 5 All meaning is conveyed 4 Most of the meaning is conveyed 3 Much of the meaning is conveyed 2 Little meaning is conveyed 1 None of the meaning is conveyed Table 4: Subjective Evaluation: Adequacy Scale Model Reordering BLEU NIST surface distortion 24.42 5.85 surface lexicalized 28.75 6.19 surface syntactic 31.57 6.40 lemma + suffix + stanford syntactic 31.49 6.34 Table 5: Results: The impact of reordering and semantic relations Model Reordering Fluency Adequacy #errors surface lexicalized 2.14 2.26 2.16 surface syntactic 2.6 2.71 1.79 lemma + suffix + stanford syntactic 2.88 2.82 1.44 Table 6: Subjective Evaluation: The impact of reordering and semantic relations Baseline Reorder Stanford F A E F A E F A E Small (<19 words) 2.63 2.84 1.30 3.30 3.52 0.74 3.66 3.75 0.62 Medium (20-34 words) 1.92 2.00 2.23 2.32 2.43 2.05 2.62 2.46 1.74 Large (>34 words) 1.62 1.69 4.00 1.86 1.73 3.36 1.86 1.86 2.82 Table 7: Impact of sentence length (F: Fluency; A:Adequacy; E:# Errors) 806 Figure 3: Subjective evaluation: analysis gloss: waterway Alappuzha of most popular picnic spot of one is Semantic: at,-TlFy jlmAg aAlp yA k sbs pEsd EpkEnk -Tlo\ m\ s ek h{ antahsthaliiya jalamaarga aalapuzaa ke sabase prasiddha pikanika sthalon men se eka hai gloss: waterway Alappuzha of most popular picnic spots of one is We can see that poor word-order makes the baseline output almost incomprehensible, while syntactic reordering solves the problem correctly. The morphology improvement using semantic relations can be seen in the correct inflection achieved in the word -Tlo\ (sthalon – plural oblique – spots), whereas the output without using semantic relations generates -Tl (sthala – singular – spot). The next couple of examples illustrate how case marking improves through the use of semantic relations. Input: Gandhi Darshan and Gandhi National Museum is across Rajghat. Reorder: gA\DF df n v gA\DF rA£~ Fy s\g}hAly rAjGAV m\ h{ gaandhii darshana va gaandhii raashtriiya sangrahaalaya raajaghaata men hai Semantic: gA\DF df n v gA\DF rA£~ Fy s\g}hAly rAjGAV k pAr h{ gaandhii darshana va gaandhii raashtriiya sangrahaalaya raajaghaata ke paara hai Here, the use of semantic relations produces the correct meaning that the locations mentioned are across (k pAr (ke paara)) Rajghat, and not in (m\ (men)) Rajghat as suggested by the translation produced without using semantic relations. Another common error in case marking is that two case markers are produced in successive positions in the translation, which is not possible in Hindi. 
The following example (a fragment) shows this error (kF (kii) repeated) being correctly handled by using semantic relations: Input: For varieties of migratory birds Reorder: pvAsF pE"yo\ kF kF pkAr k Ely pravaasii pakshiyon kii kii prakaara ke liye Semantic: pvAsF pE"yo\ kF pkAr k Ely pravaasii pakshiyon kii prakaara ke liye It is important to note that the gains made using syntactic reordering and semantic relations are limited by the accuracy of the parsers (see section 5). We observe that even the use of moderate quality semantic relations goes a long way in increasing the quality of translation. 8 Conclusion We have reported in this paper the marked improvement in the output quality of Hindi translations – especially fluency – when the correspondence of English semantic relations and suffixes with Hindi case markers and inflections is used as a translation factor in English-Hindi SMT. The improvement is statistically significant. Subjective evaluation too lends ample credence to this claim. Future work consists of investigations into (i) how the internal structure of constituents can be strictly preserved and (ii) how to glue together correctly the syntactically well-formed bits and pieces of the sentences. This course of future action is suggested by the fact that smaller sentences are much more fluent in translation compared to medium length and long sentences. 807 References Ananthakrishnan, R., and Rao, D., A Lightweight Stemmer for Hindi, Workshop on Computational Linguistics for South-Asian Languages, EACL, 2003. Ananthakrishnan, R., Bhattacharyya, P., Hegde, J. J., Shah, R. M., and Sasikumar, M., Simple Syntactic and Morphological Processing Can Help English-Hindi Statistical Machine Translation, Proceedings of IJCNLP, 2008. Avramidis, E., and Koehn, P., Enriching Morphologically Poor Languages for Statistical Machine Translation, Proceedings of ACL-08: HLT, 2008. Collins, M., Koehn, P., and I. Kucerova, Clause Restructuring for Statistical Machine Translation, Proceedings of ACL, 2005. Imamura, K., Okuma, H., Sumita, E., Practical Approach to Syntax-based Statistical Machine Translation, Proceedings of MTSUMMIT X, 2005. Koehn, P., and Hoang, H., Factored Translation Models, Proceedings of EMNLP, 2007. Marie-Catherine de Marneffe, MacCartney, B., and Manning, C., Generating Typed Dependency Parses from Phrase Structure Parses, Proceedings of LREC, 2006. Marie-Catherine de Marneffe and Manning, C., Stanford Typed Dependency Manual, 2008. Melamed, D., Statistical Machine Translation by Parsing, Proceedings of ACL, 2004. Minnen, G., Carroll, J., and Pearce, D., Applied Morphological Processing of English, Natural Language Engineering, 7(3), pages 207– 223, 2001. Nießen, S., and Ney, H., Statistical Machine Translation with Scarce Resources Using Morpho-syntactic Information, Computational Linguistics, 30(2), pages 181–204, 2004. Och, F., Minimum Error Rate Training in Statistical Machine Translation, Proceedings of ACL, 2003. Papineni, K., Roukos, S., Ward, T., and Zhu, W., BLEU: a Method for Automatic Evaluation of Machine Translation, IBM Research Report, Thomas J. Watson Research Center, 2001. Popovic, M., and Ney, H., Statistical Machine Translation with a Small Amount of Bilingual Training Data, 5th LREC SALTMIL Workshop on Minority Languages, 2006. Wang, C., Collins, M., and Koehn, P., Chinese Syntactic Reordering for Statistical Machine Translation, Proceedings of the EMNLPCoNLL, 2007. 808
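As a concrete illustration of the factored setup described in Section 6.2 of the paper above, the following sketch builds factored source tokens in the pipe-separated format that Moses factored models consume. The assumed factor order (lemma | suffix | semantic relation) mirrors the factors discussed in the paper, but the helper names and the example annotation are illustrative assumptions, not the authors' code or data.

```python
# A minimal sketch (not the authors' code) of preparing pipe-separated factored
# input of the kind Moses factored models read: one token = factor0|factor1|...
# The assumed factor order here is lemma | suffix | semantic relation.

def factored_token(lemma, suffix, semrel):
    """Join one token's factors with '|', the Moses factor separator."""
    return "|".join([lemma, suffix, semrel])

def factored_sentence(analyses):
    """analyses: list of (lemma, suffix, semrel) triples for one source sentence."""
    return " ".join(factored_token(*a) for a in analyses)

if __name__ == "__main__":
    # Hypothetical analysis of "he was hit by Jack"; the factor values below
    # are invented for illustration only.
    analyses = [
        ("he", "NULL", "obj"),
        ("be", "ed", "NULL"),
        ("hit", "en", "agt"),
        ("by", "NULL", "NULL"),
        ("Jack", "NULL", "agt"),
    ]
    print(factored_sentence(analyses))
```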
2009
90
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 809–816, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Dependency Based Chinese Sentence Realization Wei He1, Haifeng Wang2, Yuqing Guo2, Ting Liu1 1Information Retrieval Lab, Harbin Institute of Technology, Harbin, China {whe,tliu}@ir.hit.edu.cn 2Toshiba (China) Research and Development Center, Beijing, China {wanghaifeng,guoyuqing}@rdc.toshiba.com.cn Abstract This paper describes log-linear models for a general-purpose sentence realizer based on dependency structures. Unlike traditional realizers using grammar rules, our method realizes sentences by linearizing dependency relations directly in two steps. First, the relative order between head and each dependent is determined by their dependency relation. Then the best linearizations compatible with the relative order are selected by log-linear models. The log-linear models incorporate three types of feature functions, including dependency relations, surface words and headwords. Our approach to sentence realization provides simplicity, efficiency and competitive accuracy. Trained on 8,975 dependency structures of a Chinese Dependency Treebank, the realizer achieves a BLEU score of 0.8874. 1 Introduction Sentence realization can be described as the process of converting the semantic and syntactic representation of a sentence or series of sentences into meaningful, grammatically correct and fluent text of a particular language. Most previous general-purpose realization systems are developed via the application of a set of grammar rules based on particular linguistic theories, e.g. Lexical Functional Grammar (LFG), Head Driven Phrase Structure Grammar (HPSG), Combinatory Categorical Grammar (CCG), Tree Adjoining Grammar (TAG) etc. The grammar rules are either developed by hand, such as those used in LinGo (Carroll et al., 1999), OpenCCG (White, 2004) and XLE (Crouch et al., 2007), or extracted automatically from annotated corpora, like the HPSG (Nakanishi et al., 2005), LFG (Cahill and van Genabith, 2006; Hogan et al., 2007) and CCG (White et al., 2007) resources derived from the Penn-II Treebank. Over the last decade, there has been a lot of interest in a generate-and-select paradigm for surface realization. The paradigm is characterized by a separation between realization and selection, in which rule-based methods are used to generate a space of possible paraphrases, and statistical methods are used to select the most likely realization from the space. Usually, two statistical models are used to rank the output candidates. One is n-gram model over different units, such as word-level bigram/trigram models (Bangalore and Rambow, 2000; Langkilde, 2000), or factored language models integrated with syntactic tags (White et al. 2007). The other is log-linear model with different syntactic and semantic features (Velldal and Oepen, 2005; Nakanishi et al., 2005; Cahill et al., 2007). However, little work has been done on probabilistic models learning direct mapping from input to surface strings, without the effort to construct a grammar. Guo et al. (2008) develop a general-purpose realizer couched in the framework of Lexical Functional Grammar based on simple n-gram models. Wan et al. (2009) present a dependency-spanning tree algorithm for word ordering, which first builds dependency trees to decide linear precedence between heads and modifiers then uses an n-gram language model to order siblings. 
Compared with n-gram model, log-linear model is more powerful in that it is easy to integrate a variety of features, and to tune feature weights to maximize the probability. A few papers have presented maximum entropy models for word or phrase ordering (Ratnaparkhi, 2000; Filippova and Strube, 2007). However, those attempts have been limited to specialized applications, such as air travel reservation or ordering constituents of a main clause in German. This paper presents a general-purpose realizer based on log-linear models for directly linearizing dependency relations given dependency structures. We reduce the generation space by 809 two techniques: the first is dividing the entire dependency tree into one-depth sub-trees and solving linearization in sub-trees; the second is the determination of relative positions between dependents and heads according to dependency relations. Then the best linearization for each sub-tree is selected by the log-linear model that incorporates three types of feature functions, including dependency relations, surface words and headwords. The evaluation shows that our realizer achieves competitive generation accuracy. The paper is structured as follows. In Section 2, we describe the idea of dividing the realization procedure for an entire dependency tree into a series of sub-procedures for sub-trees. We describe how to determine the relative positions between dependents and heads according to dependency relations in Section 3. Section 4 gives details of the log-linear model and the feature functions used for sentence realization. Section 5 explains the experiments and provides the results. 2 Sentence Realization from Dependency Structure 2.1 The Dependency Input The input to our sentence realizer is a dependency structure as represented in the HIT Chinese Dependency Treebank (HIT-CDT)1. In our dependency tree representations, dependency relations are represented as arcs pointing from a head to a dependent. The types of dependency arcs indicate the semantic or grammatical relationships between the heads and the dependents, which are recorded in the dependent nodes. Figure 1 gives an example of dependency tree representation for the sentence: (1) 这 是 武汉 航空 this is Wuhan Airline 首次 购买 波音 客机 first time buy Boeing airliner ‘This is the first time for Airline Wuhan to buy Boeing airliners.’ In a dependency structure, dependents are unordered, i.e. the string position of each node is not recorded in the representation. Our sentence realizer takes such an unordered dependency tree as input, determines the linear order of the words 1 HIT-CDT (http://ir.hit.edu.cn) includes 10,000 sentences and 215,334 words, which are manually annotated with part-of-speech tags and dependency labels. (Liu et al., 2006a) as encoded in the nodes of the dependency structure and produces a grammatical sentence. As the dependency structures input to our realizer have been lexicalized, lexical selection is not involved during the surface realization. 2.2 Divide and Conquer Strategy for Linearization For determining the linear order of words represented by nodes of the given dependency structure, in principle, the sentence realizer has to produce all possible sequences of the nodes from the input tree and selects the most likely linearization among them. If the dependency tree consists of a considerable number of nodes, this procedure would be very time-consuming. 
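To make the unordered input of Section 2.1 concrete before turning to how the search space is reduced, here is a minimal sketch of a dependency node as the realizer might receive it. The class and attribute names are assumptions for illustration, not the actual HIT-CDT or system data structures.

```python
# A minimal sketch (assumed names, not the HIT-CDT format) of the unordered
# dependency input: each node stores its word, the label of its relation to
# the head, and an *unordered* collection of dependents.

class DepNode:
    def __init__(self, word, relation, dependents=None):
        self.word = word            # surface word form, e.g. "购买" (buy)
        self.relation = relation    # relation to the head, e.g. "VOB", "ATT"
        self.dependents = list(dependents or [])  # order carries no meaning

# The sub-tree rooted at "购买" (buy) from Figure 1, with an adverbial and an
# object dependent; string positions are deliberately absent and must be
# decided by the realizer.
buy = DepNode("购买", "VOB", [
    DepNode("首次", "ADV"),                              # 'first time'
    DepNode("客机", "VOB", [DepNode("波音", "ATT")]),     # 'Boeing airliner'
])
```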
To reduce the number of possible realizations, our generation algorithm adopts a divide-andconquer strategy, which divides the whole tree into a set of sub-trees of depth one and recursively linearizes the sub-trees in a bottom-up fashion. As illustrated in Figure 2, sub-trees c and d, which are at the bottom of the tree, are linearized first, then sub-tree b is processed, and finally sub-tree a. The procedure imposes a projective constraint on the dependency structures, viz. each head dominates a continuous substring of the sentence realization. This assumption is feasible in the application of the dependency-based generation, because: (i) it has long been observed that the dependency structures of a vast majority of sentences in the languages of the world are projective (Igor, 1988) and (ii) non-projective dependencies in Chinese, for the most part, are used to account for non-local dependency phenomena. Figure 1: The dependency tree for the sentence “这是武汉航空首次购买波音客机” ①是(HED) is ②这(SBV) this ③购买(VOB) buy ④首次(ADV) first time ⑤客机(VOB) airliner ⑥航空(SBV) airline ⑧武汉(ATT) Wuhan ⑦波音(ATT) Boeing 810 Though non-local dependencies are important for accurate semantic analysis, they can be easily converted to local dependencies conforming to the projective constraint. In fact, we find that the 10, 000 manually-build dependency trees of the HIT-CDT do not contain any non-projective dependencies. 3 Relative Position Determination In dependency structures, the semantic or grammatical roles of the nodes are indicated by types of dependency relations. For example, the VOB dependency relation, which stands for the verbobject structure, means that the head is a verb and the dependent is an object of the verb; the ATT relation, means that the dependent is an attribute of the head. In languages with fairly rigid word order, the relative position between the head and dependent of a certain relation is in a fixed order. For example in Chinese, the object almost always occurs behind its dominating verb; the attribute modifier always occurs in front of its head word. Therefore, we can draw a conclusion that the relative positions between head and dependent of VOB and ATT can be determined by the types of dependency relations. We make a statistic on the relative positions between head and dependent for each dependency relation type. Following (Covington, 2001), we call a dependent that precedes its head predependent, a dependent that follows its head postdependent. The corpus used to gather appropriate statistics is HIT-CDT. Table 1 gives the numbers ①是(HED) is ②这(SBV) this 这 是 武汉航空首次购买波音客机 ③ ③购买(VOB) buy ④首次(ADV) first time ⑤ ⑥ 武汉航空 首次 购买 波音客机 ⑤客机(VOB) airliner ⑦波音(ATT) Boeing 波音 客机 ⑥航空(SBV) Airline ⑧武汉(ATT) Wuhan 武汉 航空 sub-tree a sub-tree b sub-tree c sub-tree d Figure 2: Illustration of the linearization procedure Relation Description Postdep. Predep. ADV adverbial 1 25977 APP appositive 807 0 ATT attribute 0 47040 CMP complement 2931 3 CNJ conjunctive 0 2124 COO coordinate 6818 0 DC dep. clause 197 0 DE DE phrase 0 10973 DEI DEI phrase 131 3 DI DI phrase 0 400 IC indep.clause 3230 0 IS indep.structure 125 794 LAD left adjunct 0 2644 MT mood-tense 3203 0 POB prep-obj 7513 0 QUN quantity 0 6092 RAD right adjunct 1332 1 SBV subject-verb 6 16016 SIM similarity 0 44 VOB verb-object 23487 21 VV verb-verb 6570 2 Table 1: Numbers of pre/post-dependents for each dependency relation 811 of predependent/postdependent for each type of dependency relations and its descriptions. 
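The counts in Table 1 can be gathered with a single pass over an ordered treebank. The sketch below assumes a hypothetical reader that yields (head position, dependent position, relation) triples; that interface is an assumption, not the actual HIT-CDT API.

```python
from collections import defaultdict

# A sketch of how pre-/postdependent counts per relation (as in Table 1) could
# be collected. `iter_dependencies` is a hypothetical reader yielding
# (head_index, dependent_index, relation) triples from an ordered treebank.

def count_dependent_positions(iter_dependencies):
    counts = defaultdict(lambda: {"pre": 0, "post": 0})
    for head_idx, dep_idx, relation in iter_dependencies:
        if dep_idx < head_idx:
            counts[relation]["pre"] += 1    # dependent precedes its head
        else:
            counts[relation]["post"] += 1   # dependent follows its head
    return counts

# A relation is then classified by its dominant side, e.g. ATT -> predependent,
# VOB -> postdependent, which is how the relative position determination (RPD)
# method fixes head-dependent order.
```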
Table 1 shows that 100% dependents of ATT relation are predependents and 23,487(99.9%) against 21(0.1%) VOB dependents are postdependents. Almost all the dependency relations have a dominant dependent type—predependent or postdependent. Although some dependency relations have exceptional cases (e.g. VOB), the number is so small that it can be ignored. The only exception is the IS relation, which has 794(86.4%) predependents and 125(13.6%) postdependents. The IS label is an abbreviation for independent structure. This type of dependency relation is usually used to represent interjections or comments set off by brackets, which usually has little grammatical connection with the head. Figure 3 gives an example of independent structure. This example is from a news report, and the phrase “新华社消息” (set apart by brackets in the original text) is a supplementary explanation for the source of the news. The connection between this phrase and the main clause is so weak that either it precedes or follows the head verb is acceptable in grammar. However, this kind of news-source-explanation is customary to place at the beginning of a sentence in Chinese. This can probably explain the majority of the IS-tagged dependents are predependents. If we simply treat all the IS dependents as predependents, we can assume that every dependency relation has only one type of dependent, either predependent or postdependent. Therefore, the relative position between head and dependent can be determined just by the types of dependency relations. In the light of this assumption, all dependents in a sub-tree can be classified into two groups— predependents and postdependents. The predependents must precede the head, and the postdependents must follow the head. This classification not only reduces the number of possible sequences, but also solves the linearization of a sub-tree if the sub-tree contains only one dependent, or two dependents of different types, viz. one predependent and one postdependent. In subtree c of Figure 2, the dependency relation between the only dependent and the head is ATT, which indicates that the dependent is a predependent. Therefore, node 7 is bound to precede node 5, and the only linearization result is “武汉 航空”. In sub-tree a of the same figure, the classification for SBV is predependent, and for VOB is postdependent, so the only linearization is <node 2, node 1, node 3>. In HIT-CDT, there are 108,086 sub-trees in the 10,000 sentences, 65% sub-trees have only one dependent, and 7% sub-trees have two dependents of different types (one predependent and one postdependent). This means that the relative position classification can deterministically linearize 72% sub-trees, and only the rest 28% sub-trees with more than one predependent or postdependent need to be further determined. 4 Log-linear Models We use log-linear models for selecting the sequence with the highest probability from all the possible linearizations of a sub-tree. 4.1 The Log-linear Model Log-linear models employ a set of feature functions to describe properties of the data, and a set of learned weights to determine the contribution of each feature. In this framework, we have a set of M feature functions M m t r hm ,..., 1 ), , ( = . For each feature function, there exists a model parameter M m t r m ,..., 1 ), , ( = λ that is fitted to optimize the likelihood of the training data. 
A conditional log-linear model for the probability of a realization r given the dependency tree t, has the general parametric form )] , ( exp[ ) ( 1 ) | ( 1 t r h t Z t r p m M m m ∑ = = λ λ λ (1) where ) (t Z λ is a normalization factor defined as ∑ ∑ ∈ = = ) ( ' 1 )] ,' ( exp[ ) ( t Y r m M m m t r h t Z λ λ (2) And Y(t) gives the set of all possible realizations of the dependency tree t. 4.2 Feature Functions We use three types of feature functions for capturing relations among nodes on the dependency tree. In order to better illustrate the feature functions used in the log-linear model, we redraw sub-tree b of Figure 2 in Figure 4. Here we assume the linearizations of sub-tree c and d have Figure 3: Example of independent structure ①严重(HED) serious ②新华社消息(IS) Xinhua news ③南方雪灾(SBV) southern snowstorm 812 been finished, and the strings of linearizing results are recorded in nodes 5 and 6. The sub-tree in Figure 4 has two predependents (SBV and ADV) and one postdependent (VOB). As a result of this classification, the only two possible linearizations of the sub-tree are <node 4, node 6, node 3, node 5> and <node 6, node 4, node 3, node 5>. Then the log-linear model that incorporates three types of feature functions is used to make further selection. Dependency Relation Model: For a particular sub-tree structure, the task of generating a string covered by the nodes on the sub-tree is equivalent to linearizing all the dependency relations in that sub-tree. We linearize the dependency relations by computing n-gram models, similar to traditional word-based language models, except using the names of dependency relations instead of words. For the two linearizations of Figure 4, the corresponding dependency relation sequences are “ADV SBV VOB VOB” and “SBV ADV VOB VOB”. The dependency relation model calculates the probability of dependency relation n-gram P(DR) according to Eq.(3). The probability score is integrated into the log-linear model as a feature. ) ... ( ) ( 1 1 m m DR DR P DR P = (3) ) | ( 1 1 1 − + − =∏ = k n k m k k DR DR P Word Model: We integrate an n-gram word model into the log-linear model for capturing the relation between adjacent words. For a string of words generated from a possible sequence of sub-tree nodes, the word models calculate wordbased n-gram probabilities of the string. For example, in Figure 4, the strings generated by the two possible sequences are “武汉航空 首次 购 买 波音客机” and “首次 武汉航空 购买 波音客 机”. The word model takes these two strings as input, and calculates the n-gram probabilities. Headword Model: 2 In dependency representations, heads usually play more important roles than dependents. The headword model calculates the n-gram probabilities of headwords, without regard to the words occurring at dependent nodes, in that dependent words are usually less important than headwords. In Figure 4, the two possible sequences of headwords are “航空 首次 购 买 客机” and “首次 航空 购买 客机”. The headword strings are usually more generic than the strings including all words, and thus the headword model is more likely to relax the data sparseness. Table 2 gives some examples of all the features used in the log-linear model. The examples listed in the table are features of the linearization <node 6, node 4, node 3, node 5>, extracted from the sub-tree in Figure 4. In this paper, all the feature functions used in the log-linear model are n-gram probabilities. However, the log-linear framework has great potential for including other types of features. 
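In conventional notation, Equations (1)–(3) above read p_λ(r|t) = exp(Σ_m λ_m h_m(r,t)) / Z_λ(t), with Z_λ(t) = Σ_{r'∈Y(t)} exp(Σ_m λ_m h_m(r',t)), and P(DR_1 … DR_m) = Π_k P(DR_k | DR_{k−n+1} … DR_{k−1}). The sketch below computes this conditional distribution over a candidate set; the feature functions are passed in as callables standing in for the paper's n-gram models over dependency-relation, word and headword sequences, so the function names and inputs are assumptions.

```python
import math

# A sketch (not the authors' implementation) of the conditional log-linear
# model in Eqs. (1)-(2): p(r|t) = exp(sum_m lambda_m * h_m(r, t)) / Z(t),
# where Z(t) sums the same exponential over every candidate realization Y(t).

def loglinear_distribution(candidates, feature_functions, weights, tree):
    """candidates: list of candidate realizations Y(t);
    feature_functions: list of callables h_m(r, t);
    weights: list of lambda_m, one weight per feature function."""
    scores = []
    for r in candidates:
        score = sum(lam * h(r, tree) for lam, h in zip(weights, feature_functions))
        scores.append(score)
    # Normalize with the partition function Z(t), using log-sum-exp for stability.
    m = max(scores)
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return [math.exp(s - log_z) for s in scores]

# In the paper the h_m are n-gram (log-)probabilities over three views of a
# candidate: its dependency-relation sequence (Eq. 3), its word string, and
# its headword string; those models would be plugged in as feature_functions.
```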
4.3 Parameter Estimation BLEU score, a method originally proposed to automatically evaluate machine translation quality (Papineni et al., 2002), has been widely used as a metric to evaluate general-purpose sentence generation (Langkilde, 2002; White et al., 2007; Guo et al. 2008, Wan et al. 2009). The BLEU measure computes the geometric mean of the precision of n-grams of various lengths between a sentence realization and a (set of) reference(s). To estimate the parameters ) ,..., ( 1 M λ λ for the feature functions ) ,..., ( 1 M h h , we use BLEU3 as optimization objective function and adopt the approach of minimum error rate training 2 Here the term “headword” is used to describe the word that occurs at head nodes in dependency trees. 3 The BLEU scoring script is supplied by NIST Open Machine Translation Evaluation at ftp://jaguar.ncsl.nist.gov/mt/resources/mteval-v11b.pl Feature function Examples of features Dependency Relation “SBV ADV VOB” “ADV VOB VOB” Word Model “武汉航空首次” “航空首次购买” “首次购买波音”“购买波音客机” Headword Model “航空首次” “首次购买” “购买客机” Table 2: Examples of feature functions ③购买(VOB) buy ④首次(ADV) first time ⑤客机(VOB) airliner “波音客机” airliners of Boeing ⑥航空(SBV) Airline “武汉航空” Airline Wuhan Figure 4: Sub-tree with multiple predependents 813 (MERT), which is popular in statistical machine translation (Och, 2003). 4.4 The Realization Algorithm The realization algorithm is a recursive procedure that starts from the root node of the dependency tree, and traverses the tree by depth-first search. The pseudo code of the realization algorithm is shown in Figure 5. 5 Experiments 5.1 Experimental Design Our experiments are carried out on HIT-CDT. We randomly select 526 sentences as the test set, and 499 sentences as the development set for optimizing the model parameters. The rest 8,975 sentences of the HIT-CDT are used for training of the dependency relation model. For training of word models, we use the Xinhua News part (6,879,644 words) of Chinese Gigaword Second Edition (LDC2005T14), segmented by the Language Technology Platform (LTP) 4 . And for training the headword model, we use both the HIT-CDT and the HIT Chinese Skeletal Dependency Treebank (HIT-CSDT). HIT-CSDT is a 4 http://ir.hit.edu.cn/demo/ltp component of LTP and contains 49,991 sentences in dependency structure representation (without dependency relation labels). As the input dependency representation does not contain punctuation information, we simply remove all punctuation marks in the test and development sets. 5.2 Evaluation Metrics In addition to BLEU score, percentage of exactly matched sentences and average NIST simple string accuracy (SSA) are adopted as evaluation metrics. The exact match measure is percentage of the generated string that exactly matches the corresponding reference sentence. The average NIST simple string accuracy score reflects the average number of insertion (I), deletion (D), and substitution (S) errors between the output sentence and the reference sentence. Formally, SSA = 1 – (I + D + S) / R, where R is the number of tokens in the reference sentence. 5.3 Experimental Results All the evaluation results are shown in Table 3. The first experiment, which is a baseline experiment, ignores the tree structure and randomly chooses position for every word. From the second experiment, we begin to utilize the tree structure and apply the realization algorithm described in Section 4.4. 
In the second experiment, predependents are distinguished from postdependents by the relative position determination method (RPD), then the orders inside predependents and postdependents are chosen randomly. From the third experiments, the log-linear models are used for scoring the generated sequences, with the aid of three types of feature functions as described in Section 4.2. First, the feature functions of trigram dependency relation model (DR), bigram word model (Bi-WM), trigram word model (Tri-WM) (with Katz backoff) and trigram headword model (HW) are used separately in experiments 3-6. Then we combine the feature 1:procedure SEARCH 2:input: sub-tree T {head:H dep.:D1…Dn} 3: if n = 0 then return 4: for i := 1 to n 5: SEARCH(Di) 6: Apre := {} 7: Apost := {} 8: for i := 1 to n 9: if PRE-DEP(Di) then Apre:=Apre∪{Di} 10: if POST-DEP(Di) then Apost:=Apost∪{Di} 11: for all permutations p1 of Apre 12: for all permutations p2 of Apost 13: sequence s := JOIN(p1,H,p2) 14: score r := LOG-LINEAR(s) 15: if best-score(r) then RECORD(r,s) Figure 5: The algorithm for linearizations of subtrees Model BLEU ExMatch SSA 1 Random 0.1478 0.0038 0.2044 2 RPD + Random 0.5943 0.1274 0.6369 3 RPD + DR 0.7204 0.2167 0.7683 4 RPD + Bi-WM 0.8289 0.4125 0.8270 5 RPD + Tri-WM 0.8508 0.4715 0.8415 6 RPD + HW 0.7592 0.2909 0.7638 7 RPD + DR + Bi-WM 0.8615 0.4810 0.8723 8 RPD + DR + Tri-WM 0.8772 0.5247 0.8817 9 RPD + DR + Tri-WM + HW 0.8874 0.5475 0.8920 Table 3: BLEU, ExMatch and SSA scores on the test set 814 functions incrementally based on the RPD and DR model. The relative position determination plays an important role in the realization algorithm. We observe that the BLEU score is boosted from 0.1478 to 0.5943 by using the RPD method. This can be explained by the reason that the linearizations of 72% sub-trees can be definitely determined by the RPD method. All of the four feature functions we have tested achieve considerable improvement in BLEU scores. The dependency relation model achieves 0.7204, the bigram word model 0.8289, the trigram word model 0.8508 and the headword model achieves 0.7592. While the combined models perform better than any of their individual component models. On the foundation of relative position determination method, the combination of dependency relation and bigram word model achieves a BLEU score of 0.8615, and the combination of dependency relation and trigram word model achieves a BLEU score of 0.8772. Finally the combination of dependency relation model, trigram word model and headword model achieves the best result 0.8874. 5.4 Discussion We first inspected the errors made by the relative position determination method. In the treebanktree test set, there are 7 predependents classified as postdependents and 3 postdependents classified as predependents by error. Among the 9,384 dependents, the error rate of the relative position determination method is very small (0.1%). Then we make a classification on the errors in the experiment of dependency relation model (with relative position determination method). Table 4 shows the distribution of the errors. The first type of errors is caused by duplicate dependency relations, i.e. a head with two or more dependents that have the same dependency relations. In this situation, only using the dependency relation model cannot generate the right linearization. However, word models, which utilize the word information, can make distinctions between the dependencies. 
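For readability, the Figure 5 SEARCH procedure (shown above in its extracted pseudocode form) can be rendered in Python roughly as follows. The recursion, the pre-/postdependent split and the permutation search follow the pseudocode; the node interface (word, dependents, string) and the is_predependent and score callbacks are assumed stand-ins for the RPD classification and the log-linear model of Section 4.

```python
from itertools import permutations

# The Figure 5 SEARCH procedure rendered as Python. The recursion over
# sub-trees, the split into pre-/postdependents, and the permutation search
# follow the pseudocode; `is_predependent` and `score` are assumed interfaces.

def search(node, is_predependent, score):
    """Linearize node's sub-tree bottom-up; stores the result on node.string."""
    if not node.dependents:                      # n = 0: a leaf keeps its word
        node.string = node.word
        return
    for dep in node.dependents:                  # recurse into each sub-tree first
        search(dep, is_predependent, score)
    pre = [d for d in node.dependents if is_predependent(d)]
    post = [d for d in node.dependents if not is_predependent(d)]
    best, best_score = None, float("-inf")
    for p1 in permutations(pre):                 # order among predependents
        for p2 in permutations(post):            # order among postdependents
            seq = [d.string for d in p1] + [node.word] + [d.string for d in p2]
            s = score(seq)                       # log-linear model of Section 4
            if s > best_score:
                best, best_score = seq, s
    node.string = " ".join(best)
```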
The reason for the errors of SBV-ADV and ATT-QUN is probably because the order of these pairs of grammar roles is somewhat flexible. For example, the strings of “今天(ADV)/today 我(SBV)/I” and “我(SBV)/I 今天(ADV)/today” are both very common and acceptable in Chinese. The word models tend to combine the nodes that have strong correlation together. For example in Figure 6, node 2 is more likely to precede node 3 because the words “保护/protect” and “未来/future” have strong correlation, but the correct order is <node 3, node 2>. Headword model only consider the words occur at head nodes, which is helpful in the situation like Figure 6. In our experiments, the headword model gets a relatively low performance by itself, however, the addition of headword model to the combination of the other two feature functions improves the result from 0.8772 to 0.8874. This indicates that the headword model is complementary to the other feature functions. 6 Conclusions We have presented a general-purpose realizer based on log-linear models, which directly maps dependency relations into surface strings. The linearization of a whole dependency tree is divided into a series of sub-procedures on sub-trees. The dependents in the sub-trees are classified into two groups, predependents or postdependents, according to their dependency relations. The evaluation shows that this relative position determination method achieves a considerable result. The log-linear model, which incorporates three types of feature functions, including dependency relation, surface words and headwords, successfully captures factors in sentence realization and demonstrates competitive performance. References Srinivas Bangalore and Owen Rambow. 2000. Exploiting a Probabilistic Hierarchical Model for Generation. In Proceedings of the 18th International Conference on Computational Linguistics, pages 42-48. Saarbrücken, Germany. Error types Proportion 1 Duplicate dependency relations 60.0% 2 SBV-ADV 20.3% 3 ATT-QUN 6.3% 4 Other 13.4% Table 4: Error types in the RPD+DR experiment Figure 6: Sub-tree for “未来的鸟类保护工作” ①工作 work ②保护(ATT) protect “鸟类 保护” “birds protecting” ③的(SBV) of “未来 的” future 815 Aoife Cahill and Josef van Genabith. 2006. Robust PCFG-Based Generation Using Automatically Acquired LFG Approximations. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 10331040. Sydney, Australia. Aoife Cahill, Martin Forst and Christian Rohrer. 2007. Stochastic Realisation Ranking for a Free Word Order language. In Proceedings of 11th European Workshop on Natural Language Generation, pages 17-24. Schloss Dagstuhl, Germany. John Carroll, Ann Copestake, Dan Flickinger, and Victor Poznanski. 1999. An Efficient Chart Generator for (Semi-)Lexicalist Grammars. In Proceedings of the 7th European Workshop on Natural Language Generation, pages 86-95, Toulouse. Michael A. Covington. 2001. A Fundamental Algorithm for Dependency Parsing. In Proceedings of the 39th Annual ACM Southeast Conference, pages 95–102. Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2007. XLE documentation. Palo Alto Research Center, CA. Katja Filippova and Michael Strube. 2007. Generating Constituent Order in German Clauses. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 320-327. Prague, Czech Republic. Yuqing Guo, Haifeng Wang and Josef van Genabith. 2008. 
Dependency-Based N-Gram Models for General Purpose Sentence Realisation. In Proceedings of the 22th International Conference on Computational Linguistics, pages 297-304. Manchester, UK. Deirdre Hogan, Conor Cafferkey, Aoife Cahill and Josef van Genabith. 2007. Exploiting Multi-Word Units in History-Based Probabilistic Generation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and CoNLL, pages 267-276. Prague, Czech Republic. Mel'čuk Igor. 1988. Dependency syntax: Theory and practice. In Suny Series in Linguistics. State University of New York Press, New York, USA. Irene Langkilde. 2000. Forest-Based Statistical Sentence Generation. In Proceedings of 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 170-177. Seattle, WA. Irene Langkilde. 2002. An Empirical Verification of Coverage and Correctness for a General-Purpose Sentence Generator. In Proceedings of the Second International Conference on Natural Language Generation, pages 17-24. New York, USA. Ting Liu, Jinshan Ma, and Sheng Li. 2006a. Building a Dependency Treebank for Improving Chinese Parser. Journal of Chinese Language and Computing, 16(4): 207-224. Ting Liu, Jinshan Ma, Huijia Zhu, and Sheng Li. 2006b. Dependency Parsing Based on Dynamic Local Optimization. In Proceedings of CoNLL-X, pages 211-215, New York, USA. Hiroko Nakanishi, Yusuke Miyao and Jun’ichi Tsujii. 2005. Probabilistic Models for Disambiguation of an HPSG-Based Chart Generator. In Proceedings of the 9th International Workshop on Parsing Technology, pages 93-102. Vancouver, British Columbia. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sapporo, Japan. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311318. Philadelphia, PA. Adwait Ratnaparkhi. 2000. Trainable Methods for Natural Language Generation. In Proceedings of North American Chapter of the Association for Computational Linguistics, pages 194-201. Seattle, WA. Erik Velldal and Stephan Oepen. 2005. Maximum Entropy Models for Realization Ranking. In Proceedings of the 10th Machine Translation Summit, pages 109-116. Phuket, Thailand, Stephen Wan, Mark Dras, Robert Dale, Cécile Paris. 2009. Improving Grammaticality in Statistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 852860. Athens, Greece. Michael White. 2004. Reining in CCG Chart Realization. In Proceedings of the third International Natural Language Generation Conference, pages 182191. Hampshire, UK. Michael White, Rajakrishnan Rajkumar and Scott Martin. 2007. Towards Broad Coverage Surface Realization with CCG. In Proceedings of the Machine Translation Summit XI Workshop, pages 2230. Copenhagen, Danmark. 816
2009
91
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 817–825, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Incorporating Information Status into Generation Ranking Aoife Cahill and Arndt Riester Institut f¨ur Maschinelle Sprachverarbeitung (IMS) University of Stuttgart 70174 Stuttgart, Germany {aoife.cahill,arndt.riester}@ims.uni-stuttgart.de Abstract We investigate the influence of information status (IS) on constituent order in German, and integrate our findings into a loglinear surface realisation ranking model. We show that the distribution of pairs of IS categories is strongly asymmetric. Moreover, each category is correlated with morphosyntactic features, which can be automatically detected. We build a loglinear model that incorporates these asymmetries for ranking German string realisations from input LFG F-structures. We show that it achieves a statistically significantly higher BLEU score than the baseline system without these features. 1 Introduction There are many factors that influence word order, e.g. humanness, definiteness, linear order of grammatical functions, givenness, focus, constituent weight. In some cases, it can be relatively straightforward to automatically detect these features (i.e. in the case of definiteness, this is a syntactic property). The more complex the feature, the more difficult it is to automatically detect. It is common knowledge that information status1 (henceforth, IS) has a strong influence on syntax and word order; for instance, in inversions, where the subject follows some preposed element, Birner (1994) reports that the preposed element must not be newer in the discourse than the subject. We would like to be able to use information related to IS in the automatic generation of German text. Ideally, we would automatically annotate text with IS labels and learn from this data. Unfortunately, however, to date, there has been little success in automatically annotating text with IS. 1We take information status to be a subarea of information structure; the one dealing with varieties of givenness but not with contrast and focus in the strictest sense. We believe, however, that despite this shortcoming, we can still take advantage of some of the insights gained from looking at the influence of IS on word order. Specifically, we look at the problem from a more general perspective by computing an asymmetry ratio for each pair of IS categories. Results show that there are a large number of pairs exhibiting clear ordering preferences when co-occurring in the same clause. The question then becomes, without being able to automatically detect these IS category pairs, can we, nevertheless, take advantage of these strong asymmetric patterns in generation. We investigate the (automatically detectable) morphosyntactic characteristics of each asymmetric IS pair and integrate these syntactic asymmetric properties into the generation process. The paper is structured as follows: Section 2 outlines the underlying realisation ranking system for our experiments. Section 3 introduces information status and Section 4 describes how we extract and measure asymmetries in information status. In Section 5, we examine the syntactic characteristics of the IS asymmetries. Section 6 outlines realisation ranking experiments to test the integration of IS into the system. We discuss our findings in Section 7 and finally we conclude in Section 8. 2 Generation Ranking The task we are considering is generation ranking. 
In generation (or more specifically, surface realisation) ranking, we take an abstract representation of a sentence (for example, as produced by a machine translation or automatic summarisation system), produce a number of alternative string realisations corresponding to that input and use some model to choose the most likely string. We take the model outlined in Cahill et al. (2007), a log-linear model based on the Lexical Functional Grammar (LFG) Framework (Kaplan and Bresnan, 1982). LFG has two main levels of represen817 CS 1: ROOT:1458 CProot[std]:1451 DP[std]:906 DPx[std]:903 D[std]:593 die:34 NP:738 N[comm]:693 Behörden:85 Cbar:1448 Cbar-flat:1436 V[v,fin]:976 Vx[v,fin]:973 warnten:117 PP[std]:2081 PPx[std]:2072 P[pre]:1013 vor:154 DP[std]:1894 DPx[std]:1956 NP:1952 AP[std,+infl]:1946 APx[std,+infl]:1928 A[+infl]:1039 möglichen:185 N[comm]:1252 Nachbeben:263 PERIOD:397 .:389 "Die Behörden warnten vor möglichen Nachbeben." 'warnen<[34:Behörde], [263:Nachbeben]>' PRED 'Behörde' PRED 'die' PRED DET SPEC CASE nom, NUM pl, PERS 3 34 SUBJ 'vor<[263:Nachbeben]>' PRED 'Nachbeben' PRED 'möglich<[263:Nachbeben]>' PRED [263:Nachbeben] SUBJ attributive ATYPE 185 ADJUNCT CASE dat, NUM pl, PERS 3 263 OBJ 154 OBL MOOD indicative, TENSE past TNS-ASP [34:Behörde] TOPIC 117 Figure 1: An example C(onstituent) and F(unctional) Structure pair for (1) tation, C(onstituent)-Structure and F(unctional)Structure. C-Structure is a context-free tree representation that captures characteristics of the surface string while F-Structure is an abstract representation of the basic predicate-argument structure of the string. An example C- and F-Structure pair for the sentence in (1) is given in Figure 1. (1) Die the Beh¨orden authorities warnten warned vor of m¨oglichen possible Nachbeben. aftershocks ‘The authorities warned of possible aftershocks.’ The input to the generation system is an FStructure. A hand-crafted, bi-directional LFG of German (Rohrer and Forst, 2006) is used to generate all possible strings (licensed by the grammar) for this input. As the grammar is hand-crafted, it is designed only to parse (and therefore) generate grammatical strings.2 The task of the realisation ranking system is then to choose the most likely string. Cahill et al. (2007) describe a loglinear model that uses linguistically motivated features and improves over a simple tri-gram language model baseline. We take this log-linear model as our starting point.3 2There are some rare instances of the grammar parsing and therefore also generating ungrammatical output. 3Forst (2007) presents a model for parse disambiguation that incorporates features such as humanness, definiteness, linear order of grammatical functions, constituent weight. Many of these features are already present in the Cahill et al. (2007) model. An error analysis of the output of that system revealed that sometimes “unnatural” outputs were being selected as most probable, and that often information structural effects were the cause of subtle differences in possible alternatives. For instance, Example (3) appeared in the original TIGER corpus with the 2 preceding sentences (2). (2) Denn ausdr¨ucklich ist darin der rechtliche Maßstab der Vorinstanz, des S¨achsischen Oberverwaltungsgerichtes, best¨atigt worden. Und der besagt: Die Beteiligung am politischen Strafrecht der DDR, der Mangel an kritischer Auseinandersetzung mit totalit¨aren ¨Uberzeugungen rechtfertigen den Ausschluss von der Dritten Gewalt. 
‘Because, the legal benchmark has explicitly been confirmed by the lower instance, the Saxonian Higher Administrative Court. And it indicates: the participation in the political criminal law of the GDR as well as deficits regarding the critical debate on totalitarian convictions justify an expulsion from the judiciary.’ (3) Man one hat has aus out of der the Vergangenheitsaufarbeitung coming to terms with the past gelernt. learnt ‘People have learnt from dealing with the past mistakes.’ The five alternatives output by the grammar are: a. Man hat aus der Vergangenheitsaufarbeitung gelernt. b. Aus der Vergangenheitsaufarbeitung hat man gelernt. c. Aus der Vergangenheitsaufarbeitung gelernt hat man. d. Gelernt hat man aus der Vergangenheitsaufarbeitung. e. Gelernt hat aus der Vergangenheitsaufarbeitung man. 818 The string chosen as most likely by the system of Cahill et al. (2007) is Alternative (b). No matter whether the context in (2) is available or the sentence is presented without any context, there seems to be a preference by native speakers for the original string (a). Alternative (e) is extremely marked4 to the point of being ungrammatical. Alternative (c) is also very marked and so is Alternative (d), although less so than (c) and (e). Alternative (b) is a little more marked than the original string, but it is easier to imagine a preceding context where this sentence would be perfectly appropriate. Such a context would be, e.g. (4). (4) Vergangenheitsaufarbeitung und Abwiegeln sind zwei sehr unterschiedliche Arten, mit dem Geschehenen umzugehen. ‘Dealing with the mistakes or playing them down are two very different ways to handle the past.’ If we limit ourselves to single sentences, the task for the model is then to choose the string that is closest to the “default” expected word order (i.e. appropriate in the most number of contexts). In this work, we concentrate on integrating insights from work on information status into the realisation ranking process. 3 Information Status The concept of information status (Prince, 1981; Prince, 1992) involves classifying NP/PP/DP expressions in texts according to various ways of their being given or new. It replaces and specifies more clearly the often vaguely used term givenness. The process of labelling a corpus for IS can be seen as a means of discourse analysis. Different classification systems have been proposed in the literature; see Riester (2008a) for a comparison of several IS labelling schemes and Riester (2008b) for a new proposal based on criteria from presupposition theory. In the work described here, we use the scheme of Riester (2008b). His main theoretic assumption is that IS categories (for definites) should group expressions according to the contextual resources in which their presuppositions find an antecedent. For definites, the set of main category labels found in Table 1 is assumed. The idea of resolution contexts derives from the concept of a presupposition trigger (e.g. a definite description) as potentially establishing an 4By marked, we mean that there are relatively few or specialised contexts in which this sentence is acceptable. Context resource IS label discourse D-GIVEN context encyclopedic/ ACCESSIBLE-GENERAL knowledge context environment/ SITUATIVE situative context bridging BRIDGING context (scenario) accommodation ACCESSIBLE(no context) DESCRIPTION Table 1: IS classification for definites anaphoric relation (van der Sandt, 1992) to an entity being available by some means or other. 
But there are some expressions whose referent cannot be identified and needs to be accommodated, compare (5). (5) [die monatelange F¨uhrungskrise der Hamburger Sozialdemokraten]ACC-DESC ‘the leadership crisis lasting for months among the Hamburg Social Democrats’ Examples like this one have been mentioned early on in the literature (e.g. Hawkins (1978), Clark and Marshall (1981)). Nevertheless, labeling schemes so far have neglected this issue, which is explicitly incorporated in the system of Riester (2008b). The status of an expression is ACCESSIBLEGENERAL (or unused, following Prince (1981)) if it is not present in the previous discourse but refers to an entity that is known to the intended recipent. There is a further differentiation of the ACCESSIBLE-GENERAL class into generic (TYPE) and non-generic (TOKEN) items. An expression is D-GIVEN (or textually evoked) if and only if an antecedent is available in the discourse context. D-GIVEN entities are subdivided according to whether they are repetitions of their antecedent, short forms thereof, pronouns or whether they use new linguistic material to add information about an already existing discourse referent (label: EPITHET). Examples representing a co-reference chain are shown in (6). (6) [Angela Merkel]ACC-GEN (first mention) . . . [Angela Merkel]D-GIV-REPEATED (second mention) . . . [Merkel]D-GIV-SHORT . . . [she]D-GIV-PRONOUN . . . [herself]D-GIV-REFLEXIVE . . . [the Hamburg-born politician]D-GIV-EPITHET Indexicals (referring to entities in the environment context) are labeled as SITUATIVE. Definite 819 items that can be identified within a scenario context evoked by a non-coreferential item receive the label BRIDGING; compare Example (7). (7) In in Sri Lanka Sri Lanka haben have tamilische Tamil Rebellen rebels erstmals for the first time einen an Luftangriff airstrike [gegen against die the Streitkr¨afte]BRIDG armed forces geflogen. flown. ’In Sri Lanka, Tamil rebels have, for the first time, carried out an airstrike against the armed forces.’ In the indefinite domain, a simple classification along the lines of Table 2 is proposed. Type IS label unrelated to context NEW part-whole relation PARTITIVE to previous entity other (unspecified) INDEF-REL relation to context Table 2: IS classification for indefinites There are a few more subdivisions. Table 3, for instance, contains the labels BRIDGING-CONTAINED and PARTITIVE-CONTAINED, going back to Prince’s (1981:236) “containing inferrables”. The entire IS label inventory used in this study comprises 19 (sub)classes in total. 4 Asymmetries in IS In order to find out whether IS categories are unevenly distributed within German sentences we examine a corpus of German radio news bulletins that has been manually annotated for IS (496 annotated sentences in total) using the scheme of Riester (2008b).5 For each pair of IS labels X and Y we count how often they co-occur in the corpus within a single clause. In doing so, we distinguish the numbers for “X preceding Y ” (= A) and “Y preceding X” (= B). The larger group is referred to as the dominant order. Subsequently, we compute a ratio indicating the degree of asymmetry between the two orders. If, for instance, the dominant pattern occurs 20 times (A) and the reverse pattern only 5 times (B), the asymmetry ratio B/A is 0.25.6 5The corpus was labeled by two independent annotators and the results were compared by a third person who took the final decision in case of disagreement. 
An evaluation as regards inter-coder agreement is currently underway. 6Even if some of the sentences we are learning from are marked in terms of word order, the ratios allow us to still learn the predominant order, since the marked order should occur much less frequently and the ratio will remain low. Dominant order (≫: “before”) B/A Total D-GIV-PRO≫INDEF-REL 0 19 D-GIV-PRO≫D-GIV-CAT 0.1 11 D-GIV-REL≫NEW 0.11 31 D-GIV-PRO≫SIT 0.13 17 ACC-DESC≫INDEF-REL 0.14 24 ACC-DESC≫ACC-GEN-TY 0.19 19 D-GIV-EPI≫INDEF-REL 0.2 12 D-GIV-REP≫NEW 0.21 23 D-GIV-PRO≫ACC-GEN-TY 0.22 11 ACC-GEN-TO≫ACC-GEN-TY 0.24 42 D-GIV-PRO≫ACC-DESC 0.24 46 EXPL≫NEW 0.25 30 D-GIV-REL≫D-GIV-EPI 0.25 15 BRIDG-CONT≫PART-CONT 0.25 15 ACC-DESC≫EXPL 0.29 27 D-GIV-PRO≫D-GIV-REP 0.29 18 D-GIV-PRO≫NEW 0.29 88 D-GIV-REL≫ACC-DESC 0.3 26 SIT≫EXPL 0.31 17 D-GIV-PRO≫BRIDG-CONT 0.31 21 D-GIV-PRO≫D-GIV-SHORT 0.32 29 . . . . . . ACC-DESC≫ACC-GEN-TO 0.91 201 SIT≫BRIDG 0.92 23 EXPL≫ACC-DESC 1 12 Table 3: Asymmetric pairs of IS labels Table 3 gives the top asymmetry pairs down to a ratio of about 1:3 as well as, down at the bottom, the pairs that are most evenly distributed. This means that the top pairs exhibit strong ordering preferences and are, hence, unevenly distributed in German sentences. For instance, the ordering D-GIVEN-PRONOUN before INDEF-REL (top line), shown in Example (8), occurs 19 times in the examined corpus while there is no example in the corpus for the reverse order.7 (8) [Sie]D-GIV-PRO she w¨urde would auch also [bei at verringerter reduced Anzahl]INDEF-REL number jede every vern¨unftige sensible Verteidigungsplanung defence planning sprengen. blast ‘Even if the numbers were reduced it would blow every sensible defence planning out of proportion.’ 5 Syntactic IS Asymmetries It seems that IS could, in principle, be quite beneficial in the generation ranking task. The problem, of course, is that we do not possess any reliable system of automatically assigning IS labels to unknown text and manual annotations are costly and time-consuming. As a substitute, we identify a list 7Note that we are not claiming that the reverse pattern is ungrammatical or impossible, we just observe that it is extremely infrequent. 820 of morphosyntactic characteristics that the expressions can adopt and investigate how these are correlated to our inventory of IS categories. For some IS labels there is a direct link between the typical phrases that fall into that IS category, and the syntactic features that describe it. One such example is D-GIVEN-PRONOUN, which always corresponds to a pronoun, or EXPL which always corresponds to expletive items. Such syntactic markers can easily be identified in the LFG F-structures. On the other hand, there are many IS labels for which there is no clear cut syntactic class that describes its typical phrases. Examples include NEW, ACCESSIBLE-GENERAL or ACCESSIBLE-DESCRIPTION. In order to determine whether we can ascertain a set of syntactic features that are representative of a particular IS label, we design an inventory of syntactic features that are found in all types of IS phrases. The complete inventory is given in Table 5. It is a much easier task to identify these syntactic characteristics than to try and automatically detect IS labels directly, which would require a deep semantic understanding of the text. We automatically mark up the news corpus with these syntactic characteristics, giving us a corpus both annotated for IS and syntactic features. 
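The B/A ratios of Table 3 (Section 4) amount to a simple count over clause-internal label pairs. The sketch below assumes each clause is given as the list of IS labels of its phrases in surface order, a simplified stand-in for the actual annotation format.

```python
from collections import Counter
from itertools import combinations

# A sketch of the asymmetry computation behind Table 3. Input: each clause as
# the sequence of IS labels of its phrases in surface order (an assumed,
# simplified view of the annotated corpus).

def asymmetry_ratios(clauses):
    order_counts = Counter()
    for labels in clauses:
        for x, y in combinations(labels, 2):   # x occurs before y in this clause
            order_counts[(x, y)] += 1
    ratios = {}
    for (x, y), a in order_counts.items():
        b = order_counts.get((y, x), 0)
        if a >= b and a > 0:                   # (x, y) is the dominant order
            ratios[(x, y)] = (b / a, a + b)    # (B/A ratio, total co-occurrences)
    return ratios

# E.g. a ratio of 0.25 with total 25 corresponds to 20 occurrences of the
# dominant order and 5 of the reverse, as in the worked example above.
```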
We can now identify, for each IS label, what the most frequent syntactic characteristics of that label are. Some examples and their frequencies are given in Table 4. Syntactic feature Count D-GIVEN-PRONOUN PERS PRON 39 DA PRON 25 DEMON PRON 19 GENERIC PRON 11 NEW SIMPLE INDEF 113 INDEF ATTR 53 INDEF NUM 32 INDEF PPADJ 26 INDEF GEN 25 . .. Table 4: Syntactic characteristics of IS labels Combining the most frequent syntactic characteristics with the asymmetries presented in Table 3 gives us Table 6.8 8For reasons of space, we are only showing the very top of the table. 6 Generation Ranking Experiments Using the augmented set of IS asymmetries, we design new features to be included into the original model of Cahill et al. (2007). For each IS asymmetry, we extract all precedence patterns of the corresponding syntactic features. For example, from the first asymmetry in Table 6, we extract the following features: PERS PRON precedes INDEF ATTR PERS PRON precedes SIMPLE INDEF DA PRON precedes INDEF ATTR DA PRON precedes SIMPLE INDEF DEMON PRON precedes INDEF ATTR DEMON PRON precedes SIMPLE INDEF GENERIC PRON precedes INDEF ATTR GENERIC PRON precedes SIMPLE INDEF We extract these patterns for all of the asymmetric pairs in Table 3 (augmented with syntactic characteristics) that have a ratio >0.4. The patterns we extract need to be checked for inconsistencies because not all of them are valid. By inconsistencies, we mean patterns of the type X precedes X, Y precedes Y, and any pattern where the variant X precedes Y as well as Y precedes X is present. These are all automatically removed from the list of features to give a total of 130 new features for the log-linear ranking model. We train the log-linear ranking model on 7759 F-structures from the TIGER treebank. We generate strings from each F-structure and take the original treebank string to be the labelled example. All other examples are viewed as unlabelled. We tune the parameters of the log-linear model on a small development set of 63 sentences, and carry out the final evaluation on 261 unseen sentences. The ranking results of the model with the additional IS-inspired features are given in Table 7. Exact Model BLEU Match (%) Cahill et al. (2007) 0.7366 52.49 New Model (Model 1) 0.7534 54.40 Table 7: Ranking Results for new model with ISinspired syntactic asymmetry features. We evaluate the string chosen by the log-linear model against the original treebank string in terms of exact match and BLEU score (Papineni et al., 821 Syntactic feature Type Definites Definite descriptions SIMPLE DEF simple definite descriptions POSS DEF simple definite descriptions with a possessive determiner (pronoun or possibly genitive name) DEF ATTR ADJ definite descriptions with adjectival modifier DEF GENARG definite descriptions with a genitive argument DEF PPADJ definite descriptions with a PP adjunct DEF RELARG definite descriptions including a relative clause DEF APP definite descriptions including a title or job description as well as a proper name (e.g. an apposition) Names PROPER combinations of position/title and proper name (without article) BARE PROPER bare proper names Demonstrative descriptions SIMPLE DEMON simple demonstrative descriptions MOD DEMON adjectivally modified demonstrative descriptions Pronouns PERS PRON personal pronouns EXPL PRON expletive pronoun REFL PRON reflexive pronoun DEMON PRON demonstrative pronouns (not: determiners) GENERIC PRON generic pronoun (man – one) DA PRON ”da”-pronouns (darauf, dar¨uber, dazu, .. .) 
LOC ADV location-referring pronouns TEMP ADV,YEAR Dates and times Indefinites SIMPLE INDEF simple indefinites NEG INDEF negative indefinites INDEF ATTR indefinites with adjectival modifiers INDEF CONTRAST indefinites with contrastive modifiers (einige – some, andere – other, weitere – further, .. .) INDEF PPADJ indefinites with PP adjuncts INDEF REL indefinites with relative clause adjunct INDEF GEN indefinites with genitive adjuncts INDEF NUM measure/number phrases INDEF QUANT quantified indefinites Table 5: An inventory of interesting syntactic characteristics in IS phrases Label 1 (+ features) Label 2 (+ features) B/A Total D-GIVEN-PRONOUN INDEF-REL 0 19 PERS PRON 39 INDEF ATTR 23 DA PRON 25 SIMPLE INDEF 17 DEMON PRON 19 GENERIC PRON 11 D-GIVEN-PRONOUN D-GIVEN-CATAPHOR 0.1 11 PERS PRON 39 SIMPLE DEF 13 DA PRON 25 DA PRON 10 DEMON PRON 19 GENERIC PRON 11 D-GIVEN-REFLEXIVE NEW 0.11 31 REFL PRON 54 SIMPLE INDEF 113 INDEF ATTR 53 INDEF NUM 32 INDEF PPADJ 26 INDEF GEN 25 ... Table 6: IS asymmetric pairs augmented with syntactic characteristics 822 2002). We achieve an improvement of 0.0168 BLEU points and 1.91 percentage points in exact match. The improvement in BLEU is statistically significant (p < 0.01) using the paired bootstrap resampling significance test (Koehn, 2004). Going back to Example (3), the new model chooses a “better” string than the Cahill et al. (2007) model. The new model chooses the original string. While the string chosen by the Cahill et al. (2007) system is also a perfectly valid sentence, our empirical findings from the news corpus were that the default order of generic pronoun before definite NP were more frequent. The system with the new features helped to choose the original string, as it had learnt this asymmetry. Was it just the syntax? The results in Table 7 clearly show that the new model is beneficial. However, we want to know how much of the improvement gained is due to the IS asymmetries, and how much the syntactic asymmetries on their own can contribute. To this end, we carry out a further experiment where we calculate syntactic asymmetries based on the automatic markup of the corpus, and ignore the IS labels completely. Again we remove any inconsistent asymmetries and only choose asymmetries with a ratio of higher than 0.4. The top asymmetries are given in Table 8. Dominant order (≫: “before”) B/A Total BAREPROPER≫INDEF NUM 0 33 DA PRON≫INDEF NUM 0 16 DEF PPADJ≫TEMP ADV 0 15 SIMPLE INDEF≫INDEF QUANT 0 14 PERS PRON≫INDEF ATTR 0 12 DEF PPADJ≫EXPL PRON 0 12 GENERIC PRON≫INDEF ATTR 0 12 REFL PRON≫YEAR 0 11 INDEF PPADJ≫INDEF NUM 0.02 57 DEF APP≫BAREPROPER 0.03 34 BAREPROPER≫TEMP ADV 0.04 26 TEMP ADV≫INDEF NUM 0.04 25 PROPER≫INDEF GEN 0.05 20 DEF GENARG≫INDEF ATTR 0.06 18 . . . ... Table 8: Purely syntactic asymmetries For each asymmetry, we create a new feature X precedes Y. This results in a total of 66 features. Of these 30 overlap with the features used in the above experiment. We do not include the features extracted in the first attempt in this experiment. The same training procedure is carried out and we test on the same heldout test set of 261 sentences. The results are given in Table 9. Finally, we combine the two lists of features and evaluate, these results are also presented in Table 9. Exact Model BLEU Match (%) Cahill et al. 
(2007) 0.7366 52.49 Model 1 0.7534 54.40 Synt.-asym.-based Model 0.7419 54.02 Combination 0.7437 53.64 Table 9: Results for ranking model with purely syntactic asymmetry features They show that although the syntactic asymmetries alone contribute to an improvement over the baseline, the gain is not as large as when the syntactic asymmetries are constrained to correspond to IS label asymmetries (Model 1).9 Interestingly, the combination of the lists of features does not result in an improvement over Model 1. The difference in BLEU score between the model of Cahill et al. (2007) and the model that only takes syntactic-based asymmetries into account is not statistically significant, while the difference between Model 1 and this model is statistically significant (p < 0.05). 7 Discussion In the work described here, we concentrate only on taking advantage of the information that is readily available to us. Ideally, we would like to be able to use the IS asymmetries directly as features, however, without any means of automatically annotating new text with these categories, this is impossible. Our experiments were designed to test, whether we can achieve an improvement in the generation of German text, without a fully labelled corpus, using the insight that at least some IS categories correspond to morphosyntactic characteristics that can be easily identified. We do not claim to go beyond this level to the point where true IS labels would be used, rather we attempt to provide a crude approximation of IS using only morphosyntactic information. To be able to fully automatically annotate text with IS labels, one would need to supplement the morphosyntactic features 9The difference may also be due to the fewer features used in the second experiment. However, this emphasises, that the asymmetries gleaned from syntactic information alone are not strong enough to be able to determine the prevailing order of constituents. When we take the IS labels into account, we are honing in on a particular subset of interesting syntactic asymmetries. 823 with information about anaphora resolution, world knowledge, ontologies, and possibly even build dynamic discourse representations. We would also like to emphasise that we are only looking at one sentence at a time. Of course, there are other inter-sentential factors (not relying on external resources) that play a role in choosing the optimal string realisation, for example parallelism or the position of the sentence in the paragraph or text. Given that we only looked at IS factors within a sentence, we think that such a significant improvement in BLEU and exact match scores is very encouraging. In future work, we will look at what information can be automatically acquired to help generation ranking based on more than one sentence. While the experiments presented this paper are limited to a German realisation ranking system, there is nothing in the methodology that precludes it from being applied to another language. The IS annotation scheme is language-independent, and so all one needs to be able to apply this to another language is a corpus annotated with IS categories. We extracted our IS asymmetry patterns from a small corpus of spoken news items. This corpus contains text of a similar domain to the TIGER treebank. Further experiments are required to determine how domain specific the asymmetries are. 
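To make the feature construction of the two preceding sections concrete, the following Python sketch generates "X precedes Y" features from IS asymmetries augmented with their most frequent syntactic characteristics and then removes the inconsistent ones. The input structures (`asymmetric_pairs`, `label_features`) are assumed formats for illustration, and the asymmetry cut-off on the ratio is applied before calling the function.

```python
from itertools import product

def precedence_features(asymmetric_pairs, label_features):
    """Generate 'X precedes Y' features from IS asymmetries.

    `asymmetric_pairs` is assumed to be a list of (label_A, label_B)
    IS-label pairs whose ordering ratio passed the cut-off.
    `label_features` maps each IS label to its most frequent syntactic
    characteristics (cf. Table 4), e.g.
    {"D-GIV-PRO": ["PERS PRON", "DA PRON"], "NEW": ["SIMPLE INDEF"]}.
    """
    candidates = set()
    for a, b in asymmetric_pairs:
        for fa, fb in product(label_features.get(a, []), label_features.get(b, [])):
            candidates.add((fa, fb))  # feature: "fa precedes fb"

    # Filter inconsistencies: X-precedes-X, and any pair for which both
    # directions (X precedes Y as well as Y precedes X) were generated.
    consistent = {
        (x, y) for (x, y) in candidates
        if x != y and (y, x) not in candidates
    }
    return sorted(consistent)

if __name__ == "__main__":
    feats = precedence_features(
        [("D-GIV-PRO", "NEW")],
        {"D-GIV-PRO": ["PERS PRON", "DA PRON"],
         "NEW": ["SIMPLE INDEF", "INDEF ATTR"]},
    )
    for x, y in feats:
        print(f"{x} precedes {y}")
```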
Much related work on incorporating information status (or information structure) into language generation has been on spoken text, since information structure is often encoded by means of prosody. In a limited domain setting, Prevost (1996) describes a two-tiered information structure representation. During the high level planning stage of generation, using a small knowledge base, elements in the discourse are automatically marked as new or given. Contrast and focus are also assigned automatically. These markings influence the final string generated. We are focusing on a broad-coverage system, and do not use any external world-knowledge resources. Van Deemter and Odijk (1997) annotate the syntactic component from which they are generating with information about givenness. This information is determined by detecting contradictions and parallel sentences. Pulman (1997) also uses information about parallelism to predict word order. In contrast, we only look at one sentence when we approximate information status, future work will look at cross sentential factors. Endriss and Klabunde (2000) describe a sentence planner for German that annotates the propositional input with discourse-related features in order to determine the focus, and thus influence word order and accentuation. Their system, again, is domainspecific (generating monologue describing a film plot) and requires the existence of a knowledge base. The same holds for Yampolska (2007), who presents suggestions for generating information structure in Russian and Ukrainian football reports, using rules to determine parallel structures for the placement of contrastive accent, following similar work by Theune (1997). While our paper does not address the generation of speech / accentuation, it is of course conceivable to employ the IS annotated radio news corpus from which we derived the label asymmetries (and which also exists in a spoken and prosodically annotated version) in a similar task of learning the correlations between IS labels and pitch accents. Finally, Bresnan et al. (2007) present work on predicting the dative alternation in English using 14 features relating to information status which were manually annotated in their corpus. In our work, we manually annotate a small corpus in order to learn generalisations. From these we learn features that approximate the generalisations, enabling us to apply them to large amounts of unseen data without further manual annotation. 8 Conclusions In this paper we presented a novel method of including IS into the task of generation ranking. Since automatic annotation of IS labels themselves is not currently possible, we approximate the IS categories by their syntactic characteristics. By calculating strong asymmetries between pairs of IS labels, and establishing the most frequent syntactic characteristics of these asymmetries, we designed a new set of features for a log-linear ranking model. In comparison to a baseline model, we achieve statistically significant improvement in BLEU score. We showed that these improvements were not only due to the effect of purely syntactic asymmetries, but that the IS asymmetries were what drove the improved model. Acknowledgments This work was funded by the Collaborative Research Centre (SFB 732) at the University of Stuttgart. 824 References Betty J. Birner. 1994. Information Status and Word Order: an Analysis of English Inversion. Language, 70(2):233–259. Joan Bresnan, Anna Cueni, Tatiana Nikitina, and R. Harald Baayen. 2007. 
Predicting the Dative Alternation. Cognitive Foundations of Interpretation, pages 69–94. Aoife Cahill, Martin Forst, and Christian Rohrer. 2007. Stochastic Realisation Ranking for a Free Word Order Language. In Proceedings of the Eleventh European Workshop on Natural Language Generation, pages 17–24, Saarbr¨ucken, Germany. DFKI GmbH. Herbert H. Clark and Catherine R. Marshall. 1981. Definite Reference and Mutual Knowledge. In Aravind Joshi, Bonnie Webber, and Ivan Sag, editors, Elements of Discourse Understanding, pages 10–63. Cambridge University Press. Kees van Deemter and Jan Odijk. 1997. Context Modeling and the Generation of Spoken Discourse. Speech Communication, 21(1-2):101–121. Cornelia Endriss and Ralf Klabunde. 2000. Planning Word-Order Dependent Focus Assignments. In Proceedings of the First International Conference on Natural Language Generation (INLG), pages 156– 162, Morristown, NJ. Association for Computational Linguistics. Martin Forst. 2007. Disambiguation for a Linguistically Precise German Parser. Ph.D. thesis, University of Stuttgart. Arbeitspapiere des Instituts f¨ur Maschinelle Sprachverarbeitung (AIMS), Vol. 13(3). John A. Hawkins. 1978. Definiteness and Indefiniteness: A Study in Reference and Grammaticality Prediction. Croom Helm, London. Ron Kaplan and Joan Bresnan. 1982. Lexical Functional Grammar, a Formal System for Grammatical Representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173–281. MIT Press, Cambridge, MA. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2004), pages 388–395, Barcelona. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 311– 318, Philadelphia, PA. Scott Prevost. 1996. An Information Structural Approach to Spoken Language Generation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL 1996), pages 294–301, Morristown, NJ. Ellen F. Prince. 1981. Toward a Taxonomy of GivenNew Information. In P. Cole, editor, Radical Pragmatics, pages 233–255. Academic Press, New York. Ellen F. Prince. 1992. The ZPG Letter: Subjects, Definiteness and Information Status. In W. C. Mann and S. A. Thompson, editors, Discourse Description: Diverse Linguistic Analyses of a Fund-Raising Text, pages 295–325. Benjamins, Amsterdam. Stephen G. Pulman. 1997. Higher Order Unification and the Interpretation of Focus. Linguistics and Philosophy, 20:73–115. Arndt Riester. 2008a. A Semantic Explication of ’Information Status’ and the Underspecification of the Recipients’ Knowledge. In Atle Grønn, editor, Proceedings of Sinn und Bedeutung 12, University of Oslo. Arndt Riester. 2008b. The Components of Focus and their Use in Annotating Information Structure. Ph.D. thesis, University of Stuttgart. Arbeitspapiere des Instituts f¨ur Maschinelle Sprachverarbeitung (AIMS), Vol. 14(2). Christian Rohrer and Martin Forst. 2006. Improving Coverage and Parsing Quality of a Large-Scale LFG for German. In Proceedings of the Language Resources and Evaluation Conference (LREC 2006), Genoa, Italy. Rob van der Sandt. 1992. Presupposition Projection as Anaphora Resolution. Journal of Semantics, 9:333– 377. 
Mariët Theune. 1997. Goalgetter: Predicting Contrastive Accent in Data-to-Speech Generation. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL/EACL 1997), pages 519–521, Madrid. Student paper. Nadiya Yampolska. 2007. Information Structure in Natural Language Generation: an Account for East-Slavic Languages. Term paper. Universität des Saarlandes.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 826–833, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Syntax-Free Approach to Japanese Sentence Compression Tsutomu HIRAO, Jun SUZUKI and Hideki ISOZAKI NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0237 Japan {hirao,jun,isozaki}@cslab.kecl.ntt.co.jp Abstract Conventional sentence compression methods employ a syntactic parser to compress a sentence without changing its meaning. However, the reference compressions made by humans do not always retain the syntactic structures of the original sentences. Moreover, for the goal of ondemand sentence compression, the time spent in the parsing stage is not negligible. As an alternative to syntactic parsing, we propose a novel term weighting technique based on the positional information within the original sentence and a novel language model that combines statistics from the original sentence and a general corpus. Experiments that involve both human subjective evaluations and automatic evaluations show that our method outperforms Hori’s method, a state-of-theart conventional technique. Because our method does not use a syntactic parser, it is 4.3 times faster than Hori’s method. 1 Introduction In order to compress a sentence while retaining its original meaning, the subject-predicate relationship of the original sentence should be preserved after compression. In accordance with this idea, conventional sentence compression methods employ syntactic parsers. English sentences are usually analyzed by a full parser to make parse trees, and the trees are then trimmed (Knight and Marcu, 2002; Turner and Charniak, 2005; Unno et al., 2006). For Japanese, dependency trees are trimmed instead of full parse trees (Takeuchi and Matsumoto, 2001; Oguro et al., 2002; Nomoto, 2008)1 This parsing approach is reasonable because the compressed output is grammatical if the 1Hereafter, we refer these compression processes as “tree trimming.” input is grammatical, but it offers only moderate compression rates. An alternative to the tree trimming approach is the sequence-oriented approach (McDonald, 2006; Nomoto, 2007; Clarke and Lapata, 2006; Hori and Furui, 2003). It treats a sentence as a sequence of words and structural information, such as a syntactic or dependency tree, is encoded in the sequence as features. Their methods have the potential to drop arbitrary words from the original sentence without considering the boundary determined by the tree structures. However, they still rely on syntactic information derived from fully parsed syntactic or dependency trees. We found that humans usually ignored the syntactic structures when compressing sentences. For example, in many cases, they compressed the sentence by dropping intermediate nodes of the syntactic tree derived from the source sentence. We believe that making compression strongly dependent on syntax is not appropriate for reproducing reference compressions. Moreover, on-demand sentence compression is made problematic by the time spent in the parsing stage. This paper proposes a syntax-free sequenceoriented sentence compression method. To maintain the subject-predicate relationship in the compressed sentence and retain fluency without using syntactic parsers, we propose two novel features: intra-sentence positional term weighting (IPTW) and the patched language model (PLM). IPTW is defined by the term’s positional information in the original sentence. 
PLM is a form of summarization-oriented fluency statistics derived from the original sentence and the general language model. The weight parameters for these features are optimized within the Minimum Classification Error (MCE) (Juang and Katagiri, 1992) learning framework. Experiments that utilize both human subjective and automatic evaluations show that our method is 826 センタ試験 で 公表 して い な޿ 枝問部分 の Source Sentence 推定し た が 福武 配点について センタ試験枝問 の Chunk 1 Chunk 2 Chunk 3 Chunk 4 Chunk 5 Chunk 6 Chunk 7 Compressed Sentence Chunk7 = a part of Chunk6 + parts of Chunk4 センタ試験 枝問の suitei shi ta haiten nitsuite fukutake ga edamonbubun no kouhyou shi te nai center shiken de 推定し た が 福武 配点について Chunk 1 Chunk 2 Chunk 3 suitei shi ta haiten nitsuite fukutake ga center shiken edamon no edamon no center shiken Compression compound noun i Figure 1: An example of the dependency relation between an original sentence and its compressed variant. superior to conventional sequence-oriented methods that employ syntactic parsers while being about 4.3 times faster. 2 Analysis of reference compressions Syntactic information does not always yield improved compression performance because humans usually ignore the syntactic structures when they compress sentences. Figure 1 shows an example. English translation of the source sentence is “Fukutake Publishing Co., Ltd. presumed preferential treatment with regard to its assessed scores for a part of the questions for a series of Center Examinations.” and its compression is “Fukutake presumed preferential scores for questions for a series of Center Examinations.” In the figure, each box indicates a syntactic chunk, bunsetsu. The solid arrows indicate dependency relations between words2. We observe that the dependency relations are changed by compression; humans create compound nouns using the components derived from different portions of the original sentence without regard to syntactic constraints. ‘Chunk 7’ in the compressed sentence was constructed by dropping both content and functional words and joining other content words contained in ‘Chunk 4’ and ‘Chunk 6’ of 2Generally, a dependency relation is defined between bunsetsu. Therefore, in order to identify word dependencies, we followed Kudo’s rule (Kudo and Matsumoto, 2004) the original sentence. ‘Chunk 5’ is dropped completely. This compression cannot be achieved by tree trimming. According to an investigation in our corpus of manually compressed Japanese sentences, which we used in the experimental evaluation, 98.7% of them contain at least one segment that does not retain the original tree structure. Human usually compress sentences by dropping the intermediate nodes in the dependency tree. However, the resulting compressions retain both adequacy and fluency. This statistic supports the view that sentence compression that strongly depends on syntax is not useful in reproducing reference compressions. We need a sentence compression method that can drop intermediate nodes in the syntactic tree aggressively beyond the tree-scoped boundary. In addition, sentence compression methods that strongly depend on syntactic parsers have two problems: ‘parse error’ and ‘decoding speed.’ 44% of sentences output by a state-of-the-art Japanese dependency parser contain at least one error (Kudo and Matsumoto, 2005). Even more, it is well known that if we parse a sentence whose source is different from the training data of the parser, the performance could be much worse. 
This critically degrades the overall performance of sentence compression. Moreover, summarization systems often have to process megabytes of documents. Parsers are still slow and users of on827 demand summarization systems are not prepared to wait for parsing to finish. 3 A Syntax Free Sequence-oriented Sentence Compression Method As an alternative to syntactic parsing, we propose two novel features, intra-sentence positional term weighting (IPTW) and the patched language model (PLM) for our syntax-free sentence compressor. 3.1 Sentence Compression as a Combinatorial Optimization Problem Suppose that a compression system reads sentence x= x1 , x2, . . . , xj, . . . , xN, where xj is the j-th word in the input sentence. The system then outputs the compressed sentence y =y1, y2, . . . , yi, . . . , yM, where yi is the ith word in the output sentence. Here, yi ∈ {x1, . . . , xN}. We assume y0=x0=<s> (BOS) and yM+1=xN+1=</s> (EOS). We define function I(·), which maps word yi to the index of the word in the original sentence. For example, if source sentence is x = x1, x2, . . . , x5 and its compressed variant is y = x1, x3, x4, I(y1) = 1, I(y2) = 3, I(y3) = 4. We define a significance score f(x, y, Λ) for compressed sentence y based on Hori’s method (Hori and Furui, 2003). Λ = {λg, λh} is a parameter vector. f(x, y; Λ) = M+1  i=1 {g(x, I(yi); λg) + h(x, I(yi), I(yi−1); λh)} (1) The first term of equation (1) (g(·)) is the importance of each word in the output sentence, and the second term (h(·)) is the the linguistic likelihood between adjacent words in the output sentence. The best subsequence ˆy= argmax y f(x, y; Λ) is identified by dynamic programming (DP) (Hori and Furui, 2003). 3.2 Features We use IPTW to define the significance score g(x, I(yi); λg). Moreover, we use PLM to define the linguistic likelihood h(x, I(yi+1), I(yi); λh). 3.2.1 Intra-sentence Positional Term Weighting (IPTW) IDF is a global term weighting scheme in that it measures the significance score of a word in a text corpus, which could be extremely large. By contrast, this paper proposes another type of term weighting; it measures the positional significance score of a word within its sentence. Here, we assume the following hypothesis: • The “significance” of a word depends on its position within its sentence. In Japanese, the main subject of a sentence usually appears at the beginning of the sentence (BOS) and the main verb phrase almost always appears at the end of the sentence (EOS). These words or phrases are usually more important than the other words in the sentence. In order to add this knowledge to the scoring function, term weight is modeled by the following Gaussian mixture. N(psn(x, I(yi)); λg) = m1 1 √ 2πσ1 exp  −1 2 psn(x, I(yi)) −μ1 σ1 2 + m2 1 √ 2πσ2 exp  −1 2 psn(x, I(yi)) −μ2 σ2 2 (2) Here, λg = {μk, σk, mk}k=1,2. psn(x, I(yi)) returns the relative position of yi in the original sentence x which is defined as follows: psn(x, I(yi)) = start(x, I(yi)) length(x) (3) ‘length(x)’ denotes the number of characters in the source sentence and ‘start(x, I(yi))’ denotes the accumulated run of characters from BOS to (x, I(yi)). In equation (2), μk,σk indicates the mean and the standard deviation for the normal distribution, respectively. mk is a mixture parameter. 
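The relative-position function of equation (3) and the mixture of equation (2) are straightforward to compute; the Python sketch below illustrates them. The mixture parameters used in the example are invented for illustration (the actual values are estimated with MCE and, as discussed later, peak near the beginning and end of the sentence), and whitespace is ignored when accumulating character counts.

```python
import math

def psn(words, i):
    """Relative position of words[i]: characters accumulated before it
    divided by the total character length of the sentence (eq. 3);
    whitespace is ignored here for simplicity."""
    total = sum(len(w) for w in words)
    start = sum(len(w) for w in words[:i])
    return start / total

def iptw_weight(p, mixture):
    """Two-component Gaussian mixture over relative position p (eq. 2).

    `mixture` holds (m_k, mu_k, sigma_k) for each component; the values
    used below are illustrative, not the ones learned in the paper.
    """
    weight = 0.0
    for m, mu, sigma in mixture:
        weight += m * (1.0 / (math.sqrt(2 * math.pi) * sigma)) * \
                  math.exp(-0.5 * ((p - mu) / sigma) ** 2)
    return weight

if __name__ == "__main__":
    words = ["the", "government", "promoted", "bilateral", "trade"]
    mixture = [(0.5, 0.0, 0.2), (0.5, 1.0, 0.2)]  # assumed peaks at BOS and EOS
    for i, w in enumerate(words):
        print(w, round(iptw_weight(psn(words, i), mixture), 3))
```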
We use the distribution (2) in defining g(x, I(yi); λg) as follows: g(x, I(yi); λg) = ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩ IDF(x, I(yi)) × N(psn(x, I(yi); λg) if pos(x,I(yi)) = noun, verb, adjective Constant × N(psn(x, I(yi); λg) otherwise (4) 828 Here, pos(x, I(yi)) denotes the part-of-speech tag for yi. λg is optimized by using the MCE learning framework. 3.2.2 Patched Language Model Many studies on sentence compression employ the n-gram language model to evaluate the linguistic likelihood of a compressed sentence. However, this model is usually computed by using a huge volume of text data that contains both short and long sentences. N-gram distribution of short sentences may different from that of long sentences. Therefore, the n-gram probability sometimes disagrees with our intuition in terms of sentence compression. Moreover, we cannot obtain a huge corpus consisting solely of compressed sentences. Even if we collect headlines as a kind of compressed sentence from newspaper articles, corpus size is still too small. Therefore, we propose the following novel linguistic likelihood based on statistics derived from the original sentences and a huge corpus: PLM(x, I(yj), I(yj−1)) = ⎧ ⎨ ⎩ 1 if I(yj) = I(yj−1) + 1 λPLM Bigram(x, I(yj), I(yj−1)) otherwise (5) PLM stands for Patched Language Model. Here, 0 ≤λPLM ≤1, Bigram(·) indicates word bigram probability. The first line of equation (5) agrees with Jing’s observation on sentence alignment tasks (Jing and McKeown, 1999); that is, most (or almost all) bigrams in a compressed sentence appear in the original sentence as they are. 3.2.3 POS bigram Since POS bigrams are useful for rejecting ungrammatical sentences, we adopt them as follows: Ppos(x, I(yi+1)|I(yi)) = P(pos(x, I(yi+1))|pos(x, I(yi))). (6) Finally, the linguistic likelihood between adjacent words within y is defined as follows: h(x, I(yi+1), I(yi); λh) = PLM(x, I(yi+1), I(yi)) + λ(pos(x,I(yi+1))|pos(x,I(yi)))Ppos(x, I(yi+1)|I(yi)) 3.3 Parameter Optimization We can regard sentence compression as a two class problem: we give a word in the original sentence class label +1 (the word is used in the compressed output) or −1 (the word is not used). In order to consider the interdependence of words, we employ the Minimum Classification Error (MCE) learning framework (Juang and Katagiri, 1992), which was proposed for learning the goodness of a sequence. xt denotes the t-th original sentence in the training data set T. y∗ t denotes the reference compression that is made by humans and ˆyt is a compressed sentence output by a system. When using the MCE framework, the misclassification measure is defined as the difference between the score of the reference sentence and that of the best non-reference output and we optimize the parameters by minimizing the measure. d(y, x; Λ) = { |T|  t=1 f(xt, y∗ t ; Λ) − max ˆ yt̸=y∗ t f(xt, ˆyt; Λ)} (7) It is impossible to minimize equation (7) because we cannot derive the gradient of the function. Therefore, we employ the following sigmoid function to smooth this measure. L(d(x, y; Λ)) = |T|  t=1 1 1 + exp(−c × d(xt, yt; Λ)) (8) Here, c is a constant parameter. To minimize equation (8), we use the following equation. ∇L=∂L ∂d  ∂d ∂λ1 , ∂d ∂λ2 , . . .  =0 (9) Here, ∂L ∂d is given by: ∂L ∂d = c 1 + exp (−c × d)  1 − 1 1 + exp (−c × d)  (10) Finally, the parameters are optimized by using the iterative form. 
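Before the explicit update rule is given, the Python sketch below illustrates the smoothed objective of equation (8) and one iterative step; it is a simplified stand-in rather than the actual training code. It follows the usual MCE convention in which the misclassification measure is positive when the best non-reference compression outscores the reference, so that minimising the smoothed loss favours the reference; the toy features, the numerical gradient, and the step size are all assumptions made for illustration.

```python
import math

def mce_loss(weights, featurize, data, c=1.0):
    """Smoothed MCE objective (cf. equations 7-8).

    `data` is assumed to be a list of (reference, candidates) pairs, where
    the candidates are alternative compressions of the same sentence, and
    `featurize(y)` returns a feature vector so that f(y) = weights . features.
    The measure d is positive when the best non-reference candidate
    outscores the reference (standard MCE convention, assumed here).
    """
    def f(y):
        return sum(w * v for w, v in zip(weights, featurize(y)))

    loss = 0.0
    for reference, candidates in data:
        best_wrong = max(f(y) for y in candidates if y != reference)
        d = best_wrong - f(reference)
        loss += 1.0 / (1.0 + math.exp(-c * d))
    return loss

def mce_step(weights, featurize, data, epsilon=0.01, delta=1e-4):
    """One iterative update (cf. equation 11), with a numerical gradient."""
    base = mce_loss(weights, featurize, data)
    grad = []
    for k in range(len(weights)):
        bumped = list(weights)
        bumped[k] += delta
        grad.append((mce_loss(bumped, featurize, data) - base) / delta)
    return [w - epsilon * g for w, g in zip(weights, grad)]

if __name__ == "__main__":
    data = [("good short", ["good short", "bad long one", "odd"])]
    featurize = lambda y: [len(y.split()), -len(y)]  # toy features only
    w = [0.1, 0.1]
    for _ in range(20):
        w = mce_step(w, featurize, data)
    print(w, mce_loss(w, featurize, data))
```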
For example, λw is optimized as follows: λw(new) = λw(old) −ϵ ∂L ∂λw(old) (11) 829 Our parameter optimization procedure can be replaced by another one such as MIRA (McDonald et al., 2005) or CRFs (Lafferty et al., 2001). The reason why we employed MCE is that it is very easy to implement. 4 Experimental Evaluation 4.1 Corpus and Evaluation Measures We randomly selected 1,000 lead sentences (a lead sentence is the first sentence of an article excluding the headline.) whose length (number of words) was greater than 30 words from the Mainichi Newspaper from 1994 to 2002. There were five different ideal compressions (reference compressions produced by human) for each sentence; all had a 0.6 compression rate. The average length of the input sentences was about 42 words and that of the reference compressions was about 24 words. For MCE learning, we selected the reference compression that maximize the BLEU score (Papineni et al., 2002) (= argmaxr∈RBLEU(r, R\r)) from the set of reference compressions and used it as correct data for training. Note that r is a reference compression and R is the set of reference compressions. We employed both automatic evaluation and human subjective evaluation. For automatic evaluation, we employed BLEU (Papineni et al., 2002) by following (Unno et al., 2006). We utilized 5fold cross validation, i.e., we broke the whole data set into five blocks and used four of them for training and the remainder for testing and repeated the evaluation on the test data five times changing the test block each time. We also employed human subjective evaluation, i.e., we presented the compressed sentences to six human subjects and asked them to evaluate the sentence for fluency and importance on a scale 1 (worst) to 5 (best). For each source sentence, the order in which the compressed sentences were presented was random. 4.2 Comparison of Sentence Compression Methods In order to investigate the effectiveness of the proposed features, we compared our method against Hori’s model (Hori and Furui, 2003), which is a state-of-the-art Japanese sentence compressor based on the sequence-oriented approach. Table 1 shows the feature set used in our experiment. Note that ‘Hori−’ indicates the earlier verTable 1: Configuration setup Label g() h() Proposed IPTW PLM + POS w/o PLM IPTW Bigram+POS w/o IPTW IDF PLM+POS Hori− IDF Trigram Proposed+Dep IPTW PLM + POS +Dep w/o PLM+Dep IPTW Bigram+POS+Dep w/o IPTW+Dep IDF PLM+POS+Dep Hori IDF Trigram+Dep Table 2: Results: automatic evaluation Label BLEU Proposed .679 w/o PLM .617 w/o IPTW .635 Hori− .493 Proposed+Dep .632 w/o PLM+Dep .669 w/o IPTW+Dep .656 Hori .600 sion of Hori’s method which does not require the dependency parser. For example, label ‘w/o IPTW + Dep’ employs IDF term weighting as function g(·) and word bigram, part-of-speech bigram and dependency probability between words as function h(·) in equation (1). To obtain the word dependency probability, we use Kudo’s relative-CaboCha (Kudo and Matsumoto, 2005). We developed the n-gram language model from a 9 year set of Mainichi Newspaper articles. We optimized the parameters by using the MCE learning framework. 5 Results and Discussion 5.1 Results: automatic evaluation Table 2 shows the evaluation results yielded by BLUE at the compression rate of 0.60. Without introducing dependency probability, both IPTW and PLM worked well. Our method achieved the highest BLEU score. Compared to ‘Proposed’, ‘w/o IPTW’ offers significantly worse performance. 
The results support the view that our hypothesis, namely that the significance score of a word depends on its position within a sentence, is effective for sentence compression. Figure 2 shows an example of Gaussian mixture with pre830 0 0.05 0.1 0.15 0.2 0 N/4 N/2 3N/4 N x1, x2, ,xj, ,xN <S> </S> x Figure 2: An example of Gaussian mixture with predicted parameters dicted parameters. From the figure, we can see that the positional weights for words have peaks at BOS and EOS. This is because, in many cases, the subject appears at the beginning of Japanese sentences and the predicate at the end. Replacing PLM with the bigram language model (w/o PLM) degrades the performance significantly. This result shows that the n-gram language model is improper for sentence compression because the n-gram probability is computed by using a corpus that includes both short and long sentences. Most bigrams in a compressed sentence followed those in the source sentence. The dependency probability is very helpful provided either IPTW or PLM is employed. For example, ‘w/o PLM + Dep’ achieved the second highest BLEU score. The difference of the score between ‘Proposed’ and ‘w/o PLM + Dep’ is only 0.01 but there were significant differences as determined by Wilcoxon signed rank test. Compared to ‘Hori−’, ‘Hori’ achieved a significantly higher BLEU score. The introduction of both IPTW and PLM makes the use of dependency probability unnecessary. In fact, the score of ‘Proposed + Dep’ is not good. We believe that this is due to overfitting. PLM is similar to dependency probability in that both features emphasize word pairs that occurred as bigrams in the source sentence. Therefore, by introducing dependency probability, the information within the feature vector is not increased even though the number of features is increased. Table 3: Results: human subjective evaluations Label Fluency Importance Proposed 4.05 (±0.846) 3.33 (±0.854) w/o PLM + Dep 3.91 (±0.759) 3.24 (±0.753) Hori− 3.09 (±0.899) 2.34 (±0.696) Hori 3.28 (±0.924) 2.64 (±0.819) Human 4.86 (±0.268) 4.66 (±0.317) 5.2 Results: human subjective evaluation We used human subjective evaluations to compare our method to human compression, ‘w/o PLM + Dep’ which achieved the second highest performance in the automatic evaluation, ‘Hori−’ and ‘Hori’. We randomly selected 100 sentences from the test corpus and evaluated their compressed variants in terms of ‘fluency’ and ‘importance.’ Table 3 shows the results, mean score of all judgements as well as the standard deviation. The results indicate that human compression achieved the best score in both fluency and importance. Human compression significantly outperformed other compression methods. This results supports the idea that humans can easily compress sentences with the compression rate of 0.6. Of the automatic methods, our method achieved the best score in both fluency and importance while ‘Hori−’ was the worst performer. Our method significantly outperformed both ‘Hori’ and ‘Hori−’ on both metrics. Moreover, our method outperformed ‘w/o PLM + Dep’ again. However, the differences in the scores are not significant. We believe that this is due to a lack of data. If we use more data for the significant test, significant differences will be found. Although our method does not employ any explicit syntactic information, its fluency and importance are extremely good. This confirms the effectiveness of the new features of IPTW and PLM. 
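As a concrete illustration of the PLM feature of equation (5), the Python sketch below scores a compressed word sequence given the indices of the kept words. The bigram model, the value of λPLM, and the example sentence are placeholders; in the experiments the bigram statistics come from the 9-year newspaper corpus, the weight is tuned with MCE, and the full likelihood h(·) additionally includes the POS-bigram term.

```python
def plm_score(prev_idx, cur_idx, bigram_prob, lam_plm=0.5):
    """Patched language model of equation (5).

    If the two output words were adjacent in the original sentence
    (I(y_j) = I(y_{j-1}) + 1), the model trusts the original and returns 1;
    otherwise it backs off to the corpus bigram probability, scaled by
    lam_plm (0.5 is an arbitrary placeholder value).
    """
    if cur_idx == prev_idx + 1:
        return 1.0
    return lam_plm * bigram_prob

def compression_plm(source_words, kept_indices, bigram_model, lam_plm=0.5):
    """Accumulate the PLM feature over a compressed word sequence.

    `kept_indices` are the positions (into source_words) of the words kept
    in the compression, in their original order; `bigram_model(w1, w2)` is
    assumed to return P(w2 | w1) from a general corpus.
    """
    score = 1.0
    for prev, cur in zip(kept_indices, kept_indices[1:]):
        p = bigram_model(source_words[prev], source_words[cur])
        score *= plm_score(prev, cur, p, lam_plm)
    return score

if __name__ == "__main__":
    words = ["the", "minister", "strongly", "promoted", "bilateral", "trade"]
    uniform = lambda w1, w2: 0.01  # stand-in for a real bigram model
    # dropping "strongly": 1 * (0.5 * 0.01) * 1 * 1 = 0.005
    print(compression_plm(words, [0, 1, 3, 4, 5], uniform))
```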
5.3 Comparison of decoding speed We compare the decoding speed of our method against that of Hori’s method. We measured the decoding time for all 1,000 test sentences on a standard Linux Box (CPU: Intel c⃝CoreTM 2 Extreme QX9650 (3.00GHz), Memory: 8G Bytes). The results were as follows: Proposed: 22.14 seconds (45.2 sentences / sec), 831 Hori: 95.34 seconds (10.5 sentences / sec). Our method was about 4.3 times faster than Hori’s method due to the latter’s use of dependency parser. This speed advantage is significant when on-demand sentence compression is needed. 6 Related work Conventional sentence compression methods employ the tree trimming approach to compress a sentence without changing its meaning. For instance, most English sentence compression methods make full parse trees and trim them by applying the generative model (Knight and Marcu, 2002; Turner and Charniak, 2005), discriminative model (Knight and Marcu, 2002; Unno et al., 2006). For Japanese sentences, instead of using full parse trees, existing sentence compression methods trim dependency trees by the discriminative model (Takeuchi and Matsumoto, 2001; Nomoto, 2008) through the use of simple linear combined features (Oguro et al., 2002). The tree trimming approach guarantees that the compressed sentence is grammatical if the source sentence does not trigger parsing error. However, as we mentioned in Section 2, the tree trimming approach is not suitable for Japanese sentence compression because in many cases it cannot reproduce human-produced compressions. As an alternative to these tree trimming approaches, sequence-oriented approaches have been proposed (McDonald, 2006; Nomoto, 2007; Hori and Furui, 2003; Clarke and Lapata, 2006). Nomoto (2007) and McDonald (2006) employed the random field based approach. Hori et al. (2003) and Clarke et al. (2006) employed the linear model with simple combined features. They simply regard a sentence as a word sequence and structural information, such as full parse tree or dependency trees, are encoded in the sequence as features. The advantage of these methods over the tree trimming approach is that they have the potential to drop arbitrary words from the original sentence without the need to consider the boundaries determined by the tree structures. This approach is more suitable for Japanese compression than tree trimming. However, they still rely on syntactic information derived from full parsed trees or dependency trees. Moreover, their use of syntactic parsers seriously degrades the decoding speed. 7 Conclusions We proposed a syntax free sequence-oriented Japanese sentence compression method with two novel features: IPTW and PLM. Our method needs only a POS tagger. It is significantly superior to the methods that employ syntactic parsers. An experiment on a Japanese news corpus revealed the effectiveness of the new features. Although the proposed method does not employ any explicit syntactic information, it outperformed, with statistical significance, Hori’s method a stateof-the-art Japanese sentence compression method based on the sequence-oriented approach. 
The contributions of this paper are as follows: • We revealed that in compressing Japanese sentences, humans usually ignore syntactic structures; they drop intermediate nodes of the dependency tree and drop words within bunsetsu, • As an alternative to the syntactic parser, we proposed two novel features, Intra-sentence positional term weighting (IPTW) and the Patched language model (PLM), and showed their effectiveness by conducting automatic and human evaluations, • We showed that our method is about 4.3 times faster than Hori’s method which employs a dependency parser. References J. Clarke and M. Lapata. 2006. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proc. of the 21st COLING and 44th ACL, pages 377–384. C. Hori and S. Furui. 2003. A new approach to automatic speech summarization. IEEE trans. on Multimedia, 5(3):368–378. H. Jing and K. McKeown. 1999. The Decomposition of Human-Written Summary Sentences. In Proc. of the 22nd SIGIR, pages 129–136. B. H. Juang and S. Katagiri. 1992. Discriminative Learning for Minimum Error Classification. IEEE Trans. on Signal Processing, 40(12):3043–3053. K. Knight and D. Marcu. 2002. Summarization beyond sentence extraction. Artificial Intelligence, 139(1):91–107. 832 T. Kudo and Y. Matsumoto. 2004. A Boosting Algorithm for Classification of Semi-Structured Text. In Proc. of the EMNLP, pages 301–308. T. Kudo and Y. Matsumoto. 2005. Japanese Dependency Parsing Using Relative Preference of Dependency (in japanese). IPSJ Journal, 46(4):1082– 1092. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. of the 18th ICML, pages 282–289. R. McDonald, K. Crammer, and F. Pereira. 2005. Online Large Margrin Training of Dependency Parser. In Proc. of the 43rd ACL, pages 91–98. R. McDonald. 2006. Discriminative sentence compression with soft syntactic evidence. In Proc. of the 11th EACL, pages 297–304. T. Nomoto. 2007. Discriminative sentence compression with conditional random fields. Information Processing and Management, 43(6):1571–1587. T. Nomoto. 2008. A generic sentence trimmer with crfs. In Proc. of the ACL-08: HLT, pages 299–307. R. Oguro, H. Sekiya, Y. Morooka, K. Takagi, and K. Ozeki. 2002. Evaluation of a japanese sentence compression method based on phrase significance and inter-phrase dependency. In Proc. of the TSD 2002, pages 27–32. K. Papineni, S. Roukos, T. Ward, and W-J. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistic (ACL), pages 311–318. K. Takeuchi and Y. Matsumoto. 2001. Acquisition of sentence reduction rules for improving quality of text summaries. In Proc. of the 6th NLPRS, pages 447–452. J. Turner and E. Charniak. 2005. Supervised and unsupervised learning for sentence compression. In Proc. of the 43rd ACL, pages 290–297. Y. Unno, T. Ninomiya, Y. Miyao, and J. Tsujii. 2006. Trimming cfg parse trees for sentence compression using machine learning approach. In Proc. of the 21st COLING and 44th ACL, pages 850–857. 833
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 834–842, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Application-driven Statistical Paraphrase Generation Shiqi Zhao, Xiang Lan, Ting Liu, Sheng Li Information Retrieval Lab, Harbin Institute of Technology 6F Aoxiao Building, No.27 Jiaohua Street, Nangang District Harbin, 150001, China {zhaosq,xlan,tliu,lisheng}@ir.hit.edu.cn Abstract Paraphrase generation (PG) is important in plenty of NLP applications. However, the research of PG is far from enough. In this paper, we propose a novel method for statistical paraphrase generation (SPG), which can (1) achieve various applications based on a uniform statistical model, and (2) naturally combine multiple resources to enhance the PG performance. In our experiments, we use the proposed method to generate paraphrases for three different applications. The results show that the method can be easily transformed from one application to another and generate valuable and interesting paraphrases. 1 Introduction Paraphrases are alternative ways that convey the same meaning. There are two main threads in the research of paraphrasing, i.e., paraphrase recognition and paraphrase generation (PG). Paraphrase generation aims to generate a paraphrase for a source sentence in a certain application. PG shows its importance in many areas, such as question expansion in question answering (QA) (Duboue and Chu-Carroll, 2006), text polishing in natural language generation (NLG) (Iordanskaja et al., 1991), text simplification in computer-aided reading (Carroll et al., 1999), and sentence similarity computation in the automatic evaluation of machine translation (MT) (Kauchak and Barzilay, 2006) and summarization (Zhou et al., 2006). This paper presents a method for statistical paraphrase generation (SPG). As far as we know, this is the first statistical model specially designed for paraphrase generation. It’s distinguishing feature is that it achieves various applications with a uniform model. In addition, it exploits multiple resources, including paraphrase phrases, patterns, and collocations, to resolve the data shortage problem and generate more varied paraphrases. We consider three paraphrase applications in our experiments, including sentence compression, sentence simplification, and sentence similarity computation. The proposed method generates paraphrases for the input sentences in each application. The generated paraphrases are then manually scored based on adequacy, fluency, and usability. The results show that the proposed method is promising, which generates useful paraphrases for the given applications. In addition, comparison experiments show that our method outperforms a conventional SMT-based PG method. 2 Related Work Conventional methods for paraphrase generation can be classified as follows: Rule-based methods: Rule-based PG methods build on a set of paraphrase rules or patterns, which are either hand crafted or automatically collected. In the early rule-based PG research, the paraphrase rules are generally manually written (McKeown, 1979; Zong et al., 2001), which is expensive and arduous. Some researchers then tried to automatically extract paraphrase rules (Lin and Pantel, 2001; Barzilay and Lee, 2003; Zhao et al., 2008b), which facilitates the rule-based PG methods. However, it has been shown that the coverage of the paraphrase patterns is not high enough, especially when the used paraphrase patterns are long or complicated (Quirk et al., 2004). 
Thesaurus-based methods: The thesaurus-based methods generate a paraphrase t for a source sentence s by substituting some words in s with their synonyms (Bolshakov and Gelbukh, 2004; 834 Kauchak and Barzilay, 2006). This kind of method usually involves two phases, i.e., candidate extraction and paraphrase validation. In the first phase, it extracts all synonyms from a thesaurus, such as WordNet, for the words to be substituted. In the second phase, it selects an optimal substitute for each given word from the synonyms according to the context in s. This kind of method is simple, since the thesaurus synonyms are easy to access. However, it cannot generate other types of paraphrases but only synonym substitution. NLG-based methods: NLG-based methods (Kozlowski et al., 2003; Power and Scott, 2005) generally involve two stages. In the first one, the source sentence s is transformed into its semantic representation r by undertaking a series of NLP processing, including morphology analyzing, syntactic parsing, semantic role labeling, etc. In the second stage, a NLG system is employed to generate a sentence t from r. s and t are paraphrases as they are both derived from r. The NLG-based methods simulate human paraphrasing behavior, i.e., understanding a sentence and presenting the meaning in another way. However, deep analysis of sentences is a big challenge. Moreover, developing a NLG system is also not trivial. SMT-based methods: SMT-based methods viewed PG as monolingual MT, i.e., translating s into t that are in the same language. Researchers employ the existing SMT models for PG (Quirk et al., 2004). Similar to typical SMT, a large parallel corpus is needed as training data in the SMT-based PG. However, such data are difficult to acquire compared with the SMT data. Therefore, data shortage becomes the major limitation of the method. To address this problem, we have tried combining multiple resources to improve the SMT-based PG model (Zhao et al., 2008a). There have been researchers trying to propose uniform PG methods for multiple applications. But they are either rule-based (Murata and Isahara, 2001; Takahashi et al., 2001) or thesaurusbased (Bolshakov and Gelbukh, 2004), thus they have some limitations as stated above. Furthermore, few of them conducted formal experiments to evaluate the proposed methods. 3 Statistical Paraphrase Generation 3.1 Differences between SPG and SMT Despite the similarity between PG and MT, the statistical model used in SMT cannot be directly applied in SPG, since there are some clear differences between them: 1. SMT has a unique purpose, i.e., producing high-quality translations for the inputs. On the contrary, SPG has distinct purposes in different applications, such as sentence compression, sentence simplification, etc. The usability of the paraphrases have to be assessed in each application. 2. In SMT, words of an input sentence should be totally translated, whereas in SPG, not all words of an input sentence need to be paraphrased. Therefore, a SPG model should be able to decide which part of a sentence needs to be paraphrased. 3. The bilingual parallel data for SMT are easy to collect. In contrast, the monolingual parallel data for SPG are not so common (Quirk et al., 2004). Thus the SPG model should be able to easily combine different resources and thereby solve the data shortage problem (Zhao et al., 2008a). 4. Methods have been proposed for automatic evaluation in MT (e.g., BLEU (Papineni et al., 2002)). 
The basic idea is that a translation should be scored based on their similarity to the human references. However, they cannot be adopted in SPG. The main reason is that it is more difficult to provide human references in SPG. Lin and Pantel (2001) have demonstrated that the overlapping between the automatically acquired paraphrases and handcrafted ones is very small. Thus the human references cannot properly assess the quality of the generated paraphrases. 3.2 Method Overview The SPG method proposed in this work contains three components, i.e., sentence preprocessing, paraphrase planning, and paraphrase generation (Figure 1). Sentence preprocessing mainly includes POS tagging and dependency parsing for the input sentences, as POS tags and dependency information are necessary for matching the paraphrase pattern and collocation resources in the following stages. Paraphrase planning (Section 3.3) aims to select the units to be paraphrased (called source units henceforth) in an input sentence and the candidate paraphrases for the source 835 Multiple Paraphrase Tables PT1 …… Paraphrase Planning Paraphrase Generation t Sentence Preprocessing s A PT2 PTn Figure 1: Overview of the proposed SPG method. units (called target units) from multiple resources according to the given application A. Paraphrase generation (Section 3.4) is designed to generate paraphrases for the input sentences by selecting the optimal target units with a statistical model. 3.3 Paraphrase Planning In this work, the multiple paraphrase resources are stored in paraphrase tables (PTs). A paraphrase table is similar to a phrase table in SMT, which contains fine-grained paraphrases, such as paraphrase phrases, patterns, or collocations. The PTs used in this work are constructed using different corpora and different score functions (Section 3.5). If the applications are not considered, all units of an input sentence that can be paraphrased using the PTs will be extracted as source units. Accordingly, all paraphrases for the source units will be extracted as target units. However, when a certain application is given, only the source and target units that can achieve the application will be kept. We call this process paraphrase planning, which is formally defined as in Figure 2. An example is depicted in Figure 3. The application in this example is sentence compression. All source and target units are listed below the input sentence, in which the first two source units are phrases, while the third and fourth are a pattern and a collocation, respectively. As can be seen, the first and fourth source units are filtered in paraphrase planning, since none of their paraphrases achieve the application (i.e., shorter in bytes than the source). The second and third source units are kept, but some of their paraphrases are filtered. 3.4 Paraphrase Generation Our SPG model contains three sub-models: a paraphrase model, a language model, and a usability model, which control the adequacy, fluency, Input: source sentence s Input: paraphrase application A Input: paraphrase tables PTs Output: set of source units SU Output: set of target units TU Extract source units of s from PTs: SU={su1, …, sun} For each source unit sui Extract its target units TUi={tui1, …, tuim} For each target unit tuij If tuij cannot achieve the application A Delete tuij from TUi End If End For If TUi is empty Delete sui from SU End If End for Figure 2: The paraphrase planning algorithm. and usability of the paraphrases, respectively1. 
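The planning step of Figure 2 can be sketched in a few lines of Python. The table format, the naive substring matching, and the compression test used below are simplifications made for illustration; the actual system matches paraphrase phrases, POS-tagged patterns, and dependency collocations against the preprocessed sentence.

```python
def paraphrase_planning(sentence, paraphrase_tables, achieves):
    """Paraphrase planning (cf. the algorithm in Figure 2).

    `paraphrase_tables` is assumed to be a list of dicts mapping a source
    unit to its candidate target units with a score, e.g.
    [{"take into consideration": [("consider", 0.8), ...]}, ...].
    `achieves(src, tgt)` encodes the application A, e.g. for sentence
    compression it returns True when tgt is shorter than src in bytes.
    """
    plan = {}
    for table in paraphrase_tables:
        for src, targets in table.items():
            if src not in sentence:
                continue  # source unit does not occur in this sentence
            kept = [(tgt, score) for tgt, score in targets if achieves(src, tgt)]
            if kept:  # drop source units that have no usable target unit
                plan.setdefault(src, []).extend(kept)
    return plan

if __name__ == "__main__":
    tables = [{
        "take the overall situation into consideration":
            [("consider the overall situation", 0.7),
             ("take account of the overall situation", 0.6)],
        "actively promote": [("vigorously push forward", 0.5)],
    }]
    s = ("The US government should take the overall situation into "
         "consideration and actively promote bilateral high-tech trades.")
    shorter = lambda src, tgt: len(tgt.encode()) < len(src.encode())
    print(paraphrase_planning(s, tables, shorter))
```

On this example the second source unit is dropped entirely, because its only candidate is longer than the source and therefore cannot achieve the compression application.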
Paraphrase Model: Paraphrase generation is a decoding process. The input sentence s is first segmented into a sequence of I units ¯sI 1, which are then paraphrased to a sequence of units ¯tI 1. Let (¯si, ¯ti) be a pair of paraphrase units, their paraphrase likelihood is computed using a score function φpm(¯si, ¯ti). Thus the paraphrase score ppm(¯sI 1, ¯tI 1) between s and t is decomposed into: ppm(¯sI 1, ¯tI 1) = IY i=1 φpm(¯si, ¯ti)λpm (1) where λpm is the weight of the paraphrase model. Actually, it is defined similarly to the translation model in SMT (Koehn et al., 2003). In practice, the units of a sentence may be paraphrased using different PTs. Suppose we have K PTs, (¯ski, ¯tki) is a pair of paraphrase units from the k-th PT with the score function φk(¯ski, ¯tki), then Equation (1) can be rewritten as: ppm(¯sI 1, ¯tI 1) = K Y k=1 ( Y ki φk(¯ski, ¯tki)λk) (2) where λk is the weight for φk(¯ski, ¯tki). Equation (2) assumes that a pair of paraphrase units is from only one paraphrase table. However, 1The SPG model applies monotone decoding, which does not contain a reordering sub-model that is often used in SMT. Instead, we use the paraphrase patterns to achieve word reordering in paraphrase generation. 836 The US government should take the overall situation into consideration and actively promote bilateral high-tech trades. The US government The US administration The US government on overall situation overall interest overall picture overview situation as a whole whole situation take [NN_1] into consideration consider [NN_1] take into account [NN_1] take account of [NN_1] take [NN_1] into account take into consideration [NN_1] <promote, OBJ, trades> <sanction, OBJ, trades> <stimulate, OBJ, trades> <strengthen, OBJ, trades> <support, OBJ, trades> <sustain, OBJ, trades> Paraphrase application: sentence compression Figure 3: An example of paraphrase planning. we find that about 2% of the paraphrase units appear in two or more PTs. In this case, we only count the PT that provides the largest paraphrase score, i.e., ˆk = arg maxk{φk(¯si, ¯ti)λk}. In addition, note that there may be some units that cannot be paraphrased or prefer to keep unchanged during paraphrasing. Therefore, we have a self-paraphrase table in the K PTs, which paraphrases any separate word w into itself with a constant score c: φself(w, w) = c (we set c = e−1). Language Model: We use a tri-gram language model in this work. The language model based score for the paraphrase t is computed as: plm(t) = J Y j=1 p(tj|tj−2tj−1)λlm (3) where J is the length of t, tj is the j-th word of t, and λlm is the weight for the language model. Usability Model: The usability model prefers paraphrase units that can better achieve the application. The usability of t depends on paraphrase units it contains. Hence the usability model pum(¯sI 1, ¯tI 1) is decomposed into: pum(¯sI 1, ¯tI 1) = IY i=1 pum(¯si, ¯ti)λum (4) where λum is the weight for the usability model and pum(¯si, ¯ti) is defined as follows: pum(¯si, ¯ti) = eµ(¯si,¯ti) (5) We consider three applications, including sentence compression, simplification, and similarity computation. µ(¯si, ¯ti) is defined separately for each: • Sentence compression: Sentence compression2 is important for summarization, subtitle generation, and displaying texts in small screens such as cell phones. In this application, only the target units shorter than the sources are kept in paraphrase planning. We define µ(¯si, ¯ti) = len(¯si) −len(¯ti), where len(·) denotes the length of a unit in bytes. 
• Sentence simplification: Sentence simplification requires using common expressions in sentences so that readers can easily understand the meaning. Therefore, only the target units more frequent than the sources are kept in paraphrase planning. Here, the frequency of a unit is measured using the language model mentioned above3. Specifically, the langauge model assigns a score scorelm(·) for each unit and the unit with larger score is viewed as more frequent. We define µ(¯si, ¯ti) = 1 iff scorelm(¯ti) > scorelm(¯si). • Sentence similarity computation: Given a reference sentence s′, this application aims to paraphrase s into t, so that t is more similar (closer in wording) with s′ than s. This application is important for the automatic evaluation of machine translation and summarization, since we can paraphrase the human translations/summaries to make them more similar to the system outputs, which can refine the accuracy of the evaluation (Kauchak and Barzilay, 2006). For this application, 2This work defines compression as the shortening of sentence length in bytes rather than in words. 3To compute the language model based score, the matched patterns are instantiated and the matched collocations are connected with words between them. 837 only the target units that can enhance the similarity to the reference sentence are kept in planning. We define µ(¯si, ¯ti) = sim(¯ti, s′)− sim(¯si, s′), where sim(·, ·) is simply computed as the count of overlapping words. We combine the three sub-models based on a log-linear framework and get the SPG model: p(t|s) = K X k=1 (λk X ki log φk(¯ski, ¯tki)) + λlm J X j=1 log p(tj|tj−2tj−1) + λum I X i=1 µ(¯si, ¯ti) (6) 3.5 Paraphrase Resources We use five PTs in this work (except the selfparaphrase table), in which each pair of paraphrase units has a score assigned by the score function of the corresponding method. Paraphrase phrases (PT-1 to PT-3): Paraphrase phrases are extracted from three corpora: (1) Corp-1: bilingual parallel corpus, (2) Corp2: monolingual comparable corpus (comparable news articles reporting on the same event), and (3) Corp-3: monolingual parallel corpus (parallel translations of the same foreign novel). The details of the corpora, methods, and score functions are presented in (Zhao et al., 2008a). In our experiments, PT-1 is the largest, which contains 3,041,822 pairs of paraphrase phrases. PT-2 and PT-3 contain 92,358, and 17,668 pairs of paraphrase phrases, respectively. Paraphrase patterns (PT-4): Paraphrase patterns are also extracted from Corp-1. We applied the approach proposed in (Zhao et al., 2008b). Its basic assumption is that if two English patterns e1 and e2 are aligned with the same foreign pattern f, then e1 and e2 are possible paraphrases. One can refer to (Zhao et al., 2008b) for the details. PT-4 contains 1,018,371 pairs of paraphrase patterns. Paraphrase collocations (PT-5): Collocations4 can cover long distance dependencies in sentences. Thus paraphrase collocations are useful for SPG. We extract collocations from a monolingual 4A collocation is a lexically restricted word pair with a certain syntactic relation. This work only considers verbobject collocations, e.g., <promote, OBJ, trades>. corpus and use a binary classifier to recognize if any two collocations are paraphrases. Due to the space limit, we cannot introduce the detail of the approach. We assign the score “1” for any pair of paraphrase collocations. PT-5 contains 238,882 pairs of paraphrase collocations. 
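Putting the three sub-models together, the following Python sketch scores one candidate paraphrase according to equation (6). The data structures, the weight values, and the simplified µ functions for the three applications are assumptions for illustration; in the system the weights are tuned as described in the next section and the language-model term comes from a trained trigram model.

```python
import math

def spg_score(unit_pairs, lm_logprob, mu, lambdas):
    """Log-linear score of a candidate paraphrase (cf. equation 6).

    `unit_pairs` is assumed to be a list of (src_unit, tgt_unit, phi, k)
    tuples, where phi is the paraphrase-table score and k indexes which
    of the K tables the pair came from; `lm_logprob` is the trigram log
    probability of the whole output sentence; `mu(src, tgt)` is the
    application-specific usability term; `lambdas` holds the weights.
    """
    score = 0.0
    for src, tgt, phi, k in unit_pairs:
        score += lambdas["pt"][k] * math.log(phi)
    score += lambdas["lm"] * lm_logprob
    score += lambdas["um"] * sum(mu(src, tgt) for src, tgt, _, _ in unit_pairs)
    return score

# Simplified usability terms for the three applications described above.
def mu_compression(src, tgt):
    return len(src.encode()) - len(tgt.encode())

def mu_simplification(src, tgt, lm_score=lambda u: 0.0):
    # pass a real language-model scorer; the default is a placeholder
    return 1.0 if lm_score(tgt) > lm_score(src) else 0.0

def mu_similarity(src, tgt, reference=""):
    ref = set(reference.split())
    return len(set(tgt.split()) & ref) - len(set(src.split()) & ref)

if __name__ == "__main__":
    pairs = [("take into consideration", "consider", 0.8, 0)]
    lambdas = {"pt": [1.0], "lm": 0.5, "um": 0.2}  # assumed weights
    print(spg_score(pairs, lm_logprob=-12.3, mu=mu_compression, lambdas=lambdas))
```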
3.6 Parameter Estimation To estimate parameters λk(1 ≤k ≤K), λlm, and λum, we adopt the approach of minimum error rate training (MERT) that is popular in SMT (Och, 2003). In SMT, however, the optimization objective function in MERT is the MT evaluation criteria, such as BLEU. As we analyzed above, the BLEU-style criteria cannot be adapted in SPG. We therefore introduce a new optimization objective function in this paper. The basic assumption is that a paraphrase should contain as many correct unit replacements as possible. Accordingly, we design the following criteria: Replacement precision (rp): rp assesses the precision of the unit replacements, which is defined as rp = cdev(+r)/cdev(r), where cdev(r) is the total number of unit replacements in the generated paraphrases on the development set. cdev(+r) is the number of the correct replacements. Replacement rate (rr): rr measures the paraphrase degree on the development set, i.e., the percentage of words that are paraphrased. We define rr as: rr = wdev(r)/wdev(s), where wdev(r) is the total number of words in the replaced units on the development set, and wdev(s) is the number of words of all sentences on the development set. Replacement f-measure (rf): We use rf as the optimization objective function in MERT, which is similar to the conventional f-measure and leverages rp and rr: rf = (2 × rp × rr)/(rp + rr). We estimate parameters for each paraphrase application separately. For each application, we first ask two raters to manually label all possible unit replacements on the development set as correct or incorrect, so that rp, rr, and rf can be automatically computed under each set of parameters. The parameters that result in the highest rf on the development set are finally selected. 4 Experimental Setup Our SPG decoder is developed by remodeling Moses that is widely used in SMT (Hoang and Koehn, 2008). The POS tagger and dependency parser for sentence preprocessing are SVM838 Tool (Gimenez and Marquez, 2004) and MSTParser (McDonald et al., 2006). The language model is trained using a 9 GB English corpus. 4.1 Experimental Data Our method is not restricted in domain or sentence style. Thus any sentence can be used in development and test. However, for the sentence similarity computation purpose in our experiments, we want to evaluate if the method can enhance the stringlevel similarity between two paraphrase sentences. Therefore, for each input sentence s, we need a reference sentence s′ for similarity computation. Based on the above consideration, we acquire experiment data from the human references of the MT evaluation, which provide several human translations for each foreign sentence. In detail, we use the first translation of a foreign sentence as the source s and the second translation as the reference s′ for similarity computation. In our experiments, the development set contains 200 sentences and the test set contains 500 sentences, both of which are randomly selected from the human translations of 2008 NIST Open Machine Translation Evaluation: Chinese to English Task. 4.2 Evaluation Metrics The evaluation metrics for SPG are similar to the human evaluation for MT (Callison-Burch et al., 2007). The generated paraphrases are manually evaluated based on three criteria, i.e., adequacy, fluency, and usability, each of which has three scales from 1 to 3. Here is a brief description of the different scales for the criteria: Adequacy 1: The meaning is evidently changed. 2: The meaning is generally preserved. 
3: The meaning is completely preserved. Fluency 1: The paraphrase t is incomprehensible. 2: t is comprehensible. 3: t is a flawless sentence. Usability 1: t is opposite to the application purpose. 2: t does not achieve the application. 3: t achieves the application. 5 Results and Analysis We use our method to generate paraphrases for the three applications. Results show that the percentages of test sentences that can be paraphrased are 97.2%, 95.4%, and 56.8% for the applications of sentence compression, simplification, and similarity computation, respectively. The reason why the last percentage is much lower than the first two is that, for sentence similarity computation, many sentences cannot find unit replacements from the PTs that improve the similarity to the reference sentences. For the other applications, only some very short sentences cannot be paraphrased. Further results show that the average number of unit replacements in each sentence is 5.36, 4.47, and 1.87 for sentence compression, simplification, and similarity computation. It also indicates that sentence similarity computation is more difficult than the other two applications. 5.1 Evaluation of the Proposed Method We ask two raters to label the paraphrases based on the criteria defined in Section 4.2. The labeling results are shown in the upper part of Table 1. We can see that for adequacy and fluency, the paraphrases in sentence similarity computation get the highest scores. About 70% of the paraphrases are labeled “3”. This is because in sentence similarity computation, only the target units appearing in the reference sentences are kept in paraphrase planning. This constraint filters most of the noise. The adequacy and fluency scores of the other two applications are not high. The percentages of label “3” are around 30%. The main reason is that the average numbers of unit replacements for these two applications are much larger than sentence similarity computation. It is thus more likely to bring in incorrect unit replacements, which influence the quality of the generated paraphrases. The usability is needed to be manually labeled only for sentence simplification, since it can be automatically labeled in the other two applications. As shown in Table 1, for sentence simplification, most paraphrases are labeled “2” in usability, while merely less than 20% are labeled “3”. We conjecture that it is because the raters are not sensitive to the slight change of the simplification degree. Thus they labeled “2” in most cases. We compute the kappa statistic between the raters. Kappa is defined as K = P(A)−P(E) 1−P(E) (Carletta, 1996), where P(A) is the proportion of times that the labels agree, and P(E) is the proportion of times that they may agree by chance. We define P(E) = 1 3 , as the labeling is based on three point scales. The results show that the kappa statistics for adequacy and fluency are 0.6560 and 0.6500, which indicates a substantial agreement (K: 0.610.8) according to (Landis and Koch, 1977). 
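As a concrete illustration of the agreement computation just described, the following sketch derives the kappa value from two raters' parallel label lists with P(E) fixed to 1/3; the function name and list-based input are assumptions of the example.

def kappa(labels1, labels2, p_e=1.0 / 3.0):
    """Kappa with a fixed chance-agreement term, K = (P(A) - P(E)) / (1 - P(E)).

    labels1, labels2 -- parallel lists of labels (e.g. '1', '2', '3') from the two raters
    p_e              -- chance agreement P(E); fixed to 1/3 for the three-point scales
    """
    assert labels1 and len(labels1) == len(labels2)
    p_a = sum(a == b for a, b in zip(labels1, labels2)) / float(len(labels1))
    return (p_a - p_e) / (1.0 - p_e)

# e.g. a kappa(adequacy_rater1, adequacy_rater2) of 0.6560 corresponds to P(A) of about 0.77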
The 839 Adequacy (%) Fluency (%) Usability (%) 1 2 3 1 2 3 1 2 3 Sentence rater1 32.92 44.44 22.63 21.60 47.53 30.86 0 0 100 compression rater2 40.54 34.98 24.49 25.51 43.83 30.66 0 0 100 Sentence rater1 29.77 44.03 26.21 22.01 42.77 35.22 25.37 61.84 12.79 simplification rater2 33.33 35.43 31.24 24.32 39.83 35.85 30.19 51.99 17.82 Sentence rater1 7.75 24.30 67.96 7.75 22.54 69.72 0 0 100 similarity rater2 7.75 19.01 73.24 6.69 21.48 71.83 0 0 100 Baseline-1 rater1 47.31 30.75 21.94 43.01 33.12 23.87 rater2 47.10 30.11 22.80 34.41 41.51 24.09 Baseline-2 rater1 29.45 52.76 17.79 25.15 52.76 22.09 rater2 33.95 46.01 20.04 27.61 48.06 24.34 Table 1: The evaluation results of the proposed method and two baseline methods. kappa statistic for usability is 0.5849, which is only moderate (K: 0.41-0.6). Table 2 shows an example of the generated paraphrases. A source sentence s is paraphrased in each application and we can see that: (1) for sentence compression, the paraphrase t is 8 bytes shorter than s; (2) for sentence simplification, the words wealth and part in t are easier than their sources asset and proportion, especially for the non-native speakers; (3) for sentence similarity computation, the reference sentence s′ is listed below t, in which the words appearing in t but not in s are highlighted in blue. 5.2 Comparison with Baseline Methods In our experiments, we implement two baseline methods for comparison: Baseline-1: Baseline-1 follows the method proposed in (Quirk et al., 2004), which generates paraphrases using typical SMT tools. Similar to Quirk et al.’s method, we extract a paraphrase table for the SMT model from a monolingual comparable corpus (PT-2 described above). The SMT decoder used in Baseline-1 is Moses. Baseline-2: Baseline-2 extends Baseline-1 by combining multiple resources. It exploits all PTs introduced above in the same way as our proposed method. The difference from our method is that Baseline-2 does not take different applications into consideration. Thus it contains no paraphrase planning stage or the usability sub-model. We tune the parameters for the two baselines using the development data as described in Section 3.6 and evaluate them with the test data. Since paraphrase applications are not considered by the baselines, each baseline method outputs a single best paraphrase for each test sentence. The generation results show that 93% and 97.8% of the test sentences can be paraphrased by Baseline-1 and Baseline-2. The average number of unit replacements per sentence is 4.23 and 5.95, respectively. This result suggests that Baseline-1 is less capable than Baseline-2, which is mainly because its paraphrase resource is limited. The generated paraphrases are also labeled by our two raters and the labeling results can be found in the lower part of Table 1. As can be seen, Baseline-1 performs poorly compared with our method and Baseline-2, as the percentage of label “1” is the highest for both adequacy and fluency. This result demonstrates that it is necessary to combine multiple paraphrase resources to improve the paraphrase generation performance. Table 1 also shows that Baseline-2 performs comparably with our method except that it does not consider paraphrase applications. However, we are interested how many paraphrases generated by Baseline-2 can achieve the given applications by chance. 
After analyzing the results, we find that 24.95%, 8.79%, and 7.16% of the paraphrases achieve sentence compression, simplification, and similarity computation, respectively, which are much lower than our method. 5.3 Informal Comparison with Application Specific Methods Previous research regarded sentence compression, simplification, and similarity computation as totally different problems and proposed distinct method for each one. Therefore, it is interesting to compare our method to the application-specific methods. However, it is really difficult for us to 840 Source sentence Liu Lefei says that in the long term, in terms of asset allocation, overseas investment should occupy a certain proportion of an insurance company’s overall allocation. Sentence compression Liu Lefei says that in [the long run]phr, [in area of [asset allocation][NN 1]]pat, overseas investment should occupy [a [certain][JJ 1] part of [an insurance company’s overall allocation][NN 1]]pat. Sentence simplification Liu Lefei says that in [the long run]phr, in terms of [wealth]phr [distribution]phr, overseas investment should occupy [a [certain][JJ 1] part of [an insurance company’s overall allocation][NN 1]]pat. Sentence similarity Liu Lefei says that in [the long run]phr, in terms [of capital]phr allocation, overseas investment should occupy [the [certain][JJ 1] ratio of [an insurance company’s overall allocation][NN 1]]pat. (reference sentence: Liu Lefei said that in terms of capital allocation, outbound investment should make up a certain ratio of overall allocations for insurance companies in the long run .) Table 2: The generated paraphrases of a source sentence for different applications. The target units after replacement are shown in blue and the pattern slot fillers are in cyan. [·]phr denotes that the unit is a phrase, while [·]pat denotes that the unit is a pattern. There is no collocation replacement in this example. reimplement the methods purposely designed for these applications. Thus here we just conduct an informal comparison with these methods. Sentence compression: Sentence compression is widely studied, which is mostly reviewed as a word deletion task. Different from prior research, Cohn and Lapata (2008) achieved sentence compression using a combination of several operations including word deletion, substitution, insertion, and reordering based on a statistical model, which is similar to our paraphrase generation process. Besides, they also used paraphrase patterns extracted from bilingual parallel corpora (like our PT-4) as a kind of rewriting resource. However, as most other sentence compression methods, their method allows information loss after compression, which means that the generated sentences are not necessarily paraphrases of the source sentences. Sentence Simplification: Carroll et al. (1999) has proposed an automatic text simplification method for language-impaired readers. Their method contains two main parts, namely the lexical simplifier and syntactic simplifier. The former one focuses on replacing words with simpler synonyms, while the latter is designed to transfer complex syntactic structures into easy ones (e.g., replacing passive sentences with active forms). Our method is, to some extent, simpler than Carroll et al.’s, since our method does not contain syntactic simplification strategies. We will try to address sentence restructuring in our future work. Sentence Similarity computation: Kauchak and Barzilay (2006) have tried paraphrasing-based sentence similarity computation. 
They paraphrase a sentence s by replacing its words with WordNet synonyms, so that s can be more similar in wording to another sentence s′. A similar method has also been proposed in (Zhou et al., 2006), which uses paraphrase phrases like our PT-1 instead of WordNet synonyms. These methods can be roughly viewed as special cases of ours, which only focus on the sentence similarity computation application and only use one kind of paraphrase resource. 6 Conclusions and Future Work This paper proposes a method for statistical paraphrase generation. The contributions are as follows. (1) It is the first statistical model specially designed for paraphrase generation, which is based on the analysis of the differences between paraphrase generation and other researches, especially machine translation. (2) It generates paraphrases for different applications with a uniform model, rather than presenting distinct methods for each application. (3) It uses multiple resources, including paraphrase phrases, patterns, and collocations, to relieve data shortage and generate more varied and interesting paraphrases. Our future work will be carried out along two directions. First, we will improve the components of the method, especially the paraphrase planning algorithm. The algorithm currently used is simple but greedy, which may miss some useful paraphrase units. Second, we will extend the method to other applications, We hope it can serve as a universal framework for most if not all applications. Acknowledgements The research was supported by NSFC (60803093, 60675034) and 863 Program (2008AA01Z144). Special thanks to Wanxiang Che, Ruifang He, Yanyan Zhao, Yuhang Guo and the anonymous reviewers for insightful comments and suggestions. 841 References Regina Barzilay and Lillian Lee. 2003. Learning to Paraphrase: An Unsupervised Approach Using Multiple-Sequence Alignment. In Proceedings of HLT-NAACL, pages 16-23. Igor A. Bolshakov and Alexander Gelbukh. 2004. Synonymous Paraphrasing Using WordNet and Internet. In Proceedings of NLDB, pages 312-323. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) Evaluation of Machine Translation. In Proceedings of ACL Workshop on Statistical Machine Translation, pages 136-158. Jean Carletta. 1996. Assessing Agreement on Classification Tasks: The Kappa Statistic. In Computational Linguistics, 22(2): 249-254. John Carroll, Guido Minnen, Darren Pearce, Yvonne Canning, Siobhan Devlin, John Tait. 1999. Simplifying Text for Language-Impaired Readers. In Proceedings of EACL, pages 269-270. Trevor Cohn and Mirella Lapata. 2008. Sentence Compression Beyond Word Deletion In Proceedings of COLING, pages 137-144. Pablo Ariel Duboue and Jennifer Chu-Carroll. 2006. Answering the Question You Wish They Had Asked: The impact of paraphrasing for Question Answering. In Proceedings of HLT-NAACL, pages 33-36. Jesus Gimenez and Lluis Marquez. 2004. SVMTool: A general POS tagger generator based on Support Vector Machines. In Proceedings of LREC, pages 43-46. Hieu Hoang and Philipp Koehn. 2008. Design of the Moses Decoder for Statistical Machine Translation. In Proceedings of ACL Workshop on Software engineering, testing, and quality assurance for NLP, pages 58-65. Lidija Iordanskaja, Richard Kittredge, and Alain Polgu`ere. 1991. Lexical Selection and Paraphrase in a Meaning-Text Generation Model. In C´ecile L. Paris, William R. Swartout, and William C. 
Mann (Eds.): Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 293-312. David Kauchak and Regina Barzilay. 2006. Paraphrasing for Automatic Evaluation. In Proceedings of HLT-NAACL, pages 455-462. Philipp Koehn, Franz Josef Och, Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT-NAACL, pages 127-133. Raymond Kozlowski, Kathleen F. McCoy, and K. Vijay-Shanker. 2003. Generation of single-sentence paraphrases from predicate/argument structure using lexico-grammatical resources. In Proceedings of IWP, pages 1-8. J. R. Landis and G. G. Koch. 1977. The Measurement of Observer Agreement for Categorical Data. In Biometrics 33(1): 159-174. De-Kang Lin and Patrick Pantel. 2001. Discovery of Inference Rules for Question Answering. In Natural Language Engineering 7(4): 343-360. Ryan McDonald, Kevin Lerman, and Fernando Pereira. 2006. Multilingual Dependency Parsing with a Two-Stage Discriminative Parser. In Proceedings of CoNLL. Kathleen R. McKeown. 1979. Paraphrasing Using Given and New Information in a Question-Answer System. In Proceedings of ACL, pages 67-72. Masaki Murata and Hitoshi Isahara. 2001. Universal Model for Paraphrasing - Using Transformation Based on a Defined Criteria. In Proceedings of NLPRS, pages 47-54. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of ACL, pages 160-167. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of ACL, pages 311-318. Richard Power and Donia Scott. 2005. Automatic generation of large-scale paraphrases. In Proceedings of IWP, pages 73-79. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual Machine Translation for Paraphrase Generation. In Proceedings of EMNLP, pages 142149. Tetsuro Takahashi, Tomoyam Iwakura, Ryu Iida, Atsushi Fujita, Kentaro Inui. 2001. KURA: A Transfer-based Lexico-structural Paraphrasing Engine. In Proceedings of NLPRS, pages 37-46. Shiqi Zhao, Cheng Niu, Ming Zhou, Ting Liu, and Sheng Li. 2008a. Combining Multiple Resources to Improve SMT-based Paraphrasing Model. In Proceedings of ACL-08:HLT, pages 1021-1029. Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2008b. Pivot Approach for Extracting Paraphrase Patterns from Bilingual Corpora. In Proceedings of ACL-08:HLT, pages 780-788. Liang Zhou, Chin-Yew Lin, Dragos Stefan Munteanu, and Eduard Hovy. 2006. ParaEval: Using Paraphrases to Evaluate Summaries Automatically. In Proceedings of HLT-NAACL, pages 447-454. Chengqing Zong, Yujie Zhang, Kazuhide Yamamoto, Masashi Sakamoto, Satoshi Shirai. 2001. Approach to Spoken Chinese Paraphrasing Based on Feature Extraction. In Proceedings of NLPRS, pages 551556. 842
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 843–851, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Semi-Supervised Cause Identification from Aviation Safety Reports Isaac Persing and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {persingq,vince}@hlt.utdallas.edu Abstract We introduce cause identification, a new problem involving classification of incident reports in the aviation domain. Specifically, given a set of pre-defined causes, a cause identification system seeks to identify all and only those causes that can explain why the aviation incident described in a given report occurred. The difficulty of cause identification stems in part from the fact that it is a multi-class, multilabel categorization task, and in part from the skewness of the class distributions and the scarcity of annotated reports. To improve the performance of a cause identification system for the minority classes, we present a bootstrapping algorithm that automatically augments a training set by learning from a small amount of labeled data and a large amount of unlabeled data. Experimental results show that our algorithm yields a relative error reduction of 6.3% in F-measure for the minority classes in comparison to a baseline that learns solely from the labeled data. 1 Introduction Automatic text classification is one of the most important applications in natural language processing (NLP). The difficulty of a text classification task depends on various factors, but typically, the task can be difficult if (1) the amount of labeled data available for learning the task is small; (2) it involves multiple classes; (3) it involves multilabel categorization, where more than one label can be assigned to each document; (4) the class distributions are skewed, with some categories significantly outnumbering the others; and (5) the documents belong to the same domain (e.g., movie review classification). In particular, when the documents to be classified are from the same domain, they tend to be more similar to each other with respect to word usage, thus making the classes less easily separable. This is one of the reasons why topic-based classification, even with multiple classes as in the 20 Newsgroups dataset1, tends to be easier than review classification, where reviews from the same domain are to be classified according to the sentiment expressed2. In this paper, we introduce a new text classification problem involving the Aviation Safety Reporting System (ASRS) that can be viewed as a difficult task along each of the five dimensions discussed above. Established in 1967, ASRS collects voluntarily submitted reports about aviation safety incidents written by flight crews, attendants, controllers, and other related parties. These incident reports are made publicly available to researchers for automatic analysis, with the ultimate goal of improving the aviation safety situation. One central task in the automatic analysis of these reports is cause identification, or the identification of why an incident happened. Aviation safety experts at NASA have identified 14 causes (or shaping factors in NASA terminology) that could explain why an incident occurred. Hence, cause identification can be naturally recast as a text classification task: given an incident report, determine which of a set of 14 shapers contributed to the occurrence of the incident described in the report. 
As mentioned above, cause identification is considered challenging along each of the five aforementioned dimensions. First, there is a scarcity of incident reports labeled with the shapers. This can be attributed to the fact that there has been very little work on this task. While the NASA researchers have applied a heuristic method for labeling a report with shapers (Posse 1http://kdd.ics.uci.edu/databases/20newsgroups/ 2Of course, the fact that sentiment classification requires a deeper understanding of a text also makes it more difficult than topic-based text classification (Pang et al., 2002). 843 et al., 2005), the method was evaluated on only 20 manually labeled reports, which are not made publicly available. Second, the fact that this is a 14-class classification problem makes it more challenging than a binary classification problem. Third, a report can be labeled with more than one category, as several shapers can contribute to the occurrence of an aviation incident. Fourth, the class distribution is very skewed: based on an analysis of our 1,333 annotated reports, 10 of the 14 categories can be considered minority classes, which account for only 26% of the total number of labels associated with the reports. Finally, our cause identification task is domain-specific, involving the classification of documents that all belong to the aviation domain. This paper focuses on improving the accuracy of minority class prediction for cause identification. Not surprisingly, when trained on a dataset with a skewed class distribution, most supervised machine learning algorithms will exhibit good performance on the majority classes, but relatively poor performance on the minority classes. Unfortunately, achieving good accuracies on the minority classes is very important in our task of identifying shapers from aviation safety reports, where 10 out of the 14 shapers are minority classes, as mentioned above. Minority class prediction has been tackled extensively in the machine learning literature, using methods that typically involve sampling and re-weighting of training instances, with the goal of creating a less skewed class distribution (e.g., Pazzani et al. (1994), Fawcett (1996), Kubat and Matwin (1997)). Such methods, however, are unlikely to perform equally well for our cause identification task given our small labeled set, as the minority class prediction problem is complicated by the scarcity of labeled data. More specifically, given the scarcity of labeled data, many words that are potentially correlated with a shaper (especially a minority shaper) may not appear in the training set, and the lack of such useful indicators could hamper the acquisition of an accurate classifier via supervised learning techniques. We propose to address the problem of minority class prediction in the presence of a small training set by means of a bootstrapping approach, where we introduce an iterative algorithm to (1) use a small set of labeled reports and a large set of unlabeled reports to automatically identify words that are most relevant to the minority shaper under consideration, and (2) augment the labeled data by using the resulting words to annotate those unlabeled reports that can be confidently labeled. We evaluate our approach using cross-validation on 1,333 manually annotated reports. 
In comparison to a supervised baseline approach where a classifier is acquired solely based on the training set, our bootstrapping approach yields a relative error reduction of 6.3% in F-measure for the minority classes. In sum, the contributions of our work are threefold. First, we introduce a new, challenging text classification problem, cause identification from aviation safety reports, to the NLP community. Second, we created an annotated dataset for cause identification that is made publicly available for stimulating further research on this problem3. Third, we introduce a bootstrapping algorithm for improving the prediction of minority classes in the presence of a small training set. The rest of the paper is organized as follows. In Section 2, we present the 14 shapers. Section 3 explains how we preprocess and annotate the reports. Sections 4 and 5 describe the baseline approaches and our bootstrapping algorithm, respectively. We present results in Section 6, discuss related work in Section 7, and conclude in Section 8. 2 Shaping Factors As mentioned in the introduction, the task of cause identification involves labeling an incident report with all the shaping factors that contributed to the occurrence of the incident. Table 1 lists the 14 shaping factors, as well as a description of each shaper taken verbatim from Posse et al. (2005). As we can see, the 14 classes are not mutually exclusive. For instance, a lack of familiarity with equipment often implies a deficit in proficiency in its use, so the two shapers frequently co-occur. In addition, while some classes cover a specific and well-defined set of issues (e.g., Illusion), some encompass a relatively large range of situations. For instance, resource deficiency can include problems with equipment, charts, or even aviation personnel. Furthermore, ten shaping factors can be considered minority classes, as each of them account for less than 10% of the labels. Accurately predicting minority classes is important in this domain because, for example, the physical factors minority shaper is frequently associated with incidents involving near-misses between aircraft. 3http://www.hlt.utdallas.edu/∼persingq/ASRSdataset.html 844 Id Shaping Factor Description % 1 Attitude Any indication of unprofessional or antagonistic attitude by a controller or flight crew member, e.g., complacency or get-homeitis (in a hurry to get home). 2.4 2 Communication Environment Interferences with communications in the cockpit such as noise, auditory interference, radio frequency congestion, or language barrier. 5.5 3 Duty Cycle A strong indication of an unusual working period, e.g., a long day, flying very late at night, exceeding duty time regulations, having short and inadequate rest periods. 1.8 4 Familiarity A lack of factual knowledge, such as new to or unfamiliar with company, airport, or aircraft. 3.2 5 Illusion Bright lights that cause something to blend in, black hole, white out, sloping terrain, etc. 0.1 6 Other Anything else that could be a shaper, such as shift change, passenger discomfort, or disorientation. 13.3 7 Physical Environment Unusual physical conditions that could impair flying or make things difficult. 16.0 8 Physical Factors Pilot ailment that could impair flying or make things more difficult, such as being tired, drugged, incapacitated, suffering from vertigo, illness, dizziness, hypoxia, nausea, loss of sight or hearing. 
2.2 9 Preoccupation A preoccupation, distraction, or division of attention that creates a deficit in performance, such as being preoccupied, busy (doing something else), or distracted. 6.7 10 Pressure Psychological pressure, such as feeling intimidated, pressured, or being low on fuel. 1.8 11 Proficiency A general deficit in capabilities, such as inexperience, lack of training, not qualified, or not current. 14.4 12 Resource Deficiency Absence, insufficient number, or poor quality of a resource, such as overworked or unavailable controller, insufficient or out-of-date chart, malfunctioning or inoperative or missing equipment. 30.0 13 Taskload Indicators of a heavy workload or many tasks at once, such as short-handed crew. 1.9 14 Unexpected Something sudden and surprising that is not expected. 0.6 Table 1: Descriptions of shaping factor classes. The “%” column shows the percent of labels the shapers account for. 3 Dataset We downloaded our corpus from the ASRS website4. The corpus consists of 140,599 incident reports collected during the period from January 1998 to December 2007. Each report is a free text narrative that describes not only why an incident happened, but also what happened, where it happened, how the reporter felt about the incident, the reporter’s opinions of other people involved in the incident, and any other comments the reporter cared to include. In other words, a lot of information in the report is irrelevant to (and thus complicates) the task of cause identification. 3.1 Preprocessing Unlike newswire articles, at which many topicbased text classification tasks are targeted, the ASRS reports are informally written using various domain-specific abbreviations and acronyms, tend to contain poor grammar, and have capitalization information removed, as illustrated in the following sentence taken from one of the reports. HAD BEEN CLRED FOR APCH BY ZOA AND HAD BEEN HANDED OFF TO SANTA ROSA TWR. 4http://asrs.arc.nasa.gov/ This sentence is grammatically incorrect (due to the lack of a subject), and contains abbreviations such as CLRED, APCH, and TWR. This makes it difficult for a non-aviation expert to understand. To improve readability (and hence facilitate the annotation process), we preprocess each report as follows. First, we expand the abbreviations/acronyms with the help of an official list of acronyms/abbreviations and their expanded forms5. Second, though not as crucial as the first step, we heuristically restore the case of the words by relying on an English lexicon: if a word appears in the lexicon, we assume that it is not a proper name, and therefore convert it into lowercase. After preprocessing, the example sentence appears as had been cleared for approach by ZOA and had been handed off to santa rosa tower. Finally, to facilitate automatic analysis, we stem each word in the narratives. 3.2 Human Annotation Next, we randomly picked 1,333 preprocessed reports and had two graduate students not affiliated 5See http://akama.arc.nasa.gov/ASRSDBOnline/pdf/ ASRS Decode.pdf. In the very infrequently-occurring case where the same abbreviation or acronym may have more than expansion, we arbitrarily chose one of the possibilities. 
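A minimal sketch of the preprocessing steps described above (abbreviation expansion, heuristic case restoration, and stemming) might look as follows; the expansion table, lexicon, and stemmer are passed in as parameters because the paper only specifies the official ASRS decode list, an English lexicon, and an unspecified stemmer.

import re

def preprocess_report(narrative, expansions, english_lexicon, stem):
    """Expand ASRS abbreviations, heuristically restore case, and stem a raw narrative.

    narrative       -- raw report text (upper-cased and abbreviation-heavy)
    expansions      -- dict mapping abbreviations/acronyms to their expanded forms
    english_lexicon -- set of known English words (lower-cased)
    stem            -- stemming function applied to each resulting word
    """
    out = []
    for tok in re.findall(r"[A-Za-z]+|\S", narrative):
        # Step 1: expand abbreviations/acronyms using the official decode list.
        expanded = expansions.get(tok.upper(), tok)
        for word in expanded.split():
            # Step 2: if the word is in the lexicon, assume it is not a proper
            # name and lowercase it; otherwise leave it untouched (e.g. ZOA).
            if word.lower() in english_lexicon:
                word = word.lower()
            # Step 3: stem to facilitate automatic analysis.
            out.append(stem(word))
    return " ".join(out)

# e.g. preprocess_report("HAD BEEN CLRED FOR APCH BY ZOA",
#                        {"CLRED": "cleared", "APCH": "approach"}, lexicon, stemmer.stem)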
845 Id Total (%) F1 F2 F3 F4 F5 1 52 (3.9) 11 7 7 17 10 2 119 (8.9) 29 29 22 16 23 3 38 (2.9) 10 5 6 9 8 4 70 (5.3) 11 12 9 14 24 5 3 (0.2) 0 0 0 1 2 6 289 (21.7) 76 44 60 42 67 7 348 (26.1) 73 63 82 59 71 8 48 (3.6) 11 14 8 11 4 9 145 (10.9) 29 25 38 28 25 10 38 (2.9) 12 10 4 7 5 11 313 (23.5) 65 50 74 46 78 12 652 (48.9) 149 144 125 123 111 13 42 (3.2) 7 8 8 6 13 14 14 (1.1) 3 3 3 3 2 Table 2: Number of occurrences of each shaping factor in the dataset. The “Total” column shows the number of narratives labeled with each shaper and the percentage of narratives tagged with each shaper in the 1,333 labeled narrative set. The “F” columns show the number narratives associated with each shaper in folds F1 – F5. x (# Shapers) 1 2 3 4 5 6 Percentage 53.6 33.2 10.3 2.7 0.2 0.1 Table 3: Percentage of documents with x labels. with this research independently annotate them with shaping factors, based solely on the definitions presented in Table 1. To measure interannotator agreement, we compute Cohen’s Kappa (Carletta, 1996) from the two sets of annotations, obtaining a Kappa value of only 0.43. This not only suggests the difficulty of the cause identification task, but also reveals the vagueness inherent in the definition of the 14 shapers. As a result, we had the two annotators re-examine each report for which there was a disagreement and reach an agreement on its final set of labels. Statistics of the annotated dataset can be found in Table 2, where the “Total” column shows the size of each of the 14 classes, expressed both as the number of reports that are labeled with a particular shaper and as a percent (in parenthesis). Since we will perform 5-fold cross validation in our experiments, we also show the number of reports labeled with each shaper under the “F” columns for each fold. To get a better idea of how many reports have multiple labels, we categorize the reports according to the number of labels they contain in Table 3. 4 Baseline Approaches In this section, we describe two baseline approaches to cause identification. Since our ultimate goal is to evaluate the effectiveness of our bootstrapping algorithm, the baseline approaches only make use of small amounts of labeled data for acquiring classifiers. More specifically, both baselines recast the cause identification problem as a set of 14 binary classification problems, one for predicting each shaper. In the binary classification problem for predicting shaper si, we create one training instance from each document in the training set, labeling the instance as positive if the document has si as one of its labels, and negative otherwise. After creating training instances, we train a binary classifier, ci, for predicting si, employing as features the top 50 unigrams that are selected according to information gain computed over the training data (see Yang and Pedersen (1997)). The SVM learning algorithm as implemented in the LIBSVM software package (Chang and Lin, 2001) is used for classifier training, owing to its robust performance on many text classification tasks. In our first baseline, we set all the learning parameters to their default values. As noted before, we divide the 1,333 annotated reports into five folds of roughly equal size, training the classifiers on four folds and applying them separately to the remaining fold. Results are reported in terms of precision (P), recall (R), and F-measure (F), which are computed by aggregating over the 14 shapers as follows. 
Let tp_i be the number of test reports correctly labeled as positive by c_i, p_i be the total number of test reports labeled as positive by c_i, and n_i be the total number of test reports that belong to s_i according to the gold standard. Then

P = Σ_i tp_i / Σ_i p_i,    R = Σ_i tp_i / Σ_i n_i,    and    F = 2PR / (P + R).

Our second baseline is similar to the first, except that we tune the classification threshold (CT) to optimize F-measure. More specifically, recall that LIBSVM trains a classifier that by default employs a CT of 0.5, thus classifying an instance as positive if and only if the probability that it belongs to the positive class is at least 0.5. However, this may not be the optimal threshold to use as far as performance is concerned, especially for the minority classes, where the class distribution is skewed. This is the motivation behind tuning the CT of each classifier. To ensure a fair comparison with the first baseline, we do not employ additional labeled data for parameter tuning; rather, we reserve 25% of the available training data for tuning and use the remaining 75% for classifier acquisition. This amounts to using three folds for training and one fold for development in each cross-validation experiment. Using the development data, we tune the 14 CTs jointly to optimize overall F-measure. However, an exact solution to this optimization problem is computationally expensive. Consequently, we find a local maximum by employing a local search algorithm, which alters one parameter at a time to optimize F-measure while holding the remaining parameters fixed.

5 Our Bootstrapping Algorithm

One of the potential weaknesses of the two baselines described in the previous section is that the classifiers are trained on only a small amount of labeled data. This could have an adverse effect on the accuracy of the resulting classifiers, especially those for the minority classes. The situation is somewhat aggravated by the fact that we adopt a one-versus-all scheme for generating training instances for a particular shaper, which, together with the small amount of labeled data, implies that only a couple of positive instances may be available for training the classifier for a minority class. To alleviate the data scarcity problem and improve the accuracy of the classifiers, we propose in this section a bootstrapping algorithm that automatically augments a training set by exploiting a large amount of unlabeled data. The basic idea behind the algorithm is to iteratively identify words that are high-quality indicators of the positive or negative examples, and then automatically label unlabeled documents that contain a sufficient number of such indicators. Our bootstrapping algorithm, shown in Figure 1, aims to augment the set of positive and negative training instances for a given shaper. The main function, Train, takes as input four arguments. The first two arguments, P and N, are the positive and negative instances, respectively, generated by the one-versus-all scheme from the initial training set, as described in the previous section. The third argument, U, is the unlabeled set of documents, which consists of all but the documents in the training set. In particular, U contains the documents in the development and test sets. Hence, we are essentially assuming access to the test documents (but not their labels) during the training process, as in a transductive learning setting. The last argument, k, is the number of bootstrapping iterations.
In addition, the algoTrain(P, N, U, k) Inputs: P: positively labeled training examples of shaper x N: negatively labeled training examples of shaper x U: set of unlabeled narratives in corpus k: number of bootstrapping iterations PW ←∅ NW ←∅ for i = 0 to k −1 do if |P| > |N| then [P, PW ] ←ExpandTrainingSet(P,N, U, PW ) else [N, NW ] ←ExpandTrainingSet(N,P, U, NW ) end if end for ExpandTrainingSet(A,B, U, W ) Inputs: A, B, U: narrative sets W : unigram feature set for j = 1 to 4 do t ←arg maxt/∈W  log( C(t,A) C(t,B)+1)  // C(t, X): number of narratives in X containing t W ←W ∪{t} end for return [A ∪S(W, U), W ] // S(W, U): narratives in U containing ≥3 words in W Figure 1: Our bootstrapping algorithm. rithm uses two variables, PW and NW, to store the sets of high-quality indicators for the positive instances and the negative instances, respectively, that are found during the bootstrapping process. Next, we begin our k bootstrapping iterations. In each iteration, we expand either P or N, depending on their relative sizes. In order to keep the two sets as close in size as possible, we choose to expand the smaller of the two sets.6 After that, we execute the function ExpandTrainingSet to expand the selected set. Without loss of generality, assume that P is chosen for expansion. To do this, ExpandTrainingSet selects four words that seem much more likely to appear in P than in N from the set of candidate words7. To select these words, we calculate the log likelihood ratio log( C(t,P ) C(t,N)+1) for each candidate word t, where C(t, P) is the number of narratives in P that contain t, and C(t, N) similarly is the number of narratives in N that contain t. If this ratio is large, 6It may seem from the way P and N are constructed that N is almost always larger than P and therefore is unlikely to be selected for expansion. However, the ample size of the unlabeled set means that the algorithm still adds large numbers of narratives to the training data. Hence, even for minority classes, P often grows larger than N by iteration 3. 7A candidate word is a word that appears in the training set (P ∪N) at least four times. 847 we posit that t is a good indicator of P. Note that incrementing the count in the denominator by one has a smoothing effect: it avoids selecting words that appears infrequently in P and not at all in N. There is a reason for selecting multiple words (rather than just one word) in each bootstrapping iteration: we want to prevent the algorithm from selecting words that are too specific to one subcategory of a shaping factor. For example, shaping factor 7 (Physical Environment) is composed largely of incidents influenced by weather phenomena. In one experiment, we tried selecting only one word per bootstrapping iteration. For shaper 7, the first word added to PW was “snow”. Upon the next iteration, the algorithm added “plow” to PW. While “plow” may itself be indicative of shaper 7, we believe its selection was due to the recent addition to P of a large number of narratives containing “snow”. Hence, by selecting four words per iteration, we are forcing the algorithm to “branch out” among these subcategories. After adding the selected words to PW, we augment P with all the unlabeled documents containing at least three words from PW. 
The reason we impose the “at least three” requirement is precision: we want to ensure, with a reasonable level of confidence, that the unlabeled documents chosen to augment P should indeed be labeled with the shaper under consideration, as incorrectly labeled documents would contaminate the labeled data, thus accelerating the deterioration of the quality of the automatically labeled data in subsequent bootstrapping iterations and adversely affecting the accuracy of the classifier trained on it (Pierce and Cardie, 2001). The above procedure is repeated in each bootstrapping iteration. As mentioned above, if N is smaller in size than P, we will expand N instead, adding to NW the four words that are the strongest indicators of a narrative being a negative example of the shaper under consideration, and augmenting N with those unlabeled narratives that contain at least three words from NW. The number of bootstrapping iterations is controlled by the input parameter k. As we will see in the next section, we run the bootstrapping algorithm for up to five iterations only, as the quality of the bootstrapped data deteriorates fairly rapidly. The exact value of k will be determined automatically using development data, as discussed below. After bootstrapping, the augmented training data can be used in combination with any of the two baseline approaches to acquire a classifier for identifying a particular shaper. Whichever baseline is used, we need to reserve one of the five folds to tune the parameter k in our cross validation experiments. In particular, if the second baseline is used, we will tune CT and k jointly on the development data using the local search algorithm described previously, where we adjust the values of both CT and k for one of the 14 classifiers in each step of the search process to optimize the overall F-measure score. 6 Evaluation 6.1 Baseline Systems Since our evaluation centers on the question of how effective our bootstrapping algorithm is in exploiting unlabeled documents to improve classifier performance, our two baselines only employ the available labeled documents to train the classifiers. Recall that our first baseline, which we call B0.5 (due to its being a baseline with a CT of 0.5), employs default values for all of the learning parameters. Micro-averaged 5-fold cross validation results of this baseline for all 14 shapers and for just 10 minority classes (due to our focus on improving minority class prediction) are expressed as percentages in terms of precision (P), recall (R), and F-measure (F) in the first row of Table 4. As we can see, the baseline achieves an F-measure of 45.4 (14 shapers) and 35.4 (10 shapers). Comparing these two results, the higher F-measure achieved using all 14 shapers can be attributed primarily to improvements in recall. This should not be surprising: as mentioned above, the number of positive instances of a minority class may be small, thus causing the resulting classifier to be biased towards classifying a document as negative. Instead of employing a CT value of 0.5, our second baseline, Bct, tunes CT using one of the training folds and simply trains a classifier on the remaining three folds. For parameter tuning, we tested CTs of 0.0, 0.05, . . ., 1.0. Results of this baseline are shown in row 2 of Table 4. 
In comparison to the first baseline, we see that F-measure improves considerably by 7.4% and 4.5% for 14 shapers and 10 shapers respectively8, which illus8It is important to note that the parameters are optimized separately for each pair of 14-shaper and 10-shaper experiments in this paper, and that the 10-shaper results are not 848 All 14 Classes 10 Minority Classes System P R F P R F B0.5 67.0 34.4 45.4 68.3 23.9 35.4 Bct 47.4 59.2 52.7 47.8 34.3 39.9 E0.5 60.9 40.4 48.6 53.2 35.3 42.4 Ect 50.5 54.9 52.6 49.1 39.4 43.7 Table 4: 5-fold cross validation results. trates the importance of employing the right CT for the cause identification task. 6.2 Our Approach Next, we evaluate the effectiveness of our bootstrapping algorithm in improving classifier performance. More specifically, we apply the two baselines separately to the augmented training set produced by our bootstrapping algorithm. When combining our bootstrapping algorithm with the first baseline, we produce a system that we call E0.5 (due to its being trained on the expanded training set with a CT of 0.5). E0.5 has only one tunable parameter, k (i.e., the number of bootstrapping iterations), whose allowable values are 0, 1, . . ., 5. When our algorithm is used in combination with the second baseline, we produce another system, Ect, which has both k and the CT as its parameters. The allowable values of these parameters, which are to be tuned jointly, are the same as those employed by Bct and E0.5. Results of E0.5 are shown in row 3 of Table 4. In comparison to B0.5, we see that F-measure increases by 3.2% and 7.0% for 14 shapers and 10 shapers, respectively. Such increases can be attributed to less imbalanced recall and precision values, as a result of a large gain in recall accompanied by a roughly equal drop in precision. These results are consistent with our intuition: recall can be improved with a larger training set, but precision can be hampered when learning from noisily labeled data. Overall, these results suggest that learning from the augmented training set is useful, especially for the minority classes. Results of Ect are shown in row 4 of Table 4. In comparison to Bct, we see mixed results: Fmeasure increases by 3.8% for 10 shapers (which represents a relative error reduction of 6.3%, but drops by 0.1% for 14 shapers. Overall, these results suggest that when the CT is tunable, training set expansion helps the minority classes but hurts the remaining classes. A closer look at the results reveals that the 0.1% F-measure drop is due simply extracted from the 14-shaper experiments. to a large drop in recall accompanied by a smaller gain in precision. In other words, for the four non-minority classes, the benefits obtained from using the bootstrapped documents can also be obtained by simply adjusting the CT. This could be attributed to the fact that a decent classifier can be trained using only the hand-labeled training examples for these four shapers, and as a result, the automatically labeled examples either provide very little new knowledge or are too noisy to be useful. On the other hand, for the 10 minority classes, the 3.8% gain in F-measure can be attributed to a simultaneous rise in recall and precision. Note that such gain cannot possibly be obtained by simply adjusting the CT, since adjusting the CT always results in higher recall and lower precision or vice versa. 
Overall, the simultaneous rise in recall and precision implies that the bootstrapped documents have provided useful knowledge, particularly in the form of positive examples, for the classifiers. Even though the bootstrapped documents are noisily labeled, they can still be used to improve the classifiers, as the set of initially labeled positive examples for the minority classes is too small. 6.3 Additional Analyses Quality of the bootstrapped data. Since the bootstrapped documents are noisily labeled, a natural question is: How noisy are they? To get a sense of the accuracy of the bootstrapped documents without further manual labeling, recall that our experimental setup resembles a transductive setting where the test documents are part of the unlabeled data, and consequently, some of them may have been automatically labeled by the bootstrapping algorithm. In fact, 137 documents in the five test folds were automatically labeled in the 14-shaper Ect experiments, and 69 automatically labeled documents were similarity obtained from the 10-shaper Ect experiments. For 14 shapers, the accuracies of the positively and negatively labeled documents are 74.6% and 97.1%, respectively, and the corresponding numbers for 10 shapers are 43.2% and 81.3%. These numbers suggest that negative examples can be acquired with high accuracies, but the same is not true for positive examples. Nevertheless, learning the 10 shapers from the not-so-accurately-labeled positive examples still allows us to outperform the corresponding baseline. 849 Shaping Factor Positive Expanders Negative Expanders Familiarity unfamiliar, layout, unfamilarity, rely Physical Environment cloud, snow, ice, wind Physical Factors fatigue, tire, night, rest, hotel, awake, sleep, sick declare, emergency, advisory, separation Preoccupation distract, preoccupied, awareness, situational, task, interrupt, focus, eye, configure, sleep declare, ice snow, crash, fire, rescue, anti, smoke Pressure bad, decision, extend, fuel, calculate, reserve, diversion, alternate Table 5: Example positive and negative expansion words collected by Ect for selected shaping factors. Analysis of the expanders. To get an idea of whether the words acquired during the bootstrapping process (henceforth expanders) make intuitive sense, we show in Table 5 example positive and negative expanders obtained for five shaping factors from the Ect experiments. As we can see, many of the positive expanders are intuitively obvious. We might, however, wonder about the connection between, for example, the shaper Familiarity and the word “rely”, or between the shaper Pressure and the word “extend”. We suspect that the bootstrapping algorithm is likely to make poor word selections particularly in the cases of the minority classes, where the positively labeled training data used to select expansion words is more sparse. As suggested earlier, poor word choice early in the algorithm is likely to cause even poorer word choice later on. On the other hand, while none of the negative expanders seem directly meaningful in relation to the shaper for which they were selected, some of them do appear to be related to other phenomena that may be negatively correlated with the shaper. For instance, the words “snow” and “ice” were selected as negative expanders for Preoccupation and also as positive expanders for Physical Environment. 
While these two shapers are only slightly negatively correlated, it is possible that Preoccupation may be strongly negatively correlated with the subset of Physical Environment incidents involving cold weather. 7 Related Work Since we recast cause identification as a text classification task and proposed a bootstrapping approach that targets at improving minority class prediction, the work most related to ours involves one or both of these topics. Guzm´an-Cabrera et al. (2007) address the problem of class skewness in text classification. Specifically, they first under-sample the majority classes, and then bootstrap the classifier trained on the under-sampled data using unlabeled documents collected from the Web. Minority classes can be expanded without the availability of unlabeled data as well. For example, Chawla et al. (2002) describe a method by which synthetic training examples of minority classes can be generated from other labeled training examples to address the problem of imbalanced data in a variety of domains. Nigam et al. (2000) propose an iterative semisupervised method that employs the EM algorithm in combination with the naive Bayes generative model to combine a small set of labeled documents and a large set of unlabeled documents. McCallum and Nigam (1999) suggest that the initial labeled examples can be obtained using a list of keywords rather than through annotated data, yielding an unsupervised algorithm. Similar bootstrapping methods are applicable outside text classification as well. One of the most notable examples is Yarowsky’s (1995) bootstrapping algorithm for word sense disambiguation. Beginning with a list of unlabeled contexts surrounding a word to be disambiguated and a list of seed words for each possible sense, the algorithm iteratively uses the seeds to label a training set from the unlabeled contexts, and then uses the training set to identify more seed words. 8 Conclusions We have introduced a new problem, cause identification from aviation safety reports, to the NLP community. We recast it as a multi-class, multilabel text classification task, and presented a bootstrapping algorithm for improving the prediction of minority classes in the presence of a small training set. Experimental results show that our algorithm yields a relative error reduction of 6.3% in F-measure over a purely supervised baseline when applied to the minority classes. By making our annotated dataset publicly available, we hope to stimulate research in this challenging problem. 850 Acknowledgments We thank the three anonymous reviewers for their invaluable comments on an earlier draft of the paper. We are indebted to Muhammad Arshad Ul Abedin, who provided us with a preprocessed version of the ASRS corpus and, together with Marzia Murshed, annotated the 1,333 documents. This work was supported in part by NASA Grant NNX08AC35A and NSF Grant IIS-0812261. References Jean Carletta. 1996. Assessing agreement on classification tasks: The Kappa statistic. Computational Linguistics, 22(2):249–254. Chih-Chung Chang and Chih-Jen Lin, 2001. LIBSVM: A library for support vector machines. Software available at http://www.csie.ntu. edu.tw/∼cjlin/libsvm. Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357. Tom Fawcett. 1996. Learning with skewed class distributions — summary of responses. Machine Learning List: Vol. 8, No. 20. 
Rafael Guzmán-Cabrera, Manuel Montes-y-Gómez, Paolo Rosso, and Luis Villaseñor Pineda. 2007. Taking advantage of the Web for text classification with imbalanced classes. In Proceedings of MICAI, pages 831–838.
Miroslav Kubat and Stan Matwin. 1997. Addressing the curse of imbalanced training sets: One-sided selection. In Proceedings of ICML, pages 179–186.
Andrew McCallum and Kamal Nigam. 1999. Text classification by bootstrapping with keywords, EM and shrinkage. In Proceedings of the ACL Workshop for Unsupervised Learning in Natural Language Processing, pages 52–58.
Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134.
Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86.
Michael Pazzani, Christopher Merz, Patrick Murphy, Kamal Ali, Timothy Hume, and Clifford Brunk. 1994. Reducing misclassification costs. In Proceedings of ICML, pages 217–225.
David Pierce and Claire Cardie. 2001. Limitations of co-training for natural language learning from large datasets. In Proceedings of EMNLP, pages 1–9.
Christian Posse, Brett Matzke, Catherine Anderson, Alan Brothers, Melissa Matzke, and Thomas Ferryman. 2005. Extracting information from narratives: An application to aviation safety reports. In Proceedings of the Aerospace Conference 2005, pages 3678–3690.
Yiming Yang and Jan O. Pedersen. 1997. A comparative study on feature selection in text categorization. In Proceedings of ICML, pages 412–420.
David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the ACL, pages 189–196.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 852–860, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP SMS based Interface for FAQ Retrieval Govind Kothari IBM India Research Lab [email protected] Sumit Negi IBM India Research Lab [email protected] Tanveer A. Faruquie IBM India Research Lab [email protected] Venkatesan T. Chakaravarthy IBM India Research Lab [email protected] L. Venkata Subramaniam IBM India Research Lab [email protected] Abstract Short Messaging Service (SMS) is popularly used to provide information access to people on the move. This has resulted in the growth of SMS based Question Answering (QA) services. However automatically handling SMS questions poses significant challenges due to the inherent noise in SMS questions. In this work we present an automatic FAQ-based question answering system for SMS users. We handle the noise in a SMS query by formulating the query similarity over FAQ questions as a combinatorial search problem. The search space consists of combinations of all possible dictionary variations of tokens in the noisy query. We present an efficient search algorithm that does not require any training data or SMS normalization and can handle semantic variations in question formulation. We demonstrate the effectiveness of our approach on two reallife datasets. 1 Introduction The number of mobile users is growing at an amazing rate. In India alone a few million subscribers are added each month with the total subscriber base now crossing 370 million. The anytime anywhere access provided by mobile networks and portability of handsets coupled with the strong human urge to quickly find answers has fueled the growth of information based services on mobile devices. These services can be simple advertisements, polls, alerts or complex applications such as browsing, search and e-commerce. The latest mobile devices come equipped with high resolution screen space, inbuilt web browsers and full message keypads, however a majority of the users still use cheaper models that have limited screen space and basic keypad. On such devices, SMS is the only mode of text communication. This has encouraged service providers to build information based services around SMS technology. Today, a majority of SMS based information services require users to type specific codes to retrieve information. For example to get a duplicate bill for a specific month, say June, the user has to type DUPBILLJUN. This unnecessarily constraints users who generally find it easy and intuitive to type in a “texting” language. Some businesses have recently allowed users to formulate queries in natural language using SMS. For example, many contact centers now allow customers to “text” their complaints and requests for information over SMS. This mode of communication not only makes economic sense but also saves the customer from the hassle of waiting in a call queue. Most of these contact center based services and other regular services like “AQA 63336”1 by Issuebits Ltd, GTIP2 by AlienPant Ltd., “Texperts”3 by Number UK Ltd. and “ChaCha”4 use human agents to understand the SMS text and respond to these SMS queries. The nature of texting language, which often as a rule rather than exception, has misspellings, non-standard abbreviations, transliterations, phonetic substitutions and omissions, makes it difficult to build automated question answering systems around SMS technology. 
This is true even for questions whose answers are well documented like a FAQ database. Unlike other automatic question answering systems that focus on generating or searching answers, in a FAQ database the question and answers are already provided by an expert. The task is then to identify the best matching question-answer pair for a given query. In this paper we present a FAQ-based question answering system over a SMS interface. Our 1http://www.aqa.63336.com/ 2http://www.gtip.co.uk/ 3http://www.texperts.com/ 4http://www.chacha.com/ 852 system allows the user to enter a question in the SMS texting language. Such questions are noisy and contain spelling mistakes, abbreviations, deletions, phonetic spellings, transliterations etc. Since mobile handsets have limited screen space, it necessitates that the system have high accuracy. We handle the noise in a SMS query by formulating the query similarity over FAQ questions as a combinatorial search problem. The search space consists of combinations of all possible dictionary variations of tokens in the noisy query. The quality of the solution, i.e. the retrieved questions is formalized using a scoring function. Unlike other SMS processing systems our model does not require training data or human intervention. Our system handles not only the noisy variations of SMS query tokens but also semantic variations. We demonstrate the effectiveness of our system on real-world data sets. The rest of the paper is organized as follows. Section 2 describes the relevant prior work in this area and talks about our specific contributions. In Section 3 we give the problem formulation. Section 4 describes the Pruning Algorithm which finds the best matching question for a given SMS query. Section 5 provides system implementation details. Section 6 provides details about our experiments. Finally we conclude in Section 7. 2 Prior Work There has been growing interest in providing access to applications, traditionally available on Internet, on mobile devices using SMS. Examples include Search (Schusteritsch et al., 2005), access to Yellow Page services (Kopparapu et al., 2007), Email 5, Blog 6 , FAQ retrieval 7 etc. As highlighted earlier, these SMS-based FAQ retrieval services use human experts to answer questions. There are other research and commercial systems which have been developed for general question and answering. These systems generally adopt one of the following three approaches: Human intervention based, Information Retrieval based, or Natural language processing based. Human intervention based systems exploit human communities to answer questions. These systems 8 are interesting because they suggest similar questions resolved in the past. Other systems 5http://www.sms2email.com/ 6http://www.letmeparty.com/ 7http://www.chacha.com/ 8http://www.answers.yahoo.com/ like Chacha and Askme9 use qualified human experts to answer questions in a timely manner. The information retrieval based system treat question answering as an information retrieval problem. They search large corpus of text for specific text, phrases or paragraphs relevant to a given question (Voorhees, 1999). In FAQ based question answering, where FAQ provide a ready made database of question-answer, the main task is to find the closest matching question to retrieve the relevant answer (Sneiders, 1999) (Song et al., 2007). The natural language processing based system tries to fully parse a question to discover semantic structure and then apply logic to formulate the answer (Molla et al., 2003). 
In another approach the questions are converted into a template representation which is then used to extract answers from some structured representation (Sneiders, 2002) (Katz et al., 2002). Except for human intervention based QA systems most of the other QA systems work in restricted domains and employ techniques such as named entity recognition, co-reference resolution, logic form transformation etc which require the question to be represented in linguistically correct format. These methods do not work for SMS based FAQ answering because of the high level of noise present in SMS text. There exists some work to remove noise from SMS (Choudhury et al., 2007) (Byun et al., 2007) (Aw et al., 2006) (Kobus et al., 2008). However, all of these techniques require aligned corpus of SMS and conventional language for training. Building this aligned corpus is a difficult task and requires considerable human effort. (Acharya et al., 2008) propose an unsupervised technique that maps non-standard words to their corresponding conventional frequent form. Their method can identify non-standard transliteration of a given token only if the context surrounding that token is frequent in the corpus. This might not be true in all domains. 2.1 Our Contribution To the best of our knowledge we are the first to handle issues relating to SMS based automatic question-answering. We address the challenges in building a FAQ-based question answering system over a SMS interface. Our method is unsupervised and does not require aligned corpus or explicit SMS normalization to handle noise. We propose an efficient algorithm that handles noisy 9http://www.askmehelpdesk.com/ 853 lexical and semantic variations. 3 Problem Formulation We view the input SMS S as a sequence of tokens S = s1, s2, . . . , sn. Let Q denote the set of questions in the FAQ corpus. Each question Q ∈Q is also viewed as a sequence of terms. Our goal is to find the question Q∗from the corpus Q that best matches the SMS S. As mentioned in the introduction, the SMS string is bound to have misspellings and other distortions, which needs to be taken care of while performing the match. In the preprocessing stage, we develop a Domain dictionary D consisting of all the terms that appear in the corpus Q. For each term t in the dictionary and each SMS token si, we define a similarity measure α(t, si) that measures how closely the term t matches the SMS token si. We say that the term t is a variant of si, if α(t, si) > 0; this is denoted as t ∼si. Combining the similarity measure and the inverse document frequency (idf) of t in the corpus, we define a weight function ω(t, si). The similarity measure and the weight function are discussed in detail in Section 5.1. Based on the weight function, we define a scoring function for assigning a score to each question in the corpus Q. The score measures how closely the question matches the SMS string S. Consider a question Q ∈Q. For each token si, the scoring function chooses the term from Q having the maximum weight; then the weight of the n chosen terms are summed up to get the score. Score(Q) = n X i=1 " max t:t∈Q and t∼si ω(t, si) # (1) Our goal is to efficiently find the question Q∗having the maximum score. 4 Pruning Algorithm We now describe algorithms for computing the maximum scoring question Q∗. For each token si, we create a list Li consisting of all terms from the dictionary that are variants of si. Consider a token si. We collect all the variants of si from the dictionary and compute their weights. 
The variants are then sorted in the descending order of their weights. At the end of the process we have n ranked lists. As an illustration, consider an SMS query “gud plc buy 10s strng on9”. Here, n = 6 and six lists of variants will be created as shown Figure 1: Ranked List of Variations in Figure 1. The process of creating the lists is speeded up using suitable indices, as explained in detail in Section 5. Now, we assume that the lists L1, L2, . . . , Ln are created and explain the algorithms for computing the maximum scoring question Q∗. We describe two algorithms for accomplishing the above task. The two algorithms have the same functionality i.e. they compute Q∗, but the second algorithm called the Pruning algorithm has a better run time efficiency compared to the first algorithm called the naive algorithm. Both the algorithms require an index which takes as input a term t from the dictionary and returns Qt, the set of all questions in the corpus that contain the term t. We call the above process as querying the index on the term t. The details of the index creation is discussed in Section 5.2. Naive Algorithm: In this algorithm, we scan each list Li and query the index on each term appearing in Li. The returned questions are added to a collection C. That is, C = n [ i=1  [ t∈Li Qt   The collection C is called the candidate set. Notice that any question not appearing in the candidate set has a score 0 and thus can be ignored. It follows that the candidate set contains the maximum scoring question Q∗. So, we focus on the questions in the collection C, compute their scores and find the maximum scoring question Q∗. The scores of the question appearing in C can be computed using Equation 1. The main disadvantage with the naive algorithm is that it queries each term appearing in each list and hence, suffers from high run time cost. We next explain the Pruning algorithm that avoids this pitfall and queries only a substantially small subset of terms appearing in the lists. Pruning Algorithm: The pruning algorithm 854 is inspired by the Threshold Algorithm (Fagin et al., 2001). The Pruning algorithm has the property that it queries fewer terms and ends up with a smaller candidate set as compared to the naive algorithm. The algorithm maintains a candidate set C of questions that can potentially be the maximum scoring question. The algorithm works in an iterative manner. In each iteration, it picks the term that has maximum weight among all the terms appearing in the lists L1, L2, . . . , Ln. As the lists are sorted in the descending order of the weights, this amounts to picking the maximum weight term amongst the first terms of the n lists. The chosen term t is queried to find the set Qt. The set Qt is added to the candidate set C. For each question Q ∈Qt, we compute its score Score(Q) and keep it along with Q. The score can be computed by Equation 1 (For each SMS token si, we choose the term from Q which is a variant of si and has the maximum weight. The sum of the weights of chosen terms yields Score(Q)). Next, the chosen term t is removed from the list. Each iteration proceeds as above. We shall now develop a thresholding condition such that when it is satisfied, the candidate set C is guaranteed to contain the maximum scoring question Q∗. Thus, once the condition is met, we stop the above iterative process and focus only on the questions in C to find the maximum scoring question. Consider end of some iteration in the above process. Suppose Q is a question not included in C. 
We can upperbound the score achievable by Q, as follows. At best, Q may include the top-most token from every list L1, L2, . . . , Ln. Thus, score of Q is bounded by Score(Q) ≤ n X i=0 ω(Li[1]). (Since the lists are sorted Li[1] is the term having the maximum weight in Li). We refer to the RHS of the above inequality as UB. Let bQ be the question in C having the maximum score. Notice that if bQ ≥UB, then it is guaranteed that any question not included in the candidate set C cannot be the maximum scoring question. Thus, the condition “ bQ ≥UB” serves as the termination condition. At the end of each iteration, we check if the termination condition is satisfied and if so, we can stop the iterative process. Then, we simply pick the question in C having the maximum score and return it. The algorithm is shown in Figure 2. In this section, we presented the Pruning algoProcedure Pruning Algorithm Input: SMS S = s1, s2, . . . , sn Output: Maximum scoring question Q∗. Begin Construct lists L1, L2, . . . , Ln //(see Section 5.3). // Li lists variants of si in descending //order of weight. Candidate list C = ∅. repeat j∗= argmaxiω(Li[1]) t∗= Lj∗[1] // t∗is the term having maximum weight among // all terms appearing in the n lists. Delete t∗from the list Lj∗. Query the index and fetch Qt∗ // Qt∗: the set of all questions in Q //having the term t∗ For each Q ∈Qt∗ Compute Score(Q) and add Q with its score into C UB = Pn i=1 ω(Li[1]) b Q = argmaxQ∈CScore(Q). if Score( b Q) ≥UB, then // Termination condition satisfied Output b Q and exit. forever End Figure 2: Pruning Algorithm rithm that efficiently finds the best matching question for the given SMS query without the need to go through all the questions in the FAQ corpus. The next section describes the system implementation details of the Pruning Algorithm. 5 System Implementation In this section we describe the weight function, the preprocessing step and the creation of lists L1, L2, . . . , Ln. 5.1 Weight Function We calculate the weight for a term t in the dictionary w.r.t. a given SMS token si. The weight function is a combination of similarity measure between t and si and Inverse Document Frequency (idf) of t. The next two subsections explain the calculation of the similarity measure and the idf in detail. 5.1.1 Similarity Measure Let D be the dictionary of all the terms in the corpus Q. For term t ∈D and token si of the SMS, the similarity measure α(t, si) between them is 855 α(t, si) =          LCSRatio(t,si) EditDistanceSMS(t,si) if t and si share same starting character * 0 otherwise (2) where LCSRatio(t, si) = length(LCS(t,si)) length(t) and LCS(t, si) is the Longest common subsequence between t and si. * The rationale behind this heuristic is that while typing a SMS, people typically type the first few characters correctly. Also, this heuristic helps limit the variants possible for a given token. The Longest Common Subsequence Ratio (LCSR) (Melamed, 1999) of two strings is the ratio of the length of their LCS and the length of the longer string. Since in SMS text, the dictionary term will always be longer than the SMS token, the denominator of LCSR is taken as the length of the dictionary term. We call this modified LCSR as the LCSRatio. Procedure EditDistanceSMS Input: term t, token si Output: Consonant Skeleton Edit distance Begin return LevenshteinDistance(CS(si), CS(t)) + 1 // 1 is added to handle the case where // Levenshtein Distance is 0 End Consonant Skeleton Generation (CS) 1. 
remove consecutive repeated characters // (call →cal) 2. remove all vowels //(waiting →wtng, great →grt) Figure 3: EditDistanceSMS The EditDistanceSMS shown in Figure 3 compares the Consonant Skeletons (Prochasson et al., 2007) of the dictionary term and the SMS token. If the consonant keys are similar, i.e. the Levenshtein distance between them is less, the similarity measure defined in Equation 2 will be high. We explain the rationale behind using the EditDistanceSMS in the similarity measure α(t, si) through an example. For the SMS token “gud” the most likely correct form is “good”. The two dictionary terms “good” and “guided” have the same LCSRatio of 0.5 w.r.t “gud”, but the EditDistanceSMS of “good” is 1 which is less than that of “guided”, which has EditDistanceSMS of 2 w.r.t “gud”. As a result the similarity measure between “gud” and “good” will be higher than that of “gud” and “guided”. 5.1.2 Inverse Document Frequency If f number of documents in corpus Q contain a term t and the total number of documents in Q is N, the Inverse Document Frequency (idf) of t is idf(t) = log N f (3) Combining the similarity measure and the idf of t in the corpus, we define the weight function ω(t, si) as ω(t, si) = α(t, si) ∗idf(t) (4) The objective behind the weight function is 1. We prefer terms that have high similarity measure i.e. terms that are similar to the SMS token. Higher the LCSRatio and lower the EditDistanceSMS, higher will be the similarity measure. Thus for example, for a given SMS token “byk”, similarity measure of word “bike“ is higher than that of “break”. 2. We prefer words that are highly discriminative i.e. words with a high idf score. The rationale for this stems from the fact that queries, in general, are composed of informative words. Thus for example, for a given SMS token “byk”, idf of “bike” will be more than that of commonly occurring word “back”. Thus, even though the similarity measure of “bike” and “back” are same w.r.t. “byk”, “bike” will get a higher weight than “back” due to its idf. We combine these two objectives into a single weight function multiplicatively. 5.2 Preprocessing Preprocessing involves indexing of the FAQ corpus, formation of Domain and Synonym dictionaries and calculation of the Inverse Document Frequency for each term in the Domain dictionary. As explained earlier the Pruning algorithm requires retrieval of all questions Qt that contains a given term t. To do this efficiently we index the FAQ corpus using Lucene10. Each question in the FAQ corpus is treated as a Document; it is tokenized using whitespace as delimiter and indexed. 10http://lucene.apache.org/java/docs/ 856 The Domain dictionary D is built from all terms that appear in the corpus Q. The weight calculation for Pruning algorithm requires the idf for a given term t. For each term t in the Domain dictionary, we query the Lucene indexer to get the number of Documents containing t. Using Equation 3, the idf(t) is calculated. The idf for each term t is stored in a Hashtable, with t as the key and idf as its value. Another key step in the preprocessing stage is the creation of the Synonym dictionary. The Pruning algorithm uses this dictionary to retrieve semantically similar questions. Details of this step is further elaborated in the List Creation sub-section. The Synonym Dictionary creation involves mapping each word in the Domain dictionary to it’s corresponding Synset obtained from WordNet11. 
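To make Section 5.1 concrete, the sketch below puts the similarity measure (Equation 2) and the weight function (Equation 4) together in Python. It is only an illustrative rendering of the formulas above: the helper names are ours, and the idf table is assumed to have been built from the FAQ corpus as described in Section 5.2.

```python
# A minimal sketch of the similarity measure (Eq. 2) and weight function (Eq. 4).
# Helper names are ours; the paper only specifies the formulas.

def consonant_skeleton(s):
    # 1. collapse consecutive repeated characters ("call" -> "cal")
    collapsed = []
    for ch in s:
        if not collapsed or collapsed[-1] != ch:
            collapsed.append(ch)
    # 2. drop vowels ("waiting" -> "wtng")
    return "".join(ch for ch in collapsed if ch not in "aeiou")

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def edit_distance_sms(term, token):
    # Levenshtein distance between consonant skeletons, plus 1 so that an
    # exact skeleton match does not cause a division by zero
    return levenshtein(consonant_skeleton(token), consonant_skeleton(term)) + 1

def lcs_length(a, b):
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def similarity(term, token):
    # alpha(t, s_i): LCSRatio divided by EditDistanceSMS, but only when the
    # first characters agree (the heuristic of Section 5.1.1)
    if not term or not token or term[0] != token[0]:
        return 0.0
    lcs_ratio = lcs_length(term, token) / len(term)
    return lcs_ratio / edit_distance_sms(term, token)

def weight(term, token, idf):
    # omega(t, s_i) = alpha(t, s_i) * idf(t)
    return similarity(term, token) * idf.get(term, 0.0)
```

With these definitions, similarity("good", "gud") evaluates to 0.5 and similarity("guided", "gud") to 0.25, matching the worked example above.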
5.3 List Creation Given a SMS S, it is tokenized using white-spaces to get a sequence of tokens s1, s2, . . . , sn. Digits occurring in SMS token (e.g ‘10s’ , “4get”) are replaced by string based on a manually crafted digitto-string mapping (“10” →“ten”). A list Li is setup for each token si using terms in the domain dictionary. The list for a single character SMS token is set to null as it is most likely to be a stop word . A term t from domain dictionary is included in Li if its first character is same as that of the token si and it satisfies the threshold condition length(LCS(t, si)) > 1. Each term t that is added to the list is assigned a weight given by Equation 4. Terms in the list are ranked in descending order of their weights. Henceforth, the term “list” implies a ranked list. For example the SMS query “gud plc 2 buy 10s strng on9” (corresponding question “Where is a good place to buy tennis strings online?”), is tokenized to get a set of tokens {‘gud’, ‘plc’, ‘2’, ‘buy’, ‘10s’, ‘strng’, ‘on9’}. Single character tokens such as ‘2’ are neglected as they are most likely to be stop words. From these tokens corresponding lists are setup as shown in Figure 1. 5.3.1 Synonym Dictionary Lookup To retrieve answers for SMS queries that are semantically similar but lexically different from questions in the FAQ corpus we use the Synonym dictionary described in Section 5.2. Figure 4 illustrates some examples of such SMS queries. 11http://wordnet.princeton.edu/ Figure 4: Semantically similar SMS and questions Figure 5: Synonym Dictionary LookUp For a given SMS token si, the list of variations Li is further augmented using this Synonym dictionary. For each token si a fuzzy match is performed between si and the terms in the Synonym dictionary and the best matching term from the Synonym dictionary, δ is identified. As the mappings between the Synonym and the Domain dictionary terms are maintained, we obtain the corresponding Domain dictionary term β for the Synonym term δ and add that term to the list Li. β is assigned a weight given by ω(β, si) = α(δ, si) ∗idf(β) (5) It should be noted that weight for β is based on the similarity measure between Synonym dictionary term δ and SMS token si. For example, the SMS query “hw2 countr quik srv”( corresponding question “How to return a very fast serve?”) has two terms “countr” → “counter” and “quik” →“quick” belonging to the Synonym dictionary. Their associated mappings in the Domain dictionary are “return” and “fast” respectively as shown in Figure 5. During the list setup process the token “countr” is looked 857 up in the Domain dictionary. Terms from the Domain dictionary that begin with the same character as that of the token “countr” and have a LCS > 1 such as “country”,“count”, etc. are added to the list and assigned a weight given by Equation 4. After that, the token “countr” is looked up in the Synonym dictionary using Fuzzy match. In this example the term “counter” from the Synonym dictionary fuzzy matches the SMS token. The Domain dictionary term corresponding to the Synonym dictionary term “counter” is looked up and added to the list. In the current example the corresponding Domain dictionary term is “return”. This term is assigned a weight given by Equation 5 and is added to the list as shown in Figure 5. 5.4 FAQ retrieval Once the lists are created, the Pruning Algorithm as shown in Figure 2 is used to find the FAQ question Q∗that best matches the SMS query. The corresponding answer to Q∗from the FAQ corpus is returned to the user. 
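To illustrate how the ranked lists, the scoring function of Equation 1, and the Pruning algorithm of Figure 2 fit together, here is a rough Python sketch. It reuses weight() from the previous sketch and assumes the FAQ index is exposed as a plain dictionary from terms to question ids (a stand-in for the Lucene index of Section 5.2); the single-character-token and synonym-lookup refinements described above are omitted, and all names are ours.

```python
def build_ranked_lists(tokens, dictionary, idf):
    # One list L_i per SMS token, holding (weight, term) pairs sorted by weight.
    lists = []
    for tok in tokens:
        variants = []
        for term in dictionary:
            w = weight(term, tok, idf)
            if w > 0:
                variants.append((w, term))
        variants.sort(reverse=True)
        lists.append(variants)
    return lists

def score(question_terms, tokens, idf):
    # Equation 1: for every SMS token, take the best-matching term of the question.
    return sum(max((weight(t, tok, idf) for t in question_terms), default=0.0)
               for tok in tokens)

def pruning_search(tokens, lists, inverted_index, corpus, idf):
    # lists[i] is the ranked variant list L_i; inverted_index maps a term to the
    # ids of FAQ questions containing it; corpus maps a question id to its terms.
    candidates = {}                                   # question id -> score
    while True:
        tops = [lst[0][0] for lst in lists if lst]
        if not tops:                                  # all lists exhausted
            break
        upper_bound = sum(tops)                       # UB of Figure 2
        best = max(candidates.values(), default=float("-inf"))
        if best >= upper_bound:                       # termination condition
            break
        # pick the globally heaviest remaining term and query the index with it
        j = max(range(len(lists)),
                key=lambda i: lists[i][0][0] if lists[i] else float("-inf"))
        _, term = lists[j].pop(0)
        for qid in inverted_index.get(term, ()):
            if qid not in candidates:
                candidates[qid] = score(corpus[qid], tokens, idf)
    return max(candidates, key=candidates.get) if candidates else None
```

The early-exit test mirrors the termination condition of Figure 2: once the best candidate score reaches the sum of the top-of-list weights, no question outside the candidate set can score higher.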
The next section describes the experimental setup and results.

6 Experiments

We validated the effectiveness and usability of our system by carrying out experiments on two FAQ data sets. The first FAQ data set, referred to as the Telecom Data-Set, consists of 1500 frequently asked questions, collected from a Telecom service provider’s website. The questions in this data set are related to the Telecom provider’s products or services, for example queries about call rates/charges, bill drop locations, how to install caller tunes, how to activate GPRS etc. The second FAQ corpus, referred to as the Yahoo Data-Set, consists of 7500 questions from three Yahoo! Answers (http://answers.yahoo.com/) categories, namely Sports.Swimming, Sports.Tennis, and Sports.Running.

To measure the effectiveness of our system, a user evaluation study was performed. Ten human evaluators were asked to choose 10 questions randomly from the FAQ data set. None of the evaluators were authors of the paper. They were provided with a mobile keypad interface and asked to “text” the selected 10 questions as SMS queries. Through that exercise 100 relevant SMS queries per FAQ data set were collected. Figure 6 shows sample SMS queries. In order to validate that the system was able to handle queries that were out of the FAQ domain, we collected 5 irrelevant SMS queries from each of the 10 human evaluators for both the data sets. Irrelevant queries were (a) queries out of the FAQ domain, e.g. queries related to Cricket, Billiards, activating GPS etc., (b) absurd queries, e.g. “ama ameyu tuem” (a sequence of meaningless words), and (c) general queries, e.g. “what is sports”. Table 1 gives the number of relevant and irrelevant queries used in our experiments.

Figure 6: Sample SMS queries

Data Set | Relevant Queries | Irrelevant Queries
Telecom  | 100              | 50
Yahoo    | 100              | 50
Table 1: SMS Data Set.

The average word length of the collected SMS messages for the Telecom and Yahoo datasets was 4 and 7 respectively. We manually cleaned the SMS query data word by word to create a clean SMS test-set. For example, the SMS query “h2 mke a pdl bke fstr” was manually cleaned to get “how to make pedal bike faster”. In order to quantify the level of noise in the collected SMS data, we built a character-level language model (LM; see http://en.wikipedia.org/wiki/Language_model) using the questions in the FAQ data-set (vocabulary size is 44 characters) and computed the perplexity of the language model on the noisy and the cleaned SMS test-set. The perplexity of the LM on a corpus gives an indication of the average number of bits needed per n-gram to encode the corpus (bits = log2(perplexity)). Noise will result in the introduction of many previously unseen n-grams in the corpus. A higher number of bits is needed to encode these improbable n-grams, which results in increased perplexity. From Table 2 we can see the difference in perplexity for noisy and clean SMS data for the Yahoo and Telecom data-sets. The high level of perplexity in the SMS data set indicates the extent of noise present in the SMS corpus.

                 | Cleaned SMS | Noisy SMS
Yahoo, bigram    | 14.92       | 74.58
Yahoo, trigram   | 8.11        | 93.13
Telecom, bigram  | 17.62       | 59.26
Telecom, trigram | 10.27       | 63.21
Table 2: Perplexity for Cleaned and Noisy SMS

To handle irrelevant queries, the algorithm described in Section 4 is modified: only if Score(Q∗) is above a certain threshold is its answer returned, else we return “null”. The threshold was determined experimentally.

Figure 7: Accuracy on Telecom FAQ Dataset
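As a side note, the perplexity numbers in Table 2 could be reproduced along the following lines. This is only a sketch: the paper does not state which smoothing scheme was used (add-one smoothing is our stand-in), and the function names are ours.

```python
import math
from collections import Counter

def char_ngram_perplexity(train_texts, test_texts, n=2):
    # Character-level n-gram LM trained on the FAQ questions, evaluated on SMS text.
    # Add-one smoothing is an assumption on our part, not taken from the paper.
    vocab = set("".join(train_texts)) | {" "}
    counts, context_counts = Counter(), Counter()
    for text in train_texts:
        padded = " " * (n - 1) + text
        for i in range(len(text)):
            gram = padded[i:i + n]
            counts[gram] += 1
            context_counts[gram[:-1]] += 1
    log_prob, num_chars = 0.0, 0
    for text in test_texts:
        padded = " " * (n - 1) + text
        for i in range(len(text)):
            gram = padded[i:i + n]
            p = (counts[gram] + 1) / (context_counts[gram[:-1]] + len(vocab))
            log_prob += math.log2(p)
            num_chars += 1
    # perplexity = 2 ** (average number of bits per character n-gram)
    return 2 ** (-log_prob / num_chars)
```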
To retrieve the correct answer for the posed SMS query, the SMS query is matched against questions in the FAQ data set and the best matching question(Q∗) is identified using the Pruning algorithm. The system then returns the answer to this best matching question to the human evaluator. The evaluator then scores the response on a binary scale. A score of 1 is given if the returned answer is the correct response to the SMS query, else it is assigned 0. The scoring procedure is reversed for irrelevant queries i.e. a score of 0 is assigned if the system returns an answer and 1 is assigned if it returns “null” for an “irrelevant” query. The result of this evaluation on both data-sets is shown in Figure 7 and 8. Figure 8: Accuracy on Yahoo FAQ Dataset In order to compare the performance of our system, we benchmark our results against Lucene’s 15 Fuzzy match feature. Lucene supports fuzzy searches based on the Levenshtein Distance, or Edit Distance algorithm. To do a fuzzy search 15http://lucene.apache.org we specify the ∼symbol at the end of each token of the SMS query. For example, the SMS query “romg actvt” on the FAQ corpus is reformulated as “romg∼0.3 actvt∼0.3”. The parameter after the ∼specifies the required similarity. The parameter value is between 0 and 1, with a value closer to 1 only terms with higher similarity will be matched. These queries are run on the indexed FAQs. The results of this evaluation on both data-sets is shown in Figure 7 and 8. The results clearly demonstrate that our method performs 2 to 2.5 times better than Lucene’s Fuzzy match. It was observed that with higher values of similarity parameter (∼0.6, ∼0.8), the number of correctly answered queries was even lower. In Figure 9 we show the runtime performance of the Naive vs Pruning algorithm on the Yahoo FAQ Dataset for 150 SMS queries. It is evident from Figure 9 that not only does the Pruning Algorithm outperform the Naive one but also gives a nearconstant runtime performance over all the queries. The substantially better performance of the Pruning algorithm is due to the fact that it queries much less number of terms and ends up with a smaller candidate set compared to the Naive algorithm. Figure 9: Runtime of Pruning vs Naive Algorithm for Yahoo FAQ Dataset 7 Conclusion In recent times there has been a rise in SMS based QA services. However, automating such services has been a challenge due to the inherent noise in SMS language. In this paper we gave an efficient algorithm for answering FAQ questions over an SMS interface. Results of applying this on two different FAQ datasets shows that such a system can be very effective in automating SMS based FAQ retrieval. 859 References Rudy Schusteritsch, Shailendra Rao, Kerry Rodden. 2005. Mobile Search with Text Messages: Designing the User Experience for Google SMS. CHI, Portland, Oregon. Sunil Kumar Kopparapu, Akhilesh Srivastava and Arun Pande. 2007. SMS based Natural Language Interface to Yellow Pages Directory, In Proceedings of the 4th International conference on mobile technology, applications, and systems and the 1st International symposium on Computer human interaction in mobile technology, Singapore. Monojit Choudhury, Rahul Saraf, Sudeshna Sarkar, Vijit Jain, and Anupam Basu. 2007. Investigation and Modeling of the Structure of Texting Language, In Proceedings of IJCAI-2007 Workshop on Analytics for Noisy Unstructured Text Data, Hyderabad. E. Voorhees. 1999. The TREC-8 question answering track report. D. Molla. 2003. 
NLP for Answer Extraction in Technical Domains, In Proceedings of EACL, USA. E. Sneiders. 2002. Automated question answering using question templates that cover the conceptual model of the database, In Proceedings of NLDB, pages 235−239. B. Katz, S. Felshin, D. Yuret, A. Ibrahim, J. Lin, G. Marton, and B. Temelkuran. 2002. Omnibase: Uniform access to heterogeneous data for question answering, Natural Language Processing and Information Systems, pages 230−234. E. Sneiders. 1999. Automated FAQ Answering: Continued Experience with Shallow Language Understanding, Question Answering Systems. Papers from the 1999 AAAI Fall Symposium. Technical Report FS-99−02, November 5−7, North Falmouth, Massachusetts, USA, AAAI Press, pp.97−107 W. Song, M. Feng, N. Gu, and L. Wenyin. 2007. Question similarity calculation for FAQ answering, In Proceeding of SKG 07, pages 298−301. Aiti Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization, In Proceedings of COLING/ACL, pages 33−40. Catherine Kobus, Franois Yvon and Graldine Damnati. 2008. Normalizing SMS: are two metaphors better than one?, In Proceedings of the 22nd International Conference on Computational Linguistics, pages 441−448 Manchester. Jeunghyun Byun, Seung-Wook Lee, Young-In Song, Hae-Chang Rim. 2008. Two Phase Model for SMS Text Messages Refinement, Association for the Advancement of Artificial Intelligence. AAAI Workshop on Enhanced Messaging Ronald Fagin , Amnon Lotem , Moni Naor. 2001. Optimal aggregation algorithms for middleware, In Proceedings of the 20th ACM SIGMOD-SIGACTSIGART symposium on Principles of database systems. I. Dan Melamed. 1999. Bitext maps and alignment via pattern recognition, Computational Linguistics. E. Prochasson, Christian Viard-Gaudin, Emmanuel Morin. 2007. Language Models for Handwritten Short Message Services, In Proceedings of the 9th International Conference on Document Analysis and Recognition. Sreangsu Acharya, Sumit Negi, L. V. Subramaniam, Shourya Roy. 2008. Unsupervised learning of multilingual short message service (SMS) dialect from noisy examples, In Proceedings of the second workshop on Analytics for noisy unstructured text data. 860
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 861–869, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Semantic Tagging of Web Search Queries Mehdi Manshadi Xiao Li University of Rochester Microsoft Research Rochester, NY Redmond, WA [email protected] [email protected] Abstract We present a novel approach to parse web search queries for the purpose of automatic tagging of the queries. We will define a set of probabilistic context-free rules, which generates bags (i.e. multi-sets) of words. Using this new type of rule in combination with the traditional probabilistic phrase structure rules, we define a hybrid grammar, which treats each search query as a bag of chunks (i.e. phrases). A hybrid probabilistic parser is used to parse the queries. In order to take contextual information into account, a discriminative model is used on top of the parser to re-rank the n-best parse trees generated by the parser. Experiments show that our approach outperforms a basic model, which is based on Conditional Random Fields. 1 Introduction Understanding users’ intent from web search queries is an important step in designing an intelligent search engine. While it remains a challenge to have a scientific definition of ''intent'', many efforts have been devoted to automatically mapping queries into different domains i.e. topical classes such as product, job and travel (Broder et al. 2007; Li et al. 2008). This work goes beyond query-level classification. We assume that the queries are already classified into the correct domain and investigate the problem of semantic tagging at the word level, which is to assign a label from a set of pre-defined semantic labels (specific to the domain) to every word in the query. For example, a search query in the product domain can be tagged as: cheap garmin streetpilot c340 gps | | | | | SortOrder Brand Model Model Type Many specialized search engines build their indexes directly from relational databases, which contain highly structured information. Given a query tagged with the semantic labels, a search engine is able to compare the values of semantic labels in the query (e.g., Brand = “garmin”) with its counterpart values in documents, thereby providing users with more relevant search results. Despite this importance, there has been relatively little published work on semantic tagging of web search queries. Allan and Raghavan (2002) and Barr et al. (2008) study the linguistic structure of queries by performing part-of-speech tagging. Pasca et al. (2007) use queries as a source of knowledge for extracting prominent attributes for semantic concepts. On the other hand, there has been much work on extracting structured information from larger text segments, such as addresses (Kushmerick 2001), bibliographic citations (McCallum et al. 1999), and classified advertisements (Grenager et al. 2005), among many others. The most widely used approaches to these problems have been sequential models including hidden Markov models (HMMs), maximum entropy Markov models (MEMMs) (Mccallum 2000), and conditional random fields (CRFs) (Lafferty et al. 2001) These sequential models, however, are not optimal for processing web search queries for the following reasons.. The first problem is that the global constraints and long distance dependencies on state variables are difficult to capture using sequential models. 
Because of this limitation, Viola and Narasimhand (2007) use a discriminative context-free (phrase structure) grammar for extracting information from semi-structured data and report higher performances over CRFs. Secondly, sequential models treat the input text as an ordered sequence of words. A web search query, however, is often formulated by a user as a bag of keywords. For example, if a user is look861 ing for cheap garmin gps, it is possible that the query comes in any ordering of these three words. We are looking for a model that, once it observes this query, assumes that the other permutations of the words in this query are also likely. This model should also be able to handle cases where some local orderings have to be fixed as in the query buses from New York City to Boston, where the words in the phrases from New York city and to Boston have to come in the exact order. The third limitation is that the sequential models treat queries as unstructured (linear) sequences of words. The study by Barr et al. (2008) on Yahoo! query logs suggests that web search queries, to some degree, carry an underlying linguistic structure. As an example, consider a query about finding a local business near some location such as: seattle wa drugstore 24/7 98109 This query has two constituents: the Business that the user is looking for (24/7 drugstore) and the Neighborhood (seattle wa 98109). The model should not only be able to recognize the two constituents but it also needs to understand the structure of each constituent. Note that the arbitrary ordering of the words in the query is a big challenge to understanding the structure of the query. The problem is not only that the two constituents can come in either order, but also that a subconstituent such as 98109 can also be far from the other words belonging to the same constituent. We are looking for a model that is able to generate a hierarchical structure for this query as shown in figure (1). The last problem that we discuss here is that the two powerful sequential models i.e. MEMM and CRF are discriminative models; hence they are highly dependent on the training data. Preparing labeled data, however, is very expensive. Therefore in cases where there is no or a small amount of labeled data available, these models do a poor job. In this paper, we define a hybrid, generative grammar model (section 3) that generates bags of phrases (also called chunks in this paper). The chunks are generated by a set of phrase structure (PS) rules. At a higher level, a bag of chunks is generated from individual chunks by a second type of rule, which we call context-free multiset generating rules. We define a probabilistic version of this grammar in which every rule has a probability associated with it. Our grammar model eliminates the local dependency assumption made by sequential models and the ordering constraints imposed by phrase structure grammars (PSG). This model better reflects the underlying linguistic structure of web search queries. The model’s power, however, comes at the cost of increased time complexity, which is exponential in the length of the query. This, is less of an issue for parsing web search queries, as they are usually very short (2.8 words/query in average (Xue et al., 2004)). Yet another drawback of our approach is due to the context-free nature of the proposed grammar model. Contextual information often plays a big role in resolving tagging ambiguities and is one of the key benefits of discriminative models such as CRFs. 
But such information is not straightforward to incorporate in our grammar model. To overcome this limitation, we further present a discriminative re-ranking module on top of the parser to re-rank the n-best parse trees generated by the parser using contextual features. As seen later, in the case where there is not a large amount of labeled data available, the parser part is the dominant part of the module and performs reasonably well. In cases where there is a large amount of labeled data available, the discriminative re-ranking incorporates into the system and enhances the performance. We evaluate this model on the task of tagging search queries in the product domain. As seen later, preliminary experiments show that this hybrid generative/discriminative model performs significantly better than a CRF-based module in both absence and presence of the labeled data. The structure of the paper is as follows. Section 2 introduces a linguistic grammar formalism that motivates our grammar model. In section 3, we define our grammar model. In section 4 we address the design and implementation of a parser for this kind of grammar. Section 5 gives an example of such a grammar designed for the purpose of automatic tagging of queries. Section 6 discusses motivations for and benefits of running a discriminative re-ranker on top of the parser. In section 7, we explain the evaluations Figure 1. A simple grammar for product domain 862 and discuss the results. Section 8 summarizes this work and discusses future work. 2 ID/LP Grammar Context-free phrase structure grammars are widely used for parsing natural language. The adequate power of this type of grammar plus the efficient parsing algorithms available for it has made it very popular. PSGs treat a sentence as an ordered sequence of words. There are however natural languages that are free word order. For example, a three-word sentence consisting of a subject, an object and a verb in Russian, can occur in all six possible orderings. PSGs are not a well-suited model for this type of language, since six different PS-rules must be defined in order to cover such a simple structure. To address this issue, Gazdar (1985) introduced the concept of ID/LP rules within the framework of Generalized Phrase Structure Grammar (GPSG). In this framework, Immediate Dominance or ID rules are of the form: (1) A→ B, C This rule specifies that a non-terminal A can be rewritten as B and C, but it does not specify the order. Therefore A can be rewritten as both BC and CB. In other words the rule in (1) is equivalent to two PS-rules: (2) A → BC A → CB Similarly one ID rule will suffice to cover the simple subject-object-verb structure in Russian: (3) S  Sub, Obj, Vrb However even in free-word-order languages, there are some ordering restrictions on some of the constituents. For example in Russian an adjective always comes before the noun that it modifies. To cover these ordering restrictions, Gazdar defined Linear Precedence (LP) rules. (4) gives an example of a linear precedence rule: (4) ADJ < N This specifies that ADJ always comes before N when both occur on the right-hand side of a single rule. Although very intuitive, ID/LP rules are not widely used in the area of natural language processing. The main reason is the timecomplexity issue of ID/LP grammar. It has been shown that parsing ID/LP rules is an NPcomplete problem (Barton 1985). 
Since the length of a natural language sentence can easily reach 30-40 (and sometimes even up to 100) words, ID/LP grammar is not a practical model for natural language syntax. In our case, however, the time-complexity is not a bottleneck as web search queries are usually very short (2.8 words per query in average). Moreover, the nature of ID rules can be deceptive as it might appear that ID rules allow any reordering of the words in a valid sentence to occur as another vaild sentence of the language. But in general this is not the case. For example consider a grammar with only two ID rules given in (5) and consider S as the start symbol: (5) S → B, c B → d, e It can be easily verified that dec is a sentence of the language but dce is not. In fact, although the permutation of subconstituents of a constituent is allowed, a subconstituent can not be pulled out from its mother consitutent and freely move within the other constituents. This kind of movement however is a common behaviour in web search queries as shown in figure (1). It means that even ID rules are not powerful enough to model the free-word-order nature of web search queries. This leads us to define to a new type of grammar model. 3 Our Grammar Model 3.1 The basic model We propose a set of rules in the form: (6) S → {B, c} B → {D, E} D → {d} E → {e} which can be used to generate multisets of words. For the notation convenience and consistancy, throughout this paper, we show terminals and non-terminals by lowercase and uppercase letters, respectively and sets and multisets by bold font uppercase letters. Using the rules in (6) a sentence of the language (which is a multiset in this model) can be derived as follows: (7) S ⇒ {B, c} ⇒ {D, E, c} ⇒ {D, e, c}⇒ {d, e, c} Once the set is generated, it can be realized as any of the six permutation of d, e, and c. Therefore a single sequence of derivations can lead to six different strings of words. As another example consider the grammar in (8). (8) Query → {Business, Location} Business → {Attribute, Business} Location → {City, State} Business → {drugstore} | {Resturant} Attribute→ {Chinese} | {24/7} City→ {Seattle} | {Portland} State→ {WA} | {OR} 863 where Query is the start symbol and by A → B|C we mean two differnet rules A → B and A → C. Figures (2) and (3) show the tree structures for the queries Restaurant Rochester Chinese MN, and Rochester MN Chinese Restaurant, respectively. As seen in these figures, no matter what the order of the words in the query is, the grammar always groups the words Resturant and Chinese together as the Business and the words Rochester and MN together as the Location. It is important to notice that the above grammars are context-free as every non-terminal A, which occurs on the left-hand side of a rule r, can be replaced with the set of terminals and nonterminals on the right-hand side of r, no matter what the context in which A occurs is. More formally we define a Context-Free multiSet generating Grammar (CFSG) as a 4tuple G=(N, T, S, R) where • N is a set of non-terminals; • T is a set of terminals; • S ∈ N is a special non-terminal called start symbol, • R is a set of rules {Ai→ Xj} where Ai is a non-terminal and Xj is a set of terminals and non-terminals. Given two multisets Y and Z over the set N ∪ T, we say Y dervies Z (shown as Y ⇒ Z) iff there exists A, W, and X such that: Y = W + {A}1 Z = W + X A→ X ∈ R Here ⇒* is defined as the reflexive transitive closure of ⇒. 
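To make the formalism concrete, the following Python sketch (ours, not from the paper) encodes the grammar in (8) and decides membership by breadth-first search over multisets of symbols, pruning any state that already exceeds the size of the target bag. Because states are multisets, every permutation of the same bag of words is accepted or rejected identically.

```python
from collections import Counter, deque

# The grammar in (8), written as: non-terminal -> list of alternative RHS multisets.
GRAMMAR = {
    "Query":     [Counter({"Business": 1, "Location": 1})],
    "Business":  [Counter({"Attribute": 1, "Business": 1}),
                  Counter({"drugstore": 1}),
                  Counter({"Restaurant": 1})],
    "Location":  [Counter({"City": 1, "State": 1})],
    "Attribute": [Counter({"Chinese": 1}), Counter({"24/7": 1})],
    "City":      [Counter({"Seattle": 1}), Counter({"Portland": 1})],
    "State":     [Counter({"WA": 1}), Counter({"OR": 1})],
}

def derivable(words, grammar, start="Query"):
    """True iff the multiset of words can be derived from the start symbol."""
    target = Counter(words)
    seen = set()
    queue = deque([Counter({start: 1})])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        key = frozenset(state.items())
        # prune: a rewrite never shrinks a multiset, so oversized states are dead ends
        if key in seen or sum(state.values()) > sum(target.values()):
            continue
        seen.add(key)
        for symbol in list(state):              # rewrite one non-terminal at a time
            for rhs in grammar.get(symbol, ()):
                nxt = state.copy()
                nxt[symbol] -= 1
                if nxt[symbol] == 0:
                    del nxt[symbol]
                nxt.update(rhs)
                queue.append(nxt)
    return False

# Word order is irrelevant: both orderings of the same bag are in L(G).
print(derivable("Chinese Restaurant Seattle WA".split(), GRAMMAR))   # True
print(derivable("Seattle WA Chinese Restaurant".split(), GRAMMAR))   # True
# A bag containing neither "drugstore" nor "Restaurant" cannot be derived.
print(derivable("Chinese Seattle WA".split(), GRAMMAR))              # False
```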
Finally we define the language of multisets generated by the grammar G (shown as L(G)) as L = { X | X is a multiset over N∪T and S ⇒*X} The sequence of ⇒ used to derive X from S is called a derivation of X. Given the above 1 If X and Y are two multisets, X+Y simply means appending X to Y. For example {a, b, a} + {b, c, d} = {a, b, a, b, c, d}. definitions, parsing a multiset X means to find all (if any) the derivations of X from S. 2 3.2 Probabilisic CFSG Very often a sentence in the language has more than one derivation, that is the sentence is syntactically ambiguous. One natural way of resolving the ambiguity is using a probabilistic grammar. Analogous to PCFG (Manning and Schütze 1999), we define the probabilistic version of a CFSG, in which every rule Ai→Xj has a probability P(Ai→Xj) and for every nonterminal Ai, we have: (9) Σj P(Ai→ Xj) = 1 Consider a sentence w1w2…wn, a parse tree T of this sentence, and an interior node v in T labeled with Av and assume that v1, v2, …vk are the children of the node v in T. We define: (10) α(v) = P(Av→ {Av1… Avk})α(v1) … α(vk) with the initial conditions α(wi)=1. If u is the root of the tree T we have: (11) P(w1w2…wn , T) = α(u) The parse tree that the probabilistic model assigns to the sentence is defined as: (12) Tmax = argmaxT (P(w1w2…wn , T)) where T ranges over all possible parse trees of the sentence. 4 Parsing Algorithm 4.1 Deterministic parser The parsing algorithm for the CFSG is straightforward. We used a modified version of the Bottom-Up Chart Parser for the phrase structure grammars (Allen 1995, see 3.4). Given the grammar G=(N,T,S,R) and the query q=w1w2…wn, the algorithm in figure (4) is used to parse q. The algorithm is based on the concept of an active arc. An active arc is defined as a 3– 2 Every sentence of a language corresponds to a vector of |T| integers where the kth element represents how many times the kth terminal occurs in the multi-set. In fact, the languages defined by grammars are not interesting but the derivations are. Figure 2. A CFSG parse tree Figure 3. A CFSG parse tree 864 tuple (r, U, I) where r is a rule A → X in R, U is a subset of X, and I is a subset of {1, 2 …n} (where n is the number of words in the query). This active arc tries to find a match to the right-hand side of r (i.e. X) and suggests to replace it with the non-terminal A. U contains the part of the righthand side that has not been matched yet. Therefore when an arc is newly created U=X. Equivalently, X\U3 is the part of the right hand side that has so far been matched with a subset of words in the query, where I stores the positions of these words in q. An active arc is completed when U=Ø. Every completed active arc can be reduced to a tuple (A, I), which we call a constituent. A constituent (A, I) shows that the non-terminal A matches the words in the query that are positioned at the numbers in I. Every constituent that is built by the parser is stored in a data structure called chart and remains there throughout the whole process. Agenda is another data structure that temporarily stores the constituents. At initialization step, the constituents (w1, {1}), … (wn, {n}) are added to both chart and agenda. At each iteration, we pull out a constituent from the agenda and try to find a match to this constituent from the remaining list of terminals and non-terminals on the right-hand side of an active arc. 
More precisely, given a constituent c=(A, I) and an active arc γ = (r:BX, U, J), we check if A ∈ U and I ∩ J = Ø; if so, γ is extendable by c, therefore we extend γ by removing A from U and appending I to J. Note that the extension process keeps a copy of every active arc before it extends it. In practice every active arc and every constituent keep a set of pointers to its children constituents (stored in chart). This information is necessary for the termination step in order to print the parse trees. The algorithm succeeds if there is a constituent in the chart that corresponds to the start symbol and covers all the words in the query, i.e. there is a constituent of the form (S, {1,2,….n}) in the chart. 4.2 Probabilistic Parser The algorithm given in figure (4) works for a deterministic grammar. As mentioned before, we use a probabilistic version of the grammar. Therefore the algorithm is modified for the probabilistic case. The probabilistic parser keeps a probability p for every active arc and every constituent: γ = (r, U, J, pγ ) 3 A\B is defined as {x | x ∈ A & x ∉ B} c =(A, I, pc ) When extending γ using c, we have: (13) pγ ← pγ pc When creating c from the completed active arc γ : (14) pc ← pγ p(r) Although search queries are usually short, the running time is still an issue when the length of the query exceeds 7 or 8. Therefore a couple of techniques have been used to make the naïve algorithm more efficient. For example we have used pruning techniques to filter out structures with very low probability. Also, a dynamic programming version of the algorithm has been used, where for every subset I of the word positions and every non-terminal A only the highestranking constituent c=(A, I, p) is kept and the rest are ignored. Note that although more efficient, the dynamic programming version is still exponential in the length of the query. 5 A grammar for semantic tagging As mentioned before, in our system queries are already classified into different domains like movies, books, products, etc. using an automatic query classifier. For every domain we have a schema, which is a set of pre-defined tags. For example figure (5) shows an example of a schema for the product domain. The task defined for this system is to automatically tag the words in the query with the tags defined in the schema: cheap garmin streetpilot c340 gps | | | | | SortOrder Brand Model Model Type Initialization: For each word wi in q add (wi, {i}) to Chart and to Agenda For all r: A→X in R, create an active arc (r, X, {}) and add it to the list of active arcs. Iteration Repeat Pull a constituent c = (A, I) from Agenda For every active arc γ =(r:BX, U, I) Extend γ using c if extendable If U=Ø add (B, I) to Chart and to Agenda Until Agenda is empty Termination For every item c=(S, {1..n}) in Chart, return the tree rooted at c. Figure 4. An algorithm for parsing deterministic CFSG 865 We mentioned that one of the motivations of parsing search queries is to have a deeper understanding of the structure of the query. The evaluation of such a deep model, however, is not an easy task. There is no Treebank available for web search queries. Furthermore, the definition of the tree structure for a query is quite arbitrary. Therefore even when human resources are available, building such a Treebank is not a trivial task. For these reasons, we evaluate our grammar model on the task of automatic tagging of queries for which we have labeled data available. 
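A rough Python rendering of the deterministic procedure in Figure 4 is given below; it is ours, not the authors' code. For simplicity it treats each rule's right-hand side as a set (so a RHS with a repeated symbol is not handled), keeps no back-pointers, and therefore only reports whether the start symbol covers all word positions rather than printing parse trees.

```python
from collections import deque

def parse_cfsg(words, rules, start):
    # rules: list of (lhs, rhs) pairs, e.g. ("Query", {"Business", "Location"}).
    n = len(words)
    chart = set()           # completed constituents: (symbol, frozenset of positions)
    agenda = deque()
    active = []             # active arcs: (lhs, remaining RHS symbols, covered positions)

    def add_constituent(symbol, positions):
        item = (symbol, positions)
        if item not in chart:
            chart.add(item)
            agenda.append(item)

    # Initialization: one constituent per word, one empty arc per rule.
    for i, w in enumerate(words, 1):
        add_constituent(w, frozenset({i}))
    for lhs, rhs in rules:
        active.append((lhs, frozenset(rhs), frozenset()))

    # Iteration: pull a constituent and try to extend every existing active arc;
    # the original arc is kept, mirroring the copy step described in the text.
    while agenda:
        symbol, positions = agenda.popleft()
        for lhs, remaining, covered in list(active):
            if symbol in remaining and not (positions & covered):
                new_remaining = remaining - {symbol}
                new_covered = covered | positions
                if new_remaining:
                    active.append((lhs, new_remaining, new_covered))
                else:
                    add_constituent(lhs, new_covered)   # the arc is completed

    # Termination: succeed if the start symbol spans every word position.
    return (start, frozenset(range(1, n + 1))) in chart

# Usage with a toy fragment of the product grammar (our own reduction of Figure 6):
toy_rules = [("Query", {"Brand", "Type"}),
             ("Brand", {"garmin"}), ("Type", {"gps"})]
print(parse_cfsg("garmin gps".split(), toy_rules, start="Query"))   # True
print(parse_cfsg("gps garmin".split(), toy_rules, start="Query"))   # True (order-free)
```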
The other advantage of this evaluation is that there exists a CRF-based module in our system used for the task of automatic tagging. The performance of this module can be considered as the baseline for our evaluation. We have manually designed a grammar for the purpose of automatic tagging. The resources available for training and testing were a set of search queries from the product domain. Therefore a set of CFSG rules were written for the product domain. We defined very simple and intuitive rules (shown in figure 6) that could easily be generalized to the other domains Note that Type, Brand, Model, … could be either pre-terminals generating word tokens, or non-terminals forming the left-hand side of the phrase structure rules. For the product domain, Type and Attribute are generated by a phrase structure grammar. Model and Attribute may also be generated by a set of manually designed regular expressions. The rest of the tags are simply pre-terminals generating word tokens. Note that we have a lexicon, e.g.., a Brand lexicon, for all the tags except Type and Attribute. The model, however, extends the lexicon by including words discovered from labeled data (if available). The gray color for a non-terminal on the right-hand side (RHS) of some rule means that the nonterminal is optional (see Query rule in figure (6)). We used the optional non-terminals to make the task of defining the grammar easier. For example if we consider a rule with n optional nonterminals on its RHS, without optional nonterminals we have to define 2n different rules to have an equivalent grammar. The parser can treat the optional non-terminals in different ways such as pre-compiling the rules to the equivalent set of rules with no optional non-terminal, or directly handling optional non-terminals during the parsing. The first approach results in exponentially many rules in the system, which causes sparsity issues when learning the probability of the rules. Therefore in our system the parser handles optional non-terminals directly. In fact, every nonterminal has its own probability for not occurring on the RHS of a rule, therefore the model learns n+1 probabilities for a rule with n optional nonterminals on its RHS: one for the rule itself and one for every non-terminal on its RHS. It means that instead of learning 2n probabilities for 2n different rules, the model only learns n+1 probabilities. That solves the sparsity problem, but causes another issue which we call short length preference. This occurs because we have assumed that the probability of a non-terminal being optional is independent of other optional non-terminals. Since for almost all non-terminals on the RHS of the query rule, the probability that the nonterminal does not exist in an instance of a query is higher than 0.5, a null query is the most likely query that the model generates! We solve this problem by conditioning the probabilities on the length of queries. This brings a trade-off between the two other alternatives: ignoring sparsity problem to prevent making many independence assumptions and making a lot of independence assumptions to address the sparsity issue. Unlike sequential models, the grammar model is able to capture critical global constraints. For example, it is very unlikely for a query to have more than one Type, Brand, etc. This is an important property of the product queries that can help to resolve the ambiguity in many cases. 
In practice, the probability that the model learns for a rule like: Query → {Brand*, Product*, Model*, …} Brand* → {Brand} Brand* → {Brand*, Brand} Type* → {Type} Type* → {Type*, Type} Model* → {Model} Model* → {Model*, Model} … Figure 6. A simple grammar for product domain Type: Camera, Shoe, Cell phone, … Brand: Canon, Nike, At&t, … Model: dc1700, powershot, ipod nano Attribute: 1GB, 7mpixel, 3X, … BuyingIntenet: Sale, deal, … ResearchIntent: Review, compare, … SortOrder: Best, Cheap, … Merchant: Walmart, Target, … Figure 5. Example of schema for product domain 866 Type*  {Type*, Type} compared to the rule: Type*  Type is very small; the model penalizes the occurrence of more than one Type in a query. Figure (7a) shows an example of a parse tree generated for the query “Canon vs Sony Camera” in which B, Q, and T are abbreviations for Brand, Query, and Type, and U is a special tag for the words that does not fall into any other tag categories and have been left unlabeled in our corpus such as a, the, for, etc. Therefore the parser assigns the tag sequence B U B T to this query. It is true that the word “vs” plays a critical role in this query, representing that the user’s intention is to compare the two brands; but as mentioned above in our labeled data such words has left unlabeled. The general model, however, is able to easily capture these sorts of phenomena. A more careful look at the grammar shows that there is another parse tree for this query as shown in figure (7b). These two trees basically represent the same structure and generate the same sequence of tags. The number of trees generated for the same structure increases exponentially with the number of equal tags in the tree. To prevent this over-generation we used rules analogous to GPSG’s LP rules such as: B* < B which allows only a unique way of generating a bag of the Brand tags. Using this LP rule, the only valid tree for the above query is the one in figure (7a). 6 Discriminative re-ranking By using a context-free grammar, we are missing a great source of clues that can help to resolve ambiguity. Discriminative models, on the other hand, allow us to define numerous features, which can cooperate to resolve the ambiguities. Similar studies in parsing natural language sentences (Collins and Koo 2005) have shown that if, instead of taking the most likely tree structure generated by a parser, the n-best parse trees are passed through a discriminative re-ranking module, the accuracy of the model will increase significantly. We use the same idea to improve the performance of our model. We run a Support Vector Machine (SVM) based re-ranking module on top of the parser. Several contextual features (such as bigrams) are defined to help in disambiguation. This combination provides a framework that benefits from the advantages of both generative and discriminative models. In particular, when there is no or a very small amount of labeled data, a parser could still work by using unsupervised learning approaches to learn the rules, or by simply using a set of hand-built rules (as we did above for the task of semantic tagging). When there is enough labeled data, then a discriminative model can be trained on the labeled data to learn contextual information and to further enhance the tagging performance. 7 Evaluation Our resources are a set of 21000 manually labeled queries, a manually designed grammar, a lexicon for every tag (except Type and Attribute), and a set of regular expressions defined for Models and Attributes. 
Note that with a grammar similar to the one in figure (6), generating a parse tree from a labeled query is straightforward. Then the parser is trained on the trees to learn the parameters of the model (probabilities in this case). We randomly extracted 3000, out of 21000, queries as the test set and used the remaining 18000 for training. We created training sets with different sizes to evaluate the impact of training data size on tagging performance. Three modules were used in the evaluation: the CRF-based model4, the parser, and the parser plus the SVM-based re-ranking. Figure (8) shows the learning curve of the word-level F-score for all the three modules. As seen in this plot, when there is a small amount of training data, the parser performs better than the CRF module and parser+SVM module performs better than the other two. With a large amount of training data, the CRF and parser almost have the same performance. Once again the parser+SVM module 4 The CRF module also uses the lexical resources and regular expressions. In fact, it applies a deterministic context free grammar to the query to find all the possible groupings of words into chunks and uses this information as a set of features in the system. Figure 7. Two equivalent CFSG parse trees 867 outperforms the other two. These results show that, as expected, the CRF-based model is more dependent on the training data than the parser. Parser+SVM always performs at least as well as the parser-only module even with a very small set of training data. This is because the rank given to every parse tree by the parser is used as a feature in the SVM module. When there is a very small amount of training data, this feature is dominant and the output of the re-reranking module is basically the same as the parser’s highest-rank output. Table (1) shows the performance of all three modules when the whole training set was used to train the system. The first three columns in the table show the word-level precision, recall, and F-score; and the last column represents the query level accuracy (a query is considered correct if all the words in the query have been labeled correctly). There are two rows for the parser+SVM in the table: one for n=2 (i.e. re-ranking the 2-Best trees) and one for n=10. It is interesting to see that even with the re-ranking of only the first two trees generated by the parser, the difference between the accuracy of the parser+SVM module and the parser-only module is quite significant. Re-ranking with a larger number of trees (n>10) did not increase performance significantly. 8 Summary We introduced a novel approach for deep parsing of web search queries. Our approach uses a grammar for generating multisets called a context-free multiset generating grammar (CFSG). We used a probabilistic version of this grammar. A parser was designed for parsing this type of grammar. Also a discriminative re-ranking module based on a support vector machine was used to take contextual information into account. We have used this system for automatic tagging of web search queries and have compared it with a CRF-based model designed for the same task. The parser performs much better when there is a small amount of training data, but an adequate lexicon for every tag. This is a big advantage of the parser model, because in practice providing labeled data is very expensive but very often the lexicons can be easily extracted from the structured data on the web (for example extracting movie titles from imdb or book titles from Amazon). 
Our hybrid model (parser plus discriminative re-ranking), on the other hand, outperforms the other two modules regardless of the size of the training data. The main drawback with our approach is to completely ignore the ordering. Note that although strict ordering constraints such as those imposed by PSG is not appropriate for modeling query structure, it might be helpful to take ordering information into account when resolving ambiguity. We leave this for future work. Another interesting and practically useful problem that we have left for future work is to design an unsupervised learning algorithm for CFSG similar to its phrase structure counterpart: inside-outside algorithm (Baker 1979). Having such a capability, we are able to automatically learn the underlying structure of queries by processing the huge amount of available unlabeled queries. Acknowledgement We need to thank Ye-Yi Wang for his helpful advices. We also thank William de Beaumont for his great comments on the paper. References Allan, J. and Raghavan, H. (2002) Using Part-ofspeech Patterns to Reduce Query Ambiguity, Proceedings of SIGIR 2002, pp. 307-314. Allen, J. F. (1995) Natural Language Understanding, Benjamin Cummings. Baker, J. K. (1979) Trainable grammars for speech recognition. In Jared J. Wolf and Dennis H. Klatt, editors, Speech communication papers presented at the 97th Meeting of the Acoustical Society of America, MIT, Cambridge, MA. Barton, E. (1985) On the complexity of ID/LP rules, Computational Linguistics, Volume 11, Pages 205218. Figure 8. The learning curve for the three modules Train No = 18000  Test No = 3000  P  R  F  Q  CRF  0.815  0.812  0.813  0.509  Parser  0.808  0.814  0.811  0.494  Parser+SVM (n = 2)  0.823  0.827  0.825  0.531  Parser+SVM (n = 10)  0.832  0.835  0.833  0.555  Table 1. The results of evaluating the three modules 868 Barr, C., Jones, R., Regelson, M., (2008) The Linguistic Structure of English Web-Search Queries, In Proceedings of EMNLP-08: conference on Empirical Methods in Natural Language Processing. Broder, A., Fontoura, M., Gabrilovich, E., Joshi, A., Josifovski, V., and Zhang, T. (2007) Robust classification of rare queries using web knowledge. In Proceedings of SIGIR’07 Collins, M., Koo, T., (2005) Discriminative Reranking for Natural Language Parsing, Computational Linguistics, v.31 p.25-70. Gazdar, G., Klein, E., Sag, I., Pullum, G., (1985) Generalized Phrase Structure Grammar, Harvard University Press. Grenager, T., Klein, D., and Manning, C. (2005) Unsupervised learning of field segmentation models for information extraction, In Proceedings of ACL05. Kushmerick, N., Johnston, E., and McGuinness, S. (2001). Information extraction by text classification, In Proceedings of the IJCAI-01 Workshopon Adaptive Text Extraction and Mining. Li, X., Wang, Y., and Acero, A. (2008) Learning query intent from regularized click graphs. In Proceedings of SIGIR’08 Manning, C., Schütze, H. (1999) Foundations of Statistical Natural Language Processing, The MIT Press, Cambridge, MA. McCallum, A., Freitag, D., Pereira, F. (2000) Maximum entropy markov models for information extraction and segmentation, Proceedings of the Seventeenth International Conference on Machine Learning, Pages: 591 - 598 McCallum, A., Nigam, K., Rennie, J., and Seymore, K. (1999) A machine learning approach to building domain-specific search engines, In IJCAI-1999. Pasca, M., Van Durme, B., and Garera, N. (2007) The Role of Documents vs. 
Queries in Extracting Class Attributes from Text, ACM Sixteenth Conference on Information and Knowledge Management (CIKM 2007), Lisboa, Portugal.
Viola, P. and Narasimhan, M. (2005) Learning to extract information from semi-structured text using a discriminative context free grammar, Proceedings of SIGIR 2005, pp. 330-337.
Xue, G. R., Zeng, H. J., Chen, Z., Yu, Y., Ma, W. Y., Xi, W. S. and Fan, W. G. (2004) Optimizing web search using web click-through data, Proceedings of the thirteenth ACM international conference.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 870–878, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Mining Bilingual Data from the Web with Adaptively Learnt Patterns Long Jiang1, Shiquan Yang2, Ming Zhou1, Xiaohua Liu1, Qingsheng Zhu2 1Microsoft Research Asia Beijing, 100190, P.R.China 2Chongqing University, Chongqing, 400044, P.R.China {longj,mingzhou,xiaoliu}@microsoft.com [email protected],[email protected] Abstract Mining bilingual data (including bilingual sentences and terms1) from the Web can benefit many NLP applications, such as machine translation and cross language information retrieval. In this paper, based on the observation that bilingual data in many web pages appear collectively following similar patterns, an adaptive pattern-based bilingual data mining method is proposed. Specifically, given a web page, the method contains four steps: 1) preprocessing: parse the web page into a DOM tree and segment the inner text of each node into snippets; 2) seed mining: identify potential translation pairs (seeds) using a word based alignment model which takes both translation and transliteration into consideration; 3) pattern learning: learn generalized patterns with the identified seeds; 4) pattern based mining: extract all bilingual data in the page using the learned patterns. Our experiments on Chinese web pages produced more than 7.5 million pairs of bilingual sentences and more than 5 million pairs of bilingual terms, both with over 80% accuracy. 1 Introduction Bilingual data (including bilingual sentences and bilingual terms) are critical resources for building many applications, such as machine translation (Brown, 1993) and cross language information retrieval (Nie et al., 1999). However, most existing bilingual data sets are (i) not adequate for their intended uses, (ii) not up-to-date, (iii) apply only to limited domains. Because it‟s very hard and expensive to create a large scale bilin 1 In this paper terms refer to proper nouns, technical terms, movie names, and so on. And bilingual terms/sentences mean terms/sentences and their translations. gual dataset with human effort, recently many researchers have turned to automatically mining them from the Web. If the content of a web page is written in two languages, we call the page a Bilingual Web Page. Many such pages exist in non-English web sites. Most of them have a primary language (usually a non-English language) and a secondary language (usually English). The content in the secondary language is often the translation of some primary language text in the page. Since bilingual web pages are very common in non-English web sites, mining bilingual data from them should be an important task. However, as far as we know, there is no publication available on mining bilingual sentences directly from bilingual web pages. Most existing methods for mining bilingual sentences from the Web, such as (Nie et al., 1999; Resnik and Smith, 2003; Shi et al., 2006), try to mine parallel web documents within bilingual web sites first and then extract bilingual sentences from mined parallel documents using sentence alignment methods. As to mining term translations from bilingual web pages, Cao et al. (2007) and Lin et al. 
(2008) proposed two different methods to extract term translations based on the observation that authors of many bilingual web pages, especially those whose primary language is Chinese, Japanese or Korean, sometimes annotate terms with their English translations inside a pair of parentheses, like “c1c2...cn(e1 e2 ... em)” (c1c2...cn is a primary language term and e1 e2 ... em is its English translation). Actually, in addition to the parenthesis pattern, there is another interesting phenomenon that in many bilingual web pages bilingual data appear collectively and follow similar surface patterns. Figure 1 shows an excerpt of a page which introduces different kinds of dogs2. The page provides 2 http://www.chinapet.net 870 a list of dog names in both English and Chinese. Note that those bilingual names do not follow the parenthesis pattern. However, most of them are identically formatted as: “{Number}。{English name}{Chinese name}{EndOfLine}”. One exceptional pair (“1.Alaskan Malamute 啊拉斯加 雪橇犬”) differs only slightly. Furthermore, there are also many pages containing consistently formatted bilingual sentences (see Figure 2). The page3 lists the (claimed) 200 most common oral sentences in English and their Chinese translations to facilitate English learning. Figure 1. Consistently formatted term translation pairs Figure 2. Consistently formatted sentence translation pairs People create such web pages for various reasons. Some online stores list their products in two languages to make them understandable to foreigners. Some pages aim to help readers with foreign language learning. And in some pages where foreign names or technical terms are mentioned, the authors provide the translations for disambiguation. For easy reference, from now on we will call pages which contain many consistently formatted translation pairs Collective Bilingual Pages. According to our estimation, at least tens of millions of collective bilingual pages exist in Chinese web sites. Most importantly, each such page usually contains a large amount of bilingual 3 http://cul.beelink.com/20060205/2021119.shtml data. This shows the great potential of bilingual data mining. However, the mining task is not straightforward, for the following reasons: 1) The patterns vary in different pages, so it‟s impossible to mine the translation pairs using predefined templates; 2) Some pages contain consistently formatted texts in two languages but they are not translation pairs; 3) Not all translations in a collective bilingual page necessarily follow an exactly consistent format. As shown in Figure 1, the ten translation pairs are supposed to follow the same pattern, however, due to typos, the pattern of the first pair is slightly different. Because of these difficulties, simply using a classifier to extract translation pairs from adjacent bilingual texts in a collective bilingual page may not achieve satisfactory results. Therefore in this paper, we propose a pattern-based approach: learning patterns adaptively from collective bilingual pages instead of using the parenthesis pattern, then using the learned patterns to extract translation pairs from corresponding web pages. 
Specifically, our approach contains four steps: 1) Preprocessing: parse the web page into a DOM tree and segment the inner text of each node into snippets; 2) Seed mining: identify potential translation pairs (seeds) using an alignment model which takes both translation and transliteration into consideration; 3) Pattern learning: learn generalized patterns with the identified seeds; 4) Pattern based mining: extract all bilingual data in the page using the learnt patterns. Let us take mining bilingual data from the text shown in Figure 1 as an example. Our method identifies “Boxer 拳师” and “Eskimo Dog 爱斯 基摩犬” as two potential translation pairs based on a dictionary and a transliteration model (Step 2 above). Then we learn a generalized pattern that both pairs follow as “{BulletNumber}{Punctuation}{English term}{Chinese term}{EndOfLine}”, (Step 3 above). Finally, we apply it to match in the entire text and get all translation pairs following the pattern (Step 4 above). The remainder of this paper is organized as follows. In Section 2, we list some related work. The overview of our mining approach is presented in Section 3. In Section 4, we give de871 tailed introduction to each of the four modules in our mining approach. The experimental results are reported in Section 5 followed by our conclusion and some future work in Section 6. Please note that in this paper we describe our method using example bilingual web pages in English and Chinese, however, the method can be applied to extract bilingual data from web pages written in any other pair of languages, such as Japanese and English, Korean and English etc. 2 Related Work Mining Bilingual Data from the Web As far as we know, there is no publication available on mining parallel sentences directly from bilingual web pages. Most existing methods of mining bilingual sentences from the Web, such as (Nie et al., 1999; Resnik and Smith, 2003; Shi et al., 2006), mine parallel web documents within bilingual web sites first and then extract bilingual sentences from mined parallel documents using sentence alignment methods. However, since the number of bilingual web sites is quite small, these methods can not yield a large number of bilingual sentences. (Shi et al., 2006), mined a total of 1,069,423 pairs of English-Chinese parallel sentences. In addition to mining from parallel documents, (Munteanu and Marcu, 2005) proposed a method for discovering bilingual sentences in comparable corpora. As to the term translation extraction from bilingual web pages, (Cao et al., 2007) and (Lin et al., 2008) proposed two different methods utilizing the parenthesis pattern. The primary insight is that authors of many bilingual web pages, especially those whose primary language is Chinese, Japanese or Korean sometimes annotate terms with their English translations inside a pair of parentheses. Their methods are tested on a large set of web pages and achieve promising results. However, since not all translations in bilingual web pages follow the parenthesis pattern, these methods may miss a lot of translations appearing on the Web. Apart from mining term translations directly from bilingual web pages, more approaches have been proposed to mine term translations from text snippets returned by a web search engine (Jiang et al., 2007; Zhang and Vines, 2004; Cheng et al., 2004; Huang et al., 2005). In their methods the source language term is usually given and the goal is to find the target language translations from the Web. 
To obtain web pages containing the target translations, they submit the source term to the web search engine and collect returned snippets. Various techniques have been proposed to extract the target translations from the snippets. Though these methods achieve high accuracy, they are not suitable for compiling a large-scale bilingual dictionary for the following reasons: 1) they need a list of predefined source terms which is not easy to obtain; 2) the relevance ranking in web search engines is almost entirely orthogonal to the intent of finding the bilingual web pages containing the target translation, so many desired bilingual web pages may never be returned; 3) most such methods rely heavily on the frequency of the target translation in the collected snippets which makes mining low-frequency translations difficult. Moreover, based on the assumption that anchor texts in different languages referring to the same web page are possibly translations of each other, (Lu et al., 2004) propose a novel approach to construct a multilingual lexicon by making use of web anchor texts and their linking structure. However, since only famous web pages may have inner links from other pages in multiple languages, the number of translations that can be obtained with this method is limited. Pattern-based Relation Extraction Pattern-based relation extraction has also been studied for years. For instance, (Hearst, 1992; Finkelstein-Landau and Morin, 1999) proposed an iterative pattern learning method for extracting semantic relationships between terms. (Brin, 1998) proposed a method called DIPRE (Dual Iterative Pattern Relation Expansion) to extract a relation of books (author, title) pairs from the Web. Since translation can be regarded as a kind of relation, those ideas can be leveraged for extracting translation pairs. 3 Overview of the Proposed Approach Web pages Seed mining Pattern-based mining Pattern learning Preprocessing Bilingual dictionary input output depend Translation pairs Transliteration model depend Figure 3. The framework of our approach 872 As illustrated in Figure 3, our mining system consists of four main steps: preprocessing, seed mining, pattern learning and pattern based mining. The input is a set of web documents and the output is mined bilingual data. In the preprocessing step, the input web documents are parsed into DOM trees and the inner text of each tree node is segment into snippets. Then we select those tree nodes whose inner texts are likely to contain translation pairs collectively with a simple rule. The seed mining module receives the inner text of each selected tree node and uses a wordbased alignment model to identify potential translation pairs. The alignment model can handle both translation and transliteration in a unified framework. The pattern learning module receives identified potential translation pairs from the seed mining as input, and then extracts generalized pattern candidates with the PAT tree algorithm. Then a SVM classifier is trained to select good patterns from all extracted pattern candidates. In the pattern-based mining step, the selected patterns were used to match within the whole inner text to extract all translation pairs following the patterns. 4 Adaptive Pattern-based Bilingual Data Mining In this section, we will present the details about the four steps in the proposed approach. 4.1 Preprocessing HTML Page Parsing The Document Object Model (DOM) is an application programming interface used for parsing HTML documents. 
With DOM, an HTML document is parsed into a tree structure, where each node belongs to some predefined types (e.g. DIV, TABLE, TEXT, COMMENT, etc.). We removed nodes with types of “B”, “FONT”, “I” and so on, because they are mainly used for controlling visual effect. After removal, their child nodes will be directly connected to their parents. Text Segmentation After an HTML document is parsed, the inner text of each node in the DOM tree will be segmented into a list of text snippets according to their languages. That means each snippet will be labeled as either an English snippet (E) or a Chinese snippet (C). The text segmentation was performed based on the Unicode values of characters 4 first and then guided by the following rules to decide the boundary of a snippet under some special situations: 1) Open punctuations (such as „(„) are padded into next snippet, and close punctuations (such as „)‟) are padded into previous snippet; other punctuations (such as „;„) are padded into previous snippet; 2) English snippets which contains only 1 or 2 ASCII letters are merged with previous and next Chinese snippets (if exist). Since sometimes Chinese sentences or terms also contain some abbreviations in English. Table 1 gives some examples of how the inner texts are segmented. Inner text China Development Bank (中国) 国 家开发银行 Segmentation China Development Bank |(中国) 国家开发银行 Inner text Windows XP 视窗操作系统XP 版 Segmentation Windows XP |视窗操作系统XP 版 Table 1. Example segmentations („|‟ indicates the separator between adjacent snippets) Since a node‟s inner text includes all inner texts of its children, the segmentation to all texts of a DOM tree has to be performed from the leaf nodes up to the root in order to avoid repetitive work. When segmenting a node‟s inner text, we first segment the texts immediately dominated by this node and then combine those results with its children‟s segmented inner texts in sequence. As a result of the segmentation, the inner text of every node will look like “…ECECC 5EC…”. Two adjacent snippets in different languages (indicated as “EC” or “CE”) are considered a Bilingual Snippet Pair (BSP). Collective Nodes Selection Since our goal is to mine bilingual knowledge from collective bilingual pages, we have to decide if a page is really a collective bilingual page. In this paper, the criterion is that a collective page must contain at least one Collective Node which is defined as a node whose inner text contains no fewer than 10 non-overlapping bilingual snippet pairs and which contains less than 10 4 For languages with the same character zone, other techniques are needed to segment the text. 5 Adjacent snippets in the same language only appear in the inner texts of some non-leaf nodes. 873 percent of other snippets which do not belong to any bilingual snippet pairs. 4.2 Seed Mining The input of this module is a collective node whose inner text has been segmented into continuous text snippets, such as …EkChEk+1Ch+1Ch+2…. In this step, every adjacent snippet pair in different languages will be checked by an alignment model to see if it is a potential translation pair. The alignment model combines a translation and a transliteration model to compute the likelihood of a bilingual snippet pair being a translation pair. If it is, we call the snippet pair as a Translation Snippet Pair (TSP). If both of two adjacent pairs, e.g. EkCh and ChEk+1, are considered as TSPs, the one with lower translation score will be regarded as a NON-TSP. 
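The seed-mining step consumes the sequences of language-labelled snippets produced by the preprocessing module. Purely as an illustration of how such sequences and the resulting bilingual snippet pairs could be produced, a simplified Python sketch is given below; it is not the system's code, and the character test, the punctuation-padding rules, and the treatment of short ASCII runs are all simplified assumptions.

```python
import re

CJK = re.compile(r'[\u4e00-\u9fa5]')

def lang_of(ch):
    # Simplified: CJK characters count as Chinese, everything else as English.
    return 'C' if CJK.match(ch) else 'E'

def segment(text):
    """Split an inner text into labelled snippets; whitespace and punctuation
    are padded into the previous snippet (a simplification of the rules above)."""
    snippets = []
    for ch in text:
        if ch.isspace() or not (ch.isalnum() or CJK.match(ch)):
            if snippets:
                snippets[-1][1] += ch
            continue
        lang = lang_of(ch)
        if snippets and snippets[-1][0] == lang:
            snippets[-1][1] += ch
        else:
            snippets.append([lang, ch])
    return [(lang, s) for lang, s in snippets]

def bilingual_snippet_pairs(snippets):
    """Adjacent snippets in different languages form candidate (E, C) pairs."""
    pairs = []
    for (l1, s1), (l2, s2) in zip(snippets, snippets[1:]):
        if {l1, l2} == {'E', 'C'}:
            e, c = (s1, s2) if l1 == 'E' else (s2, s1)
            pairs.append((e.strip(), c.strip()))
    return pairs

# Example: bilingual_snippet_pairs(segment("1. Boxer 拳师")) -> [('1. Boxer', '拳师')]
```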
Before computing the likelihood of a bilingual snippet pair being a TSP, we preprocess it via the following steps: a) Isolating the English and Chinese contents from their contexts in the bilingual snippet pair. Here, we use a very simple rule: in the English snippet, we regard all characters within (and including) the first and the last English letter in the snippet as the English content; similarly, in the Chinese snippet we regard all characters within (and including) the first and the last Chinese character in the snippet as the Chinese content; b) Word segmentation of the Chinese content. Here, the Forward Maximum Matching algorithm (Chen and Liu, 1992) based on a dictionary is adopted; c) Stop words filtering. We compiled a small list of stop words manually (for example, “of”, “to”, “的”, etc.) and remove them from the English and Chinese content; d) Stemming of the English content. We use an in-house stemming tool to get the uninflected form of all English words. After preprocessing, all English words form a collection E={e1,e2,…,em } and all Chinese words constitute a collection C={c1,c2,…,cn}, where ei is an English word, and ci is a Chinese word. We then use a linking algorithm which takes both translation and transliteration into consideration to link words across the two collections. In our linking algorithm, there are three situations in which two words will be linked. The first is that the two words are considered translations of each other by the translation dictionary. The second is that the pronunciation similarity of the two words is above a certain threshold so that one can be considered the transliteration of the other. The third is that the two words are identical (this rule is especially designed for linking numbers or English abbreviations in Chinese snippets). The dictionary is an in-house dictionary and the transliteration model is adapted from (Jiang et al., 2007). After the linking, a translation score over the English and Chinese content is computed by calculating the percentage of words which can be linked in the two collections. For some pairs, there are many conflicting links, for example, some words have multiple senses in the dictionary. Then we select the one with highest translation score. For example, given the bilingual snippet pair of “Little Smoky River” and “小斯莫基河”, its English part is separated as “Little/Smoky/River”, and its Chinese part is separated as “小/斯/莫/基/ 河”. According to the dictionary, “Little” can be linked with “小”, and “River” can be linked with “河”. However, “Smoky” is translated as “冒烟 的” in the dictionary which does not match any Chinese characters in the Chinese snippet. However the transliteration score (pronunciation similarity) between “Smoky” (IPA: s.m.o.k.i) and “斯/莫/基” (Pinyin: si mo ji) is higher than the threshold, so the English word “Smoky” can be linked to three Chinese characters “斯”, “莫” and “基”. The result is a translation score of 1.0 for the pair “Little Smoky River” and “小斯莫基河”. 4.3 Pattern Learning The pattern learning module is critical for mining bilingual data from collective pages, because many translation pairs whose translation scores are not high enough may still be extracted by pattern based mining methods. In previous modules, the inner texts of all nodes are segmented into continuous text snippets, and translation snippet pairs (TSP) are identified in all bilingual snippet pairs. 
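Before moving on to pattern learning, the linking-based translation score just described can be sketched compactly. In the sketch below, `dictionary` and `translit_sim` are placeholders standing in for the in-house bilingual dictionary and the transliteration model adapted from (Jiang et al., 2007); the resolution of conflicting links is omitted, so this is an approximation of the procedure rather than a faithful reimplementation.

```python
def translation_score(en_words, zh_words, dictionary, translit_sim, threshold=0.6):
    """Fraction of words in the two collections that can be linked.
    `dictionary` maps an English word to a set of Chinese translations and
    `translit_sim` returns a pronunciation similarity in [0, 1]; both are
    placeholders for the resources described above, and the threshold value
    is an assumption."""
    linked_en, linked_zh = set(), set()
    for i, e in enumerate(en_words):
        for j, z in enumerate(zh_words):
            linked = (
                e == z                              # identical tokens: numbers, abbreviations
                or z in dictionary.get(e, ())       # dictionary translation
                or translit_sim(e, z) >= threshold  # transliteration match
            )
            if linked:
                linked_en.add(i)
                linked_zh.add(j)
    total = len(en_words) + len(zh_words)
    return (len(linked_en) + len(linked_zh)) / total if total else 0.0
```

With a dictionary that links "little" to 小 and "river" to 河, and a transliteration model that scores "smoky" against each of 斯, 莫 and 基 above the threshold, this score comes out as 1.0 for the "Little Smoky River" example.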
Next, in the pattern learning module, those translation snippet pairs are used to find candidate patterns and then a SVM classifier is built to select the most useful patterns shared by most translation pairs in the whole text. 874 Candidate Pattern Extraction First, as in the seed mining module, we isolate the English and Chinese contents from their contexts in a TSP and then replace the contents with two placeholders “[E]” and “[C]” respectively. Second, we merge the two snippets of a TSP into a string and add a starting tag “[#]” and an ending tag “[#]” to its start and end. Following (Chang and Lui, 2001), all processed strings are used to build a PAT tree, and we then extract all substrings containing “E” and “C” as pattern candidates from the PAT tree. However, pattern candidates which start or end with “[E]” (or “[C]”) will be removed, since they cannot specify unambiguous boundaries when being matched in a string. Web page authors commonly commit formatting errors when authoring the content into an html page, as shown in Figure 1. There, the ten bilingual terms should have been written in the same pattern, however, because of the mistaken use of “.” instead of “。”, the first translation pair follows a slightly different pattern. Some other typical errors may include varying length or types of white space, adjacent punctuation marks instead of one punctuation mark, and so on. To make the patterns robust enough to handle such variation, we generalized all pattern candidates through the following two steps: 1) Replace characters in a pattern with their classes. We define three classes of characters: Punctuation (P), Number (N), and White Space (S). Table 2 lists the three classes and the corresponding regular expressions in Microsoft .Net Framework6. 2) Merge identical adjacent classes. Class Corresponding regular expression P [\p{P}] N [\d] S [\s] Table 2. Character classes For example, from the translation snippet pair of “7. Don‟t worry.” and “别担心。”, we will learn the following pattern candidates:  “#[N][P][S][E][P][S][C][P]#”;  “[N][P][S][E][P][S][C][P]#”;  “[N][P][S][E][P][S][C][P]”;  …  “[S][E][P][S][C][P]”; 6 In System.Text.RegularExpressions namespace Pattern Selection After all pattern candidates are extracted, a SVM classifier is used to select the good ones:    x w x fw     , ) ( where, x is the feature vector of a pattern candidate pi, and w is the vector of weights.  , stands for an inner product. f is the decision function to decide which candidates are good. In this SVM model, each pattern candidate pi has the following four features: 1) Generality: the percentage of those bilingual snippet pairs which can match pi in all bilingual snippet pairs. This feature measures if the pattern is a common pattern shared by many bilingual snippet pairs; 2) Average translation score: the average translation score of all bilingual snippet pairs which can match pi. This feature helps decide if those pairs sharing the same pattern are really translations; 3) Length: the length of pi. In general, longer patterns are more specific and can produce more accurate translations, however, they are likely to produce fewer matches; 4) Irregularity: the standard deviation of the numbers of noisy snippets. Here noisy snippets mean those snippets between any two adjacent translation pairs which can match pi. If the irregularity of a pattern is low, we can be confident that pairs sharing this pattern have a reliably similar inner relationship with each other. 
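The feature-based pattern selection operates on generalized candidates, so as an illustration of the two generalization steps described above (mapping context characters to the classes of Table 2 and merging identical adjacent classes), a simple sketch is given below. The use of Unicode categories to detect punctuation and the literal treatment of all other characters are assumptions made for the example; the actual implementation may differ in detail.

```python
import unicodedata

CLASSES = ('[S]', '[N]', '[P]')

def char_class(ch):
    """Map a context character to one of the classes of Table 2, or keep it literally."""
    if ch.isspace():
        return '[S]'
    if ch.isdigit():
        return '[N]'
    if unicodedata.category(ch).startswith('P'):
        return '[P]'
    return ch

def generalize(context):
    """Generalize a pattern string in which the two translation contents have
    already been replaced by the placeholders [E] and [C], e.g.
    '#7. [E] [C]。#'  ->  '#[N][P][S][E][S][C][P]#'."""
    out, i = [], 0
    while i < len(context):
        if context[i] == '#':                        # start / end tag
            out.append('#')
            i += 1
        elif context.startswith(('[E]', '[C]'), i):  # content placeholders
            out.append(context[i:i + 3])
            i += 3
        else:
            cls = char_class(context[i])
            if cls in CLASSES and out and out[-1] == cls:
                pass                                 # merge identical adjacent classes
            else:
                out.append(cls)
            i += 1
    return ''.join(out)
```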
To estimate the weight vector, we extracted all pattern candidates from 300 bilingual web pages and asked 2 human annotators to label each of the candidates as positive or negative. The annotation took each of them about 20 hours. Then with the labeled training examples, we use SVM light7 to estimate the weights. 4.4 Pattern-based Mining After good patterns are selected, every two adjacent snippets in different languages in the inner text will be merged as a target string. As we mentioned previously, we add a starting tag “[#]” and an ending tag “[#]” to the start and end of every target string. Then we attempt to match each of the selected patterns in each of the target strings and extract translation pairs. If the target 7 http://svmlight.joachims.org/ 875 string was matched with more than one pattern, the matched string with highest translation score will be kept. The matching process is actually quite simple, since we transform the learnt patterns into standard regular expressions and then make use of existing regular expression matching tools (e.g., Microsoft .Net Framework) to extract translation pairs. However, to make our patterns more robust, when transforming the selected patterns into standard regular expressions, we allow each character class to match more than once. That means “[N]”, “[P]” and “[S]” will be transformed into “[\d]+”, “[\p{P}]+” and “[\s]+” respectively. And “[E]” and “[C]” will be transformed into “[^\u4e00-\u9fa5]+” (any character except Chinese character) and “.+”, respectively. 5 Experimental Results In the following subsections, first, we will report the results of our bilingual data mining on a large set of Chinese web pages and compare them with previous work. Second, we will report some experimental results on a manually constructed test data set to analyze the impact of each part of our method. 5.1 Evaluation on a Large Set of Pages With the proposed method, we performed bilingual data extraction on about 3.5 billion web pages crawled from Chinese web sites. Out of them, about 20 million were determined to contain bilingual collective nodes. From the inner texts of those nodes, we extracted 12,610,626 unique translation pairs. If we consider those pairs whose English parts contain more than 5 words as sentence translations and all others as term translations, we get 7,522,803 sentence translations and 5,087,823 term translations. We evaluated the quality of these mined translations by sampling 200 sentence translations and 200 term translations and presenting those to human judges, with a resulting precision of 83.5% for sentence translations and 80.5% for term translations. As we mentioned in Section 2, (Shi et al., 2006) reported that in total they mined 1,069,423 pairs of English-Chinese parallel sentences from bilingual web sites. However, our method yields about 7.5 million pairs, about seven times as many. We also re-implemented the extraction method using the parenthesis pattern proposed by (Lin et al., 2008) and were able to mine 6,538,164 bilingual terms from the same web pages. A sample of 200 terms was submitted for human judgment, resulting in a precision of 78.5% which is a little lower than that of our original result. Further analysis showed that fewer than 20% of the bilingual terms mined with our method overlap with the data mined using the re-implemented method proposed by (Lin et al., 2008). 
This indicates that our method can find many translations which are not covered by the parenthesis pattern and therefore can be used together with the parenthesis pattern based method to build a bilingual lexicon. Out of the term translations we mined, we found many which co-occur with their source terms only once in the Web. We check this by searching in Google with a Boolean query made of the term and its translation and then get the number of pages containing the query. If one attempts to extract this kind of low-frequency translation using a search engine-based method, the desired bilingual page which contains the target translation is not likely to be returned in the top n results when searching with the source term as the query. Even if the desired page is returned, the translation itself may be difficult to extract due to its low frequency. 5.2 Evaluation on a Human Made Test Data Set Besides the evaluation of our method on a huge set of web pages, we also carried out some experiments on a human-constructed test data set. We randomly selected 500 collective nodes from the huge set of Chinese web pages and asked two annotators to label all bilingual data in their inner texts. Half of the labeled data are then used as the development data set and the rest as the test data set to evaluate our systems with different settings. Table 3 shows the evaluation results. Setting Type Recall Precision F-Score Without pattern Exact 52.2 75.4 61.7 Fuzzy 56.3 79.3 65.8 Without PG Exact 69.2 78.6 73.6 Fuzzy 74.3 82.9 78.4 With PG Exact 79.3 80.5 79.9 Fuzzy 86.7 87.9 87.3 Table 3. Performance of different settings In Table 3, “Without pattern” means that we simply treat those seed pairs found by the alignment model as final bilingual data. “Without PG” and “With PG” mean not generalizing and generalizing the learnt patterns to class based form, 876 respectively. Evaluation type “Exact” means the mined bilingual data are considered correct only if they are exactly same as the data labeled by human, while “Fuzzy” means the mined bilingual data are considered correct if they contain the data labeled by the human. As shown in Table 3, the system without pattern-based extraction yields only 52.2% recall. However, after adding pattern-based extraction, recall is improved sharply, to 69.2% for “Without PG” and to 79.3% for “With PG”. Most of the improvement comes from those translations which have very low translation scores and therefore are discarded by the seed mining module, however, most of them are found with the help of the learnt patterns. From Table 3, we can also see that the system “With PG” outperforms “Without PG” in terms of both precision and recall. The reason may be that web writers often make mistakes when writing on web pages, such as punctuation misuse, punctuation loss, and extra spaces etc., so extracting with a strict surface pattern will often miss those translations which follow slightly different patterns. To find out the reasons why some nontranslation pairs are extracted, we checked 20 pairs which are not translations but extracted by the system. Out of them, 5 are caused by wrong segmentations. For example, “大提琴与小提琴 双重协奏曲Double Concerto for Violin and Cello D 大调第二交响曲Symphony No.2 in D Major” is segmented into “大提琴与小提琴双重 协奏曲”, “Double Concerto for Violin and Cello D”, “大调第二交响曲”, and “Symphony No.2 in D Major”. However, the ending letter „D‟ of the second segment should have been padded into the third segment. 
For 9 pairs, the Chinese parts are explanative texts of corresponding English texts, but not translations. Because they contain the translations of the key words in the English text, our seed mining module failed to identify them as non-translation pairs. For 3 pairs, they follow the same pattern with some genuine translation pairs and therefore were extracted by the pattern based mining module. However, they are not translation pairs. For the other 3 pairs, the errors came from the pattern generalization. To evaluate the contribution of each feature used in the pattern selection module, we eliminated one feature at a time in turn from the feature set to see how the performance changed in the absence of any single feature. The results are reported below. Eliminated feature F-Score (Exact) Null 79.9 Generality 72.3 Avg. translation score 74.3 Length 77.5 Irregularity 76.6 Table 4. Contribution of every feature From the table above, we can see that every feature contributes to the final performance and that Generality is the most useful feature among all four features. 6 Conclusions Bilingual web pages have shown great potential as a source of up-to-date bilingual terms/sentences which cover many domains and application types. Based on the observation that many web pages contain bilingual data collections which follow a mostly consistent but possibly somewhat variable pattern, we propose a unified approach for mining bilingual sentences and terms from such pages. Our approach can adaptively learn translation patterns according to different formatting styles in various web pages and then use the learnt patterns to extract more bilingual data. The patterns are generalized to minimize the impact of format variation and typos. According to experimental results on a large set of web pages as well as on a manually made test data set, our method is quite promising. In the future, we would like to integrate the text segmentation module with the seed mining and pattern learning module to improve the accuracy of text segmentation. We also want to evaluate the usefulness of our mined data for machine translation or other applications. References P. F. Brown, S. A. Della Pietra, V. J. Della Pietra and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19:2, 263-311. Sergey Brin. 1998. Extracting patterns and relations from the World Wide Web. In Proc. of the 1998 International Workshop on the Web and Databases. Pp: 172-183. G.H. Cao, J.F. Gao and J.Y. Nie. 2007. A system to mine large-scale bilingual dictionaries from monolingual web pages. MT summit. Pp: 57-64. 877 Chia-Hui Chang and Shao-Chen Lui. 2001. IEPAD: Inform extract based on pattern discovery. In Proc. of the 10th ACM WWW conference. Keh-Jiann Chen, Shing-Huan Liu. 1992. Word Identification for Mandarin Chinese Sentences. In the Proceedings of COLING 1992. Pp:101-107. Cheng, P., Teng, J., Chen, R., Wang, J., Lu, W., and Cheng, L. 2004. Translating Unknown Queries with Web Corpora for Cross-Language Information Retrieval. In the Proceedings of SIGIR 2004, pp 162-169. Michal Finkelstein-Landau, Emmanuel Morin. 1999. Extracting Semantic Relationships between Terms: Supervised vs. Unsupervised Methods. In Proceedings of International Workshop on Ontological Engineering on the Global Information Infrastructure. Pp:71-80. Marti A. Hearst. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. In the Proceedings of COLING-92. Pp: 539-545. 
Huang, F., Zhang, Y., and Vogel, S. 2005. Mining Key phrase Translations from Web Corpora. In the Proceedings of HLT-EMNLP. L. Jiang, M. Zhou, L.-F. Chien, C. Niu. 2007. Named Entity Translation with Web Mining and Transliteration, Proceedings of the 20th IJCAI. Pp: 16291634. D. Lin, S. Zhao, B. Durme and M. Pasca. 2008. Mining Parenthetical Translations from the Web by Word Alignment. In ACL-08. pp 994-1002. Lu, W. and Lee, H. 2004. Anchor text mining for translation of Web queries: A transitive translation approach. ACM transactions on Information Systems, Vol.22, April 2004, pages 242-269. D. S. Munteanu, D. Marcu. Improving Machine Translation Performance by Exploiting NonParallel Corpora. 2005. Computational Linguistics. 31(4). Pp: 477-504. J-Y Nie, M. Simard, P. Isabelle, and R. Durand. 1999. Cross-Language Information Retrieval Based on Parallel Texts and Automatic Mining of parallel Text from the Web. In SIGIR 1999. Pp: 74-81. Philip Resnik, Noah A. Smith. 2003. The Web as a Parallel Corpus. Computational Linguistics. 29(3). Pp: 349-380. Li Shao and Hwee Tou Ng. 2004. Mining new word translations from comparable corpora. In Proc. of COLING 2004. Pp: 618–624. Lei Shi, Cheng Niu, Ming Zhou, Jianfeng Gao. 2006. A DOM Tree Alignment Model for Mining Parallel Data from the Web. In ACL 2006. Jung H. Shin, Young S. Han and Key-Sun Choi. 1996. Bilingual knowledge acquisition from KoreanEnglish parallel corpus using alignment method: Korean-English alignment at word and phrase level. In Proceedings of the 16th conference on Computational linguistics, Copenhagen, Denmark. J.C. Wu, T. Lin and J.S. Chang. 2005. Learning Source-Target Surface Patterns for Web-based Terminology Translation. ACL Interactive Poster and Demonstration Sessions,. Pp 37-40, Ann Arbor. Zhang, Y. and Vines, P.. 2004. Using the Web for Automated Translation Extraction in CrossLanguage Information Retrieval. In the Proceedings of SIGIR 2004. Pp: 162-169. 878
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 879–887, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Comparing Objective and Subjective Measures of Usability in a Human-Robot Dialogue System Mary Ellen Foster and Manuel Giuliani and Alois Knoll Informatik VI: Robotics and Embedded Systems Technische Universit¨at M¨unchen Boltzmannstraße 3, 85748 Garching bei M¨unchen, Germany {foster,giuliani,knoll}@in.tum.de Abstract We present a human-robot dialogue system that enables a robot to work together with a human user to build wooden construction toys. We then describe a study in which na¨ıve subjects interacted with this system under a range of conditions and then completed a user-satisfaction questionnaire. The results of this study provide a wide range of subjective and objective measures of the quality of the interactions. To assess which aspects of the interaction had the greatest impact on the users’ opinions of the system, we used a method based on the PARADISE evaluation framework (Walker et al., 1997) to derive a performance function from our data. The major contributors to user satisfaction were the number of repetition requests (which had a negative effect on satisfaction), the dialogue length, and the users’ recall of the system instructions (both of which contributed positively). 1 Introduction Evaluating the usability of a spoken language dialogue system generally requires a large-scale user study, which can be a time-consuming process both for the experimenters and for the experimental subjects. In fact, it can be difficult even to define what the criteria are for evaluating such a system (cf. Novick, 1997). In recent years, techniques have been introduced that are designed to predict user satisfaction based on more easily measured properties of an interaction such as dialogue length and speech-recognition error rate. The design of such performance methods for evaluating dialogue systems is still an area of open research. The PARADISE framework (PARAdigm for DIalogue System Evaluation; Walker et al. (1997)) describes a method for using data to derive a performance function that predicts user-satisfaction scores from the results on other, more easily computed measures. PARADISE uses stepwise multiple linear regression to model user satisfaction based on measures representing the performance dimensions of task success, dialogue quality, and dialogue efficiency, and has been applied to a wide range of systems (e.g., Walker et al., 2000; Litman and Pan, 2002; M¨oller et al., 2008). If the resulting performance function can be shown to predict user satisfaction as a function of other, more easily measured system properties, it will be widely applicable: in addition to making it possible to evaluate systems based on automatically available data from log files without the need for extensive experiments with users, for example, such a performance function can be used in an online, incremental manner to adapt system behaviour to avoid entering a state that is likely to reduce user satisfaction, or can be used as a reward function in a reinforcement-learning scenario (Walker, 2000). Automated evaluation metrics that rate system behaviour based on automatically computable properties have been developed in a number of other fields: widely used measures include BLEU (Papineni et al., 2002) for machine translation and ROUGE (Lin, 2004) for summarisation, for example. 
When employing any such metric, it is crucial to verify that the predictions of the automated evaluation process agree with human judgements of the important aspects of the system output. If not, the risk arises that the automated measures do not capture the behaviour that is actually relevant for the human users of a system. For example, Callison-Burch et al. (2006) presented a number of 879 counter-examples to the claim that BLEU agrees with human judgements. Also, Foster (2008) examined a range of automated metrics for evaluation generated multimodal output and found that few agreed with the preferences expressed by human judges. In this paper, we apply a PARADISE-style process to the results of a user study of a human-robot dialogue system. We build models to predict the results on a set of subjective user-satisfaction measures, based on objective measures that were either gathered automatically from the system logs or derived from the video recordings of the interactions. The results indicate that the most significant contributors to user satisfaction were the number of system turns in the dialogues, the users’ ability to recall the instructions given by the robot, and the number of times that the user had to ask for instructions to be repeated. The former two measures were positively correlated with user satisfaction, while the latter had a negative impact on user satisfaction; however the correlation in all cases was relatively low. At the end of the paper, we discuss possible reasons for these results and propose other measures that might have a larger effect on users’ judgements. 2 Task-Based Human-Robot Dialogue This study makes use of the JAST human-robot dialogue system (Rickert et al., 2007) which supports multimodal human-robot collaboration on a joint construction task. The user and the robot work together to assemble wooden construction toys on a common workspace, coordinating their actions through speech, gestures, and facial displays. The robot (Figure 1) consists of a pair of manipulator arms with grippers, mounted in a position to resemble human arms, and an animatronic talking head (van Breemen, 2005) capable of producing facial expressions, rigid head motion, and lip-synchronised synthesised speech. The system can interact in English or German. The robot is able to manipulate objects in the workspace and to perform simple assembly tasks. In the system that was used in the current study, the robot instructs the user on building a particular compound object, explaining the necessary assembly steps and retrieving pieces as required, with the user performing the actual assembly actions. To make joint action necessary for success in the assembly task, the workspace is divided into Figure 1: The JAST dialogue robot SYSTEM First we will build a windmill. Okay? USER Okay. SYSTEM To make a windmill, we must make a snowman. SYSTEM [picking up and holding out red cube] To make a snowman, insert the green bolt through the end of this red cube and screw it into the blue cube. USER [takes cube, performs action] Okay. SYSTEM [picking up and holding out a small slat] To make a windmill, insert the yellow bolt through the middle of this short slat and the middle of another short slat and screw it into the snowman. USER [takes slat, performs action] Okay. SYSTEM Very good! 
Figure 2: Sample human-robot dialogue 880 (a) Windmill (b) Snowman (c) L Shape (d) Railway signal Figure 3: The four target objects used in the experiment two areas—one belonging to the robot and one to the user—so that the robot must hand over some pieces to the user. Figure 2 shows a sample dialogue in which the system explains to the user how to build an object called a ‘windmill’, which has a sub-component called a ‘snowman’. 3 Experiment Design The human-robot system was evaluated via a user study in which subjects interacted with the complete system; all interactions were in German. As a between-subjects factor, we manipulated two aspects of the generated output: the strategy used by the dialogue manager to explain a plan to the user, and the type of referring expressions produced by the system. Foster et al. (2009) give the details of these factors and describes the effects of each individual manipulation. In this paper, we concentrate on the relationships among the different factors that were measured during the study: the efficiency and quality of the dialogues, the users’ success at building the required objects and at learning the construction plans for new objects, and the users’ subjective reactions to the system. 3.1 Subjects 43 subjects (27 male) took part in this experiment; the results of one additional subject were discarded due to technical problems with the system. The mean age of the subjects was 24.5, with a minimum of 14 and a maximum of 55. Of the subjects who indicated an area of study, the two most common areas were Informatics (12 subjects) and Mathematics (10). On a scale of 1–5, subjects gave a mean assessment of their knowledge of computers at 3.4, of speech-recognition systems at 2.3, and of human-robot systems at 2.0. The subjects were compensated for their participation in the experiment. 3.2 Scenario In this experiment, each subject built the same three objects in collaboration with the system, always in the same order. The first target was a ‘windmill’ (Figure 3a), which has a subcomponent called a ‘snowman’ (Figure 3b). Once the windmill was completed, the system then walked the user through building an ‘L shape’ (Figure 3c). Finally, the robot instructed the user to build a ‘railway signal’ (Figure 3d), which combines an L shape with a snowman. During the construction of the railway signal, the system asked the user if they remembered how to build a snowman and an L shape. If the user did not remember, the system explained the building process again; if they did remember, the system simply told them to build another one. 3.3 Dependent Variables We gathered a wide range of dependent measures: objective measures derived from the system logs and video recordings, as well as subjective measures based on the users’ own ratings of their experience interacting with the system. 3.3.1 Objective Measures We collected a range of objective measures from the log files and videos of the interactions. Like Litman and Pan (2002), we divided our objective measures into three categories based on those used in the PARADISE framework: dialogue efficiency, dialogue quality, and task success. The dialogue efficiency measures concentrated on the timing of the interaction: the time taken to complete the three construction tasks, the number of system turns required for the complete interaction, and the mean time taken by the system to respond to the user’s requests. We considered four measures of dialogue quality. 
The first two measures looked specifically for signs of problems in the interaction, using data au881 tomatically extracted from the logs: the number of times that the user asked the system to repeat its instructions, and the number of times that the user failed to take an object that the robot attempted to hand over. The other two dialogue quality measures were computed based on the video recordings: the number of times that the user looked at the robot, and the percentage of the total interaction that they spent looking at the robot. We considered these gaze-based measures to be measures of dialogue quality since it has previously been shown that, in this sort of task-based interaction where there is a visually salient object, participants tend to look at their partner more often when there is a problem in the interaction (e.g., Argyle and Graham, 1976). The task success measures addressed user success in the two main tasks undertaken in these interactions: assembling the target objects following the robot’s instructions, and learning and remembering to make a snowman and an L shape. We measured task success in two ways, corresponding to these two main tasks. The user’s success in the overall assembly task was assessed by counting the proportion of target objects that were assembled as intended (i.e., as in Figure 3), which was judged based on the video recordings. To test whether the subjects had learned how to build the sub-components that were required more than once (the snowman and the L shape), we recorded whether they said yes or no when they were asked if they remembered each of these components during the construction of the railway signal. 3.3.2 Subjective Measures In addition to the above objective measures, we also gathered a range of subjective measures. Before the interaction, we asked subjects to rate their current level on a set of 22 emotions (Ortony et al., 1988) on a scale from 1 to 4; the subjects then rated their level on the same emotional scales again after the interaction. After the interaction, the subjects also filled out a user-satisfaction questionnaire, which was based on that used in the user evaluation of the COMIC dialogue system (White et al., 2005), with modifications to address specific aspects of the current dialogue system and the experimental manipulations in this study. There were 47 items in total, each of which requested that the user choose their level of agreement with a given statement on a five-point Likert scale. The items were divided into the following categories: Mean (Stdev) Min Max Length (sec) 305.1 (54.0) 195.2 488.4 System turns 13.4 (1.73) 11 18 Response time (sec) 2.79 (1.13) 1.27 7.21 Table 1: Dialogue efficiency results Opinion of the robot as a partner 21 items addressing the ease with which subjects were able to interact with the robot Instruction quality 6 items specifically addressing the quality of the assembly instructions given by the robot Task success 11 items asking the user to rate how well they felt they performed on the various assembly tasks Feelings of the user 9 items asking users to rate their feelings while using the system At the end of the questionnaire, subjects were also invited to give free-form comments. 4 Results In this section, we present the results of each of the individual dependent measures; in the following section, we examine the relationship among the different types of measures. 
These results are based on the data from 40 subjects: we excluded results from two subjects for whom the video data was not clear, and from one additional subject who appeared to be ‘testing’ the system rather than making a serious effort to interact with it. 4.1 Objective Measures Dialogue efficiency The results on the dialogue efficiency measures are shown in Table 1. The average subject took 305.1 seconds—that is, just over five minutes—to build all three of the objects, and an average dialogue took 13 system turns to complete. When a user made a request, the mean delay before the beginning of the system response was about three seconds, although for one user this time was more than twice as long. This response delay resulted from two factors. First, preparing long system utterances with several referring expressions (such as the third and fourth system turns in Figure 2) takes some time; second, if a user made a request during a system turn (i.e., a ‘barge-in’ attempt), the system was not able to respond until the current turn was completed. 882 Mean (Stdev) Min Max Repetition requests 1.86 (1.79) 0 6 Failed hand-overs 1.07 (1.35) 0 6 Looks at the robot 23.55 (8.21) 14 50 Time looking at robot (%) 27 (8.6) 12 51 Table 2: Dialogue quality results These three measures of efficiency were correlated with each other: the correlation between length and turns was 0.38; between length and response time 0.47; and between turns and response time 0.19 (all p < 0.0001). Dialogue quality Table 2 shows the results for the dialogue quality measures: the two indications of problems, and the two measures of the frequency with which the subjects looked at the robot’s head. On average, a subject asked for an instruction to be repeated nearly two times per interaction, while failed hand-overs occurred just over once per interaction; however, as can be seen from the standard-deviation values, these measures varied widely across the data. In fact, 18 subjects never failed to take an object from the robot when it was offered, while one subject did so five times and one six times. Similarly, 11 subjects never asked for any repetitions, while five subjects asked for repetitions five or more times.1 On average, the subjects in this study spent about a quarter of the interaction looking at the robot head, and changed their gaze to the robot 23.5 times over the course of the interaction. Again, there was a wide range of results for both of these measures: 15 subjects looked at the robot fewer than 20 times during the interaction, 20 subjects looked at the robot between 20 to 30 times, while 5 subjects looked at the robot more than 30 times. The two measures that count problems were mildly correlated with each other (R2 = 0.26, p < 0.001), as were the two measures of looking at the robot (R2 = 0.13, p < 0.05); there was no correlation between the two classes of measures. Task success Table 3 shows the success rate for assembling each object in the sequence. Objects in italics represent sub-components, as follows: the first snowman was constructed as part of the windmill, while the second formed part of the railway signal; the first L-shape was a goal in itself, 1The requested repetition rate was significantly affected by the description strategy used by the dialogue manager; see Foster et al. (2009) for details. 
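As a brief aside on how correlation figures like those just reported can be obtained from the per-subject logs, the sketch below computes the Pearson coefficient and the corresponding R^2. The arrays are hypothetical placeholders for the logged measures, and the use of Pearson correlation is an assumption on our part, since the text does not state which coefficient was used.

import numpy as np
from scipy import stats

# hypothetical per-subject values taken from the interaction logs
length = np.array([305.1, 268.4, 331.0, 290.7, 412.3])   # dialogue length in seconds
turns = np.array([13, 12, 15, 13, 17])                    # number of system turns

r, p = stats.pearsonr(length, turns)                      # coefficient and p-value
print(f"r = {r:.2f}, R^2 = {r * r:.2f}, p = {p:.4g}")

In the study itself each array would hold one entry per retained subject, and the same call would be repeated for every pair of measures of interest.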
Object Rate Memory Snowman 0.76 Windmill 0.55 L shape 0.90 L shape 0.90 0.88 Snowman 0.86 0.70 Railway signal 0.71 Overall 0.72 0.79 Table 3: Task success results while the second was also part of the process of building the railway signal. The Rate column indicates subjects’ overall success at building the relevant component—for example, 55% of the subjects built the windmill correctly, while both of the L-shapes were built with 90% accuracy. For the second occurrence of the snowman and the Lshape, the Memory column indicates the percentage of subjects who claimed to remember how to build it when asked. The Overall row at the bottom indicates subjects’ overall success rate at building the three main target objects (windmill, L shape, railway signal): on average, a subject built about two of the three objects correctly. The overall correct-assembly rate was correlated with the overall rate of remembering objects: R2 = 0.20, p < 0.005. However, subjects who said that they did remember how to build a snowman or an L shape the second time around were no more likely to do it correctly than those who said that they did not remember. 4.2 Subjective Measures Two types of subjective measures were gathered during this study: responses on the usersatisfaction questionnaire, and self-assessment of emotions. Table 4 shows the mean results for each category from the user-satisfaction questionnaire across all of the subjects, in all cases on a 5-point Likert scale. The subjects in this study gave a generally positive assessment of their interactions with the system—with a mean overall satisfaction score of 3.75—and rated their perceived task success particularly highly, with a mean score of 4.1. To analyse the emotional data, we averaged all of the subjects’ emotional self-ratings before and after the experiment, counting negative emotions on an inverse scale, and then computed the difference between the two means. Table 5 shows the results from this analysis; note that this value was assessed on a 1–4 scale. While the mean emotional 883 Question category Mean (Stdev) Robot as partner 3.63 (0.65) Instruction quality 3.69 (0.71) Task success 4.10 (0.68) Feelings 3.66 (0.61) Overall 3.75 (0.57) Table 4: User-satisfaction questionnaire results Mean (Stdev) Min Max Before the study 2.99 (0.32) 2.32 3.68 After the study 3.05 (0.32) 2.32 3.73 Change +0.06 (0.24) −0.55 +0.45 Table 5: Mean emotional assessments score across all of the subjects did not change over the course of the experiment, the ratings of individual subjects did show larger changes. As shown in the final row of the table, one subject’s mean rating decreased by 0.55 over the course of the interaction, while that of another subject increased by 0.45. There was a slight correlation between the subjects’ description of their emotional state after the experiment and their responses to the questionnaire items asking for feelings about the interaction: R2 = 0.14, p < 0.01. 5 Building Performance Functions In the preceding section, we presented results on a number of objective and subjective measures, and also examined the correlation among measures of the same type. The results on the objective measures varied widely across the subjects; also, the subjects generally rated their experience of using the system positively, but again with some variation. 
In this section, we examine the relationship among measures of different types in order to determine which of the objective measures had the largest effect on users’ subjective reactions to the dialogue system. To determine the relationship among the factors, we employed the procedure used in the PARADISE evaluation framework (Walker et al., 1997). The PARADISE model uses stepwise multiple linear regression to predict subjective user satisfaction based on measures representing the performance dimensions of task success, dialogue quality, and dialogue efficiency, resulting in a predictor function of the following form: Satisfaction = n ∑ i=1 wi ∗N (mi) The mi terms represent the value of each measure, while the N function transforms each measure into a normal distribution using z-score normalisation. Stepwise linear regression produces coefficients (wi) describing the relative contribution of each predictor to the user satisfaction. If a predictor does not contribute significantly, its wi value is zero after the stepwise process. Using stepwise linear regression, we computed a predictor function for each of the subjective measures that we gathered during our study: the mean score for each of the individual user-satisfaction categories (Table 4), the mean score across the whole questionnaire (the last line of Table 4), as well as the difference between the users’ emotional states before and after the study (the last line of Table 5). We included all of the objective measures from Section 4.1 as initial predictors. The resulting predictor functions are shown in Table 6. The following abbreviations are used for the factors that occur in the table: Rep for the number of repetition requests, Turns for the number of system turns, Len for the length of the dialogue, and Mem for the subjects’ memory for the components that were built twice. The R2 column indicates the percentage of the variance that is explained by the performance function, while the Significance column gives significance values for each term in the function. Although the R2 values for the predictor functions in Table 6 are generally quite low, indicating that the functions do not explain most of the variance in the data, the factors that remain after stepwise regression still provide an indication as to which of the objective measures had an effect on users’ opinions of the system. In general, users who had longer interactions with the system (in terms of system turns) and who said that they remembered the robot’s instructions tended to give the system higher scores, while users who asked for more instructions to be repeated tended to give it lower scores; for the robot-as-partner questions, the length of the dialogue in seconds also made a slight negative contribution. None of the other objective factors contributed significantly to any of the predictor functions. 6 Discussion That the factors included in Table 6 were the most significant contributors to user satisfaction is not surprising. 
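To make the fitting procedure concrete, the following sketch shows one way a predictor function of this form can be estimated: each measure is z-score normalised and predictors are added greedily until the gain in R^2 falls below a threshold. This is our own simplified stand-in for true stepwise regression, which adds and removes predictors on the basis of significance tests; the measure names, the threshold, and the stopping criterion are illustrative assumptions rather than the procedure actually used in the analysis.

import numpy as np

def zscore(x):
    # the N(.) normalisation used in the PARADISE model
    return (x - x.mean()) / x.std()

def ols(columns, y):
    # least-squares fit with an intercept; returns coefficients and R^2
    A = np.column_stack([np.ones(len(y))] + columns)
    w, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ w
    return w, 1.0 - residuals.var() / y.var()

def forward_selection(measures, y, min_gain=0.02):
    # greedily add the normalised measure that most improves R^2;
    # a simplified substitute for significance-test-based stepwise regression
    chosen, best_r2 = [], 0.0
    while True:
        candidates = {}
        for name in measures:
            if name in chosen:
                continue
            cols = [zscore(measures[c]) for c in chosen + [name]]
            _, r2 = ols(cols, y)
            candidates[name] = r2
        if not candidates or max(candidates.values()) - best_r2 < min_gain:
            break
        best = max(candidates, key=candidates.get)
        chosen.append(best)
        best_r2 = candidates[best]
    weights, r2 = ols([zscore(measures[c]) for c in chosen], y)
    return chosen, weights, r2

Applied to this study, measures would map names such as 'Turns', 'Rep', 'Len', and 'Mem' to per-subject arrays, and y would hold the questionnaire scores being predicted.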
If a user asks for instructions to be re884 Measure Function R2 Significance Robot as partner 3.60+0.53∗N (Turns)−0.39∗N (Rep)−0.18∗N (Len) 0.12 Turns: p < 0.01, Rep: p < 0.05, Length: p ≈0.17 Instruction quality 3.66−0.22∗N (Rep) 0.081 Rep: p < 0.05 Task success 4.07+0.20∗N (Mem) 0.058 Mem: p ≈0.07 Feelings 3.63+0.34∗N (Turns)−0.32∗N (Rep) 0.044 Turns: p ≈0.06, Rep: p ≈0.08 Overall 3.73−0.36∗N (Rep)+0.31∗N (Turns) 0.062 Rep: p < 0.05, Turns: p ≈0.06 Emotion change 0.07+0.14∗N (Turns)+0.11∗N (Mem)−0.090∗N (Rep) 0.20 Turns: p < 0.05, Mem: p < 0.01, Rep: p ≈0.17 Table 6: Predictor functions peated, this is a clear indication of a problem in the dialogue; similarly, users who remembered the system’s instructions were equally clearly having a relatively successful interaction. In the current study, increased dialogue length had a positive contribution to user satisfaction; this contrasts with results such as those of Litman and Pan (2002), who found that increased dialogue length was associated with decreased user satisfaction. We propose two possible explanations for this difference. First, the system analysed by Litman and Pan (2002) was an information-seeking dialogue system, in which efficient access to the information is an important criterion. The current system, on the other hand, has the goal of joint task execution, and pure efficiency is a less compelling measure of dialogue quality in this setting. Second, it is possible that the sheer novelty factor of interacting with a fully-embodied humanoid robot affected people’s subjective responses to the system, so that subjects who had longer interactions also enjoyed the experience more. Support for this explanation is provided by the fact that dialogue length was only a significant factor in the more ‘subjective’ parts of the questionnaire, but did not have a significant impact on the users’ judgements about instruction quality or task success. Other studies of human-robot dialogue systems have also had similar results: for example, the subjects in the study described by Sidner et al. (2005) who used a robot that moved while talking reported higher levels of engagement in the interaction, and also tended to have longer conversations with the robot. While the predictor functions give useful insights into the relative contribution of the objective measures to the subjective user satisfaction, the R2 values are generally lower than those found in other PARADISE-style evaluations. For example, Walker et al. (1998) reported an R2 value of 0.38, the values reported by Walker et al. (2000) on the training sets ranged from 0.39 to 0.56, Litman and Pan (2002) reported an R2 value of 0.71, while the R2 values reported by M¨oller et al. (2008) for linear regression models similar to those presented here were between 0.22 and 0.57. The low R2 values from this analysis clearly suggest that, while the factors included in Table 6 did affect users’ opinions—particularly their opinion of the robot as a partner and the change in their reported emotional state—the users’ subjective judgements were also affected by factors other than those captured by the objective measures considered here. In most of the previous PARADISE-style studies, measures addressing the performance of the automated speech-recognition system and other input-processing components were included in the models. For example, the factors listed by M¨oller et al. (2008) include several measures of word error rate and of parsing accuracy. 
However, the scenario that was used in the current study required minimal speech input from the user (see Figure 2), so we did not include any such input-processing factors in our models. Other objective factors that might be relevant for predicting user satisfaction in the current study include a range of non-verbal behaviour from the users. For example, the user’s reaction time to instructions from the robot, the time the users need to adapt to the robot’s movements during handover actions (Huber et al., 2008), or the time taken for the actual assembly of the objects. Also, other measures of the user’s gaze behaviour might be 885 useful: more global measures such as how often the users look at the robot arms or at the objects on the table, as well as more targeted measures examining factors such as the user’s gaze and other behaviour during and after different types of system outputs. In future studies, we will also gather data on these additional non-verbal behaviours, and we expect to find higher correlations with subjective judgements. 7 Conclusions and Future Work We have presented the JAST human-robot dialogue system and described a user study in which the system instructed users to build a series of target objects out of wooden construction toys. This study resulted in a range of objective and subjective measures, which were used to derive performance functions in the style of the PARADISE evaluation framework. Three main factors were found to affect the users’ subjective ratings: longer dialogues and higher recall performance were associated with increased user satisfaction, while dialogues with more repetition requests tended to be associated with lower satisfaction scores. The explained variance of the performance functions was generally low, suggesting that factors other than those measured in this study contributed to the user satisfaction scores; we have suggested several such factors. The finding that longer dialogues were associated with higher user satisfaction disagrees with the results of many previous PARADISE-style evaluation studies. However, it does confirm and extend the results of previous studies specifically addressing interactions between users and embodied agents: as in the previous studies, the users in this case seem to view the agent as a social entity with whom they enjoy having a conversation. A newer version of the JAST system is currently under development and will shortly undergo a user evaluation. This new system will support an extended set of interactions where both agents know the target assembly plan, and will will also incorporate enhanced components for vision, object recognition, and goal inference. When evaluating this new system, we will include similar measures to those described here to enable the evaluations of the two systems to be compared. We will also gather additional objective measures in order to measure their influence on the subjective results. These additional measures will include those mentioned at the end of the preceding section, as well as measures targeted at the revised scenario and the updated system capabilities—for example, an additional dialogue quality measure will assess how often the goal-inference system was able to detect and correctly respond to an error by the user. Acknowledgements This research was supported by the European Commission through the JAST2 (ISTFP6-003747-IP) and INDIGO3 (IST-FP6-045388) projects. Thanks to Pawel Dacka for his help in running the experiment and analysing the data. References M. 
Argyle and J. A. Graham. 1976. The Central Europe experiment: Looking at persons and looking at objects. Environmental Psychology and Nonverbal Behavior, 1(1):6–16. doi:10. 1007/BF01115461. A. J. N. van Breemen. 2005. iCat: Experimenting with animabotics. In Proceedings of the AISB 2005 Creative Robotics Symposium. C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re-evaluating the role of BLEU in machine translation research. In Proceedings of EACL 2006. ACL Anthology E06-1032. M. E. Foster. 2008. Automated metrics that agree with human judgements on generated output for an embodied conversational agent. In Proceedings of INLG 2008. ACL Anthology W08-1113. M. E. Foster, M. Giuliani, A. Isard, C. Matheson, J. Oberlander, and A. Knoll. 2009. Evaluating description and reference strategies in a cooperative human-robot dialogue system. In Proceedings of IJCAI 2009. M. Huber, M. Rickert, A. Knoll, T. Brandt, and S. Glasauer. 2008. Human-robot interaction in handing-over tasks. In Proceedings of IEEE RO-MAN 2008. doi:10.1109/ROMAN.2008. 4600651. C. Y. Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL 2004 Workshop on Text Summarization. ACL Anthology W04-1013. 2http://www.euprojects-jast.net/ 3http://www.ics.forth.gr/indigo/ 886 D. J. Litman and S. Pan. 2002. Designing and evaluating an adaptive spoken dialogue system. User Modeling and User-Adapted Interaction, 12(2–3):111–137. doi:10.1023/A: 1015036910358. S. M¨oller, K.-P. Engelbrecht, and R. Schleicher. 2008. Predicting the quality and usability of spoken dialogue systems. Speech Communication, 50:730–744. doi:10.1016/j.specom. 2008.03.001. D. G. Novick. 1997. What is effectiveness? In Working notes, CHI ’97 Workshop on HCI Research and Practice Agenda Based on Human Needs and Social Responsibility. http://www.cs.utep.edu/novick/ papers/eff.chi.html. A. Ortony, G. L. Clore, and A. Collins. 1988. The Cognitive Structure of Emotions. Cambridge University Press. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL 2002. ACL Anthology P02-1040. M. Rickert, M. E. Foster, M. Giuliani, T. By, G. Panin, and A. Knoll. 2007. Integrating language, vision and action for human robot dialog systems. In Proceedings of HCI International 2007. doi:10.1007/978-3-540-73281-5_ 108. C. L. Sidner, C. Lee, C. D. Kidd, N. Lesh, and C. Rich. 2005. Explorations in engagement for humans and robots. Artificial Intelligence, 166(1–2):140–164. doi:10.1016/j.artint. 2005.03.005. M. Walker, C. Kamm, and D. Litman. 2000. Towards developing general models of usability with PARADISE. Natural Language Engineering, 6(3–4):363–377. M. A. Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, 12:387–416. M. A. Walker, J. Fromer, G. D. Fabbrizio, C. Mestel, and D. Hindle. 1998. What can I say?: Evaluating a spoken language interface to email. In Proceedings of CHI 1998. doi:10.1145/ 274644.274722. M. A. Walker, D. J. Litman, C. A. Kamm, and A. Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. In Proceedings of ACL/EACL 1997. ACL Anthology P971035. M. White, M. E. Foster, J. Oberlander, and A. Brown. 2005. Using facial feedback to enhance turn-taking in a multimodal dialogue system. In Proceedings of HCI International 2005. 887
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Efficient Third-order Dependency Parsers Terry Koo and Michael Collins MIT CSAIL, Cambridge, MA, 02139, USA {maestro,mcollins}@csail.mit.edu Abstract We present algorithms for higher-order dependency parsing that are “third-order” in the sense that they can evaluate substructures containing three dependencies, and “efficient” in the sense that they require only O(n4) time. Importantly, our new parsers can utilize both sibling-style and grandchild-style interactions. We evaluate our parsers on the Penn Treebank and Prague Dependency Treebank, achieving unlabeled attachment scores of 93.04% and 87.38%, respectively. 1 Introduction Dependency grammar has proven to be a very useful syntactic formalism, due in no small part to the development of efficient parsing algorithms (Eisner, 2000; McDonald et al., 2005b; McDonald and Pereira, 2006; Carreras, 2007), which can be leveraged for a wide variety of learning methods, such as feature-rich discriminative models (Lafferty et al., 2001; Collins, 2002; Taskar et al., 2003). These parsing algorithms share an important characteristic: they factor dependency trees into sets of parts that have limited interactions. By exploiting the additional constraints arising from the factorization, maximizations or summations over the set of possible dependency trees can be performed efficiently and exactly. A crucial limitation of factored parsing algorithms is that the associated parts are typically quite small, losing much of the contextual information within the dependency tree. For the purposes of improving parsing performance, it is desirable to increase the size and variety of the parts used by the factorization.1 At the same time, the need for more expressive factorizations 1For examples of how performance varies with the degree of the parser’s factorization see, e.g., McDonald and Pereira (2006, Tables 1 and 2), Carreras (2007, Table 2), Koo et al. (2008, Tables 2 and 4), or Suzuki et al. (2009, Tables 3–6). must be balanced against any resulting increase in the computational cost of the parsing algorithm. Consequently, recent work in dependency parsing has been restricted to applications of secondorder parsers, the most powerful of which (Carreras, 2007) requires O(n4) time and O(n3) space, while being limited to second-order parts. In this paper, we present new third-order parsing algorithms that increase both the size and variety of the parts participating in the factorization, while simultaneously maintaining computational requirements of O(n4) time and O(n3) space. We evaluate our parsers on the Penn WSJ Treebank (Marcus et al., 1993) and Prague Dependency Treebank (Hajiˇc et al., 2001), achieving unlabeled attachment scores of 93.04% and 87.38%. In summary, we make three main contributions: 1. Efficient new third-order parsing algorithms. 2. Empirical evaluations of these parsers. 3. A free distribution of our implementation.2 The remainder of this paper is divided as follows: Sections 2 and 3 give background, Sections 4 and 5 describe our new parsing algorithms, Section 6 discusses related work, Section 7 presents our experimental results, and Section 8 concludes. 
2 Dependency parsing In dependency grammar, syntactic relationships are represented as head-modifier dependencies: directed arcs between a head, which is the more “essential” word in the relationship, and a modifier, which supplements the meaning of the head. For example, Figure 1 contains a dependency between the verb “report” (the head) and its object “sales” (the modifier). A complete analysis of a sentence is given by a dependency tree: a set of dependencies that forms a rooted, directed tree spanning the words of the sentence. Every dependency tree is rooted at a special “*” token, allowing the 2http://groups.csail.mit.edu/nlp/dpo3/ 1 Insiders must report purchases and immediately sales * Figure 1: An example dependency structure. selection of the sentential head to be modeled as if it were a dependency. For a sentence x, we define dependency parsing as a search for the highest-scoring analysis of x: y∗(x) = argmax y∈Y(x) SCORE(x, y) (1) Here, Y(x) is the set of all trees compatible with x and SCORE(x, y) evaluates the event that tree y is the analysis of sentence x. Since the cardinality of Y(x) grows exponentially with the length of the sentence, directly solving Eq. 1 is impractical. A common strategy, and one which forms the focus of this paper, is to factor each dependency tree into small parts, which can be scored in isolation. Factored parsing can be formalized as follows: SCORE(x, y) = X p∈y SCOREPART(x, p) That is, we treat the dependency tree y as a set of parts p, each of which makes a separate contribution to the score of y. For certain factorizations, efficient parsing algorithms exist for solving Eq. 1. We define the order of a part according to the number of dependencies it contains, with analogous terminology for factorizations and parsing algorithms. In the remainder of this paper, we focus on factorizations utilizing the following parts: g g h h h h h m m m m m s s s t dependency sibling grandchild tri-sibling grand-sibling Specifically, Sections 4.1, 4.2, and 4.3 describe parsers that, respectively, factor trees into grandchild parts, grand-sibling parts, and a mixture of grand-sibling and tri-sibling parts. 3 Existing parsing algorithms Our new third-order dependency parsers build on ideas from existing parsing algorithms. In this section, we provide background on two relevant parsers from previous work. (a) + = h h m m e e (b) + = h h m m r r+1 Figure 2: The dynamic-programming structures and derivations of the Eisner (2000) algorithm. Complete spans are depicted as triangles and incomplete spans as trapezoids. For brevity, we elide the symmetric right-headed versions. 3.1 First-order factorization The first type of parser we describe uses a “firstorder” factorization, which decomposes a dependency tree into its individual dependencies. Eisner (2000) introduced a widely-used dynamicprogramming algorithm for first-order parsing; as it is the basis for many parsers, including our new algorithms, we summarize its design here. The Eisner (2000) algorithm is based on two interrelated types of dynamic-programming structures: complete spans, which consist of a headword and its descendents on one side, and incomplete spans, which consist of a dependency and the region between the head and modifier. Formally, we denote a complete span as Ch,e where h and e are the indices of the span’s headword and endpoint. An incomplete span is denoted as Ih,m where h and m are the index of the head and modifier of a dependency. 
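To keep the part inventory above concrete while the span-based machinery is introduced, the following sketch (ours, not from the paper) enumerates dependency, sibling, and grandchild parts of a tree given as an array of head indices, and scores the tree as the sum of its part scores; the third-order parts are obtained by combining these (a grand-sibling, for example, adds g = heads[h] to a sibling part). The representation and function names are illustrative assumptions.

def enumerate_parts(heads):
    # heads[m]: index of the head of token m; heads[0] = -1 for the root "*"
    n = len(heads)
    deps = [(heads[m], m) for m in range(1, n)]                    # first-order parts
    gchs = [(heads[h], h, m) for h, m in deps if heads[h] >= 0]    # grandchild (g, h, m)
    sibs = []                                                      # sibling (h, m, s), s inner
    for h in range(n):
        left = sorted((m for m in range(1, n) if heads[m] == h and m < h), reverse=True)
        right = sorted(m for m in range(1, n) if heads[m] == h and m > h)
        for side in (left, right):                                 # modifiers, inner to outer
            sibs.extend((h, m, s) for s, m in zip(side, side[1:]))
    return deps, sibs, gchs

def factored_score(heads, score_dep, score_sib, score_gch):
    # the factored score of Eq. 1: the tree's score is the sum of its part scores
    deps, sibs, gchs = enumerate_parts(heads)
    return (sum(score_dep(*p) for p in deps)
            + sum(score_sib(*p) for p in sibs)
            + sum(score_gch(*p) for p in gchs))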
Intuitively, a complete span represents a “half-constituent” headed by h, whereas an incomplete span is only a partial half-constituent, since the constituent can be extended by adding more modifiers to m. Each type of span is created by recursively combining two smaller, adjacent spans; the constructions are specified graphically in Figure 2. An incomplete span is constructed from a pair of complete spans, indicating the division of the range [h, m] into constituents headed by h and m. A complete span is created by “completing” an incomplete span with the other half of m’s constituent. The point of concatenation in each construction—m in Figure 2(a) or r in Figure 2(b)—is the split point, a free index that must be enumerated to find the optimal construction. In order to parse a sentence x, it suffices to find optimal constructions for all complete and incomplete spans defined on x. This can be 2 (a) + = h h m m e e (b) + = h h m m s s (c) + = m m s s r r+1 Figure 3: The dynamic-programming structures and derivations of the second-order sibling parser; sibling spans are depicted as boxes. For brevity, we elide the right-headed versions. accomplished by adapting standard chart-parsing techniques (Cocke and Schwartz, 1970; Younger, 1967; Kasami, 1965) to the recursive derivations defined in Figure 2. Since each derivation is defined by two fixed indices (the boundaries of the span) and a third free index (the split point), the parsing algorithm requires O(n3) time and O(n2) space (Eisner, 1996; McAllester, 1999). 3.2 Second-order sibling factorization As remarked by Eisner (1996) and McDonald and Pereira (2006), it is possible to rearrange the dynamic-programming structures to conform to an improved factorization that decomposes each tree into sibling parts—pairs of dependencies with a shared head. Specifically, a sibling part consists of a triple of indices (h, m, s) where (h, m) and (h, s) are dependencies, and where s and m are successive modifiers to the same side of h. In order to parse this factorization, the secondorder parser introduces a third type of dynamicprogramming structure: sibling spans, which represent the region between successive modifiers of some head. Formally, we denote a sibling span as Ss,m where s and m are a pair of modifiers involved in a sibling relationship. Modified versions of sibling spans will play an important role in the new parsing algorithms described in Section 4. Figure 3 provides a graphical specification of the second-order parsing algorithm. Note that incomplete spans are constructed in a new way: the second-order parser combines a smaller incomplete span, representing the next-innermost dependency, with a sibling span that covers the region between the two modifiers. Sibling parts (h, m, s) can thus be obtained from Figure 3(b). Despite the use of second-order parts, each derivation is (a) = + g g h h h m m e e (b) = + g g h h h m m r r+1 (c) = + g g h h h m m e e (d) = + g g h h h m m r r+1 Figure 4: The dynamic-programming structures and derivations of Model 0. For brevity, we elide the right-headed versions. Note that (c) and (d) differ from (a) and (b) only in the position of g. still defined by a span and split point, so the parser requires O(n3) time and O(n2) space. 4 New third-order parsing algorithms In this section we describe our new third-order dependency parsing algorithms. Our overall method is characterized by the augmentation of each span with a “grandparent” index: an index external to the span whose role will be made clear below. 
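As a point of reference for the g-span algorithms that follow, here is a minimal sketch of the first-order algorithm of Section 3.1. It computes only the score of the best tree; backpointer recovery and the single-root constraint are omitted, and the chart layout and names are our own rather than the paper's.

def eisner_first_order(score):
    # score[h][m]: score of the dependency h -> m; token 0 is the root "*"
    n = len(score)
    NEG = float("-inf")
    # C[i][j][0]: complete span over [i, j] headed at i (facing right)
    # C[i][j][1]: complete span over [i, j] headed at j (facing left)
    # I[i][j][0]: incomplete span for the dependency i -> j
    # I[i][j][1]: incomplete span for the dependency j -> i
    C = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    I = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i] = [0.0, 0.0]                   # base case: single-word spans
    for w in range(1, n):                      # span width
        for i in range(n - w):                 # span start
            j = i + w                          # span end
            # incomplete spans: split [i, j] into constituents headed at i and j
            split = max(C[i][r][0] + C[r + 1][j][1] for r in range(i, j))
            I[i][j][0] = split + score[i][j]
            I[i][j][1] = split + score[j][i]
            # complete spans: extend a dependency with the modifier's half-constituent
            C[i][j][0] = max(I[i][m][0] + C[m][j][0] for m in range(i + 1, j + 1))
            C[i][j][1] = max(C[i][m][1] + I[m][j][1] for m in range(i, j))
    return C[0][n - 1][0]                      # best projective tree headed at "*"

Each cell is determined by its two endpoints and a free split point, which is where the cubic time and quadratic space requirements noted above come from.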
This section presents three parsing algorithms based on this idea: Model 0, a second-order parser, and Models 1 and 2, which are third-order parsers. 4.1 Model 0: all grandchildren The first parser, Model 0, factors each dependency tree into a set of grandchild parts—pairs of dependencies connected head-to-tail. Specifically, a grandchild part is a triple of indices (g, h, m) where (g, h) and (h, m) are dependencies.3 In order to parse this factorization, we augment both complete and incomplete spans with grandparent indices; for brevity, we refer to these augmented structures as g-spans. Formally, we denote a complete g-span as Cg h,e, where Ch,e is a normal complete span and g is an index lying outside the range [h, e], with the implication that (g, h) is a dependency. Incomplete g-spans are defined analogously and are denoted as Ig h,m. Figure 4 depicts complete and incomplete gspans and provides a graphical specification of the 3The Carreras (2007) parser also uses grandchild parts but only in restricted cases; see Section 6 for details. 3 OPTIMIZEALLSPANS(x) 1. ∀g, i Cg i,i = 0 ◁base case 2. for w = 1 . . . (n −1) ◁span width 3. for i = 1 . . . (n −w) ◁span start index 4. j = i + w ◁span end index 5. for g < i or g > j ◁grandparent index 6. Ig i,j = max i≤r<j {Cg i,r + Ci j,r+1} + SCOREG(x, g, i, j) 7. Ig j,i = max i≤r<j {Cg j,r+1 + Cj i,r} + SCOREG(x, g, j, i) 8. Cg i,j = max i<m≤j {Ig i,m + Ci m,j} 9. Cg j,i = max i≤m<j {Ig j,m + Cj m,i} 10. endfor 11. endfor 12. endfor Figure 5: A bottom-up chart parser for Model 0. SCOREG is the scoring function for grandchild parts. We use the g-span identities as shorthand for their chart entries (e.g., Ig i,j refers to the entry containing the maximum score of that g-span). Model 0 dynamic-programming algorithm. The algorithm resembles the first-order parser, except that every recursive construction must also set the grandparent indices of the smaller g-spans; fortunately, this can be done deterministically in all cases. For example, Figure 4(a) depicts the decomposition of Cg h,e into an incomplete half and a complete half. The grandparent of the incomplete half is copied from Cg h,e while the grandparent of the complete half is set to h, the head of m as defined by the construction. Clearly, grandchild parts (g, h, m) can be read off of the incomplete g-spans in Figure 4(b,d). Moreover, since each derivation copies the grandparent index g into successively smaller g-spans, grandchild parts will be produced for all grandchildren of g. Model 0 can be parsed by adapting standard top-down or bottom-up chart parsing techniques. For concreteness, Figure 5 provides a pseudocode sketch of a bottom-up chart parser for Model 0; although the sketch omits many details, it suffices for the purposes of illustration. The algorithm progresses from small widths to large in the usual manner, but after defining the endpoints (i, j) there is an additional loop that enumerates all possible grandparents. Since each derivation is defined by three fixed indices (the g-span) and one free index (the split point), the complexity of the algorithm is O(n4) time and O(n3) space. Note that the grandparent indices cause each g(a) = + g g h h h m m e e (b) = + g g h h h m m s s (c) = + h h h m m s s r r+1 Figure 6: The dynamic-programming structures and derivations of Model 1. Right-headed and right-grandparented versions are omitted. span to have non-contiguous structure. For example, in Figure 4(a) the words between g and h will be controlled by some other g-span. 
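For readers who prefer running code to pseudocode, the chart-filling loop of Figure 5 can be transcribed as below. As in the figure, many details are omitted: backpointers would be needed to recover the tree itself, and the root "*" (token 0) has no real grandparent, so this sketch reuses index 0 as a dummy grandparent for root-headed spans and assumes that score_g ignores the grandparent in that case. These conventions are ours, not the paper's.

def model0_chart(n, score_g):
    # score_g(g, h, m): score of the grandchild part (g, h, m)
    NEG = float("-inf")
    # C[g][h][e]: best complete g-span headed at h with endpoint e and grandparent g
    # I[g][h][m]: best incomplete g-span for the dependency h -> m with grandparent g
    C = [[[NEG] * n for _ in range(n)] for _ in range(n)]
    I = [[[NEG] * n for _ in range(n)] for _ in range(n)]
    for g in range(n):
        for i in range(n):
            C[g][i][i] = 0.0                                  # base case (line 1 of Figure 5)
    for w in range(1, n):                                     # span width
        for i in range(n - w):                                # span start
            j = i + w                                         # span end
            # grandparent indices outside [i, j]; index 0 doubles as the
            # dummy grandparent of spans headed at the root
            grandparents = list(range(i)) + list(range(j + 1, n))
            if i == 0:
                grandparents.append(0)
            for g in grandparents:
                I[g][i][j] = max(C[g][i][r] + C[i][j][r + 1] for r in range(i, j)) + score_g(g, i, j)
                I[g][j][i] = max(C[g][j][r + 1] + C[j][i][r] for r in range(i, j)) + score_g(g, j, i)
                C[g][i][j] = max(I[g][i][m] + C[i][m][j] for m in range(i + 1, j + 1))
                C[g][j][i] = max(I[g][j][m] + C[j][m][i] for m in range(i, j))
    # left-headed spans that would attach the root as a modifier are computed
    # above but never contribute to the value returned here
    return C[0][0][n - 1]                                     # best tree rooted at "*"

The four nested loops (width, start index, grandparent, split point) account for the quartic time and cubic space requirements stated in the text.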
Due to these discontinuities, the correctness of the Model 0 dynamic-programming algorithm may not be immediately obvious. While a full proof of correctness is beyond the scope of this paper, we note that each structure on the right-hand side of Figure 4 lies completely within the structure on the left-hand side. This nesting of structures implies, in turn, that the usual properties required to ensure the correctness of dynamic programming hold. 4.2 Model 1: all grand-siblings We now describe our first third-order parsing algorithm. Model 1 decomposes each tree into a set of grand-sibling parts—combinations of sibling parts and grandchild parts. Specifically, a grand-sibling is a 4-tuple of indices (g, h, m, s) where (h, m, s) is a sibling part and (g, h, m) and (g, h, s) are grandchild parts. For example, in Figure 1, the words “must,” “report,” “sales,” and “immediately” form a grand-sibling part. In order to parse this factorization, we introduce sibling g-spans Sh m,s, which are composed of a normal sibling span Sm,s and an external index h, with the implication that (h, m, s) forms a valid sibling part. Figure 6 provides a graphical specification of the dynamic-programming algorithm for Model 1. The overall structure of the algorithm resembles the second-order sibling parser, with the addition of grandparent indices; as in Model 0, the grandparent indices can be set deterministically in all cases. Note that the sibling g-spans are crucial: they allow grand-sibling parts (g, h, m, s) to be read off of Figure 6(b), while simultaneously propagating grandparent indices to smaller g-spans. 4 (a) = + g g h h h m m e e (b) = g h h m m s (c) = + h h h m m s s s t (d) = + h h h m m s s r r+1 Figure 7: The dynamic-programming structures and derivations of Model 2. Right-headed and right-grandparented versions are omitted. Like Model 0, Model 1 can be parsed via adaptations of standard chart-parsing techniques; we omit the details for brevity. Despite the move to third-order parts, each derivation is still defined by a g-span and a split point, so that parsing requires only O(n4) time and O(n3) space. 4.3 Model 2: grand-siblings and tri-siblings Higher-order parsing algorithms have been proposed which extend the second-order sibling factorization to parts containing multiple siblings (McDonald and Pereira, 2006, also see Section 6 for discussion). In this section, we show how our g-span-based techniques can be combined with a third-order sibling parser, resulting in a parser that captures both grand-sibling parts and tri-sibling parts—4-tuples of indices (h, m, s, t) such that both (h, m, s) and (h, s, t) are sibling parts. In order to parse this factorization, we introduce a new type of dynamic-programming structure: sibling-augmented spans, or s-spans. Formally, we denote an incomplete s-span as Ih,m,s where Ih,m is a normal incomplete span and s is an index lying in the strict interior of the range [h, m], such that (h, m, s) forms a valid sibling part. Figure 7 provides a graphical specification of the Model 2 parsing algorithm. An incomplete s-span is constructed by combining a smaller incomplete s-span, representing the next-innermost pair of modifiers, with a sibling g-span, covering the region between the outer two modifiers. As in Model 1, sibling g-spans are crucial for propagating grandparent indices, while allowing the recovery of tri-sibling parts (h, m, s, t). 
Figure 7(b) shows how an incomplete s-span can be converted into an incomplete g-span by exchanging the internal sibling index for an external grandparent index; in the process, grand-sibling parts (g, h, m, s) are enumerated. Since every derivation is defined by an augmented span and a split point, Model 2 can be parsed in O(n4) time and O(n3) space. It should be noted that unlike Model 1, Model 2 produces grand-sibling parts only for the outermost pair of grandchildren,4 similar to the behavior of the Carreras (2007) parser. In fact, the resemblance is more than passing, as Model 2 can emulate the Carreras (2007) algorithm by “demoting” each third-order part into a second-order part: SCOREGS(x, g, h, m, s) = SCOREG(x, g, h, m) SCORETS(x, h, m, s, t) = SCORES(x, h, m, s) where SCOREG, SCORES, SCOREGS and SCORETS are the scoring functions for grandchildren, siblings, grand-siblings and tri-siblings, respectively. The emulated version has the same computational complexity as the original, so there is no practical reason to prefer it over the original. Nevertheless, the relationship illustrated above highlights the efficiency of our approach: we are able to recover third-order parts in place of second-order parts, at no additional cost. 4.4 Discussion The technique of grandparent-index augmentation has proven fruitful, as it allows us to parse expressive third-order factorizations while retaining an efficient O(n4) runtime. In fact, our thirdorder parsing algorithms are “optimally” efficient in an asymptotic sense. Since each third-order part is composed of four separate indices, there are Θ(n4) distinct parts. Any third-order parsing algorithm must at least consider the score of each part, hence third-order parsing is Ω(n4) and it follows that the asymptotic complexity of Models 1 and 2 cannot be improved. The key to the efficiency of our approach is a fundamental asymmetry in the structure of a directed tree: a head can have any number of modifiers, while a modifier always has exactly one head. Factorizations like that of Carreras (2007) obtain grandchild parts by augmenting spans with the indices of modifiers, leading to limitations on 4The reason for the restriction is that in Model 2, grandsiblings can only be derived via Figure 7(b), which does not recursively copy the grandparent index for reuse in smaller g-spans as Model 1 does in Figure 6(b). 5 the grandchildren that can participate in the factorization. Our method, by “inverting” the modifier indices into grandparent indices, exploits the structural asymmetry. As a final note, the parsing algorithms described in this section fall into the category of projective dependency parsers, which forbid crossing dependencies. If crossing dependencies are allowed, it is possible to parse a first-order factorization by finding the maximum directed spanning tree (Chu and Liu, 1965; Edmonds, 1967; McDonald et al., 2005b). Unfortunately, designing efficient higherorder non-projective parsers is likely to be challenging, based on recent hardness results (McDonald and Pereira, 2006; McDonald and Satta, 2007). 5 Extensions We briefly outline a few extensions to our algorithms; we hope to explore these in future work. 5.1 Probabilistic inference Many statistical modeling techniques are based on partition functions and marginals—summations over the set of possible trees Y(x). Straightforward adaptations of the inside-outside algorithm (Baker, 1979) to our dynamic-programming structures would suffice to compute these quantities. 
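As a small illustration of the adaptation mentioned above (ours, not the paper's), the first-order chart from the earlier sketch becomes an inside algorithm when every maximisation over split points is replaced by a log-sum-exp; the same substitution applies to the g-span charts of Models 0–2. The value at the root is then the log partition function log Z(x), and an analogous outward pass would yield marginals. As before, the single-root constraint is not enforced here.

import math

def logsumexp(values):
    values = list(values)
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def eisner_log_partition(score):
    # score[h][m]: log-score of the dependency h -> m; token 0 is the root "*"
    n = len(score)
    NEG = float("-inf")
    C = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    I = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i] = [0.0, 0.0]                  # log 1 for single-word spans
    for w in range(1, n):
        for i in range(n - w):
            j = i + w
            split = logsumexp(C[i][r][0] + C[r + 1][j][1] for r in range(i, j))
            I[i][j][0] = split + score[i][j]
            I[i][j][1] = split + score[j][i]
            C[i][j][0] = logsumexp(I[i][m][0] + C[m][j][0] for m in range(i + 1, j + 1))
            C[i][j][1] = logsumexp(C[i][m][1] + I[m][j][1] for m in range(i, j))
    return C[0][n - 1][0]                     # log Z(x) over projective trees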
5.2 Labeled parsing Our parsers are easily extended to labeled dependencies. Direct integration of labels into Models 1 and 2 would result in third-order parts composed of three labeled dependencies, at the cost of increasing the time and space complexities by factors of O(L3) and O(L2), respectively, where L bounds the number of labels per dependency. 5.3 Word senses If each word in x has a set of possible “senses,” our parsers can be modified to recover the best joint assignment of syntax and senses for x, by adapting methods in Eisner (2000). Complexity would increase by factors of O(S4) time and O(S3) space, where S bounds the number of senses per word. 5.4 Increased context If more vertical context is desired, the dynamicprogramming structures can be extended with additional ancestor indices, resulting in a “spine” of ancestors above each span. Each additional ancestor lengthens the vertical scope of the factorization (e.g., from grand-siblings to “great-grandsiblings”), while increasing complexity by a factor of O(n). Horizontal context can also be increased by adding internal sibling indices; each additional sibling widens the scope of the factorization (e.g., from grand-siblings to “grand-tri-siblings”), while increasing complexity by a factor of O(n). 6 Related work Our method augments each span with the index of the head that governs that span, in a manner superficially similar to parent annotation in CFGs (Johnson, 1998). However, parent annotation is a grammar transformation that is independent of any particular sentence, whereas our method annotates spans with indices into the current sentence. These indices allow the use of arbitrary features predicated on the position of the grandparent (e.g., word identity, POS tag, contextual POS tags) without affecting the asymptotic complexity of the parsing algorithm. Efficiently encoding this kind of information into a sentence-independent grammar transformation would be challenging at best. Eisner (2000) defines dependency parsing models where each word has a set of possible “senses” and the parser recovers the best joint assignment of syntax and senses. Our new parsing algorithms could be implemented by defining the “sense” of each word as the index of its head. However, when parsing with senses, the complexity of the Eisner (2000) parser increases by factors of O(S3) time and O(S2) space (ibid., Section 4.2). Since each word has n potential heads, a direct application of the word-sense parser leads to time and space complexities of O(n6) and O(n4), respectively, in contrast to our O(n4) and O(n3).5 Eisner (2000) also uses head automata to score or recognize the dependents of each head. An interesting question is whether these automata could be coerced into modeling the grandparent indices used in our parsing algorithms. However, note that the head automata are defined in a sentenceindependent manner, with two automata per word in the vocabulary (ibid., Section 2). The automata are thus analogous to the rules of a CFG and at5In brief, the reason for the inefficiency is that the wordsense parser is unable to exploit certain constraints, such as the fact that the endpoints of a sibling g-span must have the same head. The word-sense parser would needlessly enumerate all possible pairs of heads in this case. 6 tempts to use them to model grandparent indices would face difficulties similar to those already described for grammar transformations in CFGs. 
It should be noted that third-order parsers have previously been proposed by McDonald and Pereira (2006), who remarked that their secondorder sibling parser (see Figure 3) could easily be extended to capture m > 1 successive modifiers in O(nm+1) time (ibid., Section 2.2). To our knowledge, however, Models 1 and 2 are the first third-order parsing algorithms capable of modeling grandchild parts. In our experiments, we find that grandchild interactions make important contributions to parsing performance (see Table 3). Carreras (2007) presents a second-order parser that can score both sibling and grandchild parts, with complexities of O(n4) time and O(n3) space. An important limitation of the parser’s factorization is that it only defines grandchild parts for outermost grandchildren: (g, h, m) is scored only when m is the outermost modifier of h in some direction. Note that Models 1 and 2 have the same complexity as Carreras (2007), but strictly greater expressiveness: for each sibling or grandchild part used in the Carreras (2007) factorization, Model 1 defines an enclosing grand-sibling, while Model 2 defines an enclosing tri-sibling or grand-sibling. The factored parsing approach we focus on is sometimes referred to as “graph-based” parsing; a popular alternative is “transition-based” parsing, in which trees are constructed by making a series of incremental decisions (Yamada and Matsumoto, 2003; Attardi, 2006; Nivre et al., 2006; McDonald and Nivre, 2007). Transition-based parsers do not impose factorizations, so they can define arbitrary features on the tree as it is being built. As a result, however, they rely on greedy or approximate search algorithms to solve Eq. 1. 7 Parsing experiments In order to evaluate the effectiveness of our parsers in practice, we apply them to the Penn WSJ Treebank (Marcus et al., 1993) and the Prague Dependency Treebank (Hajiˇc et al., 2001; Hajiˇc, 1998).6 We use standard training, validation, and test splits7 to facilitate comparisons. Accuracy is 6For English, we extracted dependencies using Joakim Nivre’s Penn2Malt tool with standard head rules (Yamada and Matsumoto, 2003); for Czech, we “projectivized” the training data by finding best-match projective trees. 7For Czech, the PDT has a predefined split; for English, we split the Sections as: 2–21 training, 22 validation, 23 test. measured with unlabeled attachment score (UAS): the percentage of words with the correct head.8 7.1 Features for third-order parsing Our parsing algorithms can be applied to scores originating from any source, but in our experiments we chose to use the framework of structured linear models, deriving our scores as: SCOREPART(x, p) = w · f(x, p) Here, f is a feature-vector mapping and w is a vector of associated parameters. Following standard practice for higher-order dependency parsing (McDonald and Pereira, 2006; Carreras, 2007), Models 1 and 2 evaluate not only the relevant third-order parts, but also the lower-order parts that are implicit in their third-order factorizations. 
For example, Model 1 defines feature mappings for dependencies, siblings, grandchildren, and grand-siblings, so that the score of a dependency parse is given by: MODEL1SCORE(x, y) = X (h,m)∈y wdep · fdep(x, h, m) X (h,m,s)∈y wsib · fsib(x, h, m, s) X (g,h,m)∈y wgch · fgch(x, g, h, m) X (g,h,m,s)∈y wgsib · fgsib(x, g, h, m, s) Above, y is simultaneously decomposed into several different types of parts; trivial modifications to the Model 1 parser allow it to evaluate all of the necessary parts in an interleaved fashion. A similar treatment of Model 2 yields five feature mappings: the four above plus ftsib(x, h, m, s, t), which represents tri-sibling parts. The lower-order feature mappings fdep, fsib, and fgch are based on feature sets from previous work (McDonald et al., 2005a; McDonald and Pereira, 2006; Carreras, 2007), to which we added lexicalized versions of several features. For example, fdep contains lexicalized “in-between” features that depend on the head and modifier words as well as a word lying in between the two; in contrast, previous work has generally defined in-between features for POS tags only. As another example, our 8As in previous work, English evaluation ignores any token whose gold-standard POS tag is one of {‘‘ ’’ : , .}. 7 second-order mappings fsib and fgch define lexical trigram features, while previous work has generally used POS trigrams only. Our third-order feature mappings fgsib and ftsib consist of four types of features. First, we define 4-gram features that characterize the four relevant indices using words and POS tags; examples include POS 4-grams and mixed 4-grams with one word and three POS tags. Second, we define 4gram context features consisting of POS 4-grams augmented with adjacent POS tags: for example, fgsib(x, g, h, m, s) includes POS 7-grams for the tags at positions (g, h, m, s, g+1, h+1, m+1). Third, we define backed-off features that track bigram and trigram interactions which are absent in the lower-order feature mappings: for example, ftsib(x, h, m, s, t) contains features predicated on the trigram (m, s, t) and the bigram (m, t), neither of which exist in any lower-order part. Fourth, noting that coordinations are typically annotated as grand-siblings (e.g., “report purchases and sales” in Figure 1), we define coordination features for certain grand-sibling parts. For example, fgsib(x, g, h, m, s) contains features examining the implicit head-modifier relationship (g, m) that are only activated when the POS tag of s is a coordinating conjunction. Finally, we make two brief remarks regarding the use of POS tags. First, we assume that input sentences have been automatically tagged in a preprocessing step.9 Second, for any feature that depends on POS tags, we include two copies of the feature: one using normal POS tags and another using coarsened versions10 of the POS tags. 7.2 Averaged perceptron training There are a wide variety of parameter estimation methods for structured linear models, such as log-linear models (Lafferty et al., 2001) and max-margin models (Taskar et al., 2003). We chose the averaged structured perceptron (Freund and Schapire, 1999; Collins, 2002) as it combines highly competitive performance with fast training times, typically converging in 5–10 iterations. We train each parser for 10 iterations and select pa9For Czech, the PDT provides automatic tags; for English, we used MXPOST (Ratnaparkhi, 1996) to tag validation and test data, with 10-fold cross-validation on the training set. 
Note that the reliance on POS-tagged input can be relaxed slightly by treating POS tags as word senses; see Section 5.3 and McDonald (2006, Table 6.1). 10For Czech, we used the first character of the tag; for English, we used the first two characters, except PRP and PRP$. Beam Pass Orac Acc1 Acc2 Time1 Time2 0.0001 26.5 99.92 93.49 93.49 49.6m 73.5m 0.001 16.7 99.72 93.37 93.29 25.9m 24.2m 0.01 9.1 99.19 93.26 93.16 6.7m 7.9m Table 1: Effect of the marginal-probability beam on English parsing. For each beam value, parsers were trained on the English training set and evaluated on the English validation set; the same beam value was applied to both training and validation data. Pass = %dependencies surviving the beam in training data, Orac = maximum achievable UAS on validation data, Acc1/Acc2 = UAS of Models 1/2 on validation data, and Time1/Time2 = minutes per perceptron training iteration for Models 1/2, averaged over all 10 iterations. For perspective, the English training set has a total of 39,832 sentences and 950,028 words. A beam of 0.0001 was used in all experiments outside this table. rameters from the iteration that achieves the best score on the validation set. 7.3 Coarse-to-fine pruning In order to decrease training times, we follow Carreras et al. (2008) and eliminate unlikely dependencies using a form of coarse-to-fine pruning (Charniak and Johnson, 2005; Petrov and Klein, 2007). In brief, we train a log-linear first-order parser11 and for every sentence x in training, validation, and test data we compute the marginal probability P(h, m | x) of each dependency. Our parsers are then modified to ignore any dependency (h, m) whose marginal probability is below 0.0001×maxh′ P(h′, m | x). Table 1 provides information on the behavior of the pruning method. 7.4 Main results Table 2 lists the accuracy of Models 1 and 2 on the English and Czech test sets, together with some relevant results from related work.12 The models marked “†” are not directly comparable to our work as they depend on additional sources of information that our models are trained without— unlabeled data in the case of Koo et al. (2008) and 11For English, we generate marginals using a projective parser (Baker, 1979; Eisner, 2000); for Czech, we generate marginals using a non-projective parser (Smith and Smith, 2007; McDonald and Satta, 2007; Koo et al., 2007). Parameters for these models are obtained by running exponentiated gradient training for 10 iterations (Collins et al., 2008). 12Model 0 was not tested as its factorization is a strict subset of the factorization of Model 1. 8 Parser Eng Cze McDonald et al. (2005a,2005b) 90.9 84.4 McDonald and Pereira (2006) 91.5 85.2 Koo et al. (2008), standard 92.02 86.13 Model 1 93.04 87.38 Model 2 92.93 87.37 Koo et al. (2008), semi-sup† 93.16 87.13 Suzuki et al. (2009)† 93.79 88.05 Carreras et al. (2008)† 93.5 Table 2: UAS of Models 1 and 2 on test data, with relevant results from related work. Note that Koo et al. (2008) is listed with standard features and semi-supervised features. †: see main text. Suzuki et al. (2009) and phrase-structure annotations in the case of Carreras et al. (2008). All three of the “†” models are based on versions of the Carreras (2007) parser, so modifying these methods to work with our new third-order parsing algorithms would be an interesting topic for future research. For example, Models 1 and 2 obtain results comparable to the semi-supervised parsers of Koo et al. 
(2008), and additive gains might be realized by applying their cluster-based feature sets to our enriched factorizations. 7.5 Ablation studies In order to better understand the contributions of the various feature types, we ran additional ablation experiments; the results are listed in Table 3, in addition to the scores of Model 0 and the emulated Carreras (2007) parser (see Section 4.3). Interestingly, grandchild interactions appear to provide important information: for example, when Model 2 is used without grandchild-based features (“Model 2, no-G” in Table 3), its accuracy suffers noticeably. In addition, it seems that grandchild interactions are particularly useful in Czech, while sibling interactions are less important: consider that Model 0, a second-order grandchild parser with no sibling-based features, can easily outperform “Model 2, no-G,” a third-order sibling parser with no grandchild-based features. 8 Conclusion We have presented new parsing algorithms that are capable of efficiently parsing third-order factorizations, including both grandchild and sibling interactions. Due to space restrictions, we have been necessarily brief at some points in this paper; some additional details can be found in Koo (2010). Parser Eng Cze Model 0 93.07 87.39 Carreras (2007) emulation 93.14 87.25 Model 1 93.49 87.64 Model 1, no-3rd 93.17 87.57 Model 2 93.49 87.46 Model 2, no-3rd 93.20 87.43 Model 2, no-G 92.92 86.76 Table 3: UAS for modified versions of our parsers on validation data. The term no-3rd indicates a parser that was trained and tested with the thirdorder feature mappings fgsib and ftsib deactivated, though lower-order features were retained; note that “Model 2, no-3rd” is not identical to the Carreras (2007) parser as it defines grandchild parts for the pair of grandchildren. The term no-G indicates a parser that was trained and tested with the grandchild-based feature mappings fgch and fgsib deactivated; note that “Model 2, no-G” emulates the third-order sibling parser proposed by McDonald and Pereira (2006). There are several possibilities for further research involving our third-order parsing algorithms. One idea would be to consider extensions and modifications of our parsers, some of which have been suggested in Sections 5 and 7.4. A second area for future work lies in applications of dependency parsing. While we have evaluated our new algorithms on standard parsing benchmarks, there are a wide variety of tasks that may benefit from the extended context offered by our thirdorder factorizations; for example, the 4-gram substructures enabled by our approach may be useful for dependency-based language modeling in machine translation (Shen et al., 2008). Finally, in the hopes that others in the NLP community may find our parsers useful, we provide a free distribution of our implementation.2 Acknowledgments We would like to thank the anonymous reviewers for their helpful comments and suggestions. We also thank Regina Barzilay and Alexander Rush for their much-appreciated input during the writing process. The authors gratefully acknowledge the following sources of support: Terry Koo and Michael Collins were both funded by a DARPA subcontract under SRI (#27-001343), and Michael Collins was additionally supported by NTT (Agmt. dtd. 06/21/98). 9 References Giuseppe Attardi. 2006. Experiments with a Multilanguage Non-Projective Dependency Parser. In Proceedings of the 10th CoNLL, pages 166–170. Association for Computational Linguistics. James Baker. 1979. 
Trainable Grammars for Speech Recognition. In Proceedings of the 97th meeting of the Acoustical Society of America. Xavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, Dynamic Programming, and the Perceptron for Efficient, Feature-rich Parsing. In Proceedings of the 12th CoNLL, pages 9–16. Association for Computational Linguistics. Xavier Carreras. 2007. Experiments with a HigherOrder Projective Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLPCoNLL, pages 957–961. Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine N-best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the 43rd ACL. Y.J. Chu and T.H. Liu. 1965. On the Shortest Arborescence of a Directed Graph. Science Sinica, 14:1396–1400. John Cocke and Jacob T. Schwartz. 1970. Programming Languages and Their Compilers: Preliminary Notes. Technical report, New York University. Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. 2008. Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks. Journal of Machine Learning Research, 9:1775–1822, Aug. Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of the 7th EMNLP, pages 1–8. Association for Computational Linguistics. Jack R. Edmonds. 1967. Optimum Branchings. Journal of Research of the National Bureau of Standards, 71B:233–240. Jason Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. In Proceedings of the 16th COLING, pages 340–345. Association for Computational Linguistics. Jason Eisner. 2000. Bilexical Grammars and Their Cubic-Time Parsing Algorithms. In Harry Bunt and Anton Nijholt, editors, Advances in Probabilistic and Other Parsing Technologies, pages 29–62. Kluwer Academic Publishers. Yoav Freund and Robert E. Schapire. 1999. Large Margin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277–296. Jan Hajiˇc, Eva Hajiˇcov´a, Petr Pajas, Jarmila Panevova, and Petr Sgall. 2001. The Prague Dependency Treebank 1.0, LDC No. LDC2001T10. Linguistics Data Consortium. Jan Hajiˇc. 1998. Building a Syntactically Annotated Corpus: The Prague Dependency Treebank. In Eva Hajiˇcov´a, editor, Issues of Valency and Meaning. Studies in Honor of Jarmila Panevov´a, pages 12–19. Mark Johnson. 1998. PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24(4):613–632. Tadao Kasami. 1965. An Efficient Recognition and Syntax-analysis Algorithm for Context-free Languages. Technical Report AFCRL-65-758, Air Force Cambridge Research Lab. Terry Koo, Amir Globerson, Xavier Carreras, and Michael Collins. 2007. Structured Prediction Models via the Matrix-Tree Theorem. In Proceedings of EMNLP-CoNLL, pages 141–150. Association for Computational Linguistics. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of the 46th ACL, pages 595–603. Association for Computational Linguistics. Terry Koo. 2010. Advances in Discriminative Dependency Parsing. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, June. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the 18th ICML, pages 282–289. Morgan Kaufmann. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. 
Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. David A. McAllester. 1999. On the Complexity Analysis of Static Analyses. In Proceedings of the 6th Static Analysis Symposium, pages 312–329. Springer-Verlag. Ryan McDonald and Joakim Nivre. 2007. Characterizing the Errors of Data-Driven Dependency Parsers. In Proceedings of EMNLP-CoNLL, pages 122–131. Association for Computational Linguistics. Ryan McDonald and Fernando Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In Proceedings of the 11th EACL, pages 81–88. Association for Computational Linguistics. Ryan McDonald and Giorgio Satta. 2007. On the Complexity of Non-Projective Data-Driven Dependency Parsing. In Proceedings of IWPT. 10 Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online Large-Margin Training of Dependency Parsers. In Proceedings of the 43rd ACL, pages 91–98. Association for Computational Linguistics. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-Projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of HLT-EMNLP, pages 523–530. Association for Computational Linguistics. Ryan McDonald. 2006. Discriminative Training and Spanning Tree Algorithms for Dependency Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA, USA, July. Joakim Nivre, Johan Hall, Jens Nilsson, G¨uls¸en Eryiˇgit, and Svetoslav Marinov. 2006. Labeled Pseudo-Projective Dependency Parsing with Support Vector Machines. In Proceedings of the 10th CoNLL, pages 221–225. Association for Computational Linguistics. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of HLTNAACL, pages 404–411. Association for Computational Linguistics. Adwait Ratnaparkhi. 1996. A Maximum Entropy Model for Part-Of-Speech Tagging. In Proceedings of the 1st EMNLP, pages 133–142. Association for Computational Linguistics. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model. In Proceedings of the 46th ACL, pages 577– 585. Association for Computational Linguistics. David A. Smith and Noah A. Smith. 2007. Probabilistic Models of Nonprojective Dependency Trees. In Proceedings of EMNLP-CoNLL, pages 132–140. Association for Computational Linguistics. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing. In Proceedings of EMNLP, pages 551–560. Association for Computational Linguistics. Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max margin markov networks. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Sch¨olkopf, editors, NIPS. MIT Press. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In Proceedings of the 8th IWPT, pages 195– 206. Association for Computational Linguistics. David H. Younger. 1967. Recognition and parsing of context-free languages in time n3. Information and Control, 10(2):189–208. 11
2010
1
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 88–97, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics The Human Language Project: Building a Universal Corpus of the World’s Languages Steven Abney University of Michigan [email protected] Steven Bird University of Melbourne and University of Pennsylvania [email protected] Abstract We present a grand challenge to build a corpus that will include all of the world’s languages, in a consistent structure that permits large-scale cross-linguistic processing, enabling the study of universal linguistics. The focal data types, bilingual texts and lexicons, relate each language to one of a set of reference languages. We propose that the ability to train systems to translate into and out of a given language be the yardstick for determining when we have successfully captured a language. We call on the computational linguistics community to begin work on this Universal Corpus, pursuing the many strands of activity described here, as their contribution to the global effort to document the world’s linguistic heritage before more languages fall silent. 1 Introduction The grand aim of linguistics is the construction of a universal theory of human language. To a computational linguist, it seems obvious that the first step is to collect significant amounts of primary data for a large variety of languages. Ideally, we would like a complete digitization of every human language: a Universal Corpus. If we are ever to construct such a corpus, it must be now. With the current rate of language loss, we have only a small window of opportunity before the data is gone forever. Linguistics may be unique among the sciences in the crisis it faces. The next generation will forgive us for the most egregious shortcomings in theory construction and technology development, but they will not forgive us if we fail to preserve vanishing primary language data in a form that enables future research. The scope of the task is enormous. At present, we have non-negligible quantities of machinereadable data for only about 20–30 of the world’s 6,900 languages (Maxwell and Hughes, 2006). Linguistics as a field is awake to the crisis. There has been a tremendous upsurge of interest in documentary linguistics, the field concerned with the the “creation, annotation, preservation, and dissemination of transparent records of a language” (Woodbury, 2010). However, documentary linguistics alone is not equal to the task. For example, no million-word machine-readable corpus exists for any endangered language, even though such a quantity would be necessary for wide-ranging investigation of the language once no speakers are available. The chances of constructing large-scale resources will be greatly improved if computational linguists contribute their expertise. This collaboration between linguists and computational linguists will extend beyond the construction of the Universal Corpus to its exploitation for both theoretical and technological ends. We envisage a new paradigm of universal linguistics, in which grammars of individual languages are built from the ground up, combining expert manual effort with the power tools of probabilistic language models and grammatical inference. A universal grammar captures redundancies which exist across languages, constituting a “universal linguistic prior,” and enabling us to identify the distinctive properties of specific languages and families. 
The linguistic prior and regularities due to common descent enable a new economy of scale for technology development: cross-linguistic triangulation can improve performance while reducing per-language data requirements. Our aim in the present paper is to move beyond generalities to a concrete plan of attack, and to challenge the field to a communal effort to create a Universal Corpus of the world’s languages, in consistent machine-readable format, permitting large-scale cross-linguistic processing. 88 2 Human Language Project 2.1 Aims and scope Although language endangerment provides urgency, the corpus is not intended primarily as a Noah’s Ark for languages. The aims go beyond the current crisis: we wish to support crosslinguistic research and technology development at the largest scale. There are existing collections that contain multiple languages, but it is rare to have consistent formats and annotation across languages, and few such datasets contain more than a dozen or so languages. If we think of a multi-lingual corpus as consisting of an array of items, with columns representing languages and rows representing resource types, the usual focus is on “vertical” processing. Our particular concern, by contrast, is “horizontal” processing that cuts indiscriminately across languages. Hence we require an unusual degree of consistency across languages. The kind of processing we wish to enable is much like the large-scale systematic research that motivated the Human Genome Project. One of the greatest impacts of having the sequence may well be in enabling an entirely new approach to biological research. In the past, researchers studied one or a few genes at a time. With whole-genome sequences . . . they can approach questions systematically and on a grand scale. They can study . . . how tens of thousands of genes and proteins work together in interconnected networks to orchestrate the chemistry of life. (Human Genome Project, 2007) We wish to make it possible to investigate human language equally systematically and on an equally grand scale: a Human Linguome Project, as it were, though we have chosen the “Human Language Project” as a more inviting title for the undertaking. The product is a Universal Corpus,1 in two senses of universal: in the sense of including (ultimately) all the world’s languages, and in the sense of enabling software and processing methods that are language-universal. However, we do not aim for a collection that is universal in the sense of encompassing all language documentation efforts. Our goal is the construction of a specific resource, albeit a very large 1http://universalcorpus.org/ resource. We contrast the proposed effort with general efforts to develop open resources, standards, and best practices. We do not aim to be allinclusive. The project does require large-scale collaboration, and a task definition that is simple and compelling enough to achieve buy-in from a large number of data providers. But we do not need and do not attempt to create consensus across the entire community. (Although one can hope that what proves successful for a project of this scale will provide a good foundation for future standards.) Moreover, we do not aim to collect data merely in the vague hope that it will prove useful. Although we strive for maximum generality, we also propose a specific driving “use case,” namely, machine translation (MT), (Hutchins and Somers, 1992; Koehn, 2010). 
The corpus provides a testing ground for the development of MT system-construction methods that are dramatically “leaner” in their resource requirements, and which take advantage of cross-linguistic bootstrapping. The large engineering question is how one can turn the size of the task—constructing MT systems for all the world’s languages simultaneously—to one’s advantage, and thereby consume dramatically less data per language. The choice of MT as the use case is also driven by scientific considerations. To explain, we require a bit of preamble. We aim for a digitization of each human language. What exactly does it mean to digitize an entire language? It is natural to think in terms of replicating the body of resources available for well-documented languages, and the pre-eminent resource for any language is a treebank. Producing a treebank involves a staggering amount of manual effort. It is also notoriously difficult to obtain agreement about how parse trees should be defined in one language, much less in many languages simultaneously. The idea of producing treebanks for 6,900 languages is quixotic, to put it mildly. But is a treebank actually necessary? Let us suppose that the purpose of a parse tree is to mediate interpretation. A treebank, arguably, represents a theoretical hypothesis about how interpretations could be constructed; the primary data is actually the interpretations themselves. This suggests that we annotate sentences with representations of meanings instead of syntactic structures. Now that seems to take us out of the frying pan into the fire. If obtaining consen89 sus on parse trees is difficult, obtaining consensus on meaning representations is impossible. However, if the language under consideration is anything other than English, then a translation into English (or some other reference language) is for most purposes a perfectly adequate meaning representation. That is, we view machine translation as an approximation to language understanding. Here is another way to put it. One measure of adequacy of a language digitization is the ability of a human—already fluent in a reference language—to acquire fluency in the digitized language using only archived material. Now it would be even better if we could use a language digitization to construct an artificial speaker of the language. Importantly, we do not need to solve the AI problem: the speaker need not decide what to say, only how to translate from meanings to sentences of the language, and from sentences back to meanings. Taking sentences in a reference language as the meaning representation, we arrive back at machine translation as the measure of success. In short, we have successfully captured a language if we can translate into and out of the language. The key resource that should be built for each language, then, is a collection of primary texts with translations into a reference language. “Primary text” includes both written documents and transcriptions of recordings. Large volumes of primary texts will be useful even without translation for such tasks as language modeling and unsupervised learning of morphology. Thus, we anticipate that the corpus will have the usual “pyramidal” structure, starting from a base layer of unannotated text, some portion of which is translated into a reference language at the document level to make the next layer. Note that, for maximally authentic primary texts, we assume the direction of translation will normally be from primary text to reference language, not the other way around. 
Another layer of the corpus consists of sentence and word alignments, required for training and evaluating machine translation systems, and for extracting bilingual lexicons. Curating such annotations is a more specialized task than translation, and so we expect it will only be done for a subset of the translated texts. In the last and smallest layer, morphology is annotated. This supports the development of morphological analyzers, to preprocess primary texts to identify morpheme boundaries and recognize allomorphs, reducing the amount of data required for training an MT system. This most-refined target annotation corresponds to the interlinear glossed texts that are the de facto standard of annotation in the documentary linguistics community. We postulate that interlinear glossed text is sufficiently fine-grained to serve our purposes. It invites efforts to enrich it by automatic means: for example, there has been work on parsing the English translations and using the word-by-word glosses to transfer the parse tree to the object language, effectively creating a treebank automatically (Xia and Lewis, 2007). At the same time, we believe that interlinear glossed text is sufficiently simple and well-understood to allow rapid construction of resources, and to make cross-linguistic consistency a realistic goal. Each of these layers—primary text, translations, alignments, and morphological glosses—seems to be an unavoidable piece of the overall solution. The fact that these layers will exist in diminishing quantity is also unavoidable. However, there is an important consequence: the primary texts will be permanently subject to new translation initiatives, which themselves will be subject to new alignment and glossing initiatives, in which each step is an instance of semisupervised learning (Abney, 2007). As time passes, our ability to enhance the quantity and quality of the annotations will only increase, thanks to effective combinations of automatic, professional, and crowd-sourced effort. 2.2 Principles The basic principles upon which the envisioned corpus is based are the following: Universality. Covering as many languages as possible is the first priority. Progress will be gauged against concrete goals for numbers of languages, data per language, and coverage of language families (Whalen and Simons, 2009). Machine readability and consistency. “Covering” languages means enabling machine processing seamlessly across languages. This will support new types of linguistic inquiry and the development and testing of inference methods (for morphology, parsers, machine translation) across large numbers of typologically diverse languages. Community effort. We cannot expect a single organization to assemble a resource on this scale. It will be necessary to get community buy-in, and 90 many motivated volunteers. The repository will not be the sole possession of any one institution. Availability. The content of the corpus will be available under one or more permissive licenses, such as the Creative Commons Attribution License (CC-BY), placing as few limits as possible on community members’ ability to obtain and enhance the corpus, and redistribute derivative data. Utility. The corpus aims to be maximally useful, and minimally parochial. Annotation will be as lightweight as possible; richer annotations will will emerge bottom-up as they prove their utility at the large scale. Centrality of primary data. Primary texts and recordings are paramount. 
Secondary resources such as grammars and lexicons are important, but no substitute for primary data. It is desirable that secondary resources be integrated with—if not derived from—primary data in the corpus. 2.3 What to include What should be included in the corpus? To some extent, data collection will be opportunistic, but it is appropriate to have a well-defined target in mind. We consider the following essential. Metadata. One means of resource identification is to survey existing documentation for the language, including bibliographic references and locations of web resources. Provenance and proper citation of sources should be included for all data. For written text. (1) Primary documents in original printed form, e.g. scanned page images or PDF. (2) Transcription. Not only optical character recognition output, but also the output of tools that extract text from PDF, will generally require manual editing. For spoken text. (1) Audio recordings. Both elicited and spontaneous speech should be included. It is highly desirous to have some connected speech for every language. (2) Slow speech “audio transcriptions.” Carefully respeaking a spoken text can be much more efficient than written transcription, and may one day yield to speech recognition methods. (3) Written transcriptions. We do not impose any requirements on the form of transcription, though orthographic transcription is generally much faster to produce than phonetic transcription, and may even be more useful as words are represented by normalized forms. For both written and spoken text. (1) Translations of primary documents into a reference language (possibly including commentary). (2) Sentence-level segmentation and translation. (3) Word-level segmentation and glossing. (4) Morpheme-level segmentation and glossing. All documents will be included in primary form, but the percentage of documents with manual annotation, or manually corrected annotation, decreases at increasingly fine-grained levels of annotation. Where manual fine-grained annotation is unavailable, automatic methods for creating it (at a lower quality) are desirable. Defining such methods for a large range of resource-poor languages is an interesting computational challenge. Secondary resources. Although it is possible to base descriptive analyses exclusively on a text corpus (Himmelmann, 2006, p. 22), the following secondary resources should be secured if they are available: (1) A lexicon with glosses in a reference language. Ideally, everything should be attested in the texts, but as a practical matter, there will be words for which we have only a lexical entry and no instances of use. (2) Paradigms and phonology, for the construction of a morphological analyzer. Ideally, they should be inducible from the texts, but published grammatical information may go beyond what is attested in the text. 2.4 Inadequacy of existing efforts Our key desideratum is support for automatic processing across a large range of languages. No data collection effort currently exists or is proposed, to our knowledge, that addresses this desideratum. 
Traditional language archives such as the Audio Archive of Linguistic Fieldwork (UC Berkeley), Documentation of Endangered Languages (Max Planck Institute, Nijmegen), the Endangered Languages Archive (SOAS, University of London), and the Pacific And Regional Archive for Digital Sources in Endangered Cultures (Australia) offer broad coverage of languages, but the majority of their offerings are restricted in availability and do not support machine processing. Conversely, large-scale data collection efforts by the Linguistic Data Consortium and the European Language Resources Association cover less than one percent of the world’s languages, with no evident plans for major expansion of coverage. Other efforts concern the definition and aggregation of language resource metadata, including OLAC, IMDI, and 91 CLARIN (Simons and Bird, 2003; Broeder and Wittenburg, 2006; V´aradi et al., 2008), but this is not the same as collecting and disseminating data. Initiatives to develop standard formats for linguistic annotations are orthogonal to our goals. The success of the project will depend on contributed data from many sources, in many different formats. Converting all data formats to an official standard, such as the RDF-based models being developed by ISO Technical Committee 37 Sub-committee 4 Working Group 2, is simply impractical. These formats have onerous syntactic and semantic requirements that demand substantial further processing together with expert judgment, and threaten to crush the large-scale collaborative data collection effort we envisage, before it even gets off the ground. Instead, we opt for a very lightweight format, sketched in the next section, to minimize the effort of conversion and enable an immediate start. This does not limit the options of community members who desire richer formats, since they are free to invest the effort in enriching the existing data. Such enrichment efforts may gain broad support if they deliver a tangible benefit for cross-language processing. 3 A Simple Storage Model Here we sketch a simple approach to storage of texts (including transcribed speech), bitexts, interlinear glossed text, and lexicons. We have been deliberately schematic since the goal is just to give grounds for confidence that there exists a general, scalable solution. For readability, our illustrations will include space-separated sequences of tokens. However, behind the scenes these could be represented as a sequence of pairs of start and end offsets into a primary text or speech signal, or as a sequence of integers that reference an array of strings. Thus, when we write (1a), bear in mind it may be implemented as (1b) or (1c). (1) a. This is a point of order . b. (0,4), (5,7), (8,9), (10,15), (16,18), ... c. 9347, 3053, 0038, 3342, 3468, ... In what follows, we focus on the minimal requirements for storing and disseminating aligned text, not the requirements for efficient in-memory data structures. Moreover, we are agnostic about whether the normalized, tokenized format is stored entire or computed on demand. We take an aligned text to be composed of a series of aligned sentences, each consisting of a small set of attributes and values, e.g.: ID: europarl/swedish/ep-00-01-17/18 LANGS: swd eng SENT: det g¨aller en ordningsfr˚aga TRANS: this is a point of order ALIGN: 1-1 2-2 3-3 4-4 4-5 4-6 PROVENANCE: pharaoh-v1.2, ... 
REV: 8947 2010-05-02 10:35:06 leobfld12 RIGHTS: Copyright (C) 2010 Uni...; CC-BY The value of ID identifies the document and sentence, and any collection to which the document belongs. Individual components of the identifier can be referenced or retrieved. The LANGS attribute identifies the source and reference language using ISO 639 codes.2 The SENT attribute contains space-delimited tokens comprising a sentence. Optional attributes TRANS and ALIGN hold the translation and alignment, if these are available; they are omitted in monolingual text. A provenance attribute records any automatic or manual processes which apply to the record, and a revision attribute contains the version number, timestamp, and username associated with the most recent modification of the record, and a rights attribute contains copyright and license information. When morphological annotation is available, it is represented by two additional attributes, LEX and AFF. Here is a monolingual example: ID: example/001 LANGS: eng SENT: the dogs are barking LEX: the dog be bark AFF: - PL PL ING Note that combining all attributes of these two examples—that is, combining word-by-word translation with morphological analysis—yields interlinear glossed text. A bilingual lexicon is an indispensable resource, whether provided as such, induced from a collection of aligned text, or created by merging contributed and induced lexicons. A bilingual lexicon can be viewed as an inventory of cross-language correspondences between words or groups of words. These correspondences are just aligned text fragments, albeit much smaller than a sentence. Thus, we take a bilingual lexicon to be a kind of text in which each record contains a single lexeme and its translation, represented using the LEX and TRANS attributes we have already introduced, e.g.: 2http://www.sil.org/iso639-3/ 92 ID: swedishlex/v3.2/0419 LANGS: swd eng LEX: ordningsfr˚aga TRANS: point of order In sum, the Universal Corpus is represented as a massive store of records, each representing a single sentence or lexical entry, using a limited set of attributes. The store is indexed for efficient access, and supports access to slices identified by language, content, provenance, rights, and so forth. Many component collections would be “unioned” into this single, large Corpus, with only the record identifiers capturing the distinction between the various data sources. Special cases of aligned text and wordlists, spanning more than 1,000 languages, are Bible translations and Swadesh wordlists (Resnik et al., 1999; Swadesh, 1955). Here there are obvious use-cases for accessing a particular verse or word across all languages. However, it is not necessary to model n-way language alignments. Instead, such sources are implicitly aligned by virtue of their structure. Extracting all translations of a verse, or all cognates of a Swadesh wordlist item, is an index operation that returns monolingual records, e.g.: ID: swadesh/47 ID: swadesh/47 LANGS: fra LANGS: eng LEX: chien LEX: dog 4 Building the Corpus Data collection on this scale is a daunting prospect, yet it is important to avoid the paralysis of over-planning. We can start immediately by leveraging existing infrastructure, and the voluntary effort of interested members of the language resources community. One possibility is to found a “Language Commons,” an open access repository of language resources hosted in the Internet Archive, with a lightweight method for community members to contribute data sets. 
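As a concrete illustration of the storage model in Section 3, the sketch below renders such records in memory and supports the kind of language "slice" access described above. The attribute names follow the paper's examples; the classes and query method are assumptions for illustration, not a prescribed interface.

```python
# Toy rendering of the Section 3 record store. Attribute names follow the
# paper's examples; the in-memory representation and API are illustrative.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Record:
    id: str                       # e.g. "europarl/swedish/ep-00-01-17/18"
    langs: Tuple[str, ...]        # (source, reference), or (source,) if monolingual
    sent: Optional[str] = None    # space-delimited tokens
    trans: Optional[str] = None   # translation into the reference language
    align: Optional[str] = None   # e.g. "1-1 2-2 3-3 4-4 4-5 4-6"
    lex: Optional[str] = None     # lexeme(s) / lemmas
    aff: Optional[str] = None     # affix glosses, e.g. "- PL PL ING"
    provenance: str = ""
    rights: str = ""

class Store:
    """A 'unioned' collection of records with simple slicing by language."""
    def __init__(self):
        self.records = []
        self.by_lang = {}         # ISO 639 code -> list of record indices

    def add(self, rec):
        idx = len(self.records)
        self.records.append(rec)
        for lang in rec.langs:
            self.by_lang.setdefault(lang, []).append(idx)

    def slice_by_lang(self, lang):
        return [self.records[i] for i in self.by_lang.get(lang, [])]

store = Store()
store.add(Record(id="swadesh/47", langs=("fra",), lex="chien"))
store.add(Record(id="swadesh/47", langs=("eng",), lex="dog"))
print([r.lex for r in store.slice_by_lang("fra")])   # ['chien']
```

An analogous index keyed on the ID field would support the paper's example of pulling all records for "swadesh/47" across languages as a single index operation over monolingual records.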
A fully processed and indexed version of selected data can be made accessible via a web services interface to a major cloud storage facility, such as Amazon Web Services. A common query interface could be supported via APIs in multiple NLP toolkits such as NLTK and GATE (Bird et al., 2009; Cunningham et al., 2002), and also in generic frameworks such as UIMA and SOAP, leaving developers to work within their preferred environment. 4.1 Motivation for data providers We hope that potential contributors of data will be motivated to participate primarily by agreement with the goals of the project. Even someone who has specialized in a particular language or language family maintains an interest, we expect, in the universal question—the exploration of Language writ large. Data providers will find benefit in the availability of volunteers for crowd-sourcing, and tools for (semi-)automated quality control, refinement, and presentation of data. For example, a data holder should be able to contribute recordings and get help in transcribing them, through a combination of volunteer labor and automatic processing. Documentary linguists and computational linguists have much to gain from collaboration. In return for the data that documentary linguistics can provide, computational linguistics has the potential to revolutionize the tools and practice of language documentation. We also seek collaboration with communities of language speakers. The corpus provides an economy of scale for the development of literacy materials and tools for interactive language instruction, in support of language preservation and revitalization. For small languages, literacy in the mother tongue is often defended on the grounds that it provides the best route to literacy in the national language (Wagner, 1993, ch. 8). An essential ingredient of any local literacy program is to have a substantial quantity of available texts that represent familiar topics including cultural heritage, folklore, personal narratives, and current events. Transition to literacy in a language of wider communication is aided when transitional materials are available (Waters, 1998, pp. 61ff). Mutual benefits will also flow from the development of tools for low-cost publication and broadcast in the language, with copies of the published or broadcast material licensed to and archived in the corpus. 4.2 Roles The enterprise requires collaboration of many individuals and groups, in a variety of roles. Editors. A critical group are people with sufficient engagement to serve as editors for particular language families, who have access to data or are able to negotiate redistribution rights, and oversee the workflow of transcription, translation, and annotation. 93 CL Research. All manual annotation steps need to be automated. Each step presents a challenging semi-supervised learning and cross-linguistic bootstrapping problem. In addition, the overall measure of success—induction of machine translation systems from limited resources—pushes the state of the art (Kumar et al., 2007). Numerous other CL problems arise: active learning to improve the quality of alignments and bilingual lexicons; automatic language identification for lowdensity languages; and morphology learning. Tool builders. We need tools for annotation, format conversion, spidering and language identification, search, archiving, and presentation. Innovative crowd-sourcing solutions are of particular interest, e.g. 
web-based functionality for transcribing audio and video of oral literature, or setting up a translation service based on aligned texts for a low-density language, and collecting the improved translations suggested by users. Volunteer annotators. An important reason for keeping the data model as lightweight as possible is to enable contributions from volunteers with little or no linguistic training. Two models are the volunteers who scan documents and correct OCR output in Project Gutenberg, or the undergraduate volunteers who have constructed Greek and Latin treebanks within Project Perseus (Crane, 2010). Bilingual lexicons that have been extracted from aligned text collections might be corrected using crowd-sourcing, leading to improved translation models and improved alignments. We also see the Universal Corpus as an excellent opportunity for undergraduates to participate in research, and for native speakers to participate in the preservation of their language. Documentary linguists. The collection protocol known as Basic Oral Language Documentation (BOLD) enables documentary linguists to collect 2–3 orders of magnitude more oral discourse than before (Bird, 2010). Linguists can equip local speakers to collect written texts, then to carefully “respeak” and orally translate the texts into a reference language. With suitable tools, incorporating active learning, local speakers could further curate bilingual texts and lexicons. An early need is pilot studies to determine costings for different categories of language. Data agencies. The LDC and ELRA have a central role to play, given their track record in obtaining, curating, and publishing data with licenses that facilitate language technology development. We need to identify key resources where negotiation with the original data provider, and where payment of all preparation costs plus compensation for lost revenue, leads to new material for the Corpus. This is a new publication model and a new business model, but it can co-exist with the existing models. Language archives. Language archives have a special role to play as holders of unique materials. They could contribute existing data in its native format, for other participants to process. They could give bilingual texts a distinct status within their collections, to facilitate discovery. Funding agencies. To be successful, the Human Language Project would require substantial funds, possibly drawing on a constellation of public and private agencies in many countries. However, in the spirit of starting small, and starting now, agencies could require that sponsored projects which collect texts and build lexicons contribute them to the Language Commons. After all, the most effective time to do translation, alignment, and lexicon work is often at the point when primary data is first collected, and this extra work promises direct benefits to the individual project. 4.3 Early tasks Seed corpus. The central challenge, we believe, is getting critical mass. Data attracts data, and if one can establish a sufficient seed, the effort will snowball. We can make some concrete proposals as to how to collect a seed. Language resources on the web are one source—the Cr´ubad´an project has identified resources for 400 languages, for example (Scannell, 2008); the New Testament of the Bible exists in about 1200 languages and contains of the order of 100k words. We hope that existing efforts that are already well-disposed toward electronic distribution will participate. 
We particularly mention the Language and Culture Archive of the Summer Institute of Linguistics, and the Rosetta Project. The latter is already distributed through the Internet Archive and contains material for 2500 languages. Resource discovery. Existing language resources need to be documented, a large un94 dertaking that depends on widely distributed knowledge. Existing published corpora from the LDC, ELRA and dozens of other sources—a total of 85,000 items—are already documented in the combined catalog of the Open Language Archives Community,3 so there is no need to recreate this information. Other resources can be logged by community members using a public access wiki, with a metadata template to ensure key fields are elicited such as resource owner, license, ISO 639 language code(s), and data type. This information can itself be curated and stored in the form of an OLAC archive, to permit search over the union of the existing and newly documented items. Work along these lines has already been initiated by LDC and ELRA (Cieri et al., 2010). Resource classification. Editors with knowledge of particular language families will categorize documented resources relative to the needs of the project, using controlled vocabularies. This involves examining a resource, determining the granularity and provenance of the segmentation and alignment, checking its ISO 639 classifications, assigning it to a logarithmic size category, documenting its format and layout, collecting sample files, and assigning a priority score. Acquisition. Where necessary, permission will be sought to lodge the resource in the repository. Funding may be required to buy the rights to the resource from its owner, as compensation for lost revenue from future data sales. Funding may be required to translate the source into a reference language. The repository’s ingestion process is followed, and the resource metadata is updated. Text collection. Languages for which the available resources are inadequate are identified, and the needs are prioritized, based on linguistic and geographical diversity. Sponsorship is sought for collecting bilingual texts in high priority languages. Workflows are developed for languages based on a variety of factors, such as availability of educated people with native-level proficiency in their mother tongue and good knowledge of a reference language, internet access in the language area, availability of expatriate speakers in a first-world context, and so forth. A classification scheme is required to help predict which workflows will be most successful in a given situation. 3http://www.language-archives.org/ Audio protocol. The challenge posed by languages with no written literature should not be underestimated. A promising collection method is Basic Oral Language Documentation, which calls for inexpensive voice recorders and netbooks, project-specific software for transcription and sentence-aligned translation, network bandwidth for upload to the repository, and suitable training and support throughout the process. Corpus readers. Software developers will inspect the file formats and identify high priority formats based on information about resource priorities and sizes. They will code a corpus reader, an open source reference implementation for converting between corpus formats and the storage model presented in section 3. 
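To illustrate what such a corpus reader might look like, the sketch below converts a hypothetical tab-separated bitext file (one "source<TAB>translation" pair per line) into records with the Section 3 attributes. The input format, file name, and metadata values are invented for illustration; a real reader would be written against whichever contributed format it targets.

```python
# Hypothetical corpus reader: tab-separated bitext -> Section 3 records.
# Input format and metadata values are assumptions made for illustration.
def read_tsv_bitext(path, src_lang, ref_lang, collection_id, provenance=""):
    records = []
    with open(path, encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            line = line.rstrip("\n")
            if not line:
                continue
            source, translation = line.split("\t", 1)
            records.append({
                "ID": f"{collection_id}/{i}",
                "LANGS": (src_lang, ref_lang),
                "SENT": source,
                "TRANS": translation,
                "PROVENANCE": provenance,
            })
    return records

# e.g. read_tsv_bitext("swd-eng.tsv", "swd", "eng", "example/bitext",
#                      provenance="contributed, converted 2010")
```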
4.4 Further challenges There are many additional difficulties that could be listed, though we expect they can be addressed over time, once a sufficient seed corpus is established. Two particular issues deserve further comment, however. Licenses. Intellectual property issues surrounding linguistic corpora present a complex and evolving landscape (DiPersio, 2010). For users, it would be ideal for all materials to be available under a single license that permits derivative works, commercial use, and redistribution, such as the Creative Commons Attribution License (CC-BY). There would be no confusion about permissible uses of subsets and aggregates of the collected corpora, and it would be easy to view the Universal Corpus as a single corpus. But to attract as many data contributors as possible, we cannot make such a license a condition of contribution. Instead, we propose to distinguish between: (1) a digital Archive of contributed corpora that are stored in their original format and made available under a range of licenses, offering preservation and dissemination services to the language resources community at large (i.e. the Language Commons); and (2) the Universal Corpus, which is embodied as programmatic access to an evolving subset of materials from the archive under one of a small set of permissive licenses, licenses whose unions and intersections are understood (e.g. CC-BY and its non-commercial counterpart CC-BY-NC). Apart from being a useful service in its own right, the Archive would provide a staging 95 ground for the Universal Corpus. Archived corpora having restrictive licenses could be evaluated for their potential as contributions to the Corpus, making it possible to prioritize the work of negotiating more liberal licenses. There are reasons to distinguish Archive and Corpus even beyond the license issues. The Corpus, but not the Archive, is limited to the formats that support automatic cross-linguistic processing. Conversely, since the primary interface to the Corpus is programmatic, it may include materials that are hosted in many different archives; it only needs to know how to access and deliver them to the user. Incidentally, we consider it an implementation issue whether the Corpus is provided as a web service, a download service with user-side software, user-side software with data delivered on physical media, or a cloud application with user programs executed server-side. Expenses of conversion and editing. We do not trivialize the work involved in converting documents to the formats of section 3, and in manually correcting the results of noisy automatic processes such as optical character recognition. Indeed, the amount of work involved is one motivation for the lengths to which we have gone to keep the data format simple. For example, we have deliberately avoided specifying any particular tokenization scheme. Variation will arise as a consequence, but we believe that it will be no worse than the variability in input that current machine translation training methods routinely deal with, and will not greatly injure the utility of the Corpus. The utter simplicity of the formats also widens the pool of potential volunteers for doing the manual work that is required. By avoiding linguistically delicate annotation, we can take advantage of motivated but untrained volunteers such as students and members of speaker communities. 5 Conclusion Nearly twenty years ago, the linguistics community received a wake-up call, when Hale et al. 
(1992) predicted that 90% of the world’s linguistic diversity would be lost or moribund by the year 2100, and warned that linguistics might “go down in history as the only science that presided obliviously over the disappearance of 90 per cent of the very field to which it is dedicated.” Today, language documentation is a high priority in mainstream linguistics. However, the field of computational linguistics is yet to participate substantially. The first half century of research in computational linguistics—from circa 1960 up to the present—has touched on less than 1% of the world’s languages. For a field which is justly proud of its empirical methods, it is time to apply those methods to the remaining 99% of languages. We will never have the luxury of richly annotated data for these languages, so we are forced to ask ourselves: can we do more with less? We believe the answer is “yes,” and so we challenge the computational linguistics community to adopt a scalable computational approach to the problem. We need leaner methods for building machine translation systems; new algorithms for cross-linguistic bootstrapping via multiple paths; more effective techniques for leveraging human effort in labeling data; scalable ways to get bilingual text for unwritten languages; and large scale social engineering to make it all happen quickly. To believe we can build this Universal Corpus is certainly audacious, but not to even try is arguably irresponsible. The initial step parallels earlier efforts to create large machine-readable text collections which began in the 1960s and reverberated through each subsequent decade. Collecting bilingual texts is an orthodox activity, and many alternative conceptions of a Human Language Project would likely include this as an early task. The undertaking ranks with the largest datacollection efforts in science today. It is not achievable without considerable computational sophistication and the full engagement of the field of computational linguistics. Yet we require no fundamentally new technologies. We can build on our strengths in corpus-based methods, linguistic models, human- and machine-supplied annotations, and learning algorithms. By rising to this, the greatest language challenge of our time, we enable multi-lingual technology development at a new scale, and simultaneously lay the foundations for a new science of empirical universal linguistics. Acknowledgments We are grateful to Ed Bice, Doug Oard, Gary Simons, participants of the Language Commons working group meeting in Boston, students in the “Digitizing Languages” seminar (University of Michigan), and anonymous reviewers, for feedback on an earlier version of this paper. 96 References Steven Abney. 2007. Semisupervised Learning for Computational Linguistics. Chapman & Hall/CRC. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O’Reilly Media. http://nltk.org/book. Steven Bird. 2010. A scalable method for preserving oral literature from small languages. In Proceedings of the 12th International Conference on Asia-Pacific Digital Libraries, pages 5–14. Daan Broeder and Peter Wittenburg. 2006. The IMDI metadata framework, its current application and future direction. International Journal of Metadata, Semantics and Ontologies, 1:119–132. Christopher Cieri, Khalid Choukri, Nicoletta Calzolari, D. Terence Langendoen, Johannes Leveling, Martha Palmer, Nancy Ide, and James Pustejovsky. 2010. A road map for interoperable language resource metadata. 
In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC). Gregory R. Crane. 2010. Perseus Digital Library: Research in 2008/09. http://www.perseus. tufts.edu/hopper/research/current. Accessed Feb. 2010. Hamish Cunningham, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. 2002. GATE: an architecture for development of robust HLT applications. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 168–175. Association for Computational Linguistics. Denise DiPersio. 2010. Implications of a permissions culture on the development and distribution of language resources. In FLaReNet Forum 2010. Fostering Language Resources Network. http: //www.flarenet.eu/. Hale, M. Krauss, L. Watahomigie, A. Yamamoto, and C. Craig. 1992. Endangered languages. Language, 68(1):1–42. Nikolaus P. Himmelmann. 2006. Language documentation: What is it and what is it good for? In Jost Gippert, Nikolaus Himmelmann, and Ulrike Mosel, editors, Essentials of Language Documentation, pages 1–30. Mouton de Gruyter. Human Genome Project. 2007. The science behind the Human Genome Project. http: //www.ornl.gov/sci/techresources/ Human_Genome/project/info.shtml. Accessed Dec. 2007. W. John Hutchins and Harold L. Somers. 1992. An Introduction to Machine Translation. Academic Press. Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press. Shankar Kumar, Franz J. Och, and Wolfgang Macherey. 2007. Improving word alignment with bridge languages. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 42–50, Prague, Czech Republic. Association for Computational Linguistics. Mike Maxwell and Baden Hughes. 2006. Frontiers in linguistic annotation for lower-density languages. In Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006, pages 29–37, Sydney, Australia, July. Association for Computational Linguistics. Philip Resnik, Mari Broman Olsen, and Mona Diab. 1999. The Bible as a parallel corpus: Annotating the ‘book of 2000 tongues’. Computers and the Humanities, 33:129–153. Kevin Scannell. 2008. The Cr´ubad´an Project: Corpus building for under-resourced languages. In Cahiers du Cental 5: Proceedings of the 3rd Web as Corpus Workshop. Gary Simons and Steven Bird. 2003. The Open Language Archives Community: An infrastructure for distributed archiving of language resources. Literary and Linguistic Computing, 18:117–128. Morris Swadesh. 1955. Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics, 21:121–137. Tam´as V´aradi, Steven Krauwer, Peter Wittenburg, Martin Wynne, and Kimmo Koskenniemi. 2008. CLARIN: common language resources and technology infrastructure. In Proceedings of the Sixth International Language Resources and Evaluation Conference. European Language Resources Association. Daniel A. Wagner. 1993. Literacy, Culture, and Development: Becoming Literate in Morocco. Cambridge University Press. Glenys Waters. 1998. Local Literacies: Theory and Practice. Summer Institute of Linguistics, Dallas. Douglas H. Whalen and Gary Simons. 2009. Endangered language families. In Proceedings of the 1st International Conference on Language Documentation and Conservation. University of Hawaii. http://hdl.handle.net/10125/5017. Anthony C. Woodbury. 2010. Language documentation. In Peter K. 
Austin and Julia Sallabank, editors, The Cambridge Handbook of Endangered Languages. Cambridge University Press. Fei Xia and William D. Lewis. 2007. Multilingual structural projection across interlinearized text. In Proceedings of the Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL). Association for Computational Linguistics. 97
2010
10
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 979–988, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Learning Script Knowledge with Web Experiments Michaela Regneri Alexander Koller Department of Computational Linguistics and Cluster of Excellence Saarland University, Saarbr¨ucken {regneri|koller|pinkal}@coli.uni-saarland.de Manfred Pinkal Abstract We describe a novel approach to unsupervised learning of the events that make up a script, along with constraints on their temporal ordering. We collect naturallanguage descriptions of script-specific event sequences from volunteers over the Internet. Then we compute a graph representation of the script’s temporal structure using a multiple sequence alignment algorithm. The evaluation of our system shows that we outperform two informed baselines. 1 Introduction A script is “a standardized sequence of events that describes some stereotypical human activity such as going to a restaurant or visiting a doctor” (Barr and Feigenbaum, 1981). Scripts are fundamental pieces of commonsense knowledge that are shared between the different members of the same culture, and thus a speaker assumes them to be tacitly understood by a hearer when a scenario related to a script is evoked: When one person says “I’m going shopping”, it is an acceptable reply to say “did you bring enough money?”, because the SHOPPING script involves a ‘payment’ event, which again involves the transfer of money. It has long been recognized that text understanding systems would benefit from the implicit information represented by a script (Cullingford, 1977; Mueller, 2004; Miikkulainen, 1995). There are many other potential applications, including automated storytelling (Swanson and Gordon, 2008), anaphora resolution (McTear, 1987), and information extraction (Rau et al., 1989). However, it is also commonly accepted that the large-scale manual formalization of scripts is infeasible. While there have been a few attempts at doing this (Mueller, 1998; Gordon, 2001), efforts in which expert annotators create script knowledge bases clearly don’t scale. The same holds true of the script-like structures called “scenario frames” in FrameNet (Baker et al., 1998). There has recently been a surge of interest in automatically learning script-like knowledge resources from corpora (Chambers and Jurafsky, 2008b; Manshadi et al., 2008); but while these efforts have achieved impressive results, they are limited by the very fact that a lot of scripts – such as SHOPPING – are shared implicit knowledge, and their events are therefore rarely elaborated in text. In this paper, we propose a different approach to the unsupervised learning of script-like knowledge. We focus on the temporal event structure of scripts; that is, we aim to learn what phrases can describe the same event in a script, and what constraints must hold on the temporal order in which these events occur. We approach this problem by asking non-experts to describe typical event sequences in a given scenario over the Internet. This allows us to assemble large and varied collections of event sequence descriptions (ESDs), which are focused on a single scenario. We then compute a temporal script graph for the scenario by identifying corresponding event descriptions using a Multiple Sequence Alignment algorithm from bioinformatics, and converting the alignment into a graph. 
This graph makes statements about what phrases can describe the same event of a scenario, and in what order these events can take place. Crucially, our algorithm exploits the sequential structure of the ESDs to distinguish event descriptions that occur at different points in the script storyline, even when they are semantically similar. We evaluate our script graph algorithm on ten unseen scenarios, and show that it significantly outperforms a clustering-based baseline. The paper is structured as follows. We will first position our research in the landscape of related work in Section 2. We will then define how 979 we understand scripts, and what aspect of scripts we model here, in Section 3. Section 4 describes our data collection method, and Section 5 explains how we use Multiple Sequence Alignment to compute a temporal script graph. We evaluate our system in Section 6 and conclude in Section 7. 2 Related Work Approaches to learning script-like knowledge are not new. For instance, Mooney (1990) describes an early attempt to acquire causal chains, and Smith and Arnold (2009) use a graph-based algorithm to learn temporal script structures. However, to our knowledge, such approaches have never been shown to generalize sufficiently for wide coverage application, and none of them was rigorously evaluated. More recently, there have been a number of approaches to automatically learning event chains from corpora (Chambers and Jurafsky, 2008b; Chambers and Jurafsky, 2009; Manshadi et al., 2008). These systems typically employ a method for classifying temporal relations between given event descriptions (Chambers et al., 2007; Chambers and Jurafsky, 2008a; Mani et al., 2006). They achieve impressive performance at extracting high-level descriptions of procedures such as a CRIMINAL PROCESS. Because our approach involves directly asking people for event sequence descriptions, it can focus on acquiring specific scripts from arbitrary domains, and we can control the level of granularity at which scripts are described. Furthermore, we believe that much information about scripts is usually left implicit in texts and is therefore easier to learn from our more explicit data. Finally, our system automatically learns different phrases which describe the same event together with the temporal ordering constraints. Jones and Thompson (2003) describe an approach to identifying different natural language realizations for the same event considering the temporal structure of a scenario. However, they don’t aim to acquire or represent the temporal structure of the whole script in the end. In its ability to learn paraphrases using Multiple Sequence Alignment, our system is related to Barzilay and Lee (2003). Unlike Barzilay and Lee, we do not tackle the general paraphrase problem, but only consider whether two phrases describe the same event in the context of the same script. Furthermore, the atomic units of our alignment process are entire phrases, while in Barzilay and Lee’s setting, the atomic units are words. Finally, it is worth pointing out that our work is placed in the growing landscape of research that attempts to learn linguistic information out of data directly collected from users over the Internet. Some examples are the general acquisition of commonsense knowledge (Singh et al., 2002), the use of browser games for that purpose (von Ahn and Dabbish, 2008), and the collaborative annotation of anaphoric reference (Chamberlain et al., 2009). 
In particular, the use of the Amazon Mechanical Turk, which we use here, has been evaluated and shown to be useful for language processing tasks (Snow et al., 2008). 3 Scripts Before we delve into the technical details, let us establish some terminology. In this paper, we distinguish scenarios, as classes of human activities, from scripts, which are stereotypical models of the internal structure of these activities. Where EATING IN A RESTAURANT is a scenario, the script describes a number of events, such as ordering and leaving, that must occur in a certain order in order to constitute an EATING IN A RESTAURANT activity. The classical perspective on scripts (Schank and Abelson, 1977) has been that next to defining some events with temporal constraints, a script also defines their participants and their causal connections. Here we focus on the narrower task of learning the events that a script consists of, and of modeling and learning the temporal ordering constraints that hold between them. Formally, we will specify a script (in this simplified sense) in terms of a directed graph Gs = (Es, Ts), where Es is a set of nodes representing the events of a scenario s, and Ts is a set of edges (ei, ek) indicating that the event ei typically happens before ek in s. We call Gs the temporal script graph (TSG) for s. Each event in a TSG can usually be expressed with many different natural-language phrases. As the TSG in Fig. 3 illustrates, the first event in the script for EATING IN A FAST FOOD RESTAURANT can be equivalently described as ‘walk to the counter’ or ‘walk up to the counter’; even phrases like ‘walk into restaurant’, which would not usually be taken as paraphrases of these, can be accepted as describing the same event in the context 980 1. walk into restaurant 2. find the end of the line 3. stand in line 4. look at menu board 5. decide on food and drink 6. tell cashier your order 7. listen to cashier repeat order 8. listen for total price 9. swipe credit card in scanner 10. put up credit card 11. take receipt 12. look at order number 13. take your cup 14. stand off to the side 15. wait for number to be called 16. get your drink 1. look at menu 2. decide what you want 3. order at counter 4. pay at counter 5. receive food at counter 6. take food to table 7. eat food 1. walk to the counter 2. place an order 3. pay the bill 4. wait for the ordered food 5. get the food 6. move to a table 7. eat food 8. exit the place Figure 1: Three event sequence descriptions of this scenario. We call a natural-language realization of an individual event in the script an event description, and we call a sequence of event descriptions that form one particular instance of the script an event sequence description (ESD). Examples of ESDs for the FAST FOOD RESTAURANT script are shown in Fig. 1. One way to look at a TSG is thus that its nodes are equivalence classes of different phrases that describe the same event; another is that valid ESDs can be generated from a TSG by randomly selecting phrases from some nodes and arranging them in an order that respects the temporal precedence constraints in Ts. Our goal in this paper is to take a set of ESDs for a given scenario as our input and then compute a TSG that clusters different descriptions of the same event into the same node, and contains edges that generalize the temporal information encoded in the ESDs. 4 Data Acquisition In order to automatically learn TSGs, we selected 22 scenarios for which we collect ESDs. 
We deliberately included scenarios of varying complexity, including some that we considered hard to describe (CHILDHOOD, CREATE A HOMEPAGE), scenarios with highly variable orderings between events (MAKING SCRAMBLED EGGS), and scenarios for which we expected cultural differences (WEDDING). We used the Amazon Mechanical Turk1 to collect the data. For every scenario, we asked 25 people to enter a typical sequence of events in this scenario, in temporal order and in “bullet point style”. 1http://www.mturk.com/ We required the annotators to enter at least 5 and at most 16 events. Participants were allowed to skip a scenario if they felt unable to enter events for it, but had to indicate why. We did not restrict the participants (e.g. to native speakers). In this way, we collected 493 ESDs for the 22 scenarios. People used the possibility to skip a form 57 times. The most frequent explanation for this was that they didn’t know how a certain scenario works: The scenario with the highest proportion of skipped forms was CREATE A HOMEPAGE, whereas MAKING SCRAMBLED EGGS was the only one in which nobody skipped a form. Because we did not restrict the participants’ inputs, the data was fairly noisy. For the purpose of this study, we manually corrected the data for orthography and filtered out forms that were written in broken English or did not comply with the task (e.g. when users misunderstood the scenario, or did not list the event descriptions in temporal order). Overall we discarded 15% of the ESDs. Fig. 1 shows three of the ESDs we collected for EATING IN A FAST-FOOD RESTAURANT. As the example illustrates, descriptions differ in their starting points (‘walk into restaurant’ vs. ‘walk to counter’), the granularity of the descriptions (‘pay the bill’ vs. event descriptions 8–11 in the third sequence), and the events that are mentioned in the sequence (not even ‘eat food’ is mentioned in all ESDs). Overall, the ESDs we collected consisted of 9 events on average, but their lengths varied widely: For most scenarios, there were significant numbers of ESDs both with the minimum length of 5 and the maximum length of 16 and everything in between. Combined with the fact that 93% of all individual event descriptions occurred only once, this makes it challenging to align the different ESDs with each other. 5 Temporal Script Graphs We will now describe how we compute a temporal script graph out of the collected data. We proceed in two steps. First, we identify phrases from different ESDs that describe the same event by computing a Multiple Sequence Alignment (MSA) of all ESDs for the same scenario. Then we postprocess the MSA and convert it into a temporal script graph, which encodes and generalizes the temporal information contained in the original ESDs. 
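As a point of reference for the construction that follows, the target representation can be pictured with a minimal data structure. The sketch below is illustrative only (the class, the optional-skip probability, and the toy phrases are our assumptions, not the authors' implementation); it also shows how a valid ESD can be generated from a TSG by ordering events consistently with the precedence edges and picking one phrase per event, as described in Section 3.

```python
import random
from collections import defaultdict

class TemporalScriptGraph:
    """Minimal TSG: each node is a set of phrases describing one event;
    an edge (u, v) means that event u typically happens before event v."""

    def __init__(self):
        self.events = {}                 # node id -> set of paraphrases
        self.before = defaultdict(set)   # node id -> set of successor node ids

    def add_event(self, node_id, phrases):
        self.events[node_id] = set(phrases)

    def add_precedence(self, u, v):
        self.before[u].add(v)

    def sample_esd(self, keep_prob=0.8):
        """Generate one valid ESD: visit events in a random topological order
        of the precedence edges, optionally skip some (real ESDs do not mention
        every event), and realize each kept event with one of its phrases."""
        indegree = {n: 0 for n in self.events}
        for u in self.before:
            for v in self.before[u]:
                indegree[v] += 1
        ready = [n for n, d in indegree.items() if d == 0]
        esd = []
        while ready:
            node = random.choice(ready)          # random tie-breaking
            ready.remove(node)
            if random.random() < keep_prob:
                esd.append(random.choice(sorted(self.events[node])))
            for succ in self.before[node]:
                indegree[succ] -= 1
                if indegree[succ] == 0:
                    ready.append(succ)
        return esd

# Toy usage with phrases from the fast-food scenario:
tsg = TemporalScriptGraph()
tsg.add_event("order", {"place an order", "order at counter"})
tsg.add_event("pay", {"pay at counter", "pay the bill"})
tsg.add_event("eat", {"eat food"})
tsg.add_precedence("order", "pay")
tsg.add_precedence("pay", "eat")
print(tsg.sample_esd())   # e.g. ['order at counter', 'pay the bill', 'eat food']
```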
981 row s1 s2 s3 s4 1 ⊘ walk into restaurant ⊘ enter restaurant 2 ⊘ ⊘ walk to the counter go to counter 3 ⊘ find the end of the line ⊘ ⊘ 4 ⊘ stand in line ⊘ ⊘ 5 look at menu look at menu board ⊘ ⊘ 6 decide what you want decide on food and drink ⊘ make selection 7 order at counter tell cashier your order place an order place order 8 ⊘ listen to cashier repeat order ⊘ ⊘ 9 pay at counter ⊘ pay the bill pay for food 10 ⊘ listen for total price ⊘ ⊘ 11 ⊘ swipe credit card in scanner ⊘ ⊘ 12 ⊘ put up credit card ⊘ ⊘ 13 ⊘ take receipt ⊘ ⊘ 14 ⊘ look at order number ⊘ ⊘ 15 ⊘ take your cup ⊘ ⊘ 16 ⊘ stand off to the side ⊘ ⊘ 17 ⊘ wait for number to be called wait for the ordered food ⊘ 18 receive food at counter get your drink get the food pick up order 19 ⊘ ⊘ ⊘ pick up condiments 20 take food to table ⊘ move to a table go to table 21 eat food ⊘ eat food consume food 22 ⊘ ⊘ ⊘ clear tray 22 ⊘ ⊘ exit the place ⊘ Figure 2: A MSA of four event sequence descriptions 5.1 Multiple Sequence Alignment The problem of computing Multiple Sequence Alignments comes from bioinformatics, where it is typically used to find corresponding elements in proteins or DNA (Durbin et al., 1998). A sequence alignment algorithm takes as its input some sequences s1, . . . , sn ∈Σ∗over some alphabet Σ, along with a cost function cm : Σ×Σ → R for substitutions and gap costs cgap ∈R for insertions and deletions. In bioinformatics, the elements of Σ could be nucleotides and a sequence could be a DNA sequence; in our case, Σ contains the individual event descriptions in our data, and the sequences are the ESDs. A Multiple Sequence Alignment A of these sequences is then a matrix as in Fig. 2: The i-th column of A is the sequence si, possibly with some gaps (“⊘”) interspersed between the symbols of si, such that each row contains at least one nongap. If a row contains two non-gaps, we take these symbols to be aligned; aligning a non-gap with a gap can be thought of as an insertion or deletion. Each sequence alignment A can be assigned a cost c(A) in the following way: c(A) = cgap · Σ⊘+ n X i=1 m X j=1, aji̸=⊘ m X k=j+1, aki̸=⊘ cm(aji, aki) where Σ⊘is the number of gaps in A, n is the number of rows and m the number of sequences. In other words, we sum up the alignment cost for any two symbols from Σ that are aligned with each other, and add the gap cost for each gap. There is an algorithm that computes cheapest pairwise alignments (i.e. n = 2) in polynomial time (Needleman and Wunsch, 1970). For n > 2, the problem is NP-complete, but there are efficient algorithms that approximate the cheapest MSAs by aligning two sequences first, considering the result as a single sequence whose elements are pairs, and repeating this process until all sequences are incorporated in the MSA (Higgins and Sharp, 1988). 5.2 Semantic similarity In order to apply MSA to the problem of aligning ESDs, we choose Σ to be the set of all individual event descriptions in a given scenario. Intuitively, we want the MSA to prefer the alignment of two phrases if they are semantically similar, i.e. it should cost more to align ‘exit’ with ‘eat’ than ‘exit’ with ‘leave’. Thus we take a measure of semantic (dis)similarity as the cost function cm. The phrases to be compared are written in bullet-point style. They are typically short and elliptic (no overt subject), they lack determiners and use infinitive or present progressive form for the main verb. Also, the lexicon differs considerably from usual newspaper corpora. 
For these reasons, standard methods for similarity assessment are not straightforwardly applicable: Simple bagof-words approaches do not provide sufficiently good results, and standard taggers and parsers cannot process our descriptions with sufficient accuracy. We therefore employ a simple, robust heuristics, which is tailored to our data and provides very 982 get in line enter restaurant stand in line wait in line look at menu board wait in line to order my food examine menu board look at the menu look at menu go to cashier go to ordering counter go to counter i decide what i want decide what to eat decide on food and drink decide on what to order make selection decide what you want order food i order it tell cashier your order order items from wall menu order my food place an order order at counter place order pay at counter pay for the food pay for food give order to the employee pay the bill pay pay for the food and drinks pay for order collect utensils pay for order pick up order make payment keep my receipt take receipt wait for my order look at prices wait look at order number wait for order to be done wait for food to be ready wait for order wait for the ordered food expect order wait for food pick up condiments take your cup receive food take food to table receive tray with order get condiments get the food receive food at counter pick up food when ready get my order get food move to a table sit down wait for number to be called seat at a table sit down at table leave walk into the reasturant walk up to the counter walk into restaurant go to restaurant walk to the counter Figure 3: An extract from the graph computed for EATING IN A FAST FOOD RESTAURANT shallow dependency-style syntactic information. We identify the first potential verb of the phrase (according to the POS information provided by WordNet) as the predicate, the preceding noun (if any) as subject, and all following potential nouns as objects. (With this fairly crude tagging method, we also count nouns in prepositional phrases as “objects”.) On the basis of this pseudo-parse, we compute the similarity measure sim: sim = α · pred + β · subj + γ · obj where pred, subj, and obj are the similarity values for predicates, subjects and objects respectively, and α, β, γ are weights. If a constituent is not present in one of the phrases to compare, we set its weight to zero and redistribute it over the other weights. We fix the individual similarity scores pred, subj, and obj depending on the WordNet relation between the most similar WordNet senses of the respective lemmas (100 for synonyms, 0 for lemmas without any relation, and intermediate numbers for different kind of WordNet links). We optimized the values for pred, subj, and obj as well as the weights α, β and γ using a held-out development set of scenarios. Our experiments showed that in most cases, the verb contributes the largest part to the similarity (accordingly, α needs to be higher than the other factors). We achieved improved accuracy by distinguishing a class of verbs that contribute little to the meaning of the phrase (i.e., support verbs, verbs of movement, and the verb “get”), and assigning them a separate, lower α. 5.3 Building Temporal Script Graphs We can now compute a low-cost MSA for each scenario out of the ESDs. From this alignment, we extract a temporal script graph, in the following way. First, we construct an initial graph which has one node for each row of the MSA as in Fig. 2. 
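Before continuing with the graph construction, the similarity heuristic of Section 5.2 can be sketched as follows. The weight values, the renormalisation over missing constituents, the max over multiple objects, and the toy stand-ins for the WordNet-based lookups are our assumptions; scores are on the paper's 0-100 scale. In the alignment step of Section 5.1, the substitution cost cm can then be taken as a decreasing function of this similarity (e.g. 100 - sim).

```python
# Illustrative weights; the paper tunes these on a development set.
ALPHA, BETA, GAMMA = 0.6, 0.2, 0.2

def pseudo_parse(phrase, is_verb, is_noun):
    """Crude tagging as in Section 5.2: the first potential verb is the
    predicate, the preceding noun (if any) the subject, and all following
    potential nouns the objects. `is_verb`/`is_noun` stand in for the
    WordNet POS lookups."""
    tokens = phrase.lower().split()
    pred_idx = next((i for i, t in enumerate(tokens) if is_verb(t)), None)
    if pred_idx is None:
        return None, None, []
    subj = next((t for t in reversed(tokens[:pred_idx]) if is_noun(t)), None)
    objs = [t for t in tokens[pred_idx + 1:] if is_noun(t)]
    return tokens[pred_idx], subj, objs

def sim(phrase1, phrase2, word_sim, is_verb, is_noun):
    """sim = alpha*pred + beta*subj + gamma*obj; the weight of a constituent
    missing in either phrase is redistributed (here: by renormalising)."""
    p1, s1, o1 = pseudo_parse(phrase1, is_verb, is_noun)
    p2, s2, o2 = pseudo_parse(phrase2, is_verb, is_noun)
    parts = []                       # (weight, score) for present constituents
    if p1 and p2:
        parts.append((ALPHA, word_sim(p1, p2)))
    if s1 and s2:
        parts.append((BETA, word_sim(s1, s2)))
    if o1 and o2:
        parts.append((GAMMA, max(word_sim(a, b) for a in o1 for b in o2)))
    if not parts:
        return 0.0
    total = sum(w for w, _ in parts)
    return sum(w * s for w, s in parts) / total

# Toy stand-ins for the WordNet-based lexical similarity (0-100):
pairs = {frozenset(["exit", "leave"]): 100}
word_sim = lambda a, b: 100 if a == b else pairs.get(frozenset([a, b]), 0)
is_verb = lambda t: t in {"exit", "leave", "eat", "order", "pay"}
is_noun = lambda t: t in {"restaurant", "place", "food", "bill"}
print(sim("exit the place", "leave the restaurant",
          word_sim, is_verb, is_noun))   # 75.0: same verb sense, different objects
```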
We interpret each node of the graph as representing a single event in the script, and the phrases that are collected in the node as different descriptions of this event; that is, we claim that these phrases are paraphrases in the context of this scenario. We then add an edge (u, v) to the graph iff (1) u ̸= v, (2) there was at least one ESD in the original data in which some phrase in u directly preceded some phrase in v, and (3) if a single ESD contains a phrase from u and from v, the phrase from u directly precedes the one from v. In terms of the MSA, this means that if a phrase from u comes from the same column as a phrase from v, there are at most some gaps between them. This initial graph represents exactly the same information as the MSA, in a different notation. The graph is automatically post-processed in a second step to simplify it and eliminate noise that caused MSA errors. At first we prune spurious nodes which contain only one event description. Then we refine the graph by merging nodes whose elements should have been aligned in the first place but were missed by the MSA. We merge two nodes if they satisfy certain structural and semantic constraints. The semantic constraints check whether the event descriptions of the merged node would be sufficiently consistent according to the similarity measure from Section 5.2. To check whether we can merge two nodes u and v, we use an unsupervised clustering algorithm (Flake et al., 2004) to 983 first cluster the event descriptions in u and v separately. Then we combine the event descriptions from u and v and cluster the resulting set. If the union has more clusters than either u or v, we assume the nodes to be too dissimilar for merging. The structural constraints depend on the graph structure. We only merge two nodes u and v if their event descriptions come from different sequences and one of the following conditions holds: • u and v have the same parent; • u has only one parent, v is its only child; • v has only one child and is the only child of u; • all children of u (except for v) are also children of v. These structural constraints prevent the merging algorithm from introducing new temporal relations that are not supported by the input ESDs. We take the output of this post-processing step as the temporal script graph. An excerpt of the graph we obtain for our running example is shown in Fig. 3. One node created by the node merging step was the top left one, which combines one original node containing ‘walk into restaurant’ and another with ‘go to restaurant’. The graph mostly groups phrases together into event nodes quite well, although there are some exceptions, such as the ‘collect utensils’ node. Similarly, the temporal information in the graph is pretty accurate. But perhaps most importantly, our MSA-based algorithm manages to keep similar phrases like ‘wait in line’ and ‘wait for my order’ apart by exploiting the sequential structure of the input ESDs. 6 Evaluation We evaluated the two core aspects of our system: its ability to recognize descriptions of the same event (paraphrases) and the resulting temporal constraints it defines on the event descriptions (happens-before relation). We compare our approach to two baseline systems and show that our system outperforms both baselines and sometimes even comes close to our upper bound. 
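To make the initial graph construction of Section 5.3 concrete before turning to the evaluation, a minimal sketch is given below. The node-merging step, with its clustering-based semantic check and structural constraints, is omitted here, and the MSA encoding (rows in alignment order, columns as the original ESDs, gaps as None) is our choice of representation.

```python
def build_initial_graph(msa):
    """msa[row][col] is a phrase or None (gap); rows follow the alignment
    order and each column is one original ESD. Returns (nodes, edges): nodes
    maps a row index to its list of phrases, and edges is the set of
    precedence pairs (u, v) licensed by the three conditions of Section 5.3."""
    n_rows, n_cols = len(msa), len(msa[0])
    nodes = {r: [msa[r][c] for c in range(n_cols) if msa[r][c] is not None]
             for r in range(n_rows)}
    direct, violated = set(), set()
    for c in range(n_cols):
        filled = [r for r in range(n_rows) if msa[r][c] is not None]
        for i, u in enumerate(filled):
            for v in filled[i + 1:]:
                if v == filled[i + 1]:
                    direct.add((u, v))      # directly precedes in this ESD
                else:
                    violated.add((u, v))    # co-occur in an ESD, but not adjacently
    return nodes, direct - violated

def prune_singletons(nodes, edges):
    """Drop spurious nodes that contain only one event description."""
    keep = {r for r, phrases in nodes.items() if len(phrases) > 1}
    nodes = {r: p for r, p in nodes.items() if r in keep}
    edges = {(u, v) for (u, v) in edges if u in keep and v in keep}
    return nodes, edges

# Toy alignment of two ESDs in the style of Fig. 2:
msa = [[None,               "stand in line"],
       ["look at menu",     "look at menu board"],
       ["order at counter", "tell cashier your order"],
       ["pay at counter",   None]]
nodes, edges = prune_singletons(*build_initial_graph(msa))
print(nodes)   # rows 1 and 2 survive pruning
print(edges)   # {(1, 2)}
```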
6.1 Method We selected ten scenarios which we did not use for development purposes, five of them taken from the corpus described in Section 4, the other five from the OMICS corpus.2 The OMICS corpus is a freely available, web-collected corpus by the Open Mind Initiative (Singh et al., 2002). It contains several stories (≈scenarios) consisting of multiple ESDs. The corpus strongly resembles ours in language style and information provided, but is restricted to “indoor activities” and contains much more data than our collection (175 scenarios with more than 40 ESDs each). For each scenario, we created a paraphrase set out of 30 randomly selected pairs of event descriptions which the system classified as paraphrases and 30 completely random pairs. The happens-before set consisted of 30 pairs classified as happens-before, 30 random pairs and additionally all 60 pairs in reverse order. We added the reversed pairs to check whether the raters really prefer one direction or whether they accept both and were biased by the order of presentation. We presented each pair to 5 non-experts, all US residents, via Mechanical Turk. For the paraphrase set, an exemplary question we asked the rater looks as follows, instantiating the Scenario and the two descriptions to compare appropriately: Imagine two people, both telling a story about SCENARIO. Could the first one say event2 to describe the same part of the story that the second one describes with event1 ? For the happens-before task, the question template was the following: Imagine somebody telling a story about SCENARIO in which the events event1 and event2 occur. Would event1 normally happen before event2? We constructed a gold standard by a majority decision of the raters. An expert rater adjudicated the pairs with a 3:2 vote ratio. 6.2 Upper Bound and Baselines To show the contributions of the different system components, we implemented two baselines: Clustering Baseline: We employed an unsupervised clustering algorithm (Flake et al., 2004) and fed it all event descriptions of a scenario. We first created a similarity graph with one node per event description. Each pair of nodes is connected 2http://openmind.hri-us.com/ 984 SCENARIO PRECISION RECALL F-SCORE sys basecl baselev sys basecl baselev sys basecl baselev upper MTURK pay with credit card 0.52 0.43 0.50 0.84 0.89 0.11 0.64 0.58 • 0.17 0.60 eat in restaurant 0.70 0.42 0.75 0.88 1.00 0.25 0.78 • 0.59 • 0.38 • 0.92 iron clothes I 0.52 0.32 1.00 0.94 1.00 0.12 0.67 • 0.48 • 0.21 • 0.82 cook scrambled eggs 0.58 0.34 0.50 0.86 0.95 0.10 0.69 • 0.50 • 0.16 • 0.91 take a bus 0.65 0.42 0.40 0.87 1.00 0.09 0.74 • 0.59 • 0.14 • 0.88 OMICS answer the phone 0.93 0.45 0.70 0.85 1.00 0.21 0.89 • 0.71 • 0.33 0.79 buy from vending machine 0.59 0.43 0.59 0.83 1.00 0.54 0.69 0.60 0.57 0.80 iron clothes II 0.57 0.30 0.33 0.94 1.00 0.22 0.71 • 0.46 • 0.27 0.77 make coffee 0.50 0.27 0.56 0.94 1.00 0.31 0.65 • 0.42 ◦0.40 • 0.82 make omelette 0.75 0.54 0.67 0.92 0.96 0.23 0.83 • 0.69 • 0.34 0.85 AVERAGE 0.63 0.40 0.60 0.89 0.98 0.22 0.73 0.56 0.30 0.82 Figure 4: Results for paraphrasing task; significance of difference to sys: • : p ≤0.01, ◦: p ≤0.1 with a weighted edge; the weight reflects the semantic similarity of the nodes’ event descriptions as described in Section 5.2. To include all input information on inequality of events, we did not allow for edges between nodes containing two descriptions occurring together in one ESD. 
The underlying assumption here is that two different event descriptions of the same ESD always represent distinct events. The clustering algorithm uses a parameter which influences the cluster granularity, without determining the exact number of clusters beforehand. We optimized this parameter automatically for each scenario: The system picks the value that yields the optimal result with respect to density and distance of the clusters (Flake et al., 2004), i.e. the elements of each cluster are as similar as possible to each other, and as dissimilar as possible to the elements of all other clusters. The clustering baseline considers two phrases as paraphrases if they are in the same cluster. It claims a happens-before relation between phrases e and f if some phrase in e’s cluster precedes some phrase in f’s cluster in the original ESDs. With this baseline, we can show the contribution of MSA. Levenshtein Baseline: This system follows the same steps as our system, but using Levenshtein distance as the measure of semantic similarity for MSA and for node merging (cf. Section 5.3). This lets us measure the contribution of the more finegrained similarity function. We computed Levenshtein distance as the character-wise edit distance on the phrases, divided by the phrases’ character length so as to get comparable values for shorter and longer phrases. The gap costs for MSA with Levenshtein were optimized on our development set so as to produce the best possible alignment. Upper bound: We also compared our system to a human-performance upper bound. Because no single annotator rated all pairs of ESDs, we constructed a “virtual annotator” as a point of comparison, by randomly selecting one of the human annotations for each pair. 6.3 Results We calculated precision, recall, and f-score for our system, the baselines, and the upper bound as follows, with allsystem being the number of pairs labelled as paraphrase or happens-before, allgold as the respective number of pairs in the gold standard and correct as the number of pairs labeled correctly by the system. precision = correct allsystem recall = correct allgold f-score = 2 ∗precision ∗recall precision + recall The tables in Fig. 4 and 5 show the results of our system and the reference values; Fig. 4 describes the paraphrasing task and Fig. 5 the happensbefore task. The upper half of the tables describes the test sets from our own corpus, the remainder refers to OMICS data. The columns labelled sys contain the results of our system, basecl describes the clustering baseline and baselev the Levenshtein baseline. The f-score for the upper bound is in the column upper. For the f-score values, we calculated the significance for the difference between our system and the baselines as well as the upper bound, using a resampling test (Edgington, 1986). The values marked with • differ from our system significantly at a level of p ≤0.01, ◦marks a level of p ≤0.1. The remaining values are not significant with p ≤0.1. 
(For the average values, no sig985 SCENARIO PRECISION RECALL F-SCORE sys basecl baselev sys basecl baselev sys basecl baselev upper MTURK pay with credit card 0.86 0.49 0.65 0.84 0.74 0.45 0.85 • 0.59 • 0.53 0.92 eat in restaurant 0.78 0.48 0.68 0.84 0.98 0.75 0.81 • 0.64 0.71 • 0.95 iron clothes I 0.78 0.54 0.75 0.72 0.95 0.53 0.75 0.69 • 0.62 • 0.92 cook scrambled eggs 0.67 0.54 0.55 0.64 0.98 0.69 0.66 0.70 0.61 • 0.88 take a bus 0.80 0.49 0.68 0.80 1.00 0.37 0.80 • 0.66 • 0.48 • 0.96 OMICS answer the phone 0.83 0.48 0.79 0.86 1.00 0.96 0.84 • 0.64 0.87 0.90 buy from vending machine 0.84 0.51 0.69 0.85 0.90 0.75 0.84 • 0.66 ◦0.71 0.83 iron clothes II 0.78 0.48 0.75 0.80 0.96 0.66 0.79 • 0.64 0.70 0.84 make coffee 0.70 0.55 0.50 0.78 1.00 0.55 0.74 0.71 ◦0.53 ◦0.83 make omelette 0.70 0.55 0.79 0.83 0.93 0.82 0.76 ◦0.69 0.81 • 0.92 AVERAGE 0.77 0.51 0.68 0.80 0.95 0.65 0.78 0.66 0.66 0.90 Figure 5: Results for happens-before task; significance of difference to sys: • : p ≤0.01, ◦: p ≤0.1 nificance is calculated because this does not make sense for scenario-wise evaluation.) Paraphrase task: Our system outperforms both baselines clearly, reaching significantly higher f-scores in 17 of 20 cases. Moreover, for five scenarios, the upper bound does not differ significantly from our system. For judging the precision, consider that the test set is slightly biased: Labeling all pairs with the majority category (no paraphrase) would result in a precision of 0.64. However, recall and f-score for this trivial lower bound would be 0. The only scenario in which our system doesn’t score very well is BUY FROM A VENDING MACHINE, where the upper bound is not significantly better either. The clustering system, which can’t exploit the sequential information from the ESDs, has trouble distinguishing semantically similar phrases (high recall, low precision). The Levenshtein similarity measure, on the other hand, is too restrictive and thus results in comparatively high precisions, but very low recall. Happens-before task: In most cases, and on average, our system is superior to both baselines. Where a baseline system performs better than ours, the differences are not significant. In four cases, our system does not differ significantly from the upper bound. Regarding precision, our system outperforms both baselines in all scenarios except one (MAKE OMELETTE). Again the clustering baseline is not fine-grained enough and suffers from poor precision, only slightly better than the majority baseline. The Levenshtein baseline gets mostly poor recall, except for ANSWER THE PHONE: to describe this scenario, people used very similar wording. In such a scenario, adding lexical knowledge to the sequential information makes less of a difference. On average, the baselines do much better here than for the paraphrase task. This is because once a system decides on paraphrase clusters that are essentially correct, it can retrieve correct information about the temporal order directly from the original ESDs. Both tables illustrate that the task complexity strongly depends on the scenario: Scripts that allow for a lot of variation with respect to ordering (such as COOK SCRAMBLED EGGS) are particularly challenging for our system. This is due to the fact that our current system can neither represent nor find out that two events can happen in arbitrary order (e.g., ‘take out pan’ and ‘take out bowl’). 
One striking difference between the performance of our system on the OMICS data and on our own dataset is the relation to the upper bound: On our own data, the upper bound is almost always significantly better than our system, whereas significant differences are rare on OMICS. This difference bears further analysis; we speculate it might be caused either by the increased amount of training data in OMICS or by differences in language (e.g., fewer anaphoric references). 7 Conclusion We conclude with a summary of this paper and some discussion along with hints to future work in the last part. 7.1 Summary In this paper, we have described a novel approach to the unsupervised learning of temporal script information. Our approach differs from previous work in that we collect training data by directly asking non-expert users to describe a scenario, and 986 then apply a Multiple Sequence Alignment algorithm to extract scenario-specific paraphrase and temporal ordering information. We showed that our system outperforms two baselines and sometimes approaches human-level performance, especially because it can exploit the sequential structure of the script descriptions to separate clusters of semantically similar events. 7.2 Discussion and Future Work We believe that we can scale this approach to model a large numbers of scenarios representing implicit shared knowledge. To realize this goal, we are going to automatize several processing steps that were done manually for the current study. We will restrict the user input to lexicon words to avoid manual orthography correction. Further, we will implement some heuristics to filter unusable instances by matching them with the remaining data. As far as the data collection is concerned, we plan to replace the web form with a browser game, following the example of von Ahn and Dabbish (2008). This game will feature an algorithm that can generate new candidate scenarios without any supervision, for instance by identifying suitable sub-events of collected scripts (e.g. inducing data collection for PAY as sub-event sequence of GO SHOPPING) On the technical side, we intend to address the question of detecting participants of the scripts and integrating them into the graphs, Further, we plan to move on to more elaborate data structures than our current TSGs, and then identify and represent script elements like optional events, alternative events for the same step, and events that can occur in arbitrary order. Because our approach gathers information from volunteers on the Web, it is limited by the knowledge of these volunteers. We expect it will perform best for general commonsense knowledge; culture-specific knowledge or domain-specific expert knowledge will be hard for it to learn. This limitation could be addressed by targeting specific groups of online users, or by complementing our approach with corpus-based methods, which might perform well exactly where ours does not. Acknowledgements We want to thank Dustin Smith for the OMICS data, Alexis Palmer for her support with Amazon Mechanical Turk, Nils Bendfeldt for the creation of all web forms and Ines Rehbein for her effort with several parsing experiments. In particular, we thank the anonymous reviewers for their helpful comments. – This work was funded by the Cluster of Excellence “Multimodal Computing and Interaction” in the German Excellence Initiative. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The berkeley framenet project. 
In Proceedings of the 17th international conference on Computational linguistics, pages 86–90, Morristown, NJ, USA. Association for Computational Linguistics. Avron Barr and Edward Feigenbaum. 1981. The Handbook of Artificial Intelligence, Volume 1. William Kaufman Inc., Los Altos, CA. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of HLT-NAACL 2003. Jon Chamberlain, Massimo Poesio, and Udo Kruschwitz. 2009. A demonstration of human computation using the phrase detectives annotation game. In KDD Workshop on Human Computation. ACM. Nathanael Chambers and Dan Jurafsky. 2008a. Jointly combining implicit constraints improves temporal ordering. In Proceedings of EMNLP 2008. Nathanael Chambers and Dan Jurafsky. 2008b. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of ACL-IJCNLP 2009. Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of ACL-07: Interactive Poster and Demonstration Sessions. Richard Edward Cullingford. 1977. Script application: computer understanding of newspaper stories. Ph.D. thesis, Yale University, New Haven, CT, USA. Richard Durbin, Sean Eddy, Anders Krogh, and Graeme Mitchison. 1998. Biological Sequence Analysis. Cambridge University Press. Eugene S Edgington. 1986. Randomization tests. Marcel Dekker, Inc., New York, NY, USA. Gary W. Flake, Robert E. Tarjan, and Kostas Tsioutsiouliklis. 2004. Graph clustering and minimum cut trees. Internet Mathematics, 1(4). Andrew S. Gordon. 2001. Browsing image collections with representations of common-sense activities. JASIST, 52(11). 987 Desmond G. Higgins and Paul M. Sharp. 1988. Clustal: a package for performing multiple sequence alignment on a microcomputer. Gene, 73(1). Dominic R. Jones and Cynthia A. Thompson. 2003. Identifying events using similarity and context. In Proceedings of CoNNL-2003. Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In COLING/ACL-2006. Mehdi Manshadi, Reid Swanson, and Andrew S. Gordon. 2008. Learning a probabilistic model of event sequences from internet weblog stories. In Proceedings of the 21st FLAIRS Conference. Michael McTear. 1987. The Articulate Computer. Blackwell Publishers, Inc., Cambridge, MA, USA. Risto Miikkulainen. 1995. Script-based inference and memory retrieval in subsymbolic story processing. Applied Intelligence, 5(2), 04. Raymond J. Mooney. 1990. Learning plan schemata from observation: Explanation-based learning for plan recognition. Cognitive Science, 14(4). Erik T. Mueller. 1998. Natural Language Processing with Thought Treasure. Signiform. Erik T. Mueller. 2004. Understanding script-based stories using commonsense reasoning. Cognitive Systems Research, 5(4). Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3), March. Lisa F. Rau, Paul S. Jacobs, and Uri Zernik. 1989. Information extraction and text summarization using linguistic knowledge acquisition. Information Processing and Management, 25(4):419 – 428. Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding. Lawrence Erlbaum, Hillsdale, NJ. Push Singh, Thomas Lin, Erik T. 
Mueller, Grace Lim, Travell Perkins, and Wan L. Zhu. 2002. Open mind common sense: Knowledge acquisition from the general public. In On the Move to Meaningful Internet Systems - DOA, CoopIS and ODBASE 2002, London, UK. Springer-Verlag. Dustin Smith and Kenneth C. Arnold. 2009. Learning hierarchical plans by reading simple english narratives. In Proceedings of the Commonsense Workshop at IUI-09. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP 2008. Reid Swanson and Andrew S. Gordon. 2008. Say anything: A massively collaborative open domain story writing companion. In Proceedings of ICIDS 2008. Luis von Ahn and Laura Dabbish. 2008. Designing games with a purpose. Commun. ACM, 51(8). 988
2010
100
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 989–998, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Starting From Scratch in Semantic Role Labeling Michael Connor University of Illinois [email protected] Yael Gertner University of Illinois [email protected] Cynthia Fisher University of Illinois [email protected] Dan Roth University of Illinois [email protected] Abstract A fundamental step in sentence comprehension involves assigning semantic roles to sentence constituents. To accomplish this, the listener must parse the sentence, find constituents that are candidate arguments, and assign semantic roles to those constituents. Each step depends on prior lexical and syntactic knowledge. Where do children learning their first languages begin in solving this problem? In this paper we focus on the parsing and argumentidentification steps that precede Semantic Role Labeling (SRL) training. We combine a simplified SRL with an unsupervised HMM part of speech tagger, and experiment with psycholinguisticallymotivated ways to label clusters resulting from the HMM so that they can be used to parse input for the SRL system. The results show that proposed shallow representations of sentence structure are robust to reductions in parsing accuracy, and that the contribution of alternative representations of sentence structure to successful semantic role labeling varies with the integrity of the parsing and argumentidentification stages. 1 Introduction In this paper we present experiments with an automatic system for semantic role labeling (SRL) that is designed to model aspects of human language acquisition. This simplified SRL system is inspired by the syntactic bootstrapping theory, and by an account of syntactic bootstrapping known as ’structure-mapping’ (Fisher, 1996; Gillette et al., 1999; Lidz et al., 2003). Syntactic bootstrapping theory proposes that young children use their very partial knowledge of syntax to guide sentence comprehension. The structure-mapping account makes three key assumptions: First, sentence comprehension is grounded by the acquisition of an initial set of concrete nouns. Nouns are arguably less dependent on prior linguistic knowledge for their acquisition than are verbs; thus children are assumed to be able to identify the referents of some nouns via cross-situational observation (Gillette et al., 1999). Second, these nouns, once identified, yield a skeletal sentence structure. Children treat each noun as a candidate argument, and thus interpret the number of nouns in the sentence as a cue to its semantic predicate-argument structure (Fisher, 1996). Third, children represent sentences in an abstract format that permits generalization to new verbs (Gertner et al., 2006). The structure-mapping account of early syntactic bootstrapping makes strong predictions, including predictions of tell-tale errors. In the sentence “Ellen and John laughed”, an intransitive verb appears with two nouns. If young children rely on representations of sentences as simple as an ordered set of nouns, then they should have trouble distinguishing such sentences from transitive sentences. Experimental evidence suggests that they do: 21-month-olds mistakenly interpreted word order in sentences such as “The girl and the boy kradded” as conveying agent-patient roles (Gertner and Fisher, 2006). 
Previous computational experiments with a system for automatic semantic role labeling (BabySRL: (Connor et al., 2008)) showed that it is possible to learn to assign basic semantic roles based on the shallow sentence representations proposed by the structure-mapping view. Furthermore, these simple structural features were robust to drastic reductions in the integrity of the semantic-role feedback (Connor et al., 2009). These experiments showed that representations of sentence structure as simple as ‘first of two nouns’ are useful, but the experiments relied on perfect 989 knowledge of arguments and predicates as a start to classification. Perfect built-in parsing finesses two problems facing the human learner. The first problem involves classifying words by part-of-speech. Proposed solutions to this problem in the NLP and human language acquisition literatures focus on distributional learning as a key data source (e.g., (Mintz, 2003; Johnson, 2007)). Importantly, infants are good at learning distributional patterns (Gomez and Gerken, 1999; Saffran et al., 1996). Here we use a fairly standard Hidden Markov Model (HMM) to generate clusters of words that occur in similar distributional contexts in a corpus of input sentences. The second problem facing the learner is more contentious: Having identified clusters of distributionally-similar words, how do children figure out what role these clusters of words should play in a sentence interpretation system? Some clusters contain nouns, which are candidate arguments; others contain verbs, which take arguments. How is the child to know which are which? In order to use the output of the HMM tagger to process sentences for input to an SRL model, we must find a way to automatically label the clusters. Our strategies for automatic argument and predicate identification, spelled out below, reflect core claims of the structure-mapping theory: (1) The meanings of some concrete nouns can be learned without prior linguistic knowledge; these concrete nouns are assumed based on their meanings to be possible arguments; (2) verbs are identified, not primarily by learning their meanings via observation, but rather by learning about their syntactic argument-taking behavior in sentences. By using the HMM part-of-speech tagger in this way, we can ask how the simple structural features that we propose children start with stand up to reductions in parsing accuracy. In doing so, we move to a parser derived from a particular theoretical account of how the human learner might classify words, and link them into a system for sentence comprehension. 2 Model We model language learning as a Semantic Role Labeling (SRL) task (Carreras and M`arquez, 2004). This allows us to ask whether a learner, equipped with particular theoretically-motivated representations of the input, can learn to understand sentences at the level of who did what to whom. The architecture of our system is similar to a previous approach to modeling early language acquisition (Connor et al., 2009), which is itself based on the standard architecture of a full SRL system (e.g. (Punyakanok et al., 2008)). This basic approach follows a multi-stage pipeline, with each stage feeding in to the next. The stages are: (1) Parsing the sentence, (2) Identifying potential predicates and arguments based on the parse, (3) Classifying role labels for each potential argument relative to a predicate, (4) Applying constraints to find the best labeling of arguments for a sentence. 
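One way to picture this pipeline is as four chained stages. The sketch below is schematic only (the function names and signatures are ours, not the authors' code); later sections describe concrete versions of the individual stages.

```python
def parse(sentence):
    """Stage 1: unsupervised parse -- assign each word an HMM state."""
    raise NotImplementedError

def identify_args_and_pred(sentence, states):
    """Stage 2: pick out candidate argument words and a candidate predicate."""
    raise NotImplementedError

def classify_roles(sentence, states, arg_indices, pred_index):
    """Stage 3: score a role label (A0, A1, ...) for each candidate argument."""
    raise NotImplementedError

def apply_constraints(role_scores):
    """Stage 4: choose the best consistent labeling for the whole sentence."""
    raise NotImplementedError

def srl_pipeline(sentence):
    states = parse(sentence)
    arg_indices, pred_index = identify_args_and_pred(sentence, states)
    role_scores = classify_roles(sentence, states, arg_indices, pred_index)
    return apply_constraints(role_scores)
```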
In this work we attempt to limit the knowledge available at each stage to the automatic output of the previous stage, constrained by knowledge that we argue is available to children in the early stages of language learning. In the parsing stage we use an unsupervised parser based on Hidden Markov Models (HMM), modeling a simple ‘predict the next word’ parser. Next the argument identification stage identifies HMM states that correspond to possible arguments and predicates. The candidate arguments and predicates identified in each input sentence are passed to an SRL classifier that uses simple abstract features based on the number and order of arguments to learn to assign semantic roles. As input to our learner we use samples of natural child directed speech (CDS) from the CHILDES corpora (MacWhinney, 2000). During initial unsupervised parsing we experiment with incorporating knowledge through a combination of statistical priors favoring a skewed distribution of words into classes, and an initial hard clustering of the vocabulary into function and content words. The argument identifier uses a small set of frequent nouns to seed argument states, relying on the assumptions that some concrete nouns can be learned as a prerequisite to sentence interpretation, and are interpreted as candidate arguments. The SRL classifier starts with noisy largely unsupervised argument identification, and receives feedback based on annotation in the PropBank style; in training, each word identified as an argument receives the true role label of the phrase that word is part of. This represents the assumption that learning to interpret sentences is naturally supervised by the fit of the learner’s predicted meaning with the referential context. The provision 990 of perfect ‘gold-standard’ feedback over-estimates the real child’s access to this supervision, but allows us to investigate the consequences of noisy argument identification for SRL performance. We show that even with imperfect parsing, a learner can identify useful abstract patterns for sentence interpretation. Our ultimate goal is to ‘close the loop’ of this system, by using learning in the SRL system to improve the initial unsupervised parse and argument identification. The training data were samples of parental speech to three children (Adam, Eve, and Sarah; (Brown, 1973)), available via CHILDES. The SRL training corpus consists of parental utterances in samples Adam 01-20 (child age 2;3 - 3;1), Eve 01-18 (1;6 - 2;2), and Sarah 01-83 (2;3 - 3;11). All verb-containing utterances without symbols indicating disfluencies were automatically parsed with the Charniak parser (Charniak, 1997), annotated using an existing SRL system (Punyakanok et al., 2008) and then errors were hand-corrected. The final annotated sample contains about 16,730 propositions, with 32,205 arguments. 3 Unsupervised Parsing As a first step of processing, we feed the learner large amounts of unlabeled text and expect it to learn some structure over this data that will facilitate future processing. The source of this text is child directed speech collected from various projects in the CHILDES repository1. We removed sentences with fewer than three words or markers of disfluency. In the end we used 160 thousand sentences from this set, totaling over 1 million tokens and 10 thousand unique words. The goal of the parsing stage is to give the learner a representation permitting it to generalize over word forms. 
The exact parse we are after is a distributional and context-sensitive clustering of words based on sequential processing. We chose an HMM based parser for this since, in essence the HMM yields an unsupervised POS classifier, but without names for states. An HMM trained with expectation maximization (EM) is analogous to a simple process of predicting the next word in a stream and correcting connections accordingly for each sentence. 1We used parts of the Bloom (Bloom, 1970; Bloom, 1973), Brent (Brent and Siskind, 2001), Brown (Brown, 1973), Clark (Clark, 1978), Cornell, MacWhinney (MacWhinney, 2000), Post (Demetras et al., 1986) and Providence (Demuth et al., 2006) collections. With HMM we can also easily incorporate additional knowledge during parameter estimation. The first (and simplest) parser we used was an HMM trained using EM with 80 hidden states. The number of hidden states was made relatively large to increase the likelihood of clusters corresponding to a single part of speech, while preserving some degree of generalization. Johnson (2007) observed that EM tends to create word clusters of uniform size, which does not reflect the way words cluster into parts of speech in natural languages. The addition of priors biasing the system toward a skewed allocation of words to classes can help. The second parser was an 80 state HMM trained with Variational Bayes EM (VB) incorporating Dirichlet priors (Beal, 2003).2 In the third and fourth parsers we experiment with enriching the HMM POS-tagger with other psycholinguistically plausible knowledge. Words of different grammatical categories differ in their phonological as well as in their distributional properties (e.g., (Kelly, 1992; Monaghan et al., 2005; Shi et al., 1998)); combining phonological and distributional information improves the clustering of words into grammatical categories. The phonological difference between content and function words is particularly striking (Shi et al., 1998). Even newborns can categorically distinguish content and function words, based on the phonological difference between the two classes (Shi et al., 1999). Human learners may treat content and function words as distinct classes from the start. To implement this division into function and content words3, we start with a list of function word POS tags4 and then find words that appear predominantly with these POS tags, using tagged WSJ data (Marcus et al., 1993). We allocated a fixed number of states for these function words, and left the rest of the states for the rest of the words. This amounts to initializing the emission matrix for the HMM with a block structure; words from one class cannot be emitted by states allocated to the other class. This trick has been used before in speech recognition work (Rabiner, 2We tuned the prior using the same set of 8 value pairs suggested by Gao and Johnson (2008), using a held out set of POS-tagged CDS to evaluate final performance. 3We also include a small third class for punctuation, which is discarded. 4TO,IN,EX,POS,WDT,PDT,WRB,MD,CC,DT,RP,UH 991 1989), and requires far fewer resources than the full tagging dictionary that is often used to intelligently initialize an unsupervised POS classifier (e.g. (Brill, 1997; Toutanova and Johnson, 2007; Ravi and Knight, 2009)). Because the function and content word preclustering preceded parameter estimation, it can be combined with either EM or VB learning. 
Although this initial split forces sparsity on the emission matrix and allows more uniform sized clusters, Dirichlet priors may still help, if word clusters within the function or content word subsets vary in size and frequency. The third parser was an 80 state HMM trained with EM estimation, with 30 states pre-allocated to function words; the fourth parser was the same except that it was trained with VB EM. 3.1 Parser Evaluation 3.2 3.4 3.6 3.8 4 4.2 4.4 4.6 4.8 5 5.2 100 1000 10000 100000 1e+06 Variation of Information Training Sentences EM VB EM+Funct VB+Funct Figure 1: Unsupervised Part of Speech results, matching states to gold POS labels. All systems use 80 states, and comparison is to gold labeled CDS text, which makes up a subset of the HMM training data. Variation of Information is an information-theoretic measure summing mutual information between tags and states, proposed by (Meil˘a, 2002), and first used for Unsupervised Part of Speech in (Goldwater and Griffiths, 2007). Smaller numbers are better, indicating less information lost in moving from the HMM states to the gold POS tags. Note that incorporating function word preclustering allows both EM and VB algorithms to achieve the same performance with an order of magnitude fewer sentences. We first evaluate these parsers (the first stage of our SRL system) on unsupervised POS tagging. Figure 1 shows the performance of the four systems using Variation of Information to measure match between gold states and unsupervised parsers as we vary the amount of text they receive. Each point on the graph represents the average result over 10 runs of the HMM with different samples of the unlabeled CDS. Another common measure for unsupervised POS (when there are more states than tags) is a many to one greedy mapping of states to tags. It is known that EM gives a better many to one score than VB trained HMM (Johnson, 2007), and likewise we see that here: with all data EM gives 0.75 matching, VB gives 0.74, while both EM+Funct and VB+Funct reach 0.80. Adding the function/content word split to the HMM structure improves both EM and VB estimation in terms of both tag matching accuracy and information. However, these measures look at the parser only in isolation. What is more important to us is how useful the provided word clusters are for future semantic processing. In the next sections we use the outputs of our four parsers to identify arguments and predicates. 4 Argument Identification The unsupervised parser provides a state label for each word in each sentence; the goal of the argument identification stage is to use these states to label words as potential arguments, predicates or neither. As described in the introduction, core premises of the structure-mapping account offer routes whereby we could label some HMM states as argument or predicate states. The structure-mapping account holds that sentence comprehension is grounded in the learning of an initial set of nouns. Children are assumed to identify the referents of some concrete nouns via cross-situational learning (Gillette et al., 1999; Smith and Yu, 2008). Children then assume, by virtue of the meanings of these nouns, that they are candidate arguments. This is a simple form of semantic bootstrapping, requiring the use of built-in links between semantics and syntax to identify the grammatical type of known words (Pinker, 1984). 
We use a small set of known nouns to transform unlabeled word clusters into candidate arguments for the SRL: HMM states that are dominated by known names for animate or inanimate objects are assumed to be argument states. Given text parsed by the HMM parser and a list of known nouns, the argument identifier proceeds in multiple steps as illustrated in figure 2. The first stage identifies as argument states those states that appear at least half the time in the training data with known nouns. This use of a seed list and distributional clustering is similar to Prototype Driven Learning (Haghighi and Klein, 2006), except we are only providing information on one specific class. 992 Algorithm ARGUMENT STATE IDENTIFICATION INPUT: Parsed Text T = list of (word, state) pairs Set of concrete nouns N OUTPUT: Set of argument states A Argument count likelihood ArgLike(s, c) Identify Argument States Let freq(s) = |{(∗, s) ∈T }| Let freqN(s) = |{(w, s) ∈T |w ∈N}| For each s: If freqN(s) ≥freq(s)/2 Add s to A Collect Per Sentence Argument Count statistics For each Sentence S ∈T : Let Arg(S) = |{(w, s) ∈S|s ∈A}| For (w, s) ∈S s.t. s /∈A Increment ArgCount(s, Arg(S)) For each s /∈A, and argument count c: ArgLike(s, c) = ArgCount(s, c)/freq(s) (a) Argument Identification Algorithm PREDICATE STATE IDENTIFICATION INPUT: Parsed Sentence S = list of (word, state) pairs Set of argument states A Sentence Argument Count ArgLike(s, c) OUTPUT: Most likely predicate (v, sv) Find Number of arguments in sentence Let Arg(S) = |{(w, s) ∈S|s ∈A}| Find Non-argument state in sentence most likely to appear with this number of arguments (v, sv) = argmax(w,s)∈SArgLike(s, Arg(S)) (b) Predicate Identification Figure 2: Argument identification algorithm. This is a two stage process: argument state identification based on statistics collected over entire text and per sentence predicate identification. As a list of known nouns we collected all those nouns that appear three times or more in the child directed speech training data and judged to be either animate or inanimate nouns. The full set of 365 nouns covers over 93% of noun occurences in our data. In upcoming sections we experiment with varying the number of seed nouns used from this set, selecting the most frequent set of nouns. Reflecting the spoken nature of the child directed speech, the most frequent nouns are pronouns, but beyond the top 10 we see nouns naming people (‘daddy’, ‘ursula’) and object nouns (‘chair’, ‘lunch’). What about verbs? A typical SRL model identifies candidate arguments and tries to assign roles to them relative to each verb in the sentence. In principle one might suppose that children learn the meanings of verbs via cross-situational observation just as they learn the meanings of concrete nouns. But identifying the meanings of verbs is much more troublesome. Verbs’ meanings are abstract, therefore harder to identify based on scene information alone (Gillette et al., 1999). As a result, early vocabularies are dominated by nouns (Gentner, 2006). On the structure-mapping account, learners identify verbs, and begin to determine their meanings, based on sentence structure cues. Verbs take noun arguments; thus, learners could learn which words are verbs by detecting each verb’s syntactic argument-taking behavior. 
Experimental evidence provides some support for this procedure: 2-year-olds keep track of the syntactic structures in which a new verb appears, even without a concurrent scene that provides cues to the verb's semantic content (Yuan and Fisher, 2009). We implement this behavior by identifying as predicate states the HMM states that appear commonly with a particular number of previously identified arguments. First, we collect statistics over the entire HMM training corpus regarding how many arguments are identified per sentence, and which states that are not identified as argument states appear with each number of arguments. Next, for each parsed sentence that serves as SRL input, the algorithm chooses as the most likely predicate the word whose state is most likely to appear with the number of arguments found in the current input sentence. Note that this algorithm assumes exactly one predicate per sentence. Implicitly, the argument count likelihood divides predicate states up into transitive and intransitive predicates based on appearances in the simple sentences of CDS.

4.1 Argument Identification Evaluation

Figure 3 shows argument and predicate identification accuracy for each of the four parsers when provided with different numbers of known nouns. The known word list is very skewed with its most frequent members dominating the total noun occurrences in the data. The ten most frequent words (you, it, I, what, he, me, ya, she, we, her) account for 60% of the total noun occurrences. We achieve the different occurrence coverage numbers of figure 3 by using the most frequent N words from the list that give the specific coverage (N of 5, 10, 30, 83 and 227 covers 50%, 60%, 70%, 80% and 90% of all noun occurrences, respectively). Pronouns refer to people or objects, but are abstract in that they can refer to any person or object. The inclusion of pronouns in our list of known nouns represents the assumption that toddlers have already identified pronouns as referential terms. Even 19-month-olds assign appropriately different interpretations to novel verbs presented in simple transitive versus intransitive sentences with pronoun arguments ("He's kradding him!" vs. "He's kradding!"; (Yuan et al., 2007)). In ongoing work we experiment with other methods of identifying seed nouns.

[Figure 3 (plot): F1 (y-axis, roughly 0.2 to 0.8) against the proportion of noun occurrences covered (x-axis, 0.45 to 0.95), with curves for EM, VB, EM+Funct and VB+Funct.]
Figure 3: Effect of number of concrete nouns for seeding argument identification with various unsupervised parsers. Argument identification accuracy is computed against true argument boundaries from hand-labeled data. The upper set of results shows primary argument (A0-4) identification F1, and the bottom lines show predicate identification F1.

Two groups of curves appear in figure 3: the upper group shows the primary argument identification accuracy and the bottom group shows the predicate identification accuracy. We evaluate against gold-tagged data with true argument and predicate boundaries. The primary argument (A0-4) identification accuracy is the F1 value, with precision calculated as the proportion of identified arguments that appear as part of a true argument, and recall as the proportion of true arguments that have some state identified as an argument. F1 is calculated similarly for predicate identification, as one state per sentence is identified as the predicate.
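Before turning to the results, the per-sentence predicate choice of Figure 2(b) can be sketched in the same style; this is a minimal illustration with hypothetical names, assuming the argument states and ArgLike statistics produced by the previous sketch:

```python
def identify_predicate(sentence, argument_states, arg_like):
    """sentence: list of (word, hmm_state) pairs.
    Returns the (word, state) pair whose state is most likely to occur with
    the number of arguments found in this sentence; exactly one predicate
    per sentence is assumed, as in the text."""
    n_args = sum(1 for _, s in sentence if s in argument_states)
    candidates = [(w, s) for w, s in sentence if s not in argument_states]
    if not candidates:
        return None
    return max(candidates, key=lambda ws: arg_like.get((ws[1], n_args), 0.0))

# Hypothetical usage with the outputs of the previous sketch:
# arg_states, arg_like = identify_argument_states(parsed_text, known_nouns)
# predicate = identify_predicate(parsed_sentence, arg_states, arg_like)
```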
As shown in figure 3, argument identification F1 is higher than predicate identification (which is to be expected, given that predicate identification depends on accurate arguments), and as we add more seed nouns the argument identification improves. Surprisingly, despite the clear differences in unsupervised POS performance seen in figure 1, the different parsers do not yield very different argument and predicate identification. As we will see in the next section, however, when the arguments identified in this step are used to train SRL classifier, distinctions between parsers reappear, suggesting that argument identification F1 masks systematic patterns in the errors. 5 Testing SRL Performance Finally, we used the results of the previous parsing and argument-identification stages in training a simplified SRL classifier (BabySRL), equipped with sets of features derived from the structuremapping account. For argument classification we used a linear classifier trained with a regularized perceptron update rule (Grove and Roth, 2001). In the results reported below the BabySRL did not use sentence-level inference for the final classification, every identified argument is classified independently; thus multiple nouns can have the same role. In what follows, we compare the performance of the BabySRL across the four parsers. We evaluated SRL performance by testing the BabySRL with constructed sentences like those used for the experiments with children described in the Introduction. All test sentences contained a novel verb, to test the model’s ability to generalize. We examine the performance of four versions of the BabySRL, varying in the features used to represent sentences. All four versions include lexical features consisting of the target argument and predicate (as identified in the previous steps). The baseline model has only these lexical features (Lexical). Following Connor et al. (2008; 2009), the key feature type we propose is noun pattern features (NounPat). Noun pattern features indicate how many nouns there are in the sentence and which noun the target is. For example, in “You dropped it!”, ‘you’ has a feature active indicating that it is the first of two nouns, while ‘it’ has a feature active indicating that it is the second of two nouns. We compared the behavior of noun pattern features to another simple representation of word order, position relative to the verb (VerbPos). In the same example sentence, ‘you’ has a feature active indicating that it is pre-verbal; for ‘it’ a feature is active indicating that it is post-verbal. A fourth version of the BabySRL (Combined) used both NounPat and VerbPos features. We structured our tests of the BabySRL to test the predictions of the structure-mapping account. (1) NounPat features will improve the SRL’s ability to interpret simple transitive test sentences containing two nouns and a novel verb, relative 994 to a lexical baseline. Like 21-month-old children (Gertner et al., 2006), the SRL should interpret the first noun as an agent and the second as a patient. (2) Because NounPat features represent word order solely in terms of a sequence of nouns, an SRL equipped with these features will make the errors predicted by the structure-mapping account and documented in children (Gertner and Fisher, 2006). (3) NounPat features permit the SRL to assign different roles to the subjects of transitive and intransitive sentences that differ in their number of nouns. 
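The two structural feature types being tested can be made concrete with a short sketch; the function names and feature strings below are mine, and the word indices are assumed to come from the argument and predicate identification stages:

```python
def noun_pattern_features(arg_indices, target_idx):
    """NounPat: how many identified nouns the sentence has and which of them
    the target is, e.g. '1_of_2' for 'you' in "You dropped it!"."""
    n = len(arg_indices)
    position = sorted(arg_indices).index(target_idx) + 1
    return [f"nounpat:{position}_of_{n}"]

def verb_position_features(verb_idx, target_idx):
    """VerbPos: is the target argument before or after the identified verb?"""
    return ["verbpos:pre" if target_idx < verb_idx else "verbpos:post"]

# For "You dropped it!" with arguments at positions 0 and 2 and the verb at 1:
# noun_pattern_features([0, 2], 0) -> ['nounpat:1_of_2']
# verb_position_features(1, 2)     -> ['verbpos:post']
```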
This effect follows from the nature of the NounPat features: These features partition the training data based on the number of nouns, and therefore learn separately the likely roles of the ‘1st of 1 noun’ and the ‘1st of 2 nouns’. These patterns contrast with the behavior of the VerbPos features: When the BabySRL was trained with perfect parsing, VerbPos promoted agentpatient interpretations of transitive test sentences, and did so even more successfully than NounPat features did, reflecting the usefulness of position relative to the verb in understanding English sentences. In addition, VerbPos features eliminated the errors with two-noun intransitive sentences. Given test sentences such as ‘You and Mommy krad’, VerbPos features represented both nouns as pre-verbal, and therefore identified both as likely agents. However, VerbPos features did not help the SRL assign different roles to the subjects of simple transitive and intransitive sentences: ‘Mommy’ in ‘Mommy krads you’ and ’Mommy krads’ are both represented simply as pre-verbal. To test the system’s predictions on transitive and intransitive two noun sentences, we constructed two test sentence templates: ‘A krads B’ and ‘A and B krad’, where A and B were replaced with familiar animate nouns. The animate nouns were selected from all three children’s data in the training set and paired together in the templates such that all pairs are represented. Figure 4 shows SRL performance on test sentences containing a novel verb and two animate nouns. Each plot shows the proportion of test sentences that were assigned an agent-patient (A0A1) role sequence; this sequence is correct for transitive sentences but is an error for two-noun intransitive sentences. Each group of bars shows the performance of the BabySRL trained using one of the four parsers, equipped with each of our four feature sets. The top and bottom panels in Figure 4 differ in the number of nouns provided to seed the argument identification stage. The top row shows performance with 10 seed nouns (the 10 most frequent nouns, mostly animate pronouns), and the bottom row shows performance with 365 concrete (animate or inanimate) nouns treated as known. Relative to the lexical baseline, NounPat features fared well: they promoted the assignment of A0A1 interpretations to transitive sentences, across all parser versions and both sets of known nouns. Both VB estimation and the content-function word split increased the ability of NounPat features to learn that the first of two nouns was an agent, and the second a patient. The NounPat features also promote the predicted error with two-noun intransitive sentences (Figures 4(b), 4(d)). Despite the relatively low accuracy of predicate identification noted in section 4.1, the VerbPos features did succeed in promoting an A0A1 interpretation for transitive sentences containing novel verbs relative to the lexical baseline. In every case the performance of the Combined model that includes both NounPat and VerbPos features exceeds the performance of either NounPat or VerbPos alone, suggesting both contribute to correct predictions for transitive sentences. However, the performance of VerbPos features did not improve with parsing accuracy as did the performance of the NounPat features. Most strikingly, the VerbPos features did not eliminate the predicted error with two-noun intransitive sentences, as shown in panels 4(b) and 4(d). 
The Combined model predicted an A0A1 sequence for these sentences, showing no reduction in this error due to the participation of VerbPos features. Table 1 shows SRL performance on the same transitive test sentences (‘A krads B’), compared to simple one-noun intransitive sentences (‘A krads’). To permit a direct comparison, the table reports the proportion of transitive test sentences for which the first noun was assigned an agent (A0) interpretation, and the proportion of intransitive test sentences with the agent (A0) role assigned to the single noun in the sentence. Here we report only the results from the best-performing parser (trained with VB EM, and content/function word pre-clustering), compared to the same classifiers trained with gold standard argument identification. When trained on arguments identified via the unsupervised POS tagger, noun pattern features promoted agent interpretations of tran995 Two Noun Transitive, % Agent First One Noun Intransitive, % Agent Prediction Lexical NounPat VerbPos Combine Lexical NounPat VerbPos Combine VB+Funct 10 seed 0.48 0.61 0.55 0.71 0.48 0.57 0.56 0.59 VB+Funct 365 seed 0.22 0.64 0.41 0.74 0.23 0.33 0.43 0.41 Gold Arguments 0.16 0.41 0.69 0.77 0.17 0.18 0.70 0.58 Table 1: SRL result comparison when trained with best unsupervised argument identifier versus trained with gold arguments. Comparison is between agent first prediction of two noun transitive sentences vs. one noun intransitive sentences. The unsupervised arguments lead the classifier to rely more on noun pattern features; when the true arguments and predicate are known the verb position feature leads the classifier to strongly indicate agent first in both settings. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 EM VB EM+Funct VB+Funct Gold %A0A1 Lexical NounPat VerbPos Combine (a) Two Noun Transitive Sentence, 10 seed nouns 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 EM VB EM+Funct VB+Funct Gold %A0A1 Lexical NounPat VerbPos Combine (b) Two Noun Intransitive Sentence, 10 seed nouns 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 EM VB EM+Funct VB+Funct Gold %A0A1 Lexical NounPat VerbPos Combine (c) Two Noun Transitive Sentence, 365 seed nouns 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 EM VB EM+Funct VB+Funct Gold %A0A1 Lexical NounPat VerbPos Combine (d) Two Noun Intransitive Sentence, 365 seed nouns Figure 4: SRL classification performance on transitive and intransitive test sentences containing two nouns and a novel verb. Performance with gold-standard argument identification is included for comparison. Across parses, noun pattern features promote agent-patient (A0A1) interpretations of both transitive (“You krad Mommy”) and two-noun intransitive sentences (“You and Mommy krad”); the latter is an error found in young children. Unsupervised parsing is less accurate in identifying the verb, so verb position features fail to eliminate errors with two-noun intransitive sentences. sitive subjects, but not for intransitive subjects. This differentiation between transitive and intransitive sentences was clearer when more known nouns were provided. Verb position features, in contrast, promote agent interpretations of subjects weakly with unsupervised argument identification, but equally for transitive and intransitive. Noun pattern features were robust to increases in parsing noise. The behavior of verb position features suggests that variations in the identifiability of different parts of speech can affect the usefulness of alternative representations of sentence structure. 
Representations that reflect the position of the verb may be powerful guides for understanding simple English sentences, but representations reflecting only the number and order of nouns can dominate early in acquisition, depending on the integrity of parsing decisions. 6 Conclusion and Future Work The key innovation in the present work is the combination of unsupervised part-of-speech tagging and argument identification to permit learning in a simplified SRL system. Children do not 996 have the luxury of treating part-of-speech tagging and semantic role labeling as separable tasks. Instead, they must learn to understand sentences starting from scratch, learning the meanings of some words, and using those words and their patterns of arrangement into sentences to bootstrap their way into more mature knowledge. We have created a first step toward modeling this incremental process. We combined unsupervised parsing with minimal supervision to begin to identify arguments and predicates. An SRL classifier used simple representations built from these identified arguments to extract useful abstract patterns for classifying semantic roles. Our results suggest that multiple simple representations of sentence structure could co-exist in the child’s system for sentence comprehension; representations that will ultimately turn out to be powerful guides to role identification may be less powerful early in acquisition because of the noise introduced by the unsupervised parsing. The next step is to ‘close the loop’, using higher level semantic feedback to improve the earlier argument identification and parsing stages. Perhaps with the help of semantic feedback the system can automatically improve predicate identification, which in turn allows it to correct the observed intransitive sentence error. This approach will move us closer to the goal of using initial simple structural patterns and natural observation of the world (semantic feedback) to bootstrap more and more sophisticated representations of linguistic structure. Acknowledgments This research is supported by NSF grant BCS0620257 and NIH grant R01-HD054448. References M.J. Beal. 2003. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London. L. Bloom. 1970. Language development: Form and function in emerging grammars. MIT Press, Cambridge, MA. L. Bloom. 1973. One word at a time: The use of single-word utterances before syntax. Mouton, The Hague. M.R. Brent and J.M. Siskind. 2001. The role of exposure to isolated words in early vocabulary development. Cognition, 81:31–44. E. Brill. 1997. Unsupervised learning of disambiguation rules for part of speech tagging. In Natural Language Processing Using Very Large Corpora. Kluwer Academic Press. R. Brown. 1973. A First Language. Harvard University Press, Cambridge, MA. X. Carreras and L. M`arquez. 2004. Introduction to the CoNLL-2004 shared tasks: Semantic role labeling. In Proceedings of CoNLL-2004, pages 89–97. Boston, MA, USA. E. Charniak. 1997. Statistical parsing with a contextfree grammar and word statistics. In Proc. National Conference on Artificial Intelligence. E.V. Clark. 1978. Awwareness of language: Some evidence from what children say and do. In R. J. A. Sinclair and W. Levelt, editors, The child’s conception of language. Springer Verlag, Berlin. M. Connor, Y. Gertner, C. Fisher, and D. Roth. 2008. Baby srl: Modeling early language acquisition. In Proc. 
of the Annual Conference on Computational Natural Language Learning (CoNLL), pages xx–yy, Aug. M. Connor, Y. Gertner, C. Fisher, and D. Roth. 2009. Minimally supervised model of early language acquisition. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL), Jun. M. Demetras, K. Post, and C. Snow. 1986. Feedback to first-language learners. Journal of Child Language, 13:275–292. K. Demuth, J. Culbertson, and J. Alter. 2006. Wordminimality, epenthesis, and coda licensing in the acquisition of english. Language & Speech, 49:137– 174. C. Fisher. 1996. Structural limits on verb mapping: The role of analogy in children’s interpretation of sentences. Cognitive Psychology, 31:41–81. Jianfeng Gao and Mark Johnson. 2008. A comparison of bayesian estimators for unsupervised hidden markov model pos taggers. In Proceedings of EMNLP-2008, pages 344–352. D. Gentner. 2006. Why verbs are hard to learn. In K. Hirsh-Pasek and R. Golinkoff, editors, Action meets word: How children learn verbs, pages 544– 564. Oxford University Press. Y. Gertner and C. Fisher. 2006. Predicted errors in early verb learning. In 31st Annual Boston University Conference on Language Development. 997 Y. Gertner, C. Fisher, and J. Eisengart. 2006. Learning words and rules: Abstract knowledge of word order in early sentence comprehension. Psychological Science, 17:684–691. J. Gillette, H. Gleitman, L. R. Gleitman, and A. Lederer. 1999. Human simulations of vocabulary learning. Cognition, 73:135–176. Sharon Goldwater and Tom Griffiths. 2007. A fully bayesian approach to unsupervised part-of-speech tagging. In Proceedings of 45th Annual Meeting of the Association of Computational Linguists, pages 744–751. R. Gomez and L. Gerken. 1999. Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge. Cognition, 70:109–135. A. Haghighi and D. Klein. 2006. Prototype-drive learning for sequence models. In Proceedings of NAACL-2006, pages 320–327. Mark Johnson. 2007. Why doesnt em find good hmm pos-taggers? In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296–305. M.H. Kelly. 1992. Using sound to solve syntactic problems: The role of phonology in grammatical category assignments. Psychological Review, 99:349–364. J. Lidz, H. Gleitman, and L. R. Gleitman. 2003. Understanding how input matters: verb learning and the footprint of universal grammar. Cognition, 87:151– 178. B. MacWhinney. 2000. The CHILDES project: Tools for analyzing talk. Third Edition. Lawrence Elrbaum Associates, Mahwah, NJ. M. P. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, June. Marina Meil˘a. 2002. Comparing clusterings. Technical Report 418, University of Washington Statistics Department. T. Mintz. 2003. Frequent frames as a cue for grammatical categories in child directed speech. Cognition, 90:91–117. P. Monaghan, N. Chater, and M.H. Christiansen. 2005. The differential role of phonological and distributional cues in grammatical categorisation. Cognition, 96:143–182. S. Pinker. 1984. Language learnability and language development. Harvard University Press, Cambridge, MA. V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2). L. R. Rabiner. 1989. 
A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–285. Sujith Ravi and Kevin Knight. 2009. Minimized models for unsupervised part-of-speech tagging. In Proceedings of the Joint Conferenceof the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACLIJCNLP). J.R. Saffran, R.N. Aslin, and E.L. Newport. 1996. Statistical learning by 8-month-old infants. Science, 274:1926–1928. Rushen Shi, James L. Morgan, and Paul Allopenna. 1998. Phonological and acoustic bases for earliest grammatical category assignment: a crosslinguistic perspective. Journal of Child Language, 25(01):169–201. Rushen Shi, Janet F. Werker, and James L. Morgan. 1999. Newborn infants’ sensitivity to perceptual cues to lexical and grammatical words. Cognition, 72(2):B11 – B21. L.B. Smith and C. Yu. 2008. Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106:1558–1568. Kiristina Toutanova and Mark Johnson. 2007. A bayesian lda-based model for semi-supervised partof-speech tagging. In Proceedings of NIPS. S. Yuan and C. Fisher. 2009. “really? she blicked the baby?”: Two-year-olds learn combinatorial facts about verbs by listening. Psychological Science, 20:619–626. S. Yuan, C. Fisher, Y. Gertner, and J. Snedeker. 2007. Participants are more than physical bodies: 21month-olds assign relational meaning to novel transitive verbs. In Biennial Meeting of the Society for Research in Child Development, Boston, MA. 998
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 999–1008, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Modeling Norms of Turn-Taking in Multi-Party Conversation Kornel Laskowski Carnegie Mellon University Pittsburgh PA, USA [email protected] Abstract Substantial research effort has been invested in recent decades into the computational study and automatic processing of multi-party conversation. While most aspects of conversational speech have benefited from a wide availability of analytic, computationally tractable techniques, only qualitative assessments are available for characterizing multi-party turn-taking. The current paper attempts to address this deficiency by first proposing a framework for computing turn-taking model perplexity, and then by evaluating several multi-participant modeling approaches. Experiments show that direct multi-participant models do not generalize to held out data, and likely never will, for practical reasons. In contrast, the Extended-Degree-of-Overlap model represents a suitable candidate for future work in this area, and is shown to successfully predict the distribution of speech in time and across participants in previously unseen conversations. 1 Introduction Substantial research effort has been invested in recent decades into the computational study and automatic processing of multi-party conversation. Whereas sociolinguists might argue that multiparty settings provide for the most natural form of conversation, and that dialogue and monologue are merely degenerate cases (Jaffe and Feldstein, 1970), computational approaches have found it most expedient to leverage past successes; these often involved at most one speaker. Consequently, even in multi-party settings, automatic systems generally continue to treat participants independently, fusing information across participants relatively late in processing. This state of affairs has resulted in the nearexclusion from computational consideration and from semantic analysis of a phenomenon which occurs at the lowest level of speech exchange, namely the relative timing of the deployment of speech in arbitrary multi-party groups. This phenomenon, the implicit taking of turns at talk (Sacks et al., 1974), is important because unless participants adhere to its general rules, a conversation would simply not take place. It is therefore somewhat surprising that while most other aspects of speech enjoy a large base of computational methodologies for their study, there are few quantitative techniques for assessing the flow of turn-taking in general multi-party conversation. The current work attempts to address this problem by proposing a simple framework, which, at least conceptually, borrows quite heavily from the standard language modeling paradigm. First, it defines the perplexity of a vector-valued Markov process whose multi-participant states are a concatenation of the binary states of individual speakers. Second, it presents some obvious evidence regarding the unsuitability of models defined directly over this space, under various assumptions of independence, for the inference of conversationindependent norms of turn-taking. Finally, it demonstrates that the extended-degree-of-overlap model of (Laskowski and Schultz, 2007), which models participants in an alternate space, achieves by far the best likelihood estimates for previously unseen conversations. 
This appears to be because the model can learn across conversations, regardless of the number of their participants. Experimental results show that it yields relative perplexity reductions of approximately 75% when compared to the ubiquitous singleparticipant model which ignores interlocutors, indicating that it can learn and generalize aspects of interaction which direct multi-participant models, and merely single-participant models, cannot. 999 2 Data Analysis and experiments are performed using the ICSI Meeting Corpus (Janin et al., 2003; Shriberg et al., 2004). The corpus consists of 75 meetings, held by various research groups at ICSI, which would have occurred even if they had not been recorded. This is important for studying naturally occurring interaction, since any form of intervention (including occurrence staging solely for the purpose of obtaining a record) may have an unknown but consistent impact on the emergence of turn-taking behaviors. Each meeting was attended by 3 to 9 participants, providing a wide variety of possible interaction types. 3 Conceptual Framework 3.1 Definitions Turn-taking is a generally observed phenomenon in conversation (Sacks et al., 1974; Goodwin, 1981; Schegloff, 2007); one party talks while the others listen. Its description and analysis is an important problem, treated frequently as a subdomain of linguistic pragmatics (Levinson, 1983). In spite of this, linguists tend to disagree about what precisely constitutes a turn (Sacks et al., 1974; Edelsky, 1981; Goodwin, 1981; Traum and Heeman, 1997), or even a turn boundary. For example, a “yeah” produced by a listener to indicate attentiveness, referred to as a backchannel (Yngve, 1970), is often considered to not implement a turn (nor to delineate an ongoing turn of an interlocutor), as it bears no propositional content and does not “take the floor” from the current speaker. To avoid being tied to any particular sociolinguistic theory, the current work equates “turn” with any contiguous interval of speech uttered by the same participant. Such intervals are commonly referred to as talk spurts (Norwine and Murphy, 1938). Because Norwine and Murphy’s original definition is somewhat ambiguous and non-trivial to operationalize, this work relies on that proposed by (Shriberg et al., 2001), in which spurts are “defined as speech regions uninterrupted by pauses longer than 500 ms” (italics in the original). Here, a threshold of 300 ms is used instead, as recently proposed in NIST’s Rich Transcription Meeting Recognition evaluations (NIST, 2002). The resulting definition of talk spurt, it is important to note, is in quite common use but frequently under different names. An oft-cited example is the inter-pausal unit of (Koiso et al., 1998)1, where the threshold is 100 ms. A consequence of this choice is that any model of turn-taking behavior inferred will effectively be a model of the distribution of speech, in time and across participants. If the parameters of such a model are maximum likelihood (ML) estimates, then that model will best account for what is most likely, or most “normal”; it will constitute a norm. Finally, an important aspect of this work is that it analyzes turn-taking behavior as independent of the words spoken (and of the ways in which those words are spoken). As a result, strictly speaking, what is modeled is not the distribution of speech in time and across participants but of binary speech activity in time and across participants. 
Despite this seemingly dramatic simplification, it will be seen that important aspects of turn-taking are sufficiently rare to be problematic for modeling. Modeling them jointly alongside lexical information, in multi-party scenarios, is likely to remain intractable for the foreseeable future.

3.2 The Vocal Interaction Record Q

The notation used here, as in (Laskowski and Schultz, 2007), is a trivial extension of that proposed in (Rabiner, 1989) to vector-valued Markov processes. At any instant t, each of K participants to a conversation is in a state drawn from $\Psi \equiv \{S_0, S_1\} \equiv \{\square, \blacksquare\}$, where $S_1 \equiv \blacksquare$ indicates speech (or, more precisely, "intra-talk-spurt instants") and $S_0 \equiv \square$ indicates non-speech (or "inter-talk-spurt instants"). The joint state of all participants at time t is described using the K-length column vector

  $q_t \in \Psi^K \equiv \Psi \times \Psi \times \cdots \times \Psi \equiv \{\mathbf{S}_0, \mathbf{S}_1, \ldots, \mathbf{S}_{2^K-1}\}$ .   (1)

An entire conversation, from the point of view of this work, can be represented as the matrix

  $Q \equiv [q_1, q_2, \ldots, q_T] \in \Psi^{K \times T}$ .   (2)

Q is known as the (discrete) vocal interaction (Dabbs and Ruback, 1987) record. T is the total number of frames in the conversation, sampled at Ts = 100 ms intervals. This is approximately the duration of the shortest lexical productions in the ICSI Meeting Corpus.

[Footnote 1: The inter-pausal unit differs from the pause unit of (Seligman et al., 1997) in that the latter is an intra-turn unit, requiring prior turn segmentation.]

3.3 Time-Independent First-Order Markov Modeling of Q

Given this definition of Q, a model Θ is sought to account for it. Only time-independent models, whose parameters do not change over the course of the conversation, are considered in this work. For simplicity, the state $q_0 = \mathbf{S}_0 = [\square, \square, \ldots, \square]^{*}$, in which no participant is speaking ($*$ indicates matrix transpose, to avoid confusion with conversation duration T), is first prepended to Q. $P_0 = P(q_0)$ therefore represents the unconditional probability of all participants being silent just prior to the start of any conversation.

[Footnote 2: In reality, the instant t = 0 refers to the beginning of the recording of a conversation, rather than the beginning of the conversation itself; this detail is without consequence.]

Then

  $P(Q) = P_0 \cdot \prod_{t=1}^{T} P(q_t \mid q_0, q_1, \cdots, q_{t-1}) \doteq P_0 \cdot \prod_{t=1}^{T} P(q_t \mid q_{t-1}, \Theta)$ ,   (3)

where in the second line the history is truncated to yield a standard first-order Markov form. Each of the T factors in Equation 3 is independent of the instant t,

  $P(q_t \mid q_{t-1}, \Theta) = P(q_t = \mathbf{S}_j \mid q_{t-1} = \mathbf{S}_i, \Theta)$   (4)
  $\equiv a_{ij}$ ,   (5)

as per the notation in (Rabiner, 1989). In particular, each factor is a function only of the state $\mathbf{S}_i$ in which the conversation was at time t−1 and the state $\mathbf{S}_j$ in which the conversation is at time t, and not of the instants t−1 or t. It may be expressed as the scalar $a_{ij}$ which forms the ith row and jth column entry of the matrix $\{a_{ij}\} \equiv \Theta$.

3.4 Perplexity

In language modeling practice, one finds the likelihood $P(w \mid \Theta)$, of a word sequence w of length $\|w\|$ under a model Θ, to be an inconvenient measure for comparison. Instead, the negative log-likelihood (NLL) and perplexity (PPL), defined as

  $\mathrm{NLL} = -\frac{1}{\|w\|} \log_e P(w \mid \Theta)$   (6)
  $\mathrm{PPL} = 10^{\mathrm{NLL}}$ ,   (7)

are often preferred (Jelinek, 1999). They are ubiquitously used to compare the complexity of different word sequences (or corpora) w and w′ under the same model Θ, or the performance on a single word sequence (or corpus) w under competing models Θ and Θ′. Here, a similar metric is proposed, to be used for the same purposes, for the record Q.

  $\mathrm{NLL} = -\frac{1}{KT} \log_2 P(Q \mid \Theta)$   (8)
  $\mathrm{PPL} = 2^{\mathrm{NLL}} = \left( P(Q \mid \Theta) \right)^{-1/KT}$   (9)

are defined as measures of turn-taking perplexity. As can be seen in Equation 8, the negative log-likelihood is normalized by the number K of participants and the number T of frames in Q; the latter renders the measure useful for making duration-independent comparisons. The normalization by K does not per se suggest that turn-taking in conversations with different K is necessarily similar; it merely provides similar bounds on the magnitudes of these metrics.
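To make the framework concrete, the following sketch builds Q from per-participant talk-spurt intervals and computes the turn-taking perplexity of Equations 8 and 9 under a maximum-likelihood transition model. All names are mine; the mid-frame discretization test and the add-epsilon smoothing are simplified stand-ins for the discretization and reserved-mass choices discussed later in the paper.

```python
import math
from collections import defaultdict

TS = 0.1  # frame step in seconds (100 ms)

def build_Q(spurts_per_participant, duration):
    """spurts_per_participant: one list per participant of (start, end) talk-spurt
    times in seconds. Returns Q as a list of per-frame tuples of 0/1."""
    K = len(spurts_per_participant)
    T = int(round(duration / TS))
    Q = []
    for t in range(T):
        mid = (t + 0.5) * TS  # simple mid-frame test; a stand-in for the rho threshold
        Q.append(tuple(int(any(s <= mid < e for s, e in spurts))
                       for spurts in spurts_per_participant))
    return Q

def train_transitions(Q, epsilon=1e-4):
    """ML bigram estimates over the observed multi-participant states, with
    add-epsilon smoothing standing in for the paper's reserved-mass scheme."""
    K = len(Q[0])
    counts = defaultdict(lambda: defaultdict(float))
    prev = (0,) * K  # q_0: all participants silent
    for q in Q:
        counts[prev][q] += 1.0
        prev = q
    n_states = 2 ** K
    model = {}
    for si, row in counts.items():
        total = sum(row.values()) + epsilon * n_states
        model[si] = {sj: (c + epsilon) / total for sj, c in row.items()}
        model[si]["__unseen__"] = epsilon / total
    return model

def turn_taking_ppl(Q, model):
    """PPL = 2^(-log2 P(Q|Theta) / (K*T)), as in Equations 8 and 9."""
    K, T = len(Q[0]), len(Q)
    nll = 0.0
    prev = (0,) * K
    for q in Q:
        row = model.get(prev, {})
        p = row.get(q, row.get("__unseen__", 1.0 / 2 ** K))
        nll -= math.log2(p)
        prev = q
    return 2 ** (nll / (K * T))
```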
4 Direct Estimation of Θ

Direct application of bigram modeling techniques, defined over the states {S}, is treated as a baseline.

4.1 The Case of K = 2 Participants

In contrast to multi-party conversation, dialogue has been extensively modeled in the ways described in this paper. Beginning with (Brady, 1969), Markov modeling techniques over the joint speech activity of two interlocutors have been explored by both the sociolinguist and the psycholinguist community (Jaffe and Feldstein, 1970; Dabbs and Ruback, 1987). The same models have also appeared in dialogue systems (Raux, 2008). Most recently, they have been augmented with duration models in a study of the Switchboard corpus (Grothendieck et al., 2009).

4.2 The Case of K > 2 Participants

In the general case beyond dialogue, such models have found less traction. This is partly due to the exponential growth in the number of states as K increases, and partly due to difficulties in interpretation. The only model for arbitrary K that the author is familiar with is the GroupTalk model (Dabbs and Ruback, 1987), which is unsuitable for the purposes here as it does not scale (with K, the number of participants) without losing track of speakers when two or more participants speak simultaneously (known as overlap).

[Figure 1 (plot): perplexity (y-axis, roughly 1.05 to 1.125) in time (x-axis, 10 to 20 minutes), with curves for the oracle, A+B and B+A conditions.]
Figure 1: Perplexity (along y-axis) in time (along x-axis, in minutes) for meeting Bmr024 under a conditionally dependent global oracle model, two "matched-half" models (A+B), and two "mismatched-half" models (B+A).

4.2.1 Conditionally Dependent Participants

In a particular conversation with K participants, the state space of an ergodic process contains $2^K$ states, and the number of free parameters in a model Θ which treats participant behavior as conditionally dependent (CD), henceforth $\Theta^{CD}$, scales as $2^K \cdot (2^K - 1)$. It should be immediately obvious that many of the $2^K$ states are likely to not occur within a conversation of duration T, leading to misestimation of the desired probabilities. To demonstrate this, three perplexity trajectories for a snippet of meeting Bmr024 are shown in Figure 1, in the interval beginning 5 minutes into the meeting and ending 20 minutes later. (The meeting is actually just over 50 minutes long but only a snippet is shown to better appreciate small time-scale variation.) The depicted perplexities are not unweighted averages over the whole meeting of duration T as in Equation 8, but over a 60-second Hamming window centered on each t. The first trajectory, the dashed black line, is obtained when the entire meeting is used to estimate $\Theta^{CD}$, and is then scored by that same model (an "oracle" condition). Significant perplexity variation is observed throughout the depicted snippet. The second trajectory, the continuous black line, is that obtained when the meeting is split into two equal-duration halves, one consisting of all instants prior to the midpoint and the other of all instants following it.
These halves are hereafter referred to as A and B, respectively (the interval in Figure 1 falls entirely within the A half). Two separate models ΘCD A and ΘCD B are each trained on only one of the two halves, and then applied to those same halves. As can be seen at the scale employed, the matched A+B model, demonstrating the effect of training data ablation, deviates from the global oracle model only in the intervals [7, 11] seconds and [15, 18] seconds; otherwise it appears that more training data, from later in the conversation, does not affect model performance. Finally, the third trajectory, the continuous gray line, is obtained when the two halves A and B of the meeting are scored using the mismatched models ΘCD B and ΘCD A , respectively (this condition is henceforth referred to as the B+A condition). It can be seen that even when probabilities are estimated from the same participants, in exactly the same conversation, a direct conditionally dependent model exposed to over 25 minutes of a conversation cannot predict the turn-taking patterns observed later. 4.2.2 Conditionally Independent Participants A potential reason for the gross misestimation of ΘCD under mismatched conditions is the size of the state space {S}. The number of parameters in the model can be reduced by assuming that participants behave independently at instant t, but are conditioned on their joint behavior at t −1. The likelihood of Q under the resulting conditionally independent model ΘCI has the form P ( Q ) .= P0 · T Y t=1 K Y k=1 P qt [k] | qt−1, ΘCI k  , (10) where each factor is time-independent, P qt [k] | qt−1, ΘCI k  = P qt [k] = Sn | qt−1 = Si, ΘCI k  (11) ≡ aCI k,in , (12) with 0 ≤i < 2K and 0 ≤n < 2. The complete model {ΘCI k } ≡{{aCI k,in}} consists of K matrices of size 2K × 2 each. It therefore contains only K·2K free parameters, a significant reduction over the conditionally dependent model ΘCD. Panel (a) of Figure 2 shows the performance of this model on the same conversational snippet 1002 as in Figure 1. The oracle, dashed black line of the latter is reproduced as a reference. The continuous black and gray lines show the smoothed perplexity for the matched (A+B) and the mismatched (B+A) conditions, respectively. In the matched condition, the CI model reproduces the oracle trajectory with relatively high fidelity, suggesting that participants’ behavior may in fact be assumed to be conditionally independent in the sense discussed. Furthermore, the failures of the CI model under mismatched conditions are less severe in magnitude than those of the CD model. Panel (b) of Figure 2 demonstrates the trivial fact that a conditionally independent model ΘCI any, tying the statistics of all K participants into a single model, is useless. This is of course because it cannot predict the next state of a generic participant for which the index k in qt−1 has been lost. 4.2.3 Mutually Independent Participants A further reduction in the complexity of Θ can be achieved by assuming that participants are mutually independent (MI), leading to the participantspecific ΘMI k model: P ( Q ) .= P0 · T Y t=1 K Y k=1 P qt [k] | qt−1 [k] , ΘMI k  . (13) The factors are time-independent, P qt [k] | qt−1 [k] , ΘMI k  = P qt [k] = Sn | qt−1 [k] = Sm, ΘMI k  (14) ≡ aMI k,mn , (15) where 0 ≤m < 2 and 0 ≤n < 2. This model {ΘMI k } ≡{{aMI k,mn}} consists of K matrices of size 2 × 2 each, with only K · 2 free parameters. 
Panel (c) of Figure 2 shows that the MI model yields mismatched performance which is a much better approximation to its performance under matched conditions. However, its matched performance is worse than that of CD and CI models. When a single MI model ΘMI any is trained instead for all participants, as shown in panel (d), both of these effects are exaggerated. In fact, the performance of ΘMI any in matched and mismatched conditions is almost identical. The consistently higher perplexity is obtained, as mentioned, by smoothing over 60-second windows, and therefore underestimates poor performance at specific instants (which occur frequently). 10 15 20 1.05 1.075 1.1 1.125 10 15 20 1.1 1.2 1.3 1.4 (a) Θ =  ΘCI k (b) Θ = ΘCI any 10 15 20 1.05 1.075 1.1 1.125 10 15 20 1.05 1.075 1.1 1.125 (c) Θ =  ΘMI k (d) Θ = ΘMI any Figure 2: Perplexity (along y-axis) in time (along x-axis, in minutes) for meeting Bmr024 under a conditionally dependent global oracle model, and various matched (A+B) and mismatched (B+A) model pairs with relaxed dependence assumptions. Legend as in Figure 1. 5 Limitations and Desiderata As the analyses in Section 4 reveal, direct estimation can be useful under oracle conditions, namely when all of a conversation has been observed and the task is to find intervals where multiparticipant behavior deviates significantly from its conversation-specific norm. The assumption of conditional independence among participants was argued to lead to negligible degradation in the detectability of these intervals. However, the assumption of mutual independence consistently leads to higher surprise by the model. 5.1 Predicting the Future Within Conversations In the more interesting setting in which only a part of a conversation has been seen and the task is to limit the perplexity of what is still to come, direct estimation exhibits relatively large failures under both conditionally dependent and conditionally independent participant assumptions. This appears to be due to the size of the state space, which scales as 2K with the number K of participants. In the case of general K, more conversational data may be sought, from exactly the same group of participants, but that approach appears likely to be 1003 insufficient, and, for practical reasons3, impossible. One would instead like to be able to use other conversations, also exhibiting participant interaction, to limit the perplexity of speech occurrence in the conversation under study. Unfortunately, there are two reasons why direct estimation cannot be tractably deployed across conversations. The first is that the direct models considered here, with the exception of ΘMI any, are K-specific. In particular, the number and the identity of conditioning states are both functions of K, for ΘCD and {ΘCI k }; the models may also consist of K distinct submodels, as for {ΘCI k } and {ΘMI k }. No techniques for computing the turntaking perplexity in conversations with K participants, using models trained on conversations with K′ ̸= K, are currently available. The second reason is that these models, again with the exception of ΘMI any, are R-specific, independently of K-specificity. By this it is meant that the models are sensitive to participant index permutation. Had a participant at index k in Q been assigned to another index k′̸=k, an alternate representation of the conversation, namely Q′ = Rkk′ · Q, would have been obtained. (Here, Rkk′ is a matrix rotation operator obtained by exchanging columns k and k′ of the K × K identity matrix I.) 
Since index assignment is entirely arbitrary, useful direct models cannot be inferred from other conversations, even when their K′ = K, unless K is small. The prospect of naively permuting every training conversation prior to parameter inference has complexity K!. 5.2 Comparing Perplexity Across Conversations Until R-specificity is comprehensively addressed, the only model from among those discussed so far, which exhibits no K-dependence, is ΘMI any, namely that which treats participants identically and independently. This model can be used to score the perplexity of any conversation, and facilitates the comparison of the distribution of speech activity across conversations. Unfortunately, since the model captures only durational aspects of one-participant speech and non-speech intervals, it does not in any way encode a norm of turn-taking, an inherently interac3This pertains to the practicalities of re-inviting, instrumenting, recording and transcribing the same groups of participants, with necessarily more conversations for large groups than for small ones. tive and hence multi-participant phenomenon. It therefore cannot be said to rank conversations according to their deviation from turn-taking norms. 5.3 Theoretical Limitations In addition to the concerns above, a fundamental limitation of the analyzed direct models, whether for conversation-specific or conversationindependent use, is that they are theoretically cumbersome if not vacuous. Given a solution to the problem of R-specificity, the parameters {aCD ij } may be robustly inferred, and the models may be applied to yield useful estimates of turn-taking perplexity. However, they cannot be said to directly validate or dispute the vast qualitative observations of sociolinguistics, and of conversation analysis in particular. 5.4 Prospects for Smoothing To produce Figures 1 and 2, a small fraction of probability mass was reserved for unseen bigram transitions (as opposed to backing off to unigram probabilities). Furthermore, transitions into neverobserved states were assigned uniform probabilities. This policy is simplistic, and there is significant scope for more detailed back-off and interpolation. However, such techniques infer values for under-estimated probabilities from shorter truncations of the conditioning history. As K-specificity and R-specificity suggest, what appears to be needed here are back-off and interpolation across states. For example, in a conversation of K = 5 participants, estimates of the likelihood of the state qt = [□■■■□]∗, which might have been unobserved in any training material, can be assumed to be related to those of q′ t = [□□■■□]∗and q′′ t = [□■■□□]∗, as well as those of Rq′ t and Rq′′ t , for arbitrary R. 6 The Extended-Degree-of-Overlap Model The limitations of direct models appear to be addressable by a form proposed by Laskowski and Schultz in (2006) and (2007). That form, the Extended-Degree-of-Overlap (EDO) model, was used to provide prior probabilities P ( Q | Θ ) of the speech states of multiple meeting participants simultaneously, for use in speech activity detection. The model was trained on utterances (rather than talk spurts) from a different corpus than that 1004 used here, and the authors did not explore the turntaking perplexities of their data sets. Several of the equations in (Laskowski and Schultz, 2007) are reproduced here for comparison. The EDO model yields time-independent transition probabilities which assume conditional inter-participant dependence (cf. 
Equation 3),

  $P(q_{t+1} = \mathbf{S}_j \mid q_t = \mathbf{S}_i) = \alpha_{ij} \cdot P(\|q_{t+1}\| = n_j, \ \|q_{t+1} \cdot q_t\| = o_{ij} \mid \|q_t\| = n_i)$ ,   (16)

where $n_i \equiv \|\mathbf{S}_i\|$ and $n_j \equiv \|\mathbf{S}_j\|$, with $\|\mathbf{S}\|$ yielding the number of participants in ■ in the multi-participant state $\mathbf{S}$. In other words, $n_i$ and $n_j$ are the numbers of participants simultaneously speaking in states $\mathbf{S}_i$ and $\mathbf{S}_j$, respectively. The elements of the binary product $\mathbf{S} = \mathbf{S}_1 \cdot \mathbf{S}_2$ are given by

  $\mathbf{S}[k] \equiv \begin{cases} \blacksquare, & \text{if } \mathbf{S}_1[k] = \mathbf{S}_2[k] = \blacksquare \\ \square, & \text{otherwise} \end{cases}$ ,   (17)

and $o_{ij}$ is therefore the number of same participants speaking in $\mathbf{S}_i$ and $\mathbf{S}_j$. The discussion of the role of $\alpha_{ij}$ in Equation 16 is deferred to the end of this section.

The EDO model mitigates R-specificity because it models each bigram $(q_{t-1}, q_t) = (\mathbf{S}_i, \mathbf{S}_j)$ as the modified bigram $(n_i, [o_{ij}, n_j])$, involving three scalars each of which is a sum — a commutative (and therefore rotation-invariant) operation. Because it sums across only those participants which are in the ■ state, completely ignoring their □-state interlocutors, it can also mitigate K-specificity if one additionally redefines

  $n_i = \min(\|\mathbf{S}_i\|, K_{max})$   (18)
  $n_j = \min(\|\mathbf{S}_j\|, K_{max})$   (19)
  $o_{ij} = \min(\|\mathbf{S}_i \cdot \mathbf{S}_j\|, n_i, n_j)$ ,   (20)

as in (Laskowski and Schultz, 2007). $K_{max}$ represents the maximum model-licensed degree of overlap, or the maximum number of participants allowed to be simultaneously speaking. The EDO model therefore represents a viable conversation-independent, K-independent, and R-independent model of turn-taking for the purposes in the current work.

[Footnote 4: There exists some empirical evidence to suggest that conversations of K participants should not be used to train models for predicting turn-taking behavior in conversations of K′ participants, for K′ ≠ K, because turn-taking is inherently K-dependent. For example, (Fay et al., 2000) found qualitative differences in turn-taking patterns between small groups and large groups, represented in their study by K = 5 and K = 10, and noted that there is a smooth transition between the two extremes; this provides some scope for interpolating small- and large-group models, and the EDO framework makes this possible.]

The factor $\alpha_{ij}$ in Equation 16 provides a deterministic mapping from the conversation-independent space $(n_i, [o_{ij}, n_j])$ to the conversation-specific space $\{a_{ij}\}$. The mapping is deterministic because the model assumes that all participants are identical. This places the EDO model at a disadvantage with respect to the CD and CI models, as well as to $\{\Theta^{MI}_k\}$, which allow each participant to be modeled differently.
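A minimal sketch of the EDO bigram mapping of Equations 16-20 and of its training across conversations of arbitrary K; names are mine, and the deterministic $\alpha_{ij}$ back-mapping to conversation-specific transition probabilities is omitted:

```python
from collections import defaultdict

def edo_key(q_prev, q_next, k_max=4):
    """Map a multi-participant bigram to (n_i, o_ij, n_j), as in Equations 18-20.
    q_prev, q_next: tuples of 0/1, one entry per participant."""
    n_i = min(sum(q_prev), k_max)
    n_j = min(sum(q_next), k_max)
    o_ij = min(sum(a & b for a, b in zip(q_prev, q_next)), n_i, n_j)
    return n_i, o_ij, n_j

def train_edo(conversations, k_max=4):
    """Accumulate EDO bigram statistics across conversations of any K.
    conversations: iterable of Q records (lists of per-frame 0/1 tuples)."""
    counts = defaultdict(lambda: defaultdict(float))
    for Q in conversations:
        prev = (0,) * len(Q[0])
        for q in Q:
            n_i, o_ij, n_j = edo_key(prev, q, k_max)
            counts[n_i][(o_ij, n_j)] += 1.0
            prev = q
    # Conditional estimates of P(n_j, o_ij | n_i); because the key is built from
    # sums, permuting participant indices leaves the counts unchanged.
    model = {}
    for n_i, row in counts.items():
        total = sum(row.values())
        model[n_i] = {key: c / total for key, c in row.items()}
    return model
```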
7 Experiments

This section describes the performance of the discussed models on the entire ICSI Meeting Corpus.

7.1 Conversation-Specific Modeling

First to be explored is the prediction of yet-unobserved behavior in conversation-specific settings. For each meeting, models are trained on portions of that meeting only, and then used to score other portions of the same meeting. This is repeated over all meetings, and comprises the mismatched condition of Section 4; for contrast, the matched condition is also evaluated. Each meeting is divided into two halves, in two different ways. The first way is the A/B split of Section 4, representing the first and second halves of each meeting; as has been shown, turn-taking patterns may vary substantially from A to B. The second split (C/D) places every even-numbered frame in one set and every odd-numbered frame in the other. This yields a much easier setting, of two halves which are on average maximally similar but still temporally disjoint.

                 Hard split A/B (first/second halves)      Easy split C/D (odd/even frames)
                 A+B               B+A                     C+D               D+C
  Model          "all"   "sub"     "all"   "sub"           "all"   "sub"     "all"   "sub"
  Θ^CD           1.0905  1.6444    1.1225  1.8395          1.0915  1.6555    1.0991  1.7403
  {Θ^CI_k}       1.0915  1.6576    1.1156  1.7809          1.0925  1.6695    1.0956  1.7028
  {Θ^MI_k}       1.0978  1.7236    1.1086  1.7950          1.0991  1.7381    1.0992  1.7398
  Θ^MI           1.1046  1.8047    1.1047  1.8059          1.1046  1.8050    1.1046  1.8052
  Θ^EDO          1.0977  1.7257    1.0985  1.7323          1.0977  1.7268    1.0982  1.7313

Table 1: Perplexities for conversation-specific turn-taking models on the entire ICSI Meeting Corpus. Both "all" frames and the subset ("sub") for which q_{t−1} ≠ q_t are shown, for matched (A+B and C+D) and mismatched (B+A and D+C) conditions on splits A/B and C/D.

The perplexities (of Equation 9) in these experiments are shown in the second, fourth, sixth and eighth columns of Table 1, under "all". In the matched A+B and C+D conditions, the conditionally dependent model Θ^CD provides topline ML performance. Perplexities increase as model complexity falls for the direct models, as expected. However, in the more interesting mismatched B+A condition, the EDO model performs the best. This shows that its ability to generalize to unseen data is higher than that of direct models. However, in the easier mismatched D+C condition, it is outperformed by the CI model due to behavior differences among participants, which the EDO model does not capture.

The numbers under the "all" columns in Table 1 were computed using all of each meeting's frames. For contrast, in the "sub" columns, perplexities are computed over only those frames for which q_{t−1} ≠ q_t. This is a useful subset because, for the majority of time in conversations, one person simply continues to talk while all others remain silent. (Retaining only q_{t−1} ≠ q_t also retains instants of transition into and out of intervals of silence.) Excluding q_{t−1} = q_t bigrams (leading to 0.32M frames from 2.39M frames in "all") offers a glimpse of expected performance differences were duration modeling to be included in the models. Perplexities are much higher in these intervals, but the same general trend as for "all" is observed.

7.2 Conversation-Independent Modeling

The training of conversation-independent models, given a corpus of K-heterogeneous meetings, is achieved by iterating over all meetings and testing each using models trained on all of the other meetings. As discussed in the preceding section, Θ^MI_any is the only one among the direct models which can be used for this purpose. It also models exclusively single-participant behavior, ignoring the interactive setting provided by other participants.

                   PPL               ΔPPL (%)
  Model          "all"    "sub"     "all"   "sub"
  Θ^CD           1.0921   1.6616    —       —
  Θ^MI           1.1051   1.8170    14.1    23.5
  Θ^EDO (6)      1.0992   1.7405     7.7    11.9
  Θ^EDO (5)      1.0968   1.7127     5.1     7.7
  Θ^EDO (4)      1.0953   1.6947     3.5     5.0
  Θ^EDO (3)      1.1082   1.8502    17.5    28.5

Table 2: Perplexities for conversation-independent turn-taking models on the entire ICSI Meeting Corpus; the oracle Θ^CD topline is included in the first row. Both "all" frames and the subset ("sub") for which q_{t−1} ≠ q_t are shown; relative increases over the topline (less unity, representing no perplexity) are shown in columns 4 and 5. The value of K_max (cf. Equations 18, 19, and 20) is shown in parentheses in the first column.

As shown in Table 2, when all time is scored the EDO model with K_max = 4 is the best model (in Section 7.1, K_max = K since the model was trained on the same meeting to which it was applied). Its perplexity gap to the oracle model is only a quarter of the gap exhibited by Θ^MI_any. The relative performance of EDO models is even better when only those instants t are considered for which q_{t−1} ≠ q_t.
There, the perplexity gap to the oracle model is smaller for Θ^EDO than that of Θ^MI_any by 78%.

8 Discussion

The model perplexities as reported above may be somewhat different if the "talk spurt" were replaced by a more sociolinguistically motivated definition of "turn", but the ranking of models and their relative performance differences are likely to remain quite similar. On the one hand, many inter-talk-spurt gaps might find themselves to be within-turn, leading to more ■ entries in the record Q than observed in the current work. This would increase the apparent frequency and duration of intervals of overlap. On the other hand, alternative definitions of turn may exclude some speech activity, such as that implementing backchannels. Since backchannels are often produced in overlap
Participant dependence is likely to be related to speakers’ social characteristics and conversational roles, while time dependence may reflect opening and closing functions, topic boundaries, and periodic turn exchange failures. In the meantime, event types such as the latter may be detectable as EDO perplexity departures, potentially recommending the model’s use for localizing conversational “hot spots” (Wrede and Shriberg, 2003). The EDO model, and turntaking models in general, may also find use in diagnosing turn-taking naturalness in spoken dialogue systems. 9 Conclusions This paper has presented a framework for quantifying the turn-taking perplexity in multi-party conversations. To begin with, it explored the consequences of modeling participants jointly by concatenating their binary speech/non-speech states into a single multi-participant vector-valued state. Analysis revealed that such models are particularly poor at generalization, even to subsequent portions of the same conversation. This is due to the size of their state space, which is factorial in the number of participants. Furthermore, because such models are both specific to the number of participants and to the order in which participant states are concatenated together, it is generally intractable to train them on material from other conversations. The only such model which may be trained on other conversations is that which completely ignores interlocutor interaction. In contrast, the Extended-Degree-of-Overlap (EDO) construction of (Laskowski and Schultz, 2007) may be trained on other conversations, regardless of their number of participants, and usefully applied to approximate the turn-taking perplexity of an oracle model. This is achieved because it models entry into and egress out of specific degrees of overlap, and completely ignores the number of participants actually present or their modeled arrangement. In this sense, the EDO model can be said to implement the qualitative findings of conversation analysis. In predicting the distribution of speech in time and across participants, it reduces the unseen data perplexity of a model which ignores interaction by 75% relative to an oracle model. 1007 References Paul T. Brady. 1969. A model for generating onoff patterns in two-way conversation. Bell Systems Technical Journal, 48(9):2445–2472. James M. Dabbs and R. Barry Ruback. 1987. Dimensions of group process: Amount and structure of vocal interaction. Advances in Experimental Social Psychology, 20:123–169. Carole Edelsky. 1981. Who’s got the floor? Langauge in Society, 10:383–421. Nicolas Fay, Simon Garrod, and Jean Carletta. 2000. Group discussion as interactive dialogue or as serial monologue: The influence of group size. Psychological Science, 11(6):487–492. Charles Goodwin. 1981. Conversational Organization: Interaction Between Speakers and Hearers. Academic Press, New York NY, USA. John Grothendieck, Allen Gorin, and Nash Borges. 2009. Social correlates of turn-taking behavior. Proc. ICASSP, Taipei, Taiwan, pp. 4745–4748. Joseph Jaffe and Stanley Feldstein. 1970. Rhythms of Dialogue. Academic Press, New York NY, USA. Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, and Chuck Wooters. 2003. The ICSI Meeting Corpus. Proc. ICASSP, Hong Kong, China, pp. 364– 367. Frederick Jelinek. 1999. Statistical Methods for Speech Recognition. MIT Press, Cambridge MA, USA. 
Hanae Koiso, Yasui Horiuchi, Syun Tutiya, Akira Ichikawa, and Yasuharu Den. 1998. An analysis of turn-taking and backchannels based on prosodic and syntactic features in Japanese Map Task dialogs. Language and Speech, 41(3-4):295–321. Kornel Laskowski and Tanja Schultz. 2006. Unsupervised learning of overlapped speech model parameters for multichannel speech activity detection in meetings. Proc. ICASSP, Toulouse, France, pp. 993–996. Kornel Laskowski and Susanne Burger. 2007. Analysis of the occurrence of laughter in meetings. Proc. INTERSPEECH, Antwerpen, Belgium, pp. 1258– 1261. Kornel Laskowski and Tanja Schultz. 2007. Modeling vocal interaction for segmentation in meeting recognition. Machine Learning for Multimodal Interaction, A. Popescu-Belis, S. Renals, and H. Bourlard, eds., Lecture Notes in Computer Science, 4892:259–270, Springer Berlin/Heidelberg, Germany. Stephen C. Levinson. 1983. Pragmatics. Cambridge University Press. National Institute of Standards and Technology. 2002. Rich Transcription Evaluation Project, www.itl.nist.gov/iad/mig/tests/rt/ (last accessed 15 February 2010 1217hrs GMT). A. C. Norwine and O. J. Murphy. 1938. Characteristic time intervals in telephonic conversation. Bell System Technical Journal, 17:281-291. Lawrence Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE, 77(2):257–286. Antoine Raux. 2008. Flexible turn-taking for spoken dialogue systems. PhD Thesis, Carnegie Mellon University. Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A simplest semantics for the organization of turn-taking for conversation. Language, 50(4):696–735. Emanuel A. Schegloff. 2007. Sequence Organization in Interaction. Cambridge University Press, Cambridge, UK. Mark Seligman, Junko Hosaka, and Harald Singer. 1997. “Pause units” and analysis of spontaneous Japanese dialogues: Preliminary studies. Dialogue Processing in Spoken Language Systems E. Maier, M. Mast, and S. LuperFoy, eds., Lecture Notes in Computer Science, 1236:100–112. Springer Berlin/Heidelberg, Germany. Elizabeth Shriberg, Andreas Stolcke, and Don Baron. 2001. Observations on overlap: Findings and implications for automatic processing of multi-party conversation. Proc. EUROSPEECH, Gen`eve, Switzerland, pp. 1359–1362. Elizabeth Shriberg, Raj Dhillon, Sonali Bhagat, Jeremy Ang, and Hannah Carvey. 2004. The ICSI Meeting Recorder Dialog Act (MRDA) Corpus. Proc. SIGDIAL, Boston MA, USA, pp. 97–100. David Traum and Peeter Heeman. 1997. Utterance units in spoken dialogue. Dialogue Processing in Spoken Language Systems E. Maier, M. Mast, and S. LuperFoy, eds., Lecture Notes in Computer Science, 1236:125–140. Springer Berlin/Heidelberg, Germany. Britta Wrede and Elizabeth Shriberg. 2003. Spotting “hot spots” in meetings: Human judgments and prosodic cues. Proc. EUROSPEECH, Aalborg, Denmark, pp. 2805–2808. Victor H. Yngve. 1970. On getting a word in edgewise. Papers from the Sixth Regional Meeting Chicago Linguistic Society, pp. 567–578. Chicago Linguistic Society, Chicago IL, USA. 1008
2010
102
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1009–1018, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Optimising Information Presentation for Spoken Dialogue Systems Verena Rieser University of Edinburgh Edinburgh, United Kingdom [email protected] Oliver Lemon Heriot-Watt University Edinburgh, United Kingdom [email protected] Xingkun Liu Heriot-Watt University Edinburgh, United Kingdom [email protected] Abstract We present a novel approach to Information Presentation (IP) in Spoken Dialogue Systems (SDS) using a data-driven statistical optimisation framework for content planning and attribute selection. First we collect data in a Wizard-of-Oz (WoZ) experiment and use it to build a supervised model of human behaviour. This forms a baseline for measuring the performance of optimised policies, developed from this data using Reinforcement Learning (RL) methods. We show that the optimised policies significantly outperform the baselines in a variety of generation scenarios: while the supervised model is able to attain up to 87.6% of the possible reward on this task, the RL policies are significantly better in 5 out of 6 scenarios, gaining up to 91.5% of the total possible reward. The RL policies perform especially well in more complex scenarios. We are also the first to show that adding predictive “lower level” features (e.g. from the NLG realiser) is important for optimising IP strategies according to user preferences. This provides new insights into the nature of the IP problem for SDS. 1 Introduction Work on evaluating SDS suggests that the Information Presentation (IP) phase is the primary contributor to dialogue duration (Walker et al., 2001), and as such, is a central aspect of SDS design. During this phase the system returns a set of items (“hits”) from a database, which match the user’s current search constraints. An inherent problem in this task is the trade-off between presenting “enough” information to the user (for example helping them to feel confident that they have a good overview of the search results) versus keeping the utterances short and understandable. In the following we show that IP for SDS can be treated as a data-driven joint optimisation problem, and that this outperforms a supervised model of human ‘wizard’ behaviour on a particular IP task (presenting sets of search results to a user). A similar approach has been applied to the problem of Referring Expression Generation in dialogue (Janarthanam and Lemon, 2010). 1.1 Previous work on Information Presentation in SDS Broadly speaking, IP for SDS can be divided into two main steps: 1) IP strategy selection and 2) Content or Attribute Selection. Prior work has presented a variety of IP strategies for structuring information (see examples in Table 1). For example, the SUMMARY strategy is used to guide the user’s “focus of attention”. It draws the user’s attention to relevant attributes by grouping the current results from the database into clusters, e.g. (Polifroni and Walker, 2008; Demberg and Moore, 2006). Other studies investigate a COMPARE strategy, e.g. (Walker et al., 2007; Nakatsu, 2008), while most work in SDS uses a RECOMMEND strategy, e.g. (Young et al., 2007). In a previous proofof-concept study (Rieser and Lemon, 2009) we show that each of these strategies has its own strengths and drawbacks, dependent on the particular context in which information needs to be presented to a user. 
Here, we will also explore possible combinations of the strategies, for example SUMMARY followed by RECOMMEND, e.g. (Whittaker et al., 2002), see Figure 1. Prior work on Content or Attribute Selection has used a “Summarize and Refine” approach (Polifroni and Walker, 2008; Polifroni and Walker, 2006; Chung, 2004). This method employs utilitybased attribute selection with respect to how each attribute (e.g. price or food type in restaurant 1009 search) of a set of items helps to narrow down the user’s goal to a single item. Related work explores a user modelling approach, where attributes are ranked according to user preferences (Demberg and Moore, 2006; Winterboer et al., 2007). Our data collection (see Section 3) and training environment incorporate these approaches. The work in this paper is the first to apply a data-driven method to this whole decision space (i.e. combinations of Information Presentation strategies as well as attribute selection), and to show the utility of both lower-level features (e.g. from the NLG realiser) and higher-level features (e.g. from Dialogue Management) for this problem. Previous work has only focused on individual aspects of the problem (e.g. how many attributes to generate, or when to use a SUMMARY), using a pipeline model for SDS with DM features as input, and where NLG has no knowledge of lower level features (e.g. behaviour of the realiser). In Section 4.3 we show that lower level features significantly influence users’ ratings of IP strategies. In the following we use a Reinforcement Learning (RL) as a statistical planning framework (Sutton and Barto, 1998) to explore the contextual features for making these decisions, and propose a new joint optimisation method for IP strategies combining content structuring and attribute selection. 2 NLG as planning under uncertainty We follow the overall framework of NLG as planning under uncertainty (Lemon, 2008; Rieser and Lemon, 2009; Lemon, 2010), where each NLG action is a sequential decision point, based on the current dialogue context and the expected longterm utility or “reward” of the action. Other recent approaches describe this task as planning, e.g. (Koller and Petrick, 2008), or as contextual decision making according to a cost function (van Deemter, 2009), but not as a statistical planning problem, where uncertainty in the stochastic environment is explicitly modelled. Below, we apply this framework to Information Presentation strategies in SDS using Reinforcement Learning, where the example task is to present a set of search results (e.g. restaurants) to users. In particular, we consider 7 possible policies for structuring the content (see Figure 1): Recommending one single item, comparing two items, summarising all of them, or ordered combinations of those actions, e.g. first summarise all the retrieved items and then recommend one of them. The IP module has to decide which action to take next, how many attributes to mention, and when to stop generating. Figure 1: Possible Information Presentation structures (X=stop generation) 3 Wizard-of-Oz data collection In an initial Wizard-of-Oz (WoZ) study, we asked humans (our “wizards”) to produce good IP actions in different dialogue contexts, when interacting in spoken dialogues with other humans (the “users”), who believed that they were talking to an automated SDS. The wizards were experienced researchers in SDS and were familiar with the search domain (restaurants in Edinburgh). 
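For concreteness, the seven content-structuring sequences permitted by Figure 1 are exactly the non-empty subsequences of SUMMARY, COMPARE, RECOMMEND taken in that fixed order (generation may stop after any action). A minimal sketch of this enumeration, with string labels of our own choosing, is:

```python
from itertools import combinations

ACTIONS = ("SUMMARY", "COMPARE", "RECOMMEND")  # the order Figure 1 permits

def ip_structures():
    """The 7 IP sequences of Figure 1: every non-empty subsequence of
    ACTIONS, keeping their relative order; shorter sequences arise because
    generation may stop (X in Figure 1) after any action."""
    return [seq for r in range(1, len(ACTIONS) + 1)
            for seq in combinations(ACTIONS, r)]

for seq in ip_structures():
    print(" + ".join(seq))
# SUMMARY; COMPARE; RECOMMEND; SUMMARY + COMPARE; SUMMARY + RECOMMEND;
# COMPARE + RECOMMEND; SUMMARY + COMPARE + RECOMMEND
```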
They were instructed to select IP structures and attributes for NLG so as to most efficiently allow users to find a restaurant matching their search constraints. They also received prior training on this task. The task for the wizards was to decide which IP structure to use next (see Section 3.2 for a list of IP strategies to choose from), which attributes to mention (e.g. cuisine, price range, location, food quality, and/or service quality), and whether to stop generating, given varying numbers of database matches, varying prompt realisations, and varying user behaviour. Wizard utterances were synthesised using a state-of-the-art text-to-speech engine. The user speech input was delivered to the wizard using Voice Over IP. Figure 2 shows the web-based interface for the wizard. 3.1 Experimental Setup and Data collection We collected 213 dialogues with 18 subjects and 2 wizards (Liu et al., 2009). Each user performed a total of 12 tasks, where no task set was seen twice by any one wizard. The majority of users were from a range of backgrounds in a higher education institute, in the age range 20-30, native speakers of English, and none had prior experience of 1010 Figure 2: Wizard interface. [A:] The wizard selects attribute values as specified by the user’s query. [B:] The retrieved database items are presented in an ordered list. We use a User Modelling approach for ranking the restaurants, see e.g. (Polifroni and Walker, 2008). [C:] The wizard then chooses which strategy and which attributes to generate next, by clicking radio buttons. The attribute/s specified in the last user query are pre-selected by default. The strategies can only be combined in the orders as specified in Figure 1. [D:] An utterance is automatically generated by the NLG realiser every time the wizard selects a strategy, and is displayed in an intermediate text panel. [E:] The wizard can decide to add the generated utterance to the final output panel or to start over again. The text in the final panel is sent to the user via TTS, once the wizard decides to stop generating. Strategy Example utterance SUMMARY no UM I found 26 restaurants, which have Indian cuisine. 11 of the restaurants are in the expensive price range. Furthermore, 10 of the restaurants are in the cheap price range and 5 of the restaurants are in the moderate price range. SUMMARY UM 26 restaurants meet your query. There are 10 restaurants which serve Indian food and are in the cheap price range. There are also 16 others which are more expensive. COMPARE by Item The restaurant called Kebab Mahal is an Indian restaurant. It is in the cheap price range. And the restaurant called Saffrani, which is also an Indian restaurant, is in the moderate price range. COMPARE by Attribute The restaurant called Kebab Mahal and the restaurant called Saffrani are both Indian restaurants. However, Kebab Mahal is in the cheap price range while Saffrani is moderately priced. RECOMMEND The restaurant called Kebab Mahal has the best overall quality amongst the matching restaurants. It is an Indian restaurant, and it is in the cheap price range. Table 1: Example realisations, generated when the user provided cuisine=Indian, and where the wizard has also selected the additional attribute price for presentation to the user. Spoken Dialogue Systems. After each task the user answered a questionnaire on a 6 point Likert scale, regarding the perceived generation quality in that task. 
The wizards’ IP strategies were highly ranked by the users on average (4.7), and users were able to select a restaurant in 98.6% of the cases. No significant difference between the wizards was observed. The data contains 2236 utterances in total: 1465 wizard utterances and 771 user utterances. We automatically extracted 81 features (e.g #sentences, #DBhits, #turns, #ellipsis)1 from the XML logfiles after each dialogue. Please see (Rieser et al., 2009) 1The full corpus and list of features is available at https://www.classic-project.org/corpora/ for more details. 3.2 NLG Realiser In the Wizard-of-Oz environment we implemented a NLG realiser for the chosen IP structures and attribute choices, in order to realise the wizards’ choices in real time. This generator is based on data from the stochastic sentence planner SPaRKy (Stent et al., 2004). We replicated the variation observed in SPaRKy by analysing high-ranking example outputs (given the highest possible score by the SPaRKy judges) and implemented the variance using dynamic sentence generation. The realisations vary in sentence aggregation, aggregation operators (e.g. ‘and’, period, or ellipsis), contrasts 1011 (e.g. ‘however’, ‘on the other hand’) and referring expressions (e.g. ‘it’, ‘this restaurant’) used. The length of an utterance also depends on the number of attributes chosen, i.e. the more attributes the longer the utterance. All of these variations were logged. In particular, we realised the following IP strategies (see examples in Table 1): • SUMMARY of all matching restaurants with or without a User Model (UM), following (Polifroni and Walker, 2008). The approach using a UM assumes that the user has certain preferences (e.g. cheap) and only tells him about the relevant items, whereas the approach with no UM lists all the matching items. • COMPARE the top 2 restaurants by Item (i.e. listing all the attributes for the first item and then for the other) or by Attribute (i.e. directly comparing the different attribute values). • RECOMMEND the top-ranking restaurant (according to UM). Note that there was no discernible pattern in the data about the wizards’ decisions between the UM/no UM and the byItem/byAttribute versions of the strategies. In this study we will therefore concentrate on the higher level decisions (SUMMARY vs. COMPARE vs. RECOMMEND) and model these different realisations as noise in the realiser. 3.3 Supervised Baseline strategy We analysed the WoZ data to explore the bestrated strategies (the top scoring 50%, n = 205) that were employed by humans for this task. Here we used a variety of Supervised Learning methods to create a model of the highly rated wizard behaviour. Please see (Rieser et al., 2009) for further details. The best performing method was Rule Induction (JRip). 2 The model achieved an accuracy of 43.19% which is significantly (p < .001) better than the majority baseline of always choosing SUMMARY (34.65%). 3 The resulting rule set is shown in Figure 3. 2The WEKA implementation of (Cohen, 1995)’s RIPPER. 3Note that the low accuracy is due to data sparsity and diverse behaviour of the wizards. However, in (Rieser et al., 2009) we show that this model is significantly different from the policy learned using the worse scoring 50%. 
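Read as an ordered JRip decision list (the first matching rule fires), the wizard model of Figure 3, shown immediately below, amounts to a small function; the function and argument names here are ours, not the system's.

```python
def wizard_baseline(db_hits, prev_nlg):
    """Supervised wizard baseline: the JRip rules of Figure 3, applied in order.
    db_hits  = number of database matches
    prev_nlg = previous NLG action (e.g. "summary", "summaryRecommend")"""
    if db_hits <= 9 and prev_nlg == "summary":
        return "compare"
    if db_hits == 1:
        return "recommend"
    if prev_nlg == "summaryRecommend" and db_hits >= 10:
        return "recommend"
    return "summary"  # default rule

print(wizard_baseline(26, "none"))    # -> summary  (large result set: summarise first)
print(wizard_baseline(5, "summary"))  # -> compare  (few hits after a summary)
```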
IF (dbHits <= 9)& (prevNLG = summary): THEN nlgStrategy=compare; IF (dbHits = 1): THEN nlgStrategy= Recommend; IF(prevNLG=summaryRecommend)&(dbHits>=10): THEN nlgStrategy= Recommend; ELSE nlgStrategy=summary; Figure 3: Rules learned by JRip for the wizard model (‘dbHits’= number of database matches, ‘prevNLG’= previous NLG action) The features selected by this model were only “high-level” features, i.e. the input (previous action, number of database hits) that an IP module receives as input from a Dialogue Manager (DM). We further analysed the importance of different features using feature ranking and selection methods (Rieser et al., 2009), finding that the human wizards in this specific setup did not pay significant attention to any lower level features, e.g. from surface realisation, although the generated output was displayed to them (see Figure 2). Nevertheless, note that the supervised model achieves up to 87.6% of the possible reward on this task, as we show in Section 5.2, and so can be considered a serious baseline against which to measure performance. Below, we will show that Reinforcement Learning (RL) produces a significant improvement over the strategies present in the original data, especially in cases where RL has access to “lower level” features of the context. 4 The Simulation / Learning Environment Here we “bootstrap” a simulated training environment from the WoZ data, following (Rieser and Lemon, 2008). 4.1 User Simulations User Simulations are commonly used to train strategies for Dialogue Management, see for example (Young et al., 2007). A user simulation for NLG is very similar, in that it is a predictive model of the most likely next user act. 4 However, this NLG predicted user act does not actually change the overall dialogue state (e.g. by filling slots) but it only changes the generator state. In other words, 4Similar to the internal user models applied in recent work on POMDP (Partially Observable Markov Decision Process) dialogue managers (Young et al., 2007; Henderson and Lemon, 2008; Gasic et al., 2008) for estimation of user act probabilities. 1012 the NLG user simulation tells us what the user is most likely to do next, if we were to stop generating now. We are most interested in the following user reactions: 1. select: the user chooses one of the presented items, e.g. “Yes, I’ll take that one.”. This reply type indicates that the Information Presentation was sufficient for the user to make a choice. 2. addInfo: The user provides more attributes, e.g. “I want something cheap.”. This reply type indicates that the user has more specific requests, which s/he wants to specify after being presented with the current information. 3. requestMoreInfo: The user asks for more information, e.g. “Can you recommend me one?”, “What is the price range of the last item?”. This reply type indicates that the system failed to present the information the user was looking for. 4. askRepeat: The user asks the system to repeat the same message again, e.g. “Can you repeat?”. This reply type indicates that the utterance was either too long or confusing for the user to remember, or the TTS quality was not good enough, or both. 5. silence: The user does not say anything. In this case it is up to the system to take initiative. 6. hangup: The user closes the interaction. We build user simulations using n-gram models of system (s) and user (u) acts, as first introduced by (Eckert et al., 1997). 
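A minimal sketch of such a bigram user simulation, P(a_u,t | IP_s,t), estimated by relative frequency from logged (system IP action, user reaction) pairs, is given below. The class and method names are ours, and the discounting methods discussed next would replace the raw relative frequencies used here.

```python
import random

# The six user reaction types described above.
REACTIONS = ["select", "addInfo", "requestMoreInfo", "askRepeat", "silence", "hangup"]

class BigramUserSim:
    """P(a_u,t | IP_s,t) estimated by counting which user reaction followed
    each system IP action in the corpus (no smoothing in this sketch)."""

    def __init__(self):
        self.counts = {}  # ip_action -> {reaction: count}

    def observe(self, ip_action, reaction):
        row = self.counts.setdefault(ip_action, {r: 0 for r in REACTIONS})
        row[reaction] += 1

    def sample(self, ip_action):
        row = self.counts[ip_action]
        return random.choices(REACTIONS, weights=[row[r] for r in REACTIONS])[0]

# After training on the WoZ logs, e.g. sim.sample("SUMMARY") returns "addInfo",
# "select", etc. with their corpus-estimated probabilities.
```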
In order to account for data sparsity, we apply different discounting (“smoothing”) techniques including back-off, using the CMU Statistical Language Modelling toolkit (Clarkson and Rosenfeld, 1997). We construct a bi-gram model5 for the users’ reactions to the system’s IP structure decisions (P(au,t|IPs,t)), and a tri-gram (i.e. IP structure + attribute choice) model for predicting user reactions to the system’s combined IP structure and attribute selection decisions: P(au,t|IPs,t, attributess,t). 5Where au,t is the predicted next user action at time t, IPs,t was the system’s Information Presentation action at t, and attributess,t is the attributes selected by the system at t. We evaluate the performance of these models by measuring dialogue similarity to the original data, based on the Kullback-Leibler (KL) divergence, as also used by, e.g. (Cuay´ahuitl et al., 2005; Jung et al., 2009; Janarthanam and Lemon, 2009). We compare the raw probabilities as observed in the data with the probabilities generated by our n-gram models using different discounting techniques for each context, see table 2. All the models have a small divergence from the original data (especially the bi-gram model), suggesting that they are reasonable simulations for training and testing NLG policies. The absolute discounting method for the bigram model is most dissimilar to the data, as is the WittenBell method for the tri-gram model, i.e. the models using these discounting methods have the highest KL score. The best performing methods (i.e. most similar to the original data), are linear discounting for the bi-gram model and GoodTuring for the tri-gram. We use the most similar user models for system training, and the most dissimilar user models for testing NLG policies, in order to test whether the learned policies are robust and adaptive to unseen dialogue contexts. discounting method bi-gram US tri-gram US WittenBell 0.086 0.512 GoodTuring 0.086 0.163 absolute 0.091 0.246 linear 0.011 0.276 Table 2: Kullback-Leibler divergence for the different User Simulations (US) 4.2 Database matches and “Focus of attention” An important task of Information Presentation is to support the user in choosing between all the available items (and ultimately in selecting the most suitable one) by structuring the current information returned from the database, as explained in Section 1.1. We therefore model the user’s “focus of attention” as a feature in our learning experiments. This feature reflects how the different IP strategies structure information with different numbers of attributes. We implement this shift of the user’s focus analogously to discovering the user’s goal in Dialogue Management: every time the predicted next user act is to add in1013 formation (addInfo), we infer that the user is therefore only interested in a subset of the previously presented results and so the system will focus on this new subset of database items in the rest of the generated utterance. For example, the user’s focus after the SUMMARY (with UM) in Table 1 is DBhits = 10, since the user is only interested in cheap, Indian places. 4.3 Data-driven Reward function The reward/evaluation function is constructed from the WoZ data, using a stepwise linear regression, following the PARADISE framework (Walker et al., 2000). This model selects the features which significantly influenced the users’ ratings for the NLG strategy in the WoZ questionnaire. 
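The similarity scores in Table 2 above are Kullback-Leibler divergences between the reaction distributions observed in the data and those produced by the n-gram models. A minimal sketch of that comparison follows; the example distributions are invented for illustration, and the treatment of zero model probabilities (a small epsilon) is our assumption.

```python
from math import log

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions given as dicts over the same
    events (here: user reaction types in one context); smaller = more similar."""
    return sum(p[e] * log(p[e] / max(q.get(e, 0.0), eps))
               for e in p if p[e] > 0.0)

# Invented example: empirical vs. simulated reactions after a SUMMARY.
p_data  = {"select": 0.20, "addInfo": 0.50, "requestMoreInfo": 0.20, "askRepeat": 0.10}
p_model = {"select": 0.25, "addInfo": 0.45, "requestMoreInfo": 0.20, "askRepeat": 0.10}
print(kl_divergence(p_data, p_model))  # ~0.008: the simulation is close to the data
```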
We also assign a value to the user’s reactions (valueUserReaction), similar to optimising task success for DM (Young et al., 2007). This reflects the fact that good IP strategies should help the user to select an item (valueUserReaction = +100) or provide more constraints addInfo (valueUserReaction = ±0), but the user should not do anything else (valueUserReaction = −100). The regression in equation 1 (R2 = .26) indicates that users’ ratings are influenced by higher level and lower level features: Users like to be focused on a small set of database hits (where #DBhits ranges over [1-100]), which will enable them to choose an item (valueUserReaction), while keeping the IP utterances short (where #sentence is in the range [2-18]): Reward = (−1.2) × #DBhits (1) +(.121) × valueUserReaction −(1.43) × #sentence Note that the worst possible reward for an NLG move is therefore (−1.20×100)−(.121×100)− (18 × 1.43) = −157.84. This is achieved by presenting 100 items to the user in 18 sentences6, in such a way that the user ends the conversation unsuccessfully. The top possible reward is achieved in the rare cases where the system can immediately present 1 item to the user using just 2 sentences, and the user then selects that item, i.e. Reward = −(1.20×1)+(.121×100)−(2×1.43) = 8.06 6Note that the maximum possible number of sentences generated by the realizer is 18 for the full IP sequence SUMMARY+COMPARE+RECOMMEND using all the attributes. 5 Reinforcement Learning experiments We now formulate the problem as a Markov Decision Process (MDP), where states are NLG dialogue contexts and actions are NLG decisions. Each state-action pair is associated with a transition probability, which is the probability of moving from state s at time t to state s′ at time t+1 after having performed action a when in state s. This transition probability is computed by the environment model (i.e. the user simulation and realiser), and explicitly captures the uncertainty in the generation environment. This is a major difference to other non-statistical planning approaches. Each transition is also associated with a reinforcement signal (or “reward”) rt+1 describing how good the result of action a was when performed in state s. The aim of the MDP is to maximise long-term expected reward of its decisions, resulting in a policy which maps each possible state to an appropriate action in that state. We treat IP as a hierarchical joint optimisation problem, where first one of the IP structures (13) is chosen and then the number of attributes is decided, as shown in Figure 4. At each generation step, the MDP can choose 1-5 attributes (e.g. cuisine, price range, location, food quality, and/or service quality). Generation stops as soon as the user is predicted to select an item, i.e. the IP task is successful. (Note that the same constraint is operational for the WoZ baseline.)   ACTION:  IP:    SUMMARY COMPARE RECOMMEND     attr: 1-5   STATE:   attributes: 1-15 sentence: 2-18 dbHitsFocus: 1-100 userSelect: 0,1 userAddInfo: 0,1 userElse: 0,1     Figure 4: State-Action space for the RL-NLG problem States are represented as sets of NLG dialogue context features. The state space comprises “lower-level” features about the realiser behaviour (two discrete features representing the number of attributes and sentences generated so far) and three binary features representing the user’s predicted next action, as well as “high-level” features pro1014 vided by the DM (e.g. 
current database hits in the user’s focus (dbHitsFocus)). We trained the policy using the SHARSHA algorithm (Shapiro and Langley, 2002) with linear function approximation (Sutton and Barto, 1998), and the simulation environment described in Section 4. The policy was trained for 60,000 iterations. 5.1 Experimental Set-up We compare the learned strategies against the WoZ baseline as described in Section 3.3. For attribute selection we choose a majority baseline (randomly choosing between 3 or 4 attributes) since the attribute selection models learned by Supervised Learning on the WoZ data didn’t show significant improvements. For training, we used the user simulation model most similar to the data, see Section 4.1. For testing, we test with the different user simulation model (the one which is most dissimilar to the data). We first investigate how well IP structure (without attribute choice) can be learned in increasingly complex generation scenarios. A generation scenario is a combination of a particular kind of NLG realiser (template vs. stochastic) along with different levels of variation introduced by certain features of the dialogue context. In general, the stochastic realiser introduces more variation in lower level features than the template-based realiser. The Focus model introduces more variation with respect to #DBhits and #attributes as described in Section 4.2. We therefore investigate the following cases: 1.1. IP structure choice, Template realiser: Predicted next user action varies according to the bi-gram model (P(au,t|IPs,t)); Number of sentences and attributes per IP strategy is set by defaults, reflecting a template-based realiser. 1.2. IP structure choice, Stochastic realiser: IP structure where number of attributes per NLG turn is given at the beginning of each episode (e.g. set by the DM); Sentence generation according to the SPaRKy stochastic realiser model as described in Section 3.2. We then investigate different scenarios for jointly optimising IP structure (IPS) and attribute selection (Attr) decisions. 2.1. IPS+Attr choice, Template realiser: Predicted next user action varies according to tri-gram (P(au,t|IPs,t, attributess,t)) model; Number of sentences per IP structure set to default. 2.2. IPS+Attr choice, Template realiser+Focus model: Tri-gram user simulation with Template realiser and Focus of attention model with respect to #DBhits and #attributes as described in section 4.2. 2.3. IPS+Attr choice, Stochastic realiser: Trigram user simulation with sentence/attribute relationship according to Stochastic realiser as described in Section 3.2. 2.4. IPS+Attr choice, Stochastic realiser+Focus: i.e. the full model = Predicted next user action varies according to tri-gram model+ Focus of attention model + Sentence/attribute relationship according to stochastic realiser. 5.2 Results We compare the average final reward (see Equation 1) gained by the baseline against the trained RL policies in the different scenarios for each 1000 test runs, using a paired samples t-test. The results are shown in Table 3. In 5 out of 6 scenarios the RL policy significantly (p < .001) outperforms the supervised baseline. We also report on the percentage of the top possible reward gained by the individual policies, and the raw percentage improvement of the RL policy. Note that the best possible (100%) reward can only be gained in rare cases (see Section 4.3). The learned RL policies show that lower level features are important in gaining significant improvements over the baseline. 
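Since all comparisons in this section are in terms of the reward of Equation 1, a small sketch of that computation, and of the bounds quoted in Section 4.3, may be helpful; the function name is ours.

```python
def reward(db_hits, user_reaction_value, n_sentences):
    """Equation 1: -1.2 * #DBhits + 0.121 * valueUserReaction - 1.43 * #sentences,
    with db_hits in [1, 100], user_reaction_value in {+100, 0, -100} and
    n_sentences in [2, 18]."""
    return -1.2 * db_hits + 0.121 * user_reaction_value - 1.43 * n_sentences

print(reward(100, -100, 18))  # -157.84: worst case quoted in Section 4.3
print(reward(1, 100, 2))      # ~8.04: best case (given as 8.06 in Section 4.3)
```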
The more complex the scenario, the harder it is to gain higher rewards for the policies in general (as more variation is introduced), but the relative improvement in rewards also increases with complexity: the baseline does not adapt well to the variations in lower level features whereas RL learns to adapt to the more challenging scenarios. 7 An overview of the range of different IP strategies learned for each setup can be found in Table 4. Note that these strategies are context-dependent: the learner chooses how to proceed dependent on 7Note, that the baseline does reasonably well in scenarios with variation introduced by only higher level features (e.g. scenario 2.2). 1015 Scenario Wizard Baseline average Reward RL average Reward RL % - Baseline % = % improvement 1.1 -15.82(±15.53) -9.90***(±15.38) 89.2% - 85.6%= 3.6% 1.2 -19.83(±17.59) -12.83***(±16.88) 87.4% - 83.2%= 4.2% 2.1 -12.53(±16.31) -6.03***(±11.89) 91.5% - 87.6%= 3.9% 2.2 -14.15(±16.60) -14.18(±18.04) 86.6% - 86.6%= 0.0% 2.3 -17.43(±15.87) -9.66***(±14.44) 89.3% - 84.6%= 4.7% 2.4 -19.59(±17.75) -12.78***(±15.83) 87.4% - 83.3%= 4.1% Table 3: Test results for 1000 dialogues, where *** denotes that the RL policy is significantly (p < .001) better than the Baseline policy. the features in the state space at each generation step. Scenario strategies learned 1.1 RECOMMEND COMPARE COMPARE+RECOMMEND SUMMARY SUMMARY+COMPARE SUMMARY+RECOMMEND SUMMARY+COMPARE+RECOMMEND. 1.2 RECOMMEND COMPARE COMPARE+RECOMMEND SUMMARY SUMMARY+COMPARE SUMMARY+RECOMMEND SUMMARY+COMPARE+RECOMMEND. 2.1 RECOMMEND(5) SUMMARY(2) SUMMARY(2)+COMPARE(4) SUMMARY(2)+COMPARE(1) SUMMARY(2)+COMPARE(4)+RECOMMEND(5) SUMMARY(2)+COMPARE(1)+RECOMMEND(5) 2.2 RECOMMEND(5) SUMMARY(4) SUMMARY(4)+RECOMMEND(5) 2.3 RECOMMEND(2) SUMMARY(1) SUMMARY(1)+COMPARE(4) SUMMARY(1)+COMPARE(1) SUMMARY(1)+COMPARE(4)+RECOMMEND(2) 2.4 RECOMMEND(2) SUMMARY(2) SUMMARY(2)+COMPARE(4) SUMMARY(2)+RECOMMEND(2) SUMMARY(2)+COMPARE(4)+RECOMMEND(2) SUMMARY(2)+COMPARE(1)+RECOMMEND(2) Table 4: RL strategies learned for the different scenarios, where (n) denotes the number of attributes generated. For example, the RL policy for scenario 1.1 learned to start with a SUMMARY if the initial number of items returned from the database is high (>30). It will then stop generating if the user is predicted to select an item. Otherwise, it continues with a RECOMMEND. If the number of database items is low, it will start with a COMPARE and then continue with a RECOMMEND, unless the user selects an item. Also see Table 4. Note that the WoZ strategy behaves as described in Figure 3. In addition, the RL policy for scenario 1.2 learns to adapt to a more complex scenario: the number of attributes requested by the DM and produced by the stochastic sentence realiser. It learns to generate the whole sequence (SUMMARY+COMPARE+RECOMMEND) if #attributes is low (<3), because the overall generated utterance (final #sentences) is still relatively short. Otherwise the policy is similar to the one for scenario 1.1. The RL policies for jointly optimising IP strategy and attribute selection learn to select the number of attributes according to the generation scenarios 2.1-2.4. For example, the RL policy learned for scenario 2.1 generates a RECOMMEND with 5 attributes if the database hits are low (<13). Otherwise, it will start with a SUMMARY using 2 attributes. 
If the user is predicted to narrow down his focus after the SUMMARY, the policy continues with a COMPARE using 1 attribute only, otherwise it helps the user by presenting 4 attributes. It then continues with RECOMMEND(5), and stops as soon as the user is predicted to select one item. The learned policy for scenario 2.1 generates 5.85 attributes per NLG turn on average (i.e. the cumulative number of attributes generated in the whole NLG sequence, where the same attribute may be repeated within the sequence). This strategy primarily adapts to the variations from the user simulation (tri-gram model). For scenario 2.2 the average number of attributes is higher (7.15) since the number of attributes helps to narrow down the user’s focus via the DBhits/attribute relationship specified in section 4.2. For scenario 2.3 fewer attributes are generated on average (3.14), since here the number of attributes influences the sentence realiser, i.e. fewer attributes results in fewer sentences, but does not impact the user’s focus. In scenario 2.4 all the conditions mentioned above influence the learned policy. The average number of attributes selected is still low (3.19). In comparison, the average (cumulative) num1016 ber of attributes for the WoZ baseline is 7.10. The WoZ baseline generates all the possible IP structures (with 3 or 4 attributes) but is restricted to use only “high-level” features (see Figure 3). By beating this baseline we show the importance of the “lower-level” features. Nevertheless, this wizard policy achieves up to 87.6% of the possible reward on this task, and so can be considered a serious baseline against which to measure performance. The only case (scenario 2.2) where RL does not improve significantly over the baseline is where lower level features do not play an important role for learning good strategies: scenario 2.2 is only sensitive to higher level features (DBhits). 6 Conclusion We have presented a new data-driven method for Information Presentation (IP) in Spoken Dialogue Systems using a statistical optimisation framework for content structure planning and attribute selection. This work is the first to apply a datadriven optimisation method to the IP decision space, and to show the utility of both lower-level and higher-level features for this problem. We collected data in a Wizard-of-Oz (WoZ) experiment and showed that human “wizards” mostly pay attention to ‘high-level’ features from Dialogue Management. The WoZ data was used to build statistical models of user reactions to IP strategies, and a data-driven reward function for Reinforcement Learning (RL). We show that lower level features significantly influence users’ ratings of IP strategies. We compared a model of human behaviour (the ‘human wizard baseline’) against policies optimised using Reinforcement Learning, in a variety of scenarios. Our optimised policies significantly outperform the IP structuring and attribute selection present in the WoZ data, especially when performing in complex generation scenarios which require adaptation to, e.g. number of database results, utterance length, etc. While the human wizards were able to attain up to 87.6% of the possible reward on this task, the RL policies are significantly better in 5 out of 6 scenarios, gaining up to 91.5% of the total possible reward. We have also shown that adding predictive “lower level” features, e.g. from the NLG realiser and a user reaction model, is important for learning optimal IP strategies according to user preferences. 
Future work could include the predicted TTS quality (Boidin et al., 2009) as a feature. We are now working on testing the learned policies with real users, outside of laboratory conditions, using a restaurant-guide SDS, deployed as a VOIP service. Previous work in SDS has shown that results for Dialogue Management obtained with simulated users are able to transfer to evaluations with real users (Lemon et al., 2006). This methodology provides new insights into the nature of the IP problem, which has previously been treated as a module following dialogue management with no access to lower-level context features. The data-driven planning method applied here promises a significant upgrade in the performance of generation modules, and thereby of Spoken Dialogue Systems in general. Acknowledgments The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project.org) and from the EPSRC, project no. EP/G069840/1. References Cedric Boidin, Verena Rieser, Lonneke van der Plas, Oliver Lemon, and Jonathan Chevelu. 2009. Predicting how it sounds: Re-ranking alternative inputs to TTS using latent variables (forthcoming). In Proc. of Interspeech/ICSLP, Special Session on Machine Learning for Adaptivity in Spoken Dialogue Systems. Grace Chung. 2004. Developing a flexible spoken dialog system using simulation. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL). P.R. Clarkson and R. Rosenfeld. 1997. Statistical Language Modeling Using the CMU-Cambridge Toolkit. In Proc. of ESCA Eurospeech. William W. Cohen. 1995. Fast effective rule induction. In Proceedings of the 12th International Conference on Machine Learning (ICML). Heriberto Cuay´ahuitl, Steve Renals, Oliver Lemon, and Hiroshi Shimodaira. 2005. Human-computer dialogue simulation using hidden markov models. In Proc. of the IEEE workshop on Automatic Speech Recognition and Understanding (ASRU). Vera Demberg and Johanna D. Moore. 2006. Information presentation in spoken dialogue systems. In Proceedings of EACL. 1017 W. Eckert, E. Levin, and R. Pieraccini. 1997. User modeling for spoken dialogue system evaluation. In Proc. of the IEEE workshop on Automatic Speech Recognition and Understanding (ASRU). M. Gasic, S. Keizer, F. Mairesse, J. Schatzmann, B. Thomson, and S. Young. 2008. Training and Evaluation of the HIS POMDP Dialogue System in Noise. In Proc. of SIGdial Workshop on Discourse and Dialogue. James Henderson and Oliver Lemon. 2008. Mixture Model POMDPs for Efficient Handling of Uncertainty in Dialogue Management. In Proc. of ACL. Srinivasan Janarthanam and Oliver Lemon. 2009. A Two-tier User Simulation Model for Reinforcement Learning of Adaptive Referring Expression Generation Policies. In Proc. of SIGdial. Srini Janarthanam and Oliver Lemon. 2010. Learning to adapt to unknown users: Referring expression generation in spoken dialogue systems. In Proceedings of ACL. Sangkeun Jung, Cheongjae Lee, Kyungduk Kim, Minwoo Jeong, and Gary Geunbae Lee. 2009. Datadriven user simulation for automated evaluation of spoken dialog systems. Computer, Speech & Language, 23:479–509. Alexander Koller and Ronald Petrick. 2008. Experiences with planning for natural language generation. In ICAPS. Oliver Lemon, Kallirroi Georgila, and James Henderson. 2006. Evaluating Effectiveness and Portability of Reinforcement Learned Dialogue Strategies with real users: the TALK TownInfo Evaluation. 
In IEEE/ACL Spoken Language Technology. Oliver Lemon. 2008. Adaptive Natural Language Generation in Dialogue using Reinforcement Learning. In Proceedings of SEMdial. Oliver Lemon. 2010. Learning what to say and how to say it: joint optimization of spoken dialogue management and Natural Language Generation. Computer, Speech & Language, to appear. Xingkun Liu, Verena Rieser, and Oliver Lemon. 2009. A wizard-of-oz interface to study information presentation strategies for spoken dialogue systems. In Proc. of the 1st International Workshop on Spoken Dialogue Systems. Crystal Nakatsu. 2008. Learning contrastive connectives in sentence realization ranking. In Proc. of SIGdial Workshop on Discourse and Dialogue. Joseph Polifroni and Marilyn Walker. 2006. Learning database content for spoken dialogue system design. In Proc. of the IEEE/ACL workshop on Spoken Language Technology (SLT). Joseph Polifroni and Marilyn Walker. 2008. Intensional Summaries as Cooperative Responses in Dialogue Automation and Evaluation. In Proceedings of ACL. Verena Rieser and Oliver Lemon. 2008. Learning Effective Multimodal Dialogue Strategies from Wizard-of-Oz data: Bootstrapping and Evaluation. In Proc. of ACL. Verena Rieser and Oliver Lemon. 2009. Natural Language Generation as Planning Under Uncertainty for Spoken Dialogue Systems. In Proc. of EACL. Verena Rieser, Xingkun Liu, and Oliver Lemon. 2009. Optimal Wizard NLG Behaviours in Context. Technical report, Deliverable 4.2, CLASSiC Project. Dan Shapiro and P. Langley. 2002. Separating skills from preference: Using learning to program by reward. In Proc. of the 19th International Conference on Machine Learning (ICML). Amanda Stent, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Association for Computational Linguistics. R. Sutton and A. Barto. 1998. Reinforcement Learning. MIT Press. Kees van Deemter. 2009. What game theory can do for NLG: the case of vague language. In 12th European Workshop on Natural Language Generation (ENLG). Marilyn A. Walker, Candace A. Kamm, and Diane J. Litman. 2000. Towards developing general models of usability with PARADISE. Natural Language Engineering, 6(3). M. Walker, R. Passonneau, and J. Boland. 2001. Quantitative and qualitative evaluation of DARPA Communicator spoken dialogue systems. In Proc. of the Annual Meeting of the Association for Computational Linguistics (ACL). Marilyn Walker, Amanda Stent, Franc¸ois Mairesse, and Rashmi Prasad. 2007. Individual and domain adaptation in sentence planning for dialogue. Journal of Artificial Intelligence Research (JAIR), 30:413–456. Steve Whittaker, Marilyn Walker, and Johanna Moore. 2002. Fish or Fowl: A Wizard of Oz evaluation of dialogue strategies in the restaurant domain. In Proc. of the International Conference on Language Resources and Evaluation (LREC). Andi Winterboer, Jiang Hu, Johanna D. Moore, and Clifford Nass. 2007. The influence of user tailoring and cognitive load on user performance in spoken dialogue systems. In Proc. of the 10th International Conference of Spoken Language Processing (Interspeech/ICSLP). SJ Young, J Schatzmann, K Weilhammer, and H Ye. 2007. The Hidden Information State Approach to Dialog Management. In ICASSP 2007. 1018
2010
103
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1019–1029, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Combining data and mathematical models of language change Morgan Sonderegger University of Chicago Chicago, IL, USA. [email protected] Partha Niyogi University of Chicago Chicago, IL, USA. [email protected] Abstract English noun/verb (N/V) pairs (contract, cement) have undergone complex patterns of change between 3 stress patterns for several centuries. We describe a longitudinal dataset of N/V pair pronunciations, leading to a set of properties to be accounted for by any computational model. We analyze the dynamics of 5 dynamical systems models of linguistic populations, each derived from a model of learning by individuals. We compare each model’s dynamics to a set of properties observed in the N/V data, and reason about how assumptions about individual learning affect population-level dynamics. 1 Introduction The fascinating phenomena of language evolution and language change have inspired much work from computational perspectives in recent years. Research in this field considers populations of linguistic agents, and asks how the population dynamics are related to the behavior of individual agents. However, most such work makes little contact with empirical data (de Boer and Zuidema, 2009).1 As pointed out by Choudhury (2007), most computational work on language change deals with data from cases of change either not at all, or at a relatively high level.2 Recent computational work has addressed “real world” data from change in several languages (Mitchener, 2005; Choudhury et al., 2006; Choudhury et al., 2007; Pearl and Weinberg, 2007; Daland et al., 2007; Landsbergen, 2009). In the same 1However, among language evolution researchers there has been significant recent interest in behavioral experiments, using the “iterated learning” paradigm (Griffiths and Kalish, 2007; Kalish et al., 2007; Kirby et al., 2008). 2We do not review the literature on computational studies of change due to space constraints; see (Baker, 2008; Wang et al., 2005; Niyogi, 2006) for reviews. spirit, we use data from an ongoing stress shift in English noun/verb (N/V) pairs. Because stress has been listed in dictionaries for several centuries, we are able to trace stress longitudinally and at the level of individual words, and observe dynamics significantly more complicated than in changes previously considered in the computational literature. In §2, we summarize aspects of the dynamics to be accounted for by any computational model of the stress shift. We also discuss proposed sources of these dynamics from the literature, based on experimental work by psychologists and linguists. In §3–4, we develop models in the mathematical framework of dynamical systems (DS), which over the past 15 years has been used to model the interaction between language learning and language change in a variety of settings (Niyogi and Berwick, 1995; Niyogi and Berwick, 1996; Niyogi, 2006; Komarova et al., 2001; Yang, 2001; Yang, 2002; Mitchener, 2005; Pearl and Weinberg, 2007). We interpret 6 aspects of the N/V stress dynamics in DS terms; this gives a set of 6 desired properties to which any DS model’s dynamics can be compared. 
We consider 5 models of language learning by individuals, based on the experimental findings relevant to the N/V stress shift, and evaluate the population-level dynamics of the dynamical system model resulting from each against the set of desired properties. We are thus able to reason about which theories of the source of language change — considered as hypotheses about how individuals learn — lead to the populationlevel patterns observed in change. 2 Data: English N/V pairs The data considered here are the stress patterns of English homographic, disyllabic noun/verb pairs (Table 1); we refer to these throughout as “N/V pairs”. Each of the N and V forms of a pair can have initial (´σσ: c´onvict, n.) or final (σ´σ: conv´ıct, 1019 N V {1, 1} ´σσ ´σσ (exile, anchor, fracture) {1, 2} ´σσ σ´σ (consort, protest, refuse) {2, 2} σ´σ σ´σ (cement, police, review) Table 1: Attested N/V pair stress patterns. v.) stress. We use the notation {Nstress,Vstress} to denote the stress of an N/V pair, with 1=´σσ, 2=σ´σ. Of the four logically possible stress patterns, all current N/V pairs follow one of the 3 patterns shown in Table 1: {1,1}, {1,2}, {2,2}.3 No pair follows the fourth possible pattern, {2,1}. N/V pairs have been undergoing variation and change between these 3 patterns since Middle English (ME, c. 1066-1470), especially change to {1,2}. The vast majority of stress shifts occurred after 1570 (Minkova, 1997), when the first dictionary listing English word stresses was published (Levens, 1570). Many dictionaries from the 17th century on list word stresses, making it possible to trace change in the stress of individual N/V pairs in considerable detail. 2.1 Dynamics Expanding on dictionary pronunciation data collected by Sherman (1975) for the period 1570– 1800, we have collected a corpus of pronunciations of 149 N/V pairs, as listed in 62 British dictionaries, published 1570–2007. Variation and change in N/V pair stress can be visualized by plotting stress trajectories: the moving average of N and V stress vs. time for a given pair. Some examples are shown in Fig. 1. The corpus is described in detail in (Sonderegger and Niyogi, 2010); here we summarize the relevant facts to be accounted for in a computational model.4 Change Four types of clear-cut change between the three stress patterns are observed: {2,2}→{1,2} (Fig.1(a)) {1,2}→{1,1} {1,1}→{1,2} (Fig. 1(b)) {1,2}→{2,2} However, change to {1,2} is much more common than change from {1,2}; in particular, {2,2}→{1,2} is the most common change. When 3However, as variation and change in N/V pair stress is ongoing, a few pairs (e.g. perfume) currently have variable stress. By “stress”, we always mean “primary stress”. All present-day pronunciations are for British English, from CELEX (Baayen et al., 1996). 4The corpus is available on the first author’s home page (currently, people.cs.uchicago.edu/˜morgan). change occurs, it is often fairly sudden, as in Figs. 1(a), 1(b). Finally, change never occurs directly between {1,1} and {2,2}. Stability Previous work on stress in N/V pairs (Sherman, 1975; Phillips, 1984) has emphasized change, in particular {2,2}→{1,2} (the most common change). However, an important aspect of the diachronic dynamics of N/V pairs is stability: most N/V pairs do not show variation or change. The 149 N/V pairs, used both in our corpus and in previous work, were chosen by Sherman (1975) as those most likely to have undergone change, and thus are not suitable for studying how stable the three attested stress patterns are. 
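The trajectories in Fig. 1 are moving averages of dictionary-listed stress (1=´σσ, 2=σ´σ) over a 60-year window. A minimal sketch of such a computation is given below; the (year, stress) input format and the centring of the window are our assumptions, not a description of the actual corpus code, and the example data are invented.

```python
def stress_trajectory(observations, window=60):
    """Moving-average stress trajectory for one form (N or V) of an N/V pair.
    observations: (year, stress) pairs from successive dictionaries, with
    stress coded 1 (initial) or 2 (final).  Each output point averages all
    observations within +/- window/2 years of an observation year."""
    traj = []
    for y in sorted({yy for yy, _ in observations}):
        vals = [s for yy, s in observations if abs(yy - y) <= window / 2]
        traj.append((y, sum(vals) / len(vals)))
    return traj

# A verb form shifting from initial to final stress around 1800 (invented data):
obs = [(1755, 1), (1775, 1), (1791, 1), (1805, 2), (1820, 2), (1847, 2)]
print(stress_trajectory(obs))
# [(1755, 1.0), (1775, 1.25), (1791, 1.5), (1805, 1.5), (1820, 1.75), (1847, 2.0)]
```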
In a random sample of N/V pairs (not the set of 149) in use over a fixed time period (1700–2007), we find that only 12% have shown variation or change in stress (Sonderegger and Niyogi, 2010). Most pairs maintain the {1,1}, {2,2}, or {1,2} stress pattern for hundreds of years. A model of the diachronic dynamics of N/V pair stress must explain how it can be the case both that some pairs show variation and change, and that many do not. Variation N/V pair stress patterns show both synchronic and diachronic variation. Synchronically, there is variation at the population level in the stress of some N/V pairs at any given time; this is reflected by the inclusion of more than one pronunciation for some N/V pairs in many dictionaries. An important question for modeling is whether there is variation within individual speakers. We show in (Sonderegger and Niyogi, 2010) that there is, for present-day American English speakers, using a corpus of radio speech. For several N/V pairs which have currently variable pronunciation, 1/3 of speakers show variation in the stress of the N form. Metrical evidence from poetry suggests that individual variation also existed in the past; the best evidence is for Shakespeare, who shows variation in the stress of over 20 N/V pairs (Kökeritz, 1953). Diachronically, a relevant question for modeling is whether all variation is short-lived, or whether stable variation is possible. A particular type of stable variation is in fact observed relatively often in the corpus: either the N or V form stably vary (Fig. 1(c)), but not both at once. Stable variation where both N and V forms vary almost never occurs (Fig. 1(d)). Figure 1: Example N/V pair stress trajectories: (a) concert, (b) combat, (c) exile, (d) rampage. Moving averages (60-year window) of stress placement (1=´σσ, 2=σ´σ). Solid lines=nouns, dashed lines=verbs. Frequency dependence Phillips (1984) hypothesizes that N/V pairs with lower frequencies (summed N+V word frequencies) are more likely to change to {1,2}. Sonderegger (2010) shows that this is the case for the most common change, {2,2}→{1,2}: among N/V pairs which were {2,2} in 1700 and are either {2,2} or {1,2} today, those which have undergone change have significantly lower frequencies, on average, than those which have not. In (Sonderegger and Niyogi, 2010), we give preliminary evidence from real-time frequency trajectories (for <10 N/V pairs) that it is not lower frequency per se which triggers change to {1,2}, but falling frequency. For example, change in combat from {1,1}→{1,2} around 1800 (Fig. 1(b)) coincides with falling word frequency from 1775–present. 2.2 Sources of change The most salient facts about English N/V pair stress are that (a) change is most often to {1,2} (b) the {2,1} pattern never occurs. We summarize two types of explanation for these facts from the experimental literature, each of which exemplifies a commonly-proposed type of explanation for phonological change. In both cases, there is experimental evidence for biases in present-day English speakers reflecting (a–b).
We assume that these biases have been active over the course of the N/V stress shift, and can thus be seen as possible sources of the diachronic dynamics of N/V pairs.5 5This type of assumption is necessary for any hypothesis about the sources of a completed or ongoing change, based on present-day experimental evidence, and is thus common in the literature. In the case of N/V pairs, it is implicitly made in Kelly’s (1988 et seq) account, discussed below. Both biases discussed here stem from facts about English (Ross’ Generalization; rhythmic context) that we believe have not changed over the time period considered here (≈1600–present), based on general accounts of English historical phonology during this period (Lass, 1992; MacMahon, 1998). We leave more careful verification of this claim to future work. Analogy/Lexicon In historical linguistics, analogical changes are those which make “...related forms more similar to each other in their phonetic (and morphological) structure” (Hock, 1991).6 Proposed causes for analogical change thus often involve a speaker’s production and perception of a form being influenced by similar forms in their lexicon. The English lexicon shows a broad tendency, which we call Ross’ generalization, which could be argued to be driving analogical change to {1,2}, and acting against the unobserved stress pattern {2,1}: “primary stress in English nouns is farther to the left than primary stress in English verbs” (Ross, 1973). Change to {1,2} could be seen as motivated by Ross’ generalization, and {2,1} made impossible by it. The argument is lent plausibility by experimental evidence that Ross’ Generalization is reflected in production and perception. English listeners strongly prefer the typical stress pattern (N=´σσ or V=σ´σ) in novel English disyllables (Guion et al., 2003), and process atypical disyllables (N=σ´σ or V=´σσ) more slowly than typical ones (Arciuli and Cupples, 2003). Mistransmission An influential line of research holds that many phonological changes are based in asymmetric transmission errors: because of articulatory or perceptual factors, listeners systematically mishear some sound α as β, but rarely mishear β as α.7 We call such effects mistransmission. Asymmetric mistransmission (by individu6“Forms” here means any linguistic unit; e.g. sounds, words, or paradigms, such as an N/V pair’s stress pattern. 7A standard example is final obstruent devoicing, a common change cross-linguistically. There are several articulatory and perceptual reasons why final voiced obstruents could be heard as unvoiced, but no motivation for the reverse process (final unvoiced obstruents heard as voiced) (Blevins, 2006). 1021 als) is argued to be a necessary condition for the change α→β at the population level, and an explanation for why the change α→β is common, while the change β→α is rarely (or never) observed. Mistransmission-based explanations were pioneered by Ohala (1981, et seq.), and are the subject of much recent work (reviewed by Hansson, 2008) For English N/V pairs, M. Kelly and collaborators have shown mistransmission effects which they propose are responsible for the directionality of the most common type of N/V pair stress shifts ({1,1}, {2,2}→{1,2}), based on “rhythmic context” (Kelly, 1988; Kelly and Bock, 1988; Kelly, 1989). Word stress is misperceived more often as initial in “trochaic-biasing” contexts, where the preceding syllable is weak or the following syllable is heavy; and more often as final in analogously “iambic-biasing” contexts. 
Nouns occur more frequently in trochaic contexts, and verbs more frequently in iambic contexts; there is thus pressure for the V forms of {1,1} pairs to be misperceived as σ´σ, and for the N forms of {2,2} pairs to be misperceived as ´σσ. 3 Modeling preliminaries We first describe assumptions and notation for models developed below (§4). Because of the evidence for within-speaker variation in N/V pair stress (§2.1), in all models described below, we assume that what is learned for a given N/V pair are the probabilities of using the σ´σ form for the N and V forms. We also make several simplifying assumptions. There are discrete generations Gt, and learners in Gt learn from Gt−1. Each example a learner in Gt hears is equally likely to come from any member of Gt−1. Each learner receives an identical number of examples, and each generation has infinitely many members. These are idealizations, adopted here to keep models simple enough to analyze; the effects of relaxing some of these assumptions have been explored by Niyogi (2006) and Sonderegger (2009). The infinite-population assumption in particular makes the dynamics fully deterministic; this rules out the possibility of change due to drift (or sample variation), where a form disappears from the population because no examples of it are encountered by learners in Gt in the input from Gt−1. Notation For a fixed N/V pair, a learner in Gt hears N1 examples of the N form, of which kt 1 are σ´σ and (N1-kt 1) are ´σσ; N2 and kt 2 are similarly defined for V examples. Each example is sampled i.i.d. from a random member of Gt−1. The Ni are fixed (each learner hears the same number of examples), while the kt i are random variables (over learners in Gt). Each learner applies an algorithm A to the N1+N2 examples to learn ˆαt, ˆβt ∈[0, 1], the probabilities of producing N and V examples as σ´σ. αt, βt are the expectation of ˆαt and ˆβt over members of Gt: αt = E(ˆαt), βt = E(ˆβt). ˆαt and ˆβt are thus random variables (over learners in Gt), while αt, βt ∈[0, 1] are numbers. Because learners in Gt draw examples at random from members of Gt−1, the distributions of ˆαt and ˆβt are determined by (αt−1, βt−1). (αt, βt), the expectations of ˆαt and ˆβt, are thus determined by (αt−1, βt−1) via an iterated map f: f : [0, 1]2 →[0, 1]2, f(αt, βt) = (αt+1, βt+1). 3.1 Dynamical systems We develop and analyze models of populations of language learners in the mathematical framework of (discrete) dynamical systems (DS) (Niyogi and Berwick, 1995; Niyogi, 2006). This setting allows us to determine the diachronic, population-level consequences of assumptions about the learning algorithm used by individuals, as well as assumptions about population structure or the input they receive. Because it is in general impossible to solve a given iterated map as a function of t, the dynamical systems viewpoint is to understand its longterm behavior by finding its fixed points and bifurcations: changes in the number and stability of fixed points as system parameters vary. Briefly, α∗is a fixed point (FP) of f if f(α∗) = α∗; it is stable if lim t→∞αt = α∗for α0 sufficiently near α∗, and unstable otherwise; these are also called stable states and unstable states. Intuitively, α∗is stable iff the system is stable under small perturbations from α∗.8 In the context of a linguistic population, change from state α (100% of the population uses {1,1}) to state β (100% of the population uses {1,2}) corresponds to a bifurcation, where some system parameter (N) passes a critical value (N0). 
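As a concrete illustration of these dynamical-systems notions, the short Python sketch below (an illustration, not code from the paper) iterates a generic map f on (α, β) and tests a candidate fixed point numerically by perturbing it slightly. The map f_example is a placeholder chosen only for illustration; the maps actually analyzed are derived in §4.

# Generic utilities for an iterated map f(alpha, beta) -> (alpha', beta') on [0,1]^2.
# The example map at the bottom is a stand-in, not one of the models in Section 4.

def iterate(f, state, steps=1000):
    # With infinitely many learners the population dynamics are deterministic,
    # so long-term behavior is found by applying the map repeatedly.
    for _ in range(steps):
        state = f(*state)
    return state

def is_stable(f, fixed_point, eps=1e-3, tol=1e-6, steps=2000):
    # Numerical stability test: perturb the fixed point slightly in each
    # direction and check whether the iterates return to it.
    a, b = fixed_point
    for da, db in [(eps, 0.0), (-eps, 0.0), (0.0, eps), (0.0, -eps)]:
        a0 = min(max(a + da, 0.0), 1.0)
        b0 = min(max(b + db, 0.0), 1.0)
        a_t, b_t = iterate(f, (a0, b0), steps)
        if abs(a_t - a) > tol or abs(b_t - b) > tol:
            return False
    return True

f_example = lambda a, b: (0.5 * a, 0.5 * b + 0.5)   # contracts toward (0, 1)
print(iterate(f_example, (0.9, 0.1)))               # approaches (0, 1)
print(is_stable(f_example, (0.0, 1.0)))             # True: (0, 1) is a stable fixed point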
For 8See (Strogatz, 1994; Hirsch et al., 2004) for introductions to dynamical systems in general, and (Niyogi, 2006) for the type of models considered here. 1022 N<N0, α is stable. For N>N0, α is unstable, and β is stable; this triggers change from α to β. 3.2 DS interpretation of observed dynamics Below, we describe 5 DS models of linguistic populations. To interpret whether each model has properties consistent with the N/V dataset, we translate the observations about the dynamics of N/V stress made above (§2.1) into DS terms. This gives a list of desired properties against which to evaluate the properties of each model. 1. ∗{2,1}: {2,1} is not a stable state. 2. Stability of {1,1}, {1,2}, {2,2}: These stress patterns correspond to stable states (for some system parameter values). 3. Observed stable variation: Stable states are possible (for some system parameter values) corresponding to variation in the N or V form, but not both. 4. Sudden change: Change from one stress pattern to another corresponds to a bifurcation, where the fixed point corresponding to the old stress pattern becomes unstable. 5. Observed changes: There are bifurcations corresponding to each of the four observed changes ({1,1} ⇀ ↽{1,2}, {2,2} ⇀ ↽{1,2}). 6. Observed frequency dependence: Change to {1,2} corresponds to a bifurcation in frequency (N), where {2,2} or {1,1} loses stability as N is decreased. 4 Models We now describe 5 DS models, each corresponding to a learning algorithm A used by individual language learners. Each A leads to an iterated map, f(αt, βt) = (αt+1, βt+1), which describes the state of the population of learners over successive generations. We give these evolution equations for each model, then discuss their dynamics, i.e. bifurcation structure. Each model’s dynamics are evaluated with respect to the set of desired properties corresponding to patterns observed in the N/V data. Derivations have been mostly omitted for reasons of space, but are given in (Sonderegger, 2009). The models differ along two dimensions, corresponding to assumptions about the learning algorithm (A): whether or not it is assumed that the stress of examples is possibly mistransmitted (Models 1, 3, 5), and how the N and V probabilities acquired by a given learner are coupled. In Model 1 there is no coupling (ˆαt and ˆβt learned independently), in Models 2–3 coupling takes the form of a hard constraint corresponding to Ross’ generalization, and in Models 4–5 different stress patterns have different prior probabilities.9 4.1 Model 1: Mistransmission Motivated by the evidence for asymmetric misperception of N/V pair stress (§2.2), suppose the stress of N=σ´σ and V=´σσ examples may be misperceived (as N=´σσ and V=σ´σ), with mistransmission probabilities p and q. Learners are assumed to simply probability match: ˆαt = kt 1/N1, ˆβt = kt 2/N2, where kt 1 is the number of N and V examples heard as σ´σ (etc.) The probabilities pN,t & pV,t of hearing an N or V example as final stressed at t are then pN,t = αt−1(1 −p), pV,t = βt−1 + (1 −βt−1)q (1) kt 1 and kt 2 are binomially-distributed: PB(kt 1, kt 2) ≡ N1 kt 1  pN,tkt 1(1 −pN,t)N1−kt 1 × N2 kt 2  pV,tkt 2(1 −pV,t)N2−kt 2 (2) αt and βt, the probability that a random member of Gt produces N and V examples as σ´σ, are the ensemble averages of ˆαt and ˆβt over all members of Gt. Because we have assumed infinitely many learners per generation, αt=E(ˆαt) and βt=E(ˆβt). 
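Before stating the exact expectations, a small Monte Carlo sketch may help fix ideas: it samples a finite population of probability-matching learners under the mistransmission probabilities p and q of Eqn. (1) and averages their learned values, approximating (αt, βt). The parameter values and population size here are invented for illustration only; the analysis below assumes an infinite population, which this finite sample merely approximates.

import random

# Illustrative finite-population simulation of Model 1. Parameter values are
# invented; the paper's analysis takes the number of learners to infinity.

def next_generation(alpha, beta, N1=20, N2=20, p=0.05, q=0.05, learners=2000):
    pN = alpha * (1 - p)        # probability an N example is *heard* as final-stressed (Eqn. 1)
    pV = beta + (1 - beta) * q  # probability a V example is *heard* as final-stressed (Eqn. 1)
    a_sum = b_sum = 0.0
    for _ in range(learners):
        k1 = sum(random.random() < pN for _ in range(N1))   # N examples heard as final
        k2 = sum(random.random() < pV for _ in range(N2))   # V examples heard as final
        a_sum += k1 / N1        # probability matching
        b_sum += k2 / N2
    return a_sum / learners, b_sum / learners

alpha, beta = 0.9, 0.1
for t in range(50):
    alpha, beta = next_generation(alpha, beta)
print(round(alpha, 2), round(beta, 2))   # the population mean drifts toward (0, 1), i.e. {1,2}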
Using (1), and the formula for the expectation of a binomially-distributed random variable: αt = αt−1(1 −p) (3) βt = βt−1 + (1 −βt−1)q (4) these are the evolution equations for Model 1. Due to space constraints we do not give the (more lengthy) derivations of the evolution equations in Models 2–5. Dynamics There is a single, stable fixed point of evolution equations (3–4): (α∗, β∗) = (0, 1), corresponding to the stress pattern {1,2}. This model thus shows none of the desired properties discussed in §3.2, except that {1,2} corresponds to a stable state. 9The sixth possible model (no coupling, no mistransmission) is a special case of Model 1, resulting in the identity map: αt+1 = αt, βt+1 = βt. 1023 4.2 Model 2: Coupling by constraint Motivated by the evidence for English speakers’ productive knowledge of Ross’ Generalization (§2.2), we consider a second learning model in which the learner attempts to probability match as above, but the (ˆαt, ˆβt) learned must satisfy the constraint that σ´σ stress be more probable in the V form than in the N form. Formally, the learner chooses (ˆαt, ˆβt) satisfying a quadratic optimization problem: minimize [(α −kt 1 N1 )2 + (β −kt 2 N2 )2] s.t. α ≤β This corresponds to the following algorithm, A2: 1. If kt 1 N1 < kt 2 N2 , set ˆαt = kt 1 N1 , ˆβt = kt 2 N2 . 2. Otherwise, set ˆαt = ˆβt = 1 2 ( kt 1 N1 + kt 2 N2 ) The resulting evolution equations can be shown to be αt+1 = αt + A 2 , βt+1 = βt −A 2 (5) where A = X k1 N1 > k2 N2 PB(kt 1, kt 2)( kt 1 N1 −kt 2 N2 ). Dynamics Adding the equations in (5) gives that the (αt, βt) trajectories are lines of constant αt + βt (Fig. 2). All (0, x) and (x, 1) (x∈[0, 1]) are stable fixed points. 1.0 1.0 0 0 Figure 2: Dynamics of Model 2 This model thus has stable FPs corresponding to {1,1}, {1,2}, and {2,2}, does not have {2,1} as a stable FP (by construction), and allows for stable variation in exactly one of N or V. It does not have bifurcations, or the observed patterns of change and frequency dependence. 4.3 Model 3: Coupling by constraint, with mistransmission We now assume that each example is subject to mistransmission, as in Model 1; the learner then applies A2 to the heard examples. The evolution equations are thus the same as in (5), but with αt−1 and βt−1 changed to pN,t, pV,t (Eqn. 1). Dynamics There is a single, stable fixed point, corresponding to stable variation in both N and V. This model thus shows none of the desired properties, except that {2,1} is not a stable FP (by construction). 4.4 Model 4: Coupling by priors The type of coupling assume in Models 2–3 — a constraint on the relative probability of σ´σ stress for N and V forms — has the drawback that there is no way for the rest of the lexicon to affect a pair’s N and V stress probabilities: there can be no influence of the stress of other N/V pairs, or in the lexicon as a whole, on the N/V pair being learned. Models 4–5 allow such influence by formalizing a simple intuitive explanation for the lack of {2, 1} N/V pairs: learners cannot hypothesize a {2, 1} pair because there is no support for this pattern in their lexicons. We now assume that learners compute the probabilities of each possible N/V pair stress pattern, rather than separate probabilities for the N and V forms. We assume that learners keep two sets of probabilities (for {1, 1}, {1, 2}, {2, 1}, {2, 2}): 1. Learned probabilities: ⃗P=(P11, P12, P22, P21), where P11 = N1−kt 1 N1 N2−kt 2 N2 , P12 = N1−kt 1 N1 kt 2 N2 P22 = kt 1 N1 kt 2 N2 , P21 = kt 1 N1 N2−kt 2 N2 2. 
Prior probabilities: ⃗λ = (λ11, λ12, λ21, λ22), based on the support for each stress pattern in the lexicon. The learner then produces N forms as follows: 1. Pick a pattern {n1, v1} according to ⃗P. 2. Pick a pattern {n2, v2} according to ⃗λ 3. Repeat 1–2 until n1=n2, then produce N=n1. V forms are produced similarly, but checking whether v1 = v2 at step 3. Learners’ production of an N/V pair is thus influenced by both their learning experience (for the particular N/V pair) and by how much support exists in their lexicon for the different stress patterns. We leave the exact interpretation of the λij ambiguous; they could be the percentage of N/V pairs already learned which follow each stress pattern, for example. Motivated by the absence of {2,1} N/V pairs in English, we assume that λ21 = 0. 1024 By following the production algorithm above, the learner’s probabilities of producing N and V forms as σ´σ are: ˆαt = ˜α(kt 1, kt 2) = λ22P22 λ11P11 + λ12P12 + λ22P22 (6) ˆβt = ˜β(kt 1, kt 2) = λ12P12 + λ22P22 λ11P11 + λ12P12 + λ22P22 (7) Eqns. 6–7 are undefined when (kt 1, kt 2)=(N1, 0); in this case we set ˜α(N1, 0) = λ22 and ˜β(N1, 0) = λ12 + λ22. The evolution equations are then αt = E(ˆαt) = N1 X k1=0 N2 X k2=0 PB(k1, k2)˜α(k1, k2) (8) βt = E(ˆβt) = N1 X k1=0 N2 X k2=0 PB(k1, k2)˜β(k1, k2) (9) Dynamics The fixed points of (8–9) are (0, 0), (0, 1), and (1, 1); their stabilities depend on N1, N2, and ⃗λ. Define R = N2 1 + (N2 −1)λ12 λ11 ! N1 1 + (N1 −1)λ12 λ22 ! (10) There are 6 regions of parameter space in which different FPs are stable: 1. λ11, λ22 < λ12: (0, 1) stable 2. λ22 > λ12, R < 1: (0, 1), (1, 1) stable 3. λ11 < λ12 < λ22, R > 1: (1, 1) stable 4. λ11, λ22 > λ12: (0, 0), (1, 1) stable 5. λ22 < λ12 < λ11, R > 1: (0, 0) stable 6. λ11 > λ12, R < 1: (0, 0), (0, 1) stable The parameter space is split into these regimes by three hyperplanes: λ11=λ12, λ22=λ12, and R=1. Given that λ21=0, λ12 = 1 −λ11 − λ22, and the parameter space is 4-dimensional: (λ11, λ22, N1, N2). Fig. 3 shows An example phase diagram in (λ11, λ2), with N1 and N2 fixed. The bifurcation structure implies all 6 possible changes between the three FPs ({1,1}⇀ ↽{1,2}, {1,2}⇀ ↽{2,2}, {2,2}⇀ ↽{1,2}). For example, suppose the system is at stable FP (1, 1) (corresponding to {2,2}) in region 2. As λ22 is decreased, we move into region 1, (1, 1) becomes unstable, and the system shifts to stable FP (0, 1). This transition corresponds to change from {2,2} to {1,2}. Note that change to {1,2} entails crossing the hyperplanes λ12=λ22 and λ12=λ11. These hyperplanes do not change as N1 and N2 vary, so 0.0 0.2 0.4 0.6 0.8 1.0 λ11 0.0 0.2 0.4 0.6 0.8 1.0 λ22 1 2 3 4 5 6 Figure 3: Example phase diagram in (λ11, λ22) for Model 4, with N1 = 5, N2 = 10. Numbers are regions of parameter space (see text). change to {1,2} is not frequency-dependent. However, change from {1,2} entails crossing the hyperplane R=1, which does change as N1 and N2 vary (Eqn. 10), so change from {1,2} is frequencydependent. Thus, although there is frequency dependence in this model, it is not as observed in the diachronic data, where change to {1,2} is frequency-dependent. Finally, no stable variation is possible: in every stable state, all members of the population categorically use a single stress pattern. {2,1} is never a stable FP, by construction. 
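The evolution equations (8–9) are straightforward to iterate numerically. The sketch below does so; the λ values, example counts, and starting point are arbitrary choices made here for illustration (with λ21 = 0 as assumed in the text, so P21 never contributes), and the optional mistransmission rates p, q anticipate Model 5 in the next subsection (Model 4 corresponds to p = q = 0).

from math import comb

# Numerical iteration of the Model 4 evolution equations (8-9). With p = q = 0 the
# heard counts equal the produced counts (Model 4); p, q > 0 gives Model 5 below.

def binom_pmf(k, n, prob):
    return comb(n, k) * prob ** k * (1 - prob) ** (n - k)

def model_step(alpha, beta, N1, N2, lam11, lam12, lam22, p=0.0, q=0.0):
    pN = alpha * (1 - p)            # probability an N example is heard as final-stressed
    pV = beta + (1 - beta) * q      # probability a V example is heard as final-stressed
    a_next = b_next = 0.0
    for k1 in range(N1 + 1):
        for k2 in range(N2 + 1):
            weight = binom_pmf(k1, N1, pN) * binom_pmf(k2, N2, pV)
            P11 = (N1 - k1) / N1 * (N2 - k2) / N2
            P12 = (N1 - k1) / N1 * (k2 / N2)
            P22 = (k1 / N1) * (k2 / N2)
            denom = lam11 * P11 + lam12 * P12 + lam22 * P22
            if denom == 0.0:        # the (k1, k2) = (N1, 0) special case in the text
                a_tilde, b_tilde = lam22, lam12 + lam22
            else:
                a_tilde = lam22 * P22 / denom
                b_tilde = (lam12 * P12 + lam22 * P22) / denom
            a_next += weight * a_tilde
            b_next += weight * b_tilde
    return a_next, b_next

# Region 1 of the text (lam11, lam22 < lam12): only (0, 1), i.e. {1,2}, is stable.
alpha, beta = 0.5, 0.5
for _ in range(100):
    alpha, beta = model_step(alpha, beta, N1=5, N2=10, lam11=0.2, lam12=0.6, lam22=0.2)
print(round(alpha, 3), round(beta, 3))   # should settle near (0, 1), the {1,2} pattern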
4.5 Model 5: Coupling by priors, with mistransmission We now suppose that each example from a learner’s data is possibly mistransmitted, as in Model 1; the learner then applies the algorithm from Model 4 to the heard examples (instead of using kt 1, kt 2) . The evolution equations are thus the same as (8–9), but with αt−1 and βt−1 changed to pN,t, pV,t (Eqn. 1). Dynamics (0, 1) is always a fixed point. For some regions of parameter space, there can be one fixed point of the form (κ, 1), as well as one fixed point of the form (0, γ), where κ, γ ∈(0, 1). Define R′ = (1 −p)(1 −q)R, λ′ 12 = λ12, and λ′ 11 = λ11(1−q N2 N2 −1), λ′ 22 = λ22(1−p N1 N1 −1) There are 6 regions of parameter space corresponding to different stable FPs, identical to the 6 regions in Model 4, with the following substitu1025 0 2 4 6 8 10 N1 0.0 0.2 0.4 0.6 0.8 1.0 Stable αt fixed point location Figure 4: Example of falling N1 triggering change from (1, 1) to (0, 1) for Model 5. Dashed line = stable FP of the form (γ, 1), solid line = stable FP (0, 1). For N1 > 4, there is a stable FP near (1, 1). For N1 < 2, (0, 1) is the only stable FP. λ22 = 0.58, λ12 = 0.4, N2 = 10, p = q = 0.05. tions made: R →R′, λij →λ′ ij, (0, 0) →(0, κ), (1, 1) →(γ, 1). The parameter space is again split into these regions by three hyperplanes: λ′ 11=λ′ 12, λ′ 22=λ′ 12, and R′=1. As in Model 4, the bifurcation structure implies all 6 possible changes between the three FPs. However, change to {1,2} entails crossing the hyperplanes λ′ 11=λ′ 12 and λ′ 2=λ′ 12, and is thus now frequency dependent. In particular, consider a system at a stable FP (γ, 1), for some N/V pair. This FP becomes unstable if λ′ 22 becomes smaller than λ′ 12. Assuming that the λij are fixed, this occurs only if N1 falls below a critical value, N∗ 1 = (1 −λ22 λ12 (1 −p))−1; the system would then transition to (0, 1), the only stable state. By a similar argument, falling frequency can lead to change from (0, κ) to (0, 1). Falling frequency can thus cause change to {1,2} in this model, as seen in the N/V data; Fig. 4 shows an example. Unlike in Model 4, stable variation of the type seen in the N/V stress trajectories — one of N or V stably varying, but not both — is possible for some parameter values. (0, 0) and (1, 1) (corresponding to {1,1} and {2,2}) are technically never possible, but effectively occur for FPs of the form (κ, 0) and (γ, 1) when κ or γ are small. {2,1} is never a stable FP, by construction. This model thus arguably shows all of the desired properties seen in the N/V data. Property Model 1 2 3 4 5 ∗{2,1} !!!!! {1,1}, {1,2}, {2,2}%!%!! Obs. stable variation%!%%! Sudden change %%%!! Observed changes %%%!! Obs. freq. depend. %%%%! Table 2: Summary of model properties 4.6 Models summary, observations Table 2 lists which of Models 1–5 show each of the desired properties (from §3.2), corresponding to aspects of the observed diachronic dynamics of N/V pair stress. Based on this set of models, we are able to make some observations about the effect of different assumptions about learning by individuals on population-level dynamics. Models including asymmetric mistransmission (1, 3, 5) generally do not lead to stable states in which the entire population uses {1,1} or {2,2}. (In Model 5, stable variation very near {1,1} or {2,2} is possible.) However, {1,1} and {2,2} are diachronically very stable stress patterns, suggesting that at least for this model set, assuming mistransmission in the learner is problematic. 
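To see Model 5's frequency dependence concretely, the following loop (reusing model_step from the Model 4 sketch above, with p = q = 0.05) sweeps N1 using the parameter values reported in the caption of Fig. 4; λ11 = 0.02 is inferred here on the assumption that the priors sum to one with λ21 = 0, and the starting point and iteration count are arbitrary choices.

# Sweep of N1 for Model 5, reusing model_step from the Model 4 sketch above.
# lam22 = 0.58, lam12 = 0.4, N2 = 10, p = q = 0.05 follow the Fig. 4 caption;
# lam11 = 0.02 is an inferred value (assuming the priors sum to 1 with lam21 = 0).
for N1 in range(1, 11):
    alpha, beta = 0.99, 0.99                 # start near the {2,2} corner
    for _ in range(2000):
        alpha, beta = model_step(alpha, beta, N1=N1, N2=10,
                                 lam11=0.02, lam12=0.4, lam22=0.58,
                                 p=0.05, q=0.05)
    # Per the text, below a critical N1 the high-alpha state is lost and the
    # system falls to (0, 1); for larger N1 alpha should remain near 1.
    print(N1, round(alpha, 2), round(beta, 2))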
Models 2–3, where analogy is implemented as a hard constraint based on Ross’ generalization, do not give most desired properties. Models 4–5, where analogy is implemented as prior probabilities over N/V stress patterns, show crucial aspects of the observed dynamics: bifurcations corresponding to the changes observed in the stress data. Model 5 shows change to {1,2} triggered by falling frequency, a pattern observed in the stress data, and an emergent property of the model dynamics: this frequency effect is not present in Models 1 or 4, but is present in Model 5, where the learner combines mistransmission (Model 1) with coupling by priors (Model 4). 5 Discussion We have developed 5 dynamical systems models for a relatively complex diachronic change, found one successful model, and were able to reason about the source of model behavior. Each model describes the diachronic, population-level consequences of assuming a particular learning algorithm for individuals. The algorithms considered 1026 were motivated by different possible sources of change, from linguistics and psychology (§2.2). We discuss novel contributions of this work, and future directions. The dataset used here shows more complex dynamics, to our knowledge, than in changes previously considered in the computational literature. By using a detailed, longitudinal dataset, we were able to strongly constrain the desired behavior of a computational model, so that the task of model building is not “doomed to success”. While all models show some patterns observed in the data, only one shows all such properties. We believe detailed datasets are potentially very useful for evaluating and differentiating between proposed computational models of change. This paper is a first attempt to integrate detailed data with a range of DS models. We have only considered some schematic properties of the dynamics observed in our dataset, and used these to qualitatively compare each model’s predictions to the dynamics. Future work should consider the dynamics in more detail, develop more complex models (for example, by relaxing the infinitepopulation assumption, allowing for stochastic dynamics), and quantitatively compare model predictions and observed dynamics. We were able to reason about how assumptions about individual learning affect population dynamics by analyzing a range of simple, related models. This approach is pursued in more depth in the larger set of models considered in (Sonderegger, 2009). Our use of model comparison contrasts with most recent computational work on change, where a small number (1–2) of very complex models are analyzed, allowing for much more detailed models of language learning and usage than those considered here (e.g. Choudhury et al., 2006; Minett & Wang, 2008; Baxter et al., 2009; Landsbergen, 2009). An advantage of our approach is an enhanced ability to evaluate a range of proposed causes for a particular case of language change. By using simple models, we were able to consider a range of learning algorithms corresponding to different explanations for the observed diachronic dynamics. What makes this a useful exercise is the fundamentally non-trivial map, illustrated by Models 1–5, between individual learning and population-level dynamics. Although the type of individual learning assumed in each model was chosen with the same patterns of change in mind, and despite the simplicity of the models used, the resulting population-level dynamics differ greatly. 
This is an important point given that proposed explanations for change (e.g., mistransmission and analogy) operate at the level of individuals, while the phenomena being explained (patterns of change, or particular changes) are aspects of the population-level dynamics. Acknowledgments We thank Max Bane, James Kirby, and three anonymous reviewers for helpful comments. References J. Arciuli and L. Cupples. 2003. Effects of stress typicality during speeded grammatical classification. Language and Speech, 46(4):353–374. R.H. Baayen, R. Piepenbrock, and L. Gulikers. 1996. CELEX2 (CD-ROM). Linguistic Data Consortium, Philadelphia. A. Baker. 2008. Computational approaches to the study of language change. Language and Linguistics Compass, 2(3):289–307. G.J. Baxter, R.A. Blythe, W. Croft, and A.J. McKane. 2009. Modeling language change: An evaluation of Trudgill’s theory of the emergence of New Zealand English. Language Variation and Change, 21(2):257–296. J. Blevins. 2006. A theoretical synopsis of Evolutionary Phonology. Theoretical Linguistics, 32(2):117– 166. M. Choudhury, A. Basu, and S. Sarkar. 2006. Multiagent simulation of emergence of schwa deletion pattern in Hindi. Journal of Artificial Societies and Social Simulation, 9(2). M. Choudhury, V. Jalan, S. Sarkar, and A. Basu. 2007. Evolution, optimization, and language change: The case of Bengali verb inflections. In Proceedings of the Ninth Meeting of the ACL Special Interest Group in Computational Morphology and Phonology, pages 65–74. M. Choudhury. 2007. Computational Models of Real World Phonological Change. Ph.D. thesis, Indian Institute of Technology Kharagpur. R. Daland, A.D. Sims, and J. Pierrehumbert. 2007. Much ado about nothing: A social network model of Russian paradigmatic gaps. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 936–943. 1027 B. de Boer and W. Zuidema. 2009. Models of language evolution: Does the math add up? ILLC Preprint Series PP-2009-49, University of Amsterdam. T.L. Griffiths and M.L. Kalish. 2007. Language evolution by iterated learning with bayesian agents. Cognitive Science, 31(3):441–480. S.G. Guion, J.J. Clark, T. Harada, and R.P. Wayland. 2003. Factors affecting stress placement for English nonwords include syllabic structure, lexical class, and stress patterns of phonologically similar words. Language and Speech, 46(4):403–427. G.H. Hansson. 2008. Diachronic explanations of sound patterns. Language & Linguistics Compass, 2:859–893. M.W. Hirsch, S. Smale, and R.L. Devaney. 2004. Differential Equations, Dynamical Systems, and an Introduction to Chaos. Academic Press, Amsterdam, 2nd edition. H.H. Hock. 1991. Principles of Historical Linguistics. Mouton de Gruyter, Berlin, 2nd edition. M.L. Kalish, T.L. Griffiths, and S. Lewandowsky. 2007. Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin and Review, 14(2):288. M.H. Kelly and J.K. Bock. 1988. Stress in time. Journal of Experimental Psychology: Human Perception and Performance, 14(3):389–403. M.H. Kelly. 1988. Rhythmic alternation and lexical stress differences in English. Cognition, 30:107– 137. M.H. Kelly. 1989. Rhythm and language change in English. Journal of Memory & Language, 28:690– 710. S. Kirby, H. Cornish, and K. Smith. 2008. Cumulative cultural evolution in the laboratory: An experimental approach to the origins of structure in human language. Proceedings of the National Academy of Sciences, 105(31):10681–10686. S. Klein, M.A. Kuppin, and K.A. 
Meives. 1969. Monte Carlo simulation of language change in Tikopia & Maori. In Proceedings of the 1969 Conference on Computational Linguistics, pages 1–27. ACL. S. Klein. 1966. Historical change in language using monte carlo techniques. Mechanical Translation and Computational Linguistics, 9:67–82. S. Klein. 1974. Computer simulation of language contact models. In R. Shuy and C-J. Bailey, editors, Toward Tomorrows Linguistics, pages 276– 290. Georgetown University Press, Washington. H. K¨okeritz. 1953. Shakespeare’s Pronunciation. Yale University Press, New Haven. N.L. Komarova, P. Niyogi, and M.A. Nowak. 2001. The evolutionary dynamics of grammar acquisition. Journal of Theoretical Biology, 209(1):43–60. F. Landsbergen. 2009. Cultural evolutionary modeling of patterns in language change: exercises in evolutionary linguistics. Ph.D. thesis, Universiteit Leiden. R. Lass. 1992. Phonology and morphology. In R.M. Hogg, editor, The Cambridge History of the English Language, volume 3: 1476–1776, pages 23–156. Cambridge University Press. P. Levens. 1570. Manipulus vocabulorum. Henrie Bynneman, London. M. MacMahon. 1998. Phonology. In S. Romaine, editor, The Cambridge History of the English Language, volume 4: 1476–1776, pages 373–535. Cambridge University Press. J.W. Minett and W.S.Y. Wang. 2008. Modelling endangered languages: The effects of bilingualism and social structure. Lingua, 118(1):19–45. D. Minkova. 1997. Constraint ranking in Middle English stress-shifting. English Language and Linguistics, 1(1):135–175. W.G. Mitchener. 2005. Simulating language change in the presence of non-idealized syntax. In Proceedings of the Second Workshop on Psychocomputational Models of Human Language Acquisition, pages 10–19. ACL. P. Niyogi and R.C. Berwick. 1995. The logical problem of language change. AI Memo 1516, MIT. P. Niyogi and R.C. Berwick. 1996. A language learning model for finite parameter spaces. Cognition, 61(1-2):161–193. P. Niyogi. 2006. The Computational Nature of Language Learning and Evolution. MIT Press, Cambridge. J.J. Ohala. 1981. The listener as a source of sound change. In C.S. Masek, R.A. Hendrick, and M.F. Miller, editors, Papers from the Parasession on Language and Behavior, pages 178–203. Chicago Linguistic Society, Chicago. L. Pearl and A. Weinberg. 2007. Input filtering in syntactic acquisition: Answers from language change modeling. Language Learning and Development, 3(1):43–72. B.S. Phillips. 1984. Word frequency and the actuation of sound change. Language, 60(2):320–342. J.R. Ross. 1973. Leftward, ho! In S.R. Anderson and P. Kiparsky, editors, Festschrift for Morris Halle, pages 166–173. Holt, Rinehart and Winston, New York. 1028 D. Sherman. 1975. Noun-verb stress alternation: An example of the lexical diffusion of sound change in English. Linguistics, 159:43–71. M. Sonderegger and P. Niyogi. 2010. Variation and change in English noun/verb pair stress: Data, dynamical systems models, and their interaction. Ms. To appear in A.C.L. Yu, editor, Origins of Sound Patterns: Approaches to Phonologization. Oxford University Press. M. Sonderegger. 2009. Dynamical systems models of language variation and change: An application to an English stress shift. Masters paper, Department of Computer Science, University of Chicago. M. Sonderegger. 2010. Testing for frequency and structural effects in an English stress shift. In Proceedings of the Berkeley Linguistics Society 36. To appear. S. Strogatz. 1994. Nonlinear Dynamics and Chaos. Addison-Wesley, Reading, MA. W.S.Y. Wang, J. Ke, and J.W. 
Minett. 2005. Computational studies of language evolution. In C. Huang and W. Lenders, editors, Computational Linguistics and Beyond, pages 65–108. Institute of Linguistics, Academia Sinica, Taipei. C. Yang. 2001. Internal and external forces in language change. Language Variation and Change, 12(3):231–250. C. Yang. 2002. Knowledge and Learning in Natural Language. Oxford University Press. 1029
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1030–1039, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Finding Cognate Groups using Phylogenies David Hall and Dan Klein Computer Science Division University of California, Berkeley {dlwh,klein}@cs.berkeley.edu Abstract A central problem in historical linguistics is the identification of historically related cognate words. We present a generative phylogenetic model for automatically inducing cognate group structure from unaligned word lists. Our model represents the process of transformation and transmission from ancestor word to daughter word, as well as the alignment between the words lists of the observed languages. We also present a novel method for simplifying complex weighted automata created during inference to counteract the otherwise exponential growth of message sizes. On the task of identifying cognates in a dataset of Romance words, our model significantly outperforms a baseline approach, increasing accuracy by as much as 80%. Finally, we demonstrate that our automatically induced groups can be used to successfully reconstruct ancestral words. 1 Introduction A crowning achievement of historical linguistics is the comparative method (Ohala, 1993), wherein linguists use word similarity to elucidate the hidden phonological and morphological processes which govern historical descent. The comparative method requires reasoning about three important hidden variables: the overall phylogenetic guide tree among languages, the evolutionary parameters of the ambient changes at each branch, and the cognate group structure that specifies which words share common ancestors. All three of these variables interact and inform each other, and so historical linguists often consider them jointly. However, linguists are currently required to make qualitative judgments regarding the relative likelihood of certain sound changes, cognate groups, and so on. Several recent statistical methods have been introduced to provide increased quantitative backing to the comparative method (Oakes, 2000; Bouchard-Cˆot´e et al., 2007; Bouchard-Cˆot´e et al., 2009); others have modeled the spread of language changes and speciation (Ringe et al., 2002; Daum´e III and Campbell, 2007; Daum´e III, 2009; Nerbonne, 2010). These automated methods, while providing robustness and scale in the induction of ancestral word forms and evolutionary parameters, assume that cognate groups are already known. In this work, we address this limitation, presenting a model in which cognate groups can be discovered automatically. Finding cognate groups is not an easy task, because underlying morphological and phonological changes can obscure relationships between words, especially for distant cognates, where simple string overlap is an inadequate measure of similarity. Indeed, a standard string similarity metric like Levenshtein distance can lead to false positives. Consider the often cited example of Greek /ma:ti/ and Malay /mata/, both meaning “eye” (Bloomfield, 1938). If we were to rely on Levenshtein distance, these words would seem to be a highly attractive match as cognates: they are nearly identical, essentially differing in only a single character. However, no linguist would posit that these two words are related. To correctly learn that they are not related, linguists typically rely on two kinds of evidence. 
First, because sound change is largely regular, we would need to commonly see /i/ in Greek wherever we see /a/ in Malay (Ross, 1950). Second, we should look at languages closely related to Greek and Malay, to see if similar patterns hold there, too. Some authors have attempted to automatically detect cognate words (Mann and Yarowsky, 2001; Lowe and Mazaudon, 1994; Oakes, 2000; Kondrak, 2001; Mulloni, 2007), but these methods 1030 typically work on language pairs rather than on larger language families. To fully automate the comparative method, it is necessary to consider multiple languages, and to do so in a model which couples cognate detection with similarity learning. In this paper, we present a new generative model for the automatic induction of cognate groups given only (1) a known family tree of languages and (2) word lists from those languages. A prior on word survival generates a number of cognate groups and decides which groups are attested in each modern language. An evolutionary model captures how each word is generated from its parent word. Finally, an alignment model maps the flat word lists to cognate groups. Inference requires a combination of message-passing in the evolutionary model and iterative bipartite graph matching in the alignment model. In the message-passing phase, our model encodes distributions over strings as weighted finite state automata (Mohri, 2009). Weighted automata have been successfully applied to speech processing (Mohri et al., 1996) and more recently to morphology (Dreyer and Eisner, 2009). Here, we present a new method for automatically compressing our message automata in a way that can take into account prior information about the expected outcome of inference. In this paper, we focus on a transcribed word list of 583 cognate sets from three Romance languages (Portuguese, Italian and Spanish), as well as their common ancestor Latin (Bouchard-Cˆot´e et al., 2007). We consider both the case where we know that all cognate groups have a surface form in all languages, and where we do not know that. On the former, easier task we achieve identification accuracies of 90.6%. On the latter task, we achieve F1 scores of 73.6%. Both substantially beat baseline performance. 2 Model In this section, we describe a new generative model for vocabulary lists in multiple related languages given the phylogenetic relationship between the languages (their family tree). The generative process factors into three subprocesses: survival, evolution, and alignment, as shown in Figure 1(a). Survival dictates, for each cognate group, which languages have words in that group. Evolution describes the process by which daughter words are transformed from their parent word. Finally, alignment describes the “scrambling” of the word lists into a flat order that hides their lineage. We present each subprocess in detail in the following subsections. 2.1 Survival First, we choose a number G of ancestral cognate groups from a geometric distribution. For each cognate group g, our generative process walks down the tree. At each branch, the word may either survive or die. This process is modeled in a “death tree” with a Bernoulli random variable Sℓg for each language ℓand cognate group g specifying whether or not the word died before reaching that language. Death at any node in the tree causes all of that node’s descendants to also be dead. 
This process captures the intuition that cognate words are more likely to be found clustered in sibling languages than scattered across unrelated languages. 2.2 Evolution Once we know which languages will have an attested word and which will not, we generate the actual word forms. The evolution component of the model generates words according to a branchspecific transformation from a node’s immediate ancestor. Figure 1(a) graphically describes our generative model for three Romance languages: Italian, Portuguese, and Spanish.1 In each cognate group, each word Wℓis generated from its parent according to a conditional distribution with parameter ϕℓ, which is specific to that edge in the tree, but shared between all cognate groups. In this paper, each ϕℓtakes the form of a parameterized edit distance similar to the standard Levenshtein distance. Richer models – such as the ones in Bouchard-Cˆot´e et al. (2007) – could instead be used, although with an increased inferential cost. The edit transducers are represented schematically in Figure 1(b). Characters x and y are arbitrary phonemes, and σ(x, y) represents the cost of substituting x with y. ε represents the empty phoneme and is used as shorthand for insertion and deletion, which have parameters η and δ, respectively. As an example, see the illustration in Figure 1(c). Here, the Italian word /fwOko/ (“fire”) is generated from its parent form /fokus/ (“hearth”) 1Though we have data for Latin, we treat it as unobserved to represent the more common case where the ancestral language is unattested; we also evaluate our system using the Latin data. 1031 G WVL WPI φ φ φ φ φ WLA φ SLA SVL SPI SIT SES SPT L L wpt wes L π wIT wIT wIT wIT wIT wIT WIT WIT Survival Evolution f u s k f w ɔ o k Alignment (a) (b) (c) x:y / σ(x,y) x:ε/δx ε:y/ηy o Figure 1: (a) The process by which cognate words are generated. Here, we show the derivation of Romance language words Wℓfrom their respective Latin ancestor, parameterized by transformations ϕℓand survival variables Sℓ. Languages shown are Latin (LA), Vulgar Latin (VL), Proto-Iberian (PI), Italian (IT), Portuguese (PT), and Spanish (ES). Note that only modern language words are observed (shaded). (b) The class of parameterized edit distances used in this paper. Each pair of phonemes has a weight σ for deletion, and each phoneme has weights η and δ for insertion and deletion respectively. (c) A possible alignment produced by an edit distance between the Latin word focus (“hearth”) and the Italian word fuoco (“fire”). by a series of edits: two matches, two substitutions (/u/→/o/, and /o/→/O/), one insertion (w) and one deletion (/s/). The probability of each individual edit is determined by ϕ. Note that the marginal probability of a specific Italian word conditioned on its Vulgar Latin parent is the sum over all possible derivations that generate it. 2.3 Alignment Finally, at the leaves of the trees are the observed words. (We take non-leaf nodes to be unobserved.) Here, we make the simplifying assumption that in any language there is at most one word per language per cognate group. Because the assignments of words to cognates is unknown, we specify an unknown alignment parameter πℓfor each modern language which is an alignment of cognate groups to entries in the word list. In the case that every cognate group has a word in each language, each πℓis a permutation. In the more general case that some cognate groups do not have words from all languages, this mapping is injective from words to cognate groups. 
From a generative perspective, πℓgenerates observed positions of the words in some vocabulary list. In this paper, our task is primarily to learn the alignment variables πℓ. All other hidden variables are auxiliary and are to be marginalized to the greatest extent possible. 3 Inference of Cognate Assignments In this section, we discuss the inference method for determining cognate assignments under fixed parameters ϕ. We are given a set of languages and a list of words in each language, and our objective is to determine which words are cognate with each other. Because the parameters πℓare either permutations or injections, the inference task is reduced to finding an alignment π of the respective word lists to maximize the log probability of the observed words. π∗= arg max π X g log p(w(ℓ,πℓ(g))|ϕ, π, w−ℓ) w(ℓ,πℓ(g)) is the word in language ℓthat πℓhas assigned to cognate group g. Maximizing this quantity directly is intractable, and so instead we use a coordinate ascent algorithm to iteratively 1032 maximize the alignment corresponding to a single language ℓwhile holding the others fixed: π∗ ℓ= arg max πℓ X g log p(w(ℓ,πℓ(g))|ϕ, π−ℓ, πℓ, w−ℓ) Each iteration is then actually an instance of bipartite graph matching, with the words in one language one set of nodes, and the current cognate groups in the other languages the other set of nodes. The edge affinities affbetween these nodes are the conditional probabilities of each word wℓbelonging to each cognate group g: aff(wℓ, g) = p(wℓ|w−ℓ,π−ℓ(g), ϕ, π−ℓ) To compute these affinities, we perform inference in each tree to calculate the marginal distribution of the words from the language ℓ. For the marginals, we use an analog of the forward/backward algorithm. In the upward pass, we send messages from the leaves of the tree toward the root. For observed leaf nodes Wd, we have: µd→a(wa) = p(Wd = wd|wa, ϕd) and for interior nodes Wi: µi→a(wa) = X wi p(wi|wa, ϕi) Y d∈child(wi) µd→i(wi) (1) In the downward pass (toward the language ℓ), we sum over ancestral words Wa: µa→d(wd) = X wa p(wd|wa, ϕd)µa′→a(wa) Y d′∈child(wa) d′̸=d µd′→a(wa) where a′ is the ancestor of a. Computing these messages gives a posterior marginal distribution µℓ(wℓ) = p(wℓ|w−ℓ,π−ℓ(g), ϕ, π−ℓ), which is precisely the affinity score we need for the bipartite matching. We then use the Hungarian algorithm (Kuhn, 1955) to find the optimal assignment for the bipartite matching problem. One important final note is initialization. In our early experiments we found that choosing a random starting configuration unsurprisingly led to rather poor local optima. Instead, we started with empty trees, and added in one language per iteration until all languages were added, and then continued iterations on the full tree. 4 Learning So far we have only addressed searching for Viterbi alignments π under fixed parameters. In practice, it is important to estimate better parametric edit distances ϕℓand survival variables Sℓ. To motivate the need for good transducers, consider the example of English “day” /deI/ and Latin “di¯es” /dIe:s/, both with the same meaning. Surprisingly, these words are in no way related, with English “day” probably coming from a verb meaning “to burn” (OED, 1989). However, a naively constructed edit distance, which for example might penalize vowel substitutions lightly, would fail to learn that Latin words that are borrowed into English would not undergo the sound change /I/→/eI/. Therefore, our model must learn not only which sound changes are plausible (e.g. 
vowels turning into other vowels is more common than vowels turning into consonants), but which changes are appropriate for a given language.2 At a high level, our learning algorithm is much like Expectation Maximization with hard assignments: after we update the alignment variables π and thus form new potential cognate sets, we reestimate our model’s parameters to maximize the likelihood of those assignments.3 The parameters can be learned through standard maximum likelihood estimation, which we detail in this section. Because we enforce that a word in language d must be dead if its parent word in language a is dead, we just need to learn the conditional probabilities p(Sd = dead|Sa = alive). Given fixed assignments π, the maximum likelihood estimate can be found by counting the number of “deaths” that occurred between a child and a live parent, applying smoothing – we found adding 0.5 to be reasonable – and dividing by the total number of live parents. For the transducers ϕ, we learn parameterized edit distances that model the probabilities of different sound changes. For each ϕℓwe fit a nonuniform substitution, insertion, and deletion matrix σ(x, y). These edit distances define a condi2We note two further difficulties: our model does not handle “borrowings,” which would be necessary to capture a significant portion of English vocabulary; nor can it seamlessly handle words that are inherited later in the evolution of language than others. For instance, French borrowed words from its parent language Latin during the Renaissance and the Enlightenment that have not undergone the same changes as words that evolved “naturally” from Latin. See Bloomfield (1938). Handling these cases is a direction for future research. 3Strictly, we can cast this problem in a variational framework similar to mean field where we iteratively maximize parameters to minimize a KL-divergence. We omit details for clarity. 1033 tional exponential family distribution when conditioned on an ancestral word. That is, for any fixed wa: X wd p(wd|wa, σ) = X wd X z∈ align(wa,wd) score(z; σ) = X wd X z∈ align(wa,wd) Y (x,y)∈z σ(x, y) = 1 where align(wa, wd) is the set of possible alignments between the phonemes in words wa and wd. We are seeking the maximum likelihood estimate of each ϕ, given fixed alignments π: ˆϕℓ= arg max ϕℓ p(w|ϕ, π) To find this maximizer for any given πℓ, we need to find a marginal distribution over the edges connecting any two languages a and d. With this distribution, we calculate the expected “alignment unigrams.” That is, for each pair of phonemes x and y (or empty phoneme ε), we need to find the quantity: Ep(wa,wd)[#(x, y; z)] = X wa,wd X z∈ align(wa,wd) #(x,y; z)p(z|wa, wd)p(wa, wd) where we denote #(x, y; z) to be the number of times the pair of phonemes (x, y) are aligned in alignment z. The exact method for computing these counts is to use an expectation semiring (Eisner, 2001). Given the expected counts, we now need to normalize them to ensure that the transducer represents a conditional probability distribution (Eisner, 2002; Oncina and Sebban, 2006). We have that, for each phoneme x in the ancestor language: ηy = E[#(ε, y; z)] E[#(·, ·; z)] σ(x, y) = (1 − X y′ ηy′)E[#(x, y; z)] E[#(x, ·; z)] δx = (1 − X y′ ηy′)E[#(x, ε; z)] E[#(x, ·; z)] Here, we have #(·, ·; z) = P x,y #(x, y; z) and #(x, ·; z) = P y #(x, y; z). The (1 −P y′ ηy′) term ensure that for any ancestral phoneme x, P y ηy+P y σ(x, y)+δx = 1. 
These equations ensure that the three transition types (insertion, substitution/match, deletion) are normalized for each ancestral phoneme. 5 Transducers and Automata In our model, it is not just the edit distances that are finite state machines. Indeed, the words themselves are string-valued random variables that have, in principle, an infinite domain. To represent distributions and messages over these variables, we chose weighted finite state automata, which can compactly represent functions over strings. Unfortunately, while initially compact, these automata become unwieldy during inference, and so approximations must be used (Dreyer and Eisner, 2009). In this section, we summarize the standard algorithms and representations used for weighted finite state transducers. For more detailed treatment of the general transducer operations, we direct readers to Mohri (2009). A weighted automaton (resp. transducer) encodes a function over strings (resp. pairs of strings) as weighted paths through a directed graph. Each edge in the graph has a real-valued weight4 and a label, which is a single phoneme in some alphabet Σ or the empty phoneme ε (resp. pair of labels in some alphabet Σ×∆). The weight of a string is then the sum of all paths through the graph that accept that string. For our purposes, we are concerned with three fundamental operations on weighted transducers. The first is computing the sum of all paths through a transducer, which corresponds to computing the partition function of a distribution over strings. This operation can be performed in worst-case cubic time (using a generalization of the FloydWarshall algorithm). For acyclic or feed-forward transducers, this time can be improved dramatically by using a generalization of Djisktra’s algorithm or other related algorithms (Mohri, 2009). The second operation is the composition of two transducers. Intuitively, composition creates a new transducer that takes the output from the first transducer, processes it through the second transducer, and then returns the output of the second transducer. That is, consider two transducers T1 and T2. T1 has input alphabet Σ and output alphabet ∆, while T2 has input alphabet ∆and output alphabet Ω. The composition T1 ◦T2 returns a new transducer over Σ and Ωsuch that (T1 ◦ T2)(x, y) = P u T1(x, u) · T2(u, y). In this paper, we use composition for marginalization and factor products. Given a factor f1(x, u; T1) and an4The weights can be anything that form a semiring, but for the sake of exposition we specialize to real-valued weights. 1034 other factor f2(u, y; T2), composition corresponds to the operation ψ(x, y) = P u f1(x, u)f2(u, y). For two messages µ1(w) and µ2(w), the same algorithm can be used to find the product µ(w) = µ1(w)µ2(w). The third operation is transducer minimization. Transducer composition produces O(nm) states, where n and m are the number of states in each transducer. Repeated compositions compound the problem: iterated composition of k transducers produces O(nk) states. Minimization alleviates this problem by collapsing indistinguishable states into a single state. Unfortunately, minimization does not always collapse enough states. In the next section we discuss approaches to “lossy” minimization that produce automata that are not exactly the same but are much smaller. 
6 Message Approximation Recall that in inference, when summing out interior nodes wi we calculated the product over incoming messages µd→i(wi) (Equation 1), and that these products are calculated using transducer composition. Unfortunately, the maximal number of states in a message is exponential in the number of words in the cognate group. Minimization can only help so much: in order for two states to be collapsed, the distribution over transitions from those states must be indistinguishable. In practice, for the automata generated in our model, minimization removes at most half the states, which is not sufficient to counteract the exponential growth. Thus, we need to find a way to approximate a message µ(w) using a simpler automata ˜µ(w; θ) taken from a restricted class parameterized by θ. In the context of transducers, previous authors have focused on a combination of n-best lists and unigram back-off models (Dreyer and Eisner, 2009), a schematic diagram of which is in Figure 2(d). For their problem, n-best lists are sensible: their nodes’ local potentials already focus messages on a small number of hypotheses. In our setting, however, n-best lists are problematic; early experiments showed that a 10,000-best list for a typical message only accounts for 50% of message log perplexity. That is, the posterior marginals in our model are (at least initially) fairly flat. An alternative approach might be to simply treat messages as unnormalized probability distributions, and to minimize the KL divergence bee g u f e o f u u u e u g u o u f f f f e e e e e g g g g g o o o o o f 2 3 e u g o f 0 1 f e o 4 g o e u f u e o f g 5 o g u f f u e g o f e u g o f e u g f e e f u eg g (a) (b) (c) (d) u g o e u f o Figure 2: Various topologies for approximating topologies: (a) a unigram model, (b) a bigram model, (c) the anchored unigram model, and (d) the n-best plus backoff model used in Dreyer and Eisner (2009). In (c) and (d), the relative height of arcs is meant to convey approximate probabilities. tween some approximating message ˜µ(w) and the true message µ(w). However, messages are not always probability distributions and – because the number of possible strings is in principle infinite – they need not sum to a finite number.5 Instead, we propose to minimize the KL divergence between the “expected” marginal distribution and the approximated “expected” marginal distribution: ˆθ = arg min θ DKL(τ(w)µ(w)||τ(w)˜µ(w; θ)) = arg min θ X w τ(w)µ(w) log τ(w)µ(w) τ(w)˜µ(w; θ) = arg min θ X w τ(w)µ(w) log µ(w) ˜µ(w; θ) (2) where τ is a term acting as a surrogate for the posterior distribution over w without the information from µ. That is, we seek to approximate µ not on its own, but as it functions in an environment representing its final context. For example, if µ(w) is a backward message, τ could be a stand-in for a forward probability.6 In this paper, µ(w) is a complex automaton with potentially many states, ˜µ(w; θ) is a simple parametric automaton with forms that we discuss below, and τ(w) is an arbitrary (but hopefully fairly simple) automaton. The actual method we use is 5As an extreme example, suppose we have observed that Wd = wd and that p(Wd = wd|wa) = 1 for all ancestral words wa. Then, clearly P wd µ(wd) = P wd P p(Wd = wd|wa) = ∞whenever there are an infinite number of possible ancestral strings wa. 6This approach is reminiscent of Expectation Propagation (Minka, 2001). 1035 as follows. 
Given a deterministic prior automaton τ, and a deterministic automaton topology ˜µ∗, we create the composed unweighted automaton τ ◦˜µ∗, and calculate arc transitions weights to minimize the KL divergence between that composed transducer and τ ◦µ. The procedure for calculating these statistics is described in Li and Eisner (2009), which amounts to using an expectation semiring (Eisner, 2001) to compute expected transitions in τ ◦˜µ∗under the probability distribution τ ◦µ. From there, we need to create the automaton τ −1 ◦τ ◦˜µ. That is, we need to divide out the influence of τ(w). Since we know the topology and arc weights for τ ahead of time, this is often as simple as dividing arc weights in τ ◦˜µ by the corresponding arc weight in τ(w). For example, if τ encodes a geometric distribution over word lengths and a uniform distribution over phonemes (that is, τ(w) ∝p|w|), then computing ˜µ is as simple as dividing each arc in τ ◦˜µ by p.7 There are a number of choices for τ. One is a hard maximum on the length of words. Another is to choose τ(w) to be a unigram language model over the language in question with a geometric probability over lengths. In our experiments, we find that τ(w) can be a geometric distribution over lengths with a uniform distribution over phonemes and still give reasonable results. This distribution captures the importance of shorter strings while still maintaining a relatively weak prior. What remains is the selection of the topologies for the approximating message ˜µ. We consider three possible approximations, illustrated in Figure 2. The first is a plain unigram model, the second is a bigram model, and the third is an anchored unigram topology: a position-specific unigram model for each position up to some maximum length. The first we consider is a standard unigram model, which is illustrated in Figure 2(a). It has |Σ| + 2 parameters: one weight σa for each phoneme a ∈Σ, a starting weight λ, and a stopping probability ρ. ˜µ then has the form: ˜µ(w) = λρ Y i≤|w| σwi Estimating this model involves only computing the expected count of each phoneme, along with 7Also, we must be sure to divide each final weight in the transducer by (1 −|Σ|p), which is the stopping probability for a geometric transducer. the expected length of a word, E[|w|]. We then normalize the counts according to the maximum likelihood estimate, with arc weights set as: σa ∝E[#(a)] Recall that these expectations can be computed using an expectation semiring. Finally, λ can be computed by ensuring that the approximate and exact expected marginals have the same partition function. That is, with the other parameters fixed, solve: X w τ(w)˜µ(w) = X w τ(w)µ(w) which amounts to rescaling ˜µ by some constant. The second topology we consider is the bigram topology, illustrated in Figure 2(b). It is similar to the unigram topology except that, instead of a single state, we have a state for each phoneme in Σ, along with a special start state. Each state a has transitions with weights σb|a = p(b|a) ∝ E[#(b|a)]. Normalization is similar to the unigram case, except that we normalize the transitions from each state. The final topology we consider is the positional unigram model in Figure 2(c). This topology takes positional information into account. Namely, for each position (up to some maximum position), we have a unigram model over phonemes emitted at that position, along with the probability of stopping at that position (i.e. a “sausage lattice”). 
Estimating the parameters of this model is similar, except that the expected counts for the phonemes in the alphabet are conditioned on their position in the string. With the expected counts for each position, we normalize each state's final and outgoing weights. In our experiments, we set the maximum length to seven more than the length of the longest observed string.

7 Experiments

We conduct three experiments. The first is a "complete data" experiment, in which we reconstitute the cognate groups from the Romance data set, where all cognate groups have words in all three languages. This task highlights the evolution and alignment models. The second is a much harder "partial data" experiment, in which we randomly prune 20% of the branches from the dataset according to the survival process described in Section 2.1. Here, only a fraction of words appear in any cognate group, so this task crucially involves the survival model. The ultimate purpose of the induced cognate groups is to feed richer evolutionary models, such as full reconstruction models. Therefore, we also consider a proto-word reconstruction experiment. For this experiment, using the system of Bouchard-Cˆot´e et al. (2009), we compare the reconstructions produced from our automatic groups to those produced from gold cognate groups.

7.1 Baseline

As a novel but heuristic baseline for cognate group detection, we use an iterative bipartite matching algorithm where instead of conditional likelihoods for affinities we use Dice's coefficient, defined for sets X and Y as:

Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|)    (3)

Dice's coefficients are commonly used in bilingual detection of cognates (Kondrak, 2001; Kondrak et al., 2003). We follow prior work and use sets of bigrams within words. In our case, during bipartite matching the set X is the set of bigrams in the language being re-permuted, and Y is the union of bigrams in the other languages.

7.2 Experiment 1: Complete Data

In this experiment, we know precisely how many cognate groups there are and that every cognate group has a word in each language. While this scenario does not include all of the features of the real-world task, it represents a good test case of how well these models can perform without the non-parametric task of deciding how many clusters to use. We scrambled the 583 cognate groups in the Romance dataset and ran each method to convergence. Besides the heuristic baseline, we tried our model-based approach using Unigrams, Bigrams and Anchored Unigrams, with and without learning the parametric edit distances. When we did not use learning, we set the parameters of the edit distance to (0, -3, -4) for matches, substitutions, and deletions/insertions, respectively. With learning enabled, transducers were initialized with those parameters.

For evaluation, we report two metrics. The first is pairwise accuracy for each pair of languages, averaged across pairs of words. The other is accuracy measured in terms of the number of correctly, completely reconstructed cognate groups.

Table 1: Accuracies for reconstructing cognate groups. Levenshtein refers to the fixed-parameter edit distance transducer; Learned refers to automatically learned edit distances. Pairwise Accuracy is averaged over word pairs; Exact Match is the percentage of completely and accurately reconstructed groups. For a description of the baseline, see Section 7.1.

  Transducers    Messages         Pairwise Acc.   Exact Match
  Heuristic Baseline              48.1            35.4
  Levenshtein    Unigrams         37.2            26.2
  Levenshtein    Bigrams          43.0            26.5
  Levenshtein    Anch. Unigrams   68.6            56.8
  Learned        Unigrams         0.1             0.0
  Learned        Bigrams          38.7            11.3
  Learned        Anch. Unigrams   90.3            86.6

Table 2: Accuracies for reconstructing incomplete groups. Scores reported are precision, recall, and F1, averaged over all word pairs.

  Transducers    Messages         Prec.   Recall   F1
  Heuristic Baseline              49.0    43.5     46.1
  Levenshtein    Anch. Unigrams   86.5    36.1     50.9
  Learned        Anch. Unigrams   66.9    82.0     73.6

Table 1 shows the results under various configurations. As can be seen, the kind of approximation used matters immensely. In this application, positional information is important, more so than the context of the previous phoneme. Both Unigrams and Bigrams significantly under-perform the baseline, while Anchored Unigrams easily outperforms it both with and without learning. An initially surprising result is that learning actually harms performance under the unanchored approximations. The explanation is that these topologies are not sensitive enough to context, and that the learning procedure ends up flattening the distributions. In the case of unigrams – which have the least context – learning degrades performance to chance. However, in the case of positional unigrams, learning reduces the error rate by more than two-thirds.

7.3 Experiment 2: Incomplete Data

As a more realistic scenario, we consider the case where we do not know that all cognate groups have words in all languages. To test our model, we randomly pruned 20% of the branches according to the survival process of our model.8 Because only Anchored Unigrams performed well in Experiment 1, we consider only it and the Dice's coefficient baseline. The baseline needs to be augmented to support the fact that some words may not appear in all cognate groups. To do this, we thresholded the bipartite matching process so that if the coefficient fell below some value, we started a new group for that word. We experimented with 10 values in the range (0,1) for the baseline's threshold and report on the one (0.2) that gives the best pairwise F1.

The results are in Table 2. Here again, we see that the positional unigrams perform much better than the baseline system. The learned transducers seem to sacrifice precision for the sake of increased recall. This makes sense because the default edit distance parameter settings strongly favor exact matches, while the learned transducers learn more realistic substitution and deletion matrices, at the expense of making more mistakes. For example, the learned transducers enable our model to correctly infer that Portuguese /d1femdu/, Spanish /defiendo/, and Italian /difEndo/ are all derived from Latin /de:fendo:/ "defend." Using the simple Levenshtein transducers, on the other hand, our model keeps all three separated, because the transducers cannot know – among other things – that Portuguese /1/, Spanish /e/, and Italian /i/ are commonly substituted for one another. Unfortunately, because the transducers used cannot learn contextual rules, certain transformations can be over-applied. For instance, Spanish /nombRar/ "name" is grouped together with Portuguese /num1RaR/ "number" and Italian /numerare/ "number," largely because the rule Portuguese /u/ → Spanish /o/ is applied outside of its normal context. This sound change occurs primarily with final vowels, and does not usually occur word medially. Thus, more sophisticated transducers could learn better sound laws, which could translate into improved accuracy.
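To make the baseline of Section 7.1 concrete, here is a small illustrative sketch (not the authors' implementation) of the bigram Dice affinity, together with the thresholding used in Experiment 2; the packaging into an affinity function and its argument names are assumptions of the sketch.

# Sketch of the Dice's-coefficient affinity used by the heuristic baseline.
# X and Y are sets of character bigrams; 0.2 is the threshold reported to
# give the best pairwise F1 in Experiment 2.

def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(x, y):
    # Dice(X, Y) = 2|X ∩ Y| / (|X| + |Y|)
    if not x and not y:
        return 0.0
    return 2.0 * len(x & y) / (len(x) + len(y))

def affinity(word, group_words, threshold=0.2):
    # x: bigrams of the word in the language being re-permuted;
    # y: union of bigrams of the candidate group's words in the other languages.
    x = bigrams(word)
    y = set().union(*[bigrams(w) for w in group_words]) if group_words else set()
    score = dice(x, y)
    # In the incomplete-data setting, a score below the threshold signals
    # that the word should start a new singleton group.
    return score, score < threshold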
7.4 Experiment 3: Reconstructions As a final trial, we wanted to see how each automatically found cognate group faired as compared to the “true groups” for actual reconstruction of proto-words. Our model is not optimized 8This dataset will be made available at http://nlp.cs.berkeley.edu/Main.html#Historical for faithful reconstruction, and so we used the Ancestry Resampling system of Bouchard-Cˆot´e et al. (2009). To evaluate, we matched each Latin word with the best possible cognate group for that word. The process for the matching was as follows. If two or three of the words in an constructed cognate group agreed, we assigned the Latin word associated with the true group to it. With the remainder, we executed a bipartite matching based on bigram overlap. For evaluation, we examined the Levenshtein distance between the reconstructed word and the chosen Latin word. As a kind of “skyline,” we compare to the edit distances reported in Bouchard-Cˆot´e et al. (2009), which was based on complete knowledge of the cognate groups. On this task, our reconstructed cognate groups had an average edit distance of 3.8 from the assigned Latin word. This compares favorably to the edit distances reported in Bouchard-Cˆot´e et al. (2009), who using oracle cognate assignments achieved an average Levenshtein distance of 3.0.9 8 Conclusion We presented a new generative model of word lists that automatically finds cognate groups from scrambled vocabulary lists. This model jointly models the origin, propagation, and evolution of cognate groups from a common root word. We also introduced a novel technique for approximating automata. Using these approximations, our model can reduce the error rate by 80% over a baseline approach. Finally, we demonstrate that these automatically generated cognate groups can be used to automatically reconstruct proto-words faithfully, with a small increase in error. Acknowledgments Thanks to Alexandre Bouchard-Cˆot´e for the many insights. This project is funded in part by the NSF under grant 0915265 and an NSF graduate fellowship to the first author. References Leonard Bloomfield. 1938. Language. Holt, New York. 9Morphological noise and transcription errors contribute to the absolute error rate for this data set. 1038 Alexandre Bouchard-Cˆot´e, Percy Liang, Thomas Griffiths, and Dan Klein. 2007. A probabilistic approach to diachronic phonology. In EMNLP. Alexandre Bouchard-Cˆot´e, Thomas L. Griffiths, and Dan Klein. 2009. Improved reconstruction of protolanguage word forms. In NAACL, pages 65–73. Hal Daum´e III and Lyle Campbell. 2007. A Bayesian model for discovering typological implications. In Conference of the Association for Computational Linguistics (ACL). Hal Daum´e III. 2009. Non-parametric Bayesian model areal linguistics. In NAACL. Markus Dreyer and Jason Eisner. 2009. Graphical models over multiple strings. In EMNLP, Singapore, August. Jason Eisner. 2001. Expectation semirings: Flexible EM for finite-state transducers. In Gertjan van Noord, editor, FSMNLP. Jason Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In ACL. Grzegorz Kondrak, Daniel Marcu, and Keven Knight. 2003. Cognates can improve statistical translation models. In NAACL. Grzegorz Kondrak. 2001. Identifying cognates by phonetic and semantic similarity. In NAACL. Harold W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2:83–97. Zhifei Li and Jason Eisner. 2009. 
First- and secondorder expectation semirings with applications to minimum-risk training on translation forests. In EMNLP. John B. Lowe and Martine Mazaudon. 1994. The reconstruction engine: a computer implementation of the comparative method. Computational Linguistics, 20(3):381–417. Gideon S. Mann and David Yarowsky. 2001. Multipath translation lexicon induction via bridge languages. In NAACL, pages 1–8. Association for Computational Linguistics. Thomas P. Minka. 2001. Expectation propagation for approximate bayesian inference. In UAI, pages 362– 369. Mehryar Mohri, Fernando Pereira, and Michael Riley. 1996. Weighted automata in text and speech processing. In ECAI-96 Workshop. John Wiley and Sons. Mehryar Mohri, 2009. Handbook of Weighted Automata, chapter Weighted Automata Algorithms. Springer. Andrea Mulloni. 2007. Automatic prediction of cognate orthography using support vector machines. In ACL, pages 25–30. John Nerbonne. 2010. Measuring the diffusion of linguistic change. Philosophical Transactions of the Royal Society B: Biological Sciences. Michael P. Oakes. 2000. Computer estimation of vocabulary in a protolanguage from word lists in four daughter languages. Quantitative Linguistics, 7(3):233–243. OED. 1989. “day, n.”. In The Oxford English Dictionary online. Oxford University Press. John Ohala, 1993. Historical linguistics: Problems and perspectives, chapter The phonetics of sound change, pages 237–238. Longman. Jose Oncina and Marc Sebban. 2006. Learning stochastic edit distance: Application in handwritten character recognition. Pattern Recognition, 39(9). Don Ringe, Tandy Warnow, and Ann Taylor. 2002. Indo-european and computational cladistics. Transactions of the Philological Society, 100(1):59–129. Alan S.C. Ross. 1950. Philological probability problems. Journal of the Royal Statistical Society Series B. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2000. Inducing multilingual text analysis tools via robust projection across aligned corpora. In NAACL. 1039
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1040–1047, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics An Exact A* Method for Deciphering Letter-Substitution Ciphers Eric Corlett and Gerald Penn Department of Computer Science University of Toronto {ecorlett,gpenn}@cs.toronto.edu Abstract Letter-substitution ciphers encode a document from a known or hypothesized language into an unknown writing system or an unknown encoding of a known writing system. It is a problem that can occur in a number of practical applications, such as in the problem of determining the encodings of electronic documents in which the language is known, but the encoding standard is not. It has also been used in relation to OCR applications. In this paper, we introduce an exact method for deciphering messages using a generalization of the Viterbi algorithm. We test this model on a set of ciphers developed from various web sites, and find that our algorithm has the potential to be a viable, practical method for efficiently solving decipherment problems. 1 Introduction Letter-substitution ciphers encode a document from a known language into an unknown writing system or an unknown encoding of a known writing system. This problem has practical significance in a number of areas, such as in reading electronic documents that may use one of many different standards to encode text. While this is not a problem in languages like English and Chinese, which have a small set of well known standard encodings such as ASCII, Big5 and Unicode, there are other languages such as Hindi in which there is no dominant encoding standard for the writing system. In these languages, we would like to be able to automatically retrieve and display the information in electronic documents which use unknown encodings when we find them. We also want to use these documents for information retrieval and data mining, in which case it is important to be able to read through them automatically, without resorting to a human annotator. The holy grail in this area would be an application to archaeological decipherment, in which the underlying language’s identity is only hypothesized, and must be tested. The purpose of this paper, then, is to simplify the problem of reading documents in unknown encodings by presenting a new algorithm to be used in their decipherment. Our algorithm operates by running a search over the n-gram probabilities of possible solutions to the cipher, using a generalization of the Viterbi algorithm that is wrapped in an A* search, which determines at each step which partial solutions to expand. It is guaranteed to converge on the language-modeloptimal solution, and does not require restarts or risk falling into local optima. We specifically consider the problem of finding decodings of electronic documents drawn from the internet, and we test our algorithm on ciphers drawn from randomly selected pages of Wikipedia. Our testing indicates that our algorithm will be effective in this domain. It may seem at first that automatically decoding (as opposed to deciphering) a document is a simple matter, but studies have shown that simple algorithms such as letter frequency counting do not always produce optimal solutions (Bauer, 2007). 
If the text from which a language model is trained is of a different genre than the plaintext of a cipher, the unigraph letter frequencies may differ substantially from those of the language model, and so frequency counting will be misleading. Because of the perceived simplicity of the problem, however, little work was performed to understand its computational properties until Peleg and Rosenfeld (1979), who developed a method that repeatedly swaps letters in a cipher to find a maximum probability solution. Since then, several different approaches to this problem have been suggested, some of which use word counts in the language to arrive at a solution (Hart, 1994), and some of 1040 which treat the problem as an expectation maximization problem (Knight et al., 2006; Knight, 1999). These later algorithms are, however, highly dependent on their initial states, and require a number of restarts in order to find the globally optimal solution. A further contribution was made by (Ravi and Knight, 2008), which, though published earlier, was inspired in part by the method presented here, first discovered in 2007. Unlike the present method, however, Ravi and Knight (2008) treat the decipherment of letter-substitution ciphers as an integer programming problem. Clever though this constraint-based encoding is, their paper does not quantify the massive running times required to decode even very short documents with this sort of approach. Such inefficiency indicates that integer programming may simply be the wrong tool for the job, possibly because language model probabilities computed from empirical data are not smoothly distributed enough over the space in which a cutting-plane method would attempt to compute a linear relaxation of this problem. In any case, an exact method is available with a much more efficient A* search that is linear-time in the length of the cipher (though still horribly exponential in the size of the cipher and plain text alphabets), and has the additional advantage of being massively parallelizable. (Ravi and Knight, 2008) also seem to believe that short cipher texts are somehow inherently more difficult to solve than long cipher texts. This difference in difficulty, while real, is not inherent, but rather an artefact of the character-level n-gram language models that they (and we) use, in which preponderant evidence of differences in short character sequences is necessary for the model to clearly favour one lettersubstitution mapping over another. Uniform character models equivocate regardless of the length of the cipher, and sharp character models with many zeroes can quickly converge even on short ciphers of only a few characters. In the present method, the role of the language model can be acutely perceived; both the time complexity of the algorithm and the accuracy of the results depend crucially on this characteristic of the language model. In fact, we must use add-one smoothing to decipher texts of even modest lengths because even one unseen plain-text letter sequence is enough to knock out the correct solution. It is likely that the method of (Ravi and Knight, 2008) is sensitive to this as well, but their experiments were apparently fixed on a single, well-trained model. Applications of decipherment are also explored by (Nagy et al., 1987), who uses it in the context of optical character recognition (OCR). 
The problem we consider here is cosmetically related to the “L2P” (letter-to-phoneme) mapping problem of text-to-speech synthesis, which also features a prominent constraint-based approach (van den Bosch and Canisius, 2006), but the constraints in L2P are very different: two different instances of the same written letter may legitimately map to two different phonemes. This is not the case in letter-substitution maps. 2 Terminology Substitution ciphers are ciphers that are defined by some permutation of a plaintext alphabet. Every character of a plaintext string is consistently mapped to a single character of an output string using this permutation. For example, if we took the string ”hello world” to be the plaintext, then the string ”ifmmp xpsme” would be a cipher that maps e to f, l to m, and so on. It is easy to extend this kind of cipher so that the plaintext alphabet is different from the ciphertext alphabet, but still stands in a one to one correspondence to it. Given a ciphertext C, we say that the set of characters used in C is the ciphertext alphabet ΣC, and that its size is nC. Similarly, the entire possible plaintext alphabet is ΣP , and its size is is nP . Since nC is the number of letters actually used in the cipher, rather than the entire alphabet it is sampled from, we may find that nC < nP even when the two alphabets are the same. We refer to the length of the cipher string C as clen. In the above example, ΣP is { , a, . . . z} and nP = 27, while ΣC = { , e, f, i, m, p, s, x}, clen = 11 and nC = 8. Given the ciphertext C, we say that a partial solution of size k is a map σ = {p1 : c1, . . . pk : ck}, where c1, . . . , ck ∈ΣC and are distinct, and p1, . . . , pk ∈ΣP and are distinct, and where k ≤ nC. If for a partial solution σ′, we have that σ ⊂ σ′, then we say that σ′ extends σ. If the size of σ′ is k+1 and σ is size k, we say that σ′ is an immediate extension of σ. A full solution is a partial solution of size nC. In the above example, σ1 = { : , d : e} would be a partial solution of size 2, and σ2 = { : , d : e, g : m} would be a partial solution of size 3 that immediately extends σ1. A partial solution σT { : , d : e, e : f, h : i, l : m, o : 1041 p, r : s, w : x} would be both a full solution and the correct one. The full solution σT extends σ1 but not σ2. Every possible full solution to a cipher C will produce a plaintext string with some associated language model probability, and we will consider the best possible solution to be the one that gives the highest probability. For the sake of concreteness, we will assume here that the language model is a character-level trigram model. This plaintext can be found by treating all of the length clen strings S as being the output of different character mappings from C. A string S that results from such a mapping is consistent with a partial solution σ iff, for every pi : ci ∈σ, the character positions of C that map to pi are exactly the character positions with ci in C. In our above example, we had C = ”ifmmp xpsme”, in which case we had clen = 11. So mappings from C to ”hhhhh hhhhh” or ” hhhhhhhhhh” would be consistent with a partial solution of size 0, while ”hhhhh hhhhn” would be consistent with the size 2 partial solution σ = { : , n : e}. 
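To make the consistency definition above concrete, the following is a small illustrative sketch (not from the paper) that checks whether a candidate plaintext string is consistent with a partial solution for a given ciphertext; representing σ as a dictionary from plaintext to ciphertext letters is an assumption of the sketch.

# Sketch: is plaintext string S consistent with partial solution sigma for
# ciphertext C?  sigma maps plaintext letters to ciphertext letters, e.g.
# {" ": " ", "d": "e"} for the "ifmmp xpsme" example above.

def consistent(S, C, sigma):
    if len(S) != len(C):
        return False
    for p, c in sigma.items():
        plain_positions = {i for i, ch in enumerate(S) if ch == p}
        cipher_positions = {i for i, ch in enumerate(C) if ch == c}
        # The positions of S carrying p must be exactly the positions of C
        # carrying c.
        if plain_positions != cipher_positions:
            return False
    return True

# Examples from the text:
# consistent("hello world", "ifmmp xpsme", {" ": " ", "d": "e"})  -> True
# consistent("hhhhh hhhhn", "ifmmp xpsme", {" ": " ", "n": "e"})  -> True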
3 The Algorithm In order to efficiently search for the most likely solution for a ciphertext C, we conduct a search of the partial solutions using their trigram probabilities as a heuristic, where the trigram probability of a partial solution σ of length k is the maximum trigram probability over all strings consistent with it, meaning, in particular, that ciphertext letters not in its range can be mapped to any plaintext letter, and do not even need to be consistently mapped to the same plaintext letter in every instance. Given a partial solution σ of length n, we can extend σ by choosing a ciphertext letter c not in the range of σ, and then use our generalization of the Viterbi algorithm to find, for each p not in the domain of σ, a score to rank the choice of p for c, namely the trigram probability of the extension σp of σ. If we start with an empty solution and iteratively choose the most likely remaining partial solution in this way, storing the extensions obtained in a priority heap as we go, we will eventually reach a solution of size nC. Every extension of σ has a probability that is, at best, equal to that of σ, and every partial solution receives, at worst, a score equal to its best extension, because the score is potentially based on an inconsistent mapping that does not qualify as an extension. These two observations taken together mean that one minus the score assigned by our method constitutes a cost function over which this score is an admissible heuristic in the A* sense. Thus the first solution of size nC will be the best solution of size nC. The order by which we add the letters c to partial solutions is the order of the distinct ciphertext letters in right-to-left order of their final occurrence in C. Other orderings for the c, such as most frequent first, are also possible though less elegant.1 Algorithm 1 Search Algorithm Order the letters c1 . . . cnC by rightmost occurrence in C, rnC < . . . < r1. Create a priority queue Q for partial solutions, ordered by highest probability. Push the empty solution σ0 = {} onto the queue. while Q is not empty do Pop the best partial solution σ from Q. s = |σ|. if s = nC then return σ else For all p not in the range of σ, push the immediate extension σp onto Q with the score assigned to table cell G(rs+1, p, p) by GVit(σ, cs+1, rs+1) if it is non-zero. end if end while Return ”Solution Infeasible”. Our generalization of the Viterbi algorithm, depicted in Figure 1, uses dynamic programming to score every immediate extension of a given partial solution in tandem, by finding, in a manner consistent with the real Viterbi algorithm, the most probable input string given a set of output symbols, which in this case is the cipher C. Unlike the real Viterbi algorithm, we must also observe the constraints of the input partial solution’s mapping. 1We have experimented with the most frequent first regimen as well, and it performs worse than the one reported here. Our hypothesis is that this is due to the fact that the most frequent character tends to appear in many high-frequency trigrams, and so our priority queue becomes very long because of a lack of low-probability trigrams to knock the scores of partial solutions below the scores of the extensions of their better scoring but same-length peers. A least frequent first regimen has the opposite problem, in which their rare occurrence in the ciphertext provides too few opportunities to potentially reduce the score of a candidate. 
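A minimal sketch of the outer search loop of Algorithm 1 follows, illustrative only: the scoring function gvit_score (standing in for the generalized Viterbi computation described next) and the right-to-left ordering of cipher letters are assumed to be supplied.

import heapq
import itertools

def a_star_decipher(cipher_letters, gvit_score):
    # cipher_letters: distinct ciphertext letters, ordered by rightmost
    # occurrence in C.  gvit_score(sigma, c) is assumed to return a dict
    # mapping each unused plaintext letter p to the score of the immediate
    # extension sigma + {p : c}; zero-probability extensions are dropped.
    n_c = len(cipher_letters)
    tie = itertools.count()              # tie-breaker so dicts are never compared
    queue = [(-1.0, next(tie), {})]      # max-heap via negated scores
    while queue:
        neg_score, _, sigma = heapq.heappop(queue)
        if len(sigma) == n_c:
            return sigma                 # the first full solution is optimal
        c = cipher_letters[len(sigma)]   # next ciphertext letter to fix
        for p, score in gvit_score(sigma, c).items():
            if p in sigma or score <= 0.0:
                continue
            extension = dict(sigma)
            extension[p] = c
            heapq.heappush(queue, (-score, next(tie), extension))
    return None                          # "Solution Infeasible"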
1042 A typical decipherment involves multiple runs of this algorithm, each of which scores all of the immediate extensions, both tightening and lowering their scores relative to the score of the input partial solution. A call GVit(σ, c, r) manages this by filling in a table G such that for all 1 ≤i ≤r, and l, k ∈ΣP , G(i, l, k) is the maximum probability over every plaintext string S for which: • len(S) = i, • S[i] = l, • for every p in the domain of σ, every 1 ≤j ≤ i, if C[j] = σ(p) then S[j] = p, and • for every position 1 ≤j ≤i, if C[j] = c, then S[j] = k. The real Viterbi algorithm lacks these final two constraints, and would only store a single cell at G(i, l). There, G is called a trellis. Ours is larger, so so we will refer to G as a greenhouse. The table is completed by filling in the columns from i = 1 to clen in order. In every column i, we will iterate over the values of l and over the values of k such that k : c and l : are consistent with σ. Because we are using a trigram character model, the cells in the first and second columns must be primed with unigram and bigram probabilities. The remaining probabilities are calculated by searching through the cells from the previous two columns, using the entry at the earlier column to indicate the probability of the best string up to that point, and searching through the trigram probabilities over two additional letters. Backpointers are necessary to reference one of the two language model probabilities. Cells that would produce inconsistencies are left at zero, and these as well as cells that the language model assigns zero to can only produce zero entries in later columns. In order to decrease the search space, we add the further restriction that the solutions of every three character sequence must be consistent: if the ciphertext indicates that two adjacent letters are the same, then only the plaintext strings that map the same letter to each will be considered. The number of letters that are forced to be consistent is three because consistency is enforced by removing inconsistent strings from consideration during trigram model evaluation. Because every partial solution is only obtained by extending a solution of size one less, and extensions are only made in a predetermined order of cipher alphabet letters, every partial solution is only considered / extended once. GVit is highly parallelizable. The nP ×nP cells of every column i do not depend on each other — only on the cells of the previous two columns i−1 and i−2, as well as the language model. In our implementation of the algorithm, we have written the underlying program in C/C++, and we have used the CUDA library developed for NVIDIA graphics cards to in order to implement the parallel sections of the code. 4 Experiment The above algorithm is designed for application to the transliteration of electronic documents, specifically, the transliteration of websites, and it has been tested with this in mind. In order to gain realistic test data, we have operated on the assumption that Wikipedia is a good approximation of the type of language that will be found in most internet articles. We sampled a sequence of Englishlanguage articles from Wikipedia using their random page selector, and these were used to create a set of reference pages. In order to minimize the common material used in each page, only the text enclosed by the paragraph tags of the main body of the pages were used. 
A rough search over internet articles has shown that a length of 1000 to 11000 characters is a realistic length for many articles, although this can vary according to the genre of the page. Wikipedia, for example, does have entries that are one sentence in length. We have run two groups of tests for our algorithm. In the first set of tests, we chose the mean of the above lengths to be our sample size, and we created and decoded 10 ciphers of this size (i.e., different texts, same size). We made these cipher texts by appending the contents of randomly chosen Wikipedia pages until they contained at least 6000 characters, and then using the first 6000 characters of the resulting files as the plaintexts of the cipher. The text length was rounded up to the nearest word where needed. In the second set of tests, we used a single long ciphertext, and measured the time required for the algorithm to finish a number of prefixes of it (i.e., same text, different sizes). The plaintext for this set of tests was developed in the same way as the first set, and the input ciphertext lengths considered were 1000, 3500, 6000, 8500, 11000, and 13500 characters. 1043 Greenhouse Array (a) (b) (c) (d) ... l m n ... z l w · · · y t g · · · g u · · · e f g · · · z Figure 1: Filling the Greenhouse Table. Each cell in the greenhouse is indexed by a plaintext letter and a character from the cipher. Each cell consists of a smaller array. The cells in the array give the best probabilities of any path passing through the greenhouse cell, given that the index character of the array maps to the character in column c, where c is the next ciphertext character to be fixed in the solution. The probability is set to zero if no path can pass through the cell. This is the case, for example, in (b) and (c), where the knowledge that ” ” maps to ” ” would tell us that the cells indicated in gray are unreachable. The cell at (d) is filled using the trigram probabilities and the probability of the path at starting at (a). In all of the data considered, the frequency of spaces was far higher than that of any other character, and so in any real application the character corresponding to the space can likely be guessed without difficulty. The ciphers we have considered have therefore been simplified by allowing the knowledge of which character corresponds to the space. It appears that Ravi and Knight (2008) did this as well. Our algorithm will still work without this assumption, but would take longer. In the event that a trigram or bigram would be found in the plaintext that was not counted in the language model, add one smoothing was used. Our character-level language model used was developed from the first 1.5 million characters of the Wall Street Journal section of the Penn Treebank corpus. The characters used in the language model were the upper and lower case letters, spaces, and full stops; other characters were skipped when counting the frequencies. Furthermore, the number of sequential spaces allowed was limited to one in order to maximize context and to eliminate any long stretches of white space. As discussed in the previous paragraph, the space character is assumed to be known. 
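As an illustration of the character-level language model just described, the following is a rough sketch under the stated assumptions (trigram counts over letters, space, and full stop, with add-one smoothing and single spaces); it is not the authors' code, and the normalization details are guesses.

# Sketch: character trigram model with add-one smoothing over the reduced
# character set (letters, space, full stop), with runs of spaces collapsed.

import re
import string
from collections import defaultdict

ALPHABET = set(string.ascii_letters + " .")

def normalize(text):
    kept = "".join(ch for ch in text if ch in ALPHABET)
    return re.sub(" +", " ", kept)       # at most one space in a row

def train(text):
    text = normalize(text)
    tri, bi = defaultdict(int), defaultdict(int)
    for i in range(len(text) - 2):
        tri[text[i:i + 3]] += 1
        bi[text[i:i + 2]] += 1           # bigram counts used as trigram contexts
    return tri, bi

def trigram_prob(tri, bi, history, ch):
    # P(ch | history) with add-one smoothing; history is the previous two
    # characters.
    return (tri[history + ch] + 1.0) / (bi[history] + len(ALPHABET))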
When testing our algorithm, we judged the time complexity of our algorithm by measuring the actual time taken by the algorithm to complete its runs, as well as the number of partial solutions placed onto the queue ("enqueued"), the number popped off the queue ("expanded"), and the number of zero-probability partial solutions not enqueued ("zeros") during these runs. These latter numbers give us insight into the quality of trigram probabilities as a heuristic for the A* search. We judged the quality of the decoding by measuring the percentage of characters in the cipher alphabet that were correctly guessed, and also the word error rate of the plaintext generated by our solution. The second metric is useful because a low probability character in the ciphertext may be guessed wrong without changing as much of the actual plaintext. Counting the actual number of word errors is meant as an estimate of how useful or readable the plaintext will be. We did not count the accuracy or word error rate for unfinished ciphers.

We would have liked to compare our results with those of Ravi and Knight (2008), but the method presented there was simply not feasible on texts and (case-sensitive) alphabets of this size with the computing hardware at our disposal.

Algorithm 2 Generalized Viterbi Algorithm GVit(σ, c, r)
Input: partial solution σ, ciphertext character c, and index r into C.
Output: greenhouse G.
Initialize G to 0.
i = 1
for all (l, k) such that σ ∪ {k : c, l : C[i]} is consistent do
    G(i, l, k) = P(l).
end for
i = 2
for all (l, k) such that σ ∪ {k : c, l : C[i]} is consistent do
    for j such that σ ∪ {k : c, l : C[i], j : C[i−1]} is consistent do
        G(i, l, k) = max(G(i, l, k), G(0, j, k) × P(l|j))
    end for
end for
i = 3
for (l, k) such that σ ∪ {k : c, l : C[i]} is consistent do
    for j1, j2 such that σ ∪ {k : c, j2 : C[i−2], j1 : C[i−1], l : C[i]} is consistent do
        G(i, l, k) = max(G(i, l, k), G(i−2, j2, k) × P(j1|j2) × P(l|j2j1)).
    end for
end for
for i = 4 to r do
    for (l, k) such that σ ∪ {k : c, l : C[i]} is consistent do
        for j1, j2 such that σ ∪ {k : c, j2 : C[i−2], j1 : C[i−1], l : C[i]} is consistent do
            G(i, l, k) = max(G(i, l, k), G(i−2, j2, k) × P(j1|j2j2(back)) × P(l|j2j1)).
        end for
    end for
end for

5 Results

In our first set of tests, we measured the time consumption and accuracy of our algorithm over 10 ciphers taken from random texts that were 6000 characters long. The time values in these tables are given in the format of (H)H:MM:SS. For this set of tests, in the event that a test took more than 12 hours, we terminated it and listed it as unfinished. This cutoff was set in advance of the runs based upon our armchair speculation about how long one might at most be reasonably expected to wait for a web-page to be transliterated (an overnight run). The results from this run appear in Table 1. All running times reported in this section were obtained on a computer running Ubuntu Linux 8.04 with 4 GB of RAM and 8 × 2.5 GHz CPU cores. Column-level subcomputations in the greenhouse were dispatched to an NVIDIA Quadro FX 1700 GPU card that is attached through a 16-lane PCI Express adapter. The card has 512 MB of cache memory, a 460 MHz core processor and 32 shader processors operating in parallel at 920 MHz each.

In our second set of tests, we measured the time consumption and accuracy of our algorithm over several prefixes of different lengths of a single 13500-character ciphertext. The results of this run are given in Table 2.
The first thing to note in this data is that the accuracy of this algorithm is above 90% for all of the test data, and 100% on all but the smallest 2 ciphers. We can also observe that even when there are errors (e.g., in the size 1000 cipher), the word error rate is very small. This is a Zipf's Law effect — misclassified characters come from poorly attested character trigrams, which are in turn found only in longer, rarer words. The overall high accuracy is probably due to the large size of the texts relative to the unicity distance of an English letter-substitution cipher (Bauer, 2007). The results do show, however, that character trigram probabilities are an effective indicator of the most likely solution, even when the language model and test data are from very different genres (here, the Wall Street Journal and Wikipedia, respectively). These results also show that our algorithm is effective as a way of decoding simple ciphers. 80% of our runs finished before the 12 hour cutoff in the first experiment.

Table 1: Time consumption and accuracy on a sample of 10 6000-character texts.

  Cipher   Time         Enqueued   Expanded   Zeros     Accuracy   Word Error Rate
  1        2:03:06      964        964        44157     100%       0%
  2        0:13:00      132        132        5197      100%       0%
  3        0:05:42      91         91         3080      100%       0%
  4        Unfinished   N/A        N/A        N/A       N/A        N/A
  5        Unfinished   N/A        N/A        N/A       N/A        N/A
  6        5:33:50      2521       2521       114283    100%       0%
  7        6:02:41      2626       2626       116392    100%       0%
  8        3:19:17      1483       1483       66070     100%       0%
  9        9:22:54      4814       4814       215086    100%       0%
  10       1:23:21      950        950        42107     100%       0%

Table 2: Time consumption and accuracy on prefixes of a single 13500-character ciphertext.

  Size    Time       Enqueued   Expanded   Zeros     Accuracy   Word Error Rate
  1000    40:06:05   119759     119755     5172631   92.59%     1.89%
  3500    0:38:02    615        614        26865     96.30%     0.17%
  6000    0:12:34    147        147        5709      100%       0%
  8500    8:52:25    1302       1302       60978     100%       0%
  11000   1:03:58    210        210        8868      100%       0%
  13500   0:54:30    219        219        9277      100%       0%

As far as the running time of the algorithm goes, we see a substantial variance: from a few minutes to several hours for most of the longer ciphers, and some runs take longer than the threshold we gave in the experiment. Desiring to reduce the variance of the running time, we look at the second set of tests for possible causes. In the second test set, there is a general decrease in both the running time and the number of solutions expanded as the length of the ciphers increases. Running time correlates very well with A* queue size. Asymptotically, the time required for each sweep of the Viterbi algorithm increases, but this is more than offset by the decrease in the number of required sweeps. The results, however, do not show that running time monotonically decreases with length. In particular, the length 8500 cipher generates more solutions than the length 3500 or 6000 ones. Recall that the ciphers in this section are all prefixes of the same string. Because the algorithm fixes characters starting from the end of the cipher, these prefixes have very different character orderings, c1, . . . , cnC, and thus a very different order of partial solutions. The running time of our algorithm depends very crucially on these initial conditions. Perhaps most interestingly, we note that the number of enqueued partial solutions is in every case identical or nearly identical to the number of partial solutions expanded.
From a theoretical perspective, we must also remember the zero-probability solutions, which should in a sense count when judging the effectiveness of our A* heuristic. Naturally, these are ignored by our implementation because they are so badly scored that they could never be considered. Nevertheless, what these numbers show is that scores based on character-level trigrams, while theoretically admissible, are really not all that clever when it comes to navigating through the search space of all possible letter substitution ciphers, apart from their very keen ability at assigning zeros to a large number of partial solutions. A more complex heuristic that can additionally rank non-zero probability solutions with more prescience would likely make a very great difference to the running time of this method. 1046 6 Conclusions In the above paper, we have presented an algorithm for solving letter-substitution ciphers, with an eye towards discovering unknown encoding standards in electronic documents on the fly. In a test of our algorithm over ciphers drawn from Wikipedia, we found its accuracy to be 100% on the ciphers that it solved within a threshold of 12 hours, this being 80% of the total attempted. We found that the running time of our algorithm is highly variable depending on the order of characters attempted, and, due to the linear-time theoretical complexity of this method, that running times tend to decrease with larger ciphertexts due to our character-level language model’s facility at eliminating highly improbable solutions. There is, however, a great deal of room for improvement in the trigram model’s ability to rank partial solutions that are not eliminated outright. Perhaps the most valuable insight gleaned from this study has been on the role of the language model. This algorithm’s asymptotic runtime complexity is actually a function of entropic aspects of the character-level language model that it uses — more uniform models provide less prominent separations between candidate partial solutions, and this leads to badly ordered queues, in which extended partial solutions can never compete with partial solutions that have smaller domains, leading to a blind search. We believe that there is a great deal of promise in characterizing natural language processing algorithms in this way, due to the prevalence of Bayesian methods that use language models as priors. Our approach makes no explicit attempt to account for noisy ciphers, in which characters are erroneously mapped, nor any attempt to account for more general substitution ciphers in which a single plaintext (resp. ciphertext) letter can map to multiple ciphertext (resp. plaintext) letters, nor for ciphers in which ciphertext units corresponds to larger units of plaintext such syllables or words. Extensions in these directions are all very worthwhile to explore. References Friedrich L. Bauer. 2007. Decrypted Secrets. Springer-Verlag, Berlin Heidelberg. George W. Hart. 1994. To Decode Short Cryptograms. Communications of the ACM, 37(9): 102–108. Kevin Knight. 1999. Decoding Complexity in WordReplacement Translation Models. Computational Linguistics, 25(4):607–615. Kevin Knight, Anish Nair, Nishit Rathod, Kenji Yamada. Unsupervised Analysis for Decipherment Problems. Proceedings of the COLING/ACL 2006, 2006, 499–506. George Nagy, Sharad Seth, Kent Einspahr. 1987. Decoding Substitution Ciphers by Means of Word Matching with Application to OCR. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):710–715. 
Shmuel Peleg and Azriel Rosenfeld. 1979. Breaking Substitution Ciphers Using a Relaxation Algorithm. Communications of the ACM, 22(11):589–605. Sujith Ravi, Kevin Knight. 2008. Attacking Decipherment Problems Optimally with Low-Order N-gram Models Proceedings of the ACL 2008, 812–819. Antal van den Bosch, Sander Canisius. 2006. Improved Morpho-phonological Sequence Processing with Constraint Satisfaction Inference Proceedings of the Eighth Meeting of the ACL Special Interest Group on Computational Phonology at HLT-NAACL 2006, 41–49. 1047
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1048–1057, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Statistical Model for Lost Language Decipherment Benjamin Snyder and Regina Barzilay CSAIL Massachusetts Institute of Technology {bsnyder,regina}@csail.mit.edu Kevin Knight ISI University of Southern California [email protected] Abstract In this paper we propose a method for the automatic decipherment of lost languages. Given a non-parallel corpus in a known related language, our model produces both alphabetic mappings and translations of words into their corresponding cognates. We employ a non-parametric Bayesian framework to simultaneously capture both low-level character mappings and highlevel morphemic correspondences. This formulation enables us to encode some of the linguistic intuitions that have guided human decipherers. When applied to the ancient Semitic language Ugaritic, the model correctly maps 29 of 30 letters to their Hebrew counterparts, and deduces the correct Hebrew cognate for 60% of the Ugaritic words which have cognates in Hebrew. 1 Introduction Dozens of lost languages have been deciphered by humans in the last two centuries. In each case, the decipherment has been considered a major intellectual breakthrough, often the culmination of decades of scholarly efforts. Computers have played no role in the decipherment any of these languages. In fact, skeptics argue that computers do not possess the “logic and intuition” required to unravel the mysteries of ancient scripts.1 In this paper, we demonstrate that at least some of this logic and intuition can be successfully modeled, allowing computational tools to be used in the decipherment process. 1“Successful archaeological decipherment has turned out to require a synthesis of logic and intuition ...that computers do not (and presumably cannot) possess.” A. Robinson, “Lost Languages: The Enigma of the World’s Undeciphered Scripts” (2002) Our definition of the computational decipherment task closely follows the setup typically faced by human decipherers (Robinson, 2002). Our input consists of texts in a lost language and a corpus of non-parallel data in a known related language. The decipherment itself involves two related subtasks: (i) finding the mapping between alphabets of the known and lost languages, and (ii) translating words in the lost language into corresponding cognates of the known language. While there is no single formula that human decipherers have employed, manual efforts have focused on several guiding principles. A common starting point is to compare letter and word frequencies between the lost and known languages. In the presence of cognates the correct mapping between the languages will reveal similarities in frequency, both at the character and lexical level. In addition, morphological analysis plays a crucial role here, as highly frequent morpheme correspondences can be particularly revealing. In fact, these three strands of analysis (character frequency, morphology, and lexical frequency) are intertwined throughout the human decipherment process. Partial knowledge of each drives discovery in the others. We capture these intuitions in a generative Bayesian model. This model assumes that each word in the lost language is composed of morphemes which were generated with latent counterparts in the known language. We model bilingual morpheme pairs as arising through a series of Dirichlet processes. 
This allows us to assign probabilities based both on character-level correspondences (using a character-edit base distribution) as well as higher-level morpheme correspondences. In addition, our model carries out an implicit morphological analysis of the lost language, utilizing the known morphological structure of the related language. This model structure allows us to capture the interplay between the character1048 and morpheme-level correspondences that humans have used in the manual decipherment process. In addition, we introduce a novel technique for imposing structural sparsity constraints on character-level mappings. We assume that an accurate alphabetic mapping between related languages will be sparse in the following way: each letter will map to a very limited subset of letters in the other language. We capture this intuition by adapting the so-called “spike and slab” prior to the Dirichlet-multinomial setting. For each pair of characters in the two languages, we posit an indicator variable which controls the prior likelihood of character substitutions. We define a joint prior over these indicator variables which encourages sparse settings. We applied our model to a corpus of Ugaritic, an ancient Semitic language discovered in 1928. Ugaritic was manually deciphered in 1932, using knowledge of Hebrew, a related language. We compare our method against the only existing decipherment baseline, an HMM-based character substitution cipher (Knight and Yamada, 1999; Knight et al., 2006). The baseline correctly maps the majority of letters — 22 out of 30 — to their correct Hebrew counterparts, but only correctly translates 29% of all cognates. In comparison, our method yields correct mappings for 29 of 30 letters, and correctly translates 60.4% of all cognates. 2 Related Work Our work on decipherment has connections to three lines of work in statistical NLP. First, our work relates to research on cognate identification (Lowe and Mazaudon, 1994; Guy, 1994; Kondrak, 2001; Bouchard et al., 2007; Kondrak, 2009). These methods typically rely on information that is unknown in a typical deciphering scenario (while being readily available for living languages). For instance, some methods employ a hand-coded similarity function (Kondrak, 2001), while others assume knowledge of the phonetic mapping or require parallel cognate pairs to learn a similarity function (Bouchard et al., 2007). A second related line of work is lexicon induction from non-parallel corpora. While this research has similar goals, it typically builds on information or resources unavailable for ancient texts, such as comparable corpora, a seed lexicon, and cognate information (Fung and McKeown, 1997; Rapp, 1999; Koehn and Knight, 2002; Haghighi et al., 2008). Moreover, distributional methods that rely on co-occurrence analysis operate over large corpora, which are typically unavailable for a lost language. Finally, Knight and Yamada (1999) and Knight et al. (2006) describe a computational HMMbased method for deciphering an unknown script that represents a known spoken language. This method “makes the text speak” by gleaning character-to-sound mappings from non-parallel character and sound sequences. It does not relate words in different languages, thus it cannot encode deciphering constraints similar to the ones considered in this paper. More importantly, this method had not been applied to archaeological data. 
While lost languages are gaining increasing interest in the NLP community (Knight and Sproat, 2009), there have been no successful attempts of their automatic decipherment. 3 Background on Ugaritic Manual Decipherment of Ugaritic Ugaritic tablets were first found in Syria in 1929 (Smith, 1955; Watson and Wyatt, 1999). At the time, the cuneiform writing on the tablets was of an unknown type. Charles Virolleaud, who lead the initial decipherment effort, recognized that the script was likely alphabetic, since the inscribed words consisted of only thirty distinct symbols. The location of the tablets discovery further suggested that Ugaritic was likely to have been a Semitic language from the Western branch, with properties similar to Hebrew and Aramaic. This realization was crucial for deciphering the Ugaritic script. In fact, German cryptographer and Semitic scholar Hans Bauer decoded the first two Ugaritic letters—mem and lambda—by mapping them to Hebrew letters with similar occurrence patterns in prefixes and suffixes. Bootstrapping from this finding, Bauer found words in the tablets that were likely to serve as cognates to Hebrew words— e.g., the Ugaritic word for king matches its Hebrew equivalent. Through this process a few more letters were decoded, but the Ugaritic texts were still unreadable. What made the final decipherment possible was a sheer stroke of luck— Bauer guessed that a word inscribed on an ax discovered in the Ras Shamra excavations was the Ugaritic word for ax. Bauer’s guess was correct, though he selected the wrong phonetic sequence. Edouard Dhorme, another cryptographer 1049 and Semitic scholar, later corrected the reading, expanding a set of translated words. Discoveries of additional tablets allowed Bauer, Dhorme and Virolleaud to revise their hypothesis, successfully completing the decipherment. Linguistic Features of Ugaritic Ugaritic shares many features with other ancient Semitic languages, following the same word order, gender, number, and case structure (Hetzron, 1997). It is a morphologically rich language, with triliteral roots and many prefixes and suffixes. At the same time, it exhibits a number of features that distinguish it from Hebrew. Ugaritic has a bigger phonemic inventory than Hebrew, yielding a bigger alphabet – 30 letters vs. 22 in Hebrew. Another distinguishing feature of Ugaritic is that vowels are only written with glottal stops while in Hebrew many long vowels are written using homorganic consonants. Ugaritic also does not have articles, while Hebrew nouns and adjectives take definite articles which are realized as prefixes. These differences result in significant divergence between Hebrew and Ugaritic cognates, thereby complicating the decipherment process. 4 Problem Formulation We are given a corpus in a lost language and a nonparallel corpus in a related language from the same language family. Our primary goal is to translate words in the unknown language by mapping them to cognates in the known language. As part of this process, we induce a lower-level mapping between the letters of the two alphabets, capturing the regular phonetic correspondences found in cognates. We make several assumptions about the writing system of the lost language. First, we assume that the writing system is alphabetic in nature. In general, this assumption can be easily validated by counting the number of symbols found in the written record. 
Next, we assume that the corpus has been transcribed into electronic format, where the graphemes present in the physical text have been unambiguously identified. Finally, we assume that words are explicitly separated in the text, either by white space or a special symbol. We also make a mild assumption about the morphology of the lost language. We posit that each word consists of a stem, prefix, and suffix, where the latter two may be omitted. This assumption captures a wide range of human languages and a variety of morphological systems. While the correct morphological analysis of words in the lost language must be learned, we assume that the inventory and frequencies of prefixes and suffixes in the known language are given. In summary, the observed input to the model consists of two elements: (i) a list of unanalyzed word types derived from a corpus in the lost language, and (ii) a morphologically analyzed lexicon in a known related language derived from a separate corpus, in our case non-parallel. 5 Model 5.1 Intuitions Our goal is to incorporate the logic and intuition used by human decipherers in an unsupervised statistical model. To make these intuitions concrete, consider the following toy example, consisting of a lost language much like English, but written using numerals: • 15234 (asked) • 1525 (asks) • 4352 (desk) Analyzing the undeciphered corpus, we might first notice a pair of endings, -34, and -5, which both occur after the initial sequence 152- (and may likewise occur at the end of a variety of words in the corpus). If we know this lost language to be closely related to English, we can surmise that these two endings correspond to the English verbal suffixes -ed and -s. Using this knowledge, we can hypothesize the following character correspondences: (3 = e), (4 = d), (5 = s). We now know that (4252 = des2) and we can use our knowledge of the English lexicon to hypothesize that this word is desk, thereby learning the correspondence (2 = k). Finally, we can use similar reasoning to reveal that the initial character sequence 152- corresponds to the English verb ask. As this example illustrates, human decipherment efforts proceed by discovering both character-level and morpheme-level correspondences. This interplay implicitly relies on a morphological analysis of words in the lost language, while utilizing knowledge of the known language’s lexicon and morphology. One final intuition our model should capture is the sparsity of the alphabetic correspondence between related languages. We know from comparative linguistics that the correct mapping will pre1050 serve regular phonetic relationships between the two languages (as exemplified by cognates). As a result, each character in one language will map to a small number of characters in the other language (typically one, but sometimes two or three). By incorporating this structural sparsity intuition, we can allow the model to focus on on a smaller set of linguistically valid hypotheses. Below we give an overview of our model, which is designed to capture these linguistic intuitions. 5.2 Model Structure Our model posits that every observed word in the lost language is composed of a sequence of morphemes (prefix, stem, suffix). Furthermore we posit that each morpheme was probabilistically generated jointly with a latent counterpart in the known language. Our goal is to find those counterparts that lead to high frequency correspondences both at the character and morpheme level. 
The technical challenge is that each level of correspondence (character and morpheme) can completely describe the observed data. A probabilistic mechanism based simply on one leaves no room for the other to play a role. We resolve this tension by employing a non-parametric Bayesian model: the distributions over bilingual morpheme pairs assign probability based on recurrent patterns at the morpheme level. These distributions are themselves drawn from a prior probabilistic process which favors distributions with consistent character-level correspondences. We now give a formal description of the model (see Figure 1 for a graphical overview). There are four basic layers in the generative process:
1. Structural sparsity: draw a set of indicator variables ⃗λ corresponding to character-edit operations.
2. Character-edit distribution: draw a base distribution G0 parameterized by weights on character-edit operations.
3. Morpheme-pair distributions: draw a set of distributions on bilingual morpheme pairs Gstm, Gpre|stm, Gsuf|stm.
4. Word generation: draw pairs of cognates in the lost and known language, as well as words in the lost language with no cognate counterpart.
Figure 1: Plate diagram of the decipherment model. The structural sparsity indicator variables ⃗λ determine the values of the base distribution hyperparameters ⃗v. The base distribution G0 defines probabilities over string-pairs based solely on character-level edits. The morpheme-pair distributions Gstm, Gpre|stm, Gsuf|stm directly assign probabilities to highly frequent morpheme pairs.
We now go through each step in more detail.
Structural Sparsity The first step of the generative process provides a control on the sparsity of edit-operation probabilities, encoding the linguistic intuition that the correct character-level mappings should be sparse. The set of edit operations includes character substitutions, insertions, and deletions, as well as a special end symbol: {(u, h), (ϵ, h), (u, ϵ), END} (where u and h range over characters in the lost and known languages, respectively). For each edit operation e we posit a corresponding indicator variable λe. The set of character substitutions with indicators set to one, {(u, h) : λ(u,h) = 1}, conveys the set of phonetically valid correspondences. We define a joint prior over these variables to encourage sparse character mappings. This prior can be viewed as a distribution over binary matrices and is defined to encourage rows and columns to sum to low integer values (typically 1). More precisely, for each character u in the lost language, we count the number of mappings c(u) = ∑h λ(u,h). We then define a set of features which count how many of these characters map to i other characters beyond some budget bi: fi = max(0, |{u : c(u) = i}| − bi). Likewise, we define corresponding features f′i and budgets b′i for the characters h in the known language. The prior over ⃗λ is then defined as P(⃗λ) = exp(⃗f · ⃗w + ⃗f′ · ⃗w) / Z (1), where the feature weight vector ⃗w is set to encourage sparse mappings, and Z is a corresponding normalizing constant, which we never need compute. We set ⃗w so that each character must map to at least one other character, and so that mappings to more than one other character are discouraged.2
Character-edit Distribution The next step in the generative process is drawing a base distribution G0 over character edit sequences (each of which yields a bilingual pair of morphemes).
This distribution is parameterized by a set of weights ⃗ϕ on edit operations, where the weights over substitutions, insertions, and deletions each individually sum to one. In addition, G0 provides a fixed distribution q over the number of insertions and deletions occurring in any single edit sequence. Probabilities over edit sequences (and consequently on bilingual morpheme pairs) are then defined according to G0 as: P(⃗e) = ∏i ϕei · q(#ins(⃗e), #del(⃗e)). We observe that the average Ugaritic word is over two letters longer than the average Hebrew word. Thus, occurrences of Hebrew character insertions are a priori likely, and Ugaritic character deletions are very unlikely. In our experiments, we set q to disallow Ugaritic deletions, and to allow one Hebrew insertion per morpheme (with probability 0.4). The prior on the base distribution G0 is a Dirichlet distribution with hyperparameters ⃗v, i.e., ⃗ϕ ∼ Dirichlet(⃗v). Each value ve thus corresponds to a character edit operation e. Crucially, the value of each ve depends deterministically on its corresponding indicator variable: ve = 1 if λe = 0, and ve = K if λe = 1, where K is some constant value > 1.3 The overall effect is that when λe = 0, the marginal prior density of the corresponding edit weight ϕe spikes at 0. When λe = 1, the corresponding marginal prior density remains relatively flat and unconstrained. See (Ishwaran and Rao, 2005) for a similar application of “spike-and-slab” priors in the regression scenario.
2We set w0 = −∞, w1 = 0, w2 = −50, w>2 = −∞, with budgets b′2 = 7, b′3 = 1 (otherwise zero), reflecting the knowledge that there are eight more Ugaritic than Hebrew letters.
3Set to 50 in our experiments.
Morpheme-pair Distributions Next we draw a series of distributions which directly assign probability to morpheme pairs. The previously drawn base distribution G0 along with a fixed concentration parameter α define a Dirichlet process (Antoniak, 1974): DP(G0, α), which provides probabilities over morpheme-pair distributions. The resulting distributions are likely to be skewed in favor of a few frequently occurring morpheme pairs, while remaining sensitive to the character-level probabilities of the base distribution. Our model distinguishes between three types of morphemes: prefixes, stems, and suffixes. As a result, we model each morpheme type as arising from distinct Dirichlet processes that share a single base distribution: Gstm ∼ DP(G0, αstm), Gpre|stm ∼ DP(G0, αpre), Gsuf|stm ∼ DP(G0, αsuf). We model prefix and suffix distributions as conditionally dependent on the part-of-speech of the stem morpheme-pair. This choice captures the linguistic fact that different parts-of-speech bear distinct affix frequencies. Thus, while we draw a single distribution Gstm, we maintain separate distributions Gpre|stm and Gsuf|stm for each possible stem part-of-speech.
Word Generation Once the morpheme-pair distributions have been drawn, actual word pairs may now be generated. First the model draws a boolean variable ci to determine whether word i in the lost language has a cognate in the known language, according to some prior P(ci). If ci = 1, then a cognate word pair (u, h) is produced: (ustm, hstm) ∼ Gstm, (upre, hpre) ∼ Gpre|stm, (usuf, hsuf) ∼ Gsuf|stm, with u = upre ustm usuf and h = hpre hstm hsuf. Otherwise, a lone word u is generated according to a uniform character-level language model.
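As a rough illustration of this word-generation step (a sketch, not the actual implementation), the following Python snippet draws a prefix, stem, and suffix pair and concatenates them. The toy dictionaries standing in for Gstm, Gpre|stm, and Gsuf|stm, the placeholder alphabet, and p_cognate are all hypothetical; the real model draws these distributions from Dirichlet processes and conditions the affix distributions on the stem's part-of-speech.

```python
import random

# Toy stand-ins for the morpheme-pair distributions; every entry is hypothetical.
G_stm = {("mlk", "mlk"): 0.6, ("spr", "spr"): 0.4}
G_pre = {("w", "w"): 0.3, ("", ""): 0.7}
G_suf = {("m", "ym"): 0.2, ("", ""): 0.8}
p_cognate = 0.7                                   # prior P(c_i = 1)
alphabet = "abgdhwzklmnspqrst"                    # placeholder character set

def draw(dist):
    pairs, weights = zip(*dist.items())
    return random.choices(pairs, weights=weights)[0]

def generate_word():
    if random.random() < p_cognate:               # c_i = 1: emit a cognate pair
        u_stm, h_stm = draw(G_stm)
        u_pre, h_pre = draw(G_pre)
        u_suf, h_suf = draw(G_suf)
        return u_pre + u_stm + u_suf, h_pre + h_stm + h_suf
    # c_i = 0: lone lost-language word from a uniform character model
    u = "".join(random.choice(alphabet) for _ in range(random.randint(2, 6)))
    return u, None

print(generate_word())
```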
In summary, this model structure captures both character and lexical level correspondences, while utilizing morphological knowledge of the known language. An additional feature of this multilayered model structure is that each distribution over morpheme pairs is derived from the single character-level base distribution G0. As a result, any character-level mappings learned from one type of morphological correspondence will be propagated to all other morpheme distributions. Finally, the character-level mappings discovered by the model are encouraged to obey linguistically motivated structural sparsity constraints.
6 Inference
For each word ui in our undeciphered language we predict a morphological segmentation (upre ustm usuf)i and corresponding cognate in the known language (hpre hstm hsuf)i. Ideally we would like to predict the analysis with highest marginal probability under our model given the observed undeciphered corpus and related language lexicon. In order to do so, we need to integrate out all the other latent variables in our model. As these integrals are intractable to compute exactly, we resort to the standard Monte Carlo approximation. We collect samples of the variables over which we wish to marginalize but for which we cannot compute closed-form integrals. We then approximate the marginal probabilities for undeciphered word ui by summing over all the samples, and predicting the analysis with highest probability. In our sampling algorithm, we avoid sampling the base distribution G0 and the derived morpheme-pair distributions (Gstm etc.), instead using analytical closed forms. We explicitly sample the sparsity indicator variables ⃗λ, the cognate indicator variables ci, and latent word analyses (segmentations and Hebrew counterparts). To do so tractably, we use Gibbs sampling to draw each latent variable conditioned on our current sample of the others. Although the samples are no longer independent, they form a Markov chain whose stationary distribution is the true joint distribution defined by the model (Geman and Geman, 1984).
6.1 Sampling Word Analyses
For each undeciphered word, we need to sample a morphological segmentation (upre, ustm, usuf)i along with latent morphemes in the known language (hpre, hstm, hsuf)i. More precisely, we need to sample three character-edit sequences ⃗epre, ⃗estm, ⃗esuf which together yield the observed word ui. We break this into two sampling steps. First we sample the morphological segmentation of ui, along with the part-of-speech pos of the latent stem cognate. To do so, we enumerate each possible segmentation and part-of-speech and calculate its joint conditional probability (for notational clarity, we leave implicit the conditioning on the other samples in the corpus): P(upre, ustm, usuf, pos) = ∑⃗estm P(⃗estm) · ∑⃗epre P(⃗epre|pos) · ∑⃗esuf P(⃗esuf|pos) (2), where the summations over character-edit sequences are restricted to those which yield the segmentation (upre, ustm, usuf) and a latent cognate with part-of-speech pos. For a particular stem edit-sequence ⃗estm, we compute its conditional probability in closed form according to a Chinese Restaurant Process (Antoniak, 1974). To do so, we use counts from the other sampled word analyses: countstm(⃗estm) gives the number of times that the entire edit-sequence ⃗estm has been observed: P(⃗estm) ∝ (countstm(⃗estm) + α ∏i p(ei)) / (n + α), where n is the number of other word analyses sampled, and α is a fixed concentration parameter. The product ∏i p(ei) gives the probability of ⃗estm according to the base distribution G0.
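A minimal sketch of this Chinese Restaurant Process term, with the base-distribution product abstracted into a base_prob callable; the edit-sequence encoding, counts, and hyperparameter values below are illustrative assumptions rather than the paper's actual data structures.

```python
from math import prod

def crp_score(edit_seq, counts, n, alpha, base_prob):
    """Unnormalized CRP probability of a stem edit-sequence.
    counts: dict mapping edit-sequences to how often they were sampled in the
    other word analyses; n: number of other analyses; alpha: concentration."""
    return (counts.get(edit_seq, 0) + alpha * base_prob(edit_seq)) / (n + alpha)

# Hypothetical values: a frequently re-used edit-sequence and a flat base score.
counts = {(("m", "m"), ("l", "l"), ("k", "k")): 12}
base_prob = lambda es: prod(0.05 for _ in es)        # placeholder for prod p(e_i)
seq = (("m", "m"), ("l", "l"), ("k", "k"))
print(crp_score(seq, counts, n=500, alpha=1.0, base_prob=base_prob))
```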
Since the parameters of G0 are left unsampled, we use the marginalized form: p(e) = (ve + count(e)) / (∑e′ ve′ + k) (3), where count(e) is the number of times that character-edit e appears in distinct edit-sequences (across prefixes, stems, and suffixes), and k is the sum of these counts across all character-edits. Recall that ve is a hyperparameter for the Dirichlet prior on G0 and depends on the value of the corresponding indicator variable λe. Once the segmentation (upre, ustm, usuf) and part-of-speech pos have been sampled, we proceed to sample the actual edit-sequences (and thus the latent morpheme counterparts). Now, instead of summing over the values in Equation 2, we instead sample from them.
6.2 Sampling Sparsity Indicators
Recall that each sparsity indicator λe determines the value of the corresponding hyperparameter ve of the Dirichlet prior for the character-edit base distribution G0. In addition, we have an unnormalized joint prior P(⃗λ) = g(⃗λ)/Z which encourages a sparse setting of these variables. To sample a particular λe, we consider the set ⃗λ in which λe = 0 and ⃗λ′ in which λe = 1. We then compute: P(⃗λ) ∝ g(⃗λ) · ve^[count(e)] / (∑e′ ve′)^[k], where k is the sum of counts for all edit operations, and the notation a^[b] indicates the ascending factorial. Likewise, we can compute a probability for ⃗λ′ with corresponding values v′e.
6.3 Sampling Cognate Indicators
Finally, for each word ui, we sample a corresponding indicator variable ci. To do so, we calculate Equation 2 for all possible segmentations and parts-of-speech and sum the resulting values to obtain the conditional likelihood P(ui|ci = 1). We also calculate P(ui|ci = 0) using a uniform unigram character-level language model (which thus depends only on the number of characters in ui). We then sample from among the two values P(ui|ci = 1) · P(ci = 1) and P(ui|ci = 0) · P(ci = 0).
6.4 High-level Resampling
Besides the individual sampling steps detailed above, we also consider several larger sampling moves in order to speed convergence. For example, for each type of edit-sequence ⃗e which has been sampled (and may now occur many times throughout the data), we consider a single joint move to another edit-sequence ⃗e′ (both of which yield the same lost language morpheme u). The details are much the same as above, and as before the set of possible edit-sequences is limited by the string u and the known language lexicon. We also resample groups of the sparsity indicator variables ⃗λ in tandem, to allow a more rapid exploration of the probability space. For each character u, we block sample the entire set {λ(u,h)}h, and likewise for each character h.
6.5 Implementation Details
Many of the steps detailed above involve the consideration of all possible edit-sequences consistent with (i) a particular undeciphered word ui and (ii) the entire lexicon of words in the known language (or some subset of words with a particular part-of-speech). In particular, we need to both sample from and sum over this space of possibilities repeatedly. Doing so by simple enumeration would needlessly repeat many sub-computations. Instead we use finite-state acceptors to compactly represent both the entire Hebrew lexicon as well as potential Hebrew word forms for each Ugaritic word. By intersecting two such FSAs and minimizing the result we can efficiently represent all potential Hebrew words for a particular Ugaritic word. We weight the edges in the FSA according to the base distribution probabilities (in Equation 3 above).
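For concreteness, a small sketch of the Equation 3 computation that such edge weighting would use; the spike value K, the edit inventory, and the counts below are illustrative, and a full implementation would of course cover all edit operations, not only those with nonzero counts.

```python
# Sketch of Equation 3: the marginalized probability of a single character
# edit, which is what the candidate-word FSA edges are reweighted with as the
# sampled counts change.
def edit_prob(e, lam, counts, K=50.0):
    """lam: dict of sparsity indicators lambda_e; counts: current usage counts
    of each edit across all sampled edit-sequences."""
    v = {op: (K if lam.get(op, 0) == 1 else 1.0) for op in counts}
    k_total = sum(counts.values())
    return (v[e] + counts[e]) / (sum(v.values()) + k_total)

counts = {("a", "A"): 40, ("b", "B"): 7, ("b", "P"): 1}   # hypothetical edits
lam = {("a", "A"): 1, ("b", "B"): 1, ("b", "P"): 0}
print(edit_prob(("b", "P"), lam, counts))
```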
Although these intersected acceptors have to be constantly reweighted to reflect changing probabilities, their topologies need only be computed once. One weighted correctly, marginals and samples can be computed using dynamic programming. Even with a large number of sampling rounds, it is difficult to fully explore the latent variable space for complex unsupervised models. Thus a clever initialization is usually required to start the sampler in a high probability region. We initialize our model with the results of the HMM-based baseline (see section 8), and rule out character substitutions with probability < 0.05 according to the baseline. 7 Experiments 7.1 Corpus and Annotations We apply our model to the ancient Ugaritic language (see Section 3 for background). Our undeciphered corpus consists of an electronic transcription of the Ugaritic tablets (Cunchillos et al., 2002). This corpus contains 7,386 unique word types. As our known language corpus, we use the Hebrew Bible, which is both geographically and temporally close to Ugaritic. To extract a Hebrew morphological lexicon we assume the existence of manual morphological and part-of-speech annotations (Groves and Lowery, 2006). We divide Hebrew stems into four main part-of-speech categories each with a distinct affix profile: Noun, Verb, Pronoun, and Particle. For each part-ofspeech category, we determine the set of allowable affixes using the annotated Bible corpus. 1054 Words Morphemes type token type token Baseline 28.82% 46.00% N/A N/A Our Model 60.42% 66.71% 75.07% 81.25% No Sparsity 46.08% 54.01% 69.48% 76.10% Table 1: Accuracy of cognate translations, measured with respect to complete word-forms and morphemes, for the HMM-based substitution cipher baseline, our complete model, and our model without the structural sparsity priors. Note that the baseline does not provide per-morpheme results, as it does not predict morpheme boundaries. To evaluate the output of our model, we annotated the words in the Ugaritic lexicon with the corresponding Hebrew cognates found in the standard reference dictionary (del Olo Lete and Sanmart´ın, 2004). In addition, manual morphological segmentation was carried out with the guidance of a standard Ugaritic grammar (Schniedewind and Hunt, 2007). Although Ugaritic is an inflectional rather than agglutinative language, in its written form (which lacks vowels) words can easily be segmented (e.g. wyplt.n becomes wy-plt.-n). Overall, we identified Hebrew cognates for 2,155 word forms, covering almost 1/3 of the Ugaritic vocabulary.4 8 Evaluation Tasks and Results We evaluate our model on four separate decipherment tasks: (i) Learning alphabetic mappings, (ii) translating cognates, (iii) identifying cognates, and (iv) morphological segmentation. As a baseline for the first three of these tasks (learning alphabetic mappings and translating and identifying cognates), we adapt the HMM-based method of Knight et al. (2006) for learning letter substitution ciphers. In its original setting, this model was used to map written texts to spoken language, under the assumption that each character was emitted from a hidden phonemic state. In our adaptation, we assume instead that each Ugaritic character was generated by a hidden Hebrew letter. Hebrew character trigram transition probabilities are estimated using the Hebrew Bible, and Hebrew to Ugaritic character emission probabilities are learned using EM. 
Finally, the highest probability sequence of latent Hebrew letters is predicted for each Ugaritic word-form, using Viterbi decoding.
4We are confident that a large majority of Ugaritic words with known Hebrew cognates were thus identified. The remaining Ugaritic words include many personal and geographic names, words with cognates in other Semitic languages, and words whose etymology is uncertain.
Alphabetic Mapping The first essential step towards successful decipherment is recovering the mapping between the symbols of the lost language and the alphabet of a known language. As a gold standard for this comparison, we use the well-established relationship between the Ugaritic and Hebrew alphabets (Hetzron, 1997). This mapping is not one-to-one but is generally quite sparse. Of the 30 Ugaritic symbols, 28 map predominantly to a single Hebrew letter, and the remaining two map to two different letters. As the Hebrew alphabet contains only 22 letters, six map to two distinct Ugaritic letters and two map to three distinct Ugaritic letters. We recover our model’s predicted alphabetic mappings by simply examining the sampled values of the binary indicator variables λu,h for each Ugaritic-Hebrew letter pair (u, h). Due to our structural sparsity prior P(⃗λ), the predicted mappings are sparse: each Ugaritic letter maps to only a single Hebrew letter, and most Hebrew letters map to only a single Ugaritic letter. To recover alphabetic mappings from the HMM substitution cipher baseline, we predict the Hebrew letter h which maximizes the model’s probability P(h|u), for each Ugaritic letter u. To evaluate these mappings, we simply count the number of Ugaritic letters that are correctly mapped to one of their Hebrew reflexes. By this measure, the baseline recovers correct mappings for 22 out of 30 Ugaritic characters (73.3%). Our model recovers correct mappings for all but one (very low frequency) Ugaritic character, yielding 96.67% accuracy.
Cognate Decipherment We compare the decipherment accuracy for Ugaritic words that have corresponding Hebrew cognates. We evaluate our model’s predictions on each distinct Ugaritic word-form at both the type and token level. As Table 1 shows, our method correctly translates over 60% of all distinct Ugaritic word-forms with Hebrew cognates and over 71% of the individual morphemes that compose them, outperforming the baseline by significant margins. Accuracy improves when the frequency of the word-forms is taken into account (token-level evaluation), indicating that the model is able to decipher frequent words more accurately than infrequent words. We also measure the average Levenshtein distance between predicted and actual cognate word-forms. On average, our model’s predictions lie 0.52 edit operations from the true cognate, whereas the baseline’s predictions average a distance of 1.26 edit operations. Finally, we evaluated the performance of our model when the structural sparsity constraints are not used. As Table 1 shows, performance degrades significantly in the absence of these priors, indicating the importance of modeling the sparsity of character mappings.
Figure 2: ROC curve for cognate identification.
Cognate identification We evaluate our model’s ability to identify cognates using the sampled indicator variables ci. As before, we compare our performance against the HMM substitution cipher baseline.
To produce baseline cognate identification predictions, we calculate the probability of each latent Hebrew letter sequence predicted by the HMM, and compare it to a uniform character-level Ugaritic language model (as done by our model, to avoid automatically assigning higher cognate probability to shorter Ugaritic words). For both our model and the baseline, we can vary the threshold for cognate identification by raising or lowering the cognate prior P(ci). As the prior is set higher, we detect more true cognates, but the false positive rate increases as well. Figure 2 shows the ROC curve obtained by varying this prior both for our model and the baseline. At all operating points, our model outperforms the baseline, and both models always predict better than chance. In practice for our model, we use a high cognate prior, thus only ruling out precision recall f-measure Morfessor 88.87% 67.48% 76.71% Our Model 86.62% 90.53% 88.53% Table 2: Morphological segmentation accuracy for a standard unsupervised baseline and our model. those Ugaritic word-forms which are very unlikely to have Hebrew cognates. Morphological segmentation Finally, we evaluate the accuracy of our model’s morphological segmentation for Ugaritic words. As a baseline for this comparison, we use Morfessor CategoriesMAP (Creutz and Lagus, 2007). As Table 2 shows, our model provides a significant boost in performance, especially for recall. This result is consistent with previous work showing that morphological annotations can be projected to new languages lacking annotation (Yarowsky et al., 2000; Snyder and Barzilay, 2008), but generalizes those results to the case where parallel data is unavailable. 9 Conclusion and Future Work In this paper we proposed a method for the automatic decipherment of lost languages. The key strength of our model lies in its ability to incorporate a range of linguistic intuitions in a statistical framework. We hope to address several issues in future work. Our model fails to take into account the known frequency of Hebrew words and morphemes. In fact, the most common error is incorrectly translating the masculine plural suffix (-m) as the third person plural possessive suffix (-m) rather than the correct and much more common plural suffix (-ym). Also, even with the correct alphabetic mapping, many words can only be deciphered by examining their literary context. Our model currently operates purely on the vocabulary level and thus fails to take this contextual information into account. Finally, we intend to explore our model’s predictive power when the family of the lost language is unknown.5 5The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, and grant IIS0835652) and the Microsoft Research New Faculty Fellowship. Thanks to Michael Collins, Tommi Jaakkola, and the MIT NLP group for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. 1056 References C. E. Antoniak. 1974. Mixtures of Dirichlet processes with applications to bayesian nonparametric problems. The Annals of Statistics, 2:1152–1174, November. Alexandre Bouchard, Percy Liang, Thomas Griffiths, and Dan Klein. 2007. A probabilistic approach to diachronic phonology. In Proceedings of EMNLP, pages 887–896. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. 
ACM Transactions on Speech and Language Processing, 4(1). Jesus-Luis Cunchillos, Juan-Pablo Vita, and Jose´Angel Zamora. 2002. Ugaritic data bank. CDROM. Gregoria del Olo Lete and Joaqu´ın Sanmart´ın. 2004. A Dictionary of the Ugaritic Language in the Alphabetic Tradition. Number 67 in Handbook of Oriental Studies. Section 1 The Near and Middle East. Brill. Pascale Fung and Kathleen McKeown. 1997. Finding terminology translations from non-parallel corpora. In Proceedings of the Annual Workshop on Very Large Corpora, pages 192–202. S. Geman and D. Geman. 1984. Stochastic relaxation, gibbs distributions and the bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12:609–628. Alan Groves and Kirk Lowery, editors. 2006. The Westminster Hebrew Bible Morphology Database. Westminster Hebrew Institute, Philadelphia, PA, USA. Jacques B. M. Guy. 1994. An algorithm for identifying cognates in bilingual wordlists and its applicability to machine translation. Journal of Quantitative Linguistics, 1(1):35–42. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of the ACL/HLT, pages 771–779. Robert Hetzron, editor. 1997. The Semitic Languages. Routledge. H. Ishwaran and J.S. Rao. 2005. Spike and slab variable selection: frequentist and Bayesian strategies. The Annals of Statistics, 33(2):730–773. Kevin Knight and Richard Sproat. 2009. Writing systems, transliteration and decipherment. NAACL Tutorial. K. Knight and K. Yamada. 1999. A computational approach to deciphering unknown scripts. In ACL Workshop on Unsupervised Learning in Natural Language Processing. Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL, pages 499–506. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of the ACL-02 workshop on Unsupervised lexical acquisition, pages 9–16. Grzegorz Kondrak. 2001. Identifying cognates by phonetic and semantic similarity. In Proceeding of NAACL, pages 1–8. Grzegorz Kondrak. 2009. Identification of cognates and recurrent sound correspondences in word lists. Traitement Automatique des Langues, 50(2):201– 235. John B. Lowe and Martine Mazaudon. 1994. The reconstruction engine: a computer implementation of the comparative method. Computational Linguistics, 20(3):381–417. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In Proceedings of the ACL, pages 519–526. Andrew Robinson. 2002. Lost Languages: The Enigma of the World’s Undeciphered Scripts. McGraw-Hill. William M. Schniedewind and Joel H. Hunt. 2007. A Primer on Ugaritic: Language, Culture and Literature. Cambridge University Press. Mark S. Smith, editor. 1955. Untold Stories: The Bible and Ugaritic Studies in the Twentieth Century. Hendrickson Publishers. Benjamin Snyder and Regina Barzilay. 2008. Crosslingual propagation for morphological analysis. In Proceedings of the AAAI, pages 848–854. Wilfred Watson and Nicolas Wyatt, editors. 1999. Handbook of Ugaritic Studies. Brill. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2000. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of HLT, pages 161–168. 1057
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1058–1066, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Efficient Inference Through Cascades of Weighted Tree Transducers Jonathan May and Kevin Knight Information Sciences Institute University of Southern California Marina del Rey, CA 90292 {jonmay,knight}@isi.edu Heiko Vogler Technische Universit¨at Dresden Institut f¨ur Theoretische Informatik 01062 Dresden, Germany [email protected] Abstract Weighted tree transducers have been proposed as useful formal models for representing syntactic natural language processing applications, but there has been little description of inference algorithms for these automata beyond formal foundations. We give a detailed description of algorithms for application of cascades of weighted tree transducers to weighted tree acceptors, connecting formal theory with actual practice. Additionally, we present novel on-the-fly variants of these algorithms, and compare their performance on a syntax machine translation cascade based on (Yamada and Knight, 2001). 1 Motivation Weighted finite-state transducers have found recent favor as models of natural language (Mohri, 1997). In order to make actual use of systems built with these formalisms we must first calculate the set of possible weighted outputs allowed by the transducer given some input, which we call forward application, or the set of possible weighted inputs given some output, which we call backward application. After application we can do some inference on this result, such as determining its k highest weighted elements. We may also want to divide up our problems into manageable chunks, each represented by a transducer. As noted by Woods (1980), it is easier for designers to write several small transducers where each performs a simple transformation, rather than painstakingly construct a single complicated device. We would like to know, then, the result of transformation of input or output by a cascade of transducers, one operating after the other. As we will see, there are various strategies for approaching this problem. We will consider offline composition, bucket brigade application, and on-the-fly application. Application of cascades of weighted string transducers (WSTs) has been well-studied (Mohri, 1997). Less well-studied but of more recent interest is application of cascades of weighted tree transducers (WTTs). We tackle application of WTT cascades in this work, presenting: • explicit algorithms for application of WTT cascades • novel algorithms for on-the-fly application of WTT cascades, and • experiments comparing the performance of these algorithms. 2 Strategies for the string case Before we discuss application of WTTs, it is helpful to recall the solution to this problem in the WST domain. We recall previous formal presentations of WSTs (Mohri, 1997) and note informally that they may be represented as directed graphs with designated start and end states and edges labeled with input symbols, output symbols, and weights.1 Fortunately, the solution for WSTs is practically trivial—we achieve application through a series of embedding, composition, and projection operations. Embedding is simply the act of representing a string or regular string language as an identity WST. 
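As a concrete, toy illustration of the embedding step (a sketch under our own encoding conventions, not a particular toolkit's API), the snippet below builds an identity WST for a token sequence as a plain arc list.

```python
def embed_tokens(tokens):
    """Build an identity WST for a token sequence as a plain arc list:
    (state, next_state, input_symbol, output_symbol, weight)."""
    arcs = [(i, i + 1, tok, tok, 1.0) for i, tok in enumerate(tokens)]
    return {"start": 0, "final": len(tokens), "arcs": arcs}

print(embed_tokens(["a", "a"]))
# {'start': 0, 'final': 2, 'arcs': [(0, 1, 'a', 'a', 1.0), (1, 2, 'a', 'a', 1.0)]}
```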
Composition of WSTs, that is, generating a single WST that captures the transformations of two input WSTs used in sequence, is not at all trivial, but has been well covered in, e.g., (Mohri, 2009), where directly implementable algorithms can be found. Finally, projection is another trivial operation—the domain or range language can be obtained from a WST by ignoring the output or input symbols, respectively, on its arcs, and summing weights on otherwise identical arcs. By embedding an input, composing the result with the given WST, and projecting the result, forward application is accomplished.2 We are then left with a weighted string acceptor (WSA), essentially a weighted, labeled graph, which can be traversed by well-known algorithms to efficiently find the k-best paths.
1We assume throughout this paper that weights are in R+ ∪ {+∞}, that the weight of a path is calculated as the product of the weights of its edges, and that the weight of a (not necessarily finite) set T of paths is calculated as the sum of the weights of the paths of T.
2For backward applications, the roles of input and output are simply exchanged.
Figure 1: Three different approaches to application through cascades of WSTs. (a) Input string “a a” embedded in an identity WST; (b) first WST in cascade; (c) second WST in cascade; (d) offline composition approach: compose the transducers; (e) bucket brigade approach: apply WST (b) to WST (a); (f) result of offline or bucket application after projection; (g) initial on-the-fly stand-in for (f); (h) on-the-fly stand-in after exploring outgoing edges of state ADF; (i) on-the-fly stand-in after best path has been found.
Because WSTs can be freely composed, extending application to operate on a cascade of WSTs is fairly trivial. The only question is one of composition order: whether to initially compose the cascade into a single transducer (an approach we call offline composition) or to compose the initial embedding with the first transducer, trim useless states, compose the result with the second, and so on (an approach we call bucket brigade). The appropriate strategy generally depends on the structure of the individual transducers. A third approach builds the result incrementally, as dictated by some algorithm that requests information about it. Such an approach, which we call on-the-fly, was described in (Pereira and Riley, 1997; Mohri, 2009; Mohri et al., 2000).
If we can efficiently calculate the outgoing edges of a state of the result WSA on demand, without calculating all edges in the entire machine, we can maintain a stand-in for the result structure, a machine consisting at first of only the start state of the true result. As a calling algorithm (e.g., an implementation of Dijkstra’s algorithm) requests information about the result graph, such as the set of outgoing edges from a state, we replace the current stand-in with a richer version by adding the result of the request. The on-the-fly approach has a distinct advantage over the other two methods in that the entire result graph need not be built. A graphical representation of all three methods is presented in Figure 1. 3 Application of tree transducers Now let us revisit these strategies in the setting of trees and tree transducers. Imagine we have a tree or set of trees as input that can be represented as a weighted regular tree grammar3 (WRTG) and a WTT that can transform that input with some weight. We would like to know the k-best trees the WTT can produce as output for that input, along with their weights. We already know of several methods for acquiring k-best trees from a WRTG (Huang and Chiang, 2005; Pauls and Klein, 2009), so we then must ask if, analogously to the string case, WTTs preserve recognizability4 and we can form an application WRTG. Before we begin, however, we must define WTTs and WRTGs. 3.1 Preliminaries5 A ranked alphabet is a finite set Σ such that every member σ ∈Σ has a rank rk(σ) ∈N. We call Σ(k) ⊆Σ, k ∈N the set of those σ ∈Σ such that rk(σ) = k. The set of variables is denoted X = {x1, x2, . . .} and is assumed to be disjoint from any ranked alphabet used in this paper. We use ⊥to denote a symbol of rank 0 that is not in any ranked alphabet used in this paper. A tree t ∈TΣ is denoted σ(t1, . . . , tk) where k ≥0, σ ∈Σ(k), and t1, . . . , tk ∈TΣ. For σ ∈Σ(0) we 3This generates the same class of weighted tree languages as weighted tree automata, the direct analogue of WSAs, and is more useful for our purposes. 4A weighted tree language is recognizable iff it can be represented by a wrtg. 5The following formal definitions and notations are needed for understanding and reimplementation of the presented algorithms, but can be safely skipped on first reading and consulted when encountering an unfamiliar term. 1059 write σ ∈TΣ as shorthand for σ(). For every set S disjoint from Σ, let TΣ(S) = TΣ∪S, where, for all s ∈S, rk(s) = 0. We define the positions of a tree t = σ(t1, . . . , tk), for k ≥ 0, σ ∈ Σ(k), t1, . . . , tk ∈TΣ, as a set pos(t) ⊂N∗such that pos(t) = {ε} ∪{iv | 1 ≤i ≤k, v ∈pos(ti)}. The set of leaf positions lv(t) ⊆pos(t) are those positions v ∈pos(t) such that for no i ∈N, vi ∈pos(t). We presume standard lexicographic orderings < and ≤on pos. Let t, s ∈TΣ and v ∈pos(t). The label of t at position v, denoted by t(v), the subtree of t at v, denoted by t|v, and the replacement at v by s, denoted by t[s]v, are defined as follows: 1. For every σ ∈Σ(0), σ(ε) = σ, σ|ε = σ, and σ[s]ε = s. 2. For every t = σ(t1, . . . , tk) such that k = rk(σ) and k ≥1, t(ε) = σ, t|ε = t, and t[s]ε = s. For every 1 ≤i ≤k and v ∈pos(ti), t(iv) = ti(v), t|iv = ti|v, and t[s]iv = σ(t1, . . . , ti−1, ti[s]v, ti+1, . . . , tk). The size of a tree t, size(t) is |pos(t)|, the cardinality of its position set. The yield set of a tree is the set of labels of its leaves: for a tree t, yd(t) = {t(v) | v ∈lv(t)}. Let A and B be sets. Let ϕ : A →TΣ(B) be a mapping. 
We extend ϕ to the mapping ϕ : TΣ(A) →TΣ(B) such that for a ∈A, ϕ(a) = ϕ(a) and for k ≥0, σ ∈Σ(k), and t1, . . . , tk ∈TΣ(A), ϕ(σ(t1, . . . , tk)) = σ(ϕ(t1), . . . , ϕ(tk)). We indicate such extensions by describing ϕ as a substitution mapping and then using ϕ without further comment. We use R+ to denote the set {w ∈R | w ≥0} and R∞ + to denote R+ ∪{+∞}. Definition 3.1 (cf. (Alexandrakis and Bozapalidis, 1987)) A weighted regular tree grammar (WRTG) is a 4-tuple G = (N, Σ, P, n0) where: 1. N is a finite set of nonterminals, with n0 ∈N the start nonterminal. 2. Σ is a ranked alphabet of input symbols, where Σ ∩N = ∅. 3. P is a tuple (P ′, π), where P ′ is a finite set of productions, each production p of the form n −→u, n ∈N, u ∈TΣ(N), and π : P ′ →R+ is a weight function of the productions. We will refer to P as a finite set of weighted productions, each production p of the form n π(p) −−→u. A production p is a chain production if it is of the form ni w−→nj, where ni, nj ∈N.6 6In (Alexandrakis and Bozapalidis, 1987), chain productions are forbidden in order to avoid infinite summations. We explicitly allow such summations. A WRTG G is in normal form if each production is either a chain production or is of the form n w−→σ(n1, . . . , nk) where σ ∈Σ(k) and n1, . . . , nk ∈N. For WRTG G = (N, Σ, P, n0), s, t, u ∈TΣ(N), n ∈N, and p ∈P of the form n w−→u, we obtain a derivation step from s to t by replacing some leaf nonterminal in s labeled n with u. Formally, s ⇒p G t if there exists some v ∈lv(s) such that s(v) = n and s[u]v = t. We say this derivation step is leftmost if, for all v′ ∈lv(s) where v′ < v, s(v′) ∈Σ. We henceforth assume all derivation steps are leftmost. If, for some m ∈N, pi ∈P, and ti ∈TΣ(N) for all 1 ≤i ≤m, n0 ⇒p1 t1 . . . ⇒pm tm, we say the sequence d = (p1, . . . , pm) is a derivation of tm in G and that n0 ⇒∗tm; the weight of d is wt(d) = π(p1) · . . . · π(pm). The weighted tree language recognized by G is the mapping LG : TΣ →R∞ + such that for every t ∈TΣ, LG(t) is the sum of the weights of all (possibly infinitely many) derivations of t in G. A weighted tree language f : TΣ →R∞ + is recognizable if there is a WRTG G such that f = LG. We define a partial ordering ⪯on WRTGs such that for WRTGs G1 = (N1, Σ, P1, n0) and G2 = (N2, Σ, P2, n0), we say G1 ⪯G2 iff N1 ⊆N2 and P1 ⊆P2, where the weights are preserved. Definition 3.2 (cf. Def. 1 of (Maletti, 2008)) A weighted extended top-down tree transducer (WXTT) is a 5-tuple M = (Q, Σ, ∆, R, q0) where: 1. Q is a finite set of states. 2. Σ and ∆are the ranked alphabets of input and output symbols, respectively, where (Σ ∪∆) ∩Q = ∅. 3. R is a tuple (R′, π), where R′ is a finite set of rules, each rule r of the form q.y −→u for q ∈Q, y ∈TΣ(X), and u ∈T∆(Q × X). We further require that no variable x ∈X appears more than once in y, and that each variable appearing in u is also in y. Moreover, π : R′ →R∞ + is a weight function of the rules. As for WRTGs, we refer to R as a finite set of weighted rules, each rule r of the form q.y π(r) −−→u. A WXTT is linear (respectively, nondeleting) if, for each rule r of the form q.y w−→u, each x ∈yd(y) ∩X appears at most once (respectively, at least once) in u. We denote the class of all WXTTs as wxT and add the letters L and N to signify the subclasses of linear and nondeleting WTT, respectively. Additionally, if y is of the form σ(x1, . . . , xk), we remove the letter “x” to signify 1060 the transducer is not extended (i.e., it is a “traditional” WTT (F¨ul¨op and Vogler, 2009)). 
For WXTT M = (Q, Σ, ∆, R, q0), s, t ∈T∆(Q × TΣ), and r ∈R of the form q.y w−→u, we obtain a derivation step from s to t by replacing some leaf of s labeled with q and a tree matching y by a transformation of u, where each instance of a variable has been replaced by a corresponding subtree of the y-matching tree. Formally, s ⇒r M t if there is a position v ∈pos(s), a substitution mapping ϕ : X →TΣ, and a rule q.y w−→u ∈R such that s(v) = (q, ϕ(y)) and t = s[ϕ′(u)]v, where ϕ′ is a substitution mapping Q × X →T∆(Q × TΣ) defined such that ϕ′(q′, x) = (q′, ϕ(x)) for all q′ ∈Q and x ∈X. We say this derivation step is leftmost if, for all v′ ∈lv(s) where v′ < v, s(v′) ∈∆. We henceforth assume all derivation steps are leftmost. If, for some s ∈TΣ, m ∈N, ri ∈R, and ti ∈T∆(Q × TΣ) for all 1 ≤i ≤m, (q0, s) ⇒r1 t1 . . . ⇒rm tm, we say the sequence d = (r1, . . . , rm) is a derivation of (s, tm) in M; the weight of d is wt(d) = π(r1) · . . . · π(rm). The weighted tree transformation recognized by M is the mapping τM : TΣ × T∆→R∞ + , such that for every s ∈TΣ and t ∈T∆, τM(s, t) is the sum of the weights of all (possibly infinitely many) derivations of (s, t) in M. The composition of two weighted tree transformations τ : TΣ×T∆→R∞ + and µ : T∆×TΓ →R∞ + is the weighted tree transformation (τ; µ) : TΣ × TΓ →R∞ + where for every s ∈TΣ and u ∈TΓ, (τ; µ)(s, u) = P t∈T∆τ(s, t) · µ(t, u). 3.2 Applicable classes We now consider transducer classes where recognizability is preserved under application. Table 1 presents known results for the top-down tree transducer classes described in Section 3.1. Unlike the string case, preservation of recognizability is not universal or symmetric. This is important for us, because we can only construct an application WRTG, i.e., a WRTG representing the result of application, if we can ensure that the language generated by application is in fact recognizable. Of the types under consideration, only wxLNT and wLNT preserve forward recognizability. The two classes marked as open questions and the other classes, which are superclasses of wNT, do not or are presumed not to. All subclasses of wxLT preserve backward recognizability.7 We do not consider cases where recognizability is not preserved in the remainder of this paper. If a transducer M of a class that preserves forward recognizability is applied to a WRTG G, we can call the forward ap7Note that the introduction of weights limits recognizability preservation considerably. For example, (unweighted) xT preserves backward recognizability. plication WRTG M(G)▷and if M preserves backward recognizability, we can call the backward application WRTG M(G)◁. Now that we have explained the application problem in the context of weighted tree transducers and determined the classes for which application is possible, let us consider how to build forward and backward application WRTGs. Our basic approach mimics that taken for WSTs by using an embed-compose-project strategy. As in string world, if we can embed the input in a transducer, compose with the given transducer, and project the result, we can obtain the application WRTG. Embedding a WRTG in a wLNT is a trivial operation—if the WRTG is in normal form and chain production-free,8 for every production of the form n w−→σ(n1, . . . , nk), create a rule of the form n.σ(x1, . . . , xk) w−→σ(n1.x1, . . . , nk.xk). 
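A small Python sketch of this embedding construction, using an ad hoc tuple encoding for productions and rules; the encoding and the example grammar are illustrative assumptions, not the paper's implementation.

```python
# Each normal-form production n -w-> sigma(n1,...,nk) becomes the identity
# wLNT rule n.sigma(x1,...,xk) -w-> sigma(n1.x1,...,nk.xk).
def embed_wrtg(productions):
    rules = []
    for (n, sigma, children, w) in productions:       # n -w-> sigma(children)
        k = len(children)
        lhs = (n, sigma, [f"x{i+1}" for i in range(k)])
        rhs = (sigma, [(ni, f"x{i+1}") for i, ni in enumerate(children)])
        rules.append((lhs, rhs, w))
    return rules

# Hypothetical grammar: g0 -0.5-> sigma(g0, g1), g0 -0.3-> alpha, g1 -0.2-> alpha
prods = [("g0", "sigma", ["g0", "g1"], 0.5),
         ("g0", "alpha", [], 0.3),
         ("g1", "alpha", [], 0.2)]
for r in embed_wrtg(prods):
    print(r)
```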
Range projection of a wxLNT is also trivial—for every q ∈Q and u ∈T∆(Q × X) create a production of the form q w−→u′ where u′ is formed from u by replacing all leaves of the form q.x with the leaf q, i.e., removing references to variables, and w is the sum of the weights of all rules of the form q.y −→u in R.9 Domain projection for wxLT is best explained by way of example. The left side of a rule is preserved, with variables leaves replaced by their associated states from the right side. So, the rule q1.σ(γ(x1), x2) w−→δ(q2.x2, β(α, q3.x1)) would yield the production q1 w−→σ(γ(q3), q2) in the domain projection. However, a deleting rule such as q1.σ(x1, x2) w−→γ(q2.x2) necessitates the introduction of a new nonterminal ⊥that can generate all of TΣ with weight 1. The only missing piece in our embed-composeproject strategy is composition. Algorithm 1, which is based on the declarative construction of Maletti (2006), generates the syntactic composition of a wxLT and a wLNT, a generalization of the basic composition construction of Baker (1979). It calls Algorithm 2, which determines the sequences of rules in the second transducer that match the right side of a single rule in the first transducer. Since the embedded WRTG is of type wLNT, it may be either the first or second argument provided to Algorithm 1, depending on whether the application is forward or backward. We can thus use the embed-compose-project strategy for forward application of wLNT and backward application of wxLT and wxLNT. Note that we cannot use this strategy for forward applica8Without loss of generality we assume this is so, since standard algorithms exist to remove chain productions (Kuich, 1998; ´Esik and Kuich, 2003; Mohri, 2009) and convert into normal form (Alexandrakis and Bozapalidis, 1987). 9Finitely many such productions may be formed. 1061 tion of wxLNT, even though that class preserves recognizability. Algorithm 1 COMPOSE 1: inputs 2: wxLT M1 = (Q1, Σ, ∆, R1, q10) 3: wLNT M2 = (Q2, ∆, Γ, R2, q20) 4: outputs 5: wxLT M3 = ((Q1 ×Q2), Σ, Γ, R3, (q10, q20)) such that M3 = (τM1; τM2). 6: complexity 7: O(|R1| max(|R2|size(˜u), |Q2|)), where ˜u is the largest right side tree in any rule in R1 8: Let R3 be of the form (R′ 3, π) 9: R3 ←(∅, ∅) 10: Ξ ←{(q10, q20)} {seen states} 11: Ψ ←{(q10, q20)} {pending states} 12: while Ψ ̸= ∅do 13: (q1, q2) ←any element of Ψ 14: Ψ ←Ψ \ {(q1, q2)} 15: for all (q1.y w1 −−→u) ∈R1 do 16: for all (z, w2) ∈COVER(u, M2, q2) do 17: for all (q, x) ∈yd(z) ∩((Q1 × Q2) × X) do 18: if q ̸∈Ξ then 19: Ξ ←Ξ ∪{q} 20: Ψ ←Ψ ∪{q} 21: r ←((q1, q2).y −→z) 22: R′ 3 ←R′ 3 ∪{r} 23: π(r) ←π(r) + (w1 · w2) 24: return M3 4 Application of tree transducer cascades What about the case of an input WRTG and a cascade of tree transducers? We will revisit the three strategies for accomplishing application discussed above for the string case. In order for offline composition to be a viable strategy, the transducers in the cascade must be closed under composition. Unfortunately, of the classes that preserve recognizability, only wLNT is closed under composition (G´ecseg and Steinby, 1984; Baker, 1979; Maletti et al., 2009; F¨ul¨op and Vogler, 2009). However, the general lack of composability of tree transducers does not preclude us from conducting forward application of a cascade. We revisit the bucket brigade approach, which in Section 2 appeared to be little more than a choice of composition order. As discussed previously, application of a single transducer involves an embedding, a composition, and a projection. 
The embedded WRTG is in the class wLNT, and the projection forms another WRTG. As long as every transducer in the cascade can be composed with a wLNT to its left or right, depending on the application type, application of a cascade is possible. Note that this embed-compose-project process is somewhat more burdensome than in the string case. For strings, application is obtained by a single embedding, a series of compositions, and a single projecAlgorithm 2 COVER 1: inputs 2: u ∈T∆(Q1 × X) 3: wT M2 = (Q2, ∆, Γ, R2, q20) 4: state q2 ∈Q2 5: outputs 6: set of pairs (z, w) with z ∈TΓ((Q1 × Q2) × X) formed by one or more successful runs on u by rules in R2, starting from q2, and w ∈R∞ + the sum of the weights of all such runs. 7: complexity 8: O(|R2|size(u)) 9: if u(ε) is of the form (q1, x) ∈Q1 × X then 10: zinit ←((q1, q2), x) 11: else 12: zinit ←⊥ 13: Πlast ←{(zinit, {((ε, ε), q2)}, 1)} 14: for all v ∈pos(u) such that u(v) ∈∆(k) for some k ≥0 in prefix order do 15: Πv ←∅ 16: for all (z, θ, w) ∈Πlast do 17: for all v′ ∈lv(z) such that z(v′) = ⊥do 18: for all (θ(v, v′).u(v)(x1, . . . , xk) w′ −→h)∈R2 do 19: θ′ ←θ 20: Form substitution mapping ϕ : (Q2 × X) →TΓ((Q1 × Q2 × X) ∪{⊥}). 21: for i = 1 to k do 22: for all v′′ ∈ pos(h) such that h(v′′) = (q′ 2, xi) for some q′ 2 ∈Q2 do 23: θ′(vi, v′v′′) ←q′ 2 24: if u(vi) is of the form (q1, x) ∈Q1 × X then 25: ϕ(q′ 2, xi) ←((q1, q′ 2), x) 26: else 27: ϕ(q′ 2, xi) ←⊥ 28: Πv ←Πv ∪{(z[ϕ(h)]v′, θ′, w · w′)} 29: Πlast ←Πv 30: Z ←{z | (z, θ, w) ∈Πlast} 31: return {(z, X (z,θ,w)∈Πlast w) | z ∈Z} tion, whereas application for trees is obtained by a series of (embed, compose, project) operations. 4.1 On-the-fly algorithms We next consider on-the-fly algorithms for application. Similar to the string case, an on-thefly approach is driven by a calling algorithm that periodically needs to know the productions in a WRTG with a common left side nonterminal. The embed-compose-project approach produces an entire application WRTG before any inference algorithm is run. In order to admit an on-the-fly approach we describe algorithms that only generate those productions in a WRTG that have a given left nonterminal. In this section we extend Definition 3.1 as follows: a WRTG is a 6tuple G = (N, Σ, P, n0, M, G) where N, Σ, P, and n0 are defined as in Definition 3.1, and either M = G = ∅,10 or M is a wxLNT and G is a normal form, chain production-free WRTG such that 10In which case the definition is functionally unchanged from before. 1062 type preserved? source w[x]T No See w[x]NT w[x]LT OQ (Maletti, 2009) w[x]NT No (G´ecseg and Steinby, 1984) wxLNT Yes (F¨ul¨op et al., 2010) wLNT Yes (Kuich, 1999) (a) Preservation of forward recognizability type preserved? source w[x]T No See w[x]NT w[x]LT Yes (F¨ul¨op et al., 2010) w[x]NT No (Maletti, 2009) w[x]LNT Yes See w[x]LT (b) Preservation of backward recognizability Table 1: Preservation of forward and backward recognizability for various classes of top-down tree transducers. Here and elsewhere, the following abbreviations apply: w = weighted, x = extended LHS, L = linear, N = nondeleting, OQ = open question. Square brackets include a superposition of classes. For example, w[x]T signifies both wxT and wT. 
Algorithm 3 PRODUCE 1: inputs 2: WRTG Gin = (Nin, ∆, Pin, n0, M, G) such that M = (Q, Σ, ∆, R, q0) is a wxLNT and G = (N, Σ, P, n′ 0, M ′, G′) is a WRTG in normal form with no chain productions 3: nin ∈Nin 4: outputs 5: WRTG Gout = (Nout, ∆, Pout, n0, M, G), such that Gin ⪯Gout and (nin w −→u) ∈Pout ⇔(nin w −→u) ∈M(G)▷ 6: complexity 7: O(|R||P|size(˜y)), where ˜y is the largest left side tree in any rule in R 8: if Pin contains productions of the form nin w −→u then 9: return Gin 10: Nout ←Nin 11: Pout ←Pin 12: Let nin be of the form (n, q), where n ∈N and q ∈Q. 13: for all (q.y w1 −−→u) ∈R do 14: for all (θ, w2) ∈REPLACE(y, G, n) do 15: Form substitution mapping ϕ : Q × X → T∆(N × Q) such that, for all v ∈yd(y) and q′ ∈ Q, if there exist n′ ∈N and x ∈X such that θ(v) = n′ and y(v) = x, then ϕ(q′, x) = (n′, q′). 16: p′ ←((n, q) w1·w2 −−−−→ϕ(u)) 17: for all p ∈NORM(p′, Nout) do 18: Let p be of the form n0 w −→δ(n1, . . . , nk) for δ ∈∆(k). 19: Nout ←Nout ∪{n0, . . . , nk} 20: Pout ←Pout ∪{p} 21: return CHAIN-REM(Gout) G ⪯M(G)▷. In the latter case, G is a stand-in for M(G)▷, analogous to the stand-ins for WSAs and WSTs described in Section 2. Algorithm 3, PRODUCE, takes as input a WRTG Gin = (Nin, ∆, Pin, n0, M, G) and a desired nonterminal nin and returns another WRTG, Gout that is different from Gin in that it has more productions, specifically those beginning with nin that are in M(G)▷. Algorithms using stand-ins should call PRODUCE to ensure the stand-in they are using has the desired productions beginning with the specific nonterminal. Note, then, that PRODUCE obtains the effect of forward applicaAlgorithm 4 REPLACE 1: inputs 2: y ∈TΣ(X) 3: WRTG G = (N, Σ, P, n0, M, G) in normal form, with no chain productions 4: n ∈N 5: outputs 6: set Π of pairs (θ, w) where θ is a mapping pos(y) →N and w ∈R∞ + , each pair indicating a successful run on y by productions in G, starting from n, and w is the weight of the run. 7: complexity 8: O(|P|size(y)) 9: Πlast ←{({(ε, n)}, 1)} 10: for all v ∈pos(y) such that y(v) ̸∈X in prefix order do 11: Πv ←∅ 12: for all (θ, w) ∈Πlast do 13: if M ̸= ∅and G ̸= ∅then 14: G ←PRODUCE(G, θ(v)) 15: for all (θ(v) w′ −→y(v)(n1, . . . , nk)) ∈P do 16: Πv ←Πv∪{(θ∪{(vi, ni), 1 ≤i ≤k}, w·w′)} 17: Πlast ←Πv 18: return Πlast Algorithm 5 MAKE-EXPLICIT 1: inputs 2: WRTG G = (N, Σ, P, n0, M, G) in normal form 3: outputs 4: WRTG G′ = (N ′, Σ, P ′, n0, M, G), in normal form, such that if M ̸= ∅and G ̸= ∅, LG′ = LM(G)▷, and otherwise G′ = G. 5: complexity 6: O(|P ′|) 7: G′ ←G 8: Ξ ←{n0} {seen nonterminals} 9: Ψ ←{n0} {pending nonterminals} 10: while Ψ ̸= ∅do 11: n ←any element of Ψ 12: Ψ ←Ψ \ {n} 13: if M ̸= ∅and G ̸= ∅then 14: G′ ←PRODUCE(G′, n) 15: for all (n w −→σ(n1, . . . , nk)) ∈P ′ do 16: for i = 1 to k do 17: if ni ̸∈Ξ then 18: Ξ ←Ξ ∪{ni} 19: Ψ ←Ψ ∪{ni} 20: return G′ 1063 g0 g0 w1 −−→σ(g0, g1) g0 w2 −−→α g1 w3 −−→α (a) Input WRTG G a0 a0.σ(x1, x2) w4 −−→σ(a0.x1, a1.x2) a0.σ(x1, x2) w5 −−→ψ(a2.x1, a1.x2) a0.α w6 −−→α a1.α w7 −−→α a2.α w8 −−→ρ (b) First transducer MA in the cascade b0 b0.σ(x1, x2) w9 −−→σ(b0.x1, b0.x2) b0.α w10 −−→α (c) Second transducer MB in the cascade g0a0 w1·w4 −−−−→σ(g0a0, g1a1) g0a0 w1·w5 −−−−→ψ(g0a2, g1a1) g0a0 w2·w6 −−−−→α g1a1 w3·w7 −−−−→α (d) Productions of MA(G)▷built as a consequence of building the complete MB(MA(G)▷)▷ g0a0b0 g0a0b0 w1·w4·w9 −−−−−−→σ(g0a0b0, g1a1b0) g0a0b0 w2·w6·w10 −−−−−−−→α g1a1b0 w3·w7·w10 −−−−−−−→α (e) Complete MB(MA(G)▷)▷ Figure 2: Forward application through a cascade of tree transducers using an on-the-fly method. 
tion in an on-the-fly manner.11 It makes calls to REPLACE, which is presented in Algorithm 4, as well as to a NORM algorithm that ensures normal form by replacing a single production not in normal form with several normal-form productions that can be combined together (Alexandrakis and Bozapalidis, 1987) and a CHAIN-REM algorithm that replaces a WRTG containing chain productions with an equivalent WRTG that does not (Mohri, 2009). As an example of stand-in construction, consider the invocation PRODUCE(G1, g0a0), where G1 = ({g0a0}, {σ, ψ, α, ρ}, ∅, g0a0, MA, G), G is in Figure 2a,12 and MA is in 2b. The stand-in WRTG that is output contains the first three of the four productions in Figure 2d. To demonstrate the use of on-the-fly application in a cascade, we next show the effect of PRODUCE when used with the cascade G◦MA ◦MB, where MB is in Figure 2c. Our driving algorithm in this case is Algorithm 5, MAKE11Note further that it allows forward application of class wxLNT, something the embed-compose-project approach did not allow. 12By convention the initial nonterminal and state are listed first in graphical depictions of WRTGs and WXTTs. rJJ.JJ(x1, x2, x3) −→JJ(rDT.x1, rJJ.x2, rVB.x3) rVB.VB(x1, x2, x3) −→VB(rNNPS.x1, rNN.x3, rVB.x2) t.”gentle” −→”gentle”(a) Rotation rules iVB.NN(x1, x2) −→NN(INS iNN.x1, iNN.x2) iVB.NN(x1, x2) −→NN(iNN.x1, iNN.x2) iVB.NN(x1, x2) −→NN(iNN.x1, iNN.x2, INS) (b) Insertion rules t.VB(x1, x2, x3) −→X(t.x1, t.x2, t.x3) t.”gentleman” −→j1 t.”gentleman” −→EPS t.INS −→j1 t.INS −→j2 (c) Translation rules Figure 3: Example rules from transducers used in decoding experiment. j1 and j2 are Japanese words. EXPLICIT, which simply generates the full application WRTG using calls to PRODUCE. The input to MAKE-EXPLICIT is G2 = ({g0a0b0}, {σ, α}, ∅, g0a0b0, MB, G1).13 MAKE-EXPLICIT calls PRODUCE(G2, g0a0b0). PRODUCE then seeks to cover b0.σ(x1, x2) w9 −→σ(b0.x1, b0.x2) with productions from G1, which is a stand-in for MA(G)▷. At line 14 of REPLACE, G1 is improved so that it has the appropriate productions. The productions of MA(G)▷that must be built to form the complete MB(MA(G)▷)▷are shown in Figure 2d. The complete MB(MA(G)▷)▷is shown in Figure 2e. Note that because we used this on-the-fly approach, we were able to avoid building all the productions in MA(G)▷; in particular we did not build g0a2 w2·w8 −−−−→ρ, while a bucket brigade approach would have built this production. We have also designed an analogous onthe-fly PRODUCE algorithm for backward application on linear WTT. We have now defined several on-the-fly and bucket brigade algorithms, and also discussed the possibility of embed-compose-project and offline composition strategies to application of cascades of tree transducers. Tables 2a and 2b summarize the available methods of forward and backward application of cascades for recognizabilitypreserving tree transducer classes. 5 Decoding Experiments The main purpose of this paper has been to present novel algorithms for performing application. However, it is important to demonstrate these algorithms on real data. We thus demonstrate bucket-brigade and on-the-fly backward application on a typical NLP task cast as a cascade of wLNT. We adapt the Japanese-to-English transla13Note that G2 is the initial stand-in for MB(MA(G)▷)▷, since G1 is the initial stand-in for MA(G)▷. 
1064 method WST wxLNT wLNT oc √ × √ bb √ × √ otf √ √ √ (a) Forward application method WST wxLT wLT wxLNT wLNT oc √ × × × √ bb √ √ √ √ √ otf √ √ √ √ √ (b) Backward application Table 2: Transducer types and available methods of forward and backward application of a cascade. oc = offline composition, bb = bucket brigade, otf = on the fly. tion model of Yamada and Knight (2001) by transforming it from an English-tree-to-Japanese-string model to an English-tree-to-Japanese-tree model. The Japanese trees are unlabeled, meaning they have syntactic structure but all nodes are labeled “X”. We then cast this modified model as a cascade of LNT tree transducers. Space does not permit a detailed description, but some example rules are in Figure 3. The rotation transducer R, a sample of which is in Figure 3a, has 6,453 rules, the insertion transducer I, Figure 3b, has 8,122 rules, and the translation transducer, T , Figure 3c, has 37,311 rules. We add an English syntax language model L to the cascade of transducers just described to better simulate an actual machine translation decoding task. The language model is cast as an identity WTT and thus fits naturally into the experimental framework. In our experiments we try several different language models to demonstrate varying performance of the application algorithms. The most realistic language model is a PCFG. Each rule captures the probability of a particular sequence of child labels given a parent label. This model has 7,765 rules. To demonstrate more extreme cases of the usefulness of the on-the-fly approach, we build a language model that recognizes exactly the 2,087 trees in the training corpus, each with equal weight. It has 39,455 rules. Finally, to be ultraspecific, we include a form of the “specific” language model just described, but only allow the English counterpart of the particular Japanese sentence being decoded in the language. The goal in our experiments is to apply a single tree t backward through the cascade L◦R◦I◦T ◦t and find the 1-best path in the application WRTG. We evaluate the speed of each approach: bucket brigade and on-the-fly. The algorithm we use to obtain the 1-best path is a modification of the kbest algorithm of Pauls and Klein (2009). Our algorithm finds the 1-best path in a WRTG and admits an on-the-fly approach. The results of the experiments are shown in Table 3. As can be seen, on-the-fly application is generally faster than the bucket brigade, about double the speed per sentence in the traditional LM type method time/sentence pcfg bucket 28s pcfg otf 17s exact bucket >1m exact otf 24s 1-sent bucket 2.5s 1-sent otf .06s Table 3: Timing results to obtain 1-best from application through a weighted tree transducer cascade, using on-the-fly vs. bucket brigade backward application techniques. pcfg = model recognizes any tree licensed by a pcfg built from observed data, exact = model recognizes each of 2,000+ trees with equal weight, 1-sent = model recognizes exactly one tree. experiment that uses an English PCFG language model. The results for the other two language models demonstrate more keenly the potential advantage that an on-the-fly approach provides—the simultaneous incorporation of information from all models allows application to be done more effectively than if each information source is considered in sequence. 
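The 1-best extraction used in these experiments is a modification of the k-best algorithm of Pauls and Klein (2009) that admits on-the-fly expansion; that modification is not reproduced here. Purely as a hedged sketch of the simpler, fully explicit setting (finding the single best derivation in an application WRTG that has already been built, roughly what the bucket-brigade pipeline provides), a Knuth-style best-derivation search could look as follows. The production encoding, the naive parent scan, and the toy grammar are assumptions of the sketch, and it presumes probability-like weights in [0, 1] so that extending a derivation never increases its weight.

```python
import heapq

def best_derivation(productions, start):
    """Knuth-style 1-best search over an explicit WRTG.

    `productions`: nonterminal -> list of (weight, label, child_nonterminals),
    with weights in [0, 1] multiplied along a derivation (larger = better).
    Returns (weight, tree) for `start`, trees encoded as (label, subtrees).
    """
    best = {}                                   # settled: nonterminal -> (weight, tree)
    agenda = []                                 # heap of (-weight, tiebreak, nonterm, tree)
    counter = 0

    def push_ready(nt):
        # push every production of nt whose children are all settled already
        nonlocal counter
        for weight, label, children in productions.get(nt, []):
            if all(c in best for c in children):
                w, subtrees = weight, []
                for c in children:
                    cw, ct = best[c]
                    w *= cw
                    subtrees.append(ct)
                counter += 1
                heapq.heappush(agenda, (-w, counter, nt, (label, tuple(subtrees))))

    for nt in productions:                      # nullary productions seed the agenda
        push_ready(nt)
    while agenda:
        neg_w, _, nt, tree = heapq.heappop(agenda)
        if nt in best:
            continue                            # already settled with a better derivation
        best[nt] = (-neg_w, tree)
        if nt == start:
            return best[start]
        for parent, prods in productions.items():   # naive parent scan; fine for a sketch
            if parent not in best and any(nt in ch for _, _, ch in prods):
                push_ready(parent)
    return best.get(start)

toy = {                                         # invented toy grammar
    "q": [(0.6, "sigma", ("q", "r")), (0.4, "alpha", ())],
    "r": [(1.0, "alpha", ())],
}
print(best_derivation(toy, "q"))                # (0.4, ('alpha', ()))
```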
In the “exact” case, where a very large language model that simply recognizes each of the 2,087 trees in the training corpus is used, the final application is so large that it overwhelms the resources of a 4gb MacBook Pro, while the on-the-fly approach does not suffer from this problem. The “1-sent” case is presented to demonstrate the ripple effect caused by using on-the fly. In the other two cases, a very large language model generally overwhelms the timing statistics, regardless of the method being used. But a language model that represents exactly one sentence is very small, and thus the effects of simultaneous inference are readily apparent—the time to retrieve the 1-best sentence is reduced by two orders of magnitude in this experiment. 6 Conclusion We have presented algorithms for forward and backward application of weighted tree transducer cascades, including on-the-fly variants, and demonstrated the benefit of an on-the-fly approach to application. We note that a more formal approach to application of WTTs is being developed, 1065 independent from these efforts, by F¨ul¨op et al. (2010). Acknowledgments We are grateful for extensive discussions with Andreas Maletti. We also appreciate the insights and advice of David Chiang, Steve DeNeefe, and others at ISI in the preparation of this work. Jonathan May and Kevin Knight were supported by NSF grants IIS-0428020 and IIS0904684. Heiko Vogler was supported by DFG VO 1011/5-1. References Athanasios Alexandrakis and Symeon Bozapalidis. 1987. Weighted grammars and Kleene’s theorem. Information Processing Letters, 24(1):1–4. Brenda S. Baker. 1979. Composition of top-down and bottom-up tree transductions. Information and Control, 41(2):186–213. Zolt´an ´Esik and Werner Kuich. 2003. Formal tree series. Journal of Automata, Languages and Combinatorics, 8(2):219–285. Zolt´an F¨ul¨op and Heiko Vogler. 2009. Weighted tree automata and tree transducers. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 9, pages 313–404. Springer-Verlag. Zolt´an F¨ul¨op, Andreas Maletti, and Heiko Vogler. 2010. Backward and forward application of weighted extended tree transducers. Unpublished manuscript. Ferenc G´ecseg and Magnus Steinby. 1984. Tree Automata. Akad´emiai Kiad´o, Budapest. Liang Huang and David Chiang. 2005. Better k-best parsing. In Harry Bunt, Robert Malouf, and Alon Lavie, editors, Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT), pages 53–64, Vancouver, October. Association for Computational Linguistics. Werner Kuich. 1998. Formal power series over trees. In Symeon Bozapalidis, editor, Proceedings of the 3rd International Conference on Developments in Language Theory (DLT), pages 61–101, Thessaloniki, Greece. Aristotle University of Thessaloniki. Werner Kuich. 1999. Tree transducers and formal tree series. Acta Cybernetica, 14:135–149. Andreas Maletti, Jonathan Graehl, Mark Hopkins, and Kevin Knight. 2009. The power of extended topdown tree transducers. SIAM Journal on Computing, 39(2):410–430. Andreas Maletti. 2006. Compositions of tree series transformations. Theoretical Computer Science, 366:248–271. Andreas Maletti. 2008. Compositions of extended topdown tree transducers. Information and Computation, 206(9–10):1187–1196. Andreas Maletti. 2009. Personal Communication. Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 2000. The design principles of a weighted finite-state transducer library. Theoretical Computer Science, 231:17–32. Mehryar Mohri. 1997. 
Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–312. Mehryar Mohri. 2009. Weighted automata algorithms. In Manfred Droste, Werner Kuich, and Heiko Vogler, editors, Handbook of Weighted Automata, chapter 6, pages 213–254. Springer-Verlag. Adam Pauls and Dan Klein. 2009. K-best A* parsing. In Keh-Yih Su, Jian Su, Janyce Wiebe, and Haizhou Li, editors, Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 958–966, Suntec, Singapore, August. Association for Computational Linguistics. Fernando Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 15, pages 431–453. MIT Press, Cambridge, MA. William A. Woods. 1980. Cascaded ATN grammars. American Journal of Computational Linguistics, 6(1):1–12. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 523–530, Toulouse, France, July. Association for Computational Linguistics. 1066
2010
108
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1067–1076, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Tree Transducer Model for Synchronous Tree-Adjoining Grammars Andreas Maletti Universitat Rovira i Virgili Avinguda de Catalunya 25, 43002 Tarragona, Spain. [email protected] Abstract A characterization of the expressive power of synchronous tree-adjoining grammars (STAGs) in terms of tree transducers (or equivalently, synchronous tree substitution grammars) is developed. Essentially, a STAG corresponds to an extended tree transducer that uses explicit substitution in both the input and output. This characterization allows the easy integration of STAG into toolkits for extended tree transducers. Moreover, the applicability of the characterization to several representational and algorithmic problems is demonstrated. 1 Introduction Machine translation has seen a multitude of formal translation models. Here we focus on syntaxbased (or tree-based) models. One of the oldest models is the synchronous context-free grammar (Aho and Ullman, 1972). It is clearly too weak as a syntax-based model, but found use in the string-based setting. Top-down tree transducers (Rounds, 1970; Thatcher, 1970) have been heavily investigated in the formal language community (G´ecseg and Steinby, 1984; G´ecseg and Steinby, 1997), but as argued by Shieber (2004) they are still too weak for syntax-based machine translation. Instead Shieber (2004) proposes synchronous tree substitution grammars (STSGs) and develops an equivalent bimorphism (Arnold and Dauchet, 1982) characterization. This characterization eventually led to the rediscovery of extended tree transducers (Graehl and Knight, 2004; Knight and Graehl, 2005; Graehl et al., 2008), which are essentially as powerful as STSG. They had been studied already by Arnold and Dauchet (1982) in the form of bimorphisms, but received little attention until rediscovered. Shieber (2007) claims that even STSGs might be too simple to capture naturally occuring translation phenomena. Instead Shieber (2007) suggests a yet more powerful mechanism, synchronous tree-adjoining grammars (STAGs) as introduced by Shieber and Schabes (1990), that can capture certain (mildly) context-sensitive features of natural language. In the tradition of Shieber (2004), a characterization of the power of STAGs in terms of bimorphims was developed by Shieber (2006). The bimorphisms used are rather unconventional because they consist of a regular tree language and two embedded tree transducers (instead of two tree homomorphisms). Such embedded tree transducers (Shieber, 2006) are particular macro tree transducers (Courcelle and Franchi-Zannettacci, 1982; Engelfriet and Vogler, 1985). In this contribution, we try to unify the picture even further. We will develop a tree transducer model that can simulate STAGs. It turns out that the adjunction operation of an STAG can be explained easily by explicit substitution. In this sense, the slogan that an STAG is an STSG with adjunction, which refers to the syntax, also translates to the semantics. We prove that any tree transformation computed by an STAG can also be computed by an STSG using explicit substitution. Thus, a simple evaluation procedure that performs the explicit substitution is all that is needed to simulate an STAG in a toolkit for STSGs or extended tree transducers like TIBURON by May and Knight (2006). 
We show that some standard algorithms on STAG can actually be run on the constructed STSG, which often is simpler and better understood. Further, it might be easier to develop new algorithms with the alternative characterization, which we demonstrate with a product construction for input restriction in the spirit of Nederhof (2009). Finally, we also present a complete tree transducer model that is as powerful as STAG, which is an extension of the embedded tree transducers of Shieber (2006). 1067 2 Notation We quickly recall some central notions about trees, tree languages, and tree transformations. For a more in-depth discussion we refer to G´ecseg and Steinby (1984) and G´ecseg and Steinby (1997). A finite set Σ of labels is an alphabet. The set of all strings over that alphabet is Σ∗where ε denotes the empty string. To simplify the presentation, we assume an infinite set X = {x1, x2, . . . } of variables. Those variables are syntactic and represent only themselves. In particular, they are all different. For each k ≥0, we let Xk = {x1, . . . , xk}. We can also form trees over the alphabet Σ. To allow some more flexibility, we will also allow leaves from a special set V . Formally, a Σ-tree over V is either: • a leaf labeled with an element of v ∈Σ ∪V , or • a node that is labeled with an element of Σ with k ≥1 children such that each child is a Σ-tree over V itself.1 The set of all Σ-trees over V is denoted by TΣ(V ). We just write TΣ for TΣ(∅). The trees in Figure 1 are, for example, elements of T∆(Y ) where ∆= {S, NP, VP, V, DT, N} Y = {saw, the} . We often present trees as terms. A leaf labeled v is simply written as v. The tree with a root node labeled σ is written σ(t1, . . . , tk) where t1, . . . , tk are the term representations of its k children. A tree language is any subset of TΣ(V ) for some alphabet Σ and set V . Given another alphabet ∆and a set Y , a tree transformation is a relation τ ⊆TΣ(V ) × T∆(Y ). In many of our examples we have V = ∅= Y . Occasionally, we also speak about the translation of a tree transformation τ ⊆TΣ × T∆. The translation of τ is the relation {(yd(t), yd(u)) | (t, u) ∈τ} where yd(t), the yield of t, is the sequence of leaf labels in a left-to-right tree traversal of t. The yield of the third tree in Figure 1 is “the N saw the N”. Note that the translation is a relation τ ′ ⊆Σ∗× ∆∗. 3 Substitution A standard operation on (labeled) trees is substitution, which replaces leaves with a specified label in one tree by another tree. We write t[u]A for (the 1Note that we do not require the symbols to have a fixed rank; i.e., a symbol does not determine its number of children. S NP VP V saw NP NP DT the N S NP DT the N VP V saw NP DT the N t u t[u]NP Figure 1: A substitution. result of) the substitution that replaces all leaves labeled A in the tree t by the tree u. If t ∈TΣ(V ) and u ∈T∆(Y ), then t[u]A ∈TΣ∪∆(V ∪Y ). We often use the variables of X = {x1, x2, . . . } as substitution points and write t[u1, . . . , uk] instead of (· · · (t[u1]x1) . . . )[uk]xk. An example substitution is shown in Figure 1. The figure also illustrates a common problem with substitution. Occasionally, it is not desirable to replace all leaves with a certain label by the same tree. In the depicted example, we might want to replace one ‘NP’ by a different tree, which cannot be achieved with substitution. Clearly, this problem is avoided if the source tree t contains only one leaf labeled A. 
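To make the notation concrete, here is a small Python sketch, not taken from the paper, of Σ-trees as (label, children) pairs together with the substitution t[u]_A and the yield. The encoding and helper names are assumptions of the sketch; leaves are simply nodes with an empty child tuple, and the final line reproduces the yield "the N saw the N" reported for the third tree of Figure 1.

```python
def leaf(label):
    return (label, ())

def node(label, *children):
    return (label, tuple(children))

def substitute(t, u, a):
    """t[u]_A: replace every leaf of t labelled `a` by the tree u."""
    label, children = t
    if not children:
        return u if label == a else t
    return (label, tuple(substitute(c, u, a) for c in children))

def yield_of(t):
    """yd(t): the left-to-right sequence of leaf labels."""
    label, children = t
    if not children:
        return (label,)
    return tuple(w for c in children for w in yield_of(c))

# The substitution t[u]_NP of Figure 1
t = node("S", leaf("NP"), node("VP", node("V", leaf("saw")), leaf("NP")))
u = node("NP", node("DT", leaf("the")), leaf("N"))
print(yield_of(substitute(t, u, "NP")))   # ('the', 'N', 'saw', 'the', 'N')
```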
We call a tree Aproper if it contains exactly one leaf with label A.2 The subset CΣ(Xk) ⊆TΣ(Xk) contains exactly those trees of TΣ(Xk) that are xi-proper for every 1 ≤i ≤k. For example, the tree t of Figure 1 is ‘saw’-proper, and the tree u of Figure 1 is ‘the’and ‘N’-proper. In this contribution, we will also use substitution as an explicit operator. The tree t[u]NP in Figure 1 only shows the result of the substitution. It cannot be infered from the tree alone, how it was obtained (if we do not know t and u).3 To make substitution explicit, we use the special binary symbols ·[·]A where A is a label. Those symbols will always be used with exactly two children (i.e., as binary symbols). Since this property can easily be checked by all considered devices, we ignore trees that use those symbols in a non-binary manner. For every set Σ of labels, we let Σ = Σ ∪{·[·]A | A ∈Σ} be the extended set of labels containing also the substition symbols. The substitution of Figure 1 can then be ex2A-proper trees are sometimes also called A-context in the literature. 3This remains true even if we know that the participating trees t and u are A-proper and the substitution t[u]A replacing leaves labeled A was used. This is due to the fact that, in general, the root label of u need not coincide with A. 1068 pressed as the tree ·[·]NP(t, u). To obtain t[u]NP (the right-most tree in Figure 1), we have to evaluate ·[·]NP(t, u). However, we want to replace only one leaf at a time. Consequently, we restrict the evaluation of ·[·]A(t, u) such that it applies only to trees t whose evaluation is A-proper. To enforce this restriction, we introduce an error signal ⊥, which we assume not to occur in any set of labels. Let Σ be the set of labels. Then we define the function ·E : TΣ →TΣ ∪{⊥} by4 σ(t1, . . . , tk)E = σ(tE 1 , . . . , tE k ) ·[·]A(t, u)E = ( tE[uE]A if tE is A-proper ⊥ otherwise for every k ≥0, σ ∈Σ, and t, t1, . . . , tk, u ∈TΣ.5 We generally discard all trees that contain the error signal ⊥. Since the devices that we will study later can also check the required A-properness using their state behavior, we generally do not discuss trees with error symbols explicitly. 4 Extended tree transducer An extended tree transducer is a theoretical model that computes a tree transformation. Such transducers have been studied first by Arnold and Dauchet (1982) in a purely theoretic setting, but were later applied in, for example, machine translation (Knight and Graehl, 2005; Knight, 2007; Graehl et al., 2008; Graehl et al., 2009). Their popularity in machine translation is due to Shieber (2004), in which it is shown that extended tree transducers are essentially (up to a relabeling) as expressive as synchronous tree substitution grammars (STSG). We refer to Chiang (2006) for an introduction to synchronous devices. Let us recall the formal definition. An extended tree transducer (for short: XTT)6 is a system M = (Q, Σ, ∆, I, R) where • Q is a finite set of states, • Σ and ∆are alphabets of input and output symbols, respectively, • I ⊆Q is a set of initial states, and • R is a finite set of rules of the form (q, l) →(q1 · · · qk, r) 4Formally, we should introduce an evaluation function for each alphabet Σ, but we assume that the alphabet can be infered. 5This evaluation is a special case of a yield-mapping (Engelfriet and Vogler, 1985). 6Using the notions of Graehl et al. (2009) our extended tree transducers are linear, nondeleting extended top-down tree transducers. 
qS S x1 VP x2 x3 → S’ qV x2 qNP x1 qNP x3 qNP NP DT the N boy → NP N atefl Figure 2: Example rules taken from Graehl et al. (2009). The term representation of the first rule is (qS, S(x1, VP(x2, x3))) →(w, S′(x2, x1, x3)) where w = qNPqVqNP. where k ≥0, l ∈CΣ(Xk), and r ∈C∆(Xk). Recall that any tree of CΣ(Xk) contains each variable of Xk = {x1, . . . , xk} exactly once. In graphical representations of a rule (q, l) →(q1 · · · qk, r) ∈R , we usually • add the state q as root node of the left-hand side7, and • add the states q1, . . . , qk on top of the nodes labeled x1, . . . , xk, respectively, in the righthand side of the rule. Some example rules are displayed in Figure 2. The rules are applied in the expected way (as in a term-rewrite system). The only additional feature are the states of Q, which can be used to control the derivation. A sentential form is a tree that contains exclusively output symbols towards the root and remaining parts of the input headed by a state as leaves. A derivation step starting from ξ then consists in • selecting a leaf of ξ with remaining input symbols, • matching the state q and the left-hand side l of a rule (q, l) →(q1 · · · qk, r) ∈R to the state and input tree stored in the leaf, thus matching input subtrees t1, . . . , tk to the variables x1, . . . , xk, • replacing all the variables x1, . . . , xk in the right-hand side r by the matched input subtrees q1(t1), . . . , qk(tk) headed by the corresponding state, respectively, and • replacing the selected leaf in ξ by the tree constructed in the previous item. The process is illustrated in Figure 3. Formally, a sentential form of the XTT M is a tree of SF = T∆(Q(TΣ)) where Q(TΣ) = {q(t) | q ∈Q, t ∈TΣ} . 7States are thus also special symbols that are exclusively used as unary symbols. 1069 C qS S t1 VP t2 t3 ⇒ C S’ qV t2 qNP t1 qNP t3 Figure 3: Illustration of a derivation step of an XTT using the left rule of Figure 2. Given ξ, ζ ∈SF, we write ξ ⇒ζ if there exist C ∈C∆(X1), t1, . . . , tk ∈TΣ, and a rule (q, l) →(q1 · · · qk, r) ∈R such that • ξ = C[q(l[t1, . . . , tk])] and • ζ = C[r[q1(t1), . . . , qk(tk)]]. The tree transformation computed by M is the relation τM = {(t, u) ∈TΣ × T∆| ∃q ∈I : q(t) ⇒∗u} where ⇒∗is the reflexive, transitive closure of ⇒. In other words, the tree t can be transformed into u if there exists an initial state q such that we can derive u from q(t) in several derivation steps. We refer to Arnold and Dauchet (1982), Graehl et al. (2008), and Graehl et al. (2009) for a more detailed exposition to XTT. 5 Synchronous tree-adjoining grammar XTT are a simple, natural model for tree transformations, however they are not suitably expressive for all applications in machine translation (Shieber, 2007). In particular, all tree transformations of XTT have a certain locality condition, which yields that the input tree and its corresponding translation cannot be separated by an unbounded distance. To overcome this problem and certain dependency problems, Shieber and Schabes (1990) and Shieber (2007) suggest a stronger model called synchronous tree-adjoining grammar (STAG), which in addition to the substitution operation of STSG (Chiang, 2005) also has an adjoining operation. Let us recall the model in some detail. A treeadjoining grammar essentially is a regular tree grammar (G´ecseg and Steinby, 1984; G´ecseg and NP DT les N bonbons N N⋆ ADJ rouges NP DT les N N bonbons ADJ rouges derived tree auxiliary tree adjunction Figure 4: Illustration of an adjunction taken from Nesson et al. (2008). 
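To illustrate the derivation step just described (cf. Figure 3), the sketch below performs one such step in Python: the left-hand side of the first rule of Figure 2 is matched against an input tree, the variables are bound to input subtrees, and the right-hand side is instantiated with those subtrees headed by the corresponding states. The pattern encoding with ("x", i) variable leaves, the ("state", q) wrapper, and the placeholder subtrees are assumptions of the sketch, not notation from the paper.

```python
def match(pattern, tree, bindings):
    """Match a left-hand side pattern against an input tree.
    Variables are encoded as ("x", i) leaves; on success `bindings`
    maps each variable index to the matched input subtree."""
    if pattern[0] == "x":
        bindings[pattern[1]] = tree
        return bindings
    if pattern[0] != tree[0] or len(pattern[1]) != len(tree[1]):
        return None
    for p_child, t_child in zip(pattern[1], tree[1]):
        if match(p_child, t_child, bindings) is None:
            return None
    return bindings

def instantiate(rhs, states, bindings):
    """Build the right-hand side: each variable x_i becomes the matched
    input subtree headed by its state q_i (a node labelled ("state", q_i))."""
    if rhs[0] == "x":
        i = rhs[1]
        return (("state", states[i]), (bindings[i],))
    return (rhs[0], tuple(instantiate(c, states, bindings) for c in rhs[1]))

def derivation_step(rule, state, tree):
    q, lhs, states, rhs = rule
    if q != state:
        return None
    bindings = match(lhs, tree, {})
    return None if bindings is None else instantiate(rhs, states, bindings)

x = lambda i: ("x", i)
# The left rule of Figure 2: (qS, S(x1, VP(x2, x3))) -> (qNP qV qNP, S'(x2, x1, x3))
rule = ("qS",
        ("S", (x(1), ("VP", (x(2), x(3))))),
        {1: "qNP", 2: "qV", 3: "qNP"},
        ("S'", (x(2), x(1), x(3))))

t1, t2, t3 = ("NP_sub1", ()), ("V_sub", ()), ("NP_sub2", ())   # placeholder input subtrees
print(derivation_step(rule, "qS", ("S", (t1, ("VP", (t2, t3))))))
# -> S'( qV(t2), qNP(t1), qNP(t3) ), matching the right side of Figure 3
```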
NP DT les ·[·]N⋆ N N⋆ ADJ rouges N bonbons Figure 5: Illustration of the adjunction of Figure 4 using explicit substitution. Steinby, 1997) enhanced with an adjunction operation. Roughly speaking, an adjunction replaces a node (not necessarily a leaf) by an auxiliary tree, which has exactly one distinguished foot node. The original children of the replaced node will become the children of the foot node after adjunction. Traditionally, the root label and the label of the foot node coincide in an auxiliary tree aside from a star index that marks the foot node. For example, if the root node of an auxiliary tree is labeled A, then the foot node is traditionally labeled A⋆. The star index is not reproduced once adjoined. Formally, the adjunction of the auxiliary tree u with root label A (and foot node label A⋆) into a tree t = C[A(t1, . . . , tk)] with C ∈CΣ(X1) and t1, . . . , tk ∈TΣ is C[u[A(t1, . . . , tk)]A⋆] . Adjunction is illustrated in Figure 4. We note that adjunction can easily be expressed using explicit substitution. Essentially, only an additional node with the adjoined subtree is added. The result of the adjunction of Figure 4 using explicit substitution is displayed in Figure 5. To simplify the development, we will make some assumptions on all tree-adjoining grammars (and synchronous tree-adjoining grammars). A tree-adjoining grammar (TAG) is a finite set of initial trees and a finite set of auxiliary trees. Our 1070 S T c S a S S⋆ a S b S S⋆ b S S⋆ initial tree auxiliary tree auxiliary tree auxiliary tree Figure 6: A TAG for the copy string language {wcw | w ∈{a, b}∗} taken from Shieber (2006). TAG do not use substitution, but only adjunction. A derivation is a chain of trees that starts with an initial tree and each derived tree is obtained from the previous one in the chain by adjunction of an auxiliary tree. As in Shieber (2006) we assume that all adjunctions are mandatory; i.e., if an auxiliary tree can be adjoined, then we need to make an adjunction. Thus, a derivation starting from an initial tree to a derived tree is complete if no adjunction is possible in the derived tree. Moreover, we assume that to each node only one adjunction can be applied. This is easily achieved by labeling the root of each adjoined auxiliary tree by a special marker. Traditionally, the root label A of an auxiliary tree is replaced by A∅once adjoined. Since we assume that there are no auxiliary trees with such a root label, no further adjunction is possible at such nodes. Another effect of this restriction is that the number of operable nodes (i.e., the nodes to which an adjunction must still be applied) is known at any given time.8 A full TAG with our restrictions is shown in Figure 6. Intuitively, a synchronous tree-adjoining grammar (STAG) is essentially a pair of TAGs. The synchronization is achieved by pairing the initial trees and the auxiliary trees. In addition, for each such pair (t, u) of trees, there exists a bijection between the operable nodes of t and u. Such nodes in bijection are linked and the links are preserved in derivations, in which we now use pairs of trees as sentential forms. In graphical representations we often indicate this bijection with integers; i.e., two nodes marked with the same integer are linked. A pair of auxiliary trees is then adjoined to linked nodes (one in each tree of the sentential form) in the expected manner. We will avoid a formal definition here, but rather present an example STAG and a derivation with it in Figures 7 and 8. 
For a 8Without the given restrictions, this number cannot be determined easily because no or several adjunctions can take place at a certain node. S1 T c — S1 T c S S1 a S⋆ a — S a S1 S⋆ a S S⋆ — S S⋆ S S1 b S⋆ b — S b S1 S⋆ b Figure 7: STAG that computes the translation {(wcwR, wcw) | w ∈{a, b}∗} where wR is the reverse of w. STAG G we write τG for the tree transformation computed by G. 6 Main result In this section, we will present our main result. Essentially, it states that a STAG is as powerful as a STSG using explicit substitution. Thus, for every tree transformation computed by a STAG, there is an extended tree transducer that computes a representation of the tree transformation using explicit substitution. The converse is also true. For every extended tree transducer M that uses explicit substitution, we can construct a STAG that computes the tree transformation represented by τM up to a relabeling (a mapping that consistently replaces node labels throughout the tree). The additional relabeling is required because STAGs do not have states. If we replace the extended tree transducer by a STSG, then the result holds even without the relabeling. Theorem 1 For every STAG G, there exists an extended tree transducer M such that τG = {(tE, uE) | (t, u) ∈τM} . Conversely, for every extended tree transducer M, there exists a STAG G such that the above relation holds up to a relabeling. 6.1 Proof sketch The following proof sketch is intended for readers that are familiar with the literature on embedded tree transducers, macro tree transducers, and bimorphisms. It can safely be skipped because we will illustrate the relevant construction on our example after the proof sketch, which contains the outline for the correctness. 1071 S1 T c — S1 T c S S1 a S T c a — S a S1 S T c a S S S1 b S a S T c a b — S a S b S1 S S T c a b S S S S1 a S b S a S T c a b a — S a S b S a S1 S S S T c a b a Figure 8: An incomplete derivation using the STAG of Figure 7. Let τ ⊆TΣ × T∆be a tree transformation computed by a STAG. By Shieber (2006) there exists a regular tree language L ⊆TΓ and two functions e1 : TΓ →TΣ and e2 : TΓ →T∆such that τ = {(e1(t), e2(t)) | t ∈L}. Moreover, e1 and e2 can be computed by embedded tree transducers (Shieber, 2006), which are particular 1-state, deterministic, total, 1-parameter, linear, and nondeleting macro tree transducers (Courcelle and Franchi-Zannettacci, 1982; Engelfriet and Vogler, 1985). In fact, the converse is also true up to a relabeling, which is also shown in Shieber (2006). The outer part of Figure 9 illustrates these relations. Finally, we remark that all involved constructions are effective. Using a result of Engelfriet and Vogler (1985), each embedded tree transducer can be decomposed into a top-down tree transducer (G´ecseg and Steinby, 1984; G´ecseg and Steinby, 1997) and a yield-mapping. In our particular case, the top-down tree transducers are linear and nondeleting homomorphisms h1 and h2. Linearity and nondeletion are inherited from the corresponding properties of the macro tree transducer. The properties ‘1-state’, ‘deterministic’, and ‘total’ of the macro tree transducer ensure that the obtained topdown tree transducer is also 1-state, deterministic, and total, which means that it is a homomorphism. Finally, the 1-parameter property yields that the used substitution symbols are binary (as our substitution symbols ·[·]A). Consequently, the yield-mapping actually coincides with our evaluation. 
Again, this decomposition actually is a characterization of embedded tree transducers. Now the set {(h1(t), h2(t)) | t ∈L} can be computed h1 h2 ·E ·E τM τ e1 e2 Figure 9: Illustration of the proof sketch. by an extended tree transducer M due to results of Shieber (2004) and Maletti (2008). More precisely, every extended tree transducer computes such a set, so that also this step is a characterization. Thus we obtain that τ is an evaluation of a tree transformation computed by an extended tree transducer, and moreover, for each extended tree transducer, the evaluation can be computed (up to a relabeling) by a STAG. The overall proof structure is illustrated in Figure 9. 6.2 Example Let us illustrate one direction (the construction of the extended tree transducer) on our example STAG of Figure 7. Essentially, we just prepare all operable nodes by inserting an explicit substitution just on top of them. The first subtree of that substitution will either be a variable (in the lefthand side of a rule) or a variable headed by a state (in the right-hand side of a rule). The numbers of the variables encode the links of the STAG. Two example rules obtained from the STAG of Figure 7 are presented in Figure 10. Using all XTT rules constructed for the STAG of Figure 7, we present 1072 qS ·[·]S⋆ x1 S T c → ·[·]S⋆ qS x1 S T c qS S ·[·]S⋆ x1 S a S⋆ a → S a ·[·]S⋆ qS x1 S S⋆ a Figure 10: Two constructed XTT rules. a complete derivation of the XTT in Figure 11 that (up to the final step) matches the derivation of the STAG in Figure 8. The matching is achieved by the evaluation ·E introduced in Section 3 (i.e., applying the evaluation to the derived trees of Figure 11 yields the corresponding derived trees of Figure 8. 7 Applications In this section, we will discuss a few applications of our main result. Those range from representational issues to algorithmic problems. Finally, we also present a tree transducer model that includes explicit substitution. Such a model might help to address algorithmic problems because derivation and evaluation are intertwined in the model and not separate as in our main result. 7.1 Toolkits Obviously, our characterization can be applied in a toolkit for extended tree transducers (or STSG) such as TIBURON by May and Knight (2006) to simulate STAG. The existing infrastructure (inputoutput, derivation mechanism, etc) for extended tree transducers can be re-used to run XTTs encoding STAGs. The only additional overhead is the implementation of the evaluation, which is a straightforward recursive function (as defined in Section 3). After that any STAG can be simulated in the existing framework, which allows experiments with STAG and an evaluation of their expressive power without the need to develop a new toolkit. It should be remarked that some essential algorithms that are very sensitive to the input and output behavior (such as parsing) cannot be simulated by the corresponding algorithms for STSG. It remains an open problem whether the close relationship can also be exploited for such algorithms. 7.2 Algorithms We already mentioned in the previous section that some algorithms do not easily translate from STAG to STSG (or vice versa) with the help of our characterization. However, many standard algorithms for STAG can easily be derived from the corresponding algorithms for STSG. The simplest example is the union of two STAG. Instead of taking the union of two STAG using the classical construction, we can take the union of the corresponding XTT (or STSG) that simulate the STAGs. 
Their union will simulate the union of the STAGs. Such properties are especially valuable when we simulate STAG in toolkits for XTT. A second standard algorithm that easily translates is the algorithm computing the n-best derivations (Huang and Chiang, 2005). Clearly, the nbest derivation algorithm does not consider a particular input or output tree. Since the derivations of the XTT match the derivations of the STAG (in the former the input and output are encoded using explicit substitution), the n-best derivations will coincide. If we are additionally interested in the input and output trees for those n-best derivations, then we can simply evaluate the coded input and output trees returned by n-best derivation algorithm. Finally, let us consider an algorithm that can be obtained for STAG by developing it for XTT using explicit substitution. We will develop a BARHILLEL (Bar-Hillel et al., 1964) construction for STAG. Thus, given a STAG G and a recognizable tree language L, we want to construct a STAG G′ such that τG′ = {(t, u) | (t, u) ∈τG, t ∈L} . In other words, we take the tree transformation τG but additionally require the input tree to be in L. Consequently, this operation is also called input restriction. Since STAG are symmetric, the corresponding output restriction can be obtained in the same manner. Note that a classical BAR-HILLEL construction restricting to a regular set of yields can be obtained easily as a particular input restriction. As in Nederhof (2009) a change of model is beneficial for the development of such an algorithm, so we will develop an input restriction for XTT using explicit substitution. Let M = (Q, Σ, ∆, I, R) be an XTT (using explicit substitution) and G = (N, Σ, I′, P) be a tree substitution grammar (regular tree grammar) in normal form that recognizes L (i.e., L(G) = L). Let S = {A ∈Σ | ·[·]A ∈Σ}. A context is a mapping c: S →N, which remembers a nonterminal of G for each substitution point. Given a rule 1073 qS ·[·]S⋆ S ·[·]S⋆ S ·[·]S⋆ S ·[·]S⋆ S S⋆ S a S⋆ a S b S⋆ b S a S⋆ a S T c ⇒ ·[·]S⋆ qS S ·[·]S⋆ S ·[·]S⋆ S ·[·]S⋆ S S⋆ S a S⋆ a S b S⋆ b S a S⋆ a S T c ⇒ ·[·]S⋆ S a ·[·]S⋆ qS S ·[·]S⋆ S ·[·]S⋆ S S⋆ S a S⋆ a S b S⋆ b S S⋆ a S T c ⇒ ·[·]S⋆ S a ·[·]S⋆ S b ·[·]S⋆ qS S ·[·]S⋆ S S⋆ S a S⋆ a S S⋆ b S S⋆ a S T c ⇒ ·[·]S⋆ S a ·[·]S⋆ S b ·[·]S⋆ S a ·[·]S⋆ qS S S⋆ S S⋆ a S S⋆ b S S⋆ a S T c ⇒ ·[·]S⋆ S a ·[·]S⋆ S b ·[·]S⋆ S a ·[·]S⋆ S S⋆ S S⋆ a S S⋆ b S S⋆ a S T c Figure 11: Complete derivation using the constructed XTT rules. (q, l) →(q1 · · · qk, r) ∈R, a nonterminal p ∈N, and a context c ∈S, we construct new rules corresponding to successful parses of l subject to the following restrictions: • If l = ·[·]A(l1, l2) for some A ∈Σ, then select p′ ∈N, parse l1 in p with context c′ where c′ = c[A 7→p′]9, and parse l2 in p′ with context c. • If l = A⋆with A ∈Σ, then p = c(A). • Finally, if l = σ(l1, . . . , lk) for some σ ∈Σ, then select p →σ(p1, . . . , pk) ∈P is a production of G and we parse li with nonterminal pi and context c for each 1 ≤i ≤k. 7.3 A complete tree transducer model So far, we have specified a tree transducer model that requires some additional parsing before it can be applied. This parsing step has to annotate (and correspondingly restructure) the input tree by the adjunction points. This is best illustrated by the left tree in the last pair of trees in Figure 8. To run our constructed XTT on the trivially completed version of this input tree, it has to be transformed into the first tree of Figure 11, where the adjunctions are now visible. 
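Returning briefly to the input-restriction construction of Section 7.2, the three parsing cases can be sketched as a recursive enumeration of the successful parses of a left-hand side. This is only an illustration: the tree and grammar encodings are assumptions of the sketch, weights are omitted, and the treatment of variables (recording the nonterminal each x_i is parsed in, from which the new rules and states would be assembled) is a guess, since that part of the construction is not spelled out above.

```python
def parses(l, p, context, grammar, nonterminals):
    """Enumerate successful parses of a left-hand side l from grammar
    nonterminal p under `context` (substitution label A -> nonterminal).
    Each parse is reported as a dict: variable index -> nonterminal it was
    parsed in.  Node encodings: ("VAR", i), ("STAR", A) for the foot A*,
    ("SUB", A, (l1, l2)) for the explicit substitution symbol, and
    (sigma, children) otherwise."""
    if l[0] == "VAR":                       # a variable: record the nonterminal
        return [{l[1]: p}]
    if l[0] == "STAR":                      # the foot A*: require p = c(A)
        return [{}] if context.get(l[1]) == p else []
    if l[0] == "SUB":                       # ·[·]_A(l1, l2): select p' in N
        a, (l1, l2) = l[1], l[2]
        results = []
        for p_prime in nonterminals:
            for m1 in parses(l1, p, {**context, a: p_prime}, grammar, nonterminals):
                for m2 in parses(l2, p_prime, context, grammar, nonterminals):
                    results.append({**m1, **m2})
        return results
    sigma, kids = l                         # ordinary symbol: pick p -> sigma(p1..pk)
    results = []
    for prod_sigma, child_nts in grammar.get(p, []):
        if prod_sigma != sigma or len(child_nts) != len(kids):
            continue
        partial = [{}]
        for li, pi in zip(kids, child_nts):
            partial = [{**m, **mi} for m in partial
                       for mi in parses(li, pi, context, grammar, nonterminals)]
        results.extend(partial)
    return results

# toy grammar n0 -> sigma(n1), n1 -> alpha; left-hand side ·[·]_A(sigma(A*), x1)
grammar = {"n0": [("sigma", ("n1",))], "n1": [("alpha", ())]}
lhs = ("SUB", "A", (("sigma", (("STAR", "A"),)), ("VAR", 1)))
print(parses(lhs, "n0", {}, grammar, ["n0", "n1"]))     # [{1: 'n1'}]
```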
In fact, a second un-parsing step is required to evaluate the output. To avoid the first additional parsing step, we will now modify our tree transducer model such that this parsing step is part of its semantics. This shows that it can also be done locally (instead of globally parsing the whole input tree). In addition, we arrive at a tree transducer model that exactly (up to a relabeling) matches the power of STAG, which can be useful for certain constructions. It is known that an embedded tree transducer (Shieber, 2006) can handle the mentioned un-parsing step. An extended embedded tree transducer with 9c′ is the same as c except that it maps A to p′. substitution M = (Q, Σ, ∆, I, R) is simply an embedded tree transducer with extended left-hand sides (i.e., any number of input symbols is allowed in the left-hand side) that uses the special symbols ·[·]A in the input. Formally, let • Q = Q0 ∪Q1 be finite where Q0 and Q1 are the set of states that do not and do have a context parameter, respectively, • Σ and ∆be ranked alphabets such that if ·[·]A ∈Σ, then A, A⋆∈Σ, • Q⟨U⟩be such that Q⟨U⟩= {q⟨u⟩| q ∈Q1, u ∈U} ∪ ∪{q⟨⟩| q ∈Q0} , • I ⊆Q⟨T∆⟩, and • R is a finite set of rules l →r such that there exists k ≥0 with l ∈Q⟨{y}⟩(CΣ(Xk)) and r ∈Rhsk where Rhsk := δ(Rhsk, . . . , Rhsk) | | q1⟨Rhsk⟩(x) | q0⟨⟩(x) with δ ∈∆k, q1 ∈Q1, q0 ∈Q0, and x ∈Xk. Moreover, each variable of l (including y) is supposed to occur exactly once in r. We refer to Shieber (2006) for a full description of embedded tree transducers. As seen from the syntax, we write the context parameter y of a state q ∈Q1 as q⟨y⟩. If q ∈Q0, then we also write q⟨⟩or q⟨ε⟩. In each right-hand side, such a context parameter u can contain output symbols and further calls to input subtrees. The semantics of extended embedded tree transducers with substitution deviates slightly from the embedded tree transducer semantics. Roughly speaking, not its rules as such, but rather their evaluation are now applied in a term-rewrite fashion. Let SF′ := δ(SF′, . . . , SF′) | | q1⟨SF′⟩(t) | q0⟨⟩(t) 1074 qS⟨⟩ ·[·]S⋆ x1 S T c → q⟨·⟩ S T c x1 qS⟨⟩ S S T c ⇒ q⟨·⟩ S T c S S⋆ Figure 12: Rule and derivation step using the rule in an extended embedded tree transducer with substitution where the context parameter (if present) is displayed as first child. where δ ∈∆k, q1 ∈Q1, q0 ∈Q0, and t ∈TΣ. Given ξ, ζ ∈SF′, we write ξ ⇒ζ if there exist C ∈C∆(X1), t1, . . . , tk ∈TΣ, u ∈T∆∪{ε}, and a rule q⟨u⟩(l) →r ∈R10 with l ∈CΣ(Xk) such that • ξ = C[q⟨u⟩(l[t1, . . . , tk]E)] and • ζ = C[(r[t1, . . . , tk])[u]y]. Note that the essential difference to the “standard” semantics of embedded tree transducers is the evaluation in the first item. The tree transformation computed by M is defined as usual. We illustrate a derivation step in Figure 12, where the match ·[·]S⋆(x1, S(T(c)))E = S(S(T(c))) is successful for x1 = S(S⋆). Theorem 2 Every STAG can be simulated by an extended embedded tree transducer with substitution. Moreover, every extended embedded tree transducer computes a tree transformation that can be computed by a STAG up to a relabeling. 8 Conclusions We presented an alternative view on STAG using tree transducers (or equivalently, STSG). Our main result shows that the syntactic characterization of STAG as STSG plus adjunction rules also carries over to the semantic side. A STAG tree transformation can also be computed by an STSG using explicit substitution. In the light of this result, some standard problems for STAG can be reduced to the corresponding problems for STSG. 
This allows us to re-use existing algorithms for STSG also for STAG. Moreover, existing STAG algorithms can be related to the corresponding STSG algorithms, which provides further evidence of the close relationship between the two models. We used this relationship to develop a 10Note that u is ε if q ∈Q0. BAR-HILLEL construction for STAG. Finally, we hope that the alternative characterization is easier to handle and might provide further insight into general properties of STAG such as compositions and preservation of regularity. Acknowledgements ANDREAS MALETTI was financially supported by the Ministerio de Educaci´on y Ciencia (MEC) grant JDCI-2007-760. References Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling. Prentice Hall. Andr´e Arnold and Max Dauchet. 1982. Morphismes et bimorphismes d’arbres. Theoret. Comput. Sci., 20(1):33–93. Yehoshua Bar-Hillel, Micha Perles, and Eliyahu Shamir. 1964. On formal properties of simple phrase structure grammars. In Yehoshua Bar-Hillel, editor, Language and Information: Selected Essays on their Theory and Application, chapter 9, pages 116–150. Addison Wesley. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. ACL, pages 263–270. Association for Computational Linguistics. David Chiang. 2006. An introduction to synchronous grammars. In Proc. ACL. Association for Computational Linguistics. Part of a tutorial given with Kevin Knight. Bruno Courcelle and Paul Franchi-Zannettacci. 1982. Attribute grammars and recursive program schemes. Theoret. Comput. Sci., 17:163–191, 235–257. Joost Engelfriet and Heiko Vogler. 1985. Macro tree transducers. J. Comput. System Sci., 31(1):71–146. Ferenc G´ecseg and Magnus Steinby. 1984. Tree Automata. Akad´emiai Kiad´o, Budapest. Ferenc G´ecseg and Magnus Steinby. 1997. Tree languages. In Handbook of Formal Languages, volume 3, chapter 1, pages 1–68. Springer. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. In HLT-NAACL, pages 105–112. Association for Computational Linguistics. See also (Graehl et al., 2008). Jonathan Graehl, Kevin Knight, and Jonathan May. 2008. Training tree transducers. Computational Linguistics, 34(3):391–427. 1075 Jonathan Graehl, Mark Hopkins, Kevin Knight, and Andreas Maletti. 2009. The power of extended topdown tree transducers. SIAM Journal on Computing, 39(2):410–430. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proc. IWPT, pages 53–64. Association for Computational Linguistics. Kevin Knight and Jonathan Graehl. 2005. An overview of probabilistic tree transducers for natural language processing. In Proc. CICLing, volume 3406 of LNCS, pages 1–24. Springer. Kevin Knight. 2007. Capturing practical natural language transformations. Machine Translation, 21(2):121–133. Andreas Maletti. 2008. Compositions of extended topdown tree transducers. Inform. and Comput., 206(9– 10):1187–1196. Jonathan May and Kevin Knight. 2006. TIBURON: A weighted tree automata toolkit. In Proc. CIAA, volume 4094 of LNCS, pages 102–113. Springer. Mark-Jan Nederhof. 2009. Weighted parsing of trees. In Proc. IWPT, pages 13–24. Association for Computational Linguistics. Rebecca Nesson, Giorgio Satta, and Stuart M. Shieber. 2008. Optimal k-arization of synchronous treeadjoining grammar. In Proc. ACL, pages 604–612. Association for Computational Linguistics. William C. Rounds. 1970. Mappings and grammars on trees. Math. Systems Theory, 4(3):257–287. Stuart M. Shieber and Yves Schabes. 1990. 
Synchronous tree-adjoining grammars. In Proc. Computational Linguistics, volume 3, pages 253–258. Stuart M. Shieber. 2004. Synchronous grammars as tree transducers. In Proc. TAG+7, pages 88–95. Stuart M. Shieber. 2006. Unifying synchronous tree adjoining grammars and tree transducers via bimorphisms. In Proc. EACL, pages 377–384. Association for Computational Linguistics. Stuart M. Shieber. 2007. Probabilistic synchronous tree-adjoining grammars for machine translation: The argument from bilingual dictionaries. In Proc. Workshop on Syntax and Structure in Statistical Translation, pages 88–95. Association for Computational Linguistics. James W. Thatcher. 1970. Generalized2 sequential machine maps. J. Comput. System Sci., 4(4):339– 367. 1076
2010
109
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 98–107, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Bilingual Lexicon Generation Using Non-Aligned Signatures Daphna Shezaf Institute of Computer Science Hebrew University of Jerusalem [email protected] Ari Rappoport Institute of Computer Science Hebrew University of Jerusalem [email protected] Abstract Bilingual lexicons are fundamental resources. Modern automated lexicon generation methods usually require parallel corpora, which are not available for most language pairs. Lexicons can be generated using non-parallel corpora or a pivot language, but such lexicons are noisy. We present an algorithm for generating a high quality lexicon from a noisy one, which only requires an independent corpus for each language. Our algorithm introduces non-aligned signatures (NAS), a cross-lingual word context similarity score that avoids the over-constrained and inefficient nature of alignment-based methods. We use NAS to eliminate incorrect translations from the generated lexicon. We evaluate our method by improving the quality of noisy Spanish-Hebrew lexicons generated from two pivot English lexicons. Our algorithm substantially outperforms other lexicon generation methods. 1 Introduction Bilingual lexicons are useful for both end users and computerized language processing tasks. They provide, for each source language word or phrase, a set of translations in the target language, and thus they are a basic component of dictionaries, which also include syntactic information, sense division, usage examples, semantic fields, usage guidelines, etc. Traditionally, when bilingual lexicons are not compiled manually, they are extracted from parallel corpora. However, for most language pairs parallel bilingual corpora either do not exist or are at best small and unrepresentative of the general language. Bilingual lexicons can be generated using nonparallel corpora or pivot language lexicons (see Section 2). However, such lexicons are noisy. In this paper we present a method for generating a high quality lexicon given such a noisy one. Our evaluation focuses on the pivot language case. Pivot language approaches deal with the scarcity of bilingual data for most language pairs by relying on the availability of bilingual data for each of the languages in question with a third, pivot, language. In practice, this third language is often English. A naive method for pivot-based lexicon generation goes as follows. For each source headword1, take its translations to the pivot language using the source-to-pivot lexicon, then for each such translation take its translations to the target language using the pivot-to-target lexicon. This method yields highly noisy (‘divergent’) lexicons, because lexicons are generally intransitive. This intransitivity stems from polysemy in the pivot language that does not exist in the source language. For example, take French-English-Spanish. The English word spring is the translation of the French word printemps, but only in the season of year sense. Further translating spring into Spanish yields both the correct translation primavera and an incorrect one, resorte (the elastic object). To cope with the issue of divergence due to lexical intransitivity, we present an algorithm for assessing the correctness of candidate translations. The algorithm is quite simple to understand and to implement and is computationally efficient. 
In spite of its simplicity, we are not aware of previous work applying it to our problem. The algorithm utilizes two monolingual corpora, comparable in their domain but otherwise unrelated, in the source and target languages. It does not need a pivot language corpus. The algorithm comprises two stages: signature genera1In this paper we focus on single word head entries. Multi-word expressions form a major topic in NLP and their handling is deferred to future work. 98 tion and signature ranking. The signature of word w is the set of words that co-occur with w most strongly. While co-occurrence scores are used to compute signatures, signatures, unlike context vectors, do not contain the score values. For each given source headword we compute its signature and the signatures of all of its candidate translations. We present the non-aligned signatures (NAS) similarity score for signature and use it to rank these translations. NAS is based on the number of headword signature words that may be translated using the input noisy lexicon into words in the signature of a candidate translation. We evaluate our algorithm by generating a bilingual lexicon for Hebrew and Spanish using pivot Hebrew-English and English-Spanish lexicons compiled by a professional publishing house. We show that the algorithm outperforms existing algorithms for handling divergence induced by lexical intransitivity. 2 Previous Work 2.1 Parallel Corpora Parallel corpora are often used to infer wordoriented machine-readable bilingual lexicons. The texts are aligned to each other, at chunk- and/or word-level. Alignment is generally evaluated by consistency (source words should be translated to a small number of target words over the entire corpus) and minimal shifting (in each occurrence, the source should be aligned to a translation nearby). For a review of such methods see (Lopez, 2008). The limited availability of parallel corpora of sufficient size for most language pairs restricts the usefulness of these methods. 2.2 Pivot Language Without Corpora 2.2.1 Inverse Consultation Tanaka and Umemura (1994) generated a bilingual lexicon using a pivot language. They approached lexical intransitivity divergence using Inverse Consultation (IC). IC examines the intersection of two pivot language sets: the set of pivot translations of a source-language word w, and the set of pivot translations of each target-language word that is a candidate for being a translation to w. IC generally requires that the intersection set contains at least two words, which are synonyms. For example, the intersection of the English translations of French printemps and Spanish resorte contains only a single word, spring. The intersection for a correct translation pair printemps and primavera may include two synonym words, spring and springtime. Variations of this method were proposed by (Kaji and Aizono, 1996; Bond et al., 2001; Paik et al., 2004; Ahn and Frampton, 2006). One weakness of IC is that it relies on pivot language synonyms to identify correct translations. In the above example, if the relatively rare springtime had not existed or was missing from the input lexicons, IC would not have been able to discern that primavera is a correct translation. This may result in low recall. 2.2.2 Multiple Pivot Languages Mausam et al. (2009) used many input bilingual lexicons to create bilingual lexicons for new language pairs. They represent the multiple input lexicons in a single undirected graph, with words from all the lexicons as nodes. 
The input lexicons translation pairs define the edges in the graph. New translation pairs are inferred based on cycles in the graph, that is, the existence of multiple paths between two words in different languages. In a sense, this is a generalization of the pivot language idea, where multiple pivots are used. In the example above, if both English and German are used as pivots, printemps and primavera would be accepted as correct because they are linked by both English spring and German Fruehling, while printemps and resorte are not linked by any German pivot. This multiple-pivot idea is similar to Inverse Consultation in that multiple pivots are required, but using multiple pivot languages frees it from the dependency on rich input lexicons that contain a variety of synonyms. This is replaced, however, with the problem of coming up with multiple suitable input lexicons. 2.2.3 Micro-Structure of Dictionary Entries Dictionaries published by a single publishing house tend to partition the semantic fields of headwords in the same way. Thus the first translation of some English headword in the English-Spanish and in the English-Hebrew dictionaries would correspond to the same sense of the headword, and would therefore constitute translations of each other. The applicability of this method is limited by the availability of machine-readable dictionaries produced by the same publishing house. Not surprisingly, this method has been proposed by lexicographers working in such companies (Sk99 oumalova, 2001). 2.3 Cross-lingual Co-occurrences in Lexicon Construction Rapp (1999) and Fung (1998) discussed semantic similarity estimation using cross-lingual context vector alignment. Both works rely on a pre-existing large (16-20K entries), correct, oneto-one lexicon between the source and target languages, which is used to align context vectors between languages. The context vector data was extracted from comparable (monolingual but domain-related) corpora. Koehn and Knight (2002) were able to do without the initial large lexicon by limiting themselves to related languages that share a writing system, and using identicallyspelled words as context words. Garera et al. (2009) and Pekar et al. (2006) suggested different methods for improving the context vectors data in each language before aligning them. Garera et al. (2009) replaced the traditional window-based cooccurrence counting with dependency-tree based counting, while Pekar et al. (2006) predicted missing co-occurrence values based on similar words in the same language. In the latter work, the oneto-one lexicon assumption was not made: when a context word had multiple equivalents, it was mapped into all of them, with the original probability equally distributed between them. Pivot Language. Using cross-lingual cooccurrences to improve a lexicon generated using a pivot language was suggested by Tanaka and Iwasaki (1996). Schafer and Yarowsky (2002) created lexicons between English and a target local language (e.g. Gujarati) using a related language (e.g. Hindi) as pivot. An English pivot lexicon was used in conjunction with pivot-target cognates. Cross-lingual co-occurrences were used to remove errors, together with other cues such as edit distance and Inverse Document Frequencies (IDF) scores. It appears that this work assumed a single alignment was possible from English to the target language. Kaji et al. 
(2008) used a pivot English lexicon to generate initial Japanese-Chinese and ChineseJapanese lexicons, then used co-occurrences information, aligned using the initial lexicon, to identify correct translations. Unlike other works, which require alignments of pairs (i.e., two cooccurring words in one language translatable into two co-occurring words in the other), this method relies on alignments of 3-word cliques in each language, every pair of which frequently cooccurring. This is a relatively rare occurrence, which may explain the low recall rates of their results. 3 Algorithm Our algorithm transforms a noisy lexicon into a high quality one. As explained above, in this paper we focus on noisy lexicons generated using pivot language lexicons. Other methods for obtaining an initial noisy lexicon could be used as well; their evaluation is deferred to future work. In the setting evaluated in this paper, we first generate an initial noisy lexicon iLex possibly containing many translation candidates for each source headword. iLex is computed from two pivot-language lexicons, and is the only place in which the algorithm utilizes the pivot language. Afterwards, for each source headword, we compute its signature and the signatures of each of its translation candidates. Signature computation utilizes a monolingual corpus to discover the words that are most strongly related to the word. We now rank the candidates according to the non-aligned signatures (NAS) similarity score, which assesses the similarity between each candidate’s signature and that of the headword. For each headword, we select the t translations with the highest NAS scores as correct translations. 3.1 Input Resources The resources required by our algorithm as evaluated in this paper are: (a) two bilingual lexicons, one from the source to the pivot language and the other from the pivot to the target language. In principle, these two pivot lexicons can be noisy, although in our evaluation we use manually compiled lexicons; (b) two monolingual corpora, one for each of the source and target languages. We have tested the method with corpora of comparable domains, but not covering the same welldefined subjects (the corpora contain news from different countries and over non-identical time periods). 3.2 Initial Lexicon Construction We create an initial lexicon from the source to the target language using the pivot language: we look up each source language word s in the sourcepivot lexicon, and obtain the set Ps of its pivot 100 translations. We then look up each of the members of Ps in the pivot-target lexicon, and obtain a set Ts of candidate target translations. iLex is therefore a mapping from the set of source headwords to the set of candidate target translations. Note that it is possible that not all target lexicon words appear as translation candidates. To create a target to source lexicon, we repeat the process with the directions reversed. 3.3 Signatures The signature of a word w in a language is the set of N words most strongly related to w. There are various possible ways to formalize this notion. We use a common and simple one, the words having the highest tendency to co-occur with w in a corpus. We count co-occurrences using a sliding fixed-length window of size k. We compute, for each pair of words, their Pointwise Mutual Information (PMI), that is: PMI(w1, w2) = log Pr(w1, w2) Pr(w1)Pr(w2) where Pr(w1, w2) is the co-occurrence count, and Pr(wi) is the total number of appearance of wi in the corpus (Church and Hanks, 1990). 
We define the signature G(w)N,k of w to be the set of N words with the highest PMI with w. Note that a word’s signature includes words in the same language. Therefore, two signatures of words in different languages cannot be directly compared; we compare them using a lexicon L as explained below. Signature is a function of w parameterized by N and k. We discuss the selection of these parameters in section 4.1.5. 3.4 Non-aligned Signatures (NAS) Similarity Scoring The core strength of our method lies in the way in which we evaluate similarity between words in the source and target languages. For a lexicon L, a source word s and a target word t, NASL(s, t) is defined as the number of words in the signature G(s)N,k of s that may be translated, using L, to words in the signature G(t)N,k of t, normalized by dividing it by N. Formally, NASL(s, t) = |{w∈G(s)|L(w)∩G(t)̸=∅}| N Where L(x) is the set of candidate translations of x under the lexicon L. Since we use a single Language Sites Tokens Hebrew haartz.co.il, ynet.co.il, nrg.co.il 510M Spanish elpais.com, elmundo.com, abc.es 560M Table 1: Hebrew corpus data. lexicon, iLex, throughout this work, we usually omit the L subscript when referring to NAS. 4 Lexicon Generation Experiments We tested our algorithm by generating bilingual lexicons for Hebrew and Spanish, using English as a pivot language. We chose a language pair for which basically no parallel corpora exist2, and that do not share ancestry or writing system in a way that can provide cues for alignment. We conducted the test twice: once creating a Hebrew-Spanish lexicon, and once creating a Spanish-Hebrew one. 4.1 Experimental Setup 4.1.1 Corpora The Hebrew and Spanish corpora were extracted from Israeli and Spanish newspaper websites respectively (see table 1 for details). Crawling a small number of sites allowed us to use specialtailored software to extract the textual data from the web pages, thus improving the quality of the extracted texts. Our two corpora are comparable in their domains, news and news commentary. No kind of preprocessing was used for the Spanish corpus. For Hebrew, closed-class words that are attached to the succeeding word (e.g., ‘the’, ‘and’, ‘in’) were segmented using a simple unsupervised method (Dinur et al., 2009). This method compares the corpus frequencies of the non-prefixed form x and the prefixed form wx. If x is frequent enough, it is assumed to be the correct form, and all the occurrences of wx are segmented into two tokens, w x. This method was chosen for being simple and effective. However, the segmentation it produces is not perfect. It is context insensitive, segmenting all appearances of a token in the same way, while many wx forms are actually ambiguous. Even unambiguous token segmentations may fail when the non-segmented form is very frequent in the domain. 2Old testament corpora are for biblical Hebrew, which is very different from modern Hebrew. 101 Lexicon # headwords BF Eng-Spa 55057 2.4 Spa-Eng 44349 2.9 Eng-Heb 48857 2.5 Heb-Eng 33439 3.7 Spa-Heb 34077 12.6 Heb-Spa 27591 14.8 Table 2: Number of words in lexicons, and branching factors (BF). Hebrew orthography presents additional difficulties: there are relatively many homographs, and spelling is not quite standardized. These considerations lead us to believe that our choice of language pair is more challenging than, for example, a pair of European languages. 4.1.2 Lexicons The source of the Hebrew-English lexicon was the Babylon on-line dictionary3. 
For Spanish-English, we used the union of Babylon with the Oxford English-Spanish lexicon. Since the corpus was segmented to words using spaces, lexicon entries containing spaces were discarded. Lexicon directionality was ignored. All translation pairs extracted for Hebrew-Spanish via English, were also reversed and added to the SpanishHebrew lexicon, and vice-versa. Therefore, every L1-L2 lexicon we mention is identical to the corresponding L2-L1 lexicon in the set of translation pairs it contains. Our lexicon is thus the ‘noisiest’ that can be generated using a pivot language and two source-pivot-target lexicons, but it also provides the most complete candidate set possible. Ignoring directionality is also in accordance with the reversibility principle of the lexicographic literature (Tomaszczyk, 1998). Table 2 details the sizes and branching factors (BF) (the average number of translations for headword) of the input lexicons, as well as those of the generated initial noisy lexicon. 4.1.3 Baseline The performance of our method was compared to three baselines: Inverse Consultation (IC), average cosine distance, and average city block distance. The first is a completely different algorithm, and the last two are a version of our algorithm in which 3www.babylon.com. the NAS score is replaced by other scores. IC (see section 2.2.1) is a corpus-less method. It ranks t1, t2, ..., the candidate translations of a source word s, by the size of the intersections of the sets of pivot translations of ti and s. Note that IC ranking is a partial order, as the intersection size may be the same for many candidate translations. IC is a baseline for our algorithm as a whole. Cosine and city block distances are widely used methods for calculating distances of vectors within the same vector space. They are defined here as4 Cosine(v, u) = 1 − P viui √P vi P ui CityBlock(v, u) = − X i |vi −ui| In the case of context vectors, the vector indices, or keys, are words, and their values are cooccurrence based scores. We used the words in our signatures as context vector keys, and PMI scores as values. In this way, the two scores are ‘plugged’ into our method and serve as baselines for our NAS similarity score. Since the context vectors are in different languages, we had to translate, or align, the baseline context vectors for the source and target words. Our initial lexicon is a many-to-many relation, so multiple alignments were possible; in fact, the number of possible alignments tends to be very large5. We therefore generated M random possible alignments, and used the average distance metric across these alignments. 4.1.4 Test Sets and Gold Standard Following other works (e.g. (Rapp, 1999)), and to simplify the experimental setup, we focused in our experiments on nouns. A p-q frequency range in a corpus is the set of tokens in the places between p and q in the list of corpus tokens, sorted by frequency from high to low. Two types of test sets were used. The first (R1) includes all the singular, correctly segmented (in Hebrew) nouns among the 500 words in the 1001-1500 frequency range. The 1000 highestfrequency tokens were discarded, as a large number of these are utilized as auxiliary syntactic 4We modified the standard cosine and city block metrics so that for all measures higher values would be better. 5This is another advantage of our NAS score. 
102 R1 R2 Precision Recall Precision Recall NAS 82.1% 100% 56% 100% Cosine 60.7% 100% 28% 100% City block 56.3% 100% 32% 100% IC 55.2% 85.7% 52% 88% Table 3: Hebrew-Spanish lexicon generation: highest-ranking translation. words. This yielded a test set of 112 Hebrew nouns and 169 Spanish nouns. The second (R2), contains 25 words for each of the two languages, obtained by randomly selecting 5 singular correctly segmented nouns from each of the 5 frequency ranges 1-1000 to 4001-5000. For each of the test words, the correct translations were extracted from a modern professional concise printed Hebrew-Spanish-Hebrew dictionary (Prolog, 2003). This dictionary almost always provides a single Spanish translation for Hebrew headwords. Spanish headwords had 1.98 Hebrew translations on the average. In both cases this is a small number of correct translation comparing to what we might expect with other evaluation methods; therefore this evaluation amounts to a relatively high standard of correctness. Our score comparison experiments (section 5) extend the evaluation beyond this gold standard. 4.1.5 Parameters The following parameter values were used. The window size for co-occurrence counting, k, was 4. This value was chosen in a small pre-test. Signature size N was 200 (see Section 6.1). The number of alignments M for the baseline scores was 100. The number of translations selected for each headword, t, was set to 1 for ease of testing, but see further notes under results. 4.2 Results Tables 3 and 4 summarize the results of the Hebrew-Spanish and Spanish-Hebrew lexicon generation respectively, for both the R1 and R2 test sets. In the three co-occurrence based methods, NAS similarity, cosine distance and and city block distance, the highest ranking translation was selected. Recall is always 100% as a translation from the candidate set is always selected, and all of this set is valid. Precision is computed as the number of R1 R2 Precision Recall Precision Recall NAS 87.6% 100% 80% 100% Cosine 68% 100% 44% 100% City block 69.8% 100% 36% 100% IC 76.4% 100% 48% 92% Table 4: Spanish-Hebrew Lexicon Generation: highest-ranking translation. test words whose selected translation was one of the translations in the gold standard. IC translations ranking is a partial order, as usually many translations are scored equally. When all translations have the same score, IC is effectively undecided. We calculate recall as the percentage of cases in which there was more than one score rank. A result was counted as precise if any of the highest-ranking translations was in the goldstandard, even if other translations were equally ranked, creating a bias in favor of IC. In both of the Hebrew-Spanish and the SpanishHebrew cases, our method significantly outperformed all baselines in generating a precise lexicon on the highest-ranking translations. All methods performed better in R1 than in R2, which included also lower-frequency words, and this was more noticeable with the corpusbased methods (Hebrew-Spanish) than with IC. This suggests, not surprisingly, that the performance of corpus-based methods is related to the amount of information in the corpus. That the results for the Spanish-Hebrew lexicon are higher may arise from the difference in the gold standard. As mentioned, Hebrew words only had one “correct” Spanish translation, while Spanish had 1.98 correct translations on the average. If we had used a more comprehensive resource to test against, the precision of the method would be higher than shown here. 
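For concreteness, the ranking procedure whose precision is reported in Tables 3 and 4 can be sketched as follows: the NAS score of Section 3.4 computed over precomputed signatures, followed by selection of the t highest-scoring candidates for each headword of the noisy lexicon iLex. The function and variable names are ours, and the single lexicon passed in plays the role of iLex both as candidate set and as the lexicon L used inside NAS, as in the paper.

def nas(src_word, tgt_word, sig_src, sig_tgt, lexicon, N=200):
    # Fraction of the words in the signature of src_word that have at least
    # one translation (under the lexicon) inside the signature of tgt_word.
    g_s = sig_src.get(src_word, set())
    g_t = sig_tgt.get(tgt_word, set())
    covered = sum(1 for w in g_s if lexicon.get(w, set()) & g_t)
    return covered / N

def clean_lexicon(ilex, sig_src, sig_tgt, t=1, N=200):
    # Keep, for every source headword, the t candidates with the best NAS score.
    out = {}
    for s, candidates in ilex.items():
        ranked = sorted(candidates,
                        key=lambda c: nas(s, c, sig_src, sig_tgt, ilex, N),
                        reverse=True)
        out[s] = ranked[:t]
    return out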
In translation pairs generation, the results beyond the top-ranking pair are also of importance. Tables 5 and 6 present the accuracy of the first three translation suggestions, for the three cooccurrence based scores, calculated for the R1 test set. IC results are not included, as they are incomparable to those of the other methods: IC tends to score many candidate translations identically, and in practice, the three highest-scoring sets of translation candidates contained on average 77% of all 103 1st 2nd 3rd total NAS 82.1% 6.3% 1.8% 90.2% Cosine 60.7% 9.8% 2.7% 73.2% City block 56.3% 4.5% 10.7% 71.4% Table 5: Hebrew-Spanish lexicon generation: accuracy of 3 best translations for the R1 condition. The table shows how many of the 2nd and 3rd translations are correct. Note that NAS is always a better solution, even though its numbers for 2nd and 3rd are smaller, because its accumulative percentage, shown in the last column, is higher. 1st 2nd 3rd total NAS 87.6% 77.5% 16% 163.9% Cosine 68% 66.3% 10.1% 144.4% City block 69.8% 64.5% 7.7% 142% Table 6: Spanish-Hebrew lexicon generation: accuracy of 3 best translations for the R1 condition. The total exceeds 100% because Spanish words had more than one correct translation. See also the caption of Table 5. the candidates, thus necessarily yielding mostly incorrect translations. Recall was omitted from the tables as it is always 100%. For all methods, many of the correct translations that do not rank first, rank as second or third. For both languages, NAS ranks highest for total accuracy of the three translations, with considerable advantage. 5 Score Comparison Experiments Lexicon generation, as defined in our experiment, is a relatively high standard for cross-linguistic semantic distance evaluation. This is especially corHeb-Spa Spa-Heb SCE1 SCE2 SCE1 SCE2 NAS 93.8% 76.2% 94.1% 83.7% Cosine 74.1% 57.1% 70.7% 63.2% City block 74.1% 68.3% 78,1% 75.2% Table 7: Precision of score comparison experiments. The percentage of cases in which each of the scoring methods was able to successfully distinguish the correct (SCE1) or possible correct (SCE2) translation from the random translation. rect since our gold standard gives only a small set of translations. The set of possible translations in iLex tends to include, besides the “correct” translation of the gold standard, other translations that are suitable in certain contexts or are semantically related. For example, for one Hebrew word, kvuza, the gold standard translation was grupo (group), while our method chose equipo (team), which was at least as plausible given the amount of sports news in the corpus. Thus to better compare the capability of NAS to distinguish correct and incorrect translations with that of other scores, we performed two more experiments. In the first score comparison experiment (SCE1), we used the two R1 test sets, Hebrew and Spanish, from the lexicon generation test (section 4.1.4). For each word in the test set, we used our method to select between one of two translations: a correct translation, from the gold standard, and a random translation, chosen randomly among all the nouns similar in frequency to the correct translation. The second score comparison experiment (SCE2) was designed to test the score with a more extensive test set. 
For each of the two languages, we randomly selected 1000 nouns, and used our method to select between a possibly correct translation, chosen randomly among the translations suggested in iLex, and a random translation, chosen randomly among nouns similar in frequency to the possibly correct translation. This test, while using a more extensive test set, is less accurate because it is not guaranteed that any of the input translations is correct. In both SCE1 and SCE2, cosine and city block distance were used as baselines. Inverse Consultation is irrelevant here because it can only score translation pairs that appear in iLex. Table 7 presents the results of the two score comparison experiments, each of them for each of the translation directions. Recall is by definition 100% and is omitted. Again, NAS performs better than the baselines in all cases. With all scores, precision values in SCE1 are higher than in the lexicon generation experiment. This is consistent with the expectation that selection between a correct and a random, probably incorrect, translation is easier than selecting among the translations in iLex. The precision in SCE2 is lower than that in SCE1. This may be a result of both translations in SCE2 being 104 Figure 1: NAS values (not algorithm precision) for various N sizes. NAS is not sensitive to the value of N (see text). in some cases incorrect. Yet this may also reflect a weakness of all three scores with lower-frequency words, which are represented in the 1000-word samples but not in the ones used in SCE1. 6 NAS Score Properties 6.1 Signature Size NAS values are in the range [0, 1]. The values depend on N, the size of the signature used. With an extremely small N, NAS values would usually be 0, and would tend to be noisy, due to accidental inclusion of high-frequency or highly ambiguous words in the signature. As N approaches the size of the lexicon used for alignment, NAS values approach 1 for all word pairs. This suggests that choosing a suitable value of N is critical for effectively using NAS. Yet an empirical test has shown that NAS may be useful for a wide range of N values: we computed NAS values for the correct and random translations used in the Hebrew-Spanish SCE1 experiment (section 5), using N values between 50 and 2000. Figure 1 shows the average score values (note that these are not precision values) for the correct and random translations across that N range. The scores for the correct translations are consistently higher than those of the random translations, even while there is a discernible decline in the difference between them. In fact, the precision of the selection between the correct and random translation is persistent throughout the range. This suggests that while extreme N values should be avoided, the selection of N is not a major issue. 6.2 Dependency on Alignment Lexicon NASL values depend on L, the lexicon in use. Clearly again, in the extremes, an almost empty lexicon or a lexicon containing every possible pair of words (a Cartesian product), this score would not be useful. In the first case, it would yield 0 for every pair, and in the second, 1. However as our experiments show, it performed well with realworld examples of a noisy lexicon, with branching factors of 12.6 and 14.8 (see table 2). 6.3 Lemmatization Lemmatization is the process of extracting the lemmas of words in the corpus. 
Our experiments show that good results can be achieved without lemmatization, at least for nouns in the pair of languages tested (aside from the simple prefix segmentation we used for Hebrew, see section 4.1.1). For other language pairs lemmatization may be needed. In general, correct lemmatization should improve results, since the signatures would consist of more meaningful information. If automatic lemmatization introduces noise, it may reduce the results’ quality. 6.4 Alternative Models for Relatedness Cosine and city block, as well as other related distance metrics, rely on context vectors. The context vector of a word w collects words and maps them to some score of their “relatedness” to w; in this case, we used PMI. NAS, in contrast, relies on the signature, the set of N words most related to w. That is, it requires a Boolean relatedness indication, rather than a numeric relatedness score. We used PMI to generate this Boolean indication, and naturally, other similar measures could be used as well. More significantly, it may be possible to use it with corpus-less sources of “relatedness”, such as WordNet or search result snippets. 7 Conclusion We presented a method to create a high quality bilingual lexicon given a noisy one. We focused on the case in which the noisy lexicon is created using two pivot language lexicons. Our algorithm uses two unrelated monolingual corpora. At the heart of our method is the non-aligned signatures (NAS) context similarity score, used for removing incorrect translations using cross-lingual cooccurrences. 105 Words in one language tend to have multiple translations in another. The common method for context similarity scoring utilizes some algebraic distance between context vectors, and requires a single alignment of context vectors in one language into the other. Finding a single correct alignment is unrealistic even when a perfectly correct lexicon is available. For example, alignment forces us to choose one correct translation for each context word, while in practice a few possible terms may be used interchangeably in the other language. In our task, moreover, the lexicon used for alignment was automatically generated from pivot language lexicons and was expected to contain errors. NAS does not depend on finding a single correct alignment. While it measures how well the sets of words that tend to co-occur with these two words align to each other, its strength may lie in bypassing the question of which word in one language should be aligned to a certain context word in the other language. Therefore, unlike other scoring methods, it is not effected by incorrect alignments. We have shown that NAS outperforms the more traditional distance metrics, which we adapted to the many-to-many scenario by amortizing across multiple alignments. Our results confirm that alignment is problematic in using co-occurrence methods across languages, at least in our settings. NAS constitutes a way to avoid this problem. While the purpose of this work was to discern correct translations from incorrect one, it is worth noting that our method actually ranks translation correctness. This is a stronger property, which may render it useful in a wider range of scenarios. In fact, NAS can be viewed as a general measure for word similarity between languages. 
It would be interesting to further investigate this observation with other sources of lexicons (e.g., obtained from parallel or comparable corpora) and for other tasks, such as cross-lingual word sense disambiguation and information retrieval. References Kisuh Ahn and Matthew Frampton. 2006. Automatic generation of translation dictionaries using intermediary languages. In EACL 2006 Workshop on CrossLanguage Knowledge Induction. Francis Bond, Ruhaida Binti Sulong, Takefumi Yamazaki, and Kentaro Ogura. 2001. Design and construction of a machine-tractable japanese-malay dictionary. In MT Summit VIII: Machine Translation in the Information Age, Proceedings, pages 53–58. Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16:22–29. Elad Dinur, Dmitry Davidov, and Ari Rappoport. 2009. Unsupervised concept discovery in hebrew using simple unsupervised word prefix segmentation for hebrew and arabic. In EACL 2009 Workshop on Computational Approaches to Semitic Languages. Pascale Fung. 1998. A statistical view on bilingual lexicon extraction:from parallel corpora to nonparallel corpora. In The Third Conference of the Association for Machine Translation in the Americas. Nikesh Garera, Chris Callison-Burch, and David Yarowsky. 2009. Improving translation lexicon induction from monolingual corpora via dependency contexts and part-of-speech equivalences. In CoNLL. Hiroyuki Kaji and Toshiko Aizono. 1996. Extracting word correspondences from bilingual corpora based on word co-occurrence information. In COLING. Hiroyuki Kaji, Shin’ichi Tamamura, and Dashtseren Erdenebat. 2008. Automatic construction of a japanese-chinese dictionary via english. In LREC. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition. Adam Lopez. 2008. Statistical machine translation. ACM Computing Surveys, 40(3):1–49. Mausam, Stephen Soderland, Oren Etzioni, Daniel S. Weld, Michael Skinner, and Jeff Bilmes. 2009. Compiling a massive, multilingual dictionary via probabilistic inference. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing. Kyonghee Paik, Satoshi Shirai, and Hiromi Nakaiwa. 2004. Automatic construction of a transfer dictionary considering directionality. In COLING, Multilingual Linguistic Resources Workshop. Viktor Pekar, Ruslan Mitkov, Dimitar Blagoev, and Andrea Mulloni. 2006. Finding translations for lowfrequency words in comparable corpora. Machine Translation, 20:247 – 266. Prolog. 2003. Practical Bilingual Dictionary: Spanish-Hebew/Hebrew-Spanish. Israel. Reinhard Rapp. 1999. Automatic identification of word translations from unrelated english and german corpora. In ACL. 106 Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In CoNLL. Hana Skoumalova. 2001. Bridge dictionaries as bridges between languages. International Journal of Corpus Linguistics, 6:95–105. Kumiko Tanaka and Hideya Iwasaki. 1996. Extraction of lexical translations from non-aligned corpora. In Conference on Computational linguistics. Kumiko Tanaka and Kyoji Umemura. 1994. Construction of a bilingual dictionary intermediated by a third language. In Conference on Computational Linguistics. Jerzy Tomaszczyk. 1998. The bilingual dictionary under review. In ZuriLEX’86. 107
2010
11
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1077–1086, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Dynamic Programming for Linear-Time Incremental Parsing Liang Huang USC Information Sciences Institute 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 [email protected] Kenji Sagae USC Institute for Creative Technologies 13274 Fiji Way Marina del Rey, CA 90292 [email protected] Abstract Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging “equivalent” stacks based on feature values. Empirically, our algorithm yields up to a five-fold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster. 1 Introduction In terms of search strategy, most parsing algorithms in current use for data-driven parsing can be divided into two broad categories: dynamic programming which includes the dominant CKY algorithm, and greedy search which includes most incremental parsing methods such as shift-reduce.1 Both have pros and cons: the former performs an exact search (in cubic time) over an exponentially large space, while the latter is much faster (in linear-time) and is psycholinguistically motivated (Frazier and Rayner, 1982), but its greedy nature may suffer from severe search errors, as it only explores a tiny fraction of the whole space even with a beam. Can we combine the advantages of both approaches, that is, construct an incremental parser 1McDonald et al. (2005b) is a notable exception: the MST algorithm is exact search but not dynamic programming. that runs in (almost) linear-time, yet searches over a huge space with dynamic programming? Theoretically, the answer is negative, as Lee (2002) shows that context-free parsing can be used to compute matrix multiplication, where sub-cubic algorithms are largely impractical. We instead propose a dynamic programming alogorithm for shift-reduce parsing which runs in polynomial time in theory, but linear-time (with beam search) in practice. The key idea is to merge equivalent stacks according to feature functions, inspired by Earley parsing (Earley, 1970; Stolcke, 1995) and generalized LR parsing (Tomita, 1991). However, our formalism is more flexible and our algorithm more practical. Specifically, we make the following contributions: • theoretically, we show that for a large class of modern shift-reduce parsers, dynamic programming is in fact possible and runs in polynomial time as long as the feature functions are bounded and monotonic (which almost always holds in practice); • practically, dynamic programming is up to five times faster (with the same accuracy) as conventional beam-search on top of a stateof-the-art shift-reduce dependency parser; • as a by-product, dynamic programming can output a forest encoding exponentially many trees, out of which we can draw better and longer k-best lists than beam search can; • finally, better and faster search also leads to better and faster learning. 
Our final parser achieves the best (unlabeled) accuracies that we are aware of in both English and Chinese among dependency parsers trained on the Penn Treebanks. Being linear-time, it is also much faster than most other parsers, even with a pure Python implementation. 1077 input: w0 . . . wn−1 axiom 0 : ⟨0, ǫ⟩: 0 sh ℓ: ⟨j, S⟩: c ℓ+ 1 : ⟨j + 1, S|wj⟩: c + ξ j < n re↶ ℓ: ⟨j, S|s1|s0⟩: c ℓ+ 1 : ⟨j, S|s1↶s0⟩: c + λ re↷ ℓ: ⟨j, S|s1|s0⟩: c ℓ+ 1 : ⟨j, S|s1↷s0⟩: c + ρ goal 2n −1 : ⟨n, s0⟩: c where ℓis the step, c is the cost, and the shift cost ξ and reduce costs λ and ρ are: ξ = w · fsh(j, S) (1) λ = w · fre↶(j, S|s1|s0) (2) ρ = w · fre↷(j, S|s1|s0) (3) Figure 1: Deductive system of vanilla shift-reduce. For convenience of presentation and experimentation, we will focus on shift-reduce parsing for dependency structures in the remainder of this paper, though our formalism and algorithm can also be applied to phrase-structure parsing. 2 Shift-Reduce Parsing 2.1 Vanilla Shift-Reduce Shift-reduce parsing performs a left-to-right scan of the input sentence, and at each step, choose one of the two actions: either shift the current word onto the stack, or reduce the top two (or more) items at the end of the stack (Aho and Ullman, 1972). To adapt it to dependency parsing, we split the reduce action into two cases, re↶and re↷, depending on which one of the two items becomes the head after reduction. This procedure is known as “arc-standard” (Nivre, 2004), and has been engineered to achieve state-of-the-art parsing accuracy in Huang et al. (2009), which is also the reference parser in our experiments.2 More formally, we describe a parser configuration by a state ⟨j, S⟩where S is a stack of trees s0, s1, ... where s0 is the top tree, and j is the 2There is another popular variant, “arc-eager” (Nivre, 2004; Zhang and Clark, 2008), which is more complicated and less similar to the classical shift-reduce algorithm. input: “I saw Al with Joe” step action stack queue 0 I ... 1 sh I saw ... 2 sh I saw Al ... 3 re↶ I↶saw Al ... 4 sh I↶saw Al with ... 5a re↷ I↶saw↷Al with ... 5b sh I↶saw Al with Joe Figure 2: A trace of vanilla shift-reduce. After step (4), the parser branches off into (5a) or (5b). queue head position (current word q0 is wj). At each step, we choose one of the three actions: 1. sh: move the head of queue, wj, onto stack S as a singleton tree; 2. re↶: combine the top two trees on the stack, s0 and s1, and replace them with tree s1↶s0. 3. re↷: combine the top two trees on the stack, s0 and s1, and replace them with tree s1↷s0. Note that the shorthand notation t↶t′ denotes a new tree by “attaching tree t′ as the leftmost child of the root of tree t”. This procedure can be summarized as a deductive system in Figure 1. States are organized according to step ℓ, which denotes the number of actions accumulated. The parser runs in linear-time as there are exactly 2n−1 steps for a sentence of n words. As an example, consider the sentence “I saw Al with Joe” in Figure 2. At step (4), we face a shiftreduce conflict: either combine “saw” and “Al” in a re↷action (5a), or shift “with” (5b). To resolve this conflict, there is a cost c associated with each state so that we can pick the best one (or few, with a beam) at each step. Costs are accumulated in each step: as shown in Figure 1, actions sh, re↶, and re↷have their respective costs ξ, λ, and ρ, which are dot-products of the weights w and features extracted from the state and the action. 
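Before turning to the feature model, the vanilla deductive system of Figure 1 can be made concrete with a short sketch. This is our own illustration of greedy (beam width 1) arc-standard parsing, not the reference implementation; feats(action, j, stack) stands for the action-conjoined feature extractor of Section 2.2 and weights for the learned weight vector, both of which are assumed rather than given here.

def greedy_parse(words, weights, feats):
    # Vanilla arc-standard shift-reduce (Fig. 1) with greedy one-best search.
    # Arcs are returned as (head, dependent) pairs over word indices.
    n = len(words)
    stack, j, arcs, cost = [], 0, [], 0.0

    def score(action):
        return sum(weights.get(f, 0.0) for f in feats(action, j, stack))

    for _ in range(2 * n - 1):              # exactly 2n-1 actions per sentence
        legal = []
        if j < n:
            legal.append('sh')
        if len(stack) >= 2:
            legal += ['re_left', 're_right']
        best = max(legal, key=score)
        cost += score(best)
        if best == 'sh':
            stack.append(j)
            j += 1
        else:
            s0, s1 = stack.pop(), stack.pop()
            if best == 're_left':           # s1 <- s0: s0 is the head
                arcs.append((s0, s1))
                stack.append(s0)
            else:                           # s1 -> s0: s1 is the head
                arcs.append((s1, s0))
                stack.append(s1)
    return arcs, cost

Beam search (Section 2.3) keeps the b best action sequences instead of a single one, but the transition system and the additive costs are the same.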
2.2 Features We view features as “abstractions” or (partial) observations of the current state, which is an important intuition for the development of dynamic programming in Section 3. Feature templates are functions that draw information from the feature window (see Tab. 1(b)), consisting of the top few trees on the stack and the first few words on the queue. For example, one such feature templatef100 = s0.w ◦q0.t is a conjunction 1078 of two atomic features s0.w and q0.t, capturing the root word of the top tree s0 on the stack, and the part-of-speech tag of the current head word q0 on the queue. See Tab. 1(a) for the list of feature templates used in the full model. Feature templates are instantiated for a specific state. For example, at step (4) in Fig. 2, the above template f100 will generate a feature instance (s0.w = Al) ◦(q0.t = IN). More formally, we denote f to be the feature function, such that f(j, S) returns a vector of feature instances for state ⟨j, S⟩. To decide which action is the best for the current state, we perform a threeway classification based on f(j, S), and to do so, we further conjoin these feature instances with the action, producing action-conjoined instances like (s0.w = Al) ◦(q0.t = IN) ◦(action = sh). We denote fsh(j, S), fre↶(j, S), and fre↷(j, S) to be the conjoined feature instances, whose dotproducts with the weight vector decide the best action (see Eqs. (1-3) in Fig. 1). 2.3 Beam Search and Early Update To improve on strictly greedy search, shift-reduce parsing is often enhanced with beam search (Zhang and Clark, 2008), where b states develop in parallel. At each step we extend the states in the current beam by applying one of the three actions, and then choose the best b resulting states for the next step. Our dynamic programming algorithm also runs on top of beam search in practice. To train the model, we use the averaged perceptron algorithm (Collins, 2002). Following Collins and Roark (2004) we also use the “early-update” strategy, where an update happens whenever the gold-standard action-sequence falls off the beam, with the rest of the sequence neglected.3 The intuition behind this strategy is that later mistakes are often caused by previous ones, and are irrelevant when the parser is on the wrong track. Dynamic programming turns out to be a great fit for early updating (see Section 4.3 for details). 3 Dynamic Programming (DP) 3.1 Merging Equivalent States The key observation for dynamic programming is to merge “equivalent states” in the same beam 3As a special case, for the deterministic mode (b=1), updates always co-occur with the first mistake made. (a) Features Templates f(j, S) qi = wj+i (1) s0.w s0.t s0.w ◦s0.t s1.w s1.t s1.w ◦s1.t q0.w q0.t q0.w ◦q0.t (2) s0.w ◦s1.w s0.t ◦s1.t s0.t ◦q0.t s0.w ◦s0.t ◦s1.t s0.t ◦s1.w ◦s1.t s0.w ◦s1.w ◦s1.t s0.w ◦s0.t ◦s1.w s0.w ◦s0.t ◦s1 ◦s1.t (3) s0.t ◦q0.t ◦q1.t s1.t ◦s0.t ◦q0.t s0.w ◦q0.t ◦q1.t s1.t ◦s0.w ◦q0.t (4) s1.t ◦s1.lc.t ◦s0.t s1.t ◦s1.rc.t ◦s0.t s1.t ◦s0.t ◦s0.rc.t s1.t ◦s1.lc.t ◦s0 s1.t ◦s1.rc.t ◦s0.w s1.t ◦s0.w ◦s0.lc.t (5) s2.t ◦s1.t ◦s0.t (b) ←stack queue → ... s2 ... s1 s1.lc ... ... s1.rc ... s0 s0.lc ... ... s0.rc ... q0 q1 ... (c) Kernel features for DP ef(j, S) = (j, f2(s2), f1(s1), f0(s0)) f2(s2) s2.t f1(s1) s1.w s1.t s1.lc.t s1.rc.t f0(s0) s0.w s0.t s0.lc.t s0.rc.t j q0.w q0.t q1.t Table 1: (a) feature templates used in this work, adapted from Huang et al. (2009). x.w and x.t denotes the root word and POS tag of tree (or word) x. and x.lc and x.rc denote x’s left- and rightmost child. 
(b) feature window. (c) kernel features. (i.e., same step) if they have the same feature values, because they will have the same costs as shown in the deductive system in Figure 1. Thus we can define two states ⟨j, S⟩and ⟨j′, S′⟩to be equivalent, notated ⟨j, S⟩∼⟨j′, S′⟩, iff. j = j′ and f(j, S) = f(j′, S′). (4) Note that j = j′ is also needed because the queue head position j determines which word to shift next. In practice, however, a small subset of atomic features will be enough to determine the whole feature vector, which we call kernel features ef(j, S), defined as the smallest set of atomic templates such that ef(j, S) = ef(j′, S′) ⇒⟨j, S⟩∼⟨j′, S′⟩. For example, the full list of 28 feature templates in Table 1(a) can be determined by just 12 atomic features in Table 1(c), which just look at the root words and tags of the top two trees on stack, as well as the tags of their left- and rightmost children, plus the root tag of the third tree s2, and finally the word and tag of the queue head q0 and the 1079 state form ℓ: ⟨i, j, sd...s0⟩: (c, v, π) ℓ: step; c, v: prefix and inside costs; π: predictor states equivalence ℓ: ⟨i, j, sd...s0⟩∼ℓ: ⟨i, j, s′ d...s′ 0⟩ iff. ef(j, sd...s0) = ef(j, s′ d...s′ 0) ordering ℓ: : (c, v, ) ≺ℓ: : (c′, v′, ) iff. c < c′ or (c = c′ and v < v′). axiom (p0) 0 : ⟨0, 0, ǫ⟩: (0, 0, ∅) sh state p: ℓ: ⟨, j, sd...s0⟩: (c, , ) ℓ+ 1 : ⟨j, j + 1, sd−1...s0, wj⟩: (c + ξ, 0, {p}) j < n re↶ state p: : ⟨k, i, s′ d...s′ 0⟩: (c′, v′, π′) state q: ℓ: ⟨i, j, sd...s0⟩: ( , v, π) ℓ+ 1 : ⟨k, j, s′ d...s′ 1, s′ 0 ↶s0⟩: (c′ + v + δ, v′ + v + δ, π′) p ∈π goal 2n −1 : ⟨0, n, sd...s0⟩: (c, c, {p0}) where ξ = w · fsh(j, sd...s0), and δ = ξ′ + λ, with ξ′ = w · fsh(i, s′ d...s′ 0) and λ = w · fre↶(j, sd...s0). Figure 3: Deductive system for shift-reduce parsing with dynamic programming. The predictor state set π is an implicit graph-structured stack (Tomita, 1988) while the prefix cost c is inspired by Stolcke (1995). The re↷case is similar, replacing s′ 0 ↶s0 with s′ 0 ↷s0, and λ with ρ = w · fre↷(j, sd...s0). Irrelevant information in a deduction step is marked as an underscore ( ) which means “can match anything”. tag of the next word q1. Since the queue is static information to the parser (unlike the stack, which changes dynamically), we can use j to replace features from the queue. So in general we write ef(j, S) = (j, fd(sd), . . . , f0(s0)) if the feature window looks at top d + 1 trees on stack, and where fi(si) extracts kernel features from tree si (0 ≤i ≤d). For example, for the full model in Table 1(a) we have ef(j, S) = (j, f2(s2), f1(s1), f0(s0)), (5) where d = 2, f2(x) = x.t, and f1(x) = f0(x) = (x.w, x.t, x.lc.t, x.rc.t) (see Table 1(c)). 3.2 Graph-Structured Stack and Deduction Now that we have the kernel feature functions, it is intuitive that we might only need to remember the relevant bits of information from only the last (d + 1) trees on stack instead of the whole stack, because they provide all the relevant information for the features, and thus determine the costs. For shift, this suffices as the stack grows on the right; but for reduce actions the stack shrinks, and in order still to maintain d + 1 trees, we have to know something about the history. This is exactly why we needed the full stack for vanilla shift-reduce parsing in the first place, and why dynamic programming seems hard here. To solve this problem we borrow the idea of “graph-structured stack” (GSS) from Tomita (1991). 
Basically, each state p carries with it a set π(p) of predictor states, each of which can be combined with p in a reduction step. In a shift step, if state p generates state q (we say “p predicts q” in Earley (1970) terms), then p is added onto π(q). When two equivalent shifted states get merged, their predictor states get combined. In a reduction step, state q tries to combine with every predictor state p ∈π(q), and the resulting state r inherits the predictor states set from p, i.e., π(r) = π(p). Interestingly, when two equivalent reduced states get merged, we can prove (by induction) that their predictor states are identical (proof omitted). Figure 3 shows the new deductive system with dynamic programming and GSS. A new state has the form ℓ: ⟨i, j, sd...s0⟩ where [i..j] is the span of the top tree s0, and sd..s1 are merely “left-contexts”. It can be combined with some predictor state p spanning [k..i] ℓ′ : ⟨k, i, s′ d...s′ 0⟩ to form a larger state spanning [k..j], with the resulting top tree being either s1↶s0 or s1↷s0. 1080 This style resembles CKY and Earley parsers. In fact, the chart in Earley and other agenda-based parsers is indeed a GSS when viewed left-to-right. In these parsers, when a state is popped up from the agenda, it looks for possible sibling states that can combine with it; GSS, however, explicitly maintains these predictor states so that the newlypopped state does not need to look them up.4 3.3 Correctness and Polynomial Complexity We state the main theoretical result with the proof omitted due to space constraints: Theorem 1. The deductive system is optimal and runs in worst-case polynomial time as long as the kernel feature function satisfies two properties: • bounded: ef(j, S) = (j, fd(sd), . . . , f0(s0)) for some constant d, and each |ft(x)| also bounded by a constant for all possible tree x. • monotonic: ft(x) = ft(y) ⇒ft+1(x) = ft+1(y), for all t and all possible trees x, y. Intuitively, boundedness means features can only look at a local window and can only extract bounded information on each tree, which is always the case in practice since we can not have infinite models. Monotonicity, on the other hand, says that features drawn from trees farther away from the top should not be more refined than from those closer to the top. This is also natural, since the information most relevant to the current decision is always around the stack top. For example, the kernel feature function in Eq. 5 is bounded and monotonic, since f2 is less refined than f1 and f0. These two requirements are related to grammar refinement by annotation (Johnson, 1998), where annotations must be bounded and monotonic: for example, one cannot refine a grammar by only remembering the grandparent but not the parent symbol. The difference here is that the annotations are not vertical ((grand-)parent), but rather horizontal (left context). For instance, a context-free rule A →B C would become DA → DB BC for some D if there exists a rule E →αDAβ. This resembles the reduce step in Fig. 3. The very high-level idea of the proof is that boundedness is crucial for polynomial-time, while monotonicity is used for the optimal substructure property required by the correctness of DP. 4In this sense, GSS (Tomita, 1988) is really not a new invention: an efficient implementation of Earley (1970) should already have it implicitly, similar to what we have in Fig. 3. 3.4 Beam Search based on Prefix Cost Though the DP algorithm runs in polynomialtime, in practice the complexity is still too high, esp. 
with a rich feature set like the one in Table 1. So we apply the same beam search idea from Sec. 2.3, where each step can accommodate only the best b states. To decide the ordering of states in each beam we borrow the concept of prefix cost from Stolcke (1995), originally developed for weighted Earley parsing. As shown in Fig. 3, the prefix cost c is the total cost of the best action sequence from the initial state to the end of state p, i.e., it includes both the inside cost v (for Viterbi inside derivation), and the cost of the (best) path leading towards the beginning of state p. We say that a state p with prefix cost c is better than a state p′ with prefix cost c′, notated p ≺p′ in Fig. 3, if c < c′. We can also prove (by contradiction) that optimizing for prefix cost implies optimal inside cost (Nederhof, 2003, Sec. 4). 5 As shown in Fig. 3, when a state q with costs (c, v) is combined with a predictor state p with costs (c′, v′), the resulting state r will have costs (c′ + v + δ, v′ + v + δ), where the inside cost is intuitively the combined inside costs plus an additional combo cost δ from the combination, while the resulting prefix cost c′ + v + δ is the sum of the prefix cost of the predictor state q, the inside cost of the current state p, and the combo cost. Note the prefix cost of q is irrelevant. The combo cost δ = ξ′ + λ consists of shift cost ξ′ of p and reduction cost λ of q. The cost in the non-DP shift-reduce algorithm (Fig. 1) is indeed a prefix cost, and the DP algorithm subsumes the non-DP one as a special case where no two states are equivalent. 3.5 Example: Edge-Factored Model As a concrete example, Figure 4 simulates an edge-factored model (Eisner, 1996; McDonald et al., 2005a) using shift-reduce with dynamic programming, which is similar to bilexical PCFG parsing using CKY (Eisner and Satta, 1999). Here the kernel feature function is ef(j, S) = (j, h(s1), h(s0)) 5Note that using inside cost v for ordering would be a bad idea, as it will always prefer shorter derivations like in best-first parsing. As in A* search, we need some estimate of “outside cost” to predict which states are more promising, and the prefix cost includes an exact cost for the left outside context, but no right outside context. 1081 sh ℓ: ⟨, h ...j ⟩: (c, ) ℓ+ 1 : ⟨h, j⟩: (c, 0) j < n re↶ : ⟨h′′, h′ k...i ⟩: (c′, v′) ℓ: ⟨h′, h i...j ⟩: ( , v) ℓ+ 1 : ⟨h′′, h h′ k...i i...j ⟩: (c′ + v + λ, v′ + v + λ) where re↶cost λ = w · fre↶(h′, h) Figure 4: Example of shift-reduce with dynamic programming: simulating an edge-factored model. GSS is implicit here, and re↷case omitted. where h(x) returns the head word index of tree x, because all features in this model are based on the head and modifier indices in a dependency link. This function is obviously bounded and monotonic in our definitions. The theoretical complexity of this algorithm is O(n7) because in a reduction step we have three span indices and three head indices, plus a step index ℓ. By contrast, the na¨ıve CKY algorithm for this model is O(n5) which can be improved to O(n3) (Eisner, 1996).6 The higher complexity of our algorithm is due to two factors: first, we have to maintain both h and h′ in one state, because the current shift-reduce model can not draw features across different states (unlike CKY); and more importantly, we group states by step ℓin order to achieve incrementality and linear runtime with beam search that is not (easily) possible with CKY or MST. 
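Before the experiments, the state-merging step at the core of the DP beam search can be sketched as follows. This is a simplified illustration in our own terms: the State fields mirror the items of Figure 3, a single backpointer replaces the packed forest the parser actually builds, and, following the ordering relation in Figure 3, lower cost is taken to be better.

from dataclasses import dataclass, field

BEAM = 16                                    # beam width b

@dataclass
class State:
    step: int                                # number of actions taken
    kernel: tuple                            # (j, f_d(s_d), ..., f_0(s_0)), Eq. 5
    prefix: float                            # prefix cost c
    inside: float                            # Viterbi inside cost v
    predictors: set = field(default_factory=set)   # graph-structured stack
    backpointer: object = None

def merge_and_prune(candidates):
    # One step of DP beam search (Fig. 3): newly derived states that agree on
    # the step and on all kernel feature values are merged, pooling their
    # predictor states and keeping the better (prefix, inside) costs; the
    # BEAM best survivors form the next beam.
    merged = {}
    for s in candidates:
        key = (s.step, s.kernel)
        kept = merged.get(key)
        if kept is None:
            merged[key] = s
        else:
            kept.predictors |= s.predictors
            if (s.prefix, s.inside) < (kept.prefix, kept.inside):
                kept.prefix, kept.inside = s.prefix, s.inside
                kept.backpointer = s.backpointer
    return sorted(merged.values(), key=lambda s: (s.prefix, s.inside))[:BEAM]

Shift and reduce steps produce the candidate states exactly as in Figure 3; merging is what lets a beam of size b stand for exponentially many action sequences.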
4 Experiments We first reimplemented the reference shift-reduce parser of Huang et al. (2009) in Python (henceforth “non-DP”), and then extended it to do dynamic programing (henceforth “DP”). We evaluate their performances on the standard Penn Treebank (PTB) English dependency parsing task7 using the standard split: secs 02-21 for training, 22 for development, and 23 for testing. Both DP and non-DP parsers use the same feature templates in Table 1. For Secs. 4.1-4.2, we use a baseline model trained with non-DP for both DP and non-DP, so that we can do a side-by-side comparison of search 6Or O(n2) with MST, but including non-projective trees. 7Using the head rules of Yamada and Matsumoto (2003). quality; in Sec. 4.3 we will retrain the model with DP and compare it against training with non-DP. 4.1 Speed Comparisons To compare parsing speed between DP and nonDP, we run each parser on the development set, varying the beam width b from 2 to 16 (DP) or 64 (non-DP). Fig. 5a shows the relationship between search quality (as measured by the average model score per sentence, higher the better) and speed (average parsing time per sentence), where DP with a beam width of b=16 achieves the same search quality with non-DP at b=64, while being 5 times faster. Fig. 5b shows a similar comparison for dependency accuracy. We also test with an edge-factored model (Sec. 3.5) using feature templates (1)-(3) in Tab. 1, which is a subset of those in McDonald et al. (2005b). As expected, this difference becomes more pronounced (8 times faster in Fig. 5c), since the less expressive feature set makes more states “equivalent” and mergeable in DP. Fig. 5d shows the (almost linear) correlation between dependency accuracy and search quality, confirming that better search yields better parsing. 4.2 Search Space, Forest, and Oracles DP achieves better search quality because it expores an exponentially large search space rather than only b trees allowed by the beam (see Fig. 6a). As a by-product, DP can output a forest encoding these exponentially many trees, out of which we can draw longer and better (in terms of oracle) kbest lists than those in the beam (see Fig. 6b). The forest itself has an oracle of 98.15 (as if k →∞), computed `a la Huang (2008, Sec. 4.1). These candidate sets may be used for reranking (Charniak and Johnson, 2005; Huang, 2008).8 4.3 Perceptron Training and Early Updates Another interesting advantage of DP over non-DP is the faster training with perceptron, even when both parsers use the same beam width. This is due to the use of early updates (see Sec. 2.3), which happen much more often with DP, because a goldstandard state p is often merged with an equivalent (but incorrect) state that has a higher model score, which triggers update immediately. By contrast, in non-DP beam search, states such as p might still 8DP’s k-best lists are extracted from the forest using the algorithm of Huang and Chiang (2005), rather than those in the final beam as in the non-DP case, because many derivations have been merged during dynamic programming. 1082 2370 2373 2376 2379 2382 2385 2388 2391 2394 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 avg. model score b=16 b=64 DP non-DP 92.2 92.3 92.4 92.5 92.6 92.7 92.8 92.9 93 93.1 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 dependency accuracy b=16 b=64 DP non-DP (a) search quality vs. time (full model) (b) parsing accuracy vs. time (full model) 2290 2295 2300 2305 2310 2315 2320 2325 2330 2335 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 avg. 
model score b=16 b=64 DP non-DP 88.5 89 89.5 90 90.5 91 91.5 92 92.5 93 93.5 2280 2300 2320 2340 2360 2380 2400 dependency accuracy full, DP full, non-DP edge-factor, DP edge-factor, non-DP (c) search quality vs. time (edge-factored model) (d) correlation b/w parsing (y) and search (x) Figure 5: Speed comparisons between DP and non-DP, with beam size b ranging 2∼16 for DP and 2∼64 for non-DP. Speed is measured by avg. parsing time (secs) per sentence on x axis. With the same level of search quality or parsing accuracy, DP (at b=16) is ∼4.8 times faster than non-DP (at b=64) with the full model in plots (a)-(b), or ∼8 times faster with the simplified edge-factored model in plot (c). Plot (d) shows the (roughly linear) correlation between parsing accuracy and search quality (avg. model score). 100 102 104 106 108 1010 1012 0 10 20 30 40 50 60 70 number of trees explored sentence length DP forest non-DP (16) 93 94 95 96 97 98 99 64 32 16 8 4 1 oracle precision k DP forest (98.15) DP k-best in forest non-DP k-best in beam (a) sizes of search spaces (b) oracle precision on dev Figure 6: DP searches over a forest of exponentially many trees, which also produces better and longer k-best lists with higher oracles, while non-DP only explores b trees allowed in the beam (b = 16 here). 1083 90.5 91 91.5 92 92.5 93 93.5 0 4 8 12 16 20 24 accuracy on dev (each round) hours 17th 18th DP non-DP Figure 7: Learning curves (showing precision on dev) of perceptron training for 25 iterations (b=8). DP takes 18 hours, peaking at the 17th iteration (93.27%) with 12 hours, while non-DP takes 23 hours, peaking at the 18th (93.04%) with 16 hours. survive in the beam throughout, even though it is no longer possible to rank the best in the beam. The higher frequency of early updates results in faster iterations of perceptron training. Table 2 shows the percentage of early updates and the time per iteration during training. While the number of updates is roughly comparable between DP and non-DP, the rate of early updates is much higher with DP, and the time per iteration is consequently shorter. Figure 7 shows that training with DP is about 1.2 times faster than non-DP, and achieves +0.2% higher accuracy on the dev set (93.27%). Besides training with gold POS tags, we also trained on noisy tags, since they are closer to the test setting (automatic tags on sec 23). In that case, we tag the dev and test sets using an automatic POS tagger (at 97.2% accuracy), and tag the training set using four-way jackknifing similar to Collins (2000), which contributes another +0.1% improvement in accuracy on the test set. Faster training also enables us to incorporate more features, where we found more lookahead features (q2) results in another +0.3% improvement. 4.4 Final Results on English and Chinese Table 3 presents the final test results of our DP parser on the Penn English Treebank, compared with other state-of-the-art parsers. Our parser achieves the highest (unlabeled) dependency accuracy among dependency parsers trained on the Treebank, and is also much faster than most other parsers even with a pure Python implementation it update early% time update early% time 1 31943 98.9 22 31189 87.7 29 5 20236 98.3 38 19027 70.3 47 17 8683 97.1 48 7434 49.5 60 25 5715 97.2 51 4676 41.2 65 Table 2: Perceptron iterations with DP (left) and non-DP (right). Early updates happen much more often with DP due to equivalent state merging, which leads to faster training (time in minutes). word L time comp. 
McDonald 05b 90.2 Ja 0.12 O(n2) McDonald 05a 90.9 Ja 0.15 O(n3) Koo 08 base 92.0 − − O(n4) Zhang 08 single 91.4 C 0.11 O(n)‡ this work 92.1 Py 0.04 O(n) †Charniak 00 92.5 C 0.49 O(n5) †Petrov 07 92.4 Ja 0.21 O(n3) Zhang 08 combo 92.1 C − O(n2)‡ Koo 08 semisup 93.2 − − O(n4) Table 3: Final test results on English (PTB). Our parser (in pure Python) has the highest accuracy among dependency parsers trained on the Treebank, and is also much faster than major parsers. †converted from constituency trees. C=C/C++, Py=Python, Ja=Java. Time is in seconds per sentence. Search spaces: ‡linear; others exponential. (on a 3.2GHz Xeon CPU). Best-performing constituency parsers like Charniak (2000) and Berkeley (Petrov and Klein, 2007) do outperform our parser, since they consider more information during parsing, but they are at least 5 times slower. Figure 8 shows the parse time in seconds for each test sentence. The observed time complexity of our DP parser is in fact linear compared to the superlinear complexity of Charniak, MST (McDonald et al., 2005b), and Berkeley parsers. Additional techniques such as semi-supervised learning (Koo et al., 2008) and parser combination (Zhang and Clark, 2008) do achieve accuracies equal to or higher than ours, but their results are not directly comparable to ours since they have access to extra information like unlabeled data. Our technique is orthogonal to theirs, and combining these techniques could potentially lead to even better results. We also test our final parser on the Penn Chinese Treebank (CTB5). Following the set-up of Duan et al. (2007) and Zhang and Clark (2008), we split CTB5 into training (secs 001-815 and 10011084 0 0.2 0.4 0.6 0.8 1 1.2 1.4 0 10 20 30 40 50 60 70 parsing time (secs) sentence length Cha Berk MST DP Figure 8: Scatter plot of parsing time against sentence length, comparing with Charniak, Berkeley, and the O(n2) MST parsers. word non-root root compl. Duan 07 83.88 84.36 73.70 32.70 Zhang 08† 84.33 84.69 76.73 32.79 this work 85.20 85.52 78.32 33.72 Table 4: Final test results on Chinese (CTB5). †The transition parser in Zhang and Clark (2008). 1136), development (secs 886-931 and 11481151), and test (secs 816-885 and 1137-1147) sets, assume gold-standard POS-tags for the input, and use the head rules of Zhang and Clark (2008). Table 4 summarizes the final test results, where our work performs the best in all four types of (unlabeled) accuracies: word, non-root, root, and complete match (all excluding punctuations). 9,10 5 Related Work This work was inspired in part by Generalized LR parsing (Tomita, 1991) and the graph-structured stack (GSS). Tomita uses GSS for exhaustive LR parsing, where the GSS is equivalent to a dynamic programming chart in chart parsing (see Footnote 4). In fact, Tomita’s GLR is an instance of techniques for tabular simulation of nondeterministic pushdown automata based on deductive systems (Lang, 1974), which allow for cubictime exhaustive shift-reduce parsing with contextfree grammars (Billot and Lang, 1989). Our work advances this line of research in two aspects. First, ours is more general than GLR in 9Duan et al. (2007) and Zhang and Clark (2008) did not report word accuracies, but those can be recovered given nonroot and root ones, and the number of non-punctuation words. 10Parser combination in Zhang and Clark (2008) achieves a higher word accuracy of 85.77%, but again, it is not directly comparable to our work. 
that it is not restricted to LR (a special case of shift-reduce), and thus does not require building an LR table, which is impractical for modern grammars with a large number of rules or features. In contrast, we employ the ideas behind GSS more flexibly to merge states based on features values, which can be viewed as constructing an implicit LR table on-the-fly. Second, unlike previous theoretical results about cubic-time complexity, we achieved linear-time performance by smart beam search with prefix cost inspired by Stolcke (1995), allowing for state-of-the-art data-driven parsing. To the best of our knowledge, our work is the first linear-time incremental parser that performs dynamic programming. The parser of Roark and Hollingshead (2009) is also almost linear time, but they achieved this by discarding parts of the CKY chart, and thus do achieve incrementality. 6 Conclusion We have presented a dynamic programming algorithm for shift-reduce parsing, which runs in linear-time in practice with beam search. This framework is general and applicable to a largeclass of shift-reduce parsers, as long as the feature functions satisfy boundedness and monotonicity. Empirical results on a state-the-art dependency parser confirm the advantage of DP in many aspects: faster speed, larger search space, higher oracles, and better and faster learning. Our final parser outperforms all previously reported dependency parsers trained on the Penn Treebanks for both English and Chinese, and is much faster in speed (even with a Python implementation). For future work we plan to extend it to constituency parsing. Acknowledgments We thank David Chiang, Yoav Goldberg, Jonathan Graehl, Kevin Knight, and Roger Levy for helpful discussions and the three anonymous reviewers for comments. Mark-Jan Nederhof inspired the use of prefix cost. Yue Zhang helped with Chinese datasets, and Wenbin Jiang with feature sets. This work is supported in part by DARPA GALE Contract No. HR0011-06-C-0022 under subcontract to BBN Technologies, and by the U.S. Army Research, Development, and Engineering Command (RDECOM). Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred. 1085 References Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling, volume I: Parsing of Series in Automatic Computation. Prentice Hall, Englewood Cliffs, New Jersey. S. Billot and B. Lang. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of the 27th ACL, pages 143–151. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine-grained n-best parsing and discriminative reranking. In Proceedings of the 43rd ACL, Ann Arbor, MI. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of NAACL. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceedings of ACL. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML, pages 175–182. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP. Xiangyu Duan, Jun Zhao, and Bo Xu. 2007. Probabilistic models for action-based chinese dependency parsing. In Proceedings of ECML/PKDD. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94– 102. Jason Eisner and Giorgio Satta. 1999. 
Efficient parsing for bilexical context-free grammars and headautomaton grammars. In Proceedings of ACL. Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of COLING. Lyn Frazier and Keith Rayner. 1982. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14(2):178 – 210. Liang Huang and David Chiang. 2005. Better k-best Parsing. In Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT-2005). Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of EMNLP. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of the ACL: HLT, Columbus, OH, June. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:613–632. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL. B. Lang. 1974. Deterministic techniques for efficient non-deterministic parsers. In Automata, Languages and Programming, 2nd Colloquium, volume 14 of Lecture Notes in Computer Science, pages 255–269, Saarbr¨ucken. Springer-Verlag. Lillian Lee. 2002. Fast context-free grammar parsing requires fast Boolean matrix multiplication. Journal of the ACM, 49(1):1–15. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the 43rd ACL. Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajiˇc. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. of HLTEMNLP. Mark-Jan Nederhof. 2003. Weighted deductive parsing and Knuth’s algorithm. Computational Linguistics, pages 135–143. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Incremental Parsing: Bringing Engineering and Cognition Together. Workshop at ACL-2004, Barcelona. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLTNAACL. Brian Roark and Kristy Hollingshead. 2009. Linear complexity context-free parsing pipelines via chart constraints. In Proceedings of HLT-NAACL. Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–201. Masaru Tomita. 1988. Graph-structured stack and natural language parsing. In Proceedings of the 26th annual meeting on Association for Computational Linguistics, pages 249–257, Morristown, NJ, USA. Association for Computational Linguistics. Masaru Tomita, editor. 1991. Generalized LR Parsing. Kluwer Academic Publishers. H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proceedings of EMNLP. 1086
2010
110
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1087–1097, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Hard Constraints for Grammatical Function Labelling Wolfgang Seeker University of Stuttgart Institut f¨ur Maschinelle Sprachverarbeitung [email protected] Ines Rehbein University of Saarland Dep. for Comp. Linguistics & Phonetics [email protected] Jonas Kuhn University of Stuttgart Institut f¨ur Maschinelle Sprachverarbeitung [email protected] Josef van Genabith Dublin City University CNGL and School of Computing [email protected] Abstract For languages with (semi-) free word order (such as German), labelling grammatical functions on top of phrase-structural constituent analyses is crucial for making them interpretable. Unfortunately, most statistical classifiers consider only local information for function labelling and fail to capture important restrictions on the distribution of core argument functions such as subject, object etc., namely that there is at most one subject (etc.) per clause. We augment a statistical classifier with an integer linear program imposing hard linguistic constraints on the solution space output by the classifier, capturing global distributional restrictions. We show that this improves labelling quality, in particular for argument grammatical functions, in an intrinsic evaluation, and, importantly, grammar coverage for treebankbased (Lexical-Functional) grammar acquisition and parsing, in an extrinsic evaluation. 1 Introduction Phrase or constituent structure is often regarded as an analysis step guiding semantic interpretation, while grammatical functions (i. e. subject, object, modifier etc.) provide important information relevant to determining predicate-argument structure. In languages with restricted word order (e. g. English), core grammatical functions can often be recovered from configurational information in constituent structure analyses. By contrast, simple constituent structures are not sufficient for less configurational languages, which tend to encode grammatical functions by morphological means (Bresnan, 2001). Case features, for instance, can be important indicators of grammatical functions. Unfortunately, many of these languages (including German) exhibit strong syncretism where morphological cues can be highly ambiguous with respect to functional information. Statistical classifiers have been successfully used to label constituent structure parser output with grammatical function information (Blaheta and Charniak, 2000; Chrupała and Van Genabith, 2006). However, as these approaches tend to use only limited and local context information for learning and prediction, they often fail to enforce simple yet important global linguistic constraints that exist for most languages, e. g. that there will be at most one subject (object) per sentence/clause.1 “Hard” linguistic constraints, such as these, tend to affect mostly the “core grammatical functions”, i. e. the argument functions (rather than e. g. adjuncts) of a particular predicate. As these functions constitute the core meaning of a sentence (as in: who did what to whom), it is important to get them right. We present a system that adds grammatical function labels to constituent parser output for German in a postprocessing step. 
We combine a statistical classifier with an integer linear program (ILP) to model non-violable global linguistic constraints, restricting the solution space of the classifier to those labellings that comply with our set of global constraints. There are, of course, many other ways of including functional information into the output of a syntactic parser. Klein and Manning (2003) show that merging some linguistically motivated function labels with specific syntactic categories can improve the performance of a PCFG model on Penn-II En1Coordinate subjects/objects form a constituent that functions as a joint subject/object. 1087 glish data.2 Tsarfaty and Sim’aan (2008) present a statistical model (Relational-Realizational Parsing) that alternates between functional and configurational information for constituency tree parsing and Hebrew data. Dependency parsers like the MST parser (McDonald and Pereira, 2006) and Malt parser (Nivre et al., 2007) use function labels as core part of their underlying formalism. In this paper, we focus on phrase structure parsing with function labelling as a post-processing step. Integer linear programs have already been successfully used in related fields including semantic role labelling (Punyakanok et al., 2004), relation and entity classification (Roth and Yih, 2004), sentence compression (Clarke and Lapata, 2008) and dependency parsing (Martins et al., 2009). Early work on function labelling for German (Brants et al., 1997) reports 94.2% accuracy on gold data (a very early version of the TiGer Treebank (Brants et al., 2002)) using Markov models. Klenner (2007) uses a system similar to – but more restricted than – ours to label syntactic chunks derived from the TiGer Treebank. His research focusses on the correct selection of predefined subcategorisation frames for a verb (see also Klenner (2005)). By contrast, our research does not involve subcategorisation frames as an external resource, instead opting for a less knowledge-intensive approach. Klenner’s system was evaluated on gold treebank data and used a small set of 7 dependency labels. We show that an ILP-based approach can be scaled to a large and comprehensive set of 42 labels, achieving 97.99% label accuracy on gold standard trees. Furthermore, we apply the system to automatically parsed data using a state-ofthe-art statistical phrase-structure parser with a label accuracy of 94.10%. In both cases, the ILPbased approach improves the quality of argument function labelling when compared with a non-ILPapproach. Finally, we show that the approach substantially improves the quality and coverage (from 93.6% to 98.4%) of treebank-based LexicalFunctional Grammars for German over previous work in Rehbein and van Genabith (2009). The paper is structured as follows: Section 2 presents basic data demonstrating the challenges presented by German word order and case syncretism for the function labeller. Section 3 de2Table 6 shows that for our data a model with merged category and function labels (but without hard constraints!) performs slightly worse than the ILP approach developed in this paper. scribes the labeller including the feature model of the classifier and the integer linear program used to pick the correct labelling. 
The evaluation part (Section 4) is split into an intrinsic evaluation measuring the quality of the labelling directly using the German TiGer Treebank (Brants et al., 2002), and an extrinsic evaluation where we test the impact of the constraint-based labelling on treebankbased automatic LFG grammar acquisition. 2 Data Unlike English, German exhibits a relatively free word order, i. e. in main clauses, the verb occupies second position (the last position in subordinated clauses) and arguments and adjuncts can be placed (fairly) freely. The grammatical function of a noun phrase is marked morphologically on its constituting parts. Determiners, pronouns, adjectives and nouns carry case markings and in order to be well-formed, all parts of a noun phrase have to agree on their case features. German uses a nominative–accusative system to mark predicate arguments. Subjects are marked with nominative case, direct objects carry accusative case. Furthermore, indirect objects are mostly marked with dative case and sometimes genitive case. (1) Der L¨owe NOM the lion gibt gives dem Wolf DAT the wolf einen Besen. ACC a broom The lion gives a broom to the wolf. (1) shows a sentence containing the ditransitive verb geben (to give) with its three arguments. Here, the subject is unambiguously marked with nominative case (NOM), the indirect object with dative case (DAT) and the direct object with accusative case (ACC). (2) shows possible word orders for the arguments in this sentence.3 (2) Der L¨owe gibt einen Besen dem Wolf. Dem Wolf gibt der L¨owe einen Besen. Dem Wolf gibt einen Besen der L¨owe. Einen Besen gibt der L¨owe dem Wolf. Einen Besen gibt dem Wolf der L¨owe. Since all permutations of arguments are possible, there is no chance for a statistical classifier to decide on the correct function of a noun phrase by its position alone. Introducing adjuncts to this example makes matters even worse. 3Note that although (apart from the position of the finite verb) there are no syntactic restrictions on the word order, there are restrictions pertaining to phonological or information structure. 1088 Case information for a given noun phrase can give a classifier some clue about the correct argument function, since functions are strongly related to case values. Unfortunately, the German case system is complex (see Eisenberg (2006) for a thorough description) and exhibits a high degree of case syncretism. (3) shows a sentence where both argument NPs are ambiguous between nominative or accusative case. In such cases, additional semantic or contextual information is required for disambiguation. A statistical classifier (with access to local information only) runs a high risk of incorrectly classifying both NPs as subjects, or both as direct objects or even as nominal predicates (which are also required to carry nominative case). This would leave us with uninterpretable results. Uninterpretability of this kind can be avoided if we are able to constrain the number of subjects and objects globally to one per clause.4 (3) Das Schaf NOM/ACC the sheep sieht sees das M¨adchen. NOM/ACC the girl EITHER The sheep sees the girl OR The girl sees the sheep. 3 Grammatical Function Labelling Our function labeller was developed and tested on the TiGer Treebank (Brants et al., 2002). The TiGer Treebank is a phrase-structure and grammatical function annotated treebank with 50,000 newspaper sentences from the Frankfurter Rundschau (Release 2, July 2006). 
Its overall annotation scheme is quite flat to account for the relatively free word order of German and does not allow for unary branching. The annotations use non-projective trees modelling long distance dependencies directly by crossing branches. Words are lemmatised and part-of-speech tagged with the Stuttgart-T¨ubingen Tag Set (STTS) (Schiller et al., 1999) and contain morphological annotations (Release 2). TiGer uses 25 syntactic categories and a set of 42 function labels to annotate the grammatical function of a phrase. The function labeller consists of two main components, a maximum entropy classifier and an integer linear program. This basic architecture was introduced by Punyakanok et al. (2004) for the task of semantic role labelling and since then has been applied to different NLP tasks without significant changes. In our case, its input is a bare tree 4Although the classifier may, of course, still identify the wrong phrase as subject or object. structure (as obtained by a standard phrase structure parser) and it outputs a tree structure where every node is labelled with the grammatical relation it bears to its mother node. For each possible label and for each node, the classifier assigns a probability that this node is labelled by this label. This results in a complete probability distribution over all labels for each node. An integer linear program then tries to find the optimal overall tree labelling by picking for each node the label with the highest probability without violating any of its constraints. These constraints implement linguistic rules like the one-subject-per-sentence rule mentioned above. They can also be used to capture treebank particulars, such as for example that punctuation marks never receive a label. 3.1 The Feature Model Maximum entropy classifiers have been used in a wide range of applications in NLP for a long time (Berger et al., 1996; Ratnaparkhi, 1998). They usually give good results while at the same time allowing for the inclusion of arbitrarily complex features. They also have the advantage that they directly output probability distributions over their set of labels (unlike e. g. SVMs). The classifier uses the following features: • the lemma (if terminal node) • the category (the POS for terminal nodes) • the number of left/right sisters • the category of the two left/right sisters • the number of daughters • the number of terminals covered • the lemma of the left/right corner terminal • the category of the left/right corner terminal • the category of the mother node • the category of the mother’s head node • the lemma of the mother’s head node • the category of the grandmother node • the category of the grandmother’s head node • the lemma of the grandmother’s head node • the case features for noun phrases • the category for PP objects • the lemma for PP objects (if terminal node) These features are also computed for the head of the phrase, determined using a set of headfinding rules in the style of Magerman (1995) adapted to TiGer. For lemmatisation, we use TreeTagger (Schmid, 1994) and case features of noun 1089 phrases are obtained from a full German morphological analyser based on (Schiller, 1994). If a noun phrase consists of a single word (e. g. pronouns, but also bare common nouns and proper nouns), all case values output by the analyser are used to reflect the case syncretism. For multi-word noun phrases, the case feature is computed by taking the intersection of all case-bearing words inside the noun phrase, i. e. 
determiners, pronouns, adjectives, common nouns and proper nouns. If, for some reason (e.g., due to a bracketing error in phrase structure parsing), the intersection turns out to be empty, all four case values are assigned to the phrase.5 3.2 Constrained Optimisation In the second step, a binary integer linear program is used to select those labels that optimise the whole tree labelling. A linear program consists of a linear objective function that is to be maximised (or minimised) and a set of constraints which impose conditions on the variables of the objective function (see (Clarke and Lapata, 2008) for a short but readable introduction). Although solving a linear program has polynomial complexity, requiring the variables to be integral or binary makes finding a solution exponentially hard in the worst case. Fortunately, there are efficient algorithms which are capable of handling a large number of variables and constraints in practical applications.6 For the function labeller, we define the set of binary variables V = N × L to be the crossproduct of the set of nodes N and the set of labels L. Setting a variable xn,l to 1 means that node n is labelled by label l. Every variable is weighted by the probability wn,l = P(l|f(n)) which the classifier has assigned to this node-label combination. The objective function that we seek to optimise is defined as the sum over all weighted variables: max X n∈N X l∈L wn,lxn,l (4) Since we want every node to receive exactly one 5We decided to train the classifier on automatically assigned and possibly ambiguous morphological information instead of on the hand-annotated and manually disambiguated morphological information provided by TiGer because we want the classifier to learn the German case syncretism. This way, the classifier will perform better when presented with unseen data (e.g. from parser output) for which no hand-annotated morphological information is available. 6See lpsolve (http://lpsolve.sourceforge.net/) or GLPK (http://www.gnu.org/software/glpk/glpk.html) for opensource implementations label, we add a constraint that for every node n, exactly one of its variables is set to 1. X l∈L xn,l = 1 (5) Up to now, the whole system is doing exactly the same as an ordinary classifier that always takes the most probable label for each node. We will now add additional global and local linguistic constraints.7 The first and most important constraint restricts the number of each argument function (as opposed to modifier functions) to at most one per clause. Let D ⊂N × N be the direct dominance relation between the nodes of the current tree. For every node n with category S (sentence) or VP (verb phrase), at most one of its daughters is allowed to be labelled SB (subject). The single-subjectfunction condition is defined as: cat(n) ∈{S, V P} −→ X ⟨n,m⟩∈D xm,SB ≤1 (6) Identical constraints are added for labels OA, OA2, DA, OG, OP, PD, OC, EP.8 We add further constraints to capture the following linguistic restrictions: • Of all daughters of a phrase, only one is allowed to be labelled HD (head). X ⟨n,m⟩∈D xm,HD ≤1 (7) • If a noun phrase carries no case feature for nominative case, it cannot be labelled SB, PD or EP. case(n) ̸= nom −→ X l∈{SB,PD,EP} xn,l = 0 (8) • If a noun phrase carries no case feature for accusative case, it cannot be labelled OA or OA2. • If a noun phrase carries no case feature for dative case, it cannot be labelled DA. • If a noun phrase carries no case feature for genitive case, it cannot be labelled OG or AG9. 
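Taken together, equations (4)–(8) form a small binary ILP that an off-the-shelf solver can handle directly. The sketch below is not the implementation used in this paper (which relies on the open-source tools referenced in footnote 6); it merely restates the objective and the constraints above with the PuLP modelling library, and nodes, labels, weights, clause_nodes, daughters and case are hypothetical stand-ins for the node set N, the label set L, the classifier posteriors w(n,l), the S/VP nodes, the dominance relation D and the morphological case sets. It also assumes that labels such as SB, HD and AG are members of the label set passed in.

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

def label_tree(nodes, labels, weights, clause_nodes, daughters, case):
    """Pick one function label per node, maximising the classifier scores
    subject to the hard constraints of Section 3.2 (sketch only)."""
    prob = LpProblem("function_labelling", LpMaximize)
    x = {(n, l): LpVariable("x_%s_%s" % (n, l), cat=LpBinary)
         for n in nodes for l in labels}

    # Objective (4): sum of classifier probabilities over the chosen labels.
    prob += lpSum(weights[n][l] * x[n, l] for n in nodes for l in labels)

    # Constraint (5): exactly one label per node.
    for n in nodes:
        prob += lpSum(x[n, l] for l in labels) == 1

    # Constraint (6): at most one of each argument function per S/VP node.
    ARG_LABELS = ("SB", "OA", "OA2", "DA", "OG", "OP", "PD", "OC", "EP")
    for n in clause_nodes:                      # nodes with category S or VP
        for arg in ARG_LABELS:
            prob += lpSum(x[m, arg] for m in daughters[n]) <= 1

    # Constraint (7): at most one head daughter per phrase.
    for n, ds in daughters.items():
        if ds:
            prob += lpSum(x[m, "HD"] for m in ds) <= 1

    # Constraint (8) and its analogues: block labels incompatible with case.
    case_filters = {"nom": ("SB", "PD", "EP"), "acc": ("OA", "OA2"),
                    "dat": ("DA",), "gen": ("OG", "AG")}
    for n in case:                              # only noun-phrase nodes
        for c, blocked in case_filters.items():
            if c not in case[n]:
                prob += lpSum(x[n, l] for l in blocked) == 0

    prob.solve()
    return {n: next(l for l in labels if value(x[n, l]) > 0.5) for n in nodes}

For a single sentence the program has only |N| x |L| binary variables and a sparse constraint matrix, which the solvers mentioned in footnote 6 handle comfortably in practice.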
7Note that some of these constraints are language specific in that they represent linguistic facts about German and do not necessarily hold for other languages. Furthermore, the constraints are treebank specific to a certain degree in that they use a TiGer-specific set of labels and are conditioned on TiGer-specific configurations and categories. 8SB = subject, OA = accusative object, OA2 = second accusative object, DA = dative, OG = genitive object, OP = prepositional object, PD = predicate, OC = clausal object, EP = expletive es 9AG = genitive adjunct 1090 Unlike Klenner (2007), we do not use predefined subcategorization frames, instead letting the statistical model choose arguments. In TiGer, sentences whose main verbs are formed from auxiliary-participle combinations, are annotated by embedding the participle under an extra VP node and non-subject arguments are sisters to the participle. Therefore we add an extension of the constraint in (6) to the constraint set in order to also include the daughters of an embedded VP node in such a case. Because of the particulars of the annotation scheme of TiGer, we can decide some labels in advance. As mentioned before, punctuation does not get a label in TiGer. We set the label for those nodes to −−(no label). Other examples are: • If a node’s category is PTKVZ (separated verb particle), it is labeled SVP (separable verb particle). cat(n) = PTKV Z −→xn,SV P = 1 (9) • If a node’s category is APPR, APPRART, APPO or APZR (prepositions), it is labeled AC (adpositional case marker). • All daughters of an MTA node (multi-token adjective) are labeled ADC (adjective component). These constraints are conditioned on part-ofspeech tags and require high POS-tagging accuracy (when dealing with raw text). Due to the constraints imposed on the classification, the function labeller can no longer assign two subjects to the same S node. Faced with two nodes whose most probable label is SB, it has to decide on one of them taking the next best label for the other. This way, it outputs the optimal solution with respect to the set of constraints. Note that this requires the feature model not only to rank the correct label highest but also to provide a reasonable ranking of the other labels as well. 4 Evaluation We conducted a number of experiments using 1,866 sentences of the TiGer Dependency Bank (Forst et al., 2004) as our test set. The TiGerDB is a part of the TiGer Treebank semi-automatically converted into a dependency representation. We use the manually labelled TiGer trees corresponding to the sentences in the TiGerDB for assessing the labelling quality in the intrinsic evaluation, and the dependencies from TiGerDB for assessing the quality and coverage of the automatically acquired LFG resources in the extrinsic evaluation. In order to test on real parser output, the test set was parsed with the Berkeley Parser (Petrov et al., 2006) trained on 48k sentences of the TiGer corpus (Table 1), excluding the test set. Since the Berkeley Parser assumes projective structures, the training data and test data were made projective by raising non-projective nodes in the tree (K¨ubler, 2005). precision 83.60 recall 82.81 f-score 83.20 tagging acc. 97.97 Table 1: evalb unlabelled parsing scores on test set for Berkeley Parser trained on 48,000 sentences (sentence length ≤40) The maximum entropy classifier of the function labeller was trained on 46,473 sentences of the TiGer Treebank (excluding the test set) which yields about 1.2 million nodes as training samples. 
For training the Maximum Entropy Model, we used the BLMVM algorithm (Benson and More, 2001) with a width factor of 1.0 (Kazama and Tsujii, 2005) implemented in an open-source C++ library from Tsujii Laboratory.10 The integer linear program was solved with the simplex algorithm in combination with a branch-and-bound method using the freely available GLPK.11 4.1 Intrinsic Evaluation In the intrinsic evaluation, we measured the quality of the labelling itself. We used the node span evaluation method of (Blaheta and Charniak, 2000) which takes only those nodes into account which have been recognised correctly by the parser, i.e. if there are two nodes in the parse and the reference treebank tree which cover the same word span. Unlike Blaheta and Charniak (2000) however, we do not require the two nodes to carry the same syntactic category label.12 Table 2 shows the results of the node span evaluation. The labeller achieves close to 98% label accuracy on gold treebank trees which shows that the feature model captures the differences between the individual labels well. Results on parser output are about 4 percentage points (absolute) lower as parsing errors can distort local context features for the classifier even if the node itself has been parsed 10http://www-tsujii.is.s.u-tokyo.ac.jp/∼tsuruoka/maxent/ 11http://www.gnu.org/software/glpk/glpk.html 12We also excluded the root node, all punctuation marks and both nodes in unary branching sub-trees from evaluation. 1091 correctly. The addition of the ILP constraints improves results only slightly since the constraints affect only (a small number of) argument labels while the evaluation considers all 40 labels occurring in the test set. Since the constraints restrict the selection of certain labels, a less probable label has to be picked by the labeller if the most probable is not available. If the classifier is ranking labels sensibly, the correct label should emerge. However, with an incorrect ranking, the ILP constraints might also introduce new errors. label accuracy error red. without constraints gold 44689/45691 = 97.81% – parser 40578/43140 = 94.06% – with constraints gold 44773/45691 = 97.99%* 8.21% parser 40593/43140 = 94.10% 0.68% Table 2: label accuracy and error reduction (all labels) for node span evaluation, * statistically significant, sign test, α = 0.01 (Koo and Collins, 2005) As the main target of the constraint set are argument functions, we also tested the quality of argument labels. Table 3 shows the node span evaluation in terms of precision, recall and f-score for argument functions only, with clear statistically significant improvements. prec. rec. f-score without constraints gold standard 92.41 91.86 92.13 parser output 88.14 86.43 87.28 with constraints gold standard 94.31 92.76 93.53* parser output 89.51 86.73 88.09* Table 3: node span results for the test set, argument functions only (SB, EP, PD, OA, OA2, DA, OG, OP, OC), * statistically significant, sign test, α = 0.01 (Koo and Collins, 2005) For comparison and to establish a highly competitive baseline, we use the best-scoring system in (Chrupała and Van Genabith, 2006), trained and tested on exactly the same data sets. This purely statistical labeller achieves accuracy of 96.44% (gold) and 92.81% (parser) for all labels, and fscores of 89.88% (gold) and 84.98% (parser) for argument labels. Tables 2 and 3 show that our system (with and even without ILP constraints) comprehensively outperforms all corresponding baseline scores. 
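To make the evaluation protocol concrete, the following sketch shows one way to compute the label accuracies of Table 2; it is our reading of the Blaheta and Charniak (2000) node span metric as used here, not the evaluation code of the paper, and the tree interface (nodes carrying a word span, a function label and an exclusion flag for root, punctuation and unary nodes) is a hypothetical stand-in.

def labelled_spans(tree):
    """Map each node's word span (start, end) to its function label.
    Root, punctuation and unary-branching nodes are skipped, mirroring
    footnote 12; TiGer disallows unary branching, so spans are unique."""
    return {n.span: n.label for n in tree.nodes() if not n.is_excluded}

def node_span_label_accuracy(gold_trees, pred_trees):
    """Label accuracy over nodes whose span occurs in both trees, i.e.
    over nodes the parser has recognised correctly (category ignored)."""
    correct = total = 0
    for gold, pred in zip(gold_trees, pred_trees):
        g, p = labelled_spans(gold), labelled_spans(pred)
        for span in g.keys() & p.keys():
            total += 1
            correct += int(g[span] == p[span])
    return correct / total

Restricting both span maps to the argument labels and counting matched, gold and predicted entries in the same way yields the precision, recall and f-score figures reported in Table 3.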
The node span evaluation defines a correct labelling by taking only those nodes (in parser output) into account that have a corresponding node in the reference tree. However, as this restricts attention to correctly parsed nodes, the results are somewhat over-optimistic. Table 4 provides the results obtained from an evalb evaluation of the same data sets.13 The gold standard scores are high confirming our previous findings about the performance of the function labeller. However, the results on parser output are much worse. The evaluation scores are now taking the parsing quality into account (Table 1). The considerable drop in quality between gold trees and parser output clearly shows that a good parse tree is an important prerequisite for reasonable function labelling. This is in accordance with previous findings by Punyakanok et al. (2008) who emphasise the importance of syntactic parsing for the closely related task of semantic role labelling. prec. rec. f-score without constraints gold standard 95.94 95.94 95.94 parser output 76.27 75.55 75.91 with constraints gold standard 96.21 96.21 96.21 parser output 76.36 75.64 76.00 Table 4: evalb results for the test set 4.1.1 Subcategorisation Frames Early on in the paper we mention that, unlike e. g. Klenner (2007), we did not include predefined subcategorisation frames into the constraint set, but rather let the joint statistical and ILP models decide on the correct type of arguments assigned to a verb. The assumption is that if one uses predefined subcategorisation frames which fix the number and type of arguments for a verb, one runs the risk of excluding correct labellings due to missing subcat frames, unless a very comprehensive and high quality subcat lexicon resource is available. In order to test this assumption, we run an additional experiment with about 10,000 verb frames for 4,508 verbs, which were automatically extracted from our training section. Following Klenner (2007), for each verb and for each subcat frame for this verb attested at least once in the training data, we introduce a new binary variable fn to the ILP model representing the n-th frame (for the verb) weighted by its frequency. We add an ILP constraint requiring exactly one of the frames to be set to one (each verb has to have a subcat frame) and replace the ILP constraint in (6) by: 13Function labels were merged with the category symbols. 1092 X ⟨n,m⟩∈D xm,SB − X SB∈fi fi = 0 (10) This constraint requires the number of subjects in a phrase to be equal to the number of selected14 verb frames that require a subject. As each verb is constrained to “select” exactly one subcat frame (see additional ILP constraint above), there is at most one subject per phrase, if the frame in question requires a subject. If the selected frame does not require a subject, then the constraint blocks the assignment of subjects for the entire phrase. The same was done for the other argument functions and as before we included an extension of this constraint to cover embedded VPs. For unseen verbs (i.e. verbs not attested in the training set) we keep the original constraints as a back-off. prec. rec. f-score all labels (cmp. Table 2) gold standard 97.24 97.24 97.24 parser output 93.43 93.43 93.43 argument functions only (cmp. 
Table 3) gold standard 91.36 90.12 90.74 parser output 86.64 84.38 85.49 Table 5: node span results for the test set using constraints with automatically extracted subcat frames Table 5 shows the results of the test set node span evaluation when using the ILP system enhanced with subcat frames. Compared to Tables 2 and 3, the results are clearly inferior, and particularly so for argument grammatical functions. This seems to confirm our assumption that, given our data, letting the joint statistical and ILP model decide argument functions is superior to an approach that involves subcat frames. However, and importantly, our results do not rule out that a more comprehensive subcat frame resource may in fact result in improvements. 4.2 Extrinsic Evaluation Over the last number of years, treebank-based deep grammar acquisition has emerged as an attractive alternative to hand-crafting resources within the HPSG, CCG and LFG paradigms (Miyao et al., 2003; Clark and Hockenmaier, 2002; Cahill et al., 2004). While most of the initial development work focussed on English, more recently efforts have branched to other languages. Below we concentrate on LFG. 14The variable representing this frame has been set to 1. Lexical-Functional Grammar (Bresnan, 2001) is a constraint-based theory of grammar with minimally two levels of representation: c(onstituent)structure and f(unctional)-structure. C-structure (CFG trees) captures language specific surface configurations such as word order and the hierarchical grouping of words into phrases, while f-structure represents more abstract (and somewhat more language independent) grammatical relations (essentially bilexical labelled dependencies with some morphological and semantic information, approximating to basic predicate-argument structures) in the form of attribute-value structures. F-structures are defined in terms of equations annotated to nodes in c-structure trees (grammar rules). Treebank-based LFG acquisition was originally developed for English (Cahill, 2004; Cahill et al., 2008) and is based on an f-structure annotation algorithm that annotates c-structure trees (from a treebank or parser output) with f-structure equations, which are read off of the tree and passed on to a constraint solver producing an f-structure for the given sentence. The English annotation algorithm (for Penn-II treebank-style trees) relies heavily on configurational and categorial information, translating this into grammatical functional information (subject, object etc.) represented at f-structure. LFG is “functional” in the mathematical sense, in that argument grammatical functions have to be single valued (there cannot be two or more subjects etc. in the same clause). In fact, if two or more values are assigned to a single argument grammatical function in a local tree, the LFG constraint solver will produce a clash (i. e. it will fail to produce an f-structure) and the sentence will be considered ungrammatical (in other words, the corresponding c-structure tree will be uninterpretable). Rehbein (2009) and Rehbein and van Genabith (2009) develop an f-structure annotation algorithm for German based on the TiGer treebank resource. Unlike the English annotation algorithm and because of the language-particular properties of German (see Section 2), the German annotation algorithm cannot rely on c-structure configurational information, but instead heavily uses TiGer function labels in the treebank. 
Learning function labels is therefore crucial to the German LFG annotation algorithm, in particular when parsing raw text. Because of the strong case syncretism in German, traditional classification models using local 1093 information only run the risk of predicting multiple occurences of the same function (subject, object etc.) at the same level, causing feature clashes in the constraint solver with no f-structure being produced. Rehbein (2009) and Rehbein and van Genabith (2009) identify this as a major problem resulting in a considerable loss in coverage of the German annotation algorithm compared to English, in particular for parsing raw text, where TiGer function labels have to be supplied by a machine-learning-based method and where the coverage of the LFG annotation algorithm drops to 93.62% with corresponding drops in recall and f-scores for the f-structure evaluations (Table 6). Below we test whether the coverage problems caused by incorrect multiple assignments of grammatical functions can be addressed using the combination of classifier with ILP constraints developed in this paper. We report experiments where automatically parsed and labelled data are handed over to an LFG f-structure computation algorithm. The f-structures produced are converted into a dependency triple representation (Crouch et al., 2002) and evaluated against TiGerDB. cov. prec. rec. f-score upper bound 99.14 85.63 82.58 84.07 without constraints gold 95.82 84.71 76.68 80.49 parser 93.41 79.70 70.38 74.75 with constraints gold 99.30 84.62 82.15 83.37 parser 98.39 79.43 75.60 77.47 Rehbein 2009 parser 93.62 79.20 68.86 73.67 Table 6: f-structure evaluation results for the test set against TigerDB Table 6 shows the results of the f-structure evaluation against TiGerDB, with 84.07% f-score upper-bound results for the f-structure annotation algorithm on the original TiGer treebank trees with hand-annotated function labels. Using the function labeller without ILP constraints results in drastic drops in coverage (between 4.5% and 6.5% points absolute) and hence recall (6% and 12%) and f-score (3.5% and 9.5%) for both gold trees and parser output (compared to upper bounds). By contrast, with ILP constraints, the loss in coverage observed above almost completely disappears and recall and f-scores improve by between 4.4% and 5.5% (recall) and 3% (f-score) absolute (over without ILP constraints). For comparison, we repeated the experiment using the bestscoring method of Rehbein (2009). Rehbein trains the Berkeley Parser to learn an extended category set, merging TiGer function labels with syntactic categories, where the parser outputs fully-labelled trees. The results show that this approach suffers from the same drop in coverage as the classifier without ILP constraints, with recall about 7% and f-score about 4% (absolute) lower than for the classifier with ILP constraints. Table 7 shows the dramatic effect of the ILP constraints on the number of sentences in the test set that have multiple argument functions of the same type within the same clause. With ILP constraints, the problem disappears and therefore, less feature-clashes occur during f-structure computation. 
no constraints constraints gold 185 0 parser 212 0 Table 7: Number of sentences in the test set with doubly annotated argument functions In order to assess whether ILP constraints help with coverage only or whether they affect the quality of the f-structures as well, we repeat the experiment in Table 6, however this time evaluating only on those sentences that receive an f-structure, ignoring the rest. Table 8 shows that the impact of ILP constraints on quality is much less dramatic than on coverage, with only very small variations in precison, recall and f-scores across the board, and small increases over Rehbein (2009). cov. prec. rec. f-score no constr. 93.41 79.70 77.89 78.79 constraints 98.39 79.43 77.85 78.64 Rehbein 93.62 79.20 76.43 77.79 Table 8: f-structure evaluation results for parser output excluding sentences without f-structures Early work on automatic LFG acquisition and parsing for German is presented in Cahill et al. (2003) and Cahill (2004), adapting the English Annotation Algorithm to an earlier and smaller version of the TiGer treebank (without morphological information) and training a parser to learn merged Tiger function-category labels, and reporting 95.75% coverage and an f-score of 74.56% f-structure quality against 2,000 gold treebank trees automatically converted into f-structures. Rehbein (2009) uses the larger Release 2 of the treebank (with morphological information) reporting 77.79% f-score and coverage of 93.62% (Ta1094 ble 8) against the dependencies in the TiGerDB test set. The only rule-based approach to German LFG-parsing we are aware of is the hand-crafted German grammar in the ParGram Project (Butt et al., 2002). Forst (2007) reports 83.01% dependency f-score evaluated against a set of 1,497 sentences of the TiGerDB. It is very difficult to compare results across the board, as individual papers use (i) different versions of the treebank, (ii) different (sections of) gold-standards to evaluate against (gold TiGer trees in TigerDB, the dependency representations provided by TigerDB, automatically generated gold-standards etc.) and (iii) different label/grammatical function sets. Furthermore, (iv) coverage differs drastically (with the hand-crafted LFG resources achieving about 80% full f-structures) and finally, (v) some of the grammars evaluated having been used in the generation of the gold standards, possibly introducing a bias towards these resources: the German hand-crafted LFG was used to produce TiGerDB (Forst et al., 2004). In order to put the results into some perspective, Table 9 shows an evaluation of our resources against a set of automatically generated gold standard f-structures produced by using the f-structure annotation algorithm on the original hand-labelled TiGer gold trees in the section corresponding to TiGerDB: without ILP constraints we achieve a dependency f-score of 84.35%, with ILP constraints 87.23% and 98.89% coverage. cov. prec. rec. f-score without constraints gold 95.24 97.76 90.93 94.22 parser 93.35 88.71 80.40 84.35 with constraints gold 99.30 97.66 97.33 97.50 parser 98.89 88.37 86.12 87.23 Table 9: f-structure evaluation results for the test set against automatically generated goldstandard (1,850 sentences) 5 Conclusion In this paper, we addressed the problem of assigning grammatical functions to constituent structures. 
We have proposed an approach to grammatical function labelling that combines the flexibility of a statistical classifier with linguistic expert knowledge in the form of hard constraints implemented by an integer linear program. These constraints restrict the solution space of the classifier by blocking those solutions that cannot be correct. One of the strengths of an integer linear program is the unlimited context it can take into account by optimising over the entire structure, providing an elegant way of supporting classifiers with explicit linguistic knowledge while at the same time keeping feature models small and comprehensible. Most of the constraints are direct formalizations of linguistic generalizations for German. Our approach should generalise to other languages for which linguistic expertise is available. We evaluated our system on the TiGer corpus and the TiGerDB and gave results on gold standard trees and parser output. We also applied the German f-structure annotation algorithm to the automatically labelled data and evaluated the system by measuring the quality of the resulting f-structures. We found that by using the constraint set, the function labeller ensures the interpretability and thus the usefulness of the syntactic structure for a subsequently applied processing step. In our f-structure evaluation, that means, the f-structure computation algorithm is able to produce an f-structure for almost all sentences. Acknowledgements The first author would like to thank Gerlof Bouma for a lot of very helpful discussions. We would like to thank our anonymous reviewers for detailed and helpful comments. The research was supported by the Science Foundation Ireland SFI (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) and by DFG (German Research Foundation) through SFB 632 Potsdam-Berlin and SFB 732 Stuttgart. References Steven J. Benson and Jorge J. More. 2001. A limited memory variable metric method in subspaces and bound constrained optimization problems. Technical report, Argonne National Laboratory. Adam L. Berger, Vincent J.D. Pietra, and Stephen A.D. Pietra. 1996. A maximum entropy approach to natural language processing. Computational linguistics, 22(1):71. Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 234 – 240, Seattle, Washington. Morgan Kaufmann Publishers Inc. Thorsten Brants, Wojciech Skut, and Brigitte Krenn. 1997. Tagging grammatical functions. In Proceedings of EMNLP, volume 97, pages 64–74. 1095 Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, page 2441. Joan Bresnan. 2001. Lexical-Functional Syntax. Blackwell Publishers. Miriam Butt, Helge Dyvik, Tracy Halloway King, Hiroshi Masuichi, and Christian Rohrer. 2002. The parallel grammar project. In COLING-02 on Grammar engineering and evaluation-Volume 15, volume pages, page 7. Association for Computational Linguistics. Aoife Cahill, Martin Forst, Mairead McCarthy, Ruth ODonovan, Christian Rohrer, Josef van Genabith, and Andy Way. 2003. Treebank-based multilingual unification-grammar development. In Proceedings of the Workshop on Ideas and Strategies for Multilingual Grammar Development at the 15th ESSLLI, page 1724. Aoife Cahill, Michael Burke, Ruth O’Donovan, Josef van Genabith, and Andy Way. 2004. 
Longdistance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations. Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics - ACL ’04, pages 319–es. Aoife Cahill, Michael Burke, Ruth O’Donovan, Stefan Riezler, Josef van Genabith, and Andy Way. 2008. Wide-Coverage Deep Statistical Parsing Using Automatic Dependency Structure Annotation. Computational Linguistics, 34(1):81–124, M¨arz. Aoife Cahill. 2004. Parsing with Automatically Acquired, Wide-Coverage, Robust, Probabilistic LFG Approximations. Ph.D. thesis, Dublin City University. Grzegorz Chrupała and Josef Van Genabith. 2006. Using machine-learning to assign function labels to parser output for Spanish. In Proceedings of the COLING/ACL main conference poster session, page 136143, Sydney. Association for Computational Linguistics. Stephen Clark and Judith Hockenmaier. 2002. Evaluating a wide-coverage CCG parser. In Proceedings of the LREC 2002, pages 60–66. James Clarke and Mirella Lapata. 2008. Global inference for sentence compression an integer linear programming approach. Journal of Artificial Intelligence Research, 31:399–429. Richard Crouch, Ronald M. Kaplan, Tracy Halloway King, and Stefan Riezler. 2002. A comparison of evaluation metrics for a broad-coverage stochastic parser. In Proceedings of LREC 2002 Workshop, pages 67–74, Las Palmas, Canary Islands, Spain. Peter Eisenberg. 2006. Grundriss der deutschen Grammatik: Das Wort. J.B. Metzler, Stuttgart, 3 edition. Martin Forst, N´uria Bertomeu, Berthold Crysmann, Frederik Fouvry, Silvia Hansen-Shirra, and Valia Kordoni. 2004. Towards a dependency-based gold standard for German parsers The TiGer Dependency Bank. In Proceedings of the COLING Workshop on Linguistically Interpreted Corpora (LINC ’04), Geneva, Switzerland. Martin Forst. 2007. Filling Statistics with Linguistics Property Design for the Disambiguation of German LFG Parses. In Proceedings of ACL 2007. Association for Computational Linguistics. Jun’Ichi Kazama and Jun’Ichi Tsujii. 2005. Maximum entropy models with inequality constraints: A case study on text categorization. Machine Learning, 60(1):159194. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL 2003, pages 423–430, Morristown, NJ, USA. Association for Computational Linguistics. Manfred Klenner. 2005. Extracting Predicate Structures from Parse Trees. In Proceedings of the RANLP 2005. Manfred Klenner. 2007. Shallow dependency labeling. In Proceedings of the ACL 2007 Demo and Poster Sessions, page 201204, Prague. Association for Computational Linguistics. Terry Koo and Michael Collins. 2005. Hiddenvariable models for discriminative reranking. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing - HLT ’05, pages 507–514, Morristown, NJ, USA. Association for Computational Linguistics. Sandra K¨ubler. 2005. How Do Treebank Annotation Schemes Influence Parsing Results? Or How Not to Compare Apples And Oranges. In Proceedings of RANLP 2005, Borovets, Bulgaria. David M. Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics, page 276283, Morristown, NJ, USA. Association for Computational Linguistics Morristown, NJ, USA. Andr´e F. T. Martins, Noah A. Smith, and Eric P. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In Proceedings of ACL 2009. 
Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, volume 6. Yusuke Miyao, Takashi Ninomiya, and Jun’ichi Tsujii. 2003. Probabilistic modeling of argument structures including non-local dependencies. In Proceedings of the Conference on Recent Advances in Natural Language Processing RANLP 2003, volume 2. 1096 Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95–135, Januar. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL - ACL ’06, pages 433–440, Morristown, NJ, USA. Association for Computational Linguistics. Vasin Punyakanok, Wen-Tau Yih, Dan Roth, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of the 20th international conference on Computational Linguistics - COLING ’04, Morristown, NJ, USA. Association for Computational Linguistics. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The Importance of Syntactic Parsing and Inference in Semantic Role Labeling. Computational Linguistics, 34(2):257–287, Juni. Adwait Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania. Ines Rehbein and Josef van Genabith. 2009. Automatic Acquisition of LFG Resources for GermanAs Good as it gets. In Miriam Butt and Tracy Holloway King, editors, Proceedings of LFG Conference 2009. CSLI Publications. Ines Rehbein. 2009. Treebank-based grammar acquisition for German. Ph.D. thesis, Dublin City University. Dan Roth and Wen-Tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNNL 2004. Anne Schiller, Simone Teufel, and Christine St¨ockert. 1999. Guidelines f¨ur das Tagging deutscher Textcorpora mit STTS (Kleines und großes Tagset). Technical Report August, Universit¨at Stuttgart. Anne Schiller. 1994. Dmor - user’s guide. Technical report, University of Stuttgart. Helmut Schmid. 1994. Probabilistic Part-of-Speech Tagging Using Decision Trees. In Proceedings of International Conference on New Methods in Language Processing, volume 12. Manchester, UK. Reut Tsarfaty and Khalil Sima’an. 2008. Relationalrealizational parsing. In Proceedings of the 22nd International Conference on Computational Linguistics - COLING ’08, pages 889–896, Morristown, NJ, USA. Association for Computational Linguistics. 1097
2010
111
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1098–1107, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Simple, Accurate Parsing with an All-Fragments Grammar Mohit Bansal and Dan Klein Computer Science Division University of California, Berkeley {mbansal, klein}@cs.berkeley.edu Abstract We present a simple but accurate parser which exploits both large tree fragments and symbol refinement. We parse with all fragments of the training set, in contrast to much recent work on tree selection in data-oriented parsing and treesubstitution grammar learning. We require only simple, deterministic grammar symbol refinement, in contrast to recent work on latent symbol refinement. Moreover, our parser requires no explicit lexicon machinery, instead parsing input sentences as character streams. Despite its simplicity, our parser achieves accuracies of over 88% F1 on the standard English WSJ task, which is competitive with substantially more complicated state-of-theart lexicalized and latent-variable parsers. Additional specific contributions center on making implicit all-fragments parsing efficient, including a coarse-to-fine inference scheme and a new graph encoding. 1 Introduction Modern NLP systems have increasingly used dataintensive models that capture many or even all substructures from the training data. In the domain of syntactic parsing, the idea that all training fragments1 might be relevant to parsing has a long history, including tree-substitution grammar (data-oriented parsing) approaches (Scha, 1990; Bod, 1993; Goodman, 1996a; Chiang, 2003) and tree kernel approaches (Collins and Duffy, 2002). For machine translation, the key modern advancement has been the ability to represent and memorize large training substructures, be it in contiguous phrases (Koehn et al., 2003) or syntactic trees 1In this paper, a fragment means an elementary tree in a tree-substitution grammar, while a subtree means a fragment that bottoms out in terminals. (Galley et al., 2004; Chiang, 2005; Deneefe and Knight, 2009). In all such systems, a central challenge is efficiency: there are generally a combinatorial number of substructures in the training data, and it is impractical to explicitly extract them all. On both efficiency and statistical grounds, much recent TSG work has focused on fragment selection (Zuidema, 2007; Cohn et al., 2009; Post and Gildea, 2009). At the same time, many high-performance parsers have focused on symbol refinement approaches, wherein PCFG independence assumptions are weakened not by increasing rule sizes but by subdividing coarse treebank symbols into many subcategories either using structural annotation (Johnson, 1998; Klein and Manning, 2003) or lexicalization (Collins, 1999; Charniak, 2000). Indeed, a recent trend has shown high accuracies from models which are dedicated to inducing such subcategories (Henderson, 2004; Matsuzaki et al., 2005; Petrov et al., 2006). In this paper, we present a simplified parser which combines the two basic ideas, using both large fragments and symbol refinement, to provide non-local and local context respectively. The two approaches turn out to be highly complementary; even the simplest (deterministic) symbol refinement and a basic use of an all-fragments grammar combine to give accuracies substantially above recent work on treesubstitution grammar based parsers and approaching top refinement-based parsers. 
For example, our best result on the English WSJ task is an F1 of over 88%, where recent TSG parsers2 achieve 82-84% and top refinement-based parsers3 achieve 88-90% (e.g., Table 5). Rather than select fragments, we use a simplification of the PCFG-reduction of DOP (Goodman, 2Zuidema (2007), Cohn et al. (2009), Post and Gildea (2009). Zuidema (2007) incorporates deterministic refinements inspired by Klein and Manning (2003). 3Including Collins (1999), Charniak and Johnson (2005), Petrov and Klein (2007). 1098 1996a) to work with all fragments. This reduction is a flexible, implicit representation of the fragments that, rather than extracting an intractably large grammar over fragment types, indexes all nodes in the training treebank and uses a compact grammar over indexed node tokens. This indexed grammar, when appropriately marginalized, is equivalent to one in which all fragments are explicitly extracted. Our work is the first to apply this reduction to full-scale parsing. In this direction, we present a coarse-to-fine inference scheme and a compact graph encoding of the training set, which, together, make parsing manageable. This tractability allows us to avoid selection of fragments, and work with all fragments. Of course, having a grammar that includes all training substructures is only desirable to the extent that those structures can be appropriately weighted. Implicit representations like those used here do not allow arbitrary weightings of fragments. However, we use a simple weighting scheme which does decompose appropriately over the implicit encoding, and which is flexible enough to allow weights to depend not only on frequency but also on fragment size, node patterns, and certain lexical properties. Similar ideas have been explored in Bod (2001), Collins and Duffy (2002), and Goodman (2003). Our model empirically affirms the effectiveness of such a flexible weighting scheme in full-scale experiments. We also investigate parsing without an explicit lexicon. The all-fragments approach has the advantage that parsing down to the character level requires no special treatment; we show that an explicit lexicon is not needed when sentences are considered as strings of characters rather than words. This avoids the need for complex unknown word models and other specialized lexical resources. The main contribution of this work is to show practical, tractable methods for working with an all-fragments model, without an explicit lexicon. In the parsing case, the central result is that accuracies in the range of state-of-the-art parsers (i.e., over 88% F1 on English WSJ) can be obtained with no sampling, no latent-variable modeling, no smoothing, and even no explicit lexicon (hence negligible training overall). These techniques, however, are not limited to the case of monolingual parsing, offering extensions to models of machine translation, semantic interpretation, and other areas in which a similar tension exists between the desire to extract many large structures and the computational cost of doing so. 2 Representation of Implicit Grammars 2.1 All-Fragments Grammars We consider an all-fragments grammar G (see Figure 1(a)) derived from a binarized treebank B. G is formally a tree-substitution grammar (Resnik, 1992; Bod, 1993) wherein each subgraph of each training tree in B is an elementary tree, or fragment f, in G. In G, each derivation d is a tree (multiset) of fragments (Figure 1(c)), and the weight of the derivation is the product of the weights of the fragments: ω(d) = Q f∈d ω(f). 
In the following, the derivation weights, when normalized over a given sentence s, are interpretable as conditional probabilities, so G induces distributions of the form P(d|s). In models like G, many derivations will generally correspond to the same unsegmented tree, and the parsing task is to find the tree whose sum of derivation weights is highest: tmax = arg maxt P d∈t ω(d). This final optimization is intractable in a way that is orthogonal to this paper (Sima’an, 1996); we describe minimum Bayes risk approximations in Section 4. 2.2 Implicit Representation of G Explicitly extracting all fragment-rules of a grammar G is memory and space intensive, and impractical for full-size treebanks. As a tractable alternative, we consider an implicit grammar GI (see Figure 1(b)) that has the same posterior probabilities as G. To construct GI, we use a simplification of the PCFG-reduction of DOP by Goodman (1996a).4 GI has base symbols, which are the symbol types from the original treebank, as well as indexed symbols, which are obtained by assigning a unique index to each node token in the training treebank. The vast majority of symbols in GI are therefore indexed symbols. While it may seem that such grammars will be overly large, they are in fact reasonably compact, being linear in the treebank size B, while G is exponential in the length of a sentence. In particular, we found that GI was smaller than explicit extraction of all depth 1 and 2 unbinarized fragments for our 4The difference is that Goodman (1996a) collapses our BEGIN and END rules into the binary productions, giving a larger grammar which is less convenient for weighting. 1099 ! SYMBOLS: X, for all types in treebank " RULES: Xĺ#, for all fragments in " ! $ SYMBOLS: ŹBase: X for all types in treebank " ŹIndexed: Xi for all tokens of X in " RULES: ŹBegin: ;ĺ;i for all Xi in " ŹContinue: Xiĺ<j Zk for all rule-tokens in " ŹEnd: Xi ĺ;IRUDOO;i in " %$ FRAGMENTS DERIVATIONS (a) (b) GRAMMAR % #$ A X Al CONTINUE END Xi Zk Yj BEGIN B Bm C Cn # A X Z Y B C C B A X words X C B A words EXPLICIT IMPLICIT MAP ʌ Figure 1: Grammar definition and sample derivations and fragments in the grammar for (a) the explicitly extracted all-fragments grammar G, and (b) its implicit representation GI. treebanks – in practice, even just the raw treebank grammar grows almost linearly in the size of B.5 There are 3 kinds of rules in GI, which are illustrated in Figure 1(d). The BEGIN rules transition from a base symbol to an indexed symbol and represent the beginning of a fragment from G. The CONTINUE rules use only indexed symbols and correspond to specific depth-1 binary fragment tokens from training trees, representing the internal continuation of a fragment in G. Finally, END rules transition from an indexed symbol to a base symbol, representing the frontier of a fragment. By construction, all derivations in GI will segment, as shown in Figure 1(d), into regions corresponding to tokens of fragments from the training treebank B. Let π be the map which takes appropriate fragments in GI (those that begin and end with base symbols and otherwise contain only indexed symbols), and maps them to the corresponding f in G. We can consider any derivation dI in GI to be a tree of fragments fI, each fragment a token of a fragment type f = π(fI) in the original grammar G. By extension, we can therefore map any derivation dI in GI to the corresponding derivation d = π(dI) in G. 
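As a sketch of how the indexed-symbol reduction can be enumerated in practice, the following assumes the same nested-tuple tree encoding as above and invents an 'X@i' spelling for indexed symbols; it is meant only to illustrate the three rule types, not to reproduce the authors' implementation.

```python
from collections import Counter
from itertools import count

def implicit_grammar(treebank):
    """Enumerate the three GI rule types over indexed node tokens.
    Trees are nested tuples (label, child, ...); bare strings are words;
    indexed symbols are written 'VP@17' purely for illustration."""
    begin, cont, end = Counter(), Counter(), Counter()
    fresh = count()

    def walk(node):
        label, children = node[0], node[1:]
        idx = "%s@%d" % (label, next(fresh))
        begin[(label, idx)] += 1                     # BEGIN:    X  -> Xi
        end[(idx, label)] += 1                       # END:      Xi -> X
        kids = tuple(c if isinstance(c, str) else walk(c) for c in children)
        cont[(idx, kids)] += 1                       # CONTINUE: Xi -> Yj Zk (or Xi -> word)
        return idx

    for tree in treebank:
        walk(tree)
    return begin, cont, end

toy = ("S", ("NP", "we"), ("VP", "ran"))
b, c, e = implicit_grammar([toy])   # 3 BEGIN, 3 CONTINUE and 3 END rule tokens
```

Note that the number of rule tokens produced this way stays linear in the size of the treebank, matching the compactness argument above.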
The mapping π is an onto mapping from GI to 5Just half the training set (19916 trees) itself had 1.7 million depth 1 and 2 unbinarized rules compared to the 0.9 million indexed symbols in GI (after graph packing). Even extracting binarized fragments (depth 1 and 2, with one order of parent annotation) gives us 0.75 million rules, and, practically, we would need fragments of greater depth. G. In particular, each derivation d in G has a nonempty set of corresponding derivations {dI} = π−1(d) in GI, because fragments f in d correspond to multiple fragments fI in GI that differ only in their indexed symbols (one fI per occurrence of f in B). Therefore, the set of derivations in G is preserved in GI. We now discuss how weights can be preserved under π. 2.3 Equivalence for Weighted Grammars In general, arbitrary weight functions ω on fragments in G do not decompose along the increased locality of GI. However, we now consider a usefully broad class of weighting schemes for which the posterior probabilities under G of derivations d are preserved in GI. In particular, assume that we have a weighting ω on rules in GI which does not depend on the specific indices used. Therefore, any fragment fI will have a weight in GI of the form: ωI(fI) = ωBEGIN(b) Y r∈C ωCONT(r) Y e∈E ωEND(e) where b is the BEGIN rule, r are CONTINUE rules, and e are END rules in the fragment fI (see Figure 1(d)). Because ω is assumed to not depend on the specific indices, all fI which correspond to the same f under π will have the same weight ωI(f) in GI. In this case, we can define an induced weight 1100 Xi BEGIN A X Al CONTINUE END Zk Yj Bm word DOP1 MIN-FRAGMENTS OUR MODEL ! ! " #$ ! "%#$%! ! ! CONTINUE RULE TYPES WEIGHTS Figure 2: Rules defined for grammar GI and weight schema for the DOP1 model, the Min-Fragments model (Goodman (2003)) and our model. Here s(X) denotes the total number of fragments rooted at base symbol X. for fragments f in G by ωG(f) = X fI∈π−1(f) ωI(fI) = n(f)ωI(f) = n(f)ωBEGIN(b′) Y r′∈C ωCONT(r′) Y e′∈E ωEND(e′) where now b′, r′ and e′ are non-indexed type abstractions of f’s member productions in GI and n(f) = |π−1(f)| is the number of tokens of f in B. Under the weight function ωG(f), any derivation d in G will have weight which obeys ωG(d) = Y f∈d ωG(f) = Y f∈d n(f)ωI(f) = X dI∈d ωI(dI) and so the posterior P(d|s) of a derivation d for a sentence s will be the same whether computed in G or GI. Therefore, provided our weighting function on fragments f in G decomposes over the derivational representation of f in GI, we can equivalently compute the quantities we need for inference (see Section 4) using GI instead. 3 Parameterization of Implicit Grammars 3.1 Classical DOP1 The original data-oriented parsing model ‘DOP1’ (Bod, 1993) is a particular instance of the general weighting scheme which decomposes appropriately over the implicit encoding, described in Section 2.3. Figure 2 shows rule weights for DOP1 in the parameter schema we have defined. The END rule weight is 0 or 1 depending on whether A is an intermediate symbol or not.6 The local fragments in DOP1 were flat (non-binary) so this weight choice simulates that property by not allowing switching between fragments at intermediate symbols. The original DOP1 model weights a fragment f in G as ωG(f) = n(f)/s(X), i.e., the frequency of fragment f divided by the number of fragments rooted at base symbol X. 
This is simulated by our weight choices (Figure 2) where each fragment fI in GI has weight ωI(fI) = 1/s(X) and therefore, ωG(f) = P fI∈π−1(f) ωI(fI) = n(f)/s(X). Given the weights used for DOP1, the recursive formula for the number of fragments s(Xi) rooted at indexed symbol Xi (and for the CONTINUE rule Xi →Yj Zk) is s(Xi) = (1 + s(Yj))(1 + s(Zk)), (1) where s(Yj) and s(Zk) are the number of fragments rooted at indexed symbols Yj and Zk (nonintermediate) respectively. The number of fragments s(X) rooted at base symbol X is then s(X) = P Xi s(Xi). Implicitly parsing with the full DOP1 model (no sampling of fragments) using the weights in Figure 2 gives a 68% parsing accuracy on the WSJ dev-set.7 This result indicates that the weight of a fragment should depend on more than just its frequency. 3.2 Better Parameterization As has been pointed out in the literature, largefragment grammars can benefit from weights of fragments depending not only on their frequency but also on other properties. For example, Bod (2001) restricts the size and number of words in the frontier of the fragments, and Collins and Duffy (2002) and Goodman (2003) both give larger fragments smaller weights. Our model can incorporate both size and lexical properties. In particular, we set ωCONT(r) for each binary CONTINUE rule r to a learned constant ωBODY, and we set the weight for each rule with a POS parent to a 6Intermediate symbols are those created during binarization. 7For DOP1 experiments, we use no symbol refinement. We annotate with full left binarization history to imitate the flat nature of fragments in DOP1. We use mild coarse-pass pruning (Section 4.1) without which the basic all-fragments chart does not fit in memory. Standard WSJ treebank splits used: sec 2-21 training, 22 dev, 23 test. 1101 Rule score: r(A →B C, i, k, j) = P x P y P z O(Ax, i, j)ω(Ax →By Cz)I(By, i, k)I(Cz, k, j) Max-Constituent: q(A, i, j) = P x O(Ax,i,j)I(Ax,i,j) P r I(rootr,0,n) tmax = argmax t P c∈t q(c) Max-Rule-Sum: q(A →B C, i, k, j) = r(A→B C,i,k,j) P r I(rootr,0,n) tmax = argmax t P e∈t q(e) Max-Variational: q(A →B C, i, k, j) = r(A→B C,i,k,j) P x O(Ax,i,j)I(Ax,i,j) tmax = argmax t Q e∈t q(e) Figure 3: Inference: Different objectives for parsing with posteriors. A, B, C are base symbols, Ax, By, Cz are indexed symbols and i,j,k are between-word indices. Hence, (Ax, i, j) represents a constituent labeled with Ax spanning words i to j. I(Ax, i, j) and O(Ax, i, j) denote the inside and outside scores of this constituent, respectively. For brevity, we write c ≡(A, i, j) and e ≡(A →B C, i, k, j). Also, tmax is the highest scoring parse. Adapted from Petrov and Klein (2007). constant ωLEX (see Figure 2). Fractional values of these parameters allow the weight of a fragment to depend on its size and lexical properties. Another parameter we introduce is a ‘switching-penalty’ csp for the END rules (Figure 2). The DOP1 model uses binary values (0 if symbol is intermediate, 1 otherwise) as the END rule weight, which is equivalent to prohibiting fragment switching at intermediate symbols. We learn a fractional constant asp that allows (but penalizes) switching between fragments at annotated symbols through the formulation csp(Xintermediate) = 1 −asp and csp(Xnon−intermediate) = 1 + asp. This feature allows fragments to be assigned weights based on the binarization status of their nodes. 
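A small sketch of the DOP1 bookkeeping described in Section 3.1: it computes s(Xi) with the recursion in equation (1) and aggregates s(X) to obtain the BEGIN-rule weights 1/s(X) of Figure 2. For brevity it ignores the intermediate-symbol restriction (END weight 0) and again uses the illustrative nested-tuple trees; the text next generalizes this recursion to the fractional weights just introduced.

```python
def fragment_count(node, per_symbol):
    """s(Xi) for one node token via equation (1): s(Xi) = (1 + s(Yj)) (1 + s(Zk)).
    Trees are nested tuples (label, child, ...); bare strings are words."""
    total = 1
    for child in node[1:]:
        total *= 1 if isinstance(child, str) else 1 + fragment_count(child, per_symbol)
    per_symbol[node[0]] = per_symbol.get(node[0], 0) + total   # s(X) = sum over tokens Xi
    return total

def dop1_begin_weights(treebank):
    """DOP1 gives each fragment token weight 1/s(X), so the BEGIN rule X -> Xi
    simply carries that normalizer (cf. Figure 2)."""
    per_symbol = {}
    for tree in treebank:
        fragment_count(tree, per_symbol)
    return {x: 1.0 / sx for x, sx in per_symbol.items()}

toy = ("S", ("NP", "she"), ("VP", ("V", "saw"), ("NP", "it")))
print(dop1_begin_weights([toy]))   # s(S)=10, s(VP)=4, s(NP)=2, s(V)=1
```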
With the above weights, the recursive formula for s(Xi), the total weighted number of fragments rooted at indexed symbol Xi, is different from DOP1 (Equation 1). For rule Xi →Yj Zk, it is s(Xi) = ωBODY.(csp(Yj)+s(Yj))(csp(Zk)+s(Zk)). The formula uses ωLEX in place of ωBODY if r is a lexical rule (Figure 2). The resulting grammar is primarily parameterized by the training treebank B. However, each setting of the hyperparameters (ωBODY, ωLEX, asp) defines a different conditional distribution on trees. We choose amongst these distributions by directly optimizing parsing F1 on our development set. Because this objective is not easily differentiated, we simply perform a grid search on the three hyperparameters. The tuned values are ωBODY = 0.35, ωLEX = 0.25 and asp = 0.018. For generalization to a larger parameter space, we would of course need to switch to a learning approach that scales more gracefully in the number of tunable hyperparameters.8 8Note that there has been a long history of DOP estimators. The generative DOP1 model was shown to be inconsisdev (≤40) test (≤40) test (all) Model F1 EX F1 EX F1 EX Constituent 88.4 33.7 88.5 33.0 87.6 30.8 Rule-Sum 88.2 34.6 88.3 33.8 87.4 31.6 Variational 87.7 34.4 87.7 33.9 86.9 31.6 Table 1: All-fragments WSJ results (accuracy F1 and exact match EX) for the constituent, rule-sum and variational objectives, using parent annotation and one level of markovization. 4 Efficient Inference The previously described implicit grammar GI defines a posterior distribution P(dI|s) over a sentence s via a large, indexed PCFG. This distribution has the property that, when marginalized, it is equivalent to a posterior distribution P(d|s) over derivations in the correspondingly-weighted all-fragments grammar G. However, even with an explicit representation of G, we would not be able to tractably compute the parse that maximizes P(t|s) = P d∈t P(d|s) = P dI∈t P(dI|s) (Sima’an, 1996). We therefore approximately maximize over trees by computing various existing approximations to P(t|s) (Figure 3). Goodman (1996b), Petrov and Klein (2007), and Matsuzaki et al. (2005) describe the details of constituent, rule-sum and variational objectives respectively. Note that all inference methods depend on the posterior P(t|s) only through marginal expectations of labeled constituent counts and anchored local binary tree counts, which are easily computed from P(dI|s) and equivalent to those from P(d|s). Therefore, no additional approximations are made in GI over G. As shown in Table 1, our model (an allfragments grammar with the weighting scheme tent by Johnson (2002). Later, Zollmann and Sima’an (2005) presented a statistically consistent estimator, with the basic insight of optimizing on a held-out set. Our estimator is not intended to be viewed as a generative model of trees at all, but simply a loss-minimizing conditional distribution within our parametric family. 1102 shown in Figure 2) achieves an accuracy of 88.5% (using simple parent annotation) which is 4-5% (absolute) better than the recent TSG work (Zuidema, 2007; Cohn et al., 2009; Post and Gildea, 2009) and also approaches state-of-theart refinement-based parsers (e.g., Charniak and Johnson (2005), Petrov and Klein (2007)).9 4.1 Coarse-to-Fine Inference Coarse-to-fine inference is a well-established way to accelerate parsing. Charniak et al. (2006) introduced multi-level coarse-to-fine parsing, which extends the basic pre-parsing idea by adding more rounds of pruning. 
Their pruning grammars were coarse versions of the raw treebank grammar. Petrov and Klein (2007) propose a multistage coarse-to-fine method in which they construct a sequence of increasingly refined grammars, reparsing with each refinement. In particular, in their approach, which we adopt here, coarse-to-fine pruning is used to quickly compute approximate marginals, which are then used to prune subsequent search. The key challenge in coarse-to-fine inference is the construction of coarse models which are much smaller than the target model, yet whose posterior marginals are close enough to prune with safely. Our grammar GI has a very large number of indexed symbols, so we use a coarse pass to prune away their unindexed abstractions. The simple, intuitive, and effective choice for such a coarse grammar GC is a minimal PCFG grammar composed of the base treebank symbols X and the minimal depth-1 binary rules X → Y Z (and with the same level of annotation as in the full grammar). If a particular base symbol X is pruned by the coarse pass for a particular span (i, j) (i.e., the posterior marginal P(X, i, j|s) is less than a certain threshold), then in the full grammar GI, we do not allow building any indexed symbol Xl of type X for that span. Hence, the projection map for the coarse-to-fine model is πC : Xl (indexed symbol) → X (base symbol). We achieve a substantial improvement in speed and memory usage from the coarse-pass pruning. Speed increases by a factor of 40 and memory usage decreases by a factor of 10 when we go from no pruning to pruning with a −6.2 log posterior threshold.10 Figure 4 depicts the variation in parsing accuracies in response to the amount of pruning done by the coarse pass. Higher posterior pruning thresholds induce more aggressive pruning. Here, we observe an effect seen in previous work (Charniak et al. (1998), Petrov and Klein (2007), Petrov et al. (2008)), that a certain amount of pruning helps accuracy, perhaps by promoting agreement between the coarse and full grammars (model intersection). However, these 'fortuitous' search errors give only a small improvement and the peak accuracy is almost equal to the parsing accuracy without any pruning (as seen in Figure 5).11 This outcome suggests that the coarse-pass pruning is critical for tractability but not for performance.
9All our experiments use the constituent objective except when we report results for max-rule-sum and max-variational parsing (where we use the parameters tuned for max-constituent, therefore they unsurprisingly do not perform as well as max-constituent). Evaluations use EVALB, see http://nlp.cs.nyu.edu/evalb/.
Figure 4: Effect of coarse-pass pruning on parsing accuracy (for WSJ dev-set, ≤40 words). Pruning increases to the left as log posterior threshold (PT) increases.
Figure 5: Effect of coarse-pass pruning on parsing accuracy (WSJ, training ≤20 words, tested on dev-set ≤20 words). This graph shows that the fortuitous improvement due to pruning is very small and that the peak accuracy is almost equal to the accuracy without pruning (the dotted line).
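A minimal sketch of the coarse-pass pruning decision described above; the toy posterior values and the 'X@i' indexed-symbol spelling are illustrative assumptions, the −6.2 default simply mirrors the threshold quoted in the text, and in the real system the coarse marginals come from the minimal base PCFG.

```python
import math

def allowed_spans(coarse_posteriors, log_threshold=-6.2):
    """Keep a base-symbol/span triple only if its coarse-pass posterior clears the
    threshold.  `coarse_posteriors` maps (X, i, j) -> P(X, i, j | s)."""
    keep = set()
    for (sym, i, j), p in coarse_posteriors.items():
        if p > 0 and math.log(p) > log_threshold:
            keep.add((sym, i, j))
    return keep

def may_build(indexed_symbol, i, j, keep):
    """Fine-pass test: an indexed symbol Xl may be built over span (i, j) only if its
    unindexed projection X survived the coarse pass (the map pi_C: Xl -> X)."""
    base = indexed_symbol.split("@", 1)[0]     # 'VP@17' -> 'VP' (illustrative encoding)
    return (base, i, j) in keep

# toy usage with made-up posteriors for a 3-word sentence
coarse = {("NP", 0, 1): 0.9, ("VP", 1, 3): 0.7, ("NP", 1, 3): 1e-5}
keep = allowed_spans(coarse)
print(may_build("VP@17", 1, 3, keep), may_build("NP@4", 1, 3, keep))   # True False
```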
10Unpruned experiments could not be run for 40-word test sentences even with 50GB of memory, therefore we calculated the improvement factors using a smaller experiment with full training and sixty 30-word test sentences. 11To run experiments without pruning, we used training and dev sentences of length ≤20 for the graph in Figure 5. 1103 tree-to-graph encoding Figure 6: Collapsing the duplicate training subtrees converts them to a graph and reduces the number of indexed symbols significantly. 4.2 Packed Graph Encoding The implicit all-fragments approach (Section 2.2) avoids explicit extraction of all rule fragments. However, the number of indexed symbols in our implicit grammar GI is still large, because every node in each training tree (i.e., every symbol token) has a unique indexed symbol. We have around 1.9 million indexed symbol tokens in the word-level parsing model (this number increases further to almost 12.3 million when we parse character strings in Section 5.1). This large symbol space makes parsing slow and memory-intensive. We reduce the number of symbols in our implicit grammar GI by applying a compact, packed graph encoding to the treebank training trees. We collapse the duplicate subtrees (fragments that bottom out in terminals) over all training trees. This keeps the grammar unchanged because in an tree-substitution grammar, a node is defined (identified) by the subtree below it. We maintain a hashmap on the subtrees which allows us to easily discover the duplicates and bin them together. The collapsing converts all the training trees in the treebank to a graph with multiple parents for some nodes as shown in Figure 6. This technique reduces the number of indexed symbols significantly as shown in Table 2 (1.9 million goes down to 0.9 million, reduction by a factor of 2.1). This reduction increases parsing speed by a factor of 1.4 (and by a factor of 20 for character-level parsing, see Section 5.1) and reduces memory usage to under 4GB. We store the duplicate-subtree counts for each indexed symbol of the collapsed graph (using a hashmap). When calculating the number of fragParsing Model No. of Indexed Symbols Word-level Trees 1,900,056 Word-level Graph 903,056 Character-level Trees 12,280,848 Character-level Graph 1,109,399 Table 2: Number of indexed symbols for word-level and character-level parsing and their graph versions (for allfragments grammar with parent annotation and one level of markovization). Figure 7: Character-level parsing: treating the sentence as a string of characters instead of words. ments s(Xi) parented by an indexed symbol Xi (see Section 3.2), and when calculating the inside and outside scores during inference, we account for the collapsed subtree tokens by expanding the counts and scores using the corresponding multiplicities. Therefore, we achieve the compaction with negligible overhead in computation. 5 Improved Treebank Representations 5.1 Character-Level Parsing The all-fragments approach to parsing has the added advantage that parsing below the word level requires no special treatment, i.e., we do not need an explicit lexicon when sentences are considered as strings of characters rather than words. Unknown words in test sentences (unseen in training) are a major issue in parsing systems for which we need to train a complex lexicon, with various unknown classes or suffix tries. Smoothing factors need to be accounted for and tuned. 
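The packed graph encoding above amounts to hash-consing identical subtrees across the treebank. The following is a minimal sketch of that idea under the same illustrative tuple encoding, keeping the duplicate-subtree counts (multiplicities) that the text says are stored in a hashmap; it is not the authors' data structure.

```python
def pack(treebank):
    """Collapse identical subtrees across the whole treebank.  Trees are nested
    tuples (label, child, ...), so equal subtrees compare equal and a single dict
    lookup merges them; `counts` records how many duplicate tokens each shared
    node stands for (the multiplicities used later when expanding counts and
    inside/outside scores)."""
    interned, counts = {}, {}

    def visit(node):
        if isinstance(node, str):
            return node
        packed = (node[0],) + tuple(visit(c) for c in node[1:])
        if packed not in interned:
            interned[packed] = packed
        counts[packed] = counts.get(packed, 0) + 1
        return interned[packed]

    roots = [visit(t) for t in treebank]
    return roots, counts

trees = [("NP", ("DT", "the"), ("NN", "dog")), ("NP", ("DT", "the"), ("NN", "dog"))]
roots, counts = pack(trees)
print(roots[0] is roots[1], counts[roots[0]])   # True 2  -- one shared node, multiplicity 2
```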
With our implicit approach, we can avoid training a lexicon by building up the parse tree from characters instead of words. As depicted in Figure 7, each word in the training trees is split into its corresponding characters with start and stop boundary tags (and then binarized in a standard rightbranching style). A test sentence’s words are split up similarly and the test-parse is built from training fragments using the same model and inference procedure as defined for word-level parsing (see Sections 2, 3 and 4). The lexical items (alphabets, digits etc.) are now all known, so unlike word-level parsing, no sophisticated lexicon is needed. We choose a slightly richer weighting scheme 1104 dev (≤40) test (≤40) test (all) Model F1 EX F1 EX F1 EX Constituent 88.2 33.6 88.0 31.9 87.1 29.8 Rule-Sum 88.0 33.9 87.8 33.1 87.0 30.9 Variational 87.6 34.4 87.2 32.3 86.4 30.2 Table 3: All-fragments WSJ results for the character-level parsing model, using parent annotation and one level of markovization. for this representation by extending the twoweight schema for CONTINUE rules (ωLEX and ωBODY) to a three-weight one: ωLEX, ωWORD, and ωSENT for CONTINUE rules in the lexical layer, in the portion of the parse that builds words from characters, and in the portion of the parse that builds the sentence from words, respectively. The tuned values are ωSENT = 0.35, ωWORD = 0.15, ωLEX = 0.95 and asp = 0. The character-level model achieves a parsing accuracy of 88.0% (see Table 3), despite lacking an explicit lexicon.12 Character-level parsing expands the training trees (see Figure 7) and the already large indexed symbol space size explodes (1.9 million increases to 12.3 million, see Table 2). Fortunately, this is where the packed graph encoding (Section 4.2) is most effective because duplication of character strings is high (e.g., suffixes). The packing shrinks the symbol space size from 12.3 million to 1.1 million, a reduction by a factor of 11. This reduction increases parsing speed by almost a factor of 20 and brings down memory-usage to under 8GB.13 5.2 Basic Refinement: Parent Annotation and Horizontal Markovization In a pure all-fragments approach, compositions of units which would have been independent in a basic PCFG are given joint scores, allowing the representation of certain non-local phenomena, such as lexical selection or agreement, which in fully local models require rich state-splitting or lexicalization. However, at substitution sites, the coarseness of raw unrefined treebank symbols still creates unrealistic factorization assumptions. A standard solution is symbol refinement; Johnson (1998) presents the particularly simple case of parent annotation, in which each node is 12Note that the word-level model yields a higher accuracy of 88.5%, but uses 50 complex unknown word categories based on lexical, morphological and position features (Petrov et al., 2006). Cohn et al. (2009) also uses this lexicon. 13Full char-level experiments (w/o packed graph encoding) could not be run even with 50GB of memory. We calculate the improvement factors using a smaller experiment with 70% training and fifty 20-word test sentences. Parsing Model F1 No Refinement (P=0, H=0)⋆ 71.3 Basic Refinement (P=1, H=1)⋆ 80.0 All-Fragments + No Refinement (P=0, H=0) 85.7 All-Fragments + Basic Refinement (P=1, H=1) 88.4 Table 4: F1 for a basic PCFG, and incorporation of basic refinement, all-fragments and both, for WSJ dev-set (≤40 words). P = 1 means parent annotation of all non-terminals, including the preterminal tags. 
H = 1 means one level of markovization. ⋆Results from Klein and Manning (2003). marked with its parent in the underlying treebank. It is reasonable to hope that the gains from using large fragments and the gains from symbol refinement will be complementary. Indeed, previous work has shown or suggested this complementarity. Sima’an (2000) showed modest gains from enriching structural relations with semi-lexical (prehead) information. Charniak and Johnson (2005) showed accuracy improvements from composed local tree features on top of a lexicalized base parser. Zuidema (2007) showed a slight improvement in parsing accuracy when enough fragments were added to learn enrichments beyond manual refinements. Our work reinforces this intuition by demonstrating how complementary they are in our model (∼20% error reduction on adding refinement to an all-fragments grammar, as shown in the last two rows of Table 4). Table 4 shows results for a basic PCFG, and its augmentation with either basic refinement (parent annotation and one level of markovization), with all-fragments rules (as in previous sections), or both. The basic incorporation of large fragments alone does not yield particularly strong performance, nor does basic symbol refinement. However, the two approaches are quite additive in our model and combine to give nearly state-of-the-art parsing accuracies. 5.3 Additional Deterministic Refinement Basic symbol refinement (parent annotation), in combination with all-fragments, gives test-set accuracies of 88.5% (≤40 words) and 87.6% (all), shown as the Basic Refinement model in Table 5. Klein and Manning (2003) describe a broad set of simple, deterministic symbol refinements beyond parent annotation. We included ten of their simplest annotation features, namely: UNARY-DT, UNARY-RB, SPLIT-IN, SPLIT-AUX, SPLIT-CC, SPLIT-%, GAPPED-S, POSS-NP, BASE-NP and DOMINATES-V. None of these annotation schemes use any head information. This additional annotation (see Ad1105 83 84 85 86 87 88 89 0 20 40 60 80 100 F1 Percentage of WSJ sections 2-21 used for training Figure 8: Parsing accuracy F1 on the WSJ dev-set (≤40 words) increases with increasing percentage of training data. ditional Refinement, Table 5) improves the testset accuracies to 88.7% (≤40 words) and 88.1% (all), which is equal to a strong lexicalized parser (Collins, 1999), even though our model does not use lexicalization or latent symbol-split induction. 6 Other Results 6.1 Parsing Speed and Memory Usage The word-level parsing model using the whole training set (39832 trees, all-fragments) takes approximately 3 hours on the WSJ test set (2245 trees of ≤40 words), which is equivalent to roughly 5 seconds of parsing time per sentence; and runs in under 4GB of memory. The character-level version takes about twice the time and memory. This novel tractability of an allfragments grammar is achieved using both coarsepass pruning and packed graph encoding. Microoptimization may further improve speed and memory usage. 6.2 Training Size Variation Figure 8 shows how WSJ parsing accuracy increases with increasing amount of training data (i.e., percentage of WSJ sections 2-21). Even if we train on only 10% of the WSJ training data (3983 sentences), we still achieve a reasonable parsing accuracy of nearly 84% (on the development set, ≤40 words), which is comparable to the fullsystem results obtained by Zuidema (2007), Cohn et al. (2009) and Post and Gildea (2009). 
6.3 Other Language Treebanks On the French and German treebanks (using the standard dataset splits mentioned in Petrov and test (≤40) test (all) Parsing Model F1 EX F1 EX FRAGMENT-BASED PARSERS Zuidema (2007) – – 83.8⋆ 26.9⋆ Cohn et al. (2009) – – 84.0 – Post and Gildea (2009) 82.6 – – – THIS PAPER All-Fragments + Basic Refinement 88.5 33.0 87.6 30.8 + Additional Refinement 88.7 33.8 88.1 31.7 REFINEMENT-BASED PARSERS Collins (1999) 88.6 – 88.2 – Petrov and Klein (2007) 90.6 39.1 90.1 37.1 Table 5: Our WSJ test set parsing accuracies, compared to recent fragment-based parsers and top refinement-based parsers. Basic Refinement is our all-fragments grammar with parent annotation. Additional Refinement adds deterministic refinement of Klein and Manning (2003) (Section 5.3). ⋆Results on the dev-set (≤100). Klein (2008)), our simple all-fragments parser achieves accuracies in the range of top refinementbased parsers, even though the model parameters were tuned out of domain on WSJ. For German, our parser achieves an F1 of 79.8% compared to 81.5% by the state-of-the-art and substantially more complex Petrov and Klein (2008) work. For French, our approach yields an F1 of 78.0% vs. 80.1% by Petrov and Klein (2008).14 7 Conclusion Our approach of using all fragments, in combination with basic symbol refinement, and even without an explicit lexicon, achieves results in the range of state-of-the-art parsers on full scale treebanks, across multiple languages. The main takeaway is that we can achieve such results in a very knowledge-light way with (1) no latent-variable training, (2) no sampling, (3) no smoothing beyond the existence of small fragments, and (4) no explicit unknown word model at all. While these methods offer a simple new way to construct an accurate parser, we believe that this general approach can also extend to other large-fragment tasks, such as machine translation. Acknowledgments This project is funded in part by BBN under DARPA contract HR0011-06-C-0022 and the NSF under grant 0643742. 14All results on the test set (≤40 words). 1106 References Rens Bod. 1993. Using an Annotated Corpus as a Stochastic Grammar. In Proceedings of EACL. Rens Bod. 2001. What is the Minimal Set of Fragments that Achieves Maximum Parse Accuracy? In Proceedings of ACL. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of ACL. Eugene Charniak, Sharon Goldwater, and Mark Johnson. 1998. Edge-Based Best-First Chart Parsing. In Proceedings of the 6th Workshop on Very Large Corpora. Eugene Charniak, Mark Johnson, et al. 2006. Multilevel Coarse-to-fine PCFG Parsing. In Proceedings of HLT-NAACL. Eugene Charniak. 2000. A Maximum-EntropyInspired Parser. In Proceedings of NAACL. David Chiang. 2003. Statistical parsing with an automatically-extracted tree adjoining grammar. In Data-Oriented Parsing. David Chiang. 2005. A Hierarchical Phrase-Based Model for Statistical Machine Translation. In Proceedings of ACL. Trevor Cohn, Sharon Goldwater, and Phil Blunsom. 2009. Inducing Compact but Accurate TreeSubstitution Grammars. In Proceedings of NAACL. Michael Collins and Nigel Duffy. 2002. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In Proceedings of ACL. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia. Steve Deneefe and Kevin Knight. 2009. Synchronous Tree Adjoining Machine Translation. 
In Proceedings of EMNLP. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT-NAACL. Joshua Goodman. 1996a. Efficient Algorithms for Parsing the DOP Model. In Proceedings of EMNLP. Joshua Goodman. 1996b. Parsing Algorithms and Metrics. In Proceedings of ACL. Joshua Goodman. 2003. Efficient parsing of DOP with PCFG-reductions. In Bod R, Scha R, Sima’an K (eds.) Data-Oriented Parsing. University of Chicago Press, Chicago, IL. James Henderson. 2004. Discriminative Training of a Neural Network Statistical Parser. In Proceedings of ACL. Mark Johnson. 1998. PCFG Models of Linguistic Tree Representations. Computational Linguistics, 24:613–632. Mark Johnson. 2002. The DOP Estimation Method Is Biased and Inconsistent. In Computational Linguistics 28(1). Dan Klein and Christopher Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of ACL. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT-NAACL. Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2005. Probabilistic CFG with latent annotations. In Proceedings of ACL. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of NAACL-HLT. Slav Petrov and Dan Klein. 2008. Sparse Multi-Scale Grammars for Discriminative Latent Variable Parsing. In Proceedings of EMNLP. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of COLING-ACL. Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-Fine Syntactic Machine Translation using Language Projections. In Proceedings of EMNLP. Matt Post and Daniel Gildea. 2009. Bayesian Learning of a Tree Substitution Grammar. In Proceedings of ACL-IJCNLP. Philip Resnik. 1992. Probabilistic Tree-Adjoining Grammar as a Framework for Statistical Natural Language Processing. In Proceedings of COLING. Remko Scha. 1990. Taaltheorie en taaltechnologie; competence en performance. In R. de Kort and G.L.J. Leerdam (eds.): Computertoepassingen in de Neerlandistiek. Khalil Sima’an. 1996. Computational Complexity of Probabilistic Disambiguation by means of TreeGrammars. In Proceedings of COLING. Khalil Sima’an. 2000. Tree-gram Parsing: Lexical Dependencies and Structural Relations. In Proceedings of ACL. Andreas Zollmann and Khalil Sima’an. 2005. A Consistent and Efficient Estimator for Data-Oriented Parsing. Journal of Automata, Languages and Combinatorics (JALC), 10(2/3):367–388. Willem Zuidema. 2007. Parsimonious Data-Oriented Parsing. In Proceedings of EMNLP-CoNLL. 1107
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1108–1117, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Joint Syntactic and Semantic Parsing of Chinese Junhui Li and Guodong Zhou School of Computer Science & Technology Soochow University Suzhou, China 215006 {lijunhui, gdzhou}@suda.edu.cn Hwee Tou Ng Department of Computer Science National University of Singapore 13 Computing Drive, Singapore 117417 [email protected] Abstract This paper explores joint syntactic and semantic parsing of Chinese to further improve the performance of both syntactic and semantic parsing, in particular the performance of semantic parsing (in this paper, semantic role labeling). This is done from two levels. Firstly, an integrated parsing approach is proposed to integrate semantic parsing into the syntactic parsing process. Secondly, semantic information generated by semantic parsing is incorporated into the syntactic parsing model to better capture semantic information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our integrated parsing approach outperforms the pipeline parsing approach on n-best parse trees, a natural extension of the widely used pipeline parsing approach on the top-best parse tree. Moreover, it shows that incorporating semantic role-related information into the syntactic parsing model significantly improves the performance of both syntactic parsing and semantic parsing. To our best knowledge, this is the first research on exploring syntactic parsing and semantic role labeling for both verbal and nominal predicates in an integrated way. 1 Introduction Semantic parsing maps a natural language sentence into a formal representation of its meaning. Due to the difficulty in deep semantic parsing, most previous work focuses on shallow semantic parsing, which assigns a simple structure (such as WHO did WHAT to WHOM, WHEN, WHERE, WHY, HOW) to each predicate in a sentence. In particular, the well-defined semantic role labeling (SRL) task has been drawing increasing attention in recent years due to its importance in natural language processing (NLP) applications, such as question answering (Narayanan and Harabagiu, 2004), information extraction (Surdeanu et al., 2003), and co-reference resolution (Kong et al., 2009). Given a sentence and a predicate (either a verb or a noun) in the sentence, SRL recognizes and maps all the constituents in the sentence into their corresponding semantic arguments (roles) of the predicate. In both English and Chinese PropBank (Palmer et al., 2005; Xue and Palmer, 2003), and English and Chinese NomBank (Meyers et al., 2004; Xue, 2006), these semantic arguments include core arguments (e.g., Arg0 for agent and Arg1 for recipient) and adjunct arguments (e.g., ArgM-LOC for locative argument and ArgM-TMP for temporal argument). According to predicate type, SRL can be divided into SRL for verbal predicates (verbal SRL, in short) and SRL for nominal predicates (nominal SRL, in short). With the availability of large annotated corpora such as FrameNet (Baker et al., 1998), PropBank, and NomBank in English, data-driven techniques, including both feature-based and kernel-based methods, have been extensively studied for SRL (Carreras and Màrquez, 2004; Carreras and Màrquez, 2005; Pradhan et al., 2005; Liu and Ng, 2007). 
Nevertheless, for both verbal and nominal SRL, state-of-the-art systems depend heavily on the top-best parse tree and there exists a large performance gap between SRL based on the gold parse tree and the top-best parse tree. For example, Pradhan et al. (2005) suffered a performance drop of 7.3 in F1-measure on English PropBank when using the top-best parse tree returned from Charniak’s parser (Charniak, 2001). Liu and Ng (2007) reported a performance drop of 4.21 in F1-measure on English NomBank. Compared with English SRL, Chinese SRL suffers more seriously from syntactic parsing. Xue (2008) evaluated on Chinese PropBank and showed that the performance of Chinese verbal SRL drops by about 25 in F1-measure when replacing gold parse trees with automatic ones. Likewise, Xue (2008) and Li et al. (2009) reported a performance drop of about 12 in F1-measure in Chinese NomBank SRL. 1108 While it may be difficult to further improve syntactic parsing, a promising alternative is to perform both syntactic and semantic parsing in an integrated way. Given the close interaction between the two tasks, joint learning not only allows uncertainty about syntactic parsing to be carried forward to semantic parsing but also allows useful information from semantic parsing to be carried backward to syntactic parsing. This paper explores joint learning of syntactic and semantic parsing for Chinese texts from two levels. Firstly, an integrated parsing approach is proposed to benefit from the close interaction between syntactic and semantic parsing. This is done by integrating semantic parsing into the syntactic parsing process. Secondly, various semantic role-related features are directly incorporated into the syntactic parsing model to better capture semantic role-related information in syntactic parsing. Evaluation on Chinese TreeBank, Chinese PropBank, and Chinese NomBank shows that our method significantly improves the performance of both syntactic and semantic parsing. This is promising and encouraging. To our best knowledge, this is the first research on exploring syntactic parsing and SRL for verbal and nominal predicates in an integrated way. The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 presents our baseline systems for syntactic and semantic parsing. Section 4 presents our proposed method of joint syntactic and semantic parsing for Chinese texts. Section 5 presents the experimental results. Finally, Section 6 concludes the paper. 2 Related Work Compared to the large body of work on either syntactic parsing (Ratnaparkhi, 1999; Collins, 1999; Charniak, 2001; Petrov and Klein, 2007), or SRL (Carreras and Màrquez, 2004; Carreras and Màrquez, 2005; Jiang and Ng, 2006), there is relatively less work on their joint learning. Koomen et al. (2005) adopted the outputs of multiple SRL systems (each on a single parse tree) and combined them into a coherent predicate argument output by solving an optimization problem. Sutton and McCallum (2005) adopted a probabilistic SRL system to re-rank the N-best results of a probabilistic syntactic parser. However, they reported negative results, which they blamed on the inaccurate probability estimates from their locally trained SRL model. As an alternative to the above pseudo-joint learning methods (strictly speaking, they are still pipeline methods), one can augment the syntactic label of a constituent with semantic information, like what function parsing does (Merlo and Musillo, 2005). 
Yi and Palmer (2005) observed that the distributions of semantic labels could potentially interact with the distributions of syntactic labels and redefined the boundaries of constituents. Based on this observation, they incorporated semantic role information into syntactic parse trees by extending syntactic constituent labels with their coarse-grained semantic roles (core argument or adjunct argument) in the sentence, and thus unified semantic parsing and syntactic parsing. The actual fine-grained semantic roles are assigned, as in other methods, by an ensemble classifier. However, the results obtained with this method were negative, and they concluded that semantic parsing on PropBank was too difficult due to the differences between chunk annotation and tree structure. Motivated by Yi and Palmer (2005), Merlo and Musillo (2008) first extended a statistical parser to produce a richly annotated tree that identifies and labels nodes with semantic role labels as well as syntactic labels. Then, they explored both rule-based and machine learning techniques to extract predicate-argument structures from this enriched output. Their experiments showed that their method was biased against these roles in general, thus lowering recall for them (e.g., precision of 87.6 and recall of 65.8). There have been other efforts in NLP on joint learning with various degrees of success. In particular, the recent shared tasks of CoNLL 2008 and 2009 (Surdeanu et al., 2008; Hajic et al., 2009) tackled joint parsing of syntactic and semantic dependencies. However, all the top 5 reported systems decoupled the tasks, rather than building joint models. Compared with the disappointing results of joint learning on syntactic and semantic parsing, Miller et al. (2000) and Finkel and Manning (2009) showed the effectiveness of joint learning on syntactic parsing and some simple NLP tasks, such as information extraction and name entity recognition. In addition, attempts on joint Chinese word segmentation and part-of-speech (POS) tagging (Ng and Low, 2004; Zhang and Clark, 2008) also illustrate the benefits of joint learning. 1109 3 Baseline: Pipeline Parsing on Top-Best Parse Tree In this section, we briefly describe our approach to syntactic parsing and semantic role labeling, as well as the baseline system with pipeline parsing on the top-best parse tree. 3.1 Syntactic Parsing Our syntactic parser re-implements Ratnaparkhi (1999), which adopts the maximum entropy principle. The parser recasts a syntactic parse tree as a sequence of decisions similar to those of a standard shift-reduce parser and the parsing process is organized into three left-to-right passes via four procedures, called TAG, CHUNK, BUILD, and CHECK. First pass. The first pass takes a tokenized sentence as input, and uses TAG to assign each word a part-of-speech. Second pass. The second pass takes the output of the first pass as input, and uses CHUNK to recognize basic chunks in the sentence. Third pass. The third pass takes the output of the second pass as input, and always alternates between BUILD and CHECK in structural parsing in a recursive manner. Here, BUILD decides whether a subtree will start a new constituent or join the incomplete constituent immediately to its left. CHECK finds the most recently proposed constituent, and decides if it is complete. 3.2 Semantic Role Labeling Figure 1 demonstrates an annotation example of Chinese PropBank and NomBank. 
In the figure, the verbal predicate “提供/provide” is annotated with three core arguments (i.e., “NP (中国 /Chinese 政府/govt.)” as Arg0, “PP (向/to 朝 鲜/N. Korean 政府/govt.)” as Arg2, and “NP (人民币/RMB 贷款/loan)” as Arg1), while the nominal predicate “贷款/loan” is annotated with two core arguments (i.e., “NP (中国/Chinese 政 府/govt.)” as Arg1 and “PP (向/to 朝鲜/N. Korean 政府/govt.)” as Arg0), and an adjunct argument (i.e., “NN ( 人民币/RMB)” as ArgM-MNR, denoting the manner of loan). It is worth pointing out that there is a (Chinese) NomBank-specific label in Figure 1, Sup (support verb) (Xue, 2006), to help introduce the arguments which occur outside the nominal predicate-headed noun phrase. In (Chinese) NomBank, a verb is considered to be a support verb only if it shares at least an argument with the nominal predicate. 3.2.1 Automatic Predicate Recognition Automatic predicate recognition is a prerequisite for the application of SRL systems. For verbal predicates, it is very easy. For example, 99% of verbs are annotated as predicates in Chinese PropBank. Therefore, we can simply select any word with a part-of-speech (POS) tag of VV, VA, VC, or VE as verbal predicate. Unlike verbal predicate recognition, nominal predicate recognition is quite complicated. For Figure 1: Two predicates (Rel1 and Rel2) and their arguments in the style of Chinese PropBank and NomBank. 向 to 朝鲜 N. Korean 政府 govt. 提供 provide P NR NN VV NN NN NP PP Arg0/Rel2 Arg2/Rel1 ArgM-MNR/Rel2 Rel2 NP VP VP 人民币 RMB 贷款 loan 。 . NR NN PU NP Arg1/Rel2 Arg0/Rel1 IP 中国 Chinese 政府 govt. Sup/Rel2 Rel1 Chinese government provides RMB loan to North Korean government. Arg1/Rel1 TOP 1110 example, only 17.5% of nouns are annotated as predicates in Chinese NomBank. It is quite common that a noun is annotated as a predicate in some cases but not in others. Therefore, automatic predicate recognition is vital to nominal SRL. In principle, automatic predicate recognition can be cast as a binary classification (e.g., Predicate vs. Non-Predicate) problem. For nominal predicates, a binary classifier is trained to predict whether a noun is a nominal predicate or not. In particular, any word POS-tagged as NN is considered as a predicate candidate in both training and testing processes. Let the nominal predicate candidate be w0, and its left and right neighboring words/POSs be w-1/p-1and w1/p1, respectively. Table 1 lists the feature set used in our model. In Table 1, local features present the candidate’s contextual information while global features show its statistical information in the whole training set. Type Description w0, w-1, w1, p-1, p1 local features The first and last characters of the candidate Whether w0 is ever tagged as a verb in the training data? Yes/No Whether w0 is ever annotated as a nominal predicate in the training data? Yes/No The most likely label for w0 when it occurs together with w-1 and w1. The most likely label for w0 when it occurs together with w-1. global features The most likely label for w0 when it occurs together with w1. Table 1: Feature set for nominal predicate recognition 3.2.2 SRL for Chinese Predicates Our Chinese SRL models for both verbal and nominal predicates adopt the widely-used SRL framework, which divides the task into three sequential sub-tasks: argument pruning, argument identification, and argument classification. In particular, we follow Xue (2008) and Li et al. (2009) to develop verbal and nominal SRL models, respectively. 
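A sketch of how the Table 1 features for nominal predicate recognition could be instantiated; the `stats` lookups (training-set vocabularies and majority-label maps) and all feature names are assumptions for illustration, not the authors' feature encoding.

```python
def predicate_features(words, tags, k, stats):
    """Features (after Table 1) for deciding whether the NN at position k is a
    nominal predicate.  `stats` is assumed to hold the training-set statistics
    behind the global features: 'verb_vocab', 'predicate_vocab', and majority-label
    maps keyed by context tuples."""
    w0 = words[k]
    w_prev = words[k - 1] if k > 0 else "<S>"
    w_next = words[k + 1] if k + 1 < len(words) else "</S>"
    p_prev = tags[k - 1] if k > 0 else "<S>"
    p_next = tags[k + 1] if k + 1 < len(tags) else "</S>"
    return {
        # local features: the candidate, its neighbours, and its first/last characters
        "w0=" + w0: 1, "w-1=" + w_prev: 1, "w+1=" + w_next: 1,
        "p-1=" + p_prev: 1, "p+1=" + p_next: 1,
        "first_char=" + w0[0]: 1, "last_char=" + w0[-1]: 1,
        # global features: statistics of w0 over the whole training set
        "ever_verb=%s" % (w0 in stats["verb_vocab"]): 1,
        "ever_predicate=%s" % (w0 in stats["predicate_vocab"]): 1,
        "likely(w-1,w0,w+1)=" + stats["label_lrc"].get((w_prev, w0, w_next), "NONE"): 1,
        "likely(w-1,w0)=" + stats["label_lc"].get((w_prev, w0), "NONE"): 1,
        "likely(w0,w+1)=" + stats["label_rc"].get((w0, w_next), "NONE"): 1,
    }
```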
Moreover, we have further improved the performance of Chinese verbal SRL by exploring additional features, e.g., voice position, which indicates whether the voice marker (BA, BEI) occurs before or after the constituent in focus, the rule that expands the parent of the constituent in focus, and the core arguments defined in the predicate's frame file. For nominal SRL, we simply use the final feature set of Li et al. (2009). As a result, our Chinese verbal and nominal SRL systems achieve performance of 92.38 and 72.67 in F1-measure respectively (on gold parse trees and gold predicates), which is comparable to Xue (2008) and Li et al. (2009). For more details, please refer to Xue (2008) and Li et al. (2009).
3.3 Pipeline Parsing on Top-best Parse Tree
Similar to most of the state-of-the-art systems (Pradhan et al., 2005; Xue, 2008; Li et al., 2009), the top-best parse tree is first returned from our syntactic parser and then fed into the SRL system. Specifically, the verbal (nominal) SRL labeler is in charge of verbal (nominal) predicates, respectively. For each sentence, since SRL is only performed on one parse tree, only constituents in it are candidates for semantic arguments. Therefore, if no constituent in the parse tree maps to the same text span as an argument in the manual annotation, the system cannot produce a correct annotation for that argument.
4 Joint Syntactic and Semantic Parsing
In this section, we first explore pipeline parsing on N-best parse trees, as a natural extension of pipeline parsing on the top-best parse tree. Then, joint syntactic and semantic parsing is explored for Chinese texts from two levels. Firstly, an integrated parsing approach to joint syntactic and semantic parsing is proposed. Secondly, various semantic role-related features are directly incorporated into the syntactic parsing model for better interaction between the two tasks.
4.1 Pipeline Parsing on N-best Parse Trees
The pipeline parsing approach employed in this paper is largely motivated by the general framework of re-ranking, as proposed in Sutton and McCallum (2005). The idea behind this approach is that it allows uncertainty about syntactic parsing to be carried forward through an N-best list, and that a reliable SRL system, to a certain extent, can reflect the quality of syntactic parse trees. Given a sentence x, a joint parsing model is defined over a semantic frame F and a parse tree t in a log-linear way:
Score(F, t|x) = (1 − α) log P(F|t, x) + α log P(t|x)    (1)
where P(t|x) is returned by a probabilistic syntactic parsing model, e.g., our syntactic parser, and P(F|t, x) is returned by a probabilistic semantic parsing model, e.g., our verbal and nominal SRL systems. In our pipeline parsing approach, P(t|x) is calculated as the product of the probabilities of all decisions made by the syntactic parsing model, and P(F|t, x) is calculated as the product of the probabilities of all semantic role labels in the sentence (covering both verbal and nominal SRL). That is to say, we only consider those constituents that are supposed to be arguments. Here, the parameter α is a balance factor that weights the relative importance of the semantic and syntactic parsing models. In particular, the pair (F*, t*) with maximal Score(F, t|x) is selected as the final syntactic and semantic parsing result. Given a sentence, N-best parse trees are generated first using the syntactic parser, and then for each parse tree, we predict the best SRL frame using our verbal and nominal SRL systems.
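A minimal sketch of the N-best re-ranking step defined by equation (1); the record layout of the N-best list is an assumption, and since the paper sets α = 0.5 the two log terms are weighted equally regardless of which term carries α.

```python
import math

def rerank(nbest, alpha=0.5):
    """Pick (F*, t*) from an N-best list by the joint score of equation (1).
    Each candidate is assumed to carry p_tree = P(t|x) from the syntactic model,
    its best SRL frame, and p_srl = P(F|t, x), the latter being the product of
    the labeled arguments' probabilities only."""
    def score(cand):
        return (1 - alpha) * math.log(cand["p_srl"]) + alpha * math.log(cand["p_tree"])
    best = max(nbest, key=score)
    return best["frame"], best["tree"]

# toy usage: with alpha = 0.5 the two components are weighted equally, as in the paper
nbest = [
    {"tree": "t1", "frame": "F1", "p_tree": 0.40, "p_srl": 0.10},
    {"tree": "t2", "frame": "F2", "p_tree": 0.30, "p_srl": 0.25},
]
print(rerank(nbest))   # ('F2', 't2'): a slightly worse parse wins on SRL confidence
```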
4.2 Integrated Parsing Although pipeline parsing on N-best parse trees could relieve severe dependence on the quality of the top-best parse tree, there is still a potential drawback: this method suffers from the limited scope covered by the N-best parse trees since the items in the parse tree list may be too similar, especially for long sentences. For example, 50-best parse trees can only represent a combination of 5 to 6 binary ambiguities since 2^5 < 50 < 2^6. Ideally, we should perform SRL on as many parse trees as possible, so as to enlarge the search scope. However, pipeline parsing on all possible parse trees is time-consuming and thus unrealistic. As an alternative, we turn to integrated parsing, which aims to perform syntactic and semantic parsing synchronously. The key idea is to construct a parse tree in a bottom-up way so that it is feasible to perform SRL at suitable moments, instead of only when the whole parse tree is built. Integrated parsing is practicable, mostly due to the following two observations: (1) Given a predicate in a parse tree, its semantic arguments are usually siblings of the predicate, or siblings of its ancestor. Actually, this special observation has been widely employed in SRL to prune non-arguments for a verbal or nominal predicate (Xue, 2008; Li et al., 2009). (2) SRL feature spaces (both in feature-based method and kernel-based method) mostly focus on the predicate-argument structure of a given (predicate, argument) pair. That is to say, once a predicate-argument structure is formed (i.e., an argument candidate is connected with the given predicate), there is enough contextual information to predict their SRL relation. As far as our syntactic parser is concerned, we invoke the SRL systems once a new constituent covering a predicate is complete with a “YES” decision from the CHECK procedure. Algorithm Algorithm 1. The algorithm integrating syntactic parsing and SRL. Assume: t: constituent which is complete with “YES” decision of CHECK procedure P: number of predicates Pi: ith predicate S: SRL result, set of predicates and its arguments BEGIN srl_prob = 0.0; FOR i=1 to P DO IF t covers Pi THEN T = number of children of t; FOR j=1 to T DO IF t’s jth child Chj does not cover Pi THEN Run SRL given predicate Pi and constituent Chj to get their semantic role lbl and its probability prob; IF lbl does not indicate non-argument THEN srl_prob += log( prob ); S = S ∪ {(Pi, Chj, lbl)}; END IF END IF END FOR END IF END FOR return srl_prob; END 1112 1 illustrates the integration of syntactic and semantic parsing. For the example shown in Figure 2, the CHECK procedure predicts a “YES” decision, indicating the immediately proposed constituent “VP (提供/provide 人民币/RMB 贷款/loan)” is complete. So, at this moment, the verbal SRL system is invoked to predict the semantic label of the constituent “NP (人民币 /RMB 贷款/loan)”, given the verbal predicate “VV (提供/provide)”. Similarly, “PP (向/to 朝 鲜/N. Korean 政府/govt.)” would also be semantically labeled as soon as “PP (向/to 朝鲜/N. Korean 政府/govt.)” and “VP (提供/provide 人 民币/RMB 贷款/loan)” are merged into a bigger VP. In this way, both syntactic and semantic parsing are accomplished when the root node TOP is formed. It is worth pointing out that all features (Xue, 2008; Li et al., 2009) used in our SRL model can be instantiated and their values are same as the ones when the whole tree is available. 
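For concreteness, here is a Python rendering of Algorithm 1; the span-based constituent encoding and the srl_model(predicate, constituent) interface (returning a label and its probability, with 'NONE' standing for non-argument) are illustrative assumptions rather than the authors' implementation.

```python
import math

def covers(constituent, pred_index):
    i, j = constituent["span"]
    return i <= pred_index < j

def integrated_srl_step(t, predicates, srl_model, S):
    """Called whenever CHECK accepts a newly completed constituent t (a dict with
    'span' and 'children').  `predicates` are word positions in the sentence.
    Labeled arguments are added to S and their log-probabilities are summed,
    following the loop structure of Algorithm 1."""
    srl_logprob = 0.0
    for p in predicates:
        if not covers(t, p):                       # only predicates inside t matter here
            continue
        for child in t["children"]:
            if covers(child, p):                   # skip the child that contains p itself
                continue
            label, prob = srl_model(p, child)
            if label != "NONE":                    # a real argument, not a non-argument
                srl_logprob += math.log(prob)
                S.add((p, child["span"], label))
    return srl_logprob
```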
In particular, the probability computed from the SRL model is interpolated with that of the syntactic parsing model in a log-linear way (with equal weights in our experiments). This is due to our hypothesis that the probability returned from SRL model is helpful to joint syntactic and semantic parsing, considering the close interaction between the two tasks. 4.3 Integrating Semantic Role-related Features into Syntactic Parsing Model The integrated parsing approach as shown in Section 4.2 performs syntactic and semantic parsing synchronously. In contrast to traditional syntactic parsers where no semantic role-related information is used, it may be interesting to investigate the contribution of such information in the syntactic parsing model, due to the availability of such information in the syntactic parsing process. In addition, it is found that 11% of predicates in a sentence are speculatively attached with two or more core arguments with the same label due to semantic parsing errors (partly caused by syntactic parsing errors in automatic parse trees). This is abnormal since a predicate normally only allows at most one argument of each core argument role (i.e., Arg0-Arg4). Therefore, such syntactic errors should be avoidable by considering those arguments already obtained in the bottom-up parsing process. On the other hand, taking those expected semantic roles into account would help the syntactic parser. In terms of our syntactic parsing model, this is done by directly incorporating various semantic role-related features into the syntactic parsing model (i.e., the BUILD procedure) when the newly-formed constituent covers one or more predicates. For the example shown in Figure 2, once the constituent “VP (提供/provide 人民币/RMB 贷款/loan)”, which covers a verbal predicate “VV (提供/provide)”, is complete, the verbal SRL model would be triggered first to mark constituent “NP (人民币/RMB 贷款/loan)” as ARG1, given predicate “VV (提供/provide)”. Then, the BUILD procedure is called to make the BUILD decision for the newly-formed constituent “VP (提供/provide 人民币/RMB 贷款 /loan)”. Table 2 lists various semantic role-related features explored in our syntactic parsing model and their instantiations with regard to the example shown in Figure 2. In Table 2, feature sf4 gives the possible core semantic roles that the focus predicate may take, according to its frame file; feature sf5 presents the semantic roles that the focus predicate has already occupied; feature sf6 indicates the semantic roles that the focus predicate is expecting; and SF1-SF8 are combined features. Specifically, if the current constituent covers n predicates, then 14 * n features would be instantiated. Moreover, we differentiate whether the focus predicate is verbal or nominal, and whether it is the head word of the current constituent. Feature Selection. Some features proposed above may not be effective in syntactic parsing. Here we adopt the greedy feature selection algorithm as described in Jiang and Ng (2006) to select useful features empirically and incrementally according to their contributions on the development data. The algorithm repeatedly selects one feature each time which contributes the most, and stops when adding any of the remainFigure 2: An application of CHECK with YES as the decision. Thus, VV (提供/provide) and NP (人民币 /RMB 贷款/loan) reduce to a big VP. P NP PP Start_VP / NO VV NP 人民币 RMB 贷款 loan NN NN 提供 provide 向 to NR NN 朝鲜 N. Korean 政府 govt. … … VP YES? 1113 ing features fails to improve the syntactic parsing performance. 
Feat. Description sf1 Path: the syntactic path from C to P. (VP>VV) sf2 Predicate: the predicate itself. (提供/provide) sf3 Predicate class (Xue, 2008): the class that P belongs to. (C3b) sf4 Possible roles: the core semantic roles P may take. (Arg0, Arg1, Arg2) sf5 Detected roles: the core semantic roles already assigned to P. (Arg1) sf6 Expected roles: possible semantic roles P is still expecting. (Arg0, Arg2) SF1 For each already detected argument, its role label + its path from P. (Arg1+VV<VP>NP) SF2 sf1 + sf2. (VP>VV+提供/provide) SF3 sf1 + sf3. (VP>VV+C3b) SF4 Combined possible argument roles. (Arg0+Arg1+Arg2) SF5 Combined detected argument roles. (Arg1) SF6 Combined expected argument roles. (Arg0+Arg2) SF7 For each expected semantic role, sf1 + its role label. (VP>VV+Arg0, VP>VV+Arg2) SF8 For each expected semantic role, sf2 + its role label. (提供/provide+Arg0, 提供/provide+Arg2) Table 2: SRL-related features and their instantiations for syntactic parsing, with “VP (提供/provide 人民 币/RMB 贷款/loan)” as the current constituent C and “提供/provide” as the focus predicate P, based on Figure 2. 5 Experiments and Results We have evaluated our integrated parsing approach on Chinese TreeBank 5.1 and corresponding Chinese PropBank and NomBank. 5.1 Experimental Settings This version of Chinese PropBank and Chinese NomBank consists of standoff annotations on the file (chtb 001 to 1151.fid) of Chinese Penn TreeBank 5.1. Following the experimental settings in Xue (2008) and Li et al. (2009), 648 files (chtb 081 to 899.fid) are selected as the training data, 72 files (chtb 001 to 040.fid and chtb 900 to 931.fid) are held out as the test data, and 40 files (chtb 041 to 080.fid) are selected as the development data. In particular, the training, test, and development data contain 31,361 (8,642), 3,599 (1,124), and 2,060 (731) verbal (nominal) propositions, respectively. For the evaluation measurement on syntactic parsing, we report labeled recall, labeled precision, and their F1-measure. Also, we report recall, precision, and their F1-measure for evaluation of SRL on automatic predicates, combining verbal SRL and nominal SRL. An argument is correctly labeled if there is an argument in manual annotation with the same semantic label that spans the same words. Moreover, we also report the performance of predicate recognition. To see whether an improvement in F1-measure is statistically significant, we also conduct significance tests using a type of stratified shuffling which in turn is a type of compute-intensive randomized tests. In this paper, ‘>>>’, ‘>>’, and ‘>’ denote p-values less than or equal to 0.01, in-between (0.01, 0.05], and bigger than 0.05, respectively. We are not aware of any SRL system combing automatic predicate recognition, verbal SRL and nominal SRL on Chinese PropBank and NomBank. Xue (2008) experimented independently with verbal and nominal SRL and assumed correct predicates. Li et al. (2009) combined nominal predicate recognition and nominal SRL on Chinese NomBank. The CoNLL-2009 shared task (Hajic et al., 2009) included both verbal and nominal SRL on dependency parsing, instead of constituent-based syntactic parsing. Thus the SRL performances of their systems are not directly comparable to ours. 5.2 Results and Discussions Results of pipeline parsing on N-best parse trees. While performing pipeline parsing on N-best parse trees, 20-best (the same as the heap size in our syntactic parsing) parse trees are obtained for each sentence using our syntactic parser as described in Section 3.1. 
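The significance tests are described above only as stratified shuffling; the sketch below is a generic paired approximate-randomization test over per-sentence outputs, one common instantiation of that idea, and not necessarily the authors' exact procedure. The metric argument would be, for example, labeled F1.

import random

def approximate_randomization(metric, outputs_a, outputs_b, gold, trials=10000, seed=0):
    # metric    : function(system_outputs, gold) -> score (e.g. F1)
    # outputs_a : per-sentence outputs of system A
    # outputs_b : per-sentence outputs of system B
    # returns the p-value for the observed score difference
    rng = random.Random(seed)
    observed = abs(metric(outputs_a, gold) - metric(outputs_b, gold))
    count = 0
    for _ in range(trials):
        shuffled_a, shuffled_b = [], []
        for a, b in zip(outputs_a, outputs_b):
            # swap the two systems' outputs for this sentence with probability 0.5
            if rng.random() < 0.5:
                shuffled_a.append(b)
                shuffled_b.append(a)
            else:
                shuffled_a.append(a)
                shuffled_b.append(b)
        diff = abs(metric(shuffled_a, gold) - metric(shuffled_b, gold))
        if diff >= observed:
            count += 1
    return (count + 1) / (trials + 1)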
The balance factor α is set to 0.5 indicating that the two components in formula (1) are equally important. Table 3 compares the two pipeline parsing approaches on the top-best parse tree and the N-best parse trees. It shows that the approach on N-best parse trees outperforms the one on the top-best parse tree by 0.42 (>>>) in F1-measure on SRL. In addition, syntactic parsing also benefits from the N-best parse trees approach with improvement of 0.17 (>>>) in F1-measure. This suggests that pipeline parsing on N-best parse trees can improve both syntactic and semantic parsing. It is worth noting that our experimental results in applying the re-ranking framework in Chinese pipeline parsing on N-best parse trees are very encouraging, considering the pessimistic results of Sutton and McCallum (2005), in which the re-ranking framework failed to improve the performance on English SRL. It may be because, 1114 unlike Sutton and McCallum (2005), P(F, t|x) defined in this paper only considers those constituents which are identified as arguments. This can effectively avoid the noises caused by the predominant non-argument constituents. Moreover, the huge performance gap between Chinese semantic parsing on the gold parse tree and that on the top-best parse tree leaves much room for performance improvement. Method Task R (%) P (%) F1 Syntactic 76.68 79.12 77.88 SRL 62.96 65.04 63.98 Predicate 94.18 92.28 93.22 V-SRL 65.33 68.52 66.88 V-Predicate 89.52 93.12 91.29 N-SRL 49.58 48.19 48.88 Pipeline on top -best parse tree N-Predicate 86.83 71.76 78.58 Syntactic 76.89 79.25 78.05 SRL 62.99 65.88 64.40 Predicate 94.07 92.22 93.13 V-SRL 65.41 69.09 67.20 V-Predicate 89.66 93.02 91.31 N-SRL 49.24 49.46 49.35 Pipeline on 20 -best parse trees N-Predicate 86.65 72.15 78.74 Syntactic 77.14 79.01 78.07 SRL 62.67 67.67 65.07 Predicate 93.97 92.42 93.19 V-SRL 65.37 70.27 67.74 V-Predicate 90.08 92.87 91.45 N-SRL 48.02 52.83 50.31 Integrated parsing N-Predicate 85.41 73.23 78.85 Syntactic 77.47 79.58 78.51 SRL 63.14 68.17 65.56 Predicate 93.97 92.52 93.24 V-SRL 65.74 70.98 68.26 V-Predicate 89.86 93.17 91.49 N-SRL 48.80 52.67 50.66 Integrated parsing with semantic role-related features N-Predicate 85.85 72.78 78.78 Table 3: Syntactic and semantic parsing performance on test data (using gold standard word boundaries). “V-” denotes “verbal” while “N-”denotes “nominal”. Results of integrated parsing. Table 3 also compares the integrated parsing approach with the two pipeline parsing approaches. It shows that the integrated parsing approach improves the performance of both syntactic and semantic parsing by 0.19 (>) and 1.09 (>>>) respectively in F1-measure over the pipeline parsing approach on the top-best parse tree. It is also not surprising to find out that the integrated parsing approach outperforms the pipeline parsing approach on 20-best parse trees by 0.67 (>>>) in F1-measure on SRL, due to its exploring a larger search space, although the integrated parsing approach integrates the SRL probability and the syntactic parsing probability in the same manner as the pipeline parsing approach on 20-best parse trees. However, the syntactic parsing performance gap between the integrated parsing approach and the pipeline parsing approach on 20-best parse trees is negligible. Results of integrated parsing with semantic role-related features. 
After performing the greedy feature selection algorithm on the development data, features {SF3, SF2, sf5, sf6, SF4} as proposed in Section 4.3 are sequentially selected for syntactic parsing. As what we have assumed, knowledge about the detected semantic roles and expected semantic roles is helpful for syntactic parsing. Table 3 also lists the performance achieved with those selected features. It shows that the integration of semantic role-related features in integrated parsing significantly enhances both the performance of syntactic and semantic parsing by 0.44 (>>>) and 0.49 (>>) respectively in F1-measure. In addition, it shows that it outperforms the widely-used pipeline parsing approach on top-best parse tree by 0.63 (>>>) and 1.58 (>>>) in F1-measure on syntactic and semantic parsing, respectively. Finally, it shows that it outperforms the widely-used pipeline parsing approach on 20-best parse trees by 0.46 (>>>) and 1.16 (>>>) in F1-measure on syntactic and semantic parsing, respectively. This is very encouraging, considering the notorious difficulty and complexity of both the syntactic and semantic parsing tasks. Table 3 also shows that our proposed method works well for both verbal SRL and nominal SRL. In addition, it shows that the performance of predicate recognition is very stable due to its high dependence on POS tagging results, rather than syntactic parsing results. Finally, it is not surprising to find out that the performance of predicate recognition when mixing verbal and nominal predicates is better than the performance of either verbal predicates or nominal predicates. 5.3 Extending the Word-based Syntactic Parser to a Character-based Syntactic Parser The above experimental results on a word-based syntactic parser (assuming correct word segmentation) show that both syntactic and semantic parsing benefit from our integrated parsing approach. However, observing the great challenge of word segmentation in Chinese informa1115 tion processing, it is still unclear whether and how much joint learning benefits character-based syntactic and semantic parsing. In this section, we extended the Ratnaparkhi parser (1999) to a character-based parser (with automatic word segmentation), and then examined the effectiveness of joint learning. Given the three-pass process in the word-based syntactic parser, it is easy to extend it to a character-based parser for Chinese texts. This can be done by only replacing the TAG procedure in the first pass with a POSCHUNK procedure, which integrates Chinese word segmentation and POS tagging in one step, following the method described in (Ng and Low 2004). Here, each character is annotated with both a boundary tag and a POS tag. The 4 possible boundary tags include “B” for a character that begins a word and is followed by another character, “M” for a character that occurs in the middle of a word, “E” for a character that ends a word, and “S” for a character that occurs as a single-character word. For example, “北京市 /Beijing city/NR” would be decomposed into three units: “ 北 /north/B_NR”, “ 京 /capital/M_NR”, and “市/city/E_NR”. Also, “是 /is/VC” would turn into “是/is/S_VC”. Through POSCHUNK, all characters in a sentence are first assigned with POS chunk labels which must be compatible with previous ones, and then merged into words with their POS tags. For example, “北/north/B_NR”, “京/capital/M_NR”, and “市/city/E_NR” will be merged as “北京市 /Beijing/NR”, “是/is/S_VC” will become “是 /is/VC”. 
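The decomposition into character units and the inverse merge can be sketched directly from the description above; the B/M/E/S tag set follows the text, while the function names are illustrative.

def to_char_units(words):
    # words: list of (word, POS) pairs, e.g. [("北京市", "NR"), ("是", "VC")]
    units = []
    for word, pos in words:
        chars = list(word)
        if len(chars) == 1:
            units.append((chars[0], "S_" + pos))
        else:
            units.append((chars[0], "B_" + pos))
            units += [(c, "M_" + pos) for c in chars[1:-1]]
            units.append((chars[-1], "E_" + pos))
    return units

def merge_units(units):
    # inverse step: POSCHUNK output (character, boundary_POS) -> (word, POS)
    words, buf, pos = [], "", None
    for ch, tag in units:
        boundary, pos = tag.split("_", 1)
        buf = ch if boundary in ("B", "S") else buf + ch
        if boundary in ("E", "S"):
            words.append((buf, pos))
            buf = ""
    return words

to_char_units([("北京市", "NR"), ("是", "VC")])
# [('北', 'B_NR'), ('京', 'M_NR'), ('市', 'E_NR'), ('是', 'S_VC')]
merge_units(to_char_units([("北京市", "NR"), ("是", "VC")]))
# [('北京市', 'NR'), ('是', 'VC')]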
Finally the merged results of the POSCHUNK are fed into the CHUNK procedure of the second pass. Using the same data split as the previous experiments, word segmentation achieves performance of 96.3 in F1-measure on the test data. Table 4 lists the syntactic and semantic parsing performance by adopting the character-based parser. Table 4 shows that integrated parsing benefits syntactic and semantic parsing when automatic word segmentation is considered. However, the improvements are smaller due to the extra noise caused by automatic word segmentation. For example, our experiments show that the performance of predicate recognition drops from 93.2 to 90.3 in F1-measure when replacing correct word segmentations with automatic ones. Method Task R (%) P (%) F1 Syntactic 82.23 84.28 83.24 Pipeline on top-best parse tree SRL 60.40 62.75 61.55 Syntactic 82.25 84.29 83.26 Pipeline on 20-best parse trees SRL 60.17 63.63 61.85 Syntactic 82.51 84.31 83.40 Integrated parsing with semantic role-related features SRL 60.09 65.35 62.61 Table 4: Performance with the character-based parser1 (using automatically recognized word boundaries). 6 Conclusion In this paper, we explore joint syntactic and semantic parsing to improve the performance of both syntactic and semantic parsing, in particular that of semantic parsing. Evaluation shows that our integrated parsing approach outperforms the pipeline parsing approach on N-best parse trees, a natural extension of the widely-used pipeline parsing approach on the top-best parse tree. It also shows that incorporating semantic information into syntactic parsing significantly improves the performance of both syntactic and semantic parsing. This is very promising and encouraging, considering the complexity of both syntactic and semantic parsing. To our best knowledge, this is the first successful research on exploring syntactic parsing and semantic role labeling for verbal and nominal predicates in an integrated way. Acknowledgments The first two authors were financially supported by Projects 60683150, 60970056, and 90920004 under the National Natural Science Foundation of China. This research was also partially supported by a research grant R-252-000-225-112 from National University of Singapore Academic Research Fund. We also want to thank the reviewers for insightful comments. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of COLING-ACL 1998. Xavier Carreras and Lluis Màrquez. 2004. Introduction to the CoNLL-2004 Shared Task: Semantic Role Labeling. In Proceedings of CoNLL 2004. 1 POS tags are included in evaluating the performance of a character-based syntactic parser. Thus it cannot be directly compared with the word-based one where correct word segmentation is assumed. 1116 Xavier Carreras and Lluis Màrquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In Proceedings of CoNLL 2005. Eugene Charniak. 2001. Immediate-Head Parsing for Language Models. In Proceedings of ACL 2001. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Jenny Rose Finkel and Christopher D. Manning. 2009. Joint Parsing and Named Entity Recognition. In Proceedings of NAACL 2009. Jan Hajic, Massimiliano Ciaramita, Richard Johansson, et al. 2009. The CoNLL-2009 Shared Task: Syntactic and Semantic Dependencies in Multiple Languages. In Proceedings of CoNLL 2009. Zheng Ping Jiang and Hwee Tou Ng. 2006. 
Semantic Role Labeling of NomBank: A Maximum Entropy Approach. In Proceedings of EMNLP 2006. Fang Kong, Guodong Zhou, and Qiaoming Zhu. 2009. Employing the Centering Theory in Pronoun Resolution from the Semantic Perspective. In Proceedings of EMNLP 2009. Peter Koomen, Vasin Punyakanok, Dan Roth, Wen-tau Yih. 2005. Generalized Inference with Multiple Semantic Role Labeling Systems. In Proceedings of CoNLL 2005. Junhui Li, Guodong Zhou, Hai Zhao, Qiaoming Zhu, and Peide Qian. 2009. Improving Nominal SRL in Chinese Language with Verbal SRL information and Automatic Predicate Recognition. In Proceedings of EMNLP 2009. Chang Liu and Hwee Tou Ng. 2007. Learning Predictive Structures for Semantic Role Labeling of NomBank. In Proceedings of ACL 2007. Paola Merlo and Gabriele Mussillo. 2005. Accurate Function Parsing. In Proceedings of EMNLP 2005. Paola Merlo and Gabriele Musillo. 2008. Semantic Parsing for High-Precision Semantic Role Labelling. In Proceedings of CoNLL 2008. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating Noun Argument Structure for NomBank. In Proceedings of LREC 2004. Scott Miller, Heidi Fox, Lance Ramshaw, and Ralph Weischedel. 2000. A Novel Use of Statistical Parsing to Extract Information from Text. In Proceedings of ANLP 2000. Srini Narayanan and Sanda Harabagiu. 2004. Question Answering based on Semantic Structures. In Proceedings of COLING 2004. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese Part-of-Speech Tagging: One-at-a-Time or All-at-Once? Word-Based or Character-Based? In Proceedings of EMNLP 2004. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31, 71-106. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceesings of NAACL 2007. Sameer Pradhan, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Support Vector Learning for Semantic Argument Classification. Machine Learning, 2005, 60:11-39. Adwait Ratnaparkhi. 1999. Learning to Parse Natural Language with Maximum Entropy Models. Machine Learning, 34, 151-175. Mihai Surdeanu, Sanda Harabagiu, John Williams and Paul Aarseth. 2003. Using Predicate-Argument Structures for Information Extraction. In Proceedings of ACL 2003. Mihai Surdeanu, Richard Johansson, Adam Meyers, Lluis Màrquez, and Joakim Nivre. 2008. The CoNLL-2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. In Proceedings of CoNLL 2008. Charles Sutton and Andrew McCallum. 2005. Joint Parsing and Semantic Role Labeling. In Proceedings of CoNLL2005. Nianwen Xue and Martha Palmer. 2003. Annotating the Propositions in the Penn Chinese TreeBank. In Proceedings of the 2nd SIGHAN Workshop on Chinese Language Processing. Nianwen Xue. 2006. Annotating the Predicate-Argument Structure of Chinese Nominalizations. In Proceedings of LREC 2006. Nianwen Xue. 2008. Labeling Chinese Predicates with Semantic Roles. Computational Linguistics, 34(2):225-255. Szu-ting Yi and Martha Palmer. 2005. The Integration of Syntactic Parsing and Semantic Role Labeling. In Proceedings of CoNLL 2005. Yue Zhang and Stephen Clark. 2008. Joint Word Segmentation and POS Tagging Using a Single Perceptron. In Proceedings of ACL 2008. 1117
2010
113
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1118–1127, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Cross-Language Text Classification using Structural Correspondence Learning Peter Prettenhofer and Benno Stein Bauhaus-Universit¨at Weimar D-99421 Weimar, Germany {peter.prettenhofer,benno.stein}@uni-weimar.de Abstract We present a new approach to crosslanguage text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation. The approach uses unlabeled documents, along with a simple word translation oracle, in order to induce taskspecific, cross-lingual word correspondences. We report on analyses that reveal quantitative insights about the use of unlabeled data and the complexity of interlanguage correspondence modeling. We conduct experiments in the field of cross-language sentiment classification, employing English as source language, and German, French, and Japanese as target languages. The results are convincing; they demonstrate both the robustness and the competitiveness of the presented ideas. 1 Introduction This paper deals with cross-language text classification problems. The solution of such problems requires the transfer of classification knowledge between two languages. Stated precisely: We are given a text classification task γ in a target language T for which no labeled documents are available. γ may be a spam filtering task, a topic categorization task, or a sentiment classification task. In addition, we are given labeled documents for the identical task in a different source language S. Such type of cross-language text classification problems are addressed by constructing a classifier fS with training documents written in S and by applying fS to unlabeled documents written in T . For the application of fS under language T different approaches are current practice: machine translation of unlabeled documents from T to S, dictionary-based translation of unlabeled documents from T to S, or language-independent concept modeling by means of comparable corpora. The mentioned approaches have their pros and cons, some of which are discussed below. Here we propose a different approach to crosslanguage text classification which adopts ideas from the field of multi-task learning (Ando and Zhang, 2005a). Our approach builds upon structural correspondence learning, SCL, a recently proposed theory for domain adaptation in the field of natural language processing (Blitzer et al., 2006). Similar to SCL, our approach induces correspondences among the words from both languages by means of a small number of so-called pivots. In our context a pivot is a pair of words, {wS, wT }, from the source language S and the target language T , which possess a similar semantics. Testing the occurrence of wS or wT in a set of unlabeled documents from S and T yields two equivalence classes across these languages: one class contains the documents where either wS or wT occur, the other class contains the documents where neither wS nor wT occur. Ideally, a pivot splits the set of unlabeled documents with respect to the semantics that is associated with {wS, wT }. The correlation between wS or wT and other words w, w ̸∈{wS, wT } is modeled by a linear classifier, which then is used as a language-independent predictor for the two equivalence classes. 
As we will see, a small number of pivots can capture a sufficiently large part of the correspondences between S and T in order to (1) construct a cross-lingual representation and (2) learn a classifier fST for the task γ that operates on this representation. Several advantages follow from our approach: • Task specificity. The approach exploits the words’ pragmatics since it considers—during the pivot selection step—task-specific characteristics of language use. 1118 • Efficiency in terms of linguistic resources. The approach uses unlabeled documents from both languages along with a small number (100 - 500) of translated words, instead of employing a parallel corpus or an extensive bilingual dictionary. • Efficiency in terms of computing resources. The approach solves the classification problem directly, instead of resorting to a more general and potentially much harder problem such as machine translation. Note that the use of such technology is prohibited in certain situations (market competitors) or restricted by environmental constraints (offline situations, high latency, bandwidth capacity). Contributions Our contributions to the outlined field are threefold: First, the identification and utilization of the theory of SCL to cross-language text classification, which has, to the best of our knowledge, not been investigated before. Second, the further development and adaptation of SCL towards a technology that is competitive with the state-of-the-art in cross-language text classification. Third, an in-depth analysis with respect to important hyperparameters such as the ratio of labeled and unlabeled documents, the number of pivots, and the optimum dimensionality of the cross-lingual representation. In this connection we compile extensive corpora in the languages English, German, French, and Japanese, and for different sentiment classification tasks. The paper is organized as follows: Section 2 surveys related work. Section 3 states the terminology for cross-language text classification. Section 4 describes our main contribution, a new approach to cross-language text classification based on structural correspondence learning. Section 5 presents experimental results in the context of cross-language sentiment classification. 2 Related Work Cross-Language Text Classification Bel et al. (2003) belong to the first who explicitly considered the problem of cross-language text classification. Their research, however, is predated by work in cross-language information retrieval, CLIR, where similar problems are addressed (Oard, 1998). Traditional approaches to crosslanguage text classification and CLIR use linguistic resources such as bilingual dictionaries or parallel corpora to induce correspondences between two languages (Lavrenko et al., 2002; Olsson et al., 2005). Dumais et al. (1997) is considered as seminal work in CLIR: they propose a method which induces semantic correspondences between two languages by performing latent semantic analysis, LSA, on a parallel corpus. Li and Taylor (2007) improve upon this method by employing kernel canonical correlation analysis, CCA, instead of LSA. The major limitation of these approaches is their computational complexity and, in particular, the dependence on a parallel corpus, which is hard to obtain—especially for less resource-rich languages. Gliozzo and Strapparava (2005) circumvent the dependence on a parallel corpus by using so-called multilingual domain models, which can be acquired from comparable corpora in an unsupervised manner. 
In (Gliozzo and Strapparava, 2006) they show for particular tasks that their approach can achieve a performance close to that of monolingual text classification. Recent work in cross-language text classification focuses on the use of automatic machine translation technology. Most of these methods involve two steps: (1) translation of the documents into the source or the target language, and (2) dimensionality reduction or semi-supervised learning to reduce the noise introduced by the machine translation. Methods which follow this twostep approach include the EM-based approach by Rigutini et al. (2005), the CCA approach by Fortuna and Shawe-Taylor (2005), the information bottleneck approach by Ling et al. (2008), and the co-training approach by Wan (2009). Domain Adaptation Domain adaptation refers to the problem of adapting a statistical classifier trained on data from one (or more) source domains (e.g., newswire texts) to a different target domain (e.g., legal texts). In the basic domain adaptation setting we are given labeled data from the source domain and unlabeled data from the target domain, and the goal is to train a classifier for the target domain. Beyond this setting one can further distinguish whether a small amount of labeled data from the target domain is available (Daume, 2007; Finkel and Manning, 2009) or not (Blitzer et al., 2006; Jiang and Zhai, 2007). The latter setting is referred to as unsupervised domain adaptation. 1119 Note that, cross-language text classification can be cast as an unsupervised domain adaptation problem by considering each language as a separate domain. Blitzer et al. (2006) propose an effective algorithm for unsupervised domain adaptation, called structural correspondence learning. First, SCL identifies features that generalize across domains, which the authors call pivots. SCL then models the correlation between the pivots and all other features by training linear classifiers on the unlabeled data from both domains. This information is used to induce correspondences among features from the different domains and to learn a shared representation that is meaningful across both domains. SCL is related to the structural learning paradigm introduced by Ando and Zhang (2005a). The basic idea of structural learning is to constrain the hypothesis space of a learning task by considering multiple different but related tasks on the same input space. Ando and Zhang (2005b) present a semi-supervised learning method based on this paradigm, which generates related tasks from unlabeled data. Quattoni et al. (2007) apply structural learning to image classification in settings where little labeled data is given. 3 Cross-Language Text Classification This section introduces basic models and terminology. In standard text classification, a document d is represented under the bag-of-words model as |V |-dimensional feature vector x ∈X, where V , the vocabulary, denotes an ordered set of words, xi ∈x denotes the normalized frequency of word i in d, and X is an inner product space. DS denotes the training set and comprises tuples of the form (x, y), which associate a feature vector x ∈X with a class label y ∈Y . The goal is to find a classifier f : X →Y that predicts the labels of new, previously unseen documents. Without loss of generality we restrict ourselves to binary classification problems and linear classifiers, i.e., Y = {+1, -1} and f(x) = sign(wT x). w is a weight vector that parameterizes the classifier, [·]T denotes the matrix transpose. 
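The document model just defined can be written down in a few lines; vocab is a word-to-index mapping over the ordered vocabulary V, and the names are illustrative.

import numpy as np

def bow_vector(doc, vocab):
    # normalized term-frequency vector over the ordered vocabulary V
    x = np.zeros(len(vocab))
    for w in doc:
        if w in vocab:
            x[vocab[w]] += 1.0
    total = x.sum()
    return x / total if total > 0 else x

def predict(w, x):
    # linear classifier f(x) = sign(w^T x) over Y = {+1, -1}
    return 1 if float(w @ x) >= 0.0 else -1

In the cross-language setting introduced next, source- and target-language documents occupy disjoint coordinates of this vector, which is exactly the feature barrier that the map θ is meant to bridge.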
The computation of w from DS is referred to as model estimation or training. A common choice for w is given by a vector w∗that minimizes the regularized training error: w∗= argmin w∈R|V | X (x,y)∈DS L(y, wT x) + λ 2 ∥w∥2 (1) L is a loss function that measures the quality of the classifier, λ is a non-negative regularization parameter that penalizes model complexity, and ∥w∥2 = wT w. Different choices for L entail different classifier types; e.g., when choosing the hinge loss function for L one obtains the popular Support Vector Machine classifier (Zhang, 2004). Standard text classification distinguishes between labeled (training) documents and unlabeled (test) documents. Cross-language text classification poses an extra constraint in that training documents and test documents are written in different languages. Here, the language of the training documents is referred to as source language S, and the language of the test documents is referred to as target language T . The vocabulary V divides into VS and VT , called vocabulary of the source language and vocabulary of the target language, with VS ∩VT = ∅. I.e., documents from the training set and the test set map on two non-overlapping regions of the feature space. Thus, a linear classifier fS trained on DS associates non-zero weights only with words from VS, which in turn means that fS cannot be used to classify documents written in T . One way to overcome this “feature barrier” is to find a cross-lingual representation for documents written in S and T , which enables the transfer of classification knowledge between the two languages. Intuitively, one can understand such a cross-lingual representation as a concept space that underlies both languages. In the following, we will use θ to denote a map that associates the original |V |-dimensional representation of a document d written in S or T with its cross-lingual representation. Once such a mapping is found the cross-language text classification problem reduces to a standard classification problem in the crosslingual space. Note that the existing methods for cross-language text classification can be characterized by the way θ is constructed. For instance, cross-language latent semantic indexing (Dumais et al., 1997) and cross-language explicit semantic analysis (Potthast et al., 2008) estimate θ using a parallel corpus. Other methods use linguistic resources such as a bilingual dictionary to obtain θ (Bel et al., 2003; Olsson et al., 2005). 1120 4 Cross-Language Structural Correspondence Learning We now present a novel method for learning a map θ by exploiting relations from unlabeled documents written in S and T . The proposed method, which we call cross-language structural correspondence learning, CL-SCL, addresses the following learning setup (see also Figure 1): • Given a set of labeled training documents DS written in language S, the goal is to create a text classifier for documents written in a different language T . We refer to this classification task as the target task. An example for the target task is the determination of sentiment polarity, either positive or negative, of book reviews written in German (T ) given a set of training reviews written in English (S). • In addition to the labeled training documents DS we have access to unlabeled documents DS,u and DT ,u from both languages S and T . Let Du denote DS,u ∪DT ,u. 
• Finally, we are given a budget of calls to a word translation oracle (e.g., a domain expert) to map words in the source vocabulary VS to their corresponding translations in the target vocabulary VT . For simplicity and without loss of applicability we assume here that the word translation oracle maps each word in VS to exactly one word in VT . CL-SCL comprises three steps: In the first step, CL-SCL selects word pairs {wS, wT }, called pivots, where wS ∈VS and wT ∈VT . Pivots have to satisfy the following conditions: Confidence Both words, wS and wT , are predictive for the target task. Support Both words, wS and wT , occur frequently in DS,u and DT ,u respectively. The confidence condition ensures that, in the second step of CL-SCL, only those correlations are modeled that are useful for discriminative learning. The support condition, on the other hand, ensures that these correlations can be estimated accurately. Considering our sentiment classification example, the word pair {excellentS, exzellentT } satisfies both conditions: (1) the words are strong indicators of positive sentiment, Words in VS Class label term frequencies Negative class label Positive class label Words in VT ... , x|V|) x = (x1 , ... DS DS,u DT,u Du No value y Figure 1: The document sets underlying CL-SCL. The subscripts S, T , and u designate “source language”, “target language”, and “unlabeled”. and (2) the words occur frequently in book reviews from both languages. Note that the support of wS and wT can be determined from the unlabeled data Du. The confidence, however, can only be determined for wS since the setting gives us access to labeled data from S only. We use the following heuristic to form an ordered set P of pivots: First, we choose a subset VP from the source vocabulary VS, |VP | ≪|VS|, which contains those words with the highest mutual information with respect to the class label of the target task in DS. Second, for each word wS ∈VP we find its translation in the target vocabulary VT by querying the translation oracle; we refer to the resulting set of word pairs as the candidate pivots, P ′ : P ′ = {{wS, TRANSLATE(wS)} | wS ∈VP } We then enforce the support condition by eliminating in P ′ all candidate pivots {wS, wT } where the document frequency of wS in DS,u or of wT in DT ,u is smaller than some threshold φ: P = CANDIDATEELIMINATION(P ′, φ) Let m denote |P|, the number of pivots. In the second step, CL-SCL models the correlations between each pivot {wS, wT } ∈P and all other words w ∈V \ {wS, wT }. This is done by training linear classifiers that predict whether or not wS or wT occur in a document, based on the other words. For this purpose a training set Dl is created for each pivot pl ∈P : Dl = {(MASK(x, pl), IN(x, pl)) | x ∈Du} 1121 MASK(x, pl) is a function that returns a copy of x where the components associated with the two words in pl are set to zero—which is equivalent to removing these words from the feature space. IN(x, pl) returns +1 if one of the components of x associated with the words in pl is non-zero and -1 otherwise. For each Dl a linear classifier, characterized by the parameter vector wl, is trained by minimizing Equation (1) on Dl. Note that each training set Dl contains documents from both languages. Thus, for a pivot pl = {wS, wT } the vector wl captures both the correlation between wS and VS \ {wS} and the correlation between wT and VT \ {wT }. 
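The first two steps of CL-SCL can be sketched as follows. Documents are plain word lists, translate() stands for the word translation oracle, and df_s/df_t are document-frequency tables over DS,u and DT,u; all names are illustrative.

def select_pivots(ranked_vp, translate, df_s, df_t, m, phi):
    # ranked_vp: source words ordered by mutual information with the class label;
    # keep candidate pairs {w_S, TRANSLATE(w_S)} whose document frequency is at
    # least phi in both unlabeled collections (the support condition)
    pivots = []
    for w_s in ranked_vp:
        w_t = translate(w_s)
        if df_s.get(w_s, 0) >= phi and df_t.get(w_t, 0) >= phi:
            pivots.append((w_s, w_t))
            if len(pivots) == m:
                break
    return pivots

def pivot_training_set(docs_u, pivot):
    # D_l = {(MASK(x, p_l), IN(x, p_l)) | x in D_u}: predict whether one of the
    # two pivot words occurs, from all remaining words
    w_s, w_t = pivot
    data = []
    for doc in docs_u:
        label = 1 if (w_s in doc or w_t in doc) else -1
        masked = [w for w in doc if w not in (w_s, w_t)]
        data.append((masked, label))
    return data

Training one linear classifier on each Dl, with the same regularized objective as in Equation (1), yields the columns wl of the matrix W used in the third step.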
In the third step, CL-SCL identifies correlations across pivots by computing the singular value decomposition of the |V |×m-dimensional parameter matrix W, W =  w1 . . . wm  : UΣVT = SVD(W) Recall that W encodes the correlation structure between pivot and non-pivot words in the form of multiple linear classifiers. Thus, the columns of U identify common substructures among these classifiers. Choosing the columns of U associated with the largest singular values yields those substructures that capture most of the correlation in W. We define θ as those columns of U that are associated with the k largest singular values: θ = UT [1:k, 1:|V |] Algorithm 1 summarizes the three steps of CLSCL. At training and test time, we apply the projection θ to each input instance x. The vector v∗ that minimizes the regularized training error for DS in the projected space is defined as follows: v∗= argmin v∈Rk X (x,y)∈DS L(y, vT θx) + λ 2 ∥v∥2 (2) The resulting classifier fST , which will operate in the cross-lingual setting, is defined as follows: fST (x) = sign(v∗T θx) 4.1 An Alternative View of CL-SCL An alternative view of cross-language structural correspondence learning is provided by the framework of structural learning (Ando and Zhang, 2005a). The basic idea of structural learning is Algorithm 1 CL-SCL Input: Labeled source data DS Unlabeled data Du = DS,u ∪DT ,u Parameters: m, k, λ, and φ Output: k × |V |-dimensional matrix θ 1. SELECTPIVOTS(DS, m) VP = MUTUALINFORMATION(DS) P ′ = {{wS, TRANSLATE(wS)} | wS ∈VP } P = CANDIDATEELIMINATION(P ′, φ) 2. TRAINPIVOTPREDICTORS(Du, P) for l = 1 to m do Dl = {(MASK(x, pl), IN(x, pl)) | x ∈Du} wl = argmin w∈R|V | P (x,y)∈Dl L(y, wT x)) + λ 2 ∥w∥2 end for W = w1 . . . wm  3. COMPUTESVD(W, k) UΣVT = SVD(W) θ = UT [1:k, 1:|V |] output {θ} to constrain the hypothesis space, i.e., the space of possible weight vectors, of the target task by considering multiple different but related prediction tasks. In our context these auxiliary tasks are represented by the pivot predictors, i.e., the columns of W. Each column vector wl can be considered as a linear classifier which performs well in both languages. I.e., we regard the column space of W as an approximation to the subspace of bilingual classifiers. By computing SVD(W) one obtains a compact representation of this column space in the form of an orthonormal basis θT . The subspace is used to constrain the learning of the target task by restricting the weight vector w to lie in the subspace defined by θT . Following Ando and Zhang (2005a) and Quattoni et al. (2007) we choose w for the target task to be w∗= θT v∗, where v∗is defined as follows: v∗= argmin v∈Rk X (x,y)∈DS L(y, (θT v)T x) + λ 2 ∥v∥2 (3) Since (θT v)T = vT θ it follows that this view of CL-SCL corresponds to the induction of a new feature space given by Equation 2. 1122 5 Experiments We evaluate CL-SCL for the task of crosslanguage sentiment classification using English as source language and German, French, and Japanese as target languages. Special emphasis is put on corpus construction, determination of upper bounds and baselines, and a sensitivity analysis of important hyperparameters. All data described in the following is publicly available from our project website.1 5.1 Dataset and Preprocessing We compiled a new dataset for cross-language sentiment classification by crawling product reviews from Amazon.{de | fr | co.jp}. The crawled part of the corpus contains more than 4 million reviews in the three languages German, French, and Japanese. 
The corpus is extended with English product reviews provided by Blitzer et al. (2007). Each review contains a category label, a title, the review text, and a rating of 1-5 stars. Following Blitzer et al. (2007) a review with >3 (<3) stars is labeled as positive (negative); other reviews are discarded. For each language the labeled reviews are grouped according to their category label, whereas we restrict our experiments to three categories: books, dvds, and music. Since most of the crawled reviews are positive (80%), we decide to balance the number of positive and negative reviews. In this study, we are interested in whether the cross-lingual representation induced by CL-SCL captures the difference between positive and negative reviews; by balancing the reviews we ensure that the imbalance does not affect the learned model. Balancing is achieved by deleting reviews from the majority class uniformly at random for each languagespecific category. The resulting sets are split into three disjoint, balanced sets, containing training documents, test documents, and unlabeled documents; the respective set sizes are 2,000, 2,000, and 9,000-50,000. See Table 1 for details. For each of the nine target-language-categorycombinations a text classification task is created by taking the training set of the product category in S and the test set of the same product category in T . A document d is described as normalized feature vector x under a unigram bag-of-words document representation. The morphological analyzer 1http://www.webis.de/research/corpora/ webis-cls-10/ MeCab is used for Japanese word segmentation.2 5.2 Implementation Throughout the experiments linear classifiers are employed; they are trained by minimizing Equation (1), using a stochastic gradient descent (SGD) algorithm. In particular, the learning rate schedule from PEGASOS is adopted (Shalev-Shwartz et al., 2007), and the modified Huber loss, introduced by Zhang (2004), is chosen as loss function L.3 SGD receives two hyperparameters as input: the number of iterations T, and the regularization parameter λ. In our experiments T is always set to 106, which is about the number of iterations required for SGD to converge. For the target task, λ is determined by 3-fold cross-validation, testing for λ all values 10−i, i ∈[0; 6]. For the pivot prediction task, λ is set to the small value of 10−5, in order to favor model accuracy over generalizability. The computational bottleneck of CL-SCL is the SVD of the dense parameter matrix W. Here we follow Blitzer et al. (2006) and set the negative values in W to zero, which yields a sparse representation. For the SVD computation the Lanczos algorithm provided by SVDLIBC is employed.4 We investigated an alternative approach to obtain a sparse W by directly enforcing sparse pivot predictors wl through L1-regularization (Tsuruoka et al., 2009), but didn’t pursue this strategy due to unstable results. Since SGD is sensitive to feature scaling the projection θx is post-processed as follows: (1) Each feature of the cross-lingual representation is standardized to zero mean and unit variance, where mean and variance are estimated on DS ∪Du. (2) The cross-lingual document representations are scaled by a constant α such that |DS|−1 P x∈DS ∥αθx∥= 1. 
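The implementation just described can be condensed into a simplified numpy sketch, covering the SGD training with the PEGASOS learning-rate schedule and the modified Huber loss, the SVD step with negative entries of W zeroed, and the feature post-processing. This is a reconstruction under the stated settings, not the released code, and dense arrays stand in for the sparse representations used in practice.

import numpy as np

def modified_huber_grad(y, score):
    # derivative of the modified Huber loss (Zhang, 2004) w.r.t. the score
    margin = y * score
    if margin >= 1.0:
        return 0.0
    if margin >= -1.0:
        return -2.0 * y * (1.0 - margin)
    return -4.0 * y

def sgd_train(X, y, lam, T=10**6, seed=0):
    # minimize the regularized training error of Equation (1) with plain SGD,
    # using the PEGASOS-style learning rate eta_t = 1 / (lam * t)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)
        g = modified_huber_grad(y[i], float(w @ X[i]))
        w = (1.0 - eta * lam) * w - eta * g * X[i]
    return w

def induce_theta(W, k):
    # zero out negative entries of W (following Blitzer et al., 2006) and keep
    # the k left singular vectors with the largest singular values; numpy's
    # dense SVD stands in here for the Lanczos routine of SVDLIBC
    W = np.maximum(W, 0.0)
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k].T                         # theta is a k x |V| projection

def postprocess(Z_labeled, Z_reference):
    # (1) standardize each cross-lingual feature with mean/variance estimated
    #     on D_S ∪ D_u (Z_reference); (2) rescale so that the average norm of
    #     the projected labeled vectors equals 1
    mu, sd = Z_reference.mean(axis=0), Z_reference.std(axis=0) + 1e-12
    Z = (Z_labeled - mu) / sd
    alpha = 1.0 / np.linalg.norm(Z, axis=1).mean()
    return alpha * Z, (mu, sd, alpha)

def f_st(x, theta, v):
    # final cross-lingual classifier f_ST(x) = sign(v^T theta x); v is assumed
    # to have been trained with sgd_train on the projected, post-processed data
    return 1 if float(v @ (theta @ x)) >= 0.0 else -1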
We use Google Translate as word translation oracle, which returns a single translation for each query word.5 Though such a context free translation is suboptimum we do not sanitize the returned words to demonstrate the robustness of CL-SCL with respect to translation noise. To ensure the reproducibility of our results we cache all queries to the translation oracle. 2http://mecab.sourceforge.net 3Our implementation is available at http://github. com/pprett/bolt 4http://tedlab.mit.edu/˜dr/SVDLIBC/ 5http://translate.google.com 1123 T Category Unlabeled data Upper Bound CL-MT CL-SCL |DS,u| |DT ,u| µ σ µ σ ∆ µ σ ∆ books 50,000 50,000 83.79 (±0.20) 79.68 (±0.13) 4.11 79.50 (±0.33) 4.29 German dvd 30,000 50,000 81.78 (±0.27) 77.92 (±0.25) 3.86 76.92 (±0.07) 4.86 music 25,000 50,000 82.80 (±0.13) 77.22 (±0.23) 5.58 77.79 (±0.02) 5.00 books 50,000 32,000 83.92 (±0.14) 80.76 (±0.34) 3.16 78.49 (±0.03) 5.43 French dvd 30,000 9,000 83.40 (±0.28) 78.83 (±0.19) 4.57 78.80 (±0.01) 4.60 music 25,000 16,000 86.09 (±0.13) 75.78 (±0.65) 10.31 77.92 (±0.03) 8.17 books 50,000 50,000 79.39 (±0.27) 70.22 (±0.27) 9.17 73.09 (±0.07) 6.30 Japanese dvd 30,000 50,000 81.56 (±0.28) 71.30 (±0.28) 10.26 71.07 (±0.02) 10.49 music 25,000 50,000 82.33 (±0.13) 72.02 (±0.29) 10.31 75.11 (±0.06) 7.22 Table 1: Cross-language sentiment classification results. For each task, the number of unlabeled documents from S and T is given. Accuracy scores (mean µ and standard deviation σ of 10 repetitions of SGD) on the test set of the target language T are reported. ∆gives the difference in accuracy to the upper bound. CL-SCL uses m = 450, k = 100, and φ = 30. 5.3 Upper Bound and Baseline To get an upper bound on the performance of a cross-language method we first consider the monolingual setting. For each target-languagecategory-combination a linear classifier is learned on the training set and tested on the test set. The resulting accuracy scores are referred to as upper bound; it informs us about the expected performance on the target task if training data in the target language is available. We chose a machine translation baseline to compare CL-SCL to another cross-language method. Statistical machine translation technology offers a straightforward solution to the problem of cross-language text classification and has been used in a number of cross-language sentiment classification studies (Hiroshi et al., 2004; Bautin et al., 2008; Wan, 2009). Our baseline CL-MT works as follows: (1) learn a linear classifier on the training data, and (2) translate the test documents into the source language,6 (3) predict 6Again we use Google Translate. the sentiment polarity of the translated test documents. Note that the baseline CL-MT does not make use of unlabeled documents. 5.4 Performance Results and Sensitivity Table 1 contrasts the classification performance of CL-SCL with the upper bound and with the baseline. Observe that the upper bound does not exhibit a great variability across the three languages. The average accuracy is about 82%, which is consistent with prior work on monolingual sentiment analysis (Pang et al., 2002; Blitzer et al., 2007). The performance of CL-MT, however, differs considerably between the two European languages and Japanese: for Japanese, the average difference between the upper bound and CL-MT (9.9%) is about twice as much as for German and French (5.3%). This difference can be explained by the fact that machine translation works better for European than for Asian languages such as Japanese. 
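For completeness, the CL-MT baseline follows exactly the three steps listed in Section 5.3; in the sketch below vectorize, train, and translate_doc are injected stand-ins (the last one for the machine translation service), so this is an outline rather than the actual experimental pipeline.

import numpy as np

def cl_mt_baseline(train_docs_s, labels_s, test_docs_t,
                   vectorize, train, translate_doc):
    X = np.array([vectorize(d) for d in train_docs_s])
    w = train(X, np.array(labels_s))                    # (1) train on source data
    predictions = []
    for d in test_docs_t:
        x = vectorize(translate_doc(d))                 # (2) translate test doc to S
        predictions.append(1 if float(w @ x) >= 0.0 else -1)   # (3) classify
    return predictions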
Recall that CL-SCL receives three hyperparameters as input: the number of pivots m, the dimensionality of the cross-lingual representation k, Pivot English German Semantics Pragmatics Semantics Pragmatics {beautifulS, sch¨onT } amazing, beauty, picture, pattern, poetry, sch¨oner (more beautiful), bilder (pictures), lovely photographs, paintings traurig (sad) illustriert (illustrated) {boringS, langweiligT } plain, asleep, characters, pages, langatmig (lengthy), charaktere (characters), dry, long story einfach (plain), handlung (plot), entt¨auscht (disappointed) seiten (pages) Table 2: Semantic and pragmatic correlations identified for the two pivots {beautifulS, sch¨onT } and {boringS, langweiligT } in English and German book reviews. 1124 Figure 2: Influence of unlabeled data and hyperparameters on the performance of CL-SCL. The rows show the performance of CL-SCL as a function of (1) the ratio between labeled and unlabeled documents, (2) the number of pivots m, and (3) the dimensionality of the cross-lingual representation k. and the minimum support φ of a pivot in DS,u and DT ,u. For comparison purposes we use fixed values of m = 450, k = 100, and φ = 30. The results show the competitiveness of CL-SCL compared to CL-MT. Although CL-MT outperforms CL-SCL on most tasks for German and French, the difference in accuracy can be considered as small (<1%); merely for French book and music reviews the difference is about 2%. For Japanese, however, CL-SCL outperforms CL-MT on most tasks with a difference in accuracy of about 3%. The results indicate that if the difference between the upper bound and CL-MT is large, CL-SCL can circumvent the loss in accuracy. Experiments with language-specific settings revealed that for Japanese a smaller number of pivots (150<m<250) performs significantly better. Thus, the reported results for Japanese can be considered as pessimistic. Primarily responsible for the effectiveness of CL-SCL is its task specificity, i.e., the ways in which context contributes to meaning (pragmatics). Due to the use of task-specific, unlabeled data, relevant characteristics are captured by the pivot classifiers. Table 2 exemplifies this with two pivots for German book reviews. The rows of the table show those words which have the highest correlation with the pivots {beautifulS, sch¨onT } and {boringS, langweiligT }. We can distinguish between (1) correlations that reflect similar meaning, such as “amazing”, “lovely”, or “plain”, and (2) correlations that reflect the pivot pragmatics with respect to the task, such as “picture”, “poetry”, or “pages”. Note in this connection that authors of book reviews tend to use the word “beautiful” to refer to illustrations or poetry. While the first type of word correlations can be obtained by methods that operate on parallel corpora, the second type of correlation requires an understanding of the task-specific language use. In the following we discuss the sensitivity of each hyperparameter in isolation while keeping 1125 the others fixed at m = 450, k = 100, and φ = 30. The experiments are illustrated in Figure 2. Unlabeled Data The first row of Figure 2 shows the performance of CL-SCL as a function of the ratio of labeled and unlabeled documents. A ratio of 1 means that |DS,u| = |DT ,u| = 2,000, while a ratio of 25 corresponds to the setting of Table 1. As expected, an increase in unlabeled documents results in an improved performance, however, we observe a saturation at a ratio of 10 across all nine tasks. 
Number of Pivots The second row shows the influence of the number of pivots m on the performance of CL-SCL. Compared to the size of the vocabularies VS and VT , which is in 105 order of magnitude, the number of pivots is very small. The plots show that even a small number of pivots captures a significant amount of the correspondence between S and T . Dimensionality of the Cross-Lingual Representation The third row shows the influence of the dimensionality of the cross-lingual representation k on the performance of CL-SCL. Obviously the SVD is crucial to the success of CL-SCL if m is sufficiently large. Observe that the value of k is task-insensitive: a value of 75<k<150 works equally well across all tasks. 6 Conclusion The paper introduces a novel approach to crosslanguage text classification, called cross-language structural correspondence learning. The approach uses unlabeled documents along with a word translation oracle to automatically induce taskspecific, cross-lingual correspondences. Our contributions include the adaptation of SCL for the problem of cross-language text classification and a well-founded empirical analysis. The analysis covers performance and robustness issues in the context of cross-language sentiment classification with English as source language and German, French, and Japanese as target languages. The results show that CL-SCL is competitive with stateof-the-art machine translation technology while requiring fewer resources. Future work includes the extension of CL-SCL towards a general approach for cross-lingual adaptation of natural language processing technology. References Rie-K. Ando and Tong Zhang. 2005a. A framework for learning predictive structures from multiple tasks and unlabeled data. J. Mach. Learn. Res., 6:1817– 1853. Rie-K. Ando and Tong Zhang. 2005b. A highperformance semi-supervised learning method for text chunking. In Proceedings of ACL-05, pages 1– 9, Ann Arbor. Mikhail Bautin, Lohit Vijayarenu, and Steven Skiena. 2008. International sentiment analysis for news and blogs. In Proceedings of ICWSM-08, pages 19–26, Seattle. Nuria Bel, Cornelis H. A. Koster, and Marta Villegas. 2003. Cross-lingual text categorization. In Proceedings of ECDL-03, pages 126–139, Trondheim. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP-06, pages 120–128, Sydney. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL-07, pages 440–447, Prague. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL-07, pages 256–263, Prague. Susan T. Dumais, Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. 1997. Automatic cross-language retrieval using latent semantic indexing. In AAAI Symposium on CrossLanguage Text and Speech Retrieval. Jenny-R. Finkel and Christopher-D. Manning. 2009. Hierarchical bayesian domain adaptation. In Proceedings of HLT/NAACL-09, pages 602–610, Boulder. Blaˇz Fortuna and John Shawe-Taylor. 2005. The use of machine translation tools for cross-lingual text mining. In Proceedings of the ICML Workshop on Learning with Multiple Views. Alfio Gliozzo and Carlo Strapparava. 2005. Cross language text categorization by acquiring multilingual domain models from comparable corpora. In Proceedings of the ACL Workshop on Building and Using Parallel Texts. Alfio Gliozzo and Carlo Strapparava. 2006. 
Exploiting comparable corpora and bilingual dictionaries for cross-language text categorization. In Proceedings of ACL-06, pages 553–560, Sydney. Kanayama Hiroshi, Nasukawa Tetsuya, and Watanabe Hideo. 2004. Deeper sentiment analysis using machine translation technology. In Proceedings of COLING-04, pages 494–500, Geneva. 1126 Jing Jiang and Chengxiang Zhai. 2007. A two-stage approach to domain adaptation for statistical classifiers. In Proceedings of CIKM-07, pages 401–410, Lisbon. Victor Lavrenko, Martin Choquette, and W. Bruce Croft. 2002. Cross-lingual relevance models. In Proceedings of SIGIR-02, pages 175–182, Tampere. Yaoyong Li and John S. Taylor. 2007. Advanced learning algorithms for cross-language patent retrieval and classification. Inf. Process. Manage., 43(5):1183–1199. Xiao Ling, Gui-R. Xue, Wenyuan Dai, Yun Jiang, Qiang Yang, and Yong Yu. 2008. Can chinese web pages be classified with english data source? In Proceedings of WWW-08, pages 969–978, Beijing. Douglas W. Oard. 1998. A comparative study of query and document translation for cross-language information retrieval. In Proceedings of AMTA-98, pages 472–483, Langhorne. J. Scott Olsson, Douglas W. Oard, and Jan Hajiˇc. 2005. Cross-language text classification. In Proceedings of SIGIR-05, pages 645–646, Salvador. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of EMNLP-02, pages 79–86, Philadelphia. Martin Potthast, Benno Stein, and Maik Anderka. 2008. A wikipedia-based multilingual retrieval model. In Proceedings of ECIR-08, pages 522–530, Glasgow. Ariadna Quattoni, Michael Collins, and Trevor Darrell. 2007. Learning visual representations using images with captions. In Proceedings of CVPR-07, pages 1–8, Minneapolis. Leonardo Rigutini, Marco Maggini, and Bing Liu. 2005. An em based training algorithm for crosslanguage text categorization. In Proceedings of WI05, pages 529–535, Compi`egne. Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. 2007. Pegasos: Primal estimated sub-gradient solver for svm. In Proceedings of ICML-07, pages 807–814, Corvalis. Yoshimasa Tsuruoka, Jun’ichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for l1-regularized log-linear models with cumulative penalty. In Proceedings of ACL/AFNLP-09, pages 477–485, Singapore. Xiaojun Wan. 2009. Co-training for crosslingual sentiment classification. In Proceedings of ACL/AFNLP-09, pages 235–243, Singapore. Tong Zhang. 2004. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of ICML-04, pages 116– 124, Banff. 1127
2010
114
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1128–1137, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Cross-Lingual Latent Topic Extraction Duo Zhang University of Illinois at Urbana-Champaign [email protected] Qiaozhu Mei University of Michigan [email protected] ChengXiang Zhai University of Illinois at Urbana-Champaign [email protected] Abstract Probabilistic latent topic models have recently enjoyed much success in extracting and analyzing latent topics in text in an unsupervised way. One common deficiency of existing topic models, though, is that they would not work well for extracting cross-lingual latent topics simply because words in different languages generally do not co-occur with each other. In this paper, we propose a way to incorporate a bilingual dictionary into a probabilistic topic model so that we can apply topic models to extract shared latent topics in text data of different languages. Specifically, we propose a new topic model called Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) which extends the Probabilistic Latent Semantic Analysis (PLSA) model by regularizing its likelihood function with soft constraints defined based on a bilingual dictionary. Both qualitative and quantitative experimental results show that the PCLSA model can effectively extract cross-lingual latent topics from multilingual text data. 1 Introduction As a robust unsupervised way to perform shallow latent semantic analysis of topics in text, probabilistic topic models (Hofmann, 1999a; Blei et al., 2003b) have recently attracted much attention. The common idea behind these models is the following. A topic is represented by a multinomial word distribution so that words characterizing a topic generally have higher probabilities than other words. We can then hypothesize the existence of multiple topics in text and define a generative model based on the hypothesized topics. By fitting the model to text data, we can obtain an estimate of all the word distributions corresponding to the latent topics as well as the topic distributions in text. Intuitively, the learned word distributions capture clusters of words that co-occur with each other probabilistically. Although many topic models have been proposed and shown to be useful (see Section 2 for more detailed discussion of related work), most of them share a common deficiency: they are designed to work only for mono-lingual text data and would not work well for extracting cross-lingual latent topics, i.e. topics shared in text data in two different natural languages. The deficiency comes from the fact that all these models rely on co-occurrences of words forming a topical cluster, but words in different language generally do not co-occur with each other. Thus with the existing models, we can only extract topics from text in each language, but cannot extract common topics shared in multiple languages. In this paper, we propose a novel topic model, called Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) model, which can be used to mine shared latent topics from unaligned text data in different languages. PCLSA extends the Probabilistic Latent Semantic Analysis (PLSA) model by regularizing its likelihood function with soft constraints defined based on a bilingual dictionary. 
The dictionary-based constraints are key to bridge the gap of different languages and would force the captured co-occurrences of words in each language by PCLSA to be “synchronized” so that related words in the two languages would have similar probabilities. PCLSA can be estimated efficiently using the General ExpectationMaximization (GEM) algorithm. As a topic extraction algorithm, PCLSA would take a pair of unaligned document sets in different languages and a bilingual dictionary as input, and output a set of aligned word distributions in both languages that can characterize the shared topics in the two languages. In addition, it also outputs a topic cov1128 erage distribution for each language to indicate the relative coverage of different shared topics in each language. To the best of our knowledge, no previous work has attempted to solve this topic extraction problem and generate the same output. The closest existing work to ours is the MuTo model proposed in (Boyd-Graber and Blei, 2009) and the JointLDA model published recently in (Jagaralamudi and Daum´e III, 2010). Both used a bilingual dictionary to bridge the language gap in a topic model. However, the goals of their work are different from ours in that their models mainly focus on mining cross-lingual topics of matching word pairs and discovering the correspondence at the vocabulary level. Therefore, the topics extracted using their model cannot indicate how a common topic is covered differently in the two languages, because the words in each word pair share the same probability in a common topic. Our work focuses on discovering correspondence at the topic level. In our model, since we only add a soft constraint on word pairs in the dictionary, their probabilities in common topics are generally different, naturally capturing which shows the different variations of a common topic in different languages. We use a cross-lingual news data set and a review data set to evaluate PCLSA. We also propose a “cross-collection” likelihood measure to quantitatively evaluate the quality of mined topics. Experimental results show that the PCLSA model can effectively extract cross-lingual latent topics from multilingual text data, and it outperforms a baseline approach using the standard PLSA on text data in each language. 2 Related Work Many topic models have been proposed, and the two basic models are the Probabilistic Latent Semantic Analysis (PLSA) model (Hofmann, 1999a) and the Latent Dirichlet Allocation (LDA) model (Blei et al., 2003b). They and their extensions have been successfully applied to many problems, including hierarchical topic extraction (Hofmann, 1999b; Blei et al., 2003a; Li and McCallum, 2006), author-topic modeling (Steyvers et al., 2004), contextual topic analysis (Mei and Zhai, 2006), dynamic and correlated topic models (Blei and Lafferty, 2005; Blei and Lafferty, 2006), and opinion analysis (Mei et al., 2007; Branavan et al., 2008). Our work is an extension of PLSA by incorporating the knowledge of a bilingual dictionary as soft constraints. Such an extension is similar to the extension of PLSA for incorporating social network analysis (Mei et al., 2008a) but our constraint is different. Some previous work on multilingual topic models assume documents in multiple languages are aligned either at the document level, sentence level or by time stamps (Mimno et al., 2009; Zhao and Xing, 2006; Kim and Khudanpur, 2004; Ni et al., 2009; Wang et al., 2007). 
However, in many applications, we need to mine topics from unaligned text corpus. For example, mining topics from search results in different languages can facilitate summarization of multilingual search results. Besides all the multilingual topic modeling work discussed above, comparable corpora have also been studied extensively (e.g. (Fung, 1995; Franz et al., 1998; Masuichi et al., 2000; Sadat et al., 2003; Gliozzo and Strapparava, 2006)), but most previous work aims at acquiring word translation knowledge or cross-lingual text categorization from comparable corpora. Our work differs from this line of previous work in that our goal is to discover shared latent topics from multi-lingual text data that are weakly comparable (e.g. the data does not have to be aligned by time). 3 Problem Formulation In general, the problem of cross-lingual topic extraction can be defined as to extract a set of common cross-lingual latent topics covered in text collections in different natural languages. A crosslingual latent topic will be represented as a multinomial word distribution over the words in all the languages, i.e. a multilingual word distribution. For example, given two collections of news articles in English and Chinese, respectively, we would like to extract common topics simultaneously from the two collections. A discovered common topic, such as the terrorist attack on September 11, 2001, would be characterized by a word distribution that would assign relatively high probabilities to words related to this event in both English and Chinese (e.g. “terror”, “attack”, “afghanistan”, “taliban”, and their translations in Chinese). As a computational problem, our input is a multi-lingual text corpus, and output is a set of cross-lingual latent topics. We now define this problem more formally. 1129 Definition 1 (Multi-Lingual Corpus) A multilingual corpus C is a set of text collections {C1, C2, . . . , Cs}, where Ci = {di 1, di 2, . . . , di Mi} is a collection of documents in language Li with vocabulary Vi = {wi 1, wi 2, . . . , wi Ni}. Here, Mi is the total number of documents in Ci, Ni is the total number of words in Vi, and di j is a document in collection Ci. Following the common assumption of bag-ofwords representation, we represent document di j with a bag of words {wi j1, wi j2, . . . , wi jd}, and use c(wi k, di j) to denote the count of word wi k in document di j. Definition 2 (Cross-Lingual Topic): A crosslingual topic θ is a semantically coherent multinomial distribution over all the words in the vocabularies of languages L1, ..., Ls. That is, p(w|θ) would give the probability of a word w which can be in any of the s languages under consideration. θ is semantically coherent if it assigns high probabilities to words that are semantically related either in the same language or across different languages. Clearly, we have ∑s i=1 ∑ w∈Vi p(w|θ) = 1 for any cross-lingual topic θ. Definition 3 (Cross-Lingual Topic Extraction) Given a multi-lingual corpus C, the task of cross-lingual topic extraction is to model and extract k major cross-lingual topics {θ1, θ2, . . . , θk} from C, where θi is a cross-lingual topic, and k is a user specified parameter. The extracted cross-lingual topics can be directly used as a summary of the common content of the multi-lingual data set. Note that once a cross-lingual topic is extracted, we can easily obtain its representation in each language Li by “splitting” the cross-lingual topic into multiple word distributions in different languages. 
Formally, the word distribution of a cross-lingual topic θ in language L_i is given by

p_i(w_i | θ) = p(w_i | θ) / Σ_{w ∈ V_i} p(w | θ).

These aligned language-specific word distributions can directly reveal the variations of topics in different languages. They can also be used to analyze the difference in the coverage of the same topic in different languages. Moreover, they are also useful for retrieving relevant articles or passages in each language and aligning them to the same common topic, thus essentially also allowing us to integrate and align articles in multiple languages.

4 Probabilistic Cross-Lingual Latent Semantic Analysis In this section, we present our Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) model and discuss how it can be used to extract cross-lingual topics from multi-lingual text data. The main reason why existing topic models cannot be used for cross-lingual topic extraction is that they cannot cross the language barrier. Intuitively, in order to cross the language barrier and extract a common topic shared in articles in different languages, we must rely on some kind of linguistic knowledge. Our PCLSA model assumes the availability of bilingual dictionaries for at least some language pairs, which are generally available for major language pairs. Specifically, for text data in languages L_1, ..., L_s, if we represent each language as a node in a graph and connect those language pairs for which we have a bilingual dictionary, the minimum requirement is that the whole graph is connected. Thus, as a minimum, we will need s − 1 distinct bilingual dictionaries. This is so that we can potentially cross all the language barriers.

Our key idea is to "synchronize" the extraction of the monolingual "component topics" of a cross-lingual topic from the individual languages by forcing a cross-lingual topic word distribution to assign similar probabilities to words that are potential translations according to an L_i–L_j bilingual dictionary. We achieve this by adding such preferences formally to the likelihood function of a probabilistic topic model as "soft constraints", so that when we estimate the model, we try not only to fit the text data well (which is necessary to extract coherent component topics from each language), but also to satisfy our specified preferences (which would ensure that the extracted component topics in different languages are semantically related). Below we present how we implement this idea in more detail.

A bilingual dictionary for languages L_i and L_j generally gives us a many-to-many mapping between the vocabularies of the two languages. With such a mapping, we can construct a bipartite graph G_ij = (V_ij, E_ij) between the two languages, in which two words are connected with an edge if one can potentially be translated into the other. An edge can be weighted based on the probability of the corresponding translation. An example graph for a Chinese-English dictionary is shown in Figure 1.

Figure 1: A Dictionary-based Word Graph

With multiple bilingual dictionaries, we can merge the graphs to generate a multi-partite graph G = (V, E). Based on this graph, the PCLSA model extends the standard PLSA by adding a constraint to the likelihood function to "smooth" the word distributions of topics in PLSA on the multi-partite graph, so that we encourage words that are connected in the graph (i.e., possible translations of each other) to be given similar probabilities by every cross-lingual topic. Thus, when a cross-lingual topic picks up words that co-occur in mono-lingual text, it would prefer picking up word pairs whose translations in other languages also co-occur with each other, giving us a coherent multilingual word distribution that characterizes well the content of text in different languages.
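As a small illustration of this construction (not code from the paper), the merged dictionary graph can be stored as edge weights w(u, v) together with node degrees Deg(u); the function and variable names below are ours, and the toy dictionary entry is hypothetical.

```python
from collections import defaultdict

def build_word_graph(dictionaries):
    """Merge bilingual dictionaries into one multi-partite word graph.

    `dictionaries` is a list of iterables of (word_u, word_v) translation
    pairs; words of different languages are assumed to carry a language
    prefix (e.g. 'en:light', 'zh:...') so the vocabularies do not collide.
    Returns symmetric edge weights w(u, v) and node degrees Deg(u).
    """
    weight = defaultdict(float)
    degree = defaultdict(float)
    for entries in dictionaries:
        for u, v in entries:
            # Uniform edge weight of 1, as in the experiments reported in
            # this paper; translation probabilities could be used instead.
            weight[(u, v)] += 1.0
            weight[(v, u)] += 1.0
            degree[u] += 1.0
            degree[v] += 1.0
    return weight, degree

# A dictionary entry whose translation is a multi-word phrase contributes
# one edge per word of the phrase (the convention used later in Section 5.1).
cedict = [("zh:电灯", "en:electric"), ("zh:电灯", "en:light")]
w, deg = build_word_graph([cedict])
```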
Specifically, let Θ = {θ_j} (j = 1, ..., k) be a set of k cross-lingual topic models to be discovered from a multilingual text data set with s languages, such that p(w|θ_j) is the probability of word w according to the topic model θ_j. If we were to use the regular PLSA to model our data, we would have the following log-likelihood, and we would usually use a maximum likelihood estimator to estimate parameters and discover topics:

L(C) = Σ_{i=1}^{s} Σ_{d ∈ C_i} Σ_w c(w, d) log Σ_{j=1}^{k} p(θ_j|d) p(w|θ_j)

Our main extension is to add to L(C) a cross-lingual constraint term R(C) to incorporate the knowledge of bilingual dictionaries. R(C) is defined as

R(C) = (1/2) Σ_{⟨u,v⟩ ∈ E} w(u, v) Σ_{j=1}^{k} ( p(w_u|θ_j)/Deg(u) − p(w_v|θ_j)/Deg(v) )^2

where w(u, v) is the weight on the edge between u and v in the multi-partite graph G = (V, E), which in our experiments is set to 1, and Deg(u) is the degree of word u, i.e., the sum of the weights of all the edges ending at u. Intuitively, R(C) measures the difference between p(w_u|θ_j) and p(w_v|θ_j) for each pair (u, v) in a bilingual dictionary; the more they differ, the larger R(C) would be. So it can be regarded as a "loss function" that helps us assess how well the "component word distributions" in multiple languages are correlated semantically. Clearly, we would like the extracted topics to have a small R(C). We choose this specific form of loss function because it makes it convenient to solve the optimization problem of maximizing the corresponding regularized likelihood (Mei et al., 2008b). The normalization with Deg(u) and Deg(v) can be regarded as a way to compensate for the potential ambiguity of u and v in their translations.

Putting L(C) and R(C) together, we would like to maximize the following objective function, which is a regularized log-likelihood:

O(C, G) = (1 − λ) L(C) − λ R(C)    (1)

where λ ∈ (0, 1) is a parameter to balance the likelihood and the regularizer. When λ = 0, we recover the standard PLSA. Specifically, we will search for a set of values for all our parameters that maximizes the objective function defined above. Our parameters include all the cross-lingual topics and the coverage distributions of the topics in all documents, which we denote by Ψ = {p(w|θ_j), p(θ_j|d)}_{d,w,j}, where j = 1, ..., k, w varies over the entire vocabularies of all the languages, and d varies over all the documents in our collection.

This optimization problem can be solved using a Generalized Expectation-Maximization (GEM) algorithm as described in (Mei et al., 2008a). Specifically, in the E-step of the algorithm, the distribution of the hidden variables is computed using Eq. 2:

z(w, d, j) = p(θ_j|d) p(w|θ_j) / Σ_{j'} p(θ_{j'}|d) p(w|θ_{j'})    (2)

Then, in the M-step, we need to maximize the complete data likelihood Q(Ψ; Ψ_n) = (1 − λ) L'(C) − λ R(C), where

L'(C) = Σ_d Σ_w c(w, d) Σ_j z(w, d, j) log p(θ_j|d) p(w|θ_j)    (3)

with the constraints that Σ_j p(θ_j|d) = 1 and Σ_w p(w|θ_j) = 1. There is a closed-form solution if we only want to maximize the L'(C) part:

p^(n+1)(θ_j|d) = Σ_w c(w, d) z(w, d, j) / Σ_w Σ_{j'} c(w, d) z(w, d, j')
p^(n+1)(w|θ_j) = Σ_d c(w, d) z(w, d, j) / Σ_d Σ_{w'} c(w', d) z(w', d, j)    (4)

However, there is no closed-form solution in the M-step for the whole objective function. Fortunately, according to GEM we do not need to find the local maximum of Q(Ψ; Ψ_n) in every M-step; we only need to find a new value Ψ_{n+1} that improves the complete data likelihood, i.e., that ensures Q(Ψ_{n+1}; Ψ_n) ≥ Q(Ψ_n; Ψ_n). So our method is to first maximize the L'(C) part using Eq. 4 and then use Eq. 5 to gradually improve the R(C) part:

p^(t+1)(w_u|θ_j) = (1 − α) p^(t)(w_u|θ_j) + α Σ_{⟨u,v⟩ ∈ E} [w(u, v) / Deg(v)] p^(t)(w_v|θ_j)    (5)

Here, the parameter α is the length of each smoothing step. Obviously, after each smoothing step, the sum of the probabilities of all the words in one topic is still equal to 1. We smooth the parameters until we cannot get a better parameter set Ψ_{n+1}. Then, we continue to the next E-step. If there is no Ψ_{n+1} s.t. Q(Ψ_{n+1}; Ψ_n) ≥ Q(Ψ_n; Ψ_n), then we consider Ψ_n to be the local maximum point of the objective function in Eq. 1.
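To make the estimation procedure concrete, here is a minimal sketch of one GEM iteration under the updates in Eqs. 2–5, assuming dense NumPy arrays over the pooled vocabulary; the array and function names are ours, not the paper's, and for simplicity it takes a fixed number of smoothing steps rather than checking Q(Ψ; Ψ_n) after each one as the full procedure does.

```python
import numpy as np

def pclsa_gem_iteration(X, theta, pi, S, alpha=0.1, n_smooth=10):
    """One GEM iteration for PCLSA (illustrative sketch).

    X     : (D, V) document-word count matrix c(w, d) over all languages
    theta : (K, V) current p(w | theta_j); rows sum to 1
    pi    : (D, K) current p(theta_j | d); rows sum to 1
    S     : (V, V) matrix with S[u, v] = w(u, v) / Deg(v) built from the
            dictionary graph (zero where there is no edge)
    """
    # E-step (Eq. 2): responsibilities z(w, d, j), normalized over topics.
    resp = pi[:, :, None] * theta[None, :, :]              # (D, K, V)
    resp /= resp.sum(axis=1, keepdims=True) + 1e-12

    # M-step, closed-form part (Eq. 4): maximizes only the L'(C) term.
    weighted = X[:, None, :] * resp                        # c(w, d) * z(w, d, j)
    pi_new = weighted.sum(axis=2)
    pi_new /= pi_new.sum(axis=1, keepdims=True)
    theta_new = weighted.sum(axis=0)
    theta_new /= theta_new.sum(axis=1, keepdims=True)

    # Smoothing steps (Eq. 5): pull dictionary neighbours towards each other.
    # The columns of S sum to 1, so each row of theta_new still sums to 1
    # after every step.  The full GEM procedure keeps smoothing only while
    # Q(Psi_{n+1}; Psi_n) >= Q(Psi_n; Psi_n); here we take a fixed number
    # of steps.
    for _ in range(n_smooth):
        theta_new = (1 - alpha) * theta_new + alpha * theta_new @ S.T

    return theta_new, pi_new
```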
5 Experiment Design

5.1 Data Set The data set we used in our experiments is collected from news articles of the Xinhua English and Chinese newswires. The whole data set is quite large, containing around 40,000 articles in Chinese and 35,000 articles in English. For the different purposes of our experiments, we randomly selected different numbers of documents from the whole corpus, and we describe the concrete statistics in each experiment. To process the Chinese corpus, we use a simple segmenter (http://www.mandarintools.com/segmenter.html) to split the data into Chinese phrases. Both Chinese and English stopwords are removed from our data. The dictionary file we used for our PCLSA model is from mandarintools.com (http://www.mandarintools.com/cedict.html). For each Chinese phrase, if it has several English meanings, we add an edge between it and each of its English translations. If one English translation is an English phrase, we add an edge between the Chinese phrase and each English word in the phrase.

5.2 Baseline Method As a baseline method, we can apply the standard PLSA (Hofmann, 1999a) directly to the multilingual corpus. Since PLSA takes advantage of word co-occurrences at the document level to find semantic topics, directly using it on a multilingual corpus will result in topics mainly reflecting a single language (because words in different languages generally do not co-occur in the same document). That is, the discovered topics are mostly monolingual. These monolingual topics can then be aligned based on a bilingual dictionary to suggest a possible cross-lingual topic.

6 Experimental Results

6.1 Qualitative Comparison To qualitatively compare PCLSA with the baseline method, we compare the word distributions of the topics extracted by them. The data set we used in this experiment is selected from the Xinhua News data during the period from Jun. 8th, 2001 to Jun. 15th, 2001. There are 1799 English articles and 1485 Chinese articles in the data set in total. The number of topics to be extracted is set to 10 for both methods. Table 1 shows the experimental results. To make them easier to understand, we add an English translation to each Chinese phrase in our results. The first ten rows show sample topics from the traditional PLSA model. We can see that it only contains mono-language topics, i.e., the topics are either in Chinese or in English. The next ten rows are the results from our PCLSA model. Compared with the baseline method, PCLSA can not only find coherent topics in the cross-lingual corpus, but it can also show the content of a topic in both language corpora.
For example, in 'Topic 2', which is about 'Israel' and 'Palestine', the Chinese corpus mentions 'Arafat', the Palestinian leader, a lot, while the English corpus discusses topics such as 'cease fire' and 'women' more. Similarly, 'Topic 9' is related to the Philippines: the Chinese corpus mentions the environmental situation in the Philippines, while the English corpus mentions 'Abu Sayyaf' a lot.

6.2 Discovering Common Topics To demonstrate the ability of PCLSA to find common topics in a cross-lingual corpus, we use some event names, e.g., 'Shrine' and 'Olympic', as queries and randomly select a certain number of documents related to the queries from the whole corpus. The number of documents for each query in the synthetic data set is shown in Table 2.

Table 2: Synthetic Data Set from Xinhua News
English: Shrine (90), Olympic (101), Championship (70)
Chinese: CPC Anniversary (95), Afghan War (206), Championship (72)

In both the English corpus and the Chinese corpus, we select a smaller number of documents about the topic 'Championship' and combine them with the other two topics in the same corpus. In this way, when we want to extract two topics from either the English or the Chinese corpus, the 'Championship' topic may not be easy to extract, because the other two topics have more documents in the corpus. However, when we use PCLSA to extract four topics from the two corpora together, we expect that the topic 'Championship' will be found, because now the sum of English and Chinese documents related to 'Championship' is larger than for the other topics. The experimental result is shown in Table 3. The first two columns are the two topics extracted from the English corpus, the third and fourth columns are the two topics from the Chinese corpus, and the other four columns are the results from the cross-lingual corpus. We can see that in neither the Chinese sub-collection nor the English sub-collection is the topic 'Championship' extracted as a significant topic. But, as expected, the topic 'Championship' is extracted from the cross-lingual corpus, while the topics 'Olympic' and 'Shrine' are merged together. This demonstrates that PCLSA is capable of extracting common topics from a cross-lingual corpus.

6.3 Quantitative Evaluation We also quantitatively evaluate how well our PCLSA model can discover common topics among corpora in different languages. We propose a "cross-collection" likelihood measure for this purpose. The basic idea is the following: suppose we have obtained k cross-lingual topics from the whole corpus; then, for each topic, we split the topic into two separate sets of topics, English topics and Chinese topics, using the splitting formula described before, i.e.,

p_i(w_i | θ) = p(w_i | θ) / Σ_{w ∈ V_i} p(w | θ).

Then, we use the word distributions of the Chinese topics (translating the words into English) to fit the English corpus and the word distributions of the English topics (translating the words into Chinese) to fit the Chinese corpus. If the topics mined are common topics of the whole corpus, then such a "cross-collection" likelihood should be larger than that of topics which are not commonly shared by the English and the Chinese corpus. To calculate the likelihood of fit, we use the folding-in method proposed in (Hofmann, 2001). To translate topics from one language to another, e.g., Chinese to English, we look up the bilingual dictionary and do word-to-word translation. If one Chinese word has several English translations, we simply distribute its probability mass equally to each English translation.
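As a rough illustration of these two operations (splitting a cross-lingual topic by language and translating it word for word), here is a small sketch; the function names and the dictionary format (a word-to-translations mapping) are our own assumptions, not an interface from the paper.

```python
def split_topic(p_w_given_topic, lang_vocab):
    """p_i(w | theta) = p(w | theta) / sum over w' in V_i of p(w' | theta)."""
    mass = sum(p for w, p in p_w_given_topic.items() if w in lang_vocab)
    return {w: p / mass for w, p in p_w_given_topic.items() if w in lang_vocab}

def translate_topic(p_src, src_to_tgt):
    """Word-for-word translation of a topic; a word with several target-side
    translations spreads its probability mass equally over them."""
    p_tgt = {}
    for w, p in p_src.items():
        translations = src_to_tgt.get(w, [])
        if not translations:
            continue  # words without a dictionary entry are dropped in this sketch
        share = p / len(translations)
        for t in translations:
            p_tgt[t] = p_tgt.get(t, 0.0) + share
    return p_tgt
```

The translated distribution can be renormalized before it is used to score the other-language corpus.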
For comparison, we use the standard PLSA model as the baseline. Basically, suppose PLSA mined k semantic topics in the Chinese corpus and k semantic topics in the English corpus. Then, we also use the "cross-collection" likelihood measure to see how well those k Chinese topics fit the English corpus and those k English topics fit the Chinese corpus.

We collect three data sets in total to compare the performance. For the first data set, (English 1, Chinese 1), both the Chinese and the English corpus are chosen from the Xinhua News data during the period from 2001.06.08 to 2001.06.15, which gives 1799 English articles and 1485 Chinese articles. For the second data set, (English 2, Chinese 2), the Chinese corpus Chinese 2 is the same as Chinese 1, but the English corpus is chosen from 2001.06.14 to 2001.06.19, which gives 1547 documents. For the third data set, (English 3, Chinese 3), the Chinese corpus is the same as in data set one, but the English corpus is chosen from 2001.10.02 to 2001.10.07, which contains 1530 documents.

Table 1: Qualitative Evaluation

Table 3: Effectiveness of Extracting Common Topics (columns: English 1, English 2, Chinese 1, Chinese 2, Cross 1, Cross 2, Cross 3, Cross 4)

In other words, in the first data set the English and Chinese corpora are comparable with each other, because they cover similar events during the same period. In the second data set, the English and Chinese corpora share some common topics during the overlapping period. The third data set is the toughest one, since the two corpora are from different periods. The purpose of using these three different data sets for evaluation is to test how well PCLSA can mine common topics both from a data set where the English corpus and the Chinese corpus are comparable and from a data set where they rarely share common topics.

The experimental results are shown in Table 4. Each row shows the "cross-collection" likelihood of using the "cross-collection" topics to fit the data set named in the first column. For example, in the first row, the values are the "cross-collection" likelihoods of using the Chinese topics found by the different methods on the first data set to fit English 1. The last column shows how much improvement we obtain from PCLSA compared with PLSA.

Table 4: Quantitative Evaluation of Common Topic Finding ("cross-collection" log-likelihood)
            PCLSA          PLSA           Rel. Imprv.
English 1   -2.86294E+06   -3.03176E+06   5.6%
Chinese 1   -4.69989E+06   -4.85369E+06   3.2%
English 2   -2.48174E+06   -2.60805E+06   4.8%
Chinese 2   -4.73218E+06   -4.88906E+06   3.2%
English 3   -2.44714E+06   -2.60540E+06   6.1%
Chinese 3   -4.79639E+06   -4.94273E+06   3.0%

From the results, we can see that on all the data sets our PCLSA has a higher "cross-collection" likelihood value, which means it can find better common topics than the baseline method. Notice that the Chinese corpora are the same in all three data sets. The results show that both PCLSA and PLSA get a lower "cross-collection" likelihood when fitting the Chinese corpora as the data set becomes "tougher", i.e., as there is less topic overlap, but the improvement of PCLSA over PLSA does not drop much. On the other hand, the improvement of PCLSA over PLSA on the three English corpora does not show any correlation with the difficulty of the data set.
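To spell out how such a "cross-collection" score can be computed, the sketch below folds a held-out corpus in against fixed (translated) topic-word distributions and returns the log-likelihood; this is our reading of the folding-in procedure, with array names of our own choosing, not code from the paper.

```python
import numpy as np

def folding_in_loglik(X, theta, n_iter=50):
    """Fit only p(topic | d) on the evaluation corpus, keeping the
    (translated) topic-word distributions fixed, then score the corpus.

    X     : (D, V) document-word counts of the evaluation corpus
    theta : (K, V) fixed topic-word distributions; rows sum to 1
    """
    D, _ = X.shape
    K = theta.shape[0]
    pi = np.full((D, K), 1.0 / K)                  # p(topic | d), uniform init
    for _ in range(n_iter):
        resp = pi[:, :, None] * theta[None, :, :]  # E-step on held-out docs
        resp /= resp.sum(axis=1, keepdims=True) + 1e-12
        pi = (X[:, None, :] * resp).sum(axis=2)    # M-step for p(topic | d) only
        pi /= pi.sum(axis=1, keepdims=True) + 1e-12
    p_w_d = pi @ theta                             # mixture p(w | d)
    return float((X * np.log(p_w_d + 1e-12)).sum())
```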
6.4 Extracting from Multi-Language Corpus In the previous experiments, we have shown the capability and effectiveness of the PCLSA model in latent topic extraction from two language corpora. In fact, the proposed model is general and capable of extracting latent topics from multilanguage corpus. For example, if we have dictionaries among multiple languages, we can construct a multi-partite graph based on the correspondence between those vocabularies, and then smooth the PCLSA model with this graph. To show the effectiveness of PCLSA in mining multiple language corpus, we first construct a simulated data set based on 1115 reviews of three brands of laptops, namely IBM (303), Apple(468) and DELL(344). To simulate a three language cor1134 Table 5: Effectiveness of Latent Topic Extraction from Multi-Language Corpus Topic 0 Topic 1 Topic 2 Topic 3 Topic 4 Topic 5 Topic 6 Topic 7 cd(apple) battery(dell) mouse(dell) print(apple) port(ibm) laptop(ibm) os(apple) port(dell) port(apple) drive(dell) button(dell) resolution(dell) card(ibm) t20(ibm) run(apple) 2(dell) drive(apple) 8200(dell) touchpad(dell) burn(apple) modem(ibm) thinkpad(ibm) 1(apple) usb(dell) airport(apple) inspiron(dell) pad(dell) normal(dell) display(ibm) battery(ibm) ram(apple) 1(dell) firewire(apple) system(dell) keyboard(dell) image(dell) built(ibm) notebook(ibm) mac(apple) 0(dell) dvd(apple) hour(dell) point(dell) digital(apple) swap(ibm) ibm(ibm) battery(apple) slot(dell) usb(apple) sound(dell) stick(dell) organize(apple) easy(ibm) 3(ibm) hour(apple) firewire(dell) rw(apple) dell(dell) rest(dell) cds(apple) connector(ibm) feel(ibm) 12(apple) display(dell) card(apple) service(dell) touch(dell) latch(apple) feature(ibm) hour(ibm) operate(apple) standard(dell) mouse(apple) life(dell) erase(dell) advertise(dell) cd(ibm) high(ibm) word(apple) fast(dell) osx(apple) applework(apple) port(dell) battery(dell) lightest(ibm) uxga(dell) light(ibm) battery(apple) memory(dell) file(apple) port(apple) battery(ibm) quality(dell) ultrasharp(dell) ultrabay(ibm) point(dell) special(dell) bounce(apple) port(ibm) battery(apple) year(ibm) display(dell) connector(ibm) touchpad(dell) crucial(dell) quit(apple) firewire(apple) geforce4(dell) hassle(ibm) organize(apple) dvd(ibm) button(dell) memory(apple) word(apple) imac(apple) 100mhz(apple) bania(dell) learn(apple) nice(ibm) hour(apple) memory(ibm) file(ibm) firewire(dell) 440(dell) 800mhz(apple) logo(apple) modem(ibm) battery(ibm) netscape(apple) file(dell) firewire(ibm) bus(apple) trackpad(apple) postscript(apple) connector(dell) battery(dell) reseller(apple) microsoft(apple) jack(apple) 8200(dell) cover(ibm) ll(apple) light(apple) fan(dell) 10(dell) ms(apple) playback(dell) 8100(dell) workmanship(dell) sxga(dell) light(dell) erase(dell) special(apple) excel(apple) jack(dell) chipset(dell) section(apple) warm(apple) floppy(ibm) point(apple) 2000(ibm) ram(apple) port(dell) itune(apple) uxga(dell) port(apple) pentium(dell) drive(ibm) window(ibm) ram(ibm) port(apple) applework(apple) screen(dell) port(ibm) processor(dell) drive(dell) 2000(apple) ram(dell) port(ibm) imovie(apple) screen(ibm) port(dell) p4(dell) drive(apple) 2000(dell) screen(apple) 2(dell) import(apple) screen(apple) usb(apple) power(dell) hard(ibm) window(apple) 1(apple) 2(apple) battery(apple) ultrasharp(dell) plug(apple) pentium(apple) osx(apple) window(dell) screen(ibm) 2(ibm) iphoto(apple) 1600x1200(dell) cord(apple) pentium(ibm) hard(dell) portege(ibm) screen(dell) speak(dell) battery(ibm) display(dell) usb(ibm) keyboard(dell) 
hard(apple) option(ibm) 1(ibm) toshiba(dell) battery(dell) display(apple) usb(dell) processor(ibm) card(ibm) hassle(ibm) 1(dell) speak(ibm) hour(apple) display(ibm) firewire(apple) processor(apple) dvd(ibm) device(ibm) maco(apple) toshiba(ibm) hour(ibm) view(dell) plug(ibm) power(apple) card(dell) pus, we use an ’IBM’ word, an ’Apple’ word, and a ’Dell’ word to replace an English word in their corpus. For example, we use ’IBM10’, ’Apple10’, ’Dell10’ to replace the word ’CD’ whenever it appears in an IBM’s, Apple’s, or Dell’s review. After the replacement, the reviews about IBM, Apple, and Dell will not share vocabularies with each other. On the other hand, for any three created words which represent the same English word, we add three edges among them, and therefore we get a simulated dictionary graph for our PCLSA model. The experimental result is shown in Table 5, in which we try to extract 8 topics from the crosslingual corpus. The first ten rows show the result of our PCLSA model, in which we set a very small value to the weight parameter λ for the regularizer part. This can be used as an approximation of the result from the traditional PLSA model on this three language corpus. We can see that the extracted topics are mainly written in monolanguage. As we set the value of parameter λ larger, the extracted topics become multi-lingual, which is shown in the next ten rows. From this result, we can see the difference between the reviews of different brands about the similar topic. In addition, if we set the λ even larger, we will get topics that are mostly made of the same words from the three different brands, which means the extracted topics are very smooth on the dictionary graph now. 7 Conclusion In this paper, we study the problem of crosslingual latent topic extraction where the task is to extract a set of common latent topics from multilingual text data. We propose a novel probabilistic topic model (i.e. the Probabilistic Cross-Lingual Latent Semantic Analysis (PCLSA) model) that can incorporate translation knowledge in bilingual dictionaries as a regularizer to constrain the parameter estimation so that the learned topic models would be synchronized in multiple languages. We evaluated the model using several data sets. The experimental results show that PCLSA is effective in extracting common latent topics from multilingual text data, and it outperforms the baseline method which uses the standard PLSA to fit each monolingual text data set. Our work opens up some interesting future research directions to further explore. First, in this paper, we have only experimented with uniform weighting of edge in the bilingual graph. It should be very interesting to explore how to assign weights to the edges and study whether weighted graphs can further improve performance. Second, it would also be interesting to further extend PCLSA to accommodate discovering topics in each language that aren’t well-aligned with other languages. 8 Acknowledgments We sincerely thank the anonymous reviewers for their comprehensive and constructive comments. The work was supported in part by NASA grant 1135 NNX08AC35A, by the National Science Foundation under Grant Numbers IIS-0713581, IIS0713571, and CNS-0834709, and by a Sloan Research Fellowship. References David Blei and John Lafferty. 2005. Correlated topic models. In NIPS ’05: Advances in Neural Information Processing Systems 18. David M. Blei and John D. Lafferty. 2006. Dynamic topic models. 
In Proceedings of the 23rd international conference on Machine learning, pages 113– 120. D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. 2003a. Hierarchical topic models and the nested chinese restaurant process. In Neural Information Processing Systems (NIPS) 16. D. Blei, A. Ng, and M. Jordan. 2003b. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. J. Boyd-Graber and D. Blei. 2009. Multilingual topic models for unaligned text. In Uncertainty in Artificial Intelligence. S. R. K. Branavan, Harr Chen, Jacob Eisenstein, and Regina Barzilay. 2008. Learning document-level semantic properties from free-text annotations. In Proceedings of ACL 2008. Martin Franz, J. Scott McCarley, and Salim Roukos. 1998. Ad hoc and multilingual information retrieval at IBM. In Text REtrieval Conference, pages 104– 115. Pascale Fung. 1995. A pattern matching method for finding noun and proper noun translations from noisy parallel corpora. In Proceedings of ACL 1995, pages 236–243. Alfio Gliozzo and Carlo Strapparava. 2006. Exploiting comparable corpora and bilingual dictionaries for cross-language text categorization. In ACL-44: Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 553–560, Morristown, NJ, USA. Association for Computational Linguistics. T. Hofmann. 1999a. Probabilistic latent semantic analysis. In Proceedings of UAI 1999, pages 289–296. Thomas Hofmann. 1999b. The cluster-abstraction model: Unsupervised learning of topic hierarchies from text data. In IJCAI’ 99, pages 682–687. Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Mach. Learn., 42(1-2):177–196. Jagadeesh Jagaralamudi and Hal Daum´e III. 2010. Extracting multilingual topics from unaligned corpora. In Proceedings of the European Conference on Information Retrieval (ECIR), Milton Keynes, United Kingdom. Woosung Kim and Sanjeev Khudanpur. 2004. Lexical triggers and latent semantic analysis for crosslingual language model adaptation. ACM Transactions on Asian Language Information Processing (TALIP), 3(2):94–112. Wei Li and Andrew McCallum. 2006. Pachinko allocation: Dag-structured mixture models of topic correlations. In ICML ’06: Proceedings of the 23rd international conference on Machine learning, pages 577–584. H. Masuichi, R. Flournoy, S. Kaufmann, and S. Peters. 2000. A bootstrapping method for extracting bilingual text pairs. In Proc. 18th COLINC, pages 1066– 1070. Qiaozhu Mei and ChengXiang Zhai. 2006. A mixture model for contextual text mining. In Proceedings of KDD ’06, pages 649–655. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: Modeling facets and opinions in weblogs. In Proceedings of WWW ’07. Qiaozhu Mei, Deng Cai, Duo Zhang, and ChengXiang Zhai. 2008a. Topic modeling with network regularization. In WWW, pages 101–110. Qiaozhu Mei, Duo Zhang, and ChengXiang Zhai. 2008b. A general optimization framework for smoothing language models on graph structures. In SIGIR ’08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 611–618, New York, NY, USA. ACM. David Mimno, Hanna M. Wallach, Jason Naradowsky, David A. Smith, and Andrew Mccallum. 2009. Polylingual topic models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 880–889, Singapore, August. 
Association for Computational Linguistics. Xiaochuan Ni, Jian-Tao Sun, Jian Hu, and Zheng Chen. 2009. Mining multilingual topics from Wikipedia. In WWW '09: Proceedings of the 18th international conference on World Wide Web, pages 1155–1156, New York, NY, USA. ACM. F. Sadat, M. Yoshikawa, and S. Uemura. 2003. Bilingual terminology acquisition from comparable corpora and phrasal translation to cross-language information retrieval. In ACL '03: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 141–144. Mark Steyvers, Padhraic Smyth, Michal Rosen-Zvi, and Thomas Griffiths. 2004. Probabilistic author-topic models for information discovery. In Proceedings of KDD '04, pages 306–315. Xuanhui Wang, ChengXiang Zhai, Xiao Hu, and Richard Sproat. 2007. Mining correlated bursty topic patterns from coordinated text streams. In KDD '07: Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 784–793, New York, NY, USA. ACM. Bing Zhao and Eric P. Xing. 2006. BiTAM: Bilingual topic admixture models for word alignment. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics.
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1138–1147, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Topic Models for Word Sense Disambiguation and Token-based Idiom Detection Linlin Li, Benjamin Roth, and Caroline Sporleder Saarland University, Postfach 15 11 50 66041 Saarbr¨ucken, Germany {linlin, beroth, csporled}@coli.uni-saarland.de Abstract This paper presents a probabilistic model for sense disambiguation which chooses the best sense based on the conditional probability of sense paraphrases given a context. We use a topic model to decompose this conditional probability into two conditional probabilities with latent variables. We propose three different instantiations of the model for solving sense disambiguation problems with different degrees of resource availability. The proposed models are tested on three different tasks: coarse-grained word sense disambiguation, fine-grained word sense disambiguation, and detection of literal vs. nonliteral usages of potentially idiomatic expressions. In all three cases, we outperform state-of-the-art systems either quantitatively or statistically significantly. 1 Introduction Word sense disambiguation (WSD) is the task of automatically determining the correct sense for a target word given the context in which it occurs. WSD is an important problem in NLP and an essential preprocessing step for many applications, including machine translation, question answering and information extraction. However, WSD is a difficult task, and despite the fact that it has been the focus of much research over the years, stateof-the-art systems are still often not good enough for real-world applications. One major factor that makes WSD difficult is a relative lack of manually annotated corpora, which hampers the performance of supervised systems. To address this problem, there has been a significant amount of work on unsupervised WSD that does not require manually sensedisambiguated training data (see McCarthy (2009) for an overview). Recently, several researchers have experimented with topic models (Brody and Lapata, 2009; Boyd-Graber et al., 2007; BoydGraber and Blei, 2007; Cai et al., 2007) for sense disambiguation and induction. Topic models are generative probabilistic models of text corpora in which each document is modelled as a mixture over (latent) topics, which are in turn represented by a distribution over words. Previous approaches using topic models for sense disambiguation either embed topic features in a supervised model (Cai et al., 2007) or rely heavily on the structure of hierarchical lexicons such as WordNet (Boyd-Graber et al., 2007). In this paper, we propose a novel framework which is fairly resource-poor in that it requires only 1) a large unlabelled corpus from which to estimate the topics distributions, and 2) paraphrases for the possible target senses. The paraphrases can be user-supplied or can be taken from existing resources. We approach the sense disambiguation task by choosing the best sense based on the conditional probability of sense paraphrases given a context. We propose three models which are suitable for different situations: Model I requires knowledge of the prior distribution over senses and directly maximizes the conditional probability of a sense given the context; Model II maximizes this conditional probability by maximizing the cosine value of two topic-document vectors (one for the sense and one for the context). 
We apply these models to coarse- and fine-grained WSD and find that they outperform comparable systems for both tasks. We also test our framework on the related task of idiom detection, which involves distinguishing literal and nonliteral usages of potentially ambiguous expressions such as rock the boat. For this task, we propose a third model. Model III calculates the probability of a sense given a context according to the component words of the sense 1138 paraphrase. Specifically, it chooses the sense type which maximizes the probability (given the context) of the paraphrase component word with the highest likelihood of occurring in that context. This model also outperforms state-of-the-art systems. 2 Related Work There is a large body of work on WSD, covering supervised, unsupervised (word sense induction) and knowledge-based approaches (see McCarthy (2009) for an overview). While most supervised approaches treat the task as a classification task and use hand-labelled corpora as training data, most unsupervised systems automatically group word tokens into similar groups using clustering algorithms, and then assign labels to each sense cluster. Knowledge-based approaches exploit information contained in existing resources. They can be combined with supervised machinelearning models to assemble semi-supervised approaches. Recently, a number of systems have been proposed that make use of topic models for sense disambiguation. Cai et al. (2007), for example, use LDA to capture global context. They compute topic models from a large unlabelled corpus and include them as features in a supervised system. Boyd-Graber and Blei (2007) propose an unsupervised approach that integrates McCarthy et al.’s (2004) method for finding predominant word senses into a topic modelling framework. In addition to generating a topic from the document’s topic distribution and sampling a word from that topic, the enhanced model also generates a distributional neighbour for the chosen word and then assigns a sense based on the word, its neighbour and the topic. Boyd-Graber and Blei (2007) test their method on WSD and information retrieval tasks and find that it can lead to modest improvements over state-of-the-art results. In another unsupervised system, Boyd-Graber et al. (2007) enhance the basic LDA algorithm by incorporating WordNet senses as an additional latent variable. Instead of generating words directly from a topic, each topic is associated with a random walk through the WordNet hierarchy which generates the observed word. Topics and synsets are then inferred together. While Boyd-Graber et al. (2007) show that this method can lead to improvements in accuracy, they also find that idiosyncracies in the hierarchical structure of WordNet can harm performance. This is a general problem for methods which use hierarchical lexicons to model semantic distance (Budanitsky and Hirst, 2006). In our approach, we circumvent this problem by exploiting paraphrase information for the target senses rather than relying on the structure of WordNet as a whole. Topic models have also been applied to the related task of word sense induction. Brody and Lapata (2009) propose a method that integrates a number of different linguistic features into a single generative model. Topic models have been previously considered for metaphor extraction and estimating the frequency of metaphors (Klebanov et al., 2009; Bethard et al., 2009). 
However, we have a different focus in this paper, which aims to distinguish literal and nonliteral usages of potential idiomatic expressions. A number of methods have been applied to this task. Katz and Giesbrecht (2006) devise a supervised method in which they compute the meaning vectors for the literal and nonliteral usages of a given expression in the trainning data. Birke and Sarkar (2006) use a clustering algorithm which compares test instances to two automatically constructed seed sets (one literal and one nonliteral), assigning the label of the closest set. An unsupervised method that computes cohesive links between the component words of the target expression and its context have been proposed (Sporleder and Li, 2009; Li and Sporleder, 2009). Their system predicts literal usages when strong links can be found. 3 The Sense Disambiguation Model 3.1 Topic Model As pointed out by Hofmann (1999), the starting point of topic models is to decompose the conditional word-document probability distribution p(w|d) into two different distributions: the wordtopic distribution p(w|z), and the topic-document distribution p(z|d) (see Equation 1). This allows each semantic topic z to be represented as a multinominal distribution of words p(w|z), and each document d to be represented as a multinominal distribution of semantic topics p(z|d). The model introduces a conditional independence assumption that document d and word w are independent con1139 ditioned on the hidden variable, topic z. p(w|d) = X z p(z|d)p(w|z) (1) LDA is a Bayesian version of this framework with Dirichlet hyper-parameters (Blei et al., 2003). The inference of the two distributions given an observed corpus can be done through Gibbs Sampling (Geman and Geman, 1987; Griffiths and Steyvers, 2004). For each turn of the sampling, each word in each document is assigned a semantic topic based on the current word-topic distribution and topic-document distribution. The resulting topic assignments are then used to re-estimate a new word-topic distribution and topic-document distribution for the next turn. This process repeats until convergence. To avoid statistical coincidence, the final estimation of the distributions is made by the average of all the turns after convergence. 3.2 The Sense Disambiguation Model Assigning the correct sense s to a target word w occurring in a context c involves finding the sense which maximizes the conditional probability of senses given a context: s = arg max si p(si|c) (2) In our model, we represent a sense (si) as a collection of ‘paraphrases’ that capture (some aspect of) the meaning of the sense. These paraphrases can be taken from an existing resource such as WordNet (Miller, 1995) or supplied by the user (see Section 4). This conditional probability is decomposed by incorporating a hidden variable, topic z, introduced by the topic model. We propose three variations of the basic model, depending on how much background information is available, i.e., knowledge of the prior sense distribution available and type of sense paraphrases used. In Model I and Model II, the sense paraphrases are obtained from WordNet, and both the context and the sense paraphrases are treated as documents, c = dc and s = ds. WordNet is a fairly rich resource which provides detailed information about word senses (glosses, example sentences, synsets, semantic relations between senses, etc.). 
Sometimes such detailed information may not be available, for instance for languages for which such a resource does not exist or for expressions that are not very well covered in WordNet, such as idioms. For those situations, we propose another model, Model III, in which contexts are treated as documents while sense paraphrases are treated as sequences of independent words (the idea is that these key words capture the meaning of the idioms).

Model I directly maximizes the conditional probability of the sense given the context, where the sense is modeled as a 'paraphrase document' d_s and the context as a 'context document' d_c. The conditional probability of sense given context, p(d_s|d_c), can be rewritten as a joint probability divided by a normalization factor:

p(d_s|d_c) = p(d_s, d_c) / p(d_c)    (3)

This joint probability can be rewritten as a generative process by introducing a hidden variable z. We make the conditional independence assumption that, conditioned on the topic z, a paraphrase document d_s is generated independently of the specific context document d_c:

p(d_s, d_c) = Σ_z p(d_s) p(z|d_s) p(d_c|z)    (4)

We apply the same process to the conditional probability p(d_c|z). It can be rewritten as:

p(d_c|z) = p(d_c) p(z|d_c) / p(z)    (5)

Now, the disambiguation model p(d_s|d_c) can be rewritten as a prior p(d_s) times a topic function f(z):

p(d_s|d_c) = p(d_s) Σ_z p(z|d_c) p(z|d_s) / p(z)    (6)

As p(z) is a uniform distribution according to the uniform Dirichlet priors assumption, Equation 6 can be rewritten as:

p(d_s|d_c) ∝ p(d_s) Σ_z p(z|d_c) p(z|d_s)    (7)

Model I: arg max_{d_si} p(d_si) Σ_z p(z|d_c) p(z|d_si)    (8)

Model I has the disadvantage that it requires information about the prior distribution of senses p(d_s), which is not always available. We use sense frequency information from WordNet to estimate the prior sense distribution, although it must be kept in mind that, depending on the genre of the texts, the distribution of senses in the testing corpus may diverge greatly from the WordNet-based estimation. If there is no means of estimating the prior sense distribution of an experimental corpus, a uniform distribution generally has to be assumed. However, this assumption does not hold, as the true distribution of word senses is often highly skewed (McCarthy, 2009). To overcome this problem, we propose Model II, which indirectly maximizes the sense-context probability by maximizing the cosine of two document vectors that encode the document-topic frequencies from sampling, v(z|d_c) and v(z|d_s). The document vectors are represented by topics, with each dimension representing the number of times that the tokens in the document are assigned to a certain topic.

Model II: arg max_{d_si} cos(v(z|d_c), v(z|d_si))    (9)

If the prior distribution of senses is known, Model I is the best choice. However, Model II has to be chosen instead when this knowledge is not available. In our experiments, we test the performance of both models (see Section 5).

If the sense paraphrases are very short, it is difficult to reliably estimate p(z|d_s). In order to solve this problem, we treat the sense paraphrase d_s as a 'query', a concept used in information retrieval. One model from information retrieval takes the conditional probability of the query given the document to be the product of the conditional probabilities of all the words in the query given the document. The assumption is that the query is generated by a collection of conditionally independent words (Song and Croft, 1999). We make the same assumption here. However, instead of taking the product of all the conditional probabilities of the words given the document, we take the maximum. There are two reasons for this: (i) taking the product may penalize longer paraphrases, since the product of probabilities decreases as there are more words; (ii) we do not want to model the probability of generating specific paraphrases, but rather the probability of generating a sense, which might be represented by only one or two words in the paraphrases. For example, the potentially idiomatic phrase 'rock the boat' can be paraphrased as 'break the norm' or 'cause trouble'; a topic distribution similar to that of the individual words 'norm' or 'trouble' would be strong supporting evidence for the corresponding idiomatic reading. We propose Model III:

Model III: arg max_{q_si} max_{w_i ∈ q_s} Σ_z p(w_i|z) p(z|d_c)    (10)

where q_s is a collection of words contained in the sense paraphrases.
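As a compact illustration of these three decision rules (Eqs. 8–10), here is a minimal sketch assuming the topic-document distributions p(z|d_c) and p(z|d_s), the topic-count vectors, and the word-topic matrix p(w|z) have already been inferred; all function and variable names are ours, not the paper's.

```python
import numpy as np

def model_1(p_z_context, senses):
    """senses: list of (prior, p_z_paraphrase) pairs; implements Eq. 8."""
    scores = [prior * float(np.dot(p_z_context, p_z_s)) for prior, p_z_s in senses]
    return int(np.argmax(scores))

def model_2(v_z_context, sense_vectors):
    """sense_vectors: list of topic-count vectors v(z|d_s); implements Eq. 9."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return int(np.argmax([cosine(v_z_context, v) for v in sense_vectors]))

def model_3(p_z_context, p_w_given_z, sense_paraphrases, word_index):
    """sense_paraphrases: one list of paraphrase words per sense;
    p_w_given_z: (K, V) word-topic matrix; implements Eq. 10."""
    scores = []
    for words in sense_paraphrases:
        ids = [word_index[w] for w in words if w in word_index]
        if not ids:
            scores.append(0.0)
            continue
        # For each paraphrase word w_i: sum_z p(w_i|z) p(z|d_c); keep the max.
        per_word = p_w_given_z[:, ids].T @ p_z_context
        scores.append(float(per_word.max()))
    return int(np.argmax(scores))
```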
3.3 Inference One possible inference approach is to combine the context documents and sense paraphrases into a corpus and run Gibbs sampling on this corpus. The problem with this approach is that the test set and the sense paraphrase set are relatively small, and topic models run on a small corpus are less likely to capture rich semantic topics. One simple explanation for this is that a small corpus usually has a relatively small vocabulary, which is less representative of topics, i.e., p(w|z) cannot be estimated reliably. In order to overcome this problem, we infer the word-topic distribution from a very large corpus (a Wikipedia dump, see Section 4). All the following inference experiments on the test corpus are based on the assumption that the word-topic distribution p(w|z) is the same as the one estimated from the Wikipedia dump. Inference of the topic-document distributions for contexts and sense paraphrases is done by fixing the word-topic distribution as a constant.

4 Experimental Setup We evaluate our models on three different tasks: coarse-grained WSD, fine-grained WSD and literal vs. nonliteral sense detection. In this section we discuss our experimental set-up. We start by describing the three datasets for evaluation and another dataset for probability estimation. We also discuss how we choose sense paraphrases and instance contexts.

Data We use three datasets for evaluation. The coarse-grained task is evaluated on the Semeval-2007 Task-07 benchmark dataset released by Navigli et al. (2009). The dataset consists of 5377 words of running text from five different articles: the first three were obtained from the WSJ corpus, the fourth was the Wikipedia entry for computer programming, and the fifth was an excerpt of Amy Steedman's Knights of the Art, biographies of Italian painters. The proportion of non-news text (the last two articles) constitutes 51.87% of the whole testing set. The dataset consists of 1108 nouns, 591 verbs, 362 adjectives, and 208 adverbs. The data were annotated with coarse-grained senses which were obtained by clustering senses from the WordNet 2.1 sense inventory based on the procedure proposed by Navigli (2006). To determine whether our model is also suitable for fine-grained WSD, we test on the data provided by Pradhan et al. (2009) for the Semeval-2007 Task-17 (English fine-grained all-words task). This dataset is a subset of the set from Task-07. It comprises the three WSJ articles from Navigli et al. (2009). A total of 465 lemmas were selected as instances from about 3500 words of text. There are 10 instances marked as 'U' (undecided sense tag).
Of the remaining 455 instances, 159 are nouns and 296 are verbs. The sense inventory is from WordNet 2.1. Finally, we test our model on the related sense disambiguation task of distinguishing literal and nonliteral usages of potentially ambiguous expressions such as break the ice. For this, we use the dataset from Sporleder and Li (2009) as a test set. This dataset consists of 3964 instances of 17 potential English idioms which were manually annotated as literal or nonliteral. A Wikipedia dump2 is used to estimate the multinomial word-topic distribution. This dataset, which consists of 320,000 articles,3 is significantly larger than SemCor, which is the dataset used by Boyd-Graber et al. (2007). All markup from the Wikipedia dump was stripped off using the same filter as the ESA implementation (Sorg and Cimiano, 2008), and stopwords were filtered out using the Snowball (Porter, October 2001) stopword list. In addition, words with a Wikipedia document frequency of 1 were filtered out. The lemmatized version of the corpus consists of 299,825 lexical units. The test sets were POS-tagged and lemmatized using RASP (Briscoe and Carroll, 2006). The inference processes are run on the lemmatized version of the corpus. For the Semeval-2007 Task 17 English all-words, the organizers do not supply the part-of-speech and lemma information of the target instances. In order to avoid the wrong predic2We use the English snapshot of 2009-07-13 3All articles of fewer than 100 words were discarded. tions caused by tagging or lemmatization errors, we manually corrected any bad tags and lemmas for the target instances.4 Sense Paraphrases For word sense disambiguation tasks, the paraphrases of the sense keys are represented by information from WordNet 2.1. (Miller, 1995). To obtain the paraphrases, we use the word forms, glosses and example sentences of the synset itself and a set of selected reference synsets (i.e., synsets linked to the target synset by specific semantic relations, see Table 1). We excluded the ‘hypernym reference synsets’, since information common to all of the child synsets may confuse the disambiguation process. For the literal vs. nonliteral sense detection task, we selected the paraphrases of the nonliteral meaning from several online idiom dictionaries. For the literal senses, we used 2-3 manually selected words with which we tried to capture (aspects of) the literal meaning of the expression.5 For instance, the literal ‘paraphrases’ that we chose for ‘break the ice’ were ice, water and snow. The paraphrases are shorter for the idiom task than for the WSD task, because the meaning descriptions from the idiom dictionaries are shorter than what we get from WordNet. In the latter case, each sense can be represented by its synset as well as its reference synsets. Instance Context We experimented with different context sizes for the disambiguation task. The five different context settings that we used for the WSD tasks are: collocations (1w), ±5-word window (5w), ±10-word window (10w), current sentence, and whole text. Because the idiom corpus also includes explicitly marked paragraph boundaries, we included ‘paragraph’ as a sixth type of context size for the idiom sense detection task. 5 Experiments As mentioned above, we test our proposed sense disambiguation framework on three tasks. We start by describing the sampling experiments for 4This was done by comparing the predicted sense keys and the gold standard sense keys. 
We only checked instances for which the POS-tags in the predicted sense keys are not consistent with those in the gold standard. This was the case for around 20 instances. 5Note that we use the word ‘paraphrase’ in a fairly wide sense in this paper. Sometimes it is not possible to obtain exact paraphrases. This applies especially to the task of distinguishing literal from nonliteral senses of multi-word expressions. In this case we take as paraphrases some key words which capture salient aspects of the meaning. 1142 POS Paraphrase reference synsets N hyponyms, instance hyponyms, member holonyms, substance holonyms, part holonyms, member meronyms, part meronyms, substance meronyms, attributes, topic members, region members, usage members, topics, regions, usages V Troponyms, entailments, outcomes, phrases, verb groups, topics, regions, usages, sentence frames A similar, pertainym, attributes, related, topics, regions, usages R pertainyms, topics, regions, usages Table 1: Selected reference synsets from WordNet that were used for different parts-of-speech to obtain word sense paraphrase. N(noun), V(verb), A(adj), R(adv). estimating the word-topic distribution from the Wikipedia dump. We used the package provided by Wang et al. (2009) with the suggested Dirichlet hyper-parameters 6. In order to avoid statistical instability, the final result is averaged over the last 50 iterations. We did four rounds of sampling with 1000, 500, 250, and 125 topics respectively. The final word-topic distribution is a normalized concatenate of the four distributions estimated in each round. In average, the sampling program run on the Wikipedia dump consumed 20G memory, and each round took about one week on a single AMD Dual-Core 1000MHZ processor. 5.1 Coarse-Grained WSD In this section we first describe the landscape of similar systems against which we compare our models, then present the results of the comparison. The systems that participated in the SemEval-2007 coarse-grained WSD task (Task-07) can be divided into three categories, depending on whether training data is needed and whether other types of background knowledge are required: What we call Type I includes all the systems that need annotated training data. All the participating systems that have the mark TR fall into this category (see Navigli et al. (2009) for the evaluation for all the participating systems). Type II consists of systems that do not need training data but require prior knowledge of the sense distribution (estimated sense frequency). All the participating systems that have the mark MFS belong to this category. Systems that need neither training data nor prior sense distribution knowledge are categorized as Type III. We make this distinction based on two principles: (i) the cost of building a system; (ii) the portability of the established resource. Type III is the cheapest system to build, while Type I and 6They were set as: α = 50 #topics and β = 0.01. Type II both need extra resources. Type II has an advantage over Type I since the prior knowledge of the sense distribution can be estimated from annotated corpora (e.g.: SemCor, Senseval). In contrast, training data in Type I may be system specific (e.g.: different input format, different annotation guidelines). McCarthy (2009) also addresses the issue of performance and cost by comparing supervised word sense disambiguation systems with unsupervised ones. We exclude the system provided by one of the organizers (UoR-SSI) from our categorization. 
The reason is that although this system is claimed to be unsupervised, and it performs better than all the participating systems (including the supervised systems) in the SemEval-2007 shared task, it still needs to incorporate a lot of prior knowledge, specifically information about co-occurrences between different word senses, which was obtained from a number of resources (SSI+LKB) including: (i) SemCor (manually annotated); (ii) LDCDSO (partly manually annotated); (iii) collocation dictionaries which are then disambiguated semiautomatically. Even though the system is not “trained”, it needs a lot of information which is largely dependent on manually annotated data, so it does not fit neatly into the categories Type II or Type III either. Table 2 lists the best participating systems of each type in the SemEval-2007 task (Type I: NUS-PT (Chan et al., 2007); Type II: UPV-WSD (Buscaldi and Rosso, 2007); Type III: TKB-UO (Anaya-S´anchez et al., 2007)). Our Model I belongs to Type II, and our Model II belongs to Type III. Table 2 compares the performance of our models with the Semeval-2007 participating systems. We only compare the F-score, since all the compared systems have an attempted rate7 of 1.0, 7Attempted rate is defined as the total number of disambiguated output instances divided by the total number of input 1143 which makes both the precision and recall rates the same as the F-score. We focus on comparisons between our models and the best SemEval-2007 participating systems within the same type. Model I is compared with UPV-WSD, and Model II is compared with TKB-UO. In addition, we also compare our system with the most frequent sense baseline which was not outperformed by any of the systems of Type II and Type III in the SemEval-2007 task. Comparison on Type III is marked with ′, while comparison on Type II is marked with ∗. We find that Model II performs statistically significantly better than the best participating system of the same type TKB-UO (p<<0.01, χ2 test). When encoded with the prior knowledge of sense distribution, Model I outperforms by 1.36% the best Type II system UPV-WSD, although the difference is not statistically significant. Furthermore, Model I also quantitatively outperforms the most frequent sense baseline BLmfs, which, as mentioned above, was not beat by any participating systems that do not use training data. We also find that our model works best for nouns. The unsupervised Type III model Model II achieves better results than the most frequent sense baseline on nouns, but not on other partsof-speech. This is in line with results obtained by previous systems (Griffiths et al., 2005; BoydGraber and Blei, 2008; Cai et al., 2007). While the performance on verbs can be increased to outperform the most frequent sense baseline by including the prior sense probability, the performance on adjectives and adverbs remains below the most frequent sense baseline. We think that there are three reasons for this: first, adjectives and adverbs have fewer reference synsets for paraphrases compared with nouns and verbs (see Table 1); second, adjectives and adverbs tend to convey less key semantic content in the document, so they are more difficult to capture by the topic model; and third, adjectives and adverbs are a small portion of the test set, so their performances are statistically unstable. For example, if ‘already’ appears 10 times out of 20 adverb instances, a system may get bad result on adverbs only because of its failure to disambiguate the word ‘already’. 
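To make the behaviour of the unsupervised models more concrete, the following sketch shows the core of a Model II-style decision: the topic vector inferred for the instance context is compared to the topic vector inferred for each candidate sense paraphrase by cosine similarity (Model II's vector-space formulation), and the closest sense is chosen. This is an illustration only; the function and variable names and the three-topic toy example are ours, and the topic vectors are assumed to have been inferred beforehand with the word-topic distribution held fixed, as described in Section 3.3.

```python
# Minimal sketch of Model II-style sense selection via cosine similarity
# of topic vectors; topic vectors are assumed to be pre-inferred with
# p(w|z) held fixed.
import numpy as np

def cosine(u, v):
    """Cosine similarity of two topic vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom > 0.0 else 0.0

def disambiguate(context_topics, sense_topic_vectors):
    """Return the sense whose paraphrase topic vector is closest to the
    topic vector of the instance context.

    context_topics      -- 1-D array, p(z | context document)
    sense_topic_vectors -- dict: sense label -> 1-D array, p(z | paraphrase)
    """
    return max(sense_topic_vectors,
               key=lambda s: cosine(context_topics, sense_topic_vectors[s]))

# Toy usage with three topics and two candidate senses of a target word.
context = np.array([0.7, 0.2, 0.1])
senses = {"sense_1": np.array([0.6, 0.3, 0.1]),
          "sense_2": np.array([0.1, 0.2, 0.7])}
print(disambiguate(context, senses))  # -> "sense_1"
```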
Paraphrase analysis Table 2 also shows the effect of different ways of choosing sense paraphrases. MII+ref is the result of including the reference synsets, while MII-ref excludes the referinstances. System Noun Verb Adj Adv All UoR-SSI 84.12 78.34 85.36 88.46 83.21 NUS-PT 82.31 78.51 85.64 89.42 82.50 UPV-WSD 79.33 72.76 84.53 81.52 78.63∗ TKB-UO 70.76 62.61 78.73 74.04 70.21′ MII–ref 78.16 70.39 79.56 81.25 76.64 MII+ref 80.05 70.73 82.04 82.21 78.14′ MI+ref 79.96 75.47 83.98 86.06 79.99∗ BLmfs 77.44 75.30 84.25 87.50 78.99∗ Table 2: Model performance (F-score) on the coarse-grained dataset (context=sentence). Paraphrases with/without reference synsets (+ref/-ref). Context Ate. Pre. Rec. F1 ±1w 91.67 75.05 68.80 71.79 ±5w 99.29 77.14 76.60 76.87 ±10w 100 77.92 77.92 77.92 text 100 76.86 76.86 76.86 sent. 100 78.14 78.14 78.14 Table 3: Model II performance on different context size. attempted rate (Ate.), precision (Pre.), recall (Rec.), F-score (F1). ence synsets. As can be seen from the table, including all reference synsets in sense paraphrases increases performance. Longer paraphrases contain more information, and they are statistically more stable for inference. We find that nouns get the greatest performance boost from including reference synsets, as they have the largest number of different types of synsets. We also find the ‘similar’ reference synset for adjectives to be very useful. Performance on adjectives increases by 2.75% when including this reference synset. Context analysis In order to study how the context influences the performance, we experiment with Model II on different context sizes (see Table 3). We find sentence context is the best size for this disambiguation task. Using a smaller context not only reduces the precision, but also reduces the recall rate, which is caused by the all-zero topic assignment by the topic model for documents only containing words that are not in the vocabulary. As a result, the model is unable to disambiguate. The context based on the whole text (article) does not perform well either, possibly because using the full text folds in too much noisy information. 1144 System F-score RACAI 52.7 ±4.5 BLmfs 55.91±4.5 MI+ref 56.99±4.5 Table 4: Model performance (F-score) for the finegrained word sense disambiguation task. 5.2 Fine-grained WSD We saw in the previous section that our framework performs well on coarse-grained WSD. Finegrained WSD, however, is a more difficult task. To determine whether our framework is also able to detect subtler sense distinctions, we tested Model I on the English all-words subtask of SemEval-2007 Task-17 (see Table 4). We find that Model I performs better than both the best unsupervised system, RACAI (Ion and Tufis¸, 2007) and the most frequent sense baseline (BLmfs), although these differences are not statistically significant due to the small size of the available test data (465). 5.3 Idiom Sense Disambiguation In the previous section, we provided the results of applying our framework to coarse- and finegrained word sense disambiguation tasks. For both tasks, our models outperform the state-ofthe-art systems of the same type either quantitatively or statistically significantly. In this section, we apply Model III to another sense disambiguation task, namely distinguishing literal and nonliteral senses of ambiguous expressions. WordNet has a relatively low coverage for idiomatic expressions. 
In order to represent nonliteral senses, we replace the paraphrases obtained automatically from WordNet by words selected manually from online idiom dictionaries (for the nonliteral sense) and by linguistic introspection (for the literal sense). We then compare the topic distributions of literal and nonliteral senses. As the paraphrases obtained from the idiom dictionary are very short, we treat the paraphrase as a sequence of independent words instead of as a document and apply Model III (see Section 3). Table 5 shows the results of our proposed model compared with state-of-the-art systems. We find that the system significantly outperforms the majority baseline (p<<0.01, χ2 test) and the cohesion-graph based approach proposed by Sporleder and Li (2009) (p<<0.01, χ2 test). The system also outperforms the bootstrapping System Precl Recl Fl Acc. Basemaj 78.25 co-graph 50.04 69.72 58.26 78.38 boot. 71.86 66.36 69.00 87.03 Model III 67.05 81.07 73.40 87.24 Table 5: Performance on the literal or nonliteral sense disambiguation task on idioms. literal precision (Precl), literal recall (Recl), literal F-score (Fl), accuracy(Acc.). system by Li and Sporleder (2009), although not statistically significantly. This shows how a limited amount of human knowledge (e.g., paraphrases) can be added to an unsupervised system for a strong boost in performance ( Model III compared with the cohesion-graph and the bootstrapping approaches). For obvious reasons, this approach is sensitive to the quality of the paraphrases. The paraphrases chosen to characterise (aspects of) the meaning of a sense should be non-ambiguous between the literal or idiomatic meaning. For instance, ‘fire’ is not a good choice for a paraphrase of the literal reading of ‘play with fire’, since this word can be interpreted literally as ‘fire’ or metaphorically as ‘something dangerous’. The verb component word ‘play’ is a better literal paraphrase. For the same reason, this approach works well for expressions where the literal and nonliteral readings are well separated (i.e., occur in different contexts), while the performance drops for expressions whose literal and idiomatic readings can appear in a similar context. We test the performance on individual idioms on the five most frequent idioms in our corpus8 (see Table 6). We find that ‘drop the ball’ is a difficult case. The words ‘fault’, ‘mistake’, ‘fail’ or ‘miss’ can be used as the nonliteral paraphrases. However, it is also highly likely that these words are used to describe a scenario in a baseball game, in which ‘drop the ball’ is used literally. In contrast, the performance on ‘rock the boat’ is much better, since the nonliteral reading of the phrases ‘break the norm’ or ‘cause trouble’ are less likely to be linked with the literal reading ‘boat’. This may also be because ‘boat’ is not often used metaphorically in the corpus. As the topic distribution of nouns and verbs exhibit different properties, topic comparisons across parts-of-speech do not make sense. We 8We tested only on the most frequent idioms in order to avoid statistically unreliable observations. 1145 Idiom Acc. drop the ball 75.86 play with fire 91.17 break the ice 87.43 rock the boat 95.82 set in stone 89.39 Table 6: Performance on individual idioms. make the topic distributions comparable by making sure each type of paraphrase contains the same sets of parts-of-speech. 
For instance, we do not permit combinations of literal paraphrases which only consist of nouns and nonliteral paraphrases which only consist of verbs. 6 Conclusion We propose three models for sense disambiguation on words and multi-word expressions. The basic idea of these models is to compare the topic distribution of a target instance with the candidate sense paraphrases and choose the most probable one. While Model I and Model III model the problem in a probabilistic way, Model II uses a vector space model by comparing the cosine values of two topic vectors. Model II and Model III are completely unsupervised, while Model I needs the prior sense distribution. Model I and Model II treat the sense paraphrases as documents, while Model III treats the sense paraphrases as a collection of independent words. We test the proposed models on three tasks. We apply Model I and Model II to the WSD tasks due to the availability of more paraphrase information. Model III is applied to the idiom detection task since the paraphrases from the idiom dictionary are smaller. We find that all models outperform comparable state-of-the-art systems either quantitatively or statistically significantly. By testing our framework on three different sense disambiguation tasks, we show that the framework can be used flexibly in different application tasks. The system also points out a promising way of solving the granularity problem of word sense disambiguation, as new application tasks which need different sense granularities can utilize this framework when new paraphrases of sense clusters are available. In addition, this system can also be used in a larger context such as figurative language identification (literal or figurative) and sentiment detection (positive or negative). Acknowledgments This work was funded by the DFG within the Cluster of Excellence “Multimodal Computing and Interaction”. References H. Anaya-S´anchez, A. Pons-Porrata, R. BerlangaLlavori. 2007. TKB-UO: using sense clustering for WSD. In SemEval ’07: Proceedings of the 4th International Workshop on Semantic Evaluations, 322– 325. S. Bethard, V. T. Lai, J. H. Martin. 2009. Topic model analysis of metaphor frequency for psycholinguistic stimuli. In CALC ’09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, 9–16, Morristown, NJ, USA. Association for Computational Linguistics. J. Birke, A. Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In Proceedings of EACL-06. D. M. Blei, A. Y. Ng, M. I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Reseach, 3:993–1022. J. Boyd-Graber, D. Blei. 2007. PUTOP: turning predominant senses into a topic model for word sense disambiguation. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), 277–281. J. Boyd-Graber, D. Blei. 2008. Syntactic topic models. Computational Linguistics. J. Boyd-Graber, D. Blei, X. Zhu. 2007. A topic model for word sense disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), 1024–1033. T. Briscoe, J. Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In Proceedings of the COLING/ACL on Main conference poster sessions, 41–48. S. Brody, M. Lapata. 2009. Bayesian word sense induction. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), 103–111. A. 
Budanitsky, G. Hirst. 2006. Evaluating wordnetbased measures of lexical semantic relatedness. Computational Linguistics, 32(1):13–47. D. Buscaldi, P. Rosso. 2007. UPV-WSD: Combining different WSD methods by means of Fuzzy Borda Voting. In SemEval ’07: Proceedings of the 4th International Workshop on Semantic Evaluations, 434–437. J. Cai, W. S. Lee, Y. W. Teh. 2007. Improving word sense disambiguation using topic features. In Proceedings of the 2007 Joint Conference on Empirical 1146 Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), 1015–1023. Y. S. Chan, H. T. Ng, Z. Zhong. 2007. NUS-PT: exploiting parallel texts for word sense disambiguation in the English all-words tasks. In SemEval ’07: Proceedings of the 4th International Workshop on Semantic Evaluations, 253–256. S. Geman, D. Geman. 1987. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. In Readings in computer vision: issues, problems, principles, and paradigms, 564– 584. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. T. L. Griffiths, M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl. 1):5228–5235. T. L. Griffiths, M. Steyvers, D. M. Blei, J. B. Tenenbaum. 2005. Integrating topics and syntax. In In Advances in Neural Information Processing Systems 17, 537–544. MIT Press. T. Hofmann. 1999. Probabilistic latent semantic indexing. In SIGIR ’99: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, 50–57. R. Ion, D. Tufis¸. 2007. Racai: meaning affinity models. In SemEval ’07: Proceedings of the 4th International Workshop on Semantic Evaluations, 282– 287, Morristown, NJ, USA. Association for Computational Linguistics. G. Katz, E. Giesbrecht. 2006. Automatic identification of non-compositional multi-word expressions using latent semantic analysis. In Proceedings of the ACL/COLING-06 Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties. B. B. Klebanov, E. Beigman, D. Diermeier. 2009. Discourse topics and metaphors. In CALC ’09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, 1–8, Morristown, NJ, USA. Association for Computational Linguistics. L. Li, C. Sporleder. 2009. Contextual idiom detection without labelled data. In Proceedings of EMNLP09. D. McCarthy, R. Koeling, J. Weeds, J. Carroll. 2004. Finding predominant word senses in untagged text. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, 279–286. D. McCarthy. 2009. Word sense disambiguation: An overview. Language and Linguistics Compass, 3(2):537–558. G. A. Miller. 1995. WordNet: a lexical database for english. Commun. ACM, 38(11):39–41. R. Navigli, K. C. Litkowski, O. Hargraves. 2009. SemEval-2007 Task 07: Coarse-grained English allwords task. In Proceedings of the 4th International Workshop on Semantic Evaluation (SemEval-2007). R. Navigli. 2006. Meaningful clustering of senses helps boost word sense disambiguation performance. In Proceedings of the 44th Annual Meeting of the Association for Computational Liguistics joint with the 21st International Conference on Computational Liguistics (COLING-ACL 2006). M. Porter. October 2001. Snowball: A language for stemming algorithms. http: //snowball.tartarus.org/texts/ introduction.html. S. S. Pradhan, E. Loper, D. Dligach, M. Palmer. 2009. SemEval-2007 Task 07: Coarse-grained English allwords task. 
In Proceedings of the 4th International Workshop on Semantic Evaluation (SemEval-2007). F. Song, W. B. Croft. 1999. A general language model for information retrieval (poster abstract). In Research and Development in Information Retrieval, 279–280. P. Sorg, P. Cimiano. 2008. Cross-lingual information retrieval with explicit semantic analysis. In In Working Notes for the CLEF 2008 Workshop. C. Sporleder, L. Li. 2009. Unsupervised recognition of literal and non-literal use of idiomatic expressions. In Proceedings of EACL-09. Y. Wang, H. Bai, M. Stanton, W.-Y. Chen, E. Y. Chang. 2009. Plda: Parallel latent dirichlet allocation for large-scale applications. In Proc. of 5th International Conference on Algorithmic Aspects in Information and Management. Software available at http://code.google.com/p/plda. 1147
2010
116
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1148–1157, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics PCFGs, Topic Models, Adaptor Grammars and Learning Topical Collocations and the Structure of Proper Names Mark Johnson Department of Computing Macquarie University [email protected] Abstract This paper establishes a connection between two apparently very different kinds of probabilistic models. Latent Dirichlet Allocation (LDA) models are used as “topic models” to produce a lowdimensional representation of documents, while Probabilistic Context-Free Grammars (PCFGs) define distributions over trees. The paper begins by showing that LDA topic models can be viewed as a special kind of PCFG, so Bayesian inference for PCFGs can be used to infer Topic Models as well. Adaptor Grammars (AGs) are a hierarchical, non-parameteric Bayesian extension of PCFGs. Exploiting the close relationship between LDA and PCFGs just described, we propose two novel probabilistic models that combine insights from LDA and AG models. The first replaces the unigram component of LDA topic models with multi-word sequences or collocations generated by an AG. The second extension builds on the first one to learn aspects of the internal structure of proper names. 1 Introduction Over the last few years there has been considerable interest in Bayesian inference for complex hierarchical models both in machine learning and in computational linguistics. This paper establishes a theoretical connection between two very different kinds of probabilistic models: Probabilistic Context-Free Grammars (PCFGs) and a class of models known as Latent Dirichlet Allocation (Blei et al., 2003; Griffiths and Steyvers, 2004) models that have been used for a variety of tasks in machine learning. Specifically, we show that an LDA model can be expressed as a certain kind of PCFG, so Bayesian inference for PCFGs can be used to learn LDA topic models as well. The importance of this observation is primarily theoretical, as current Bayesian inference algorithms for PCFGs are less efficient than those for LDA inference. However, once this link is established it suggests a variety of extensions to the LDA topic models, two of which we explore in this paper. The first involves extending the LDA topic model so that it generates collocations (sequences of words) rather than individual words. The second applies this idea to the problem of automatically learning internal structure of proper names (NPs), which is useful for definite NP coreference models and other applications. The rest of this paper is structured as follows. The next section reviews Latent Dirichlet Allocation (LDA) topic models, and the following section reviews Probabilistic Context-Free Grammars (PCFGs). Section 4 shows how an LDA topic model can be expressed as a PCFG, which provides the fundamental connection between LDA and PCFGs that we exploit in the rest of the paper, and shows how it can be used to define a “sticky topic” version of LDA. The following section reviews Adaptor Grammars (AGs), a non-parametric extension of PCFGs introduced by Johnson et al. (2007b). Section 6 exploits the connection between LDA and PCFGs to propose an AG-based topic model that extends LDA by defining distributions over collocations rather than individual words, and section 7 applies this extension to the problem of finding the structure of proper names. 
2 Latent Dirichlet Allocation Models Latent Dirichlet Allocation (LDA) was introduced as an explicit probabilistic counterpart to Latent Semantic Indexing (LSI) (Blei et al., 2003). Like LSI, LDA is intended to produce a lowdimensional characterisation or summary of a doc1148 W Z θ α φ β n m ℓ Figure 1: A graphical model “plate” representation of an LDA topic model. Here ℓis the number of topics, m is the number of documents and n is the number of words per document. ument in a collection of documents for information retrieval purposes. Both LSI and LDA do this by mapping documents to points in a relatively low-dimensional real-valued vector space; distance in this space is intended to correspond to document similarity. An LDA model is an explicit generative probabilistic model of a collection of documents. We describe the “smoothed” LDA model here (see page 1006 of Blei et al. (2003)) as it corresponds precisely to the Bayesian PCFGs described in section 4. It generates a collection of documents by first generating multinomials φi over the vocabulary V for each topic i ∈1, . . . , ℓ, where ℓis the number of topics and φi,w is the probability of generating word w in topic i. Then it generates each document Dj, j = 1, . . . , m in turn by first generating a multinomial θj over topics, where θj,i is the probability of topic i appearing in document j. (θj serves as the low-dimensional representation of document Dj). Finally it generates each of the n words of document Dj by first selecting a topic z for the word according to θj, and then drawing a word from φz. Dirichlet priors with parameters β and α respectively are placed on the φi and the θj in order to avoid the zeros that can arise from maximum likelihood estimation (i.e., sparse data problems). The LDA generative model can be compactly expressed as follows, where “∼” should be read as “is distributed according to”. φi ∼ Dir(β) i = 1, . . . , ℓ θj ∼ Dir(α) j = 1, . . . , m zj,k ∼ θj j = 1, . . . , m; k = 1, . . . , n wj,k ∼ φzj,k j = 1, . . . , m; k = 1, . . . , n In inference, the parameters α and β of the Dirichlet priors are either fixed (i.e., chosen by the model designer), or else themselves inferred, e.g., by Bayesian inference. (The adaptor grammar software we used in the experiments described below automatically does this kind of hyper-parameter inference). The inference task is to find the topic probability vector θj of each document Dj given the words wj,k of the documents; in general this also requires inferring the topic to word distributions φ and the topic assigned to each word zj,k. Blei et al. (2003) describe a Variational Bayes inference algorithm for LDA models based on a mean-field approximation, while Griffiths and Steyvers (2004) describe an Markov Chain Monte Carlo inference algorithm based on Gibbs sampling; both are quite effective in practice. 3 Probabilistic Context-Free Grammars Context-Free Grammars are a simple model of hierarchical structure often used to describe natural language syntax. A Context-Free Grammar (CFG) is a quadruple (N, W, R, S) where N and W are disjoint finite sets of nonterminal and terminal symbols respectively, R is a finite set of productions or rules of the form A →β where A ∈N and β ∈(N ∪W)⋆, and S ∈N is the start symbol. In what follows, it will be useful to interpret a CFG as generating sets of finite, labelled, ordered trees TA for each X ∈N ∪W. 
Informally, TX consists of all trees t rooted in X where for each local tree (B, β) in t (i.e., where B is a parent’s label and β is the sequence of labels of its immediate children) there is a rule B →β ∈R. Formally, the sets TX are the smallest sets of trees that satisfy the following equations. If X ∈W (i.e., if X is a terminal) then TX = {X}, i.e., TX consists of a single tree, which in turn only consists of a single node labelled X. If X ∈N (i.e., if X is a nonterminal) then TX = [ X→B1...Bn∈RX TREEX(TB1, . . . , TBn) where RA = {A →β : A →β ∈R} for each A ∈N, and TREEX(TB1, . . . , TBn) = (   PP X t1 tn . . . : ti ∈TBi, i = 1, . . . , n ) That is, TREEX(TB1, . . . , TBn) consists of the set of trees with whose root node is labelled X and whose ith child is a member of TBi. 1149 The set of trees generated by the CFG is TS, where S is the start symbol, and the set of strings generated by the CFG is the set of yields (i.e., terminal strings) of the trees in TS. A Probabilistic Context-Free Grammar (PCFG) is a pair consisting of a CFG and set of multinomial probability vectors θX indexed by nonterminals X ∈N, where θX is a distribution over the rules RX (i.e., the rules expanding X). Informally, θX→β is the probability of X expanding to β using the rule X →β ∈RX. More formally, a PCFG associates each X ∈N ∪W with a distribution GX over the trees TX as follows. If X ∈W (i.e., if X is a terminal) then GX is the distribution that puts probability 1 on the single-node tree labelled X. If X ∈N (i.e., if X is a nonterminal) then: GX = X X→B1...Bn∈RX θX→B1...BnTDX(GB1, . . . , GBn) (1) where: TDA(G1, . . . , Gn)   PP X t1 tn . . . ! = n Y i=1 Gi(ti). That is, TDA(G1, . . . , Gn) is a distribution over TA where each subtree ti is generated independently from Gi. These equations have solutions (i.e., the PCFG is said to be “consistent”) when the rule probabilities θA obey certain conditions; see e.g., Wetherell (1980) for details. The PCFG generates the distribution over trees GS, where S is the start symbol. The distribution over the strings it generates is obtained by marginalising over the trees. In a Bayesian PCFG one puts Dirichlet priors Dir(αX) on each of the multinomial rule probability vectors θX for each nonterminal X ∈N. This means that there is one Dirichlet parameter αX→β for each rule X →β ∈R in the CFG. In the “unsupervised” inference problem for a PCFG one is given a CFG, parameters αX for the Dirichlet priors over the rule probabilities, and a corpus of strings. The task is to infer the corresponding posterior distribution over rule probabilities θX. Recently Bayesian inference algorithms for PCFGs have been described. Kurihara and Sato (2006) describe a Variational Bayes algorithm for inferring PCFGs using a mean-field approximation, while Johnson et al. (2007a) describe a Markov Chain Monte Carlo algorithm based on Gibbs sampling. 4 LDA topic models as PCFGs This section explains how to construct a PCFG that generates the same distribution over a collection of documents as an LDA model, and where Bayesian inference for the PCFG’s rule probabilities yields the corresponding distributions as Bayesian inference of the corresponding LDA models. (There are several different ways of encoding LDA models as PCFGs; the one presented here is not the most succinct — it is possible to collapse the Doc and Doc′ nonterminals — but it has the advantage that the LDA distributions map straight-forwardly onto PCFG nonterminals). 
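Before the encoding is spelled out, it may help to make the Section 3 definitions concrete: the probability a PCFG assigns to a particular tree is simply the product of the probabilities of the rules used at its nodes, with terminals contributing probability one, exactly as in equation (1). The sketch below computes this recursively; the tuple-based tree encoding, the rule encoding and the names are our own choices, not part of the formulation above.

```python
# Sketch of the recursive tree probability under a PCFG: the probability
# of a tree rooted in X is the probability of the rule used at its root
# times the probabilities of its (independently generated) subtrees.
# Trees are nested tuples; a bare string is a terminal with probability 1.

def tree_prob(tree, theta):
    """Probability of `tree` given rule probabilities `theta`,
    where theta maps (parent, (child_1, ..., child_n)) -> probability."""
    if isinstance(tree, str):          # terminal symbol
        return 1.0
    parent, children = tree[0], tree[1:]
    labels = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = theta[(parent, labels)]        # rule probability theta_{X -> B1..Bn}
    for child in children:
        p *= tree_prob(child, theta)   # independent subtrees, cf. eq. (1)
    return p

# Toy usage.
theta = {("S", ("NP", "VP")): 1.0,
         ("NP", ("dogs",)): 0.5,
         ("VP", ("bark",)): 0.2}
print(tree_prob(("S", ("NP", "dogs"), ("VP", "bark")), theta))  # 0.1
```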
The terminals W of the CFG consist of the vocabulary V of the LDA model plus a set of special “document identifier” terminals “ j” for each document j ∈1, . . . , m, where m is the number of documents. In the PCFG encoding strings from document j are prefixed with “ j”; this indicates to the grammar which document the string comes from. The nonterminals consist of the start symbol Sentence, Docj and Doc′ j for each j ∈1, . . . , m, and Topici for each i ∈1, . . . , ℓ, where ℓis the number of topics in the LDA model. The rules of the CFG are all instances of the following schemata: Sentence →Doc′ j j ∈1, . . . , m Doc′ j → j j ∈1, . . . , m Doc′ j →Doc′ j Docj j ∈1, . . . , m Docj →Topici i ∈1, . . . , ℓ; j ∈1, . . . , m Topici →w i ∈1, . . . , ℓ; w ∈V Figure 2 depicts a tree generated by such a CFG. The relationship between the LDA model and the PCFG can be understood by studying the trees generated by the CFG. In these trees the leftbranching spine of nodes labelled Doc′ j propagate the document identifier throughout the whole tree. The nodes labelled Topici indicate the topics assigned to particular words, and the local trees expanding Docj to Topici (one per word in the document) indicate the distribution of topics in the document. The corresponding Bayesian PCFG associates probabilities with each of the rules in the CFG. The probabilities θTopici associated with the rules expanding the Topici nonterminals indicate how words are distributed across topics; the θTopici probabilities correspond exactly to to the φi probabilities in the LDA model. The probabilities 1150 Sentence Doc3' Doc3' Doc3' Doc3' Doc3' _3 Doc3 Topic4 shallow Doc3 Topic4 circuits Doc3 Topic4 compute Doc3 Topic7 faster Figure 2: A tree generated by the CFG encoding an LDA topic model. The prefix “ 3” indicates that this string belongs to document 3. The tree also indicates the assignment of words to topics. θDocj associated with rules expanding Docj specify the distribution of topics in document j; they correspond exactly to the probabilities θj of the LDA model. (The PCFG also specifies several other distributions that are suppressed in the LDA model. For example θSentence specifies the distribution of documents in the corpus. However, it is easy to see that these distributions do not influence the topic distributions; indeed, the expansions of the Sentence nonterminal are completely determined by the document distribution in the corpus, and are not affected by θSentence). A Bayesian PCFG places Dirichlet priors Dir(αA) on the corresponding rule probabilities θA for each A ∈N. In the PCFG encoding an LDA model, the αTopici parameters correspond exactly to the β parameters of the LDA model, and the αDocj parameters correspond to the α parameters of the LDA model. As suggested above, each document Dj in the LDA model is mapped to a string in the corpus used to train the corresponding PCFG by prefixing it with a document identifier “ j”. Given this training data, the posterior distribution over rule probabilities θDocj→Topici is the same as the posterior distribution over topics given documents θj,i in the original LDA model. As we will see below, this connection between PCFGs and LDA topic models suggests a number of interesting variants of both PCFGs and topic models. Note that we are not suggesting that Bayesian inference for PCFGs is necessarily a good way of estimating LDA topic models. 
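The encoding itself is cheap to produce; as an illustration, the following sketch simply enumerates the rule schemata above for a concrete collection of m documents, ℓ topics and a vocabulary V (the tuple representation of rules and the function name are ours). The expensive part is the subsequent inference, not the grammar construction.

```python
# Sketch that enumerates the LDA-encoding rule schemata for m documents,
# l topics and a vocabulary V; rules are (parent, children) tuples.

def lda_pcfg_rules(m, l, vocabulary):
    rules = []
    for j in range(1, m + 1):
        rules.append(("Sentence", (f"Doc'_{j}",)))
        rules.append((f"Doc'_{j}", (f"_{j}",)))                 # document-identifier terminal
        rules.append((f"Doc'_{j}", (f"Doc'_{j}", f"Doc_{j}")))  # left-branching spine
        for i in range(1, l + 1):
            rules.append((f"Doc_{j}", (f"Topic_{i}",)))         # topic choice, cf. theta_j
    for i in range(1, l + 1):
        for w in vocabulary:
            rules.append((f"Topic_{i}", (w,)))                  # word choice, cf. phi_i
    return rules

# Example: 2 documents, 3 topics, a four-word vocabulary -> 24 rules.
print(len(lda_pcfg_rules(2, 3, ["shallow", "circuits", "compute", "faster"])))
```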
Current Bayesian PCFG inference algorithms require time proportional to the cube of the length of the longest string in the training corpus, and since these strings correspond to entire documents in our embedding, blindly applying a Bayesian PCFG inference algorithm is likely to be impractical. A little reflection shows that the embedding still holds if the strings in the PCFG corpus correspond to sentences or even smaller units of the original document collection, so a single document would be mapped to multiple strings in the PCFG inference task. In this way the cubic time complexity of PCFG inference can be mitigated. Also, the trees generated by these CFGs have a very specialized left-branching structure, and it is straightforward to modify the general-purpose CFG inference procedures to avoid the cubic time complexity for such grammars: thus it may be practical to estimate topic models via grammatical inference. However, we believe that the primary value of the embedding of LDA topic models into Bayesian PCFGs is theoretical: it suggests a number of novel extensions of both topic models and grammars that may be worth exploring. Our claim here is not that these models are the best algorithms for performing these tasks, but that the relationship we described between LDA models and PCFGs suggests a variety of interesting novel models. We end this section with a simple example of such a modification to LDA. Inspired by the standard embedding of HMMs into PCFGs, we propose a “sticky topic” variant of LDA in which adjacent words are more likely to be assigned the same topic. Such an LDA extension is easy to describe as a PCFG (see Fox et al. (2008) for a similar model presented as an extended HMM). The nonterminals Sentence and Topici for i = 1, . . . , ℓhave the same interpretation as before, but we introduce new nonterminals Docj,i that indicate we have just generated a nonterminal in document j belonging to topic i. Given a collection of m documents and ℓtopics, the rule schemata are as follows: Sentence →Docj,i i ∈1, . . . , ℓ; j ∈1, . . . , m Docj,1 → j j ∈1, . . . , m Docj,i →Docj,i′ Topici i, i′ ∈1, . . . , ℓ; j ∈1, . . . , m Topici →w i ∈1, . . . , ℓ; w ∈V A sample parse generated by a “sticky topic” 1151 Sentence Doc3,7 Doc3,4 Doc3,4 Doc3,4 Doc3,1 _3 Topic4 shallow Topic4 circuits Topic4 compute Topic7 faster Figure 3: A tree generated by the “sticky topic” CFG. Here a nonterminal Doc3, 7 indicates we have just generated a word in document 3 belonging to topic 7. CFG is shown in Figure 3. The probabilities of the rules Docj,i →Docj,i′ Topici in this PCFG encode the probability of shifting from topic i to topic i′ (this PCFG can be viewed as generating the string from right to left). We can use non-uniform sparse Dirichlet priors on the probabilities of these rules to encourage “topic stickiness”. Specifically, by setting the Dirichlet parameters for the “topic shift” rules Docj,i′ →Docj,i Topici where i′ ̸= i much lower than the parameters for the “topic preservation” rules Docj,i →Docj,i Topici, Bayesian inference will be biased to find distributions in which adjacent words will tend to have the same topic. 5 Adaptor Grammars Non-parametric Bayesian inference, where the inference task involves learning not just the values of a finite vector of parameters but which parameters are relevant, has been the focus of intense research in machine learning recently. 
In the topicmodelling community this has lead to work on Dirichlet Processes and Chinese Restaurant Processes, which can be used to estimate the number of topics as well as their distribution across documents (Teh et al., 2006). There are two obvious non-parametric extensions to PCFGs. In the first we regard the set of nonterminals N as potentially unbounded, and try to learn the set of nonterminals required to describe the training corpus. This approach goes under the name of the “infinite HMM” or “infinite PCFG” (Beal et al., 2002; Liang et al., 2007; Liang et al., 2009). Informally, we are given a set of “basic categories”, say NP, VP, etc., and a set of rules that use these basic categories, say S →NP VP. The inference task is to learn a set of refined categories and rules (e.g., S7 →NP2 VP5) as well as their probabilities; this approach can therefore be viewed as a Bayesian version of the “split-merge” approach to grammar induction (Petrov and Klein, 2007). In the second approach, which we adopt here, we regard the set of rules R as potentially unbounded, and try to learn the rules required to describe a training corpus as well as their probabilities. Adaptor grammars are an example of this approach (Johnson et al., 2007b), where entire subtrees generated by a “base grammar” can be viewed as distinct rules (in that we learn a separate probability for each subtree). The inference task is non-parametric if there are an unbounded number of such subtrees. We review the adaptor grammar generative process below; for an informal introduction see Johnson (2008) and for details of the adaptor grammar inference procedure see Johnson and Goldwater (2009). An adaptor grammar (N, W, R, S, θ, A, C) consists of a PCFG (N, W, R, S, θ) in which a subset A ⊆N of the nonterminals are adapted, and where each adapted nonterminal X ∈A has an associated adaptor CX. An adaptor CX for X is a function that maps a distribution over trees TX to a distribution over distributions over TX (we give examples of adaptors below). Just as for a PCFG, an adaptor grammar defines distributions GX over trees TX for each X ∈ N ∪W. If X ∈W or X ̸∈A then GX is defined just as for a PCFG above, i.e., using (1). However, if X ∈A then GX is defined in terms of an additional distribution HX as follows: GX ∼ CX(HX) HX = X X→Y1...Ym∈RX θX→Y1...YmTDX(GY1, . . . , GYm) That is, the distribution GX associated with an adapted nonterminal X ∈A is a sample from adapting (i.e., applying CX to) its “ordinary” PCFG distribution HX. In general adaptors are chosen for the specific properties they have. For example, with the adaptors used here GX typically concentrates mass on a smaller subset of the trees TX than HX does. Just as with the PCFG, an adaptor grammar generates the distribution over trees GS, where S ∈N 1152 is the start symbol. However, while GS in a PCFG is a fixed distribution (given the rule probabilities θ), in an adaptor grammar the distribution GS is itself a random variable (because each GX for X ∈A is random), i.e., an adaptor grammar generates a distribution over distributions over trees TS. However, the posterior joint distribution Pr(t) of a sequence t = (t1, . . . , tn) of trees in TS is well-defined: Pr(t) = Z GS(t1) . . . GS(tn) dG where the integral is over all of the random distributions GX, X ∈A. The adaptors we use in this paper are Dirichlet Processes or two-parameter Poisson-Dirichlet Processes, for which it is possible to compute this integral. 
One way to do this uses the predictive distributions: Pr(tn+1 | t, HX) ∝ Z GX(t1) . . . GX(tn+1)CX(GX | HX) dGX where t = (t1, . . . , tn) and each ti ∈TX. The predictive distribution for the Dirichlet Process is the (labeled) Chinese Restaurant Process (CRP), and the predictive distribution for the two-parameter Poisson-Dirichlet process is the (labeled) PitmanYor Process (PYP). In the context of adaptor grammars, the CRP is: CRP(t | t, αX, HX) ∝nt(t) + αXHX(t) where nt(t) is the number of times t appears in t and αX > 0 is a user-settable “concentration parameter”. In order to generate the next tree tn+1 a CRP either reuses a tree t with probability proportional to number of times t has been previously generated, or else it “backs off” to the “base distribution” HX and generates a fresh tree t with probability proportional to αXHX(t). The PYP is a generalization of the CRP: PYP(t | t, aX, bX, HX) ∝max(0, nt(t) −mt aX) + (maX + bX)HX(t) Here aX ∈[0, 1] and bX > 0 are user-settable parameters, and mt is the number of times the PYP has generated t in t from the base distribution HX, and m = P t∈TX mt is the number of times any tree has been generated from HX. (In the Chinese Restaurant metaphor, mt is the number of tables labeled with t, and m is the number of occupied tables). If aX = 0 then the PYP is equivalent to a CRP with αX = bX, while if aX = 1 then the PYP generates samples from HX. Informally, the CRP has a strong preference to regenerate trees that have been generated frequently before, leading to a “rich-get-richer” dynamics. The PYP can mitigate this somewhat by reducing the effective count of previously generated trees and redistributing that probability mass to new trees generated from HX. As Goldwater et al. (2006) explain, Bayesian inference for HX given samples from GX is effectively performed from types if aX = 0 and from tokens if aX = 1, so varying aX smoothly interpolates between type-based and token-based inference. Adaptor grammars have previously been used primarily to study grammatical inference in the context of language acquisition. The word segmentation task involves segmenting a corpus of unsegmented phonemic utterance representations into words (Elman, 1990; Bernstein-Ratner, 1987). For example, the phoneme string corresponding to “you want to see the book” (with its correct segmentation indicated) is as follows: y △u ▲w △a △n △t ▲t △u ▲s △i ▲D △6 ▲b △U △k We can represent any possible segmentation of any possible sentence as a tree generated by the following unigram adaptor grammar. Sentence →Word Sentence →Word Sentence Word →Phonemes Phonemes →Phoneme Phonemes →Phoneme Phonemes The trees generated by this adaptor grammar are the same as the trees generated by the CFG rules. For example, the following skeletal parse in which all but the Word nonterminals are suppressed (the others are deterministically inferrable) shows the parse that corresponds to the correct segmentation of the string above. (Word y u) (Word w a n t) (Word t u) (Word s i) (Word d 6) (Word b u k) Because the Word nonterminal is adapted (indicated here by underlining) the adaptor grammar learns the probability of the entire Word subtrees (e.g., the probability that b u k is a Word); see Johnson (2008) for further details. 1153 6 Topic models with collocations Here we combine ideas from the unigram word segmentation adaptor grammar above and the PCFG encoding of LDA topic models to present a novel topic model that learns topical collocations. 
(For a non-grammar-based approach to this problem see Wang et al. (2007)). Specifically, we take the PCFG encoding of the LDA topic model described above, but modify it so that the Topici nodes generate sequences of words rather than single words. Then we adapt each of the Topici nonterminals, which means that we learn the probability of each of the sequences of words it can expand to. Sentence →Docj j ∈1, . . . , m Docj → j j ∈1, . . . , m Docj →Docj Topici i ∈1, . . . , ℓ; j ∈1, . . . , m Topici →Words i ∈1, . . . , ℓ Words →Word Words →Words Word Word →w w ∈V In order to demonstrate that this model works, we implemented this using the publicallyavailable adaptor grammar inference software,1 and ran it on the NIPS corpus (composed of published NIPS abstracts), which has previously been used for studying collocation-based topic models (Griffiths et al., 2007). Because there is no generally accepted evaluation for collocation-finding, we merely present some of the sample analyses found by our adaptor grammar. We ran our adaptor grammar with ℓ= 20 topics (i.e., 20 distinct Topici nonterminals). Adaptor grammar inference on this corpus is actually relatively efficient because the corpus provided by Griffiths et al. (2007) is already segmented by punctuation, so the terminal strings are generally rather short. Rather than set the Dirichlet parameters by hand, we placed vague priors on them and estimated them as described in Johnson and Goldwater (2009). The following are some examples of collocations found by our adaptor grammar: Topic0 →cost function Topic0 →fixed point Topic0 →gradient descent Topic0 →learning rates 1http://web.science.mq.edu.au/ ˜mjohnson/Software.htm Topic1 →associative memory Topic1 →hamming distance Topic1 →randomly chosen Topic1 →standard deviation Topic3 →action potentials Topic3 →membrane potential Topic3 →primary visual cortex Topic3 →visual system Topic10 →nervous system Topic10 →action potential Topic10 →ocular dominance Topic10 →visual field The following are skeletal sample parses, where we have elided all but the adapted nonterminals (i.e., all we show are the Topic nonterminals, since the other structure can be inferred deterministically). Note that because Griffiths et al. (2007) segmented the NIPS abstracts at punctuation symbols, the training corpus contains more than one string from each abstract. 3 (Topic5 polynomial size) (Topic15 threshold circuits) 4 (Topic11 studied) (Topic19 pattern recognition algorithms) 4 (Topic2 feedforward neural network) (Topic1 implementation) 5 (Topic11 single) (Topic10 ocular dominance stripe) (Topic12 low) (Topic3 ocularity) (Topic12 drift rate) 7 Finding the structure of proper names Grammars offer structural and positional sensitivity that is not exploited in the basic LDA topic models. Here we explore the potential for using Bayesian inference for learning linear ordering constraints that hold between elements within proper names. The Penn WSJ treebank is a widely used resource within computational linguistics (Marcus et al., 1993), but one of its weaknesses is that it does not indicate any structure internal to base noun phrases (i.e., it presents “flat” analyses of the pre-head NP elements). For many applications it would be extremely useful to have a more elaborated analysis of this kind of NP structure. 
For example, in an NP coreference application, if we could determine that Bill and Hillary are both first 1154 names then we could infer that Bill Clinton and Hillary Clinton are likely to refer to distinct individuals. On the other hand, because Mr in Mr Clinton is not a first name, it is possible that Mr Clinton and Bill Clinton refer to the same individual (Elsner et al., 2009). Here we present an adaptor grammar based on the insights of the PCFG encoding of LDA topic models that learns some of the structure of proper names. The key idea is that elements in proper names typically appear in a fixed order; we expect honorifics to appear before first names, which appear before middle names, which in turn appear before surnames, etc. Similarly, many company names end in fixed phrases such as Inc. Here we think of first names as a kind of topic, albeit one with a restricted positional location. One of the challenges is that some of these structural elements can be filled by multiword expressions; e.g., de Groot can be a surname. We deal with this by permitting multi-word collocations to fill the corresponding positions, and use the adaptor grammar machinery to learn these collocations. Inspired by the grammar presented in Elsner et al. (2009), our adaptor grammar is as follows, where adapted nonterminals are indicated by underlining as before. NP →(A0) (A1) . . . (A6) NP →(B0) (B1) . . . (B6) NP →Unordered+ A0 →Word+ . . . A6 →Word+ B0 →Word+ . . . B6 →Word+ Unordered →Word+ In this grammar parentheses indicate optionality, and the Kleene plus indicates iteration (these were manually expanded into ordinary CFG rules in our experiments). The grammar provides three different expansions for proper names. The first expansion says that a proper name can consist of some subset of the six different collocation classes A0 through A6 in that order, while the second expansion says that a proper name can consist of some subset of the collocation classes B0 through B6, again in that order. Finally, the third expansion says that a proper name can consist of an arbitrary sequence of “unordered” collocations (this is intended as a “catch-all” expansion to provide analyses for proper names that don’t fit either of the first two expansions). We extracted all of the proper names (i.e., phrases of category NNP and NNPS) in the Penn WSJ treebank and used them as the training corpora for the adaptor grammar just described. The adaptor grammar inference procedure found skeletal sample parses such as the following: (A0 barrett) (A3 smith) (A0 albert) (A2 j.) (A3 smith) (A4 jr.) (A0 robert) (A2 b.) (A3 van dover) (B0 aim) (B1 prime rate) (B2 plus) (B5 fund) (B6 inc.) (B0 balfour) (B1 maclaine) (B5 international) (B6 ltd.) (B0 american express) (B1 information services) (B6 co) (U abc) (U sports) (U sports illustrated) (U sports unlimited) While a full evaluation will have to await further study, in general it seems to distinguish person names from company names reasonably reliably, and it seems to have discovered that person names consist of a first name (A0), a middle name or initial (A2), a surname (A3) and an optional suffix (A4). Similarly, it seems to have uncovered that company names typically end in a phrase such as inc, ltd or co. 8 Conclusion This paper establishes a connection between two very different kinds of probabilistic models; LDA models of the kind used for topic modelling, and PCFGs, which are a standard model of hierarchical structure in language. 
The embedding we presented shows how to express an LDA model as a PCFG, and has the property that Bayesian inference of the parameters of that PCFG produces an equivalent model to that produced by Bayesian inference of the LDA model’s parameters. The primary value of this embedding is theoretical rather than practical; we are not advocating the use of PCFG estimation procedures to infer LDA models. Instead, we claim that the embedding suggests novel extensions to both the LDA topic models and PCFG-style grammars. We justified this claim by presenting several hybrid models that combine aspects of both topic models and 1155 grammars. We don’t claim that these are necessarily the best models for performing any particular tasks; rather, we present them as examples of models inspired by a combination of PCFGs and LDA topic models. We showed how the LDA to PCFG embedding suggested a “sticky topic” model extension to LDA. We then discussed adaptor grammars, and inspired by the LDA topic models, presented a novel topic model whose primitive elements are multi-word collocations rather than words. We concluded with an adaptor grammar that learns aspects of the internal structure of proper names. Acknowledgments This research was funded by US NSF awards 0544127 and 0631667, as well as by a start-up award from Macquarie University. I’d like to thank the organisers and audience at the Topic Modeling workshop at NIPS 2009, my former colleagues at Brown University (especially Eugene Charniak, Micha Elsner, Sharon Goldwater, Tom Griffiths and Erik Sudderth), my new colleagues at Macquarie University and the ACL reviewers for their excellent suggestions and comments on this work. Naturally all errors remain my own. References M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. 2002. The infinite Hidden Markov Model. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, pages 577–584. The MIT Press. N. Bernstein-Ratner. 1987. The phonology of parentchild speech. In K. Nelson and A. van Kleeck, editors, Children’s Language, volume 6. Erlbaum, Hillsdale, NJ. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Jeffrey Elman. 1990. Finding structure in time. Cognitive Science, 14:197–211. Micha Elsner, Eugene Charniak, and Mark Johnson. 2009. Structured generative models for unsupervised named-entity clustering. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 164–172, Boulder, Colorado, June. Association for Computational Linguistics. E. Fox, E. Sudderth, M. Jordan, and A. Willsky. 2008. An HDP-HMM for systems with state persistence. In Andrew McCallum and Sam Roweis, editors, Proceedings of the 25th Annual International Conference on Machine Learning (ICML 2008), pages 312–319. Omnipress. Sharon Goldwater, Tom Griffiths, and Mark Johnson. 2006. Interpolating between types and tokens by estimating power-law generators. In Y. Weiss, B. Sch¨olkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 459– 466, Cambridge, MA. MIT Press. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:52285235. Thomas L. Griffiths, Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representation. Psychological Review, 114(2):211244. 
Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 317–325, Boulder, Colorado, June. Association for Computational Linguistics. Mark Johnson, Thomas Griffiths, and Sharon Goldwater. 2007a. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 139–146, Rochester, New York, April. Association for Computational Linguistics. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007b. Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641–648. MIT Press, Cambridge, MA. Mark Johnson. 2008. Using adaptor grammars to identifying synergies in the unsupervised acquisition of linguistic structure. In Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics, Columbus, Ohio. Association for Computational Linguistics. Kenichi Kurihara and Taisuke Sato. 2006. Variational Bayesian grammar induction for natural language. In 8th International Colloquium on Grammatical Inference. Percy Liang, Slav Petrov, Michael Jordan, and Dan Klein. 2007. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 688– 697. 1156 Percy Liang, Michael Jordan, and Dan Klein. 2009. Probabilistic grammars and hierarchical Dirichlet processes. In The Oxford Handbook of Applied Bayesian Analysis. Oxford University Press. Michell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404–411, Rochester, New York. Association for Computational Linguistics. Y. W. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101:1566–1581. Xuerui Wang, Andrew McCallum, and Xing Wei. 2007. Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Proceedings of the 7th IEEE International Conference on Data Mining (ICDM), pages 697–702. C.S. Wetherell. 1980. Probabilistic languages: A review and some open questions. Computing Surveys, 12:361–379. 1157
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1158–1167, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Cognitive Cost Model of Annotations Based on Eye-Tracking Data Katrin Tomanek Language & Information Engineering (JULIE) Lab Universit¨at Jena Jena, Germany Udo Hahn Language & Information Engineering (JULIE) Lab Universit¨at Jena Jena, Germany Steffen Lohmann Dept. of Computer Science & Applied Cognitive Science Universit¨at Duisburg-Essen Duisburg, Germany J¨urgen Ziegler Dept. of Computer Science & Applied Cognitive Science Universit¨at Duisburg-Essen Duisburg, Germany Abstract We report on an experiment to track complex decision points in linguistic metadata annotation where the decision behavior of annotators is observed with an eyetracking device. As experimental conditions we investigate different forms of textual context and linguistic complexity classes relative to syntax and semantics. Our data renders evidence that annotation performance depends on the semantic and syntactic complexity of the decision points and, more interestingly, indicates that fullscale context is mostly negligible – with the exception of semantic high-complexity cases. We then induce from this observational data a cognitively grounded cost model of linguistic meta-data annotations and compare it with existing non-cognitive models. Our data reveals that the cognitively founded model explains annotation costs (expressed in annotation time) more adequately than non-cognitive ones. 1 Introduction Today’s NLP systems, in particular those relying on supervised ML approaches, are meta-data greedy. Accordingly, in the past years, we have witnessed a massive quantitative growth of annotated corpora. They differ in terms of the natural languages and domains being covered, the types of linguistic meta-data being solicited, and the text genres being served. We have seen largescale efforts in syntactic and semantic annotations in the past related to POS tagging and parsing, on the one hand, and named entities and relations (propositions), on the other hand. More recently, we are dealing with even more challenging issues such as subjective language, a large variety of co-reference and (e.g., RST-style) text structure phenomena, Since the NLP community is further extending their work into these more and more sophisticated semantic and pragmatic analytics, there seems to be no end in sight for increasingly complex and diverse annotation tasks. Yet, producing annotations is pretty expensive. So the question comes up, how we can rationally manage these investments so that annotation campaigns are economically doable without loss in annotation quality. The economics of annotations are at the core of Active Learning (AL) where those linguistic samples are focused on in the entire document collection, which are estimated as being most informative to learn an effective classification model (Cohn et al., 1996). This intentional selection bias stands in stark contrast to prevailing sampling approaches where annotation examples are randomly chosen. When different approaches to AL are compared with each other, or with standard random sampling, in terms of annotation efficiency, up until now, the AL community assumed uniform annotation costs for each linguistic unit, e.g. words. This claim, however, has been shown to be invalid in several studies (Hachey et al., 2005; Settles et al., 2008; Tomanek and Hahn, 2010). 
If uniformity does not hold and, hence, the number of annotated units does not indicate the true annotation efforts required for a specific sample, empirically more adequate cost models are needed. Building predictive models for annotation costs has only been addressed in few studies for now (Ringger et al., 2008; Settles et al., 2008; Arora et al., 2009). The proposed models are based on easy-to-determine, yet not so explanatory variables (such as the number of words to be annotated), indicating that accurate models of annotation costs remain a desideratum. We here, alternatively, consider different classes of syntactic and semantic complexity that might affect the cognitive load during the annotation process, with 1158 the overall goal to find additional and empirically more adequate variables for cost modeling. The complexity of linguistic utterances can be judged either by structural or by behavioral criteria. Structural complexity emerges, e.g., from the static topology of phrase structure trees and procedural graph traversals exploiting the topology of parse trees (see Szmrecs´anyi (2004) or Cheung and Kemper (1992) for a survey of metrics of this type). However, structural complexity criteria do not translate directly into empirically justified cost measures and thus have to be taken with care. The behavioral approach accounts for this problem as it renders observational data of the annotators’ eye movements. The technical vehicle to gather such data are eye-trackers which have already been used in psycholinguistics (Rayner, 1998). Eye-trackers were able to reveal, e.g., how subjects deal with ambiguities (Frazier and Rayner, 1987; Rayner et al., 2006; Traxler and Frazier, 2008) or with sentences which require re-analysis, so-called garden path sentences (Altmann et al., 2007; Sturt, 2007). The rationale behind the use of eye-tracking devices for the observation of annotation behavior is that the length of gaze durations and behavioral patterns underlying gaze movements are considered to be indicative of the hardness of the linguistic analysis and the expenditures for the search of clarifying linguistic evidence (anchor words) to resolve hard decision tasks such as phrase attachments or word sense disambiguation. Gaze duration and search time are then taken as empirical correlates of linguistic complexity and, hence, uncover the real costs. We therefore consider eyetracking as a promising means to get a better understanding of the nature of the linguistic annotation processes with the ultimate goal of identifying predictive factors for annotation cost models. In this paper, we first describe an empirical study where we observed the annotators’ reading behavior while annotating a corpus. Section 2 deals with the design of the study, Section 3 discusses its results. In Section 4 we then focus on the implications this study has on building cost models and compare a simple cost model mainly relying on word and character counts and additional simple descriptive characteristics with one that can be derived from experimental data as provided from eye-tracking. We conclude with experiments which reveal that cognitively grounded models outperform simpler ones relative to cost prediction using annotation time as a cost measure. Based on this finding, we suggest that cognitive criteria are helpful for uncovering the real costs of corpus annotation. 
2 Experimental Design In our study, we applied, for the first time ever to the best of our knowledge, eye-tracking to study the cognitive processes underlying the annotation of linguistic meta-data, named entities in particular. In this task, a human annotator has to decide for each word whether or not it belongs to one of the entity types of interest. We used the English part of the MUC7 corpus (Linguistic Data Consortium, 2001) for our study. It contains New York Times articles from 1996 reporting on plane crashes. These articles come already annotated with three types of named entities considered important in the newspaper domain, viz. “persons”, “locations”, and “organizations”. Annotation of these entity types in newspaper articles is admittedly fairly easy. We chose this rather simple setting because the participants in the experiment had no previous experience with document annotation and no serious linguistic background. Moreover, the limited number of entity types reduced the amount of participants’ training prior to the actual experiment, and positively affected the design and handling of the experimental apparatus (see below). We triggered the annotation processes by giving our participants specific annotation examples. An example consists of a text document having one single annotation phrase highlighted which then had to be semantically annotated with respect to named entity mentions. The annotation task was defined such that the correct entity type had to be assigned to each word in the annotation phrase. If a word belongs to none of the three entity types a fourth class called “no entity” had to be assigned. The phrases highlighted for annotation were complex noun phrases (CNPs), each a sequence of words where a noun (or an equivalent nominal expression) constitutes the syntactic head and thus dominates dependent words such as determiners, adjectives, or other nouns or nominal expressions (including noun phrases and prepositional phrases). CNPs with even more elaborate internal syntactic structures, such as coordinations, appositions, or relative clauses, were isolated from 1159 their syntactic host structure and the intervening linguistic material containing these structures was deleted to simplify overly long sentences. We also discarded all CNPs that did not contain at least one entity-critical word, i.e., one which might be a named entity according to its orthographic appearance (e.g., starting with an upper-case letter). It should be noted that such orthographic signals are by no means a sufficient condition for the presence of a named entity mention within a CNP. The choice of CNPs as stimulus phrases is motivated by the fact that named entities are usually fully encoded by this kind of linguistic structure. The chosen stimulus – an annotation example with one phrase highlighted for annotation – allows for an exact localization of the cognitive processes and annotation actions performed relative to that specific phrase. 2.1 Independent Variables We defined two measures for the complexity of the annotation examples: The syntactic complexity was given by the number of nodes in the constituent parse tree which are dominated by the annotation phrase (Szmrecs´anyi, 2004).1 According to a threshold on the number of nodes in such a parse tree, we classified CNPs as having either high or low syntactic complexity. 
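The syntactic-complexity measure just described (the number of parse-tree nodes dominated by the annotation phrase, thresholded into high vs. low complexity) can be sketched as follows. This is a hedged illustration rather than the authors' code: it assumes a constituency parse is already available as an NLTK Tree (the paper used the OpenNLP parser), and the threshold value is a placeholder, since the paper does not report the threshold it used.

```python
# Sketch: count constituent nodes dominated by the annotation phrase
# in a constituency parse and threshold into high/low complexity.
from nltk import Tree

def dominated_nodes(parse: Tree, phrase_tokens: list) -> int:
    """Number of constituent nodes dominated by the annotation phrase."""
    for subtree in parse.subtrees():
        if subtree.leaves() == phrase_tokens:
            # all constituents strictly below the phrase's root node
            return sum(1 for _ in subtree.subtrees()) - 1
    return 0

def syntactic_complexity_class(parse: Tree, phrase_tokens: list,
                               threshold: int = 6) -> str:
    # the threshold is a placeholder; the paper determined it empirically
    return "high" if dominated_nodes(parse, phrase_tokens) >= threshold else "low"

# Toy example with a phrase from the paper ("Roselawn accident"):
parse = Tree.fromstring("(NP (DT the) (NNP Roselawn) (NN accident))")
print(dominated_nodes(parse, ["the", "Roselawn", "accident"]))  # 3
```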
The semantic complexity of an annotation example is based on the inverse document frequency df of the words in the annotation phrase according to a reference corpus.2 We calculated the semantic complexity score of an annotation phrase as max i 1 df (wi), where wi is the i-th word of the annotation phrase. Again, we empirically determined a threshold classifying annotation phrases as having either high or low semantic complexity. Additionally, this automatically generated classification was manually checked and, if necessary, revised by two annotation experts. For instance, if an annotation phrase contained a strong trigger (e.g., a social role or job title, as with “spokeswoman” in the annotation phrase “spokeswoman Arlene”), it was classified as a low-semantic-complexity item even though it might have been assigned a high inverse document frequency (due to the infrequent word “Arlene”). 1Constituency parse structure was obtained from the OPENNLP parser (http://opennlp.sourceforge. net/) trained on PennTreeBank data. 2We chose the English part of the Reuters RCV2 corpus as the reference corpus for our experiments. Two experimental groups were formed to study different contexts. In the document context condition the whole newspaper article was shown as annotation example, while in the sentence context condition only the sentence containing the annotation phrase was presented. The participants3 were randomly assigned to one of these groups. We decided for this between-subjects design to avoid any irritation of the participants caused by constantly changing contexts. Accordingly, the participants were assigned to one of the experimental groups and corresponding context condition already in the second training phase that took place shortly before the experiment started (see below). 2.2 Hypotheses and Dependent Variables We tested the following two hypotheses: Hypothesis H1: Annotators perform differently in the two context conditions. H1 is based on the linguistically plausible assumption that annotators are expected to make heavy use of the surrounding context because such context could be helpful for the correct disambiguation of entity classes. Accordingly, lacking context, an annotator is expected to annotate worse than under the condition of full context. However, the availability of (too much) context might overload and distract annotators, with a presumably negative effect on annotation performance. Hypothesis H2: The complexity of the annotation phrases determines the annotation performance. The assumption is that high syntactic or semantic complexity significantly lowers the annotation performance. In order to test these hypotheses we collected data for the following dependent variables: (a) the annotation accuracy – we identified erroneous entities by comparison with the original gold annotations in the MUC7 corpus, (b) the time needed per annotation example, and (c) the distribution and duration of the participants’ eye gazes. 320 subjects (12 female) with an average age of 24 years (mean = 24, standard deviation (SD) = 2.8) and normal or corrected-to-normal vision capabilities took part in the study. All participants were students with a computing-related study background, with good to very good English language skills (mean = 7.9, SD = 1.2, on a ten-point scale with 1 = “poor” and 10 = “excellent”, self-assessed), but without any prior experience in annotation and without previous exposure to linguistic training. 
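The semantic-complexity score defined above (the maximum of 1/df(w) over the words in the annotation phrase, thresholded and then manually revised) might be computed as in the following sketch. The handling of unseen words, the lower-casing, and the threshold are assumptions made here for illustration; the paper used the English part of Reuters RCV2 as the reference corpus and does not report its threshold.

```python
# Sketch: semantic complexity as the maximum inverse document frequency
# of any word in the annotation phrase, relative to a reference corpus.
from collections import Counter

def document_frequencies(documents: list) -> Counter:
    """df(w): number of reference documents containing word w (lower-cased)."""
    df = Counter()
    for doc in documents:
        df.update(set(w.lower() for w in doc))
    return df

def semantic_complexity(phrase_tokens: list, df: Counter) -> float:
    # unseen words get df = 1 so that rare/unknown words score highest (assumption)
    return max(1.0 / max(df.get(w.lower(), 1), 1) for w in phrase_tokens)

def semantic_complexity_class(phrase_tokens, df, threshold=0.01) -> str:
    # placeholder threshold; the paper's was set empirically and revised by hand
    return "high" if semantic_complexity(phrase_tokens, df) >= threshold else "low"

# Tiny stand-in reference corpus:
docs = [["the", "plane", "crashed"], ["the", "spokeswoman", "said"]]
df = document_frequencies(docs)
print(semantic_complexity(["spokeswoman", "Arlene"], df))  # 1.0 ("Arlene" is unseen)
```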
1160 2.3 Stimulus Material According to the above definition of complexity, we automatically preselected annotation examples characterized by either a low or a high degree of semantic and syntactic complexity. After manual fine-tuning of the example set assuring an even distribution of entity types and syntactic correctness of the automatically derived annotation phrases, we finally selected 80 annotation examples for the experiment. These were divided into four subsets of 20 examples each falling into one of the following complexity classes: sem-syn: low semantic/low syntactic complexity SEM-syn: high semantic/low syntactic complexity sem-SYN: low semantic/high syntactic complexity SEM-SYN: high semantic/high syntactic complexity 2.4 Experimental Apparatus and Procedure The annotation examples were presented in a custom-built tool and its user interface was kept as simple as possible not to distract the eye movements of the participants. It merely contained one frame showing the text of the annotation example, with the annotation phrase being highlighted. A blank screen was shown after each annotation example to reset the eyes and to allow a break, if needed. The time the blank screen was shown was not counted as annotation time. The 80 annotation examples were presented to all participants in the same randomized order, with a balanced distribution of the complexity classes. A variation of the order was hardly possible for technical and analytical reasons but is not considered critical due to extensive, pre-experimental training (see below). The limitation on 80 annotation examples reduces the chances of errors due to fatigue or lack of attention that can be observed in long-lasting annotation activities. Five introductory examples (not considered in the final evaluation) were given to get the subjects used to the experimental environment. All annotation examples were chosen in a way that they completely fitted on the screen (i.e., text length was limited) to avoid the need for scrolling (and eye distraction). The position of the CNP within the respective context was randomly distributed, excluding the first and last sentence. The participants used a standard keyboard to assign the entity types for each word of the annotation example. All but 5 keys were removed from the keyboard to avoid extra eye movements for finger coordination (three keys for the positive entity classes, one for the negative “no entity” class, and one to confirm the annotation). Pre-tests had shown that the participants could easily issue the annotations without looking down at the keyboard. We recorded the participant’s eye movements on a Tobii T60 eye-tracking device which is invisibly embedded in a 17” TFT monitor and comparatively tolerant to head movements. The participants were seated in a comfortable position with their head in a distance of 60-70 cm from the monitor. Screen resolution was set to 1280 x 1024 px and the annotation examples were presented in the middle of the screen in a font size of 16 px and a line spacing of 5 px. The presentation area had no fixed height and varied depending on the context condition and length of the newspaper article. The text was always vertically centered on the screen. All participants were familiarized with the annotation task and the guidelines in a preexperimental workshop where they practiced annotations on various exercise examples (about 60 minutes). 
During the next two days, one after the other participated in the actual experiment which took between 15 and 30 minutes, including calibration of the eye-tracking device. Another 20-30 minutes of training time directly preceded the experiment. After the experiment, participants were interviewed and asked to fill out a questionnaire. Overall, the experiment took about two hours for each participant for which they were financially compensated. Participants were instructed to focus more on annotation accuracy than on annotation time as we wanted to avoid random guessing. Accordingly, as an extra incentive, we rewarded the three participants with the highest annotation accuracy with cinema vouchers. None of the participants reported serious difficulties with the newspaper articles or annotation tool and all understood the annotation task very well. 3 Results We used a mixed-design analysis of variance (ANOVA) model to test the hypotheses, with the context condition as between-subjects factor and the two complexity classes as within-subject factors. 3.1 Testing Context Conditions To test hypothesis H1 we compared the number of annotation errors on entity-critical words made 1161 above before anno phrase after below percentage of participants looking at a sub-area 35% 32% 100% 34% 16% average number of fixations per sub-area 2.2 14.1 1.3 Table 1: Distribution of annotators’ attention among sub-areas per annotation example. by the annotators in the two contextual conditions (complete document vs. sentence). Surprisingly, on the total of 174 entity-critical words within the 80 annotation examples, we found exactly the same mean value of 30.8 errors per participant in both conditions. There were also no significant differences in the average time needed to annotate an example in both conditions (means of 9.2 and 8.6 seconds, respectively, with F(1, 18) = 0.116, p = 0.74).4 These results seem to suggest that it makes no difference (neither for annotation accuracy nor for time) whether or not annotators are shown textual context beyond the sentence that contains the annotation phrase. To further investigate this finding we analyzed eye-tracking data of the participants gathered for the document context condition. We divided the whole text area into five sub-areas as schematically shown in Figure 1. We then determined the average proportion of participants that directed their gaze at least once at these sub-areas. We considered all fixations with a minimum duration of 100 ms, using a fixation radius (i.e., the smallest distance that separates fixations) of 30 px and excluded the first second (mainly used for orientation and identification of the annotation phrase). Figure 1: Schematic visualization of the sub-areas of an annotation example. Table 1 reveals that on average only 35% of the 4In general, we observed a high variance in the number of errors and time values between the subjects. While, e.g., the fastest participant handled an example in 3.6 seconds on the average, the slowest one needed 18.9 seconds; concerning the annotation errors on the 174 entity-critical words, these ranged between 21 and 46 errors. participants looked in the textual context above the annotation phrase embedding sentence, and even less perceived the context below (16%). The sentence parts before and after the annotation phrase were, on the average, visited by one third (32% and 34%, respectively) of the participants. 
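The fixation preprocessing described above (minimum fixation duration of 100 ms, exclusion of the first second, assignment of fixations to the five sub-areas of Figure 1) could be implemented roughly as in the sketch below. Fixation detection itself (the 30 px radius) is assumed to be handled by the eye-tracker software, and the sub-area bounding boxes are hypothetical inputs rather than values from the paper.

```python
# Sketch: filter fixations and tally them by sub-area of the stimulus.
from collections import Counter

MIN_DURATION_MS = 100     # minimum fixation duration considered
SKIP_INITIAL_MS = 1000    # first second excluded (orientation phase)

def assign_subarea(x, y, boxes):
    """boxes: dict mapping sub-area name -> (x0, y0, x1, y1) in pixels."""
    for name, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def subarea_fixation_counts(fixations, boxes):
    """fixations: list of (x, y, onset_ms, duration_ms) tuples in temporal order."""
    counts = Counter()
    for x, y, onset, duration in fixations:
        if onset < SKIP_INITIAL_MS or duration < MIN_DURATION_MS:
            continue
        area = assign_subarea(x, y, boxes)
        if area is not None:
            counts[area] += 1
    return counts
```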
The uneven distribution of the annotators’ attention becomes even more apparent in a comparison of the total number of fixations on the different text parts: 14 out of an average of 18 fixations per example were directed at the annotation phrase and the surrounding sentence, the text context above the annotation chunk received only 2.2 fixations on the average and the text context below only 1.3. Thus, the eye-tracking data indicates that the textual context is not as important as might have been expected for quick and accurate annotation. This result can be explained by the fact that participants of the document-context condition used the context whenever they thought it might help, whereas participants of the sentence-context condition spent more time thinking about a correct answer, overall with the same result. 3.2 Testing Complexity Classes To test hypothesis H2 we also compared the average annotation time and the number of errors on entity-critical words for the complexity subsets (see Table 2). The ANOVA results show highly significant differences for both annotation time and errors.5 A pairwise comparison of all subsets in both conditions with a t-test showed nonsignificant results only between the SEM-syn and syn-SEM subsets.6 Thus, the empirical data generally supports hypothesis H2 in that the annotation performance seems to correlate with the complexity of the annotation phrase, on the average. 5Annotation time results: F(1, 18) = 25, p < 0.01 for the semantic complexity and F(1, 18) = 76.5, p < 0.01 for the syntactic complexity; Annotation complexity results: F(1, 18) = 48.7, p < 0.01 for the semantic complexity and F(1, 18) = 184, p < 0.01 for the syntactic complexity. 6t(9) = 0.27, p = 0.79 for the annotation time in the document context condition, and t(9) = 1.97, p = 0.08 for the annotation errors in the sentence context condition. 1162 experimental complexity e.-c. time errors condition class words mean SD mean SD rate sem-syn 36 4.0s 2.0 2.7 2.1 .075 document SEM-syn 25 9.2s 6.7 5.1 1.4 .204 condition sem-SYN 51 9.6s 4.0 9.1 2.9 .178 SEM-SYN 62 14.2s 9.5 13.9 4.5 .224 sem-syn 36 3.9s 1.3 1.1 1.4 .031 sentence SEM-syn 25 7.5s 2.8 6.2 1.9 .248 condition sem-SYN 51 9.6s 2.8 9.0 3.9 .176 SEM-SYN 62 13.5s 5.0 14.5 3.4 .234 Table 2: Average performance values for the 10 subjects of each experimental condition and 20 annotation examples of each complexity class: number of entity-critical words, mean annotation time and standard deviations (SD), mean annotation errors, standard deviations, and error rates (number of errors divided by number of entity-critical words). 3.3 Context and Complexity We also examined whether the need for inspecting the context increases with the complexity of the annotation phrase. Therefore, we analyzed the eye-tracking data in terms of the average number of fixations on the annotation phrase and on its embedding contexts for each complexity class (see Table 3). The values illustrate that while the number of fixations on the annotation phrase rises generally with both the semantic and the syntactic complexity, the number of fixations on the context rises only with semantic complexity. The number of fixations on the context is nearly the same for the two subsets with low semantic complexity (sem-syn and sem-SYN, with 1.0 and 1.5), while it is significantly higher for the two subsets with high semantic complexity (5.6 and 5.0), independent of the syntactic complexity.7 complexity fix. on phrase fix. 
on context class mean SD mean SD sem-syn 4.9 4.0 1.0 2.9 SEM-syn 8.1 5.4 5.6 5.6 sem-SYN 18.1 7.7 1.5 2.0 SEM-SYN 25.4 9.3 5.0 4.1 Table 3: Average number of fixations on the annotation phrase and context for the document condition and 20 annotation examples of each complexity class. These results suggest that the need for context mainly depends on the semantic complexity of the annotation phrase, while it is less influenced by its syntactic complexity. 7ANOVA result of F(1, 19) = 19.7, p < 0.01 and significant differences also in all pairwise comparisons. phrase antecedent Figure 2: Annotation example with annotation phrase and the antecedent for “Roselawn” in the text (left), and gaze plot of one participant showing a scanning-for-coreference behavior (right). This finding is also qualitatively supported by the gaze plots we generated from the eye-tracking data. Figure 2 shows a gaze plot for one participant that illustrates a scanning-for-coreference behavior we observed for several annotation phrases with high semantic complexity. In this case, words were searched in the upper context, which according to their orthographic signals might refer to a named entity but which could not completely be resolved only relying on the information given by the annotation phrase itself and its embedding sentence. This is the case for “Roselawn” in the annotation phrase “Roselawn accident”. The context reveals that Roselawn, which also occurs in the first sentence, is a location. A similar procedure is performed for acronyms and abbreviations which cannot be resolved from the immediate local context – searches mainly visit the upper context. As indicated by the gaze movements, it also became apparent that texts were rather scanned for hints instead of being deeply read. 1163 4 Cognitively Grounded Cost Modeling We now discuss whether the findings on dependent variables from our eye-tracking study are fruitful for actually modeling annotation costs. Therefore, we learn a linear regression model with time (an operationalization of annotation costs) as the dependent variable. We compare our ‘cognitive’ model against a baseline model which relies on some simple formal text features only, and test whether the newly introduced features help predict annotation costs more accurately. 4.1 Features The features for the baseline model, character- and word-based, are similar to the ones used by Ringger et al. (2008) and Settles et al. (2008).8 Our cognitive model, however, makes additional use of features based on linguistic complexity, and includes syntactic and semantic criteria related to the annotation phrases. These features were inspired by the insights provided by our eye-tracking experiments. All features are designed such that they can automatically be derived from unlabeled data, a necessary condition for such features to be practically applicable. To account for our findings that syntactic and semantic complexity correlates with annotation performance, we added three features based on syntactic, and two based on semantic complexity measures. We decided for the use of multiple measures because there is no single agreed-upon metric for either syntactic or semantic complexity. This decision is further motivated by findings which reveal that different measures are often complementary to each other so that their combination better approximates the inherent degrees of complexity (Roark et al., 2007). 
As for syntactic complexity, we use two measures based on structural complexity including (a) the number of nodes of a constituency parse tree which are dominated by the annotation phrase (cf. Section 2.1), and (b) given the dependency graph of the sentence embedding the annotation phrase, we consider the distance between words for each dependency link within the annotation phrase and consider the maximum over such dis8In preliminary experiments our set of basic features comprised additional features providing information on the usage of stop words in the annotation phrase and on the number of paragraphs, sentences, and words in the respective annotation example. However, since we found these features did not have any significant impact on the model, we removed them. tance values as another metric for syntactic complexity. Lin (1996) has already shown that human performance on sentence processing tasks can be predicted using such a measure. Our third syntactic complexity measure is based on the probability of part-of-speech (POS) 2-grams. Given a POS 2-gram model, which we learned from the automatically POS-tagged MUC7 corpus, the complexity of an annotation phrase is defined by Pn i=2 P(POSi|POSi−1) where POSi refers to the POS-tag of the i-th word of the annotation phrase. A similar measure has been used by Roark et al. (2007) who claim that complex syntactic structures correlate with infrequent or surprising combinations of POS tags. As far as the quantification of semantic complexity is concerned, we use (a) the inverse document frequency df (wi) of each word wi (cf. Section 2.1), and a measure based on the semantic ambiguity of each word, i.e., the number of meanings contained in WORDNET,9 within an annotation phrase. We consider the maximum ambiguity of the words within the annotation phrase as the overall ambiguity of the respective annotation phrase. This measure is based on the assumption that annotation phrases with higher semantic ambiguity are harder to annotate than low-ambiguity ones. Finally, we add the Flesch-Kincaid Readability Score (Klare, 1963), a well-known metric for estimating the comprehensibility and reading complexity of texts. As already indicated, some of the hardness of annotations is due to tracking co-references and abbreviations. Both often cannot be resolved locally so that annotators need to consult the context of an annotation chunk (cf. Section 3.3). Thus, we also added features providing information whether the annotation phrases contain entitycritical words which may denote the referent of an antecedent of an anaphoric relation. In the same vein, we checked whether an annotation phrase contains expressions which can function as an abbreviation by virtue of their orthographical appearance, e.g., consist of at least two upper-case letters. Since our participants were sometimes scanning for entity-critical words, we also added features providing information on the number of entitycritical words within the annotation phrase. Table 4 enumerates all feature classes and single features used for determining our cost model. 
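A few of the features just described can be sketched as follows; this is an illustration under assumptions, not the authors' implementation. The POS bigram model and POS tags are taken as given, WordNet is accessed through NLTK (the wordnet data must be downloaded beforehand), and the orthographic tests mirror the informal definitions in the text: an entity-critical word starts with an upper-case letter, and an abbreviation candidate contains at least two upper-case letters.

```python
# Sketch of selected cost-model features: POS 2-gram score, maximum
# WordNet ambiguity, and the simple orthographic tests. Other features
# (dependency distance, readability, etc.) are omitted here.
import re
from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

def pos_bigram_score(pos_tags, bigram_probs):
    """Sum of P(POS_i | POS_i-1) over the annotation phrase, as in the text."""
    return sum(bigram_probs.get((prev, cur), 0.0)
               for prev, cur in zip(pos_tags, pos_tags[1:]))

def max_ambiguity(phrase_tokens):
    """Maximum number of WordNet senses of any word in the phrase."""
    return max((len(wn.synsets(w)) for w in phrase_tokens), default=0)

def is_entity_critical(word):
    return word[:1].isupper()

def looks_like_abbreviation(word):
    return len(re.findall(r"[A-Z]", word)) >= 2

phrase = ["spokeswoman", "Arlene"]
print(max_ambiguity(phrase), [is_entity_critical(w) for w in phrase])
```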
9http://wordnet.princeton.edu/ 1164 Feature Group # Features Feature Description characters (basic) 6 number of characters and words per annotation phrase; test whether words in a phrase start with capital letters, consist of capital letters only, have alphanumeric characters, or are punctuation symbols words 2 number of entity-critical words and percentage of entity-critical words in the annotation phrase complexity 6 syntactic complexity: number of dominated nodes, POS n-gram probability, maximum dependency distance; semantic complexity: inverse document frequency, max. ambiguity; general linguistic complexity: Flesch-Kincaid Readability Score semantics 3 test whether entity-critical word in annotation phrase is used in document (preceding or following current phrase); test whether phrase contains an abbreviation Table 4: Features for cost modeling. 4.2 Evaluation To test how well annotation costs can be modeled by the features described above, we used the MUC7T corpus, a re-annotation of the MUC7 corpus (Tomanek and Hahn, 2010). MUC7T has time tags attached to the sentences and CNPs. These time tags indicate the time it took to annotate the respective phrase for named entity mentions of the types person, location, and organization. We here made use of the time tags of the 15,203 CNPs in MUC7T . MUC7T has been annotated by two annotators (henceforth called A and B) and so we evaluated the cost models for both annotators. We learned a simple linear regression model with the annotation time as dependent variable and the features described above as independent variables. The baseline model only includes the basic feature set, whereas the ‘cognitive’ model incorporates all features described above. Table 5 depicts the performance of both models induced from the data of annotator A and B. The coefficient of determination (R2) describes the proportion of the variance of the dependent variable that can be described by the given model. We report adjusted R2 to account for the different numbers of features used in both models. model R2 on A’s data R2 on B’s data baseline 0.4695 0.4640 cognitive 0.6263 0.6185 Table 5: Adjusted R2 values on both models and for annotators A and B. For both annotators, the baseline model is significantly outperformed in terms of R2 by our ‘cognitive’ model (p < 0.05). Considering the features that were inspired from the eye-tracking study, R2 is increased from 0.4695 to 0.6263 on the timing data of annotator A, and from 0.464 to 0.6185 on the data of annotator B. These numbers clearly demonstrate that annotation costs are more adequately modelled by the additional features we identified through our eye-tracking study. Our ‘cognitive’ model now consists of 21 coefficients. We tested for the significance of this model’s regression terms. For annotator A we found all coefficients to be significant with respect to the model (p < 0.05), for annotator B all coefficients except one were significant. Figure 6 shows the coefficients of annotator A’s ‘cognitive’ model along with the standard errors and t-values. 5 Summary and Conclusions In this paper, we explored the use of eye-tracking technology to investigate the behavior of human annotators during the assignment of three types of named entities – persons, organizations and locations – based on the eye-mind assumption. 
We tested two main hypotheses – one relating to the amount of contextual information being used for annotation decisions, the other relating to different degrees of syntactic and semantic complexity of expressions that had to be annotated. We found experimental evidence that the textual context is searched for decision making on assigning semantic meta-data at a surprisingly low rate (with 1165 Feature Group Feature Name/Coefficient Estimate Std. Error t value Pr(>|t|) (Intercept) 855.0817 33.3614 25.63 0.0000 characters (basic) token number -304.3241 29.6378 -10.27 0.0000 char number 7.1365 2.2622 3.15 0.0016 has token initcaps 244.4335 36.1489 6.76 0.0000 has token allcaps -342.0463 62.3226 -5.49 0.0000 has token alphanumeric -197.7383 39.0354 -5.07 0.0000 has token punctuation -303.7960 50.3570 -6.03 0.0000 words number tokens entity like 934.3953 13.3058 70.22 0.0000 percentage tokens entity like -729.3439 43.7252 -16.68 0.0000 complexity sem compl inverse document freq 392.8855 35.7576 10.99 0.0000 sem compl maximum ambiguity -13.1344 1.8352 -7.16 0.0000 synt compl number dominated nodes 87.8573 7.9094 11.11 0.0000 synt compl pos ngram probability 287.8137 28.2793 10.18 0.0000 syn complexity max dependency distance 28.7994 9.2174 3.12 0.0018 flesch kincaid readability -0.4117 0.1577 -2.61 0.0090 semantics has entity critical token used above 73.5095 24.1225 3.05 0.0023 has entity critical token used below -178.0314 24.3139 -7.32 0.0000 has abbreviation 763.8605 73.5328 10.39 0.0000 Table 6: ‘Cognitive’ model of annotator A. the exception of tackling high-complexity semantic cases and resolving co-references) and that annotation performance correlates with semantic and syntactic complexity. The results of these experiments were taken as a heuristic clue to focus on cognitively plausible features of learning empirically rooted cost models for annotation. We compared a simple cost model (basically taking the number of words and characters into account) with a cognitively grounded model and got a much higher fit for the cognitive model when we compared cost predictions of both model classes on the recently released time-stamped version of the MUC7 corpus. We here want to stress the role of cognitive evidence from eye-tracking to determine empirically relevant features for the cost model. The alternative, more or less mechanical feature engineering, suffers from the shortcoming that is has to deal with large amounts of (mostly irrelevant) features – a procedure which not only requires increased amounts of training data but also is often computationally very expensive. Instead, our approach introduces empirical, theory-driven relevance criteria into the feature selection process. Trying to relate observables of complex cognitive tasks (such as gaze duration and gaze movements for named entity annotation) to explanatory models (in our case, a timebased cost model for annotation) follows a much warranted avenue in research in NLP where feature farming becomes a theory-driven, explanatory process rather than a much deplored theory-blind engineering activity (cf. ACL-WS-2005 (2005)). In this spirit, our focus has not been on finetuning this cognitive cost model to achieve even higher fits with the time data. Instead, we aimed at testing whether the findings from our eye-tracking study can be exploited to model annotation costs more accurately. Still, future work will be required to optimize a cost model for eventual application where even more accurate cost models may be required. 
This optimization may include both exploration of additional features (such as domain-specific ones) as well as experimentation with other, presumably non-linear, regression models. Moreover, the impact of improved cost models on the efficiency of (cost-sensitive) selective sampling approaches, such as Active Learning (Tomanek and Hahn, 2009), should be studied. 1166 References ACL-WS-2005. 2005. Proceedings of the ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing. accessible via http://www.aclweb.org/anthology/ W/W05/W05-0400.pdf. Gerry Altmann, Alan Garnham, and Yvette Dennis. 2007. Avoiding the garden path: Eye movements in context. Journal of Memory and Language, 31(2):685–712. Shilpa Arora, Eric Nyberg, and Carolyn Ros´e. 2009. Estimating annotation cost for active learning in a multi-annotator environment. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, pages 18–26. Hintat Cheung and Susan Kemper. 1992. Competing complexity metrics and adults’ production of complex sentences. Applied Psycholinguistics, 13:53– 76. David Cohn, Zoubin Ghahramani, and Michael Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145. Lyn Frazier and Keith Rayner. 1987. Resolution of syntactic category ambiguities: Eye movements in parsing lexically ambiguous sentences. Journal of Memory and Language, 26:505–526. Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In CoNLL 2005 – Proceedings of the 9th Conference on Computational Natural Language Learning, pages 144–151. George Klare. 1963. The Measurement of Readability. Ames: Iowa State University Press. Dekang Lin. 1996. On the structural complexity of natural language sentences. In COLING 1996 – Proceedings of the 16th International Conference on Computational Linguistics, pages 729–733. Linguistic Data Consortium. 2001. Message Understanding Conference (MUC) 7. Philadelphia: Linguistic Data Consortium. Keith Rayner, Anne Cook, Barbara Juhasz, and Lyn Frazier. 2006. Immediate disambiguation of lexically ambiguous words during reading: Evidence from eye movements. British Journal of Psychology, 97:467–482. Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 126:372–422. Eric Ringger, Marc Carmen, Robbie Haertel, Kevin Seppi, Deryle Lonsdale, Peter McClanahan, James Carroll, and Noel Ellison. 2008. Assessing the costs of machine-assisted corpus annotation through a user study. In LREC 2008 – Proceedings of the 6th International Conference on Language Resources and Evaluation, pages 3318–3324. Brian Roark, Margaret Mitchell, and Kristy Hollingshead. 2007. Syntactic complexity measures for detecting mild cognitive impairment. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 1–8. Burr Settles, Mark Craven, and Lewis Friedland. 2008. Active learning with real annotation costs. In Proceedings of the NIPS 2008 Workshop on CostSensitive Machine Learning, pages 1–10. Patrick Sturt. 2007. Semantic re-interpretation and garden path recovery. Cognition, 105:477–488. Benedikt M. Szmrecs´anyi. 2004. On operationalizing syntactic complexity. In Proceedings of the 7th International Conference on Textual Data Statistical Analysis. Vol. II, pages 1032–1039. Katrin Tomanek and Udo Hahn. 2009. 
Semisupervised active learning for sequence labeling. In ACL 2009 – Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1039–1047. Katrin Tomanek and Udo Hahn. 2010. Annotation time stamps: Temporal metadata from the linguistic annotation process. In LREC 2010 – Proceedings of the 7th International Conference on Language Resources and Evaluation. Matthew Traxler and Lyn Frazier. 2008. The role of pragmatic principles in resolving attachment ambiguities: Evidence from eye movements. Memory & Cognition, 36:314–328. 1167
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1168–1178, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Rational Model of Eye Movement Control in Reading Klinton Bicknell and Roger Levy Department of Linguistics University of California, San Diego 9500 Gilman Dr, La Jolla, CA 92093-0108 {kbicknell,rlevy}@ling.ucsd.edu Abstract A number of results in the study of realtime sentence comprehension have been explained by computational models as resulting from the rational use of probabilistic linguistic information. Many times, these hypotheses have been tested in reading by linking predictions about relative word difficulty to word-aggregated eye tracking measures such as go-past time. In this paper, we extend these results by asking to what extent reading is well-modeled as rational behavior at a finer level of analysis, predicting not aggregate measures, but the duration and location of each fixation. We present a new rational model of eye movement control in reading, the central assumption of which is that eye movement decisions are made to obtain noisy visual information as the reader performs Bayesian inference on the identities of the words in the sentence. As a case study, we present two simulations demonstrating that the model gives a rational explanation for between-word regressions. 1 Introduction The language processing tasks of reading, listening, and even speaking are remarkably difficult. Good performance at each one requires integrating a range of types of probabilistic information and making incremental predictions on the basis of noisy, incomplete input. Despite these requirements, empirical work has shown that humans perform very well (e.g., Tanenhaus, SpiveyKnowlton, Eberhard, & Sedivy, 1995). Sophisticated models have been developed that explain many of these effects using the tools of computational linguistics and large-scale corpora to make normative predictions for optimal performance in these tasks (Genzel & Charniak, 2002, 2003; Keller, 2004; Levy & Jaeger, 2007; Jaeger, 2010). To the extent that the behavior of these models looks like human behavior, it suggests that humans are making rational use of all the information available to them in language processing. In the domain of incremental language comprehension, especially, there is a substantial amount of computational work suggesting that humans behave rationally (e.g., Jurafsky, 1996; Narayanan & Jurafsky, 2001; Levy, 2008; Levy, Reali, & Griffiths, 2009). Most of this work has taken as its task predicting the difficulty of each word in a sentence, a major result being that a large component of the difficulty of a word appears to be a function of its probability in context (Hale, 2001; Smith & Levy, 2008). Much of the empirical basis for this work comes from studying reading, where word difficulty can be related to the amount of time that a reader spends on a particular word. To relate these predictions about word difficulty to the data obtained in eye tracking experiments, the eye movement record has been summarized through word aggregate measures, such as the average duration of the first fixation on a word, or the amount of time between when a word is first fixated and when the eyes move to its right (‘go-past time’). 
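As a concrete illustration of the word-aggregate measures just mentioned, the sketch below computes first-fixation duration and go-past time from a sequence of fixations, following the informal definition in the text (the time from first fixating a word until the eyes move to its right). The fixation representation is an assumption made here, and operational definitions of these measures vary somewhat across studies.

```python
# Sketch: word-aggregate reading measures from a fixation sequence.
# Fixations are (word_index, duration_ms) pairs in temporal order.
def first_fixation_duration(fixations, target):
    for word, dur in fixations:
        if word == target:
            return dur
    return None

def go_past_time(fixations, target):
    """Time from first fixating the target until the eyes move past it."""
    started = False
    total = 0
    for word, dur in fixations:
        if not started:
            if word == target:
                started = True
                total += dur
        else:
            if word > target:    # eyes have moved to the right of the target
                return total
            total += dur         # includes regressive fixations to earlier words
    return total if started else None

fixes = [(0, 200), (1, 250), (0, 180), (1, 300), (2, 220)]
print(go_past_time(fixes, 1))    # 250 + 180 + 300 = 730
```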
It is important to note that this notion of word difficulty is an abstraction over the actual task of reading, which is made up of more fine-grained decisions about how long to leave the eyes in their current position, and where to move them next, producing the series of relatively stable periods (fixations) and movements (saccades) that characterize the eye tracking record. While there has been much empirical work on reading at this fine-grained scale (see Rayner, 1998 for an overview), and there are a number of successful models (Reichle, Pollatsek, & Rayner, 2006; Engbert, Nuthmann, Richter, & Kliegl, 2005), little is known about the extent to which human reading behavior appears to be rational at this finer 1168 grained scale. In this paper, we present a new rational model of eye movement control in reading, the central assumption of which is that eye movement decisions are made to obtain noisy visual information, which the reader uses in Bayesian inference about the form and structure of the sentence. As a case study, we show that this model gives a rational explanation for between-word regressions. In Section 2, we briefly describe the leading models of eye movements in reading, and in Section 3, we describe how these models account for between-word regressions and the intuition behind our model’s account of them. Section 4 describes the model and its implementation and Sections 5– 6 describe two simulations we performed with the model comparing behavioral policies that make regressions to those that do not. In Simulation 1, we show that specific regressive policies outperform specific non-regressive policies, and in Simulation 2, we use optimization to directly find optimal policies for three performance measures. The results show that the regressive policies outperform non-regressive policies across a wide range of performance measures, demonstrating that our model predicts that making between-word regressions is a rational strategy for reading. 2 Models of eye movements in reading The two most successful models of eye movements in reading are E-Z Reader (Reichle, Pollatsek, Fisher, & Rayner, 1998; Reichle et al., 2006) and SWIFT (Engbert, Longtin, & Kliegl, 2002; Engbert et al., 2005). Both of these models characterize the problem of reading as one of word identification. In E-Z Reader, for example, the system identifies each word in the sentence serially, moving attention to the next word in the sentence only after processing the current word is complete, and (to slightly oversimplify), the eyes then follow the attentional shifts at some lag. SWIFT works similarly, but with the main difference being that processing and attention are distributed over multiple words, such that adjacent words can be identified in parallel. While both of these models provide a good fit to eye tracking data from reading, neither model asks the higher level question of what a rational solution to the problem would look like. The first model to ask this question, Mr. Chips (Legge, Klitz, & Tjan, 1997; Legge, Hooven, Klitz, Mansfield, & Tjan, 2002), predicts the optimal sequence of saccade targets to read a text based on a principle of minimizing the expected entropy in the distribution over identities of the current word. Unfortunately, however, the Mr. Chips model simplifies the problem of reading in a number of ways: First, it uses a unigram model as its language model, and thus fails to use any information in the linguistic context to help with word identification. 
Second, it only moves on to the next word after unambiguous identification of the current word, whereas there is experimental evidence that comprehenders maintain some uncertainty about the word identities. In other work, we have extended the Mr. Chips model to remove these two limitations, and show that the resulting model more closely matches human performance (Bicknell & Levy, 2010). The larger problem, however, is that each of these models uses an unrealistic model of visual input, which obtains absolute knowledge of the characters in its visual window. Thus, there is no reason for the model to spend longer on one fixation than another, and the model only makes predictions for where saccades are targeted, and not how long fixations last. Reichle and Laurent (2006) presented a rational model that overcame the limitations of Mr. Chips to produce predictions for both fixation durations and locations, focusing on the ways in which eye movement behavior is an adaptive response to the particular constraints of the task of reading. Given this focus, Reichle and Laurent used a very simple word identification function, for which the time required to identify a word was a function only of its length and the relative position of the eyes. In this paper, we present another rational model of eye movement control in reading that, like Reichle and Laurent, makes predictions for fixation durations and locations, but which focuses instead on the dynamics of word identification at the core of the task of reading. Specifically, our model identifies the words in a sentence by performing Bayesian inference combining noisy input from a realistic visual model with a language model that takes context into account. 3 Explaining between-word regressions In this paper, we use our model to provide a novel explanation for between-word regressive saccades. In reading, about 10–15% of saccades are regressive – movements from right-to-left (or to previous lines). To understand how models such as E-Z Reader or SWIFT account for re1169 gressive saccades to previous words, recall that the system identifies words in the sentence (generally) left to right, and that identification of a word in these models takes a certain amount of time and then is completed. In such a setup, why should the eyes ever move backwards? Three major answers have been put forward. One possibility given by E-Z Reader is as a response to overshoot; i.e., the eyes move backwards to a previous word because they accidentally landed further forward than intended due to motor error. Such an explanation could only account for small between-word regressions, of about the magnitude of motor error. The most recent version, E-Z Reader 10 (Reichle, Warren, & McConnell, 2009), has a new component that can produce longer between-word regressions. Specifically, the model includes a flag for postlexical integration failure, that – when triggered – will instruct the model to produce a between-word regression to the site of the failure. That is, between-word regressions in E-Z Reader 10 can arise because of postlexical processes external to the model’s main task of word identification. A final explanation for between-word regressions, which arises as a result of normal processes of word identification, comes from the SWIFT model. In the SWIFT model, the reader can fail to identify a word but move past it and continue reading. In these cases, there is a chance that the eyes will at some point move back to this unidentified word to identify it. 
From the present perspective, however, it is unclear how it could be rational to move past an unidentified word and decide to revisit it only much later. Here, we suggest a new explanation for between-word regressions that arises as a result of word identification processes (unlike that of E-Z Reader) and can be understood as rational (unlike that of SWIFT). Whereas in SWIFT and E-Z Reader, word recognition is a process that takes some amount of time and is then ‘completed’, some experimental evidence suggests that word recognition may be best thought of as a process that is never ‘completed’, as comprehenders appear to both maintain uncertainty about the identity of previous input and to update that uncertainty as more information is gained about the rest of the sentence (Connine, Blasko, & Hall, 1991; Levy, Bicknell, Slattery, & Rayner, 2009). Thus, it is possible that later parts of a sentence can cause a reader’s confidence in the identity of the previous regions to fall. In these cases, a rational way to respond might be to make a between-word regressive saccade to get more visual information about the (now) low confidence previous region. To illustrate this idea, consider the case of a language composed of just two strings, AB and BA, and assume that the eyes can only get noisy information about the identity of one character at a time. After obtaining a little information about the identity of the first character, the reader may be reasonably confident that its identity is A and move on to obtaining visual input about the second character. If the first noisy input about the second character also indicates that it is probably A, then the normative probability that the first character is A (and thus a rational reader’s confidence in its identity) will fall. This simple example just illustrates the point that if a reader is combining noisy visual information with a language model, then confidence in previous regions will sometimes fall. There are two ways that a rational agent might deal with this problem. The first option would be to reach a higher level of confidence in the identity of each word before moving on to the right, i.e., slowing down reading left-to-right to prevent having to make right-to-left regressions. The second option is to read left-to-right relatively more quickly, and then make occasional right-to-left regressions in the cases where probability in previous regions falls. In this paper, we present two simulations suggesting that when using a rational model to read natural language, the best strategies for coping with the problem of confidence about previous regions dropping – for any tradeoff between speed and accuracy – involve making between-word regressions. In the next section, we present the details of our model of reading and its implementation, and then we present our two simulations in the sections following. 4 Reading as Bayesian inference At its core, the framework we are proposing is one of reading as Bayesian inference. Specifically, the model begins reading with a prior distribution over possible identities of a sentence given by its language model. On the basis of that distribution, the model decides whether or not to move its eyes (and if so where to move them to) and obtains noisy visual input about the sentence at the eyes’ position. 
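The two-string example from Section 3, and the prior-update cycle just outlined, can be made concrete with a small numerical sketch. The 0.7 sampling accuracy is an arbitrary value chosen for illustration, not a parameter of the model; the point is only that a noisy sample favoring 'A' at the second position lowers the posterior probability that the first character is 'A'.

```python
# Toy Bayesian update over the two-sentence language {AB, BA}.
SENTENCES = {"AB": 0.5, "BA": 0.5}   # equiprobable prior from the language model
P_CORRECT = 0.7                      # arbitrary per-sample accuracy for illustration

def update(beliefs, position, observed):
    """One Bayesian update from a noisy sample at a character position."""
    new = {}
    for sent, p in beliefs.items():
        like = P_CORRECT if sent[position] == observed else 1 - P_CORRECT
        new[sent] = p * like
    z = sum(new.values())
    return {s: p / z for s, p in new.items()}

def p_first_is_A(beliefs):
    return sum(p for s, p in beliefs.items() if s[0] == "A")

b = dict(SENTENCES)
b = update(b, 0, "A")        # sample at position 0 suggests 'A'
print(p_first_is_A(b))       # 0.7  -> fairly confident the first character is A
b = update(b, 1, "A")        # sample at position 1 also suggests 'A'
print(p_first_is_A(b))       # 0.5  -> confidence in the first character falls
```

Running this gives a confidence of 0.7 in the first character after the first sample, which drops back to 0.5 after the second sample, mirroring the argument above.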
That noisy visual input then gives the likelihood term in a Bayesian belief update, where the 1170 model’s prior distribution over the identity of the sentence given the language model is updated to a posterior distribution taking into account both the language model and the visual input obtained thus far. On the basis of that new distribution, the model again selects an action and the cycle repeats. This framework is unique among models of eye movement control in reading (except Mr. Chips) in having a fully explicit model of how visual input is used to discriminate word identity. This approach stands in sharp contrast to other models, which treat the time course of word identification as an exogenous function of other influencing factors (such as word length, frequency, and predictability). The hope in our approach is that the influence of these key factors on the eye movement record will fall out as a natural consequence of rational behavior itself. For example, it is well known that the higher the conditional probability of a word given preceding material, the more rapidly that word is read (Boston, Hale, Kliegl, Patil, & Vasishth, 2008; Demberg & Keller, 2008; Ehrlich & Rayner, 1981; Smith & Levy, 2008). E-Z Reader and SWIFT incorporate this finding by specifying a dependency on word predictability in the exogenous function determining word processing time. In our framework, in contrast, we would expect such an effect to emerge as a byproduct of Bayesian inference: words with high prior probability (conditional on preceding fixations) will require less visual input to be reliably identified. An implemented model in this framework must formalize a number of pieces of the reading problem, including the possible actions available to the reader and their consequences, the nature of visual input, a means of combining visual input with prior expectations about sentence form and structure, and a control policy determining how the model will choose actions on the basis of its posterior distribution over the identities of the sentence. In the remainder of this section, we present these details of the formalization of the reading problem we used for the simulations reported in this paper: actions (4.1), visual input (4.2), formalization of the Bayesian inference problem (4.3), control policy (4.4), and finally, implementation of the model using weighted finite state automata (4.5). 4.1 Formal problem of reading: Actions For our model, we assume a series of discrete timesteps, and on each time step, the model first obtains visual input around the current location of the eyes, and then chooses between three actions: (a) continuing to fixate the currently fixated position, (b) initiating a saccade to a new position, or (c) stopping reading of the sentence. If on the ith timestep, the model chooses option (a), the timestep advances to i + 1 and another sample of visual input is obtained around the current position. If the model chooses option (c), the reading immediately ends. If a saccade is initiated (b), there is a lag of two timesteps, roughly representing the time required to plan and execute a saccade, during which the model again obtains visual input around the current position and then the eyes move – with some motor error – toward the intended target ti, landing on position ℓi. On the next time step, visual input is obtained around ℓi and another decision is made. 
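The control flow just described (continue fixating, initiate a saccade with a two-timestep lag, or stop) can be summarized schematically. The model, policy, and belief-update components below are placeholders introduced for this sketch, not part of the paper's specification; only the action structure and the saccade lag are fixed, with motor error deferred to Eq. (1) below.

```python
# Schematic sketch of the per-timestep loop of Section 4.1.
SACCADE_LAG = 2   # timesteps of saccade planning/execution

def read(sentence, policy, model, max_steps=1000):
    pos = 0                                   # current fixation position
    beliefs = model.prior()                   # prior from the language model
    for _ in range(max_steps):
        beliefs = model.update(beliefs, model.sample_input(sentence, pos))
        action, target = policy.choose(beliefs, pos)
        if action == "stop":                  # action (c)
            break
        if action == "saccade":               # action (b): two-step lag, then move
            for _ in range(SACCADE_LAG):
                beliefs = model.update(beliefs, model.sample_input(sentence, pos))
            pos = model.execute_saccade(pos, target)   # lands with motor error
        # action (a) "fixate": simply loop and sample again at pos
    return beliefs
```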
4.2 Noisy visual input

As stated earlier, the role of noisy visual input in our model is as the likelihood term in a Bayesian inference about sentence form and identity. Therefore, if we denote the input obtained thus far from a sentence as I, all the information pertinent to the reader's inferences can be encapsulated in the form p(I|w) for possible sentences w. We assume that the inputs deriving from each character position are conditionally independent given sentence identity, so that if w_j denotes letter j of the sentence and I(j) denotes the component of visual input associated with that letter, then we can decompose p(I|w) as ∏_j p(I(j)|w_j). For simplicity, we assume that each character is either a lowercase letter or a space. The visual input obtained from an individual fixation can thus be summarized as a vector of likelihoods p(I(j)|w_j), as shown in Figure 1.

[Figure 1: Peripheral and foveal visual input in the model. The asymmetric Gaussian curve indicates declining perceptual acuity centered around the fixation point (marked by ∗). The vector underneath each letter position denotes the likelihood p(I(j)|w_j) for each possible letter w_j, taken from a single input sample with Λ = 1/√3 (see the vector at the left edge of the figure for the key, and Section 4.2). In peripheral vision, the letter/whitespace distinction is veridical, but no information about letter identity is obtained. Note that in this particular sample, input from the fixated character and the following one is rather inaccurate.]
As in the real visual system, our visual acuity function decreases with retinal eccentricity; we follow the SWIFT model in assuming that the spatial distribution of visual processing rate follows an asymmetric Gaussian with σ_L = 2.41, σ_R = 3.74, which we discretize into processing rates for each character position. If ε denotes a character's eccentricity in characters from the center of fixation, then the proportion of the total processing rate at that eccentricity, λ(ε), is given by integrating the asymmetric Gaussian over a character width centered on that position,

λ(ε) = ∫_{ε−0.5}^{ε+0.5} (1/Z) exp(−x^2 / (2σ^2)) dx,  where σ = σ_L for x < 0 and σ = σ_R for x ≥ 0,

and the normalization constant Z is given by Z = √(π/2) (σ_L + σ_R). From this distribution, we derive two types of visual input, peripheral input giving word boundary information and foveal input giving information about letter identity.

4.2.1 Peripheral visual input

In our model, any eccentricity with a processing rate proportion λ(ε) at least 0.5% of the rate proportion for the centrally fixated character (ε ∈ [−7, 12]) yields peripheral visual input, defined as veridical word boundary information indicating whether each character is a letter or a space. This roughly corresponds to empirical estimates that humans obtain useful information in reading from about 19 characters, more from the right of fixation than the left (Rayner, 1998). Hence in Figure 1, for example, left-peripheral visual input can be represented as veridical knowledge of the initial whitespace (denoted by the whitespace symbol in the figure), and a uniform distribution over the 26 letters of English for the letter a.

4.2.2 Foveal visual input

In addition, for those eccentricities with a processing rate proportion λ(ε) that is at least 1% of the total processing rate (ε ∈ [−5, 8]), the model receives foveal visual input, defined only for letters, to give noisy information about the letter's identity. [Footnote 2: For white space, the model is already certain of the identity because of peripheral input.] This threshold of 1% roughly corresponds to estimates that readers get information useful for letter identification from about 4 characters to the left and 8 to the right of fixation (Rayner, 1998). In our model, each letter is equally confusable with all others, following Norris (2006, 2009), but ignoring work on letter confusability (which could be added to future model revisions; Engel, Dougherty, & Jones, 1973; Geyer, 1977). Visual information about each character is obtained by sampling. Specifically, we represent each letter as a 26-dimensional vector, where a single element is 1 and the other 25 are zeros, and given this representation, foveal input for a letter is given as a sample from a 26-dimensional Gaussian with a mean equal to the letter's true identity and a diagonal covariance matrix Σ(ε) = λ(ε)^{−1/2} I. It is relatively straightforward to show that under these conditions, if we take the processing rate to be the expected change in log-odds of the true letter identity relative to any other that a single sample brings about, then the rate equals λ(ε). We scale the overall processing rate by multiplying each rate by Λ. For the experiments in this paper, we set Λ = 4. For each fixation, we sample independently from the appropriate distribution for each character position and then compute the likelihood given each possible letter, as illustrated in the non-peripheral region of Figure 1.
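The rate proportion λ(ε) and the two eccentricity windows follow directly from this definition. The sketch below is ours, not the authors' code; it integrates the asymmetric Gaussian numerically and reports which eccentricities clear the peripheral and foveal thresholds quoted in the text.

```python
import math
from scipy.integrate import quad

SIGMA_L, SIGMA_R = 2.41, 3.74
Z = math.sqrt(math.pi / 2) * (SIGMA_L + SIGMA_R)   # normalization constant

def rate_proportion(eps):
    """lambda(eps): the asymmetric Gaussian integrated over one character width."""
    def density(x):
        sigma = SIGMA_L if x < 0 else SIGMA_R
        return math.exp(-x**2 / (2 * sigma**2)) / Z
    value, _ = quad(density, eps - 0.5, eps + 0.5)
    return value

central = rate_proportion(0)
peripheral = [e for e in range(-15, 20) if rate_proportion(e) >= 0.005 * central]
foveal = [e for e in range(-15, 20) if rate_proportion(e) >= 0.01]
print(min(peripheral), max(peripheral))   # the [-7, 12] peripheral window
print(min(foveal), max(foveal))           # the [-5, 8] foveal window
```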
4.3 Inference about sentence identity

Given the visual input and a language model, inferences about the identity of the sentence w can be made by standard Bayesian inference, where the prior is given by the language model and the likelihood is a function of the total visual input obtained from the first to the ith timestep, I_1^i:

p(w | I_1^i) = p(w) p(I_1^i | w) / Σ_{w′} p(w′) p(I_1^i | w′).    (2)

If we let I(j) denote the input received about character position j and let w_j denote the jth character in sentence identity w, then the likelihood can be broken down by character position as

p(I_1^i | w) = ∏_{j=1}^{n} p(I_1^i(j) | w_j)

where n is the final character about which there is any visual input. Similarly, we can decompose this into the product of the likelihoods of each sample

p(I_1^i | w) = ∏_{j=1}^{n} ∏_{t=1}^{i} p(I_t(j) | w_j).    (3)

If the eccentricity of the jth character on the tth timestep, ε_t^j, is outside of foveal input or the character is a space, the inner term is 0 or 1. If the sample was from a letter in foveal input (ε_t^j ∈ [−5, 8]), it is the probability of sampling I_t(j) from the multivariate Gaussian N(w_j, ΛΣ(ε_t^j)).

4.4 Control policy

The model uses a simple policy to decide between actions based on the marginal probability m of the most likely character c in position j,

m(j) = max_c p(w_j = c | I_1^i) = max_c Σ_{w′: w′_j = c} p(w′ | I_1^i).    (4)

Intuitively, a high value of m means that the model is relatively confident about the character's identity, and a low value that it is relatively uncertain. Given the values of this statistic, our model decides between four possible actions, as illustrated in Figure 2. If the value of this statistic for the current position of the eyes m(ℓ_i) is less than a parameter α, the model chooses to continue fixating the current position (2a). Otherwise, if the value of m(j) is less than β for some leftward position j < ℓ_i, the model initiates a saccade to the closest such position (2b). If m(j) ≥ β for all j < ℓ_i, then the model initiates a saccade to n characters past the closest position to the right j > ℓ_i for which m(j) < α (2c). [Footnote 3: The role of n is to ensure that the model does not center its visual field on the first uncertain character. We did not attempt to optimize this parameter, but fixed n at 2.] Finally, if no such positions exist to the right, the model stops reading the sentence (2d). Intuitively, then, the model reads by making a rightward sweep to bring its confidence in each character up to α, but pauses to move left if confidence in a previous character falls below β.

[Figure 2: Values of m for a 6-character sentence under which a model fixating position 3 would take each of its four actions, if α = .7 and β = .5: (a) m = [.6, .7, .6, .4, .3, .6]: keep fixating (3); (b) m = [.6, .4, .9, .4, .3, .6]: move back (to 2); (c) m = [.6, .7, .9, .4, .3, .6]: move forward (to 6); (d) m = [.6, .7, .9, .8, .7, .7]: stop reading.]
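The policy just described is easy to state as code. The following is our own sketch of it (not the authors' implementation), with the forward offset n fixed at 2 as in the text; m is the vector of per-character confidences from Equation (4), indexed from 0.

```python
def choose_action(m, pos, alpha, beta, n=2):
    """Return the policy's action given per-character confidences m (0-indexed)."""
    if m[pos] < alpha:
        return ("fixate", pos)                 # case (a): keep fixating
    behind = [j for j in range(pos) if m[j] < beta]
    if behind:
        return ("saccade", max(behind))        # case (b): closest uncertain position to the left
    ahead = [j for j in range(pos + 1, len(m)) if m[j] < alpha]
    if ahead:
        return ("saccade", min(ahead) + n)     # case (c): n past the closest uncertain position
    return ("stop", None)                      # case (d): nothing left to resolve

# The four situations of Figure 2 (alpha=.7, beta=.5, fixating position 3, 1-indexed):
examples = [
    [.6, .7, .6, .4, .3, .6],   # keep fixating
    [.6, .4, .9, .4, .3, .6],   # move back (to position 2)
    [.6, .7, .9, .4, .3, .6],   # move forward (to position 6)
    [.6, .7, .9, .8, .7, .7],   # stop reading
]
for m in examples:
    print(choose_action(m, pos=2, alpha=.7, beta=.5))   # pos=2 is position 3, 1-indexed
```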
4.5 Implementation with wFSAs

This model can be efficiently and simply implemented using weighted finite-state automata (wFSAs; Mohri, 1997) as follows: First, we begin with a wFSA representation of the language model, where each arc emits a single character (or is an epsilon-transition emitting nothing). To perform belief update given a new visual input, we create a new wFSA to represent the likelihood of each character from the sample. Specifically, this wFSA has only a single chain of states, where, e.g., the first and second state in the chain are connected by 27 (or fewer) arcs, which emit each of the possible characters for w_1 along with their respective likelihoods given the visual input (as in the inner term of Equation 3). Next, these two wFSAs may simply be composed and then normalized, which completes the belief update, resulting in a new wFSA giving the posterior distribution over sentences. To calculate the statistic m, while it is possible to calculate it in closed form from such a wFSA relatively straightforwardly, for efficiency we use Monte Carlo estimation based on samples from the wFSA.
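The Monte Carlo estimate of m is straightforward once sentence strings can be drawn from the posterior. The sketch below is ours; `sample_sentences` is a hypothetical stand-in for sampling paths from the normalized wFSA, and m(j) is estimated as the relative frequency of the most common character at position j among the samples.

```python
from collections import Counter

def estimate_m(sample_sentences, n_samples=5000):
    """Monte Carlo estimate of m(j): the posterior probability of the most
    likely character at each position, from samples of full sentence strings."""
    samples = sample_sentences(n_samples)    # hypothetical sampler over the wFSA
    length = len(samples[0])
    m = []
    for j in range(length):
        counts = Counter(s[j] for s in samples)
        m.append(counts.most_common(1)[0][1] / n_samples)
    return m

# e.g., with a trivial sampler that always returns the same string,
# every m(j) is estimated as 1.0:
print(estimate_m(lambda k: ["a cat sat"] * k, n_samples=10))
```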
5 Simulation 1

With the description of our model in place, we next proceed to describe the first simulation, in which we used the model to test the hypothesis that making regressions is a rational way to cope with confidence in previous regions falling. Because there is in general no single rational tradeoff between speed and accuracy, our hypothesis is that, for any given level of speed and accuracy achieved by a non-regressive policy, there is a faster and more accurate policy that makes a faster left-to-right pass but occasionally does make regressions. In the terms of our model's policy parameters α and β described above, non-regressive policies are exactly those with β = 0, and a policy that is faster on the left-to-right pass but does make regressions is one with a lower value of α but a non-zero β. Thus, we tested the performance of our model on the reading of a corpus of text typical of that used in reading experiments at a range of reasonable non-regressive policies, as well as a set of regressive policies with lower α and positive β. Our prediction is that the former set will be strictly dominated in terms of both speed and accuracy by the latter.

5.1 Methods

5.1.1 Policy parameters We test 4 non-regressive policies (i.e., those with β = 0) with values of α ∈ {.90, .95, .97, .99}, and in addition test regressive policies with a lower range of α ∈ {.85, .90, .95, .97} and β ∈ {.4, .7}. [Footnote 4: We tested all combinations of these values of α and β except for [α, β] = [.97, .4], because we did not believe that a value of β so low in relation to α would be very different from a non-regressive policy.]

5.1.2 Language model Our reader's language model was an unsmoothed bigram model created using a vocabulary set consisting of the 500 most frequent words in the British National Corpus (BNC) as well as all the words in our test corpus. From this vocabulary, we constructed a bigram model using the counts from every bigram in the BNC for which both words were in vocabulary (about 222,000 bigrams).

5.1.3 wFSA implementation We implemented our model with wFSAs using the OpenFST library (Allauzen, Riley, Schalkwyk, Skut, & Mohri, 2007). Specifically, we constructed the model's initial belief state (i.e., the distribution over sentences given by its language model) by directly translating the bigram model into a wFSA in the log semiring. We then composed this wFSA with a weighted finite-state transducer (wFST) breaking words down into characters. This was done in order to facilitate simple composition with the visual likelihood wFSA defined over characters. In the Monte Carlo estimation of m, we used 5000 samples from the wFSA. Finally, to speed performance, we bounded the wFSA to have exactly the number of characters present in the actual sentence and then renormalized.

5.1.4 Test corpus We tested our model's performance by simulating reading of the Schilling corpus (Schilling, Rayner, & Chumbley, 1998). To ensure that our results did not depend on smoothing, we only tested the model on sentences in which every bigram occurred in the BNC. Unfortunately, only 8 of the 48 sentences in the corpus met this criterion. Thus, we made single-word changes to 25 more of the sentences (mostly changing proper names and rare nouns) to produce a total of 33 sentences to read, for which every bigram did occur in the BNC.

5.2 Results and discussion

For each policy we tested, we measured the average number of timesteps it took to read the sentences, as well as the average (natural) log probability of the correct sentence identity under the model's beliefs after reading ended ('Accuracy'). The results are plotted in Figure 3. As shown in the graph, for each non-regressive policy (the circles), there is a regressive policy that outperforms it, both in terms of average number of timesteps taken to read (further to the left) and the average log probability of the sentence identity (higher). Thus, for a range of policies, these results suggest that making regressions when confidence about previous regions falls is a rational reader strategy, in that it appears to lead to better performance, both in terms of speed and accuracy.

[Figure 3: Mean number of timesteps taken to read a sentence and (natural) log probability of the true identity of the sentence ('Accuracy') for a range of values of α and β. Values of α are not labeled, but increase with the number of timesteps for a constant value of β. For each non-regressive policy (β = 0), there is a policy with a lower α and higher β that achieves better accuracy in less time.]

6 Simulation 2

In Simulation 2, we perform a more direct test of the idea that making regressions is a rational response to the problem of confidence falling about previous regions, using optimization techniques. Specifically, we search for optimal policy parameter values (α, β) for three different measures of performance, each representing a different tradeoff between the importance of accuracy and speed.

6.1 Methods

6.1.1 Performance measures We examine performance measures interpolating between speed and accuracy of the form

L(1 − γ) − Tγ    (5)

where L is the log probability of the true identity of the sentence under the model's beliefs at the end of reading, and T is the total number of timesteps before the model decided to stop reading. Thus, each different performance measure is determined by the weighting for time γ. We test three values of γ ∈ {.025, .1, .4}. The first of these weights accuracy highly, while the final one weights 1 timestep almost as much as 1 unit of log probability.

6.1.2 Optimization of policy parameters Searching directly for optimal values of α and β for our stochastic reading model is difficult because each evaluation of the model with a particular set of parameters produces a different result. We use the PEGASUS method (Ng & Jordan, 2000) to transform this stochastic optimization problem into a deterministic one on which we can use standard optimization algorithms. [Footnote 5: Specifically, this involves fixing the random number generator for each run to produce the same values, resulting in minimizing the variance in performance across evaluations.] Then, we evaluate the model's performance at each value of α and β by reading the full test corpus and averaging performance. We then simply use coordinate ascent (in logit space) to find the optimal values of α and β for each performance measure.
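The objective in Equation (5) and the policy comparison reduce to a small evaluation loop. This is our own sketch; `read_corpus` is a hypothetical helper that runs the reader on every test sentence for a given (α, β) with a fixed random seed (the PEGASUS idea) and returns (log-probability, timesteps) pairs.

```python
def performance(read_corpus, alpha, beta, gamma, seed=0):
    """Average L*(1 - gamma) - T*gamma over the test corpus for policy (alpha, beta)."""
    outcomes = read_corpus(alpha, beta, seed)        # hypothetical: [(L, T), ...]
    scores = [L * (1 - gamma) - T * gamma for L, T in outcomes]
    return sum(scores) / len(scores)

# Comparing a non-regressive policy with a regressive one under gamma = 0.1:
#   performance(read_corpus, alpha=0.95, beta=0.0, gamma=0.1)
#   performance(read_corpus, alpha=0.90, beta=0.4, gamma=0.1)
```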
6.1.3 Language model The language model used in this simulation begins with the same vocabulary set as in Sim. 1, i.e., the 500 most frequent words in the BNC and every word that occurs in our test corpus. Because the search algorithm demands that we evaluate the performance of our model at a number of parameter values, however, it is too slow to optimize α and β using the full language model that we used for Sim. 1. Instead, we begin with the same set of bigrams used in Sim. 1 – i.e., those that contain two in-vocabulary words – and trim this set by removing rare bigrams that occur less than 200 times in the BNC (except that we do not trim any bigrams that occur in our test corpus). This reduces our set of bigrams to about 19,000.

6.1.4 wFSA implementation The implementation was the same as in Sim. 1.

6.1.5 Test corpus The test corpus was the same as in Sim. 1.

6.2 Results and discussion

The optimal values of α and β for each γ ∈ {.025, .1, .4} are given in Table 1 along with the mean values for L and T found at those parameter values.

Table 1: Optimal values of α and β found for each performance measure γ tested and mean performance at those values, measured in timesteps T and (natural) log probability L.

  γ      α    β    Timesteps   Log probability
  .025   .90  .99  41.2        -0.02
  .1     .36  .80  25.8        -0.90
  .4     .18  .38  16.4        -4.59

As the table shows, the optimization procedure successfully found values of α and β, which go up (slower reading) as γ goes down (valuing accuracy more than time). In addition, we see that the average results of reading at these parameter values are also as we would expect, with T and L going up as γ goes down. As predicted, the optimal values of β found are non-zero across the range of policies, which include policies that value speed over accuracy much more than in Sim. 1. This provides more evidence that whatever the particular performance measure used, policies making regressive saccades when confidence in previous regions falls perform better than those that do not. There is one interesting difference between the results of this simulation and those of Sim. 1, which is that here, the optimal policies all have a value of β > α. That may at first seem surprising, since the model's policy is to fixate a region until its confidence becomes greater than α and then return if it falls below β. It would seem, then, that the only reasonable values of β are those that are strictly below α. In fact, this is not the case because of the two-timestep delay between the decision to move the eyes and the execution of that saccade. Because of this delay, the model's confidence when it leaves a region (relevant to β) will generally be higher than when it decided to leave (determined by α). In Simulation 2, because of the smaller grammar that was used, the model's confidence in a region's identity rises more quickly and this difference is exaggerated.

7 Conclusion

In this paper, we presented a model that performs Bayesian inference on the identity of a sentence, combining a language model with noisy information about letter identities from a realistic visual input model. On the basis of these inferences, it uses a simple policy to determine how long to continue fixating the current position and where to fixate next, on the basis of information about where the model is uncertain about the sentence's identity.
As such, it constitutes a rational model of eye movement control in reading, extending the insights from previous results about rationality in language comprehension. The results of two simulations using this model support a novel explanation for between-word regressive saccades in reading: that they are used to gather visual input about previous regions when confidence about them falls. Simulation 1 showed that a range of policies making regressions in these cases outperforms a range of non-regressive policies. In Simulation 2, we directly searched for optimal values for the policy parameters for three different performance measures, representing different speed-accuracy trade-offs, and found that the optimal policies in each case make substantial use of between-word regressions when confidence in previous regions falls. In addition to supporting a novel motivation for between-word regressions, these simulations demonstrate the possibility for testing a range of questions that were impossible with previous models of reading related to the goals of a reader, such as how should reading behavior change as accuracy is valued more. There are a number of obvious ways for the model to move forward. One natural next step is to make the model more realistic by using letter confusability matrices. In addition, the link to previous work in sentence processing can be made tighter by incorporating syntax-based language models. It also remains to compare this model’s predictions to human data more broadly on standard benchmark measures for models of reading. The most important future development, however, will be moving toward richer policy families, which enable more intelligent decisions about eye movement control, based not just on simple confidence statistics calculated independently for each character position, but rather which utilize the rich structure of the model’s posterior beliefs about the sentence identity (and of language itself) to make more informed decisions about the best time to move the eyes and the best location to direct them next. Acknowledgments The authors thank Jeff Elman, Tom Griffiths, Andy Kehler, Keith Rayner, and Angela Yu for useful discussion about this work. This work benefited from feedback from the audiences at the 2010 LSA and CUNY conferences. The research was partially supported by NIH Training Grant T32DC000041 from the Center for Research in Language at UC San Diego to K.B., by a research grant from the UC San Diego Academic Senate to R.L., and by NSF grant 0953870 to R.L. 1176 References Allauzen, C., Riley, M., Schalkwyk, J., Skut, W., & Mohri, M. (2007). OpenFst: A general and efficient weighted finite-state transducer library. In Proceedings of the Ninth International Conference on Implementation and Application of Automata, (CIAA 2007) (Vol. 4783, p. 11-23). Springer. Bicknell, K., & Levy, R. (2010). Rational eye movements in reading combining uncertainty about previous words with contextual probability. In Proceedings of the 32nd Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society. Boston, M. F., Hale, J. T., Kliegl, R., Patil, U., & Vasishth, S. (2008). Parsing costs as predictors of reading difficulty: An evaluation using the potsdam sentence corpus. Journal of Eye Movement Research, 2(1), 1–12. Connine, C. M., Blasko, D. G., & Hall, M. (1991). Effects of subsequent sentence context in auditory word recognition: Temporal and linguistic constraints. Journal of Memory and Language, 30, 234–250. 
Demberg, V., & Keller, F. (2008). Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109, 193–210. Ehrlich, S. F., & Rayner, K. (1981). Contextual effects on word perception and eye movements during reading. Journal of Verbal Learning and Verbal Behavior, 20, 641–655. Engbert, R., & Krügel, A. (2010). Readers use Bayesian estimation for eye movement control. Psychological Science, 21, 366–371. Engbert, R., Longtin, A., & Kliegl, R. (2002). A dynamical model of saccade generation in reading based on spatially distributed lexical processing. Vision Research, 42, 621–636. Engbert, R., Nuthmann, A., Richter, E. M., & Kliegl, R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112, 777–813. Engel, G. R., Dougherty, W. G., & Jones, B. G. (1973). Correlation and letter recognition. Canadian Journal of Psychology, 27, 317–326. Genzel, D., & Charniak, E. (2002, July). Entropy rate constancy in text. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics (pp. 199–206). Philadelphia: Association for Computational Linguistics. Genzel, D., & Charniak, E. (2003). Variation of entropy and parse trees of sentences as a function of the sentence number. In M. Collins & M. Steedman (Eds.), Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (pp. 65–72). Sapporo, Japan: Association for Computational Linguistics. Geyer, L. H. (1977). Recognition and confusion of the lowercase alphabet. Perception & Psychophysics, 22, 487–490. Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics (Vol. 2, pp. 159–166). New Brunswick, NJ: Association for Computational Linguistics. Jaeger, T. F. (2010). Redundancy and reduction: Speakers manage syntactic information density. Cognitive Psychology. doi:10.1016/j.cogpsych.2010.02.002. Jurafsky, D. (1996). A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20, 137–194. Keller, F. (2004). The entropy rate principle as a predictor of processing effort: An evaluation against eye-tracking data. In D. Lin & D. Wu (Eds.), Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (pp. 317–324). Barcelona, Spain: Association for Computational Linguistics. Legge, G. E., Hooven, T. A., Klitz, T. S., Mansfield, J. S., & Tjan, B. S. (2002). Mr. Chips 2002: new insights from an ideal-observer model of reading. Vision Research, 42, 2219– 2234. Legge, G. E., Klitz, T. S., & Tjan, B. S. (1997). Mr. Chips: an Ideal-Observer model of reading. Psychological Review, 104, 524–553. Levy, R. (2008). A noisy-channel model of rational human sentence comprehension under uncertain input. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (pp. 234–243). Honolulu, Hawaii: Association for Computational Linguistics. Levy, R., Bicknell, K., Slattery, T., & Rayner, K. (2009). Eye movement evidence that readers maintain and act on uncertainty about past linguistic input. Proceedings of the National Academy of Sciences, 106, 21086–21090. 1177 Levy, R., & Jaeger, T. F. (2007). Speakers optimize information density through syntactic reduction. In B. Schölkopf, J. Platt, & T. Hoffman (Eds.), Advances in Neural Information Processing Systems 19 (pp. 849–856). Cambridge, MA: MIT Press. 
Levy, R., Reali, F., & Griffiths, T. L. (2009). Modeling the effects of memory on human online sentence processing with particle filters. In D. Koller, D. Schuurmans, Y. Bengio, & L. Bottou (Eds.), Advances in Neural Information Processing Systems 21 (pp. 937–944). Mohri, M. (1997). Finite-state transducers in language and speech processing. Computational Linguistics, 23, 269–311. Narayanan, S., & Jurafsky, D. (2001). A Bayesian model predicts human parse preference and reading time in sentence processing. In T. Dietterich, S. Becker, & Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14 (pp. 59–65). Cambridge, MA: MIT Press. Ng, A. Y., & Jordan, M. (2000). PEGASUS: A policy search method for large MDPs and POMDPs. In Uncertainty in Artificial Intelligence, Proceedings of the Sixteenth Conference (pp. 406–415). Norris, D. (2006). The Bayesian reader: Explaining word recognition as an optimal Bayesian decision process. Psychological Review, 113, 327– 357. Norris, D. (2009). Putting it all together: A unified account of word recognition and reaction-time distributions. Psychological Review, 116, 207– 219. Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422. Reichle, E. D., & Laurent, P. A. (2006). Using reinforcement learning to understand the emergence of “intelligent” eye-movement behavior during reading. Psychological Review, 113, 390–408. Reichle, E. D., Pollatsek, A., Fisher, D. L., & Rayner, K. (1998). Toward a model of eye movement control in reading. Psychological Review, 105, 125–157. Reichle, E. D., Pollatsek, A., & Rayner, K. (2006). E-Z Reader: A cognitive-control, serialattention model of eye-movement behavior during reading. Cognitive Systems Research, 7, 4– 22. Reichle, E. D., Warren, T., & McConnell, K. (2009). Using E-Z Reader to model the effects of higher level language processing on eye movements during reading. Psychonomic Bulletin & Review, 16, 1–21. Schilling, H. E. H., Rayner, K., & Chumbley, J. I. (1998). Comparing naming, lexical decision, and eye fixation times: Word frequency effects and individual differences. Memory & Cognition, 26, 1270–1281. Smith, N. J., & Levy, R. (2008). Optimal processing times in reading: a formal model and empirical investigation. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 595–600). Austin, TX: Cognitive Science Society. Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632– 1634. 1178
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 108–117, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Automatic Evaluation Method for Machine Translation using Noun-Phrase Chunking Hiroshi Echizen-ya Hokkai-Gakuen University S 26-Jo, W 11-chome, Chuo-ku, Sapporo, 064-0926 Japan [email protected] Kenji Araki Hokkaido University N 14-Jo, W 9-Chome, Kita-ku, Sapporo, 060-0814 Japan [email protected] Abstract As described in this paper, we propose a new automatic evaluation method for machine translation using noun-phrase chunking. Our method correctly determines the matching words between two sentences using corresponding noun phrases. Moreover, our method determines the similarity between two sentences in terms of the noun-phrase order of appearance. Evaluation experiments were conducted to calculate the correlation among human judgments, along with the scores produced using automatic evaluation methods for MT outputs obtained from the 12 machine translation systems in NTCIR7. Experimental results show that our method obtained the highest correlations among the methods in both sentence-level adequacy and fluency. 1 Introduction High-quality automatic evaluation has become increasingly important as various machine translation systems have developed. The scores of some automatic evaluation methods can obtain high correlation with human judgment in document-level automatic evaluation(Coughlin, 2007). However, sentence-level automatic evaluation is insufficient. A great gap exists between language processing of automatic evaluation and the processing by humans. Therefore, in recent years, various automatic evaluation methods particularly addressing sentence-level automatic evaluations have been proposed. Methods based on word strings (e.g., BLEU(Papineni et al., 2002), NIST(NIST, 2002), METEOR(Banerjee and Lavie., 2005), ROUGE-L(Lin and Och, 2004), and IMPACT(Echizen-ya and Araki, 2007)) calculate matching scores using only common words between MT outputs and references from bilingual humans. However, these methods cannot determine the correct word correspondences sufficiently because they fail to focus solely on phrase correspondences. Moreover, various methods using syntactic analytical tools(Pozar and Charniak, 2006; Mutton et al., 2007; Mehay and Brew, 2007) are proposed to address the sentence structure. Nevertheless, those methods depend strongly on the quality of the syntactic analytical tools. As described herein, for use with MT systems, we propose a new automatic evaluation method using noun-phrase chunking to obtain higher sentence-level correlations. Using noun phrases produced by chunking, our method yields the correct word correspondences and determines the similarity between two sentences in terms of the noun phrase order of appearance. Evaluation experiments using MT outputs obtained by 12 machine translation systems in NTCIR-7(Fujii et al., 2008) demonstrate that the scores obtained using our system yield the highest correlation with the human judgments among the automatic evaluation methods in both sentence-level adequacy and fluency. Moreover, the differences between correlation coefficients obtained using our method and other methods are statistically significant at the 5% or lower significance level for adequacy. Results confirmed that our method using noun-phrase chunking is effective for automatic evaluation for machine translation. 
2 Automatic Evaluation Method using Noun-Phrase Chunking

The system based on our method has four processes. First, the system determines the correspondences of noun phrases between MT outputs and references using chunking. Secondly, the system calculates word-level scores based on the correct matched words using the determined correspondences of noun phrases. Next, the system calculates phrase-level scores based on the noun-phrase order of appearance. The system calculates the final scores combining word-level scores and phrase-level scores.

2.1 Correspondence of Noun Phrases by Chunking

The system obtains the noun phrases from each sentence by chunking. It then determines corresponding noun phrases between MT outputs and references by calculating the similarity of two noun phrases with the PER score (Su et al., 1992). In that case, PER scores of two kinds are calculated. One is the ratio of the number of matching words between an MT output and reference to the number of all words of the MT output. The other is the ratio of the number of matching words between the MT output and reference to the number of all words of the reference. The similarity is obtained as an F-measure between the two PER scores. A high score represents that the similarity between two noun phrases is high. Figure 1 presents an example of the determination of the corresponding noun phrases.

[Figure 1: Example of determination of the corresponding noun phrases. MT output: "in general , [NP the amount ] of [NP the crowning fall ] is large like [NP the end ] ." Reference: "generally , the closer [NP it ] is to [NP the end part ] , the larger [NP the amount ] of [NP crowning drop ] is ." The corresponding pairs are linked with similarity scores of 1.0000 ("the amount" and "the amount"), 0.3714 ("the crowning fall" and "crowning drop"), and 0.7429 ("the end" and "the end part").]

In Fig. 1, "the amount", "the crowning fall" and "the end" are obtained as noun phrases in the MT output by chunking, and "it", "the end part", "the amount" and "crowning drop" are obtained in the reference by chunking. Next, the system determines the corresponding noun phrases from these noun phrases between the MT output and reference. The score between "the end" and "the end part" is the highest among the scores between "the end" in the MT output and "it", "the end part", "the amount", and "crowning drop" in the reference. Moreover, the score between "the end part" and "the end" is the highest among the scores between "the end part" in the reference and "the amount", "the crowning fall", "the end" in the MT output. Consequently, "the end" and "the end part" are selected as noun phrases with the highest mutual scores: "the end" and "the end part" are determined as one corresponding noun phrase. In Fig. 1, "the amount" in the MT output and "the amount" in the reference, and "the crowning fall" in the MT output and "crowning drop" in the reference, also are determined as the respective corresponding noun phrases. The noun phrase for which the score between it and other noun phrases is 0.0 (e.g., "it" in the reference) has no corresponding noun phrase. The use of the noun phrases is effective because the frequency of the noun phrases is higher than those of other phrases. The verb phrases are not used for this study, but they can also be generated by chunking. It is difficult to determine the corresponding verb phrases correctly because the words in each verb phrase are often fewer than those in the noun phrases.
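A small sketch of this matching step (our own code, not the authors'). Following the description above, two overlap ratios are computed, one against the MT-output phrase and one against the reference phrase, and combined with the same weighted harmonic mean that reappears in Eq. (5) below (with γ = P/R); under that assumption, the three pairs in Figure 1 receive exactly the scores 1.0000, 0.3714 and 0.7429 quoted there. Phrases are then paired greedily in order of decreasing similarity, each phrase used at most once, a simplification of the mutual-best-score criterion described above.

```python
from collections import Counter

def np_similarity(mt_phrase, ref_phrase):
    """Similarity of two noun phrases from word-overlap ratios (Section 2.1)."""
    mt, ref = mt_phrase.split(), ref_phrase.split()
    matches = sum((Counter(mt) & Counter(ref)).values())
    if matches == 0:
        return 0.0
    p, r = matches / len(mt), matches / len(ref)
    gamma = p / r
    return (1 + gamma**2) * r * p / (r + gamma**2 * p)

def match_noun_phrases(mt_nps, ref_nps):
    """Greedy one-to-one pairing by decreasing similarity."""
    scored = [(np_similarity(m, r), m, r) for m in mt_nps for r in ref_nps]
    scored.sort(key=lambda x: -x[0])
    used_mt, used_ref, pairs = set(), set(), []
    for s, m, r in scored:
        if s > 0 and m not in used_mt and r not in used_ref:
            pairs.append((m, r))
            used_mt.add(m)
            used_ref.add(r)
    return pairs

mt_nps = ["the amount", "the crowning fall", "the end"]
ref_nps = ["it", "the end part", "the amount", "crowning drop"]
print(match_noun_phrases(mt_nps, ref_nps))
# [('the amount', 'the amount'), ('the end', 'the end part'),
#  ('the crowning fall', 'crowning drop')]
```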
2.2 Word-level Score

The system calculates the word-level scores between MT output and reference using the corresponding noun phrases. First, the system determines the common words based on the Longest Common Subsequence (LCS). The system selects only one LCS route when several LCS routes exist. In such cases, the system calculates the Route Score (RS) using the following Eqs. (1) and (2):

RS = Σ_{c∈LCS} ( Σ_{w∈c} weight(w) )^β    (1)

weight(w) = 2 for words in a corresponding noun phrase; 1 for words in a non-corresponding noun phrase.    (2)

In Eq. (1), β is a parameter for length weighting of common parts; it is greater than 1.0. Figure 2 portrays an example of determination of the common parts.

[Figure 2: Example of common-part determination between the MT output "in general , [NP1 the amount ] of [NP2 the crowning fall ] is large like [NP3 the end ] ." and the reference "generally , the closer [NP it ] is to [NP3 the end part ] , the larger [NP1 the amount ] of [NP2 crowning drop ] is ." The first process determines common parts with LCS = 7 (the routes selected by our method and by IMPACT differ), and the second process determines the remaining common parts with LCS = 3. The per-part weights are shown for each route: 1^{2.0}, (2+2+1)^{2.0}, 2^{2.0}, 1^{2.0}, 1^{2.0} for our method, and (1+1)^{2.0}, (2+1)^{2.0}, 2^{2.0}, 1^{2.0}, 1^{2.0} for IMPACT.]

In the first process of Fig. 2, LCS is 7. In this example, several LCS routes exist. The system selects the LCS route which has ",", "the amount of", "crowning", "is", and "." as the common parts. A common part is a part in which the common words appear continuously. In contrast, IMPACT selects a different LCS route that includes ", the", "amount of", "crowning", "is", and "." as the common parts. In IMPACT, using no analytical knowledge, the LCS route is determined using the information of the number of words in the common parts and the position of the common parts. The RS for the LCS route selected using our method is 32 (= 1^{2.0} + (2+2+1)^{2.0} + 2^{2.0} + 1^{2.0} + 1^{2.0}) when β is 2.0. The RS for the LCS route selected by IMPACT is 19 (= (1+1)^{2.0} + (2+1)^{2.0} + 2^{2.0} + 1^{2.0} + 1^{2.0}). In the LCS route selected by IMPACT, the weight of "the" in the common part ", the" is 1 because "the" in the reference is not included in the corresponding noun phrase. In the LCS route selected using our method, the weight of "the" in "the amount of" is 2 because "the" in the MT output and "the" in the reference are included in the corresponding noun phrase "NP1". Therefore, the system based on our method can select the correct LCS route. Moreover, the word-level score is calculated using the common parts in the selected LCS route as the following Eqs. (3), (4), and (5):

R_wd = ( Σ_{i=0}^{RN} α^i Σ_{c∈LCS} length(c)^β / m^β )^{1/β}    (3)

P_wd = ( Σ_{i=0}^{RN} α^i Σ_{c∈LCS} length(c)^β / n^β )^{1/β}    (4)

score_wd = (1 + γ^2) R_wd P_wd / (R_wd + γ^2 P_wd)    (5)

Equation (3) represents recall and Eq. (4) represents precision. Therein, m signifies the word number of the reference in Eq. (3), and n stands for the word number of the MT output in Eq. (4). Here, RN denotes the repetition number of the determination process of the LCS route, and i, which has initial value 0, is the counter for RN. In Eqs. (3) and (4), α is a parameter for the repetition process of the determination of the LCS route, and is less than 1.0. Therefore, R_wd and P_wd become small when the appearance order of the common parts between MT output and reference is different. Moreover, length(c) represents the number of words in each common part; β is a parameter related to the length weight of common parts, as in Eq. (1). In this case, the weight of each common word in the common part is 1. The system calculates score_wd as the word-level score in Eq. (5). In Eq. (5), γ is determined as P_wd/R_wd. The score_wd is between 0.0 and 1.0.

In the first process of Fig. 2, α^i Σ_{c∈LCS} length(c)^β is 13.0 (= 0.5^0 × (1^{2.0} + 3^{2.0} + 1^{2.0} + 1^{2.0} + 1^{2.0})) when α and β are 0.5 and 2.0, respectively. In this case, the counter i is 0. Moreover, in the second process of Fig. 2, α^i Σ_{c∈LCS} length(c)^β is 2.5 (= 0.5^1 × (1^{2.0} + 2^{2.0})), using the two common parts "the" and "the end" that remain beyond the common parts determined in the first process. In Fig. 2, RN is 1 because the system finishes calculating α^i Σ_{c∈LCS} length(c)^β when the counter i becomes 1: this means that all common parts were processed by the second process. As a result, R_wd is 0.1969 (= √((13.0 + 2.5)/20^{2.0}) = √0.0388), and P_wd is 0.2625 (= √((13.0 + 2.5)/15^{2.0}) = √0.0689). Consequently, score_wd is 0.2164 (= (1 + 1.3332^2) × 0.1969 × 0.2625 / (0.1969 + 1.3332^2 × 0.2625)). In this case, γ becomes 1.3332 (= 0.2625/0.1969). The system can determine the matching words correctly using the corresponding noun phrases between the MT output and the reference.
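The word-level score of Eqs. (3)-(5) can be sketched as follows (our code, not the authors'; the common parts found at each repetition of the LCS-route determination are taken as given, since the route selection itself would require the full LCS machinery). With the common-part lengths from the worked example just above, it reproduces R_wd ≈ 0.1969, P_wd ≈ 0.2625 and score_wd ≈ 0.216.

```python
def word_level_score(common_part_lengths, m, n, alpha=0.5, beta=2.0):
    """Eqs. (3)-(5). common_part_lengths[i] lists the lengths (in words) of the
    common parts found at repetition i of the LCS-route determination;
    m and n are the word counts of the reference and the MT output."""
    total = sum(alpha**i * sum(length**beta for length in parts)
                for i, parts in enumerate(common_part_lengths))
    r_wd = (total / m**beta) ** (1 / beta)   # recall, Eq. (3)
    p_wd = (total / n**beta) ** (1 / beta)   # precision, Eq. (4)
    gamma = p_wd / r_wd
    score = (1 + gamma**2) * r_wd * p_wd / (r_wd + gamma**2 * p_wd)
    return r_wd, p_wd, score

# Common parts of Figure 2: lengths [1, 3, 1, 1, 1] in the first process and
# [1, 2] in the second; the reference has 20 words and the MT output 15.
print(word_level_score([[1, 3, 1, 1, 1], [1, 2]], m=20, n=15))
# approximately (0.1969, 0.2625, 0.2163); the text's 0.2164 uses rounded intermediates
```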
The system calculates score_wd_multi using R_wd_multi and P_wd_multi, which are, respectively, the maximum R_wd and P_wd when multiple references are used, as in the following Eqs. (6), (7) and (8). In Eq. (8), γ is determined as P_wd_multi/R_wd_multi. The score_wd_multi is between 0.0 and 1.0.

R_wd_multi = max_{j=1}^{u} ( [ Σ_{i=0}^{RN} α^i Σ_{c∈LCS} length(c)^β ]_j / m_j^β )^{1/β}    (6)

P_wd_multi = max_{j=1}^{u} ( [ Σ_{i=0}^{RN} α^i Σ_{c∈LCS} length(c)^β ]_j / n_j^β )^{1/β}    (7)

score_wd_multi = (1 + γ^2) R_wd_multi P_wd_multi / (R_wd_multi + γ^2 P_wd_multi)    (8)

2.3 Phrase-level Score

The system calculates the phrase-level score using the noun phrases obtained by chunking. First, the system extracts only the noun phrases from the sentences. Then it generalizes each noun phrase as a single word. Figure 3 presents an example of generalization by noun phrases.

[Figure 3: Example of generalization by noun phrases. (1) Corresponding noun phrases: MT output "in general , [NP1 the amount ] of [NP2 the crowning fall ] is large like [NP3 the end ] ."; reference "generally , the closer [NP it ] is to [NP3 the end part ] , the larger [NP1 the amount ] of [NP2 crowning drop ] is ." (2) Generalization by noun phrases: MT output "NP1 NP2 NP3"; reference "NP NP3 NP1 NP2".]

Figure 3 presents three corresponding noun phrases between the MT output and the reference. The noun phrase "it", which has no corresponding noun phrase, is expressed as "NP" in the reference. Consequently, the MT output is generalized as "NP1 NP2 NP3"; the reference is generalized as "NP NP3 NP1 NP2".
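The generalization step can be sketched as follows (our code, not the authors'; noun phrases are assumed to be listed in sentence order, and the pairs are those determined in Section 2.1).

```python
def generalize(mt_nps, ref_nps, pairs):
    """Map each noun phrase to a label: corresponding pairs share a label NPk,
    and a phrase without a correspondent becomes plain 'NP' (cf. Figure 3)."""
    labels = {}
    for k, (m, r) in enumerate(pairs, start=1):
        labels[("mt", m)] = labels[("ref", r)] = f"NP{k}"
    mt_seq = [labels.get(("mt", p), "NP") for p in mt_nps]
    ref_seq = [labels.get(("ref", p), "NP") for p in ref_nps]
    return mt_seq, ref_seq

pairs = [("the amount", "the amount"),
         ("the crowning fall", "crowning drop"),
         ("the end", "the end part")]
mt_nps = ["the amount", "the crowning fall", "the end"]
ref_nps = ["it", "the end part", "the amount", "crowning drop"]
print(generalize(mt_nps, ref_nps, pairs))
# (['NP1', 'NP2', 'NP3'], ['NP', 'NP3', 'NP1', 'NP2'])
```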
Subsequently, the system obtains the phrase-level score between the generalized MT output and reference as the following Eqs. (9), (10), and (11):

R_np = ( Σ_{i=0}^{RN} α^i Σ_{cnpp∈LCS} length(cnpp)^β / (m_cnp × √(m_no_cnp))^β )^{1/β}    (9)

P_np = ( Σ_{i=0}^{RN} α^i Σ_{cnpp∈LCS} length(cnpp)^β / (n_cnp × √(n_no_cnp))^β )^{1/β}    (10)

score_np = (1 + γ^2) R_np P_np / (R_np + γ^2 P_np)    (11)

In Eqs. (9) and (10), cnpp denotes the common noun-phrase parts; m_cnp and n_cnp respectively signify the quantities of common noun phrases in the reference and MT output. Moreover, m_no_cnp and n_no_cnp are the quantities of noun phrases other than the common noun phrases in the reference and MT output. The values of m_no_cnp and n_no_cnp are processed as 1 when no non-corresponding noun phrases exist. The square root used for m_no_cnp and n_no_cnp is to decrease the weight of the non-corresponding noun phrases. In Eq. (11), γ is determined as P_np/R_np. In Fig. 3, R_np and P_np are 0.7071 (= √((1 × 2^{2.0} + 0.5 × 1^{2.0}) / (3 × 1)^{2.0})) when α is 0.5 and β is 2.0. Therefore, score_np is 0.7071. The system obtains score_np_multi by calculating the average of score_np when multiple references are used, as in the following Eq. (12):

score_np_multi = ( Σ_{j=1}^{u} (score_np)_j ) / u    (12)

2.4 Final Score

The system calculates the final score by combining the word-level score and the phrase-level score as shown in the following Eq. (13):

score = (score_wd + δ × score_np) / (1 + δ)    (13)

Therein, δ represents a parameter for the weight of score_np: it is between 0.0 and 1.0. The ratio of score_wd to score_np is 1:1 when δ is 1.0. Moreover, score_wd_multi and score_np_multi are used in Eq. (13) for multiple references. In Figs. 2 and 3, the final score between the MT output and the reference is 0.4185 (= (0.2164 + 0.7 × 0.7071)/(1 + 0.7)) when δ is 0.7. The system can realize high-quality automatic evaluation using both word-level information and phrase-level information.

3 Experiments

3.1 Experimental Procedure

We calculated the correlation between the scores obtained using our method and scores produced by human judgment. The system based on our method obtained the evaluation scores for 1,200 English output sentences related to the patent sentences. These English output sentences are sentences that 12 machine translation systems in NTCIR-7 translated from 100 Japanese sentences. Moreover, the number of references for each English sentence in the 100 English sentences is four. These references were obtained from four bilingual humans. Table 1 presents the types of the 12 machine translation systems.

Table 1: Machine translation system types.

  System No.   1    2    3     4    5    6    7    8    9     10   11   12
  Type         SMT  SMT  RBMT  SMT  SMT  SMT  SMT  SMT  EBMT  SMT  SMT  RBMT

Moreover, three human judges evaluated the 1,200 English output sentences from the perspective of adequacy and fluency on a scale of 1-5. We used the median value of the evaluation results of the three human judges as the final scores of 1-5. We calculated Pearson's correlation coefficient and Spearman's rank correlation coefficient between the scores obtained using our method and the scores by human judgments in terms of sentence-level adequacy and fluency. Additionally, we calculated the correlations between the scores using seven other methods and the scores by human judgments to compare our method with other automatic evaluation methods. The other seven methods were IMPACT, ROUGE-L, BLEU, NIST, NMG-WN (Ehara, 2007; Echizen-ya et al., 2009), METEOR, and WER (Leusch et al., 2003). Using our method, 0.1 was used as the value of the parameter α in Eqs. (3)-(10) and 1.1 was used as the value of the parameter β in Eqs. (1)-(10).
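Continuing the sketch from Section 2.2, the phrase-level score of Eqs. (9)-(11) and the combination in Eq. (13) can be written as follows (our code; the common noun-phrase parts per repetition are again taken as given). With the generalized sequences of Figure 3 (common parts of lengths [2] and then [1], three common noun phrases on each side, one non-corresponding noun phrase in the reference and none in the MT output), it reproduces R_np = P_np ≈ 0.7071 and the final score ≈ 0.4185 quoted above.

```python
def phrase_level_score(common_np_part_lengths, m_cnp, m_no_cnp, n_cnp, n_no_cnp,
                       alpha=0.5, beta=2.0):
    """Eqs. (9)-(11) over the generalized (noun-phrase) sequences."""
    m_no_cnp, n_no_cnp = max(m_no_cnp, 1), max(n_no_cnp, 1)   # treated as 1 if zero
    total = sum(alpha**i * sum(length**beta for length in parts)
                for i, parts in enumerate(common_np_part_lengths))
    r_np = (total / (m_cnp * m_no_cnp**0.5)**beta) ** (1 / beta)
    p_np = (total / (n_cnp * n_no_cnp**0.5)**beta) ** (1 / beta)
    gamma = p_np / r_np
    return (1 + gamma**2) * r_np * p_np / (r_np + gamma**2 * p_np)

def final_score(score_wd, score_np, delta=0.7):
    """Eq. (13): combine the word-level and phrase-level scores."""
    return (score_wd + delta * score_np) / (1 + delta)

score_np = phrase_level_score([[2], [1]], m_cnp=3, m_no_cnp=1, n_cnp=3, n_no_cnp=0)
print(score_np)                          # ~0.7071
print(final_score(0.2164, score_np))     # ~0.4185
```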
Moreover, 0.3 was used as the value of the parameter δ in Eq. (13). These val1BLEU was improved to perform sentence-level evaluation: the maximum N value between MT output and reference is used(Echizen-ya et al., 2009). 2The matching modules of METEOR are the exact and stemmed matching module, and a WordNet-based synonym-matching module. 112 Table 2: Pearson’s correlation coefficient for sentence-level adequacy. No. 1 No. 2 No. 3 No. 4 No. 5 No. 6 No. 7 Our method 0.7862 0.4989 0.5970 0.5713 0.6581 0.6779 0.7682 IMPACT 0.7639 0.4487 0.5980 0.5371 0.6371 0.6255 0.7249 ROUGE-L 0.7597 0.4264 0.6111 0.5229 0.6183 0.5927 0.7079 BLEU 0.6473 0.2463 0.4230 0.4336 0.3727 0.4124 0.5340 NIST 0.5135 0.2756 0.4142 0.3086 0.2553 0.2300 0.3628 NMG-WN 0.7010 0.3432 0.6067 0.4719 0.5441 0.5885 0.5906 METEOR 0.4509 0.0892 0.3907 0.2781 0.3120 0.2744 0.3937 WER 0.7464 0.4114 0.5519 0.5185 0.5461 0.5970 0.6902 Our method II 0.7870 0.5066 0.5967 0.5191 0.6529 0.6635 0.7698 BLEU with our method 0.7244 0.3935 0.5148 0.5231 0.4882 0.5554 0.6459 No. 8 No. 9 No. 10 No. 11 No. 12 Avg. All Our method 0.7664 0.7208 0.6355 0.7781 0.5707 0.6691 0.6846 IMPACT 0.7007 0.7125 0.5981 0.7621 0.5345 0.6369 0.6574 ROUGE-L 0.6834 0.7042 0.5691 0.7480 0.5293 0.6228 0.6529 BLEU 0.5188 0.5884 0.3697 0.5459 0.4357 0.4607 0.4722 NIST 0.4218 0.4092 0.1721 0.3521 0.4769 0.3493 0.3326 NMG-WN 0.6658 0.6068 0.6116 0.6770 0.5740 0.5818 0.5669 METEOR 0.3881 0.4947 0.3127 0.2987 0.4162 0.3416 0.2958 WER 0.6656 0.6570 0.5740 0.7491 0.5301 0.6031 0.5205 Our method II 0.7676 0.7217 0.6343 0.7917 0.5474 0.6632 0.6774 BLEU with our method 0.6395 0.6696 0.5139 0.6611 0.5079 0.5698 0.5790 ues of the parameter are determined using English sentences from Reuters articles(Utiyama and Isahara, 2003). Moreover, we obtained the noun phrases using a shallow parser(Sha and Pereira, 2003) as the chunking tool. We revised some erroneous results that were obtained using the chunking tool. 3.2 Experimental Results As described in this paper, we performed comparison experiments using our method and seven other methods. Tables 2 and 3 respectively show Pearson’s correlation coefficient for sentence-level adequacy and fluency. Tables 4 and 5 respectively show Spearman’s rank correlation coefficient for sentence-level adequacy and fluency. In Tables 2–5, bold typeface signifies the maximum correlation coefficients among eight automatic evaluation methods. Underlining in our method signifies that the differences between correlation coefficients obtained using our method and IMPACT are statistically significant at the 5% significance level. Moreover, “Avg.” signifies the average of the correlation coefficients obtained by 12 machine translation systems in respective automatic evaluation methods, and “All” are the correlation coefficients using the scores of 1,200 output sentences obtained using the 12 machine translation systems. 3.3 Discussion In Tables 2–5, the “Avg.” score of our method is shown to be higher than those of other methods. Especially in terms of the sentence-level adequacy shown in Tables 2 and 4, “Avg.” of our method is about 0.03 higher than that of IMPACT. Moreover, in system No. 8 and “All” of Tables 2 and 4, the differences between correlation coefficients obtained using our method and IMPACT are statistically significant at the 5% significance level. Moreover, we investigated the correlation of machine translation systems of every type. 
Table 6 shows “All” of Pearson’s correlation coefficient and Spearman’s rank correlation coefficient in SMT (i.e., system Nos. 1–2, system Nos. 4–8 and system Nos. 10–11) and RBMT (i.e., system Nos. 3 and 12). The scores of 900 output sentences obtained by 9 machine 113 Table 3: Pearson’s correlation coefficient for sentence-level fluency. No. 1 No. 2 No. 3 No. 4 No. 5 No. 6 No. 7 Our method 0.5853 0.3782 0.5689 0.4673 0.5739 0.5344 0.7193 IMPACT 0.5581 0.3407 0.5821 0.4586 0.5768 0.4852 0.6896 ROUGE-L 0.5551 0.3056 0.5925 0.4391 0.5666 0.4475 0.6756 BLEU 0.4793 0.0963 0.4488 0.3033 0.4690 0.3602 0.5272 NIST 0.4139 0.0257 0.4987 0.1682 0.3923 0.2236 0.3749 NMG-WN 0.5782 0.3090 0.5434 0.4680 0.5070 0.5234 0.5363 METEOR 0.4050 0.1405 0.4420 0.1825 0.4259 0.2336 0.4873 WER 0.5143 0.3031 0.5220 0.4262 0.4936 0.4405 0.6351 Our method II 0.5831 0.3689 0.5753 0.3991 0.5610 0.5445 0.7186 BLEU with our method 0.5425 0.2304 0.5115 0.3770 0.5358 0.4741 0.6142 No. 8 No. 9 No. 10 No. 11 No. 12 Avg. All Our method 0.5796 0.6424 0.3241 0.5920 0.4321 0.5331 0.5574 IMPACT 0.5612 0.6320 0.3492 0.6034 0.4166 0.5211 0.5469 ROUGE-L 0.5414 0.6347 0.3231 0.5889 0.4127 0.5069 0.5387 BLEU 0.5040 0.5521 0.2134 0.4783 0.4078 0.4033 0.4278 NIST 0.3682 0.3811 0.1682 0.3116 0.4484 0.3146 0.3142 NMG-WN 0.5526 0.5799 0.4509 0.6308 0.4124 0.5007 0.5074 METEOR 0.2511 0.4153 0.1376 0.3351 0.2902 0.3122 0.2933 WER 0.5492 0.6421 0.3962 0.6228 0.4063 0.4960 0.4478 Our method II 0.5774 0.6486 0.3428 0.5975 0.4197 0.5280 0.5519 BLEU with our method 0.5660 0.6247 0.2536 0.5495 0.4550 0.4770 0.5014 translation systems in SMT and the scores of 200 output sentences obtained by 2 machine translation systems in RBMT are used respectively. However, EBMT is not included in Table 6 because EBMT is only system No. 9. In Table 6, our method obtained the highest correlation among the eight methods, except in terms of the adequacy of RBMT in Pearson’s correlation coefficient. The differences between correlation coefficients obtained using our method and IMPACT are statistically significant at the 5% significance level for adequacy of SMT. To confirm the effectiveness of noun-phrase chunking, we performed the experiment using a system combining BLEU with our method. In this case, BLEU scores were used as scorewd in Eq. (13). This experimental result is shown as “BLEU with our method” in Tables 2–5. In the results of “BLEU with our method” in Tables 2–5, underlining signifies that the differences between correlation coefficients obtained using BLEU with our method and BLEU alone are statistically significant at the 5% significance level. The coefficients of correlation for BLEU with our method are higher than those of BLEU in any machine translation system, “Avg.” and “All” in Tables 2–5. Moreover, for sentence-level adequacy, BLEU with our method is significantly better than BLEU in almost all machine translation systems and “All” in Tables 2 and 4. These results indicate that our method using noun-phrase chunking is effective for some methods and that it is statistically significant in each machine translation system, not only “All”, which has large sentences. Subsequently, we investigated the precision of the determination process of the corresponding noun phrases described in section 2.1: in the results of system No. 1, we calculated the precision as the ratio of the number of the correct corresponding noun phrases for the number of all noun-phrase correspondences obtained using the system based on our method. 
Results show that the precision was 93.4%, demonstrating that our method can determine the corresponding noun phrases correctly. Moreover, we investigated the relation be114 Table 4: Spearman’s rank correlation coefficient for sentence-level adequacy. No. 1 No. 2 No. 3 No. 4 No. 5 No. 6 No. 7 Our method 0.7456 0.5049 0.5837 0.5146 0.6514 0.6557 0.6746 IMPACT 0.7336 0.4881 0.5992 0.4741 0.6382 0.5841 0.6409 ROUGE-L 0.7304 0.4822 0.6092 0.4572 0.6135 0.5365 0.6368 BLEU 0.5525 0.2206 0.4327 0.3449 0.3230 0.2805 0.4375 NIST 0.5032 0.2438 0.4218 0.2489 0.2342 0.1534 0.3529 NMG-WN 0.7541 0.3829 0.5579 0.4472 0.5560 0.5828 0.6263 METEOR 0.4409 0.1509 0.4018 0.2580 0.3085 0.1991 0.4115 WER 0.6566 0.4147 0.5478 0.4272 0.5524 0.4884 0.5539 Our method II 0.7478 0.4972 0.5817 0.4892 0.6437 0.6428 0.6707 BLEU with our method 0.6644 0.3926 0.5065 0.4522 0.4639 0.4715 0.5460 No. 8 No. 9 No. 10 No. 11 No. 12 Avg. All Our method 0.7298 0.7258 0.5961 0.7633 0.6078 0.6461 0.6763 IMPACT 0.6703 0.7067 0.5617 0.7411 0.5583 0.6164 0.6515 ROUGE-L 0.6603 0.6983 0.5340 0.7280 0.5281 0.6012 0.6435 BLEU 0.4571 0.5827 0.3220 0.4987 0.4302 0.4069 0.4227 NIST 0.4255 0.4424 0.1313 0.2950 0.4785 0.3276 0.3062 NMG-WN 0.6863 0.6524 0.6412 0.7015 0.5728 0.5968 0.5836 METEOR 0.4242 0.4776 0.3335 0.2861 0.4455 0.3448 0.2887 WER 0.6234 0.6480 0.5463 0.7131 0.5684 0.5617 0.4797 Our method II 0.7287 0.7255 0.5936 0.7761 0.5798 0.6397 0.6699 BLEU with our method 0.5850 0.6757 0.4596 0.6272 0.5452 0.5325 0.5474 tween the correlation obtained by our method and the quality of chunking. In “Our method” shown in Tables 2–5, noun phrases for which some erroneous results obtained using the chunking tool were revised. “Our method II” of Tables 2–5 used noun phrases that were given as results obtained using the chunking tool. Underlining in “Our method II” of Tables 2–5 signifies that the differences between correlation coefficients obtained using our method II and IMPACT are statistically significant at the 5% significance level. Fundamentally, in both “Avg.” and “All” of Tables 2–5, the correlation coefficients of our method II without the revised noun phrases are lower than those of our method using the revised noun phrases. However, the difference between our method and our method II in “Avg.” and “All” of Tables 2–5 is not large. The performance of the chunking tool has no great influence on the results of our method because scorewd in Eqs. (3), (4), and (5) do not depend strongly on the performance of the chunking tool. For example, in sentences shown in Fig. 2, all common parts are the same as the common parts of Fig. 2 when “the crowning fall” in the MT output and “crowning drop” in the reference are not determined as the noun phrases. Other common parts are determined correctly because the weight of the common part “the amount of” is higher than those of other common parts by Eqs. (1) and (2). Consequently, the determination of the common parts except “the amount of” is not difficult. In other language sentences, we already performed the experiments using Japanese sentences from Reuters articles(Oyamada et al., 2010). Results show that the correlation coefficients of IMPACT with our method, for which IMPACT scores were used as scorewd in Eq. (13), were highest among some methods. Therefore, our method might not be languagedependent. Nevertheless, experiments using various language data are necessary to elucidate this point. 
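The sentence-level correlations reported in Section 3 can be computed with standard routines; the following is our own minimal sketch of that procedure (the example data are invented), using the median of the three judges' ratings as the human score.

```python
from statistics import median
from scipy.stats import pearsonr, spearmanr

def correlate(metric_scores, judge_scores):
    """Pearson and Spearman correlations between metric scores and the median
    of the three human judgments for each MT output."""
    human = [median(judges) for judges in judge_scores]
    return pearsonr(metric_scores, human)[0], spearmanr(metric_scores, human)[0]

# e.g., five outputs, each judged by three humans on the 1-5 adequacy scale:
metric = [0.42, 0.78, 0.15, 0.55, 0.91]
judges = [(2, 3, 3), (4, 4, 5), (1, 2, 1), (3, 3, 4), (5, 4, 5)]
print(correlate(metric, judges))
```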
Table 5: Spearman’s rank correlation coefficient for sentence-level fluency.

                       No. 1   No. 2   No. 3   No. 4   No. 5   No. 6   No. 7
Our method             0.5697  0.3299  0.5446  0.4199  0.5733  0.5060  0.6459
IMPACT                 0.5481  0.3285  0.5572  0.3976  0.5960  0.4317  0.6334
ROUGE-L                0.5470  0.3041  0.5646  0.3661  0.5638  0.3879  0.6255
BLEU                   0.4157  0.0559  0.4286  0.2018  0.4475  0.2569  0.4909
NIST                   0.4209  0.0185  0.4559  0.1093  0.3186  0.1898  0.3634
NMG-WN                 0.5569  0.3461  0.5381  0.4300  0.5052  0.5264  0.5328
METEOR                 0.4608  0.1429  0.4438  0.1783  0.4073  0.1596  0.4821
WER                    0.4469  0.2395  0.5087  0.3292  0.4995  0.3482  0.5637
Our method II          0.5659  0.3216  0.5484  0.3773  0.5638  0.5211  0.6343
BLEU with our method   0.5188  0.1534  0.4793  0.3005  0.5255  0.3942  0.5676

                       No. 8   No. 9   No. 10  No. 11  No. 12  Avg.    All
Our method             0.5646  0.6617  0.3319  0.6256  0.4485  0.5185  0.5556
IMPACT                 0.5471  0.6454  0.3222  0.6319  0.4358  0.5062  0.5489
ROUGE-L                0.5246  0.6428  0.2949  0.6159  0.3928  0.4858  0.5359
BLEU                   0.4882  0.5419  0.1407  0.4740  0.4176  0.3633  0.3971
NIST                   0.4150  0.4193  0.0889  0.3006  0.4752  0.2980  0.2994
NMG-WN                 0.5684  0.5850  0.4451  0.6502  0.4387  0.5102  0.5156
METEOR                 0.2911  0.4267  0.1735  0.3264  0.3512  0.3158  0.2886
WER                    0.5320  0.6505  0.3828  0.6501  0.4003  0.4626  0.4193
Our method II          0.5609  0.6687  0.3629  0.6223  0.4384  0.5155  0.5531
BLEU with our method   0.5470  0.6213  0.2184  0.5808  0.4870  0.4495  0.4825

Table 6: Correlation coefficient for SMT and RBMT.

              Pearson’s correlation coefficient    Spearman’s rank correlation coefficient
              Adequacy          Fluency            Adequacy          Fluency
              SMT     RBMT      SMT     RBMT       SMT     RBMT      SMT     RBMT
Our method    0.7054  0.5840    0.5477  0.5016     0.6710  0.5961    0.5254  0.5003
IMPACT        0.6721  0.5650    0.5364  0.4960     0.6397  0.5811    0.5162  0.4951
ROUGE-L       0.6560  0.5691    0.5179  0.4988     0.6225  0.5701    0.4942  0.4783
NMG-WN        0.5958  0.5850    0.5201  0.4732     0.6129  0.5755    0.5238  0.4959

4 Conclusion

As described herein, we proposed a new automatic evaluation method for machine translation. Our method calculates scores for MT outputs using noun-phrase chunking: the system scores each output using the correctly matched words and phrase-level information based on the corresponding noun phrases. Experimental results demonstrate that our method yields the highest correlation among the eight compared methods in terms of sentence-level adequacy and fluency. Future studies will improve our method, enabling it to achieve higher correlation for sentence-level fluency. Future studies will also include experiments using data from various languages.

Acknowledgements

This work was done as research under the AAMT/JAPIO Special Interest Group on Patent Translation. The Japan Patent Information Organization (JAPIO) and the National Institute of Informatics (NII) provided the corpora used in this work. The author gratefully acknowledges JAPIO and NII for their support. Moreover, this work was partially supported by grants from the High-Tech Research Center of Hokkai-Gakuen University and the Kayamori Foundation of Informational Science Advancement.

References

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In Proc. of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65–72.

Deborah Coughlin. 2003. Correlating Automated and Human Assessments of Machine Translation Quality. In Proc. of MT Summit IX, 63–70.

Hiroshi Echizen-ya and Kenji Araki. 2007. Automatic Evaluation of Machine Translation based on Recursive Acquisition of an Intuitive Common Parts Continuum. In Proc. of MT Summit XII, 151–158.
Hiroshi Echizen-ya, Terumasa Ehara, Sayori Shimohata, Atsushi Fujii, Masao Utiyama, Mikio Yamamoto, Takehito Utsuro and Noriko Kando. 2009. Meta-Evaluation of Automatic Evaluation Methods for Machine Translation using Patent Translation Data in NTCIR-7. In Proc. of the 3rd Workshop on Patent Translation, 9–16.

Terumasa Ehara. 2007. Rule Based Machine Translation Combined with Statistical Post Editor for Japanese to English Patent Translation. In Proc. of MT Summit XII Workshop on Patent Translation, 13–18.

Atsushi Fujii, Masao Utiyama, Mikio Yamamoto and Takehito Utsuro. 2008. Overview of the Patent Translation Task at the NTCIR-7 Workshop. In Proc. of the 7th NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-lingual Information Access, 389–400.

Gregor Leusch, Nicola Ueffing and Hermann Ney. 2003. A Novel String-to-String Distance Measure with Applications to Machine Translation Evaluation. In Proc. of MT Summit IX, 240–247.

Chin-Yew Lin and Franz Josef Och. 2004. Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics. In Proc. of ACL'04, 606–613.

Dennis N. Mehay and Chris Brew. 2007. BLEUÂTRE: Flattening Syntactic Dependencies for MT Evaluation. In Proc. of MT Summit XII, 122–131.

Andrew Mutton, Mark Dras, Stephen Wan and Robert Dale. 2007. GLEU: Automatic Evaluation of Sentence-Level Fluency. In Proc. of ACL'07, 344–351.

NIST. 2002. Automatic Evaluation of Machine Translation Quality Using N-gram Co-Occurrence Statistics. http://www.nist.gov/speech/tests/mt/doc/ngram-study.pdf.

Takashi Oyamada, Hiroshi Echizen-ya and Kenji Araki. 2010. Automatic Evaluation of Machine Translation Using both Words Information and Comprehensive Phrases Information. In IPSJ SIG Technical Report, Vol. 2010-NL-195, No. 3 (in Japanese).

Kishore Papineni, Salim Roukos, Todd Ward and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proc. of ACL'02, 311–318.

Michael Pozar and Eugene Charniak. 2006. Bllip: An Improved Evaluation Metric for Machine Translation. Master's thesis, Brown University.

Fei Sha and Fernando Pereira. 2003. Shallow Parsing with Conditional Random Fields. In Proc. of HLT-NAACL 2003, 134–141.

Keh-Yih Su, Ming-Wen Wu and Jing-Shin Chang. 1992. A New Quantitative Quality Measure for Machine Translation Systems. In Proc. of COLING'92, 433–439.

Masao Utiyama and Hitoshi Isahara. 2003. Reliable Measures for Aligning Japanese–English News Articles and Sentences. In Proc. of ACL'03, 72–79.